
CDF announces intriguing results

The CDF collaboration at Fermilab’s Tevatron has published two measurements that hint at the existence of physics beyond the well-tested Standard Model of particles and their interactions. The first measurement revealed an unexpected asymmetry in the production of top–antitop (tt̄) quark pairs. The second analysis unveiled surprising evidence for an excess of events that contain a W boson accompanied by two hadronic jets. The excess cannot be due to the long-sought-after Higgs boson but could perhaps be explained by new physics ideas.

While both measurements rely on the Tevatron’s unique ability to produce proton–antiproton collisions, if the new physics hinted at in these results does exist, it will manifest itself in some other form in the particle collisions at the LHC at CERN.

The Tevatron has been producing tt̄ pairs since the early 1990s. In first-order Standard Model calculations, the direction of flight of tt̄ pairs produced in proton–antiproton collisions should be independent of the colliding particles’ charge, so there should be equal numbers of t and t̄ quarks emitted along either beam direction. More detailed, next-to-leading-order calculations predict an asymmetry of 9 ± 1% at large rapidity, favouring the proton beam’s direction.

CDF announced in March that it measured a tt̄ production asymmetry of 48 ± 11% for an invariant mass of the tt̄ pair (Mtt̄) larger than 450 GeV/c², which is three standard deviations above the Standard Model expectation. The result is based on the analysis of 5.3 fb⁻¹ of collision data, about half of the number of collisions that CDF has recorded to date. The asymmetries were observed in both the laboratory frame of reference and the tt̄ rest frame. A number of theoretical models predict such asymmetries, including models with a Z′ boson or large extra dimensions.

The analysis was repeated more recently on events where the t and t̄ quarks decay to a different final state. The asymmetry was again measured at close to the 3σ level, with a value of 0.42 ± 0.15 ± 0.05 averaged over all masses, compared with a 6% Standard Model expectation (T Aaltonen et al. 2011a). This confirms the earlier result with a completely independent data sample.
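As a back-of-the-envelope illustration of how such an asymmetry is quantified, the forward–backward asymmetry is the normalized difference between forward- and backward-going counts, A = (N_F − N_B)/(N_F + N_B). A minimal sketch in Python; the event counts below are hypothetical, chosen only to give an asymmetry of similar size to the one quoted:

```python
import math

def afb(n_forward, n_backward):
    """Forward-backward asymmetry and its binomial error from raw event counts."""
    n = n_forward + n_backward
    a = (n_forward - n_backward) / n
    err = 2 * math.sqrt(n_forward * n_backward / n) / n  # binomial error propagation
    return a, err

# Hypothetical counts, chosen only to give an asymmetry of the quoted size
a, err = afb(740, 260)
print(f"A_FB = {a:.2f} +/- {err:.2f}")  # A_FB = 0.48 +/- 0.03
```

In a real analysis the counts are background-subtracted and unfolded for detector acceptance before the asymmetry is quoted.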

The second surprising result from CDF started out as a routine Standard Model measurement of collisions where a W boson was detected in coincidence with two hadronic jets. The team found an unexpected peak in the spectrum of the invariant mass of the pair of jets. The excess of approximately 250 events appeared as a bump around 144 GeV/c² (T Aaltonen et al. 2011b).

The analysis required the presence of a high-transverse-momentum, isolated lepton; a significant amount of missing energy; and two hadronic jets. The invariant-mass spectrum of the jet pair shows a clear peak at 80–90 GeV from a W or Z boson decaying into a jet pair. The surprising peak shows up at a higher mass (figure 2). It has a width compatible with the CDF detector resolution and its significance is 3.2σ, taking into account systematics and trial factors. If the peak is from a single particle, that particle would have a production cross-section of approximately 4 pb.
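The bump was found in the invariant-mass spectrum of the jet pair, m² = (E₁ + E₂)² − |p₁ + p₂|². A minimal sketch of that computation; the jet four-momenta below are invented purely for illustration:

```python
import math

def dijet_mass(e1, p1, e2, p2):
    """Invariant mass of a jet pair from energies (GeV) and 3-momenta (GeV/c)."""
    e = e1 + e2
    px, py, pz = (p1[i] + p2[i] for i in range(3))
    return math.sqrt(e * e - px * px - py * py - pz * pz)

# Two back-to-back massless jets of 72 GeV each (illustrative values)
m = dijet_mass(72.0, (72.0, 0.0, 0.0), 72.0, (-72.0, 0.0, 0.0))
print(f"m_jj = {m:.0f} GeV")  # m_jj = 144 GeV
```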

The peak cannot result from the Higgs boson predicted by the Standard Model. If a Higgs boson had a mass of 140 GeV/c² and such a large production rate, both the CDF and DØ experiments at the Tevatron would have seen its decay into pairs of W bosons long ago. Furthermore, such a Higgs would decay mainly into bottom-quark jets, which are not observed in any appreciable amount in the CDF data peak. There are, however, new-physics ideas that predict the appearance of resonances with the observed features, such as technicolour-based models. If the peak does not originate from a new particle, particle physicists will need to reconsider how the Standard Model is used to make precise predictions for the production of a W boson and two jets.

Physicists from CDF and DØ are in the process of analysing larger data samples, up to 10 fb⁻¹, to either refute or confirm these two results. At the same time, they may find even more interesting signals.

NA63’s enlightening experiments

“Why still do experimental quantum electrodynamics? Isn’t everything known?” This provocative question is often put to the collaborators on one of the smaller CERN experiments, NA63. Their answer is almost as short as the question: it is precisely because everything is supposed to be known that the field is interesting. This understanding enables the exploration of physics in regimes of strong electromagnetic fields, for example as a function of interaction times or in studies of scattering. The results cast light on phenomena in various branches of physics.

Take, as an example, the emission of beamstrahlung, which is expected in the next generation of electron–positron linear colliders, such as the Compact Linear Collider (CLIC) currently under conceptual design at CERN. Particles in a bunch in one beam “see” the electric field of the opposing bunch boosted by a factor 2γ²−1, where γ is the Lorentz factor. This appears as a strong electric field in the bunch’s rest frame and leads to the emission of intense synchrotron-like radiation, which is known as beamstrahlung. The electric field seen by the particles is comparable to the so-called critical field, which depends only on the reduced Planck constant, ħ, the speed of light, c, and the mass, m, and charge, e, of the electron – m²c³/ħe – and is equivalent to 1.32 × 10¹⁶ V/cm, with a corresponding magnetic field of 4.41 × 10⁹ T. In such fields, quantum corrections to the emission of synchrotron radiation become important in determining the emission spectrum. They lead to a strong suppression compared with the classical calculations that are applicable in most other contexts of synchrotron-radiation emission.
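The quoted field values follow directly from the formula; a quick numerical check with standard values of the constants:

```python
# Critical field E_cr = m^2 c^3 / (hbar e), evaluated numerically
m_e  = 9.1093837e-31    # electron mass, kg
c    = 2.99792458e8     # speed of light, m/s
hbar = 1.054571817e-34  # reduced Planck constant, J s
e    = 1.602176634e-19  # elementary charge, C

E_cr = m_e**2 * c**3 / (hbar * e)  # V/m
B_cr = E_cr / c                    # corresponding magnetic field, T
print(f"E_cr = {E_cr / 100:.3g} V/cm")  # ~1.32e+16 V/cm
print(f"B_cr = {B_cr:.3g} T")           # ~4.41e+09 T
```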

Into the laboratory

The effects of strong fields are also relevant in many other branches, ranging from the so-called “bubble-regime” in plasma wakefields used for extremely high-gradient particle acceleration, through astrophysical objects such as magnetars, to intense lasers and heavy-ion collisions. The concept even applies in a gravitational analogue – Hawking radiation. Therefore, further investigation of the underlying phenomena is of broad interest.

Clearly, electric fields of the order of the critical field are inaccessible in the laboratory. However, by replacing the opposing bunch in the linear-collider example with a crystalline target, processes linked to the critical field can be studied with relative ease, because crystalline electric fields are orders of magnitude higher. At small angles of incidence to a crystallographic axis or plane, the strong electric fields of the nuclear constituents add coherently to form a macroscopic, continuous field with a peak value of around 10¹¹ V/cm. In the rest frame of an ultra-relativistic electron with γ of around 10⁵, the field encountered by the incident particle thus becomes comparable to the critical field.
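Numerically, the claim is easy to verify: the transverse field in the electron’s rest frame is the laboratory field multiplied by γ (the standard relativistic field transformation), and with the values given in the text the result is indeed of the order of the critical field:

```python
E_crystal = 1e11     # V/cm, peak continuum field along a crystal axis (from the text)
gamma     = 1e5      # Lorentz factor of the incident electron
E_cr      = 1.32e16  # V/cm, critical field

E_rest = gamma * E_crystal  # transverse field boosted into the electron rest frame
print(f"E_rest / E_cr = {E_rest / E_cr:.2f}")  # ~0.76, i.e. comparable to critical
```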

Applications of these strong crystalline electric fields are widely known, in particular in “channelling”, where a beam of charged particles is steered by the fields within a crystal. This has been used, for example, in the NA48 experiment at CERN to deflect a well defined fraction of the main proton beam for the generation of kaons.

The NA63 experiment, following on from its predecessor, NA43, focuses on fundamental investigations of the strong fields themselves. The results have already shown that the emission of synchrotron radiation in the quantum regime is, indeed, well understood, being strongly suppressed as expected. These results mean that reliable QED-based estimates of beamstrahlung in future machines can now be made. In addition, the spin-flip component of the synchrotron-like radiation emitted as the beam passes through the crystal is many orders of magnitude higher in energy and intensity than in a storage ring, with corresponding polarization times of femtoseconds instead of hours.

Strong scattering effects

The suppression in the emission of radiation arises, loosely speaking, because the field becomes so strong that the particle is deflected out of the formation zone necessary for the generation of the photon – in effect, before it has had time to generate the radiation. It is equivalent to a shortening of the formation zone. Although the concept of the formation zone was introduced more than 50 years ago by the Armenian physicist Mikhail Ter-Mikaelian, it still surprises many that it can take a time corresponding to macroscopic travel distances for a relativistic electron to emit a photon. This is the basis of the Landau–Pomeranchuk–Migdal (LPM) effect, in which multiple scattering within the formation length leads to a reduction in radiation emission.
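The “macroscopic travel distances” can be estimated from the formation length, which in the soft-photon limit is roughly l_f ≈ 2γ²c/ω for a photon of angular frequency ω. A quick estimate for a Lorentz factor of 10⁵; the 1 MeV photon energy is an illustrative choice:

```python
hbar_eVs = 6.582119569e-16  # reduced Planck constant, eV s
c = 2.99792458e8            # speed of light, m/s

gamma = 1e5
E_photon_eV = 1e6                   # a 1 MeV photon (illustrative)
omega = E_photon_eV / hbar_eVs      # angular frequency, rad/s
l_f = 2 * gamma**2 * c / omega      # soft-photon formation length, m
print(f"l_f ~ {l_f * 1e3:.1f} mm")  # millimetre scale: macroscopic for one photon
```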

Figure 1 illustrates the suppression mechanism at play. It depicts the electric field of a particle, incident along the dashed line, that has scattered twice (at the locations marked by crosses). Outside a radius set by the light-travel time since the scattering event, the field points towards the location that the particle would have had if it had not scattered; this is a result of the finite propagation time of information. Inside the corresponding sphere, the field follows the particle. The transverse components correspond to radiation and, because of the short time between the scattering events, they are closely spaced and point in opposite directions. A distant observer looking at low frequencies will see two electric-field lines that mutually cancel – and, therefore, less radiation. It is as if a “semi-bare” electron is interacting.

However, as the NA63 collaboration has recently shown, if a particle impinges on a target that is so thin that the formation zone extends beyond the target, then the LPM suppression is alleviated. To study this effect, the collaboration measured the radiation emission from ultra-relativistic electrons in targets consisting of a number of thin tantalum foils corresponding to 0.03–5% of a radiation length. They found that, for the thinnest targets, the radiation emission agrees with expectations from the Bethe–Heitler formulation of bremsstrahlung, with the target acting as a single scatterer. Only as the thickness increases does the distorted Coulomb field resulting from the first scattering suppress radiation emission in subsequent scatterings, such that the radiation yield becomes a logarithmic function of the thickness, eventually turning into full LPM suppression (Thomsen et al. 2010).

The NA63 collaboration has also studied higher-order processes, such as “trident production”, in which an electron impinging on an electromagnetic field produces an electron–positron pair directly through the emission of a virtual photon. The process is illustrated in figure 2 in a reference frame close to the rest frame of the incident electron, in which the field has the critical value. In the laboratory frame, the original particle plus the pair are all directed forwards in a three-prong pattern, giving rise to the name “trident”. The effect is reminiscent of a phenomenon studied by Oskar Klein and Fritz Sauter 80 years ago – the so-called Klein paradox. Klein was one of the first to perform calculations using the celebrated equation of Paul Dirac. In 1929 he examined the probability of reflection of an electron from the steep potential barrier provided by an electric field and found that the probability of transmission into a potential of infinite height approached the velocity of the incident electron in units of the speed of light, i.e. that transmission into a “forbidden” region approaches certainty. Soon after, Sauter found that the process takes place for electric fields beyond the critical field, i.e. when the field is so high that an electron transported over a Compton wavelength gains an energy equal to its rest mass, mc². Today, this process is understood in terms of pair production at the boundary, but without knowledge of the positron this was an impossible conclusion for Klein – hence the name “Klein paradox”.

Studies by NA63 of trident production, with crystals of germanium a few hundred micrometres thick, have shown a similar phenomenon: that when the crystal is turned to an axial direction along the beam, giving rise to a critical field in the particle’s rest frame, the trident process increases significantly (Esberg et al. 2010). Recent calculations have shown that trident production is an important factor in the design of the collision zone at CLIC, underlining the relevance of these experimental investigations.

A suppression mechanism also occurs in the case of pair production. In this case, mutual screening of the charges in the pair substantially reduces the energy deposition in matter in the vicinity of the creation vertex. Because of the directionality of the pair, at high energies this internal screening – the King–Perkins–Chudakov effect – takes place over a distance of several tens of micrometres. This is a distance comparable to the sensitive layers in a CCD or a silicon vertex detector (figure 3), which can be used to study the effect.

Finally, as Allan Sørensen of Aarhus University has recently calculated, bremsstrahlung from relativistic heavy ions is expected to show a peak-structure connected to the finite size of the nucleus. The detection of this effect is among the future plans of NA63.

So QED still presents challenges, even for the otherwise well known case of radiation emission. In the words of one of the originators of the quantum theory of beamstrahlung, Richard Blankenbecler: “It is surprising that there is so much more to learn about such a well understood process.”

High-energy interactions in the Alps

Held in the picturesque mountain setting of La Thuile in the Italian Alps, the international Rencontres de Moriond is one of the most important winter conferences for particle physics. Composed of meetings spread over two weeks, it covers the main themes of electroweak interactions, QCD and high-energy interactions, cosmology, gravitation, astroparticle physics and nanophysics. This article reviews some selected results from the approximately 90 talks presented at the 2011 QCD and high-energy interactions session on 20–27 March.

In the well known spirit of the Moriond meetings, the conference provided an important platform for young physicists to present their latest results. In particular, the sessions this year covered the search for the Higgs boson, the physics of heavy flavours and the top quark, the search for new objects and the first results from the heavy-ion run at the LHC. Lively discussions between theorists and experimentalists followed the presentations and were particularly motivating for the young physicists present.

The LHC had an outstanding first year of operation in 2010, with beam intensity rising systematically over the course of the year. The LHC experiments collected 35–40 pb⁻¹ of proton–proton collision data, of which around 50% were taken during one of the last weeks of proton running. Lead–ion collisions were observed for the first time in November. In 2011 and 2012, most of the run time is planned for physics data-taking, with the aim of collecting 1–3 fb⁻¹ of proton collisions per experiment in 2011.

In the quest for the highest collision energies, the LHC was preceded by the Tevatron at Fermilab in the US. In La Thuile, the collaborations for the CDF and DØ experiments at the Tevatron presented new, combined results, confirming that there is no Standard Model Higgs boson in the mass region between 159 GeV and 173 GeV (at 95% confidence level). This year, each collaboration also presented its own exclusion limits within this Higgs-mass region. The Tevatron will end its successful period of data-taking in September. With all of the collected data and improved analyses, the CDF and DØ teams expect to exclude the existence of the Higgs boson in the whole mass region between 114 GeV and 200 GeV – if it does not exist. On the other hand, the experiments will not have enough data to claim a discovery if a Higgs does, indeed, exist in this mass region.

The CMS and ATLAS experiments at the LHC cannot yet match the Tevatron experiments’ sensitivity in the search for the Higgs boson. However, within a year, if all goes well and the LHC delivers the expected number of collisions, both CMS and ATLAS will be able to explore the full range between 130 GeV and 460 GeV. If the teams see no evidence of the Higgs in this wide mass region, they can conclude that no new particle exists with the properties of the Higgs boson at those masses. If a new signal does appear in the data, they will need to wait for more data and improved statistics before confirming any new discovery – but that will happen only in 2012.

The region for a low-mass Higgs, between the 114 GeV limit set by the experiments at the Large Electron–Positron collider and 130 GeV, is more difficult at the LHC. More data-taking time will be needed to exclude or discover the Higgs in this region. The exclusion limits depend on the theoretical calculations of Higgs-boson production; the theoretical uncertainties of these calculations formed the subject of a long and interesting discussion between experimentalists and theorists during the Moriond meeting.

One important area of the LHC programme relates to direct searches for new phenomena. The ATLAS and CMS collaborations presented results from the 2010 data-taking period, which show that new physics has not (yet?) been found. However, in many cases the exclusion limits have already surpassed the ones from the Tevatron. The search for new phenomena has always played an important role at the Moriond meetings and is set to become even more so following the increase in luminosity and energy at the LHC.

The LHC experiments are also searching indirectly for new physics. LHCb is doing so through the lens of rare decays of B mesons. This requires high sensitivity of the experimental apparatus and extremely high accuracy in the data analysis. At La Thuile, the LHCb collaboration showed that – after just a few months of operation – their detector has reached a sensitivity that in some cases is already comparable to that of detectors that have run for years. Examples include the measurement of the rare decay of the Bs meson to pairs of muons, for which the Standard Model branching ratio is precisely calculated, as well as the mixing frequency in the Bs system. By the end of 2011, LHCb may be able to measure, among other things, the production rate of like-sign muon pairs in B decays. This is important to complement the measurement by DØ, which showed an unexpectedly high matter–antimatter asymmetry in the number of such pairs from B0 decays. LHCb should confirm whether or not the observed phenomenon can be associated with new physics.

In early December last year, the first ion–ion collisions at the LHC confirmed the astonishing jet-quenching phenomenon, one of the possible signatures of quark–gluon plasma. For the first time, the LHC experiments could actually see the disappearance of the energy of a recoiling jet as it interacts with the produced medium, providing new insights into the strong interaction through quantitative studies of the dynamics of jet quenching. The Moriond conference provided a good opportunity to discuss the redistribution of the jet energy, which happens over an unexpectedly wide angle, as observed recently by CMS and ATLAS. This is an important step towards understanding jet quenching, as well as the behaviour of the medium in heavy-ion collisions. In another highlight, the ALICE collaboration found that the effects of the strongly interacting medium at lower particle momenta are stronger than those observed at the Relativistic Heavy Ion Collider (RHIC) at Brookhaven. These recent findings will give valuable input to theorists and improve understanding of the jet-quenching phenomenon, and the LHC will allow the effects of the medium to be studied at high particle momenta.

The top quark was discovered at the Tevatron in 1995 but it has yet to be explored fully because – with its high mass – it sits astride the border between Standard Model physics and new physics. At La Thuile, the CMS and ATLAS collaborations presented for the first time results of their analyses of the whole 2010 dataset. Their sensitivity in measurements of the top cross-section is approaching that of the Tevatron experiments and they are now ready to study other properties of the particle, for example making a precise measurement of the mass. For the time being, the most precise measurements of the top quark’s properties come from DØ and CDF, but the LHC experiments have already seen the production of single top quarks, something that took 14 years to observe at the Tevatron.

CDF and DØ have observed significant forward–backward tt̄ asymmetries in proton–antiproton collisions at the Tevatron, particularly at tt̄ masses above 450 GeV/c². This could be interpreted as a sign of new physics. The size of the effect is expected to be smaller in the proton–proton collisions of the LHC, so interesting comparisons with the Tevatron are not expected until the end of 2011.

The search for new physics requires an excellent understanding of Standard Model processes. In this respect, the LHC experiments have shown important progress in jet reconstruction and calibration, while theorists have made improvements in higher-order QCD corrections, discussed in detail at La Thuile. The agreement now achieved between experimental measurements and theoretical calculations is setting an important baseline in the search for new phenomena.

Meanwhile, far from the LHC, the Pierre Auger Observatory (PAO) in South America has opened the window to the study of interactions at far higher energies in the cosmic radiation. The PAO collaboration presented evidence of an unexpected effect: the highest-energy cosmic rays may have an important contribution from iron nuclei. This observation was possible because protons and iron nuclei generate showers of different shapes, but confirmation of the effect will require a better understanding of these shower shapes.

During their long history, the Rencontres de Moriond meetings have followed advances at the energy frontier at the Tevatron, the flavour frontier at the BaBar and Belle experiments, the heavy-ion frontier at RHIC, and the detailed measurements of structure functions at the HERA electron–proton collider at DESY. This year, the evidence at La Thuile is that these excellent research programmes will all be continued at the LHC and its experiments.

PAMELA data challenge theory for cosmic-ray acceleration

The satellite experiment Payload for Antimatter Matter Exploration and Light-nuclei Astrophysics (PAMELA) has reported finding differences between the shapes of the energy spectra of protons and helium nuclei in the cosmic radiation. The data thus seem to go against the generally accepted idea that cosmic rays gain their energy through acceleration in the remnants of supernovae, prior to diffusing through the Galaxy. The collaboration argues that more complex processes are needed to explain their observations.

PAMELA, which is run by a collaboration between several Italian institutes with additional participation from Germany, Russia and Sweden, went into space on a Russian satellite launched from the Baikonur cosmodrome in June 2006. The experiment consists of a magnetic spectrometer comprising a silicon tracker in a 0.48 T field produced by a permanent magnet, together with a time-of-flight system, an electromagnetic silicon-tungsten calorimeter, a “shower-tail catcher” scintillator and a neutron detector, all of which are shielded by an anticoincidence system. Its six-plane double-sided silicon micro-strip tracker provides information on absolute charge and track-deflection. The silicon-tungsten tracking calorimeter and the neutron detector are used in performing lepton–hadron discrimination.

The recent report is based on precision measurements of the proton and helium spectra in the rigidity range 1 GV–1.2 TV, which indicate that the spectral shapes of the two species are different and cannot be well described by a single power law. This challenges the conventional wisdom on the acceleration and propagation of cosmic rays. The data reveal a hardening in the spectra at around 200 GV, which the collaboration says could be interpreted as an indication of different populations of cosmic-ray sources. One example of a multi-source model cited in the report, published in Science Express, is that by V I Zatsepin and N V Sokolskaya (the blue curves in the figure), which considered novae and explosions in “superbubbles” as additional cosmic-ray sources.
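The simplest description of such a hardening is a broken power law: below the break rigidity the flux falls with one spectral index, above it with a smaller one. The sketch below is purely illustrative; the 200 GV break echoes the text, but the indices and normalization are hypothetical, not PAMELA’s fitted values:

```python
def broken_power_law(R, k=1.0, R_b=200.0, g1=2.85, g2=2.67):
    """Toy flux versus rigidity R (GV): index g1 below the break R_b, g2 above."""
    if R < R_b:
        return k * R**(-g1)
    # The prefactor keeps the flux continuous at the break
    return k * R_b**(g2 - g1) * R**(-g2)

# Above the break the spectrum falls more slowly (hardens), since g2 < g1
ratio_broken = broken_power_law(400.0) / broken_power_law(200.0)
ratio_single = (400.0 / 200.0) ** -2.85
print(ratio_broken > ratio_single)  # True
```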

ALICE gets with the flow

With data from the first heavy-ion run at the LHC, the ALICE collaboration has made the first observation of elliptic flow of charged particles in lead–lead collisions at 2.76 TeV per nucleon pair.

Flow is an interesting observable because it provides information on the equation of state and the transport properties of matter created in a heavy-ion collision. The azimuthal anisotropy in particle production is the clearest experimental signature of collective flow; it is caused by multiple interactions between the constituents of the created matter and the initial asymmetries in the spatial geometry of a non-central collision. The second Fourier coefficient of this azimuthal asymmetry is known as elliptic flow.
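In practice v₂ is the average of cos 2(φ − Ψ) over particles, where Ψ is the event-plane angle. A toy Monte Carlo makes this concrete; it assumes, for simplicity, a perfectly known Ψ and a single harmonic:

```python
import math
import random

random.seed(1)
v2_true, psi = 0.1, 0.3  # input flow coefficient and event-plane angle (toy values)

# Sample azimuthal angles from dN/dphi ∝ 1 + 2 v2 cos(2(phi - psi)) by accept-reject
phis = []
while len(phis) < 200_000:
    phi = random.uniform(0.0, 2.0 * math.pi)
    if random.uniform(0.0, 1.0 + 2.0 * v2_true) <= 1.0 + 2.0 * v2_true * math.cos(2.0 * (phi - psi)):
        phis.append(phi)

# Estimator: v2 = <cos 2(phi - psi)>
v2_est = sum(math.cos(2.0 * (phi - psi)) for phi in phis) / len(phis)
print(f"v2 estimate = {v2_est:.3f}")  # close to the input value of 0.1
```

Real measurements must also reconstruct Ψ from the data themselves and correct for the finite event-plane resolution, which this toy ignores.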

The magnitude of the elliptic flow depends strongly on the friction in the created matter, which is characterized by the ratio of shear viscosity to entropy density, η/s. A good fluid, such as water, has a small value of η/s and supports flow patterns such as waves on the ocean. By contrast, in a poor fluid, such as honey, flow patterns disappear quickly. Measurements of elliptic flow at the Relativistic Heavy Ion Collider (RHIC) at Brookhaven had already revealed the fascinating fact that the hot and dense matter created in the collisions there flows as a good fluid with almost no friction.

Surprisingly enough, the first theoretical calculation of η/s relevant to heavy-ion collisions did not come from lattice QCD or transport theory – but from string theory. These calculations showed that in a strongly coupled N = 4 supersymmetric Yang–Mills theory with a large number of colours, η/s can be calculated using a gauge–gravity duality. The famous anti-de Sitter/conformal field theory (AdS/CFT) correspondence yields a ratio of η/s = ħ/(4πkB), which was argued to be a lower bound for any relativistic thermal field theory.
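For reference, the conjectured bound is a tiny number in SI units; evaluating ħ/(4πkB) directly:

```python
import math

hbar = 1.054571817e-34  # reduced Planck constant, J s
k_B  = 1.380649e-23     # Boltzmann constant, J/K

bound = hbar / (4 * math.pi * k_B)  # conjectured lower bound on eta/s
print(f"eta/s >= {bound:.2e} K s")  # ~6.1e-13 K s
```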

At RHIC, a precise determination of the friction in the partonic fluid is complicated by uncertainties in the initial conditions of the collision, the relative contributions from the hadronic and partonic phase, and the unknown temperature dependence of η/s. Because this temperature dependence is unknown, it was not even clear if the elliptic flow would increase or decrease when going from RHIC to the LHC. A measurement of elliptic flow at the LHC was therefore one of the most anticipated results.

The measurements at 2.76 TeV by ALICE show that the elliptic flow of charged particles increases by about 30% compared with flow measured at the highest RHIC energy of 0.2 TeV. This result indicates that the hot and dense matter created in these collisions still behaves like a fluid with almost zero friction, providing strong constraints on the temperature dependence of η/s.

This first measurement also shows that elliptic flow – and thus the properties of the created matter – can be studied with unprecedented precision at the LHC. This is because of the increase in particle multiplicity compared with RHIC and the increase in the elliptic flow itself.

ATLAS goes in search of new physics

Only a few months after the end of the 2010 data-taking, the ATLAS experiment is entering its discovery phase at the LHC. The collaboration has already released its first results on the full dataset for searches for the Higgs boson, for supersymmetry (SUSY) and for other extensions of the Standard Model. The sensitivity of these results, reported here for Higgs and SUSY, is better than expected from simulation studies made in the past, but so far none of the ATLAS searches has shown signs of new physics.

One specific example is the release of the first ATLAS limits on the Higgs-boson production cross-section in the WW decay channel, shown in figure 1 (ATLAS collaboration 2011a). Already with only 35 pb⁻¹ of data, the best expected sensitivity reaches 2.4 times the Standard Model prediction for a Higgs mass of 160 GeV, the region already excluded by experiments at Fermilab’s Tevatron. Excellent detector performance and good control of the backgrounds, achieved using data-driven methods, resulted in better limits than anticipated. This bodes extremely well for the 2011 run, where the very good performance of the detector and analyses, combined with the predicted accelerator performance, should allow ATLAS to draw some much-anticipated conclusions in the search for the Higgs boson.

Another of the very active areas for ATLAS searches is the hunt for particles predicted by SUSY. This conceptually elegant theory predicts that for each known particle of the Standard Model there exists a super-partner, where the partners of bosons are fermions and vice versa. The model could also provide a suitable candidate for dark matter: a stable neutral super-particle with no decays to known particles.

The ATLAS collaboration recently submitted its first results for SUSY searches for publication. One of these searches is in the final state with jets, missing energy and no leptons (ATLAS collaboration 2011b). To make sure no potential signal was missed in this analysis, the search was optimized and carried out combining several different signal topologies. No excess of events over the expected backgrounds is observed. The limits from this study provide the strongest constraints on the mass scales of SUSY to date (figure 2). The interpretation of these results in the minimal supergravity grand unification (mSUGRA) model excludes, at 95% confidence level, super-partners of the gluon and quarks with masses below 775 GeV, assuming they have the same mass.

ATLAS has recently completed many other analyses in the search for evidence for SUSY. These include a search with similar final states but with one (ATLAS collaboration 2011c) or two leptons, as well as a search requiring at least one jet to come from a b-quark. There are also results for searches for the super-partners of the neutrinos decaying into electron-muon final states and for stable hadronizing super-partners of quarks and gluons.

These Higgs and SUSY results are not the complete picture of ATLAS searches. A large number of topologies and final states have been studied, and limits at the tera-electron-volt scale have been set on several scenarios of new physics. These limits are in many cases the most stringent to date. The collaboration anticipates promising opportunities for discoveries with much larger datasets in 2011–2012.

CMS pursues the Higgs boson

The CMS collaboration has announced its first results on the measurement of the W⁺W⁻ production cross-section and on the related search for the Higgs boson in proton–proton collisions at the LHC at 7 TeV in the centre-of-mass. This is the first paper by CMS that includes searches for the Higgs boson.

The data used for the analysis, recorded in the 2010 LHC proton runs, correspond to an integrated luminosity of 35.5 ± 3.9 pb⁻¹. Each W is observed through its decay into a charged lepton (electron or muon) measured in CMS; the corresponding neutrino escapes undetected (figure 1). Thirteen candidates were reconstructed in the data, with a total estimated background of 3.29 ± 0.45 (stat.) ± 1.09 (syst.) events. The production cross-section was measured to be 41.1 ± 15.3 (stat.) ± 5.8 (syst.) ± 4.5 (lumi.) pb, in good agreement with the prediction of next-to-leading-order QCD calculations.
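The cross-section follows from the background-subtracted event count divided by efficiency and luminosity. In the sketch below the counts and luminosity are taken from the text, while the combined efficiency, acceptance and branching-fraction factor is a hypothetical value chosen purely for illustration:

```python
n_obs  = 13      # candidate events (from the text)
n_bkg  = 3.29    # estimated background (from the text)
lumi   = 35.5    # integrated luminosity, pb^-1 (from the text)
eff_br = 0.0067  # combined efficiency x acceptance x branching fraction: hypothetical

# sigma = (N_obs - N_bkg) / (eff_br * L)
sigma = (n_obs - n_bkg) / (eff_br * lumi)
print(f"sigma(WW) ~ {sigma:.0f} pb")
```

With the hypothetical efficiency factor above this lands near the published central value; in the real measurement that factor is determined from simulation and data-driven control samples, and the dominant uncertainty is statistical.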

The result is of particular interest because it is highly sensitive to possible non-Standard Model contributions to the self-interactions of the W bosons, specifically the WWγ and WWZ triple gauge boson couplings. It is also an irreducible background in the decay of the Higgs boson to W+W–.

CCnew9_03_11

The data from the first year’s LHC run are insufficient to be sensitive to the Standard Model Higgs; for that, at least ten times the number of collisions will be necessary. However, a possible extension of the Standard Model incorporates a fourth family of fermions with very large masses. The presence of this fourth family substantially increases the production cross-section of Higgs bosons via gluon fusion and also alters the decay branching fractions. Higgs production could be enhanced by a factor of about nine, such that Higgs particles could be observed in the 2010 CMS data sample.

To select H → W+W– candidates, the CMS analysis uses additional observables such as the transverse momenta of the leptons, the azimuthal angle between the two leptons, and the dilepton invariant mass. The analysis found no evidence of the Higgs boson in the 2010 data sample. This enables the Higgs mass range of 144–207 GeV/c2 to be excluded, at a 95% confidence level, for physics models that include a fourth generation of leptons and quarks. This constraint is more stringent at high masses than similar results from the Tevatron collider at Fermilab.

LHCb sets limits on rare B decays to dimuons

CCnew10_03_11

The decay of B0 and B0s mesons exclusively to dimuons (μ+μ–) is one of the most important channels in the search for new physics in the flavour sector at the LHC. In the Standard Model, the decays are rare because they can proceed only by processes involving loop diagrams and are in addition helicity-suppressed. However, new particles, in particular those that arise in models with an extended Higgs sector, can augment the decay rates and thus provide signs of new physics.

LHCb has published its first results for this important channel after searching for dimuon decays of both B0 and B0s in data collected at the LHC in proton–proton collisions at 7 TeV in the centre-of-mass. For the analysis, the collaboration used data from an integrated luminosity of around 37 pb–1 collected between July and October 2010.

They find no signal for either of the dimuon decays in this data sample: the observed numbers of events are consistent with the expectations for background. This allows the collaboration to place upper limits on the branching ratios for the two decays: B(B0s → μ+μ–) < 5.6 × 10–8 and B(B0 → μ+μ–) < 1.5 × 10–8 at 95% confidence level (CL). This is to be compared with Standard Model expectations of B(B0s → μ+μ–) = 0.3 × 10–8 and B(B0 → μ+μ–) = 0.01 × 10–8.

While there have previously been searches for B0(s) → μ+μ– at e+e– colliders, the highest sensitivity has so far been achieved at Fermilab’s Tevatron, thanks to the large bb̄ cross-section at hadron colliders. The most restrictive published limits at 95% CL come from the DØ collaboration’s analysis of 6.1 fb−1, yielding B(B0s → μ+μ–) < 5.1 × 10–8, and the CDF collaboration’s analysis of 2 fb−1, yielding B(B0s → μ+μ–) < 5.8 × 10–8 and B(B0 → μ+μ–) < 1.8 × 10–8.

With less than 40 pb–1, LHCb has therefore already approached the sensitivity of existing measurements. This was possible thanks to the large acceptance and trigger efficiency of the experiment, as well as the increase in the bb̄ cross-section at the higher energy of the LHC. With a much larger amount of data expected in 2011, the experiment should be able to explore smaller branching ratios, down to the interesting level of 10–8.

Looking into the Earth’s interior with geo-neutrinos

CCneu1_03_11

The journal Science celebrated its 125th anniversary in 2005 and in a special issue listed what it considered to be the top 25 questions facing scientists during the next quarter of a century (Kerr 2005). These questions included: how does the Earth’s interior work?

The main geophysical and geochemical processes that have driven the evolution of the Earth are strictly bound by the planet’s energy budget. The current flux of energy entering the Earth’s atmosphere is well known: the main contribution comes from solar radiation (1.4 × 103 W m–2), while the energy deposited by cosmic rays is significantly smaller (10–8 W m–2). The uncertainties on terrestrial thermal power are larger – although the most quoted models estimate a global heat loss in the range of 40–47 TW, a global power of 30 TW is not excluded. The measurements of the temperature gradient taken from some 4 × 104 drill holes distributed around the world provide a constraint on the Earth’s heat production. Nevertheless, these direct investigations fail near the oceanic ridges, where mantle material emerges: here hydrothermal circulation is a highly efficient heat-transport mechanism.

The generation of the Earth’s magnetic field, its mantle circulation, plate tectonics and secular (i.e. long lasting) cooling are processes that depend on terrestrial heat production and distribution, and on the separate contributions to Earth’s energy supply (radiogenic, gravitational, chemical etc.). An unambiguous and observationally based determination of radiogenic heat production is therefore necessary for understanding the Earth’s energetics. Such an observation requires determining the quantity of long-lived radioactive elements in the Earth. However, direct geochemical investigations reach only the upper portion of the mantle, so all of the geochemical estimates of the global abundances of heat-generating elements depend on the assumption that the composition of meteorites reflects that of the Earth.

CCneu2_03_11

The uranium and thorium decay chains and 40K contribute about 99% of the total radiogenic heat production of the Earth; however, both the total amount and the distribution of these elements inside the Earth remain open to question. Thorium and uranium are refractory lithophile elements, while potassium is volatile. The processes of accretion and differentiation of the early Earth, as well as the subsequent processes of recycling and dehydrating subducting slabs, further enhance the concentrations of these radioactive elements in the crust. According to Roberta Rudnick and Shan Gao, the radiogenic heat production of the crust is 7.3 ± 1.2 (1σ) TW (Rudnick and Gao 2003).

The expected amount and distribution of uranium, thorium and potassium in the mantle are model dependent. The Bulk Silicate Earth (BSE) is a canonical model that describes the geological evidence coherently within the constraints placed by combined studies of mantle samples and the most primitive of all meteorites – the CI group of carbonaceous chondrites – whose chemical composition is similar to that of the solar photosphere, neglecting gaseous elements. The model predicts a radiogenic heat production in the mantle of about 13 TW. However, it needs to be tested because, on the grounds of available geochemical and/or geophysical data, it is not possible to exclude the theory that the radioactivity in the Earth today is enough to account for the highest estimate of the total terrestrial heat. Some models are based on a comparison of the planet with other chondrites, such as enstatite chondrites, and alternative hypotheses do not exclude the presence of radioactive elements in the Earth’s core. In addition, other models suggest the existence of a geo-reactor of 3–6 TW induced by important amounts of uranium present around the core. The debate remains open.

Neutrinos from the Earth

Geo-neutrinos are the (anti)neutrinos produced by the natural radioactivity inside the Earth. In particular, the decay chains of 238U and 232Th include six and four β decays, respectively, and the nucleus of 40K decays by electron capture and β decay with branching ratios of 11% and 89%, respectively. The decays produce heat and electron antineutrinos, with fixed ratios of heat to neutrinos (table 1). A measurement of the antineutrino flux, and possibly of the spectrum, would provide direct information on the amount and composition of radioactive material inside the Earth and so would determine the radiogenic contribution to the heat flow.

CCneu3_03_11

The Earth emits mainly in electron-antineutrinos, while the Sun shines in electron-neutrinos. The order of magnitude of the antineutrino flux at the surface, depending on the model assumptions, could be 106 cm–2 s–1 from uranium and thorium in the Earth and 107 cm–2 s–1 from potassium, as compared with a neutrino flux of 6 × 1010 cm–2 s–1 from the Sun. Given the two types of crust (continental and oceanic) and their different composition and thickness, the expected flux of geo-neutrinos differs from place to place on the Earth’s surface. Moreover, considering that this variation can be as much as an order of magnitude, a detector’s sensitivity to geo-neutrinos coming from the mantle and the crust will depend on its location.

The process for the detection of low-energy antineutrinos used by the detectors currently running (KamLAND at Kamioka, Japan, and Borexino at Gran Sasso, Italy) and under construction (SNO+ at SNOlab, Canada), is inverse beta decay with a threshold of 1.806 MeV. Hence, only a fraction of the geo-neutrinos from 238U and 232Th are above threshold (figure 1), and the detection of antineutrinos from 40K remains a difficult challenge even for next-generation detectors. These experiments use liquid scintillator as the detecting material: one kilotonne of it contains some 1032 protons. As a consequence the event rate is conveniently expressed in terms of terrestrial neutrino units (TNU), defined as one event per 1032 target protons a year.
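The bookkeeping behind the unit can be sketched in a few lines. This is an illustrative calculation only – the function and example numbers are hypothetical, and a real analysis would also fold in detection efficiency:

```python
# Sketch of the TNU definition above: 1 TNU corresponds to one event
# per 10**32 target protons per year of live time. All numbers here are
# illustrative, not taken from any experiment.

PROTONS_PER_KT = 1e32  # approximate free protons in 1 kt of liquid scintillator

def rate_in_tnu(n_events: float, target_mass_kt: float, live_days: float) -> float:
    """Convert an observed event count into terrestrial neutrino units."""
    exposure = (target_mass_kt * PROTONS_PER_KT / 1e32) * (live_days / 365.25)
    return n_events / exposure

# e.g. 10 events in a 0.25 kt target over one year of live time:
print(rate_in_tnu(10, 0.25, 365.25))  # 40.0 TNU (before efficiency corrections)
```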

In the underground experiments devoted to the measurement of geo-neutrinos, liquid scintillator – essentially hydrocarbons – provides the hydrogen nuclei that act as the target for the antineutrinos. In these detectors a geo-neutrino event is tagged by a prompt signal and a delayed signal, following the inverse beta decay: ν̄e + p → e+ + n – 1.806 MeV.

The positron ionization and annihilation provide the prompt signal. The energy of the incoming neutrino is related to the measured energy by the relationship: Eν = Emeasured + 0.782 MeV. The prompt signal is in the energy range (1.02, 2.50) MeV for uranium and (1.02, 1.47) MeV for thorium. The neutron slows down and after thermalization is captured by a proton, making a deuteron and a gamma ray of 2.22 MeV. The gamma ray generates the delayed signal. In large volumes of liquid scintillator the delayed signal is fully contained with an efficiency of around 98%.
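The energy windows quoted above follow directly from the relationship between the neutrino energy and the measured prompt energy. A minimal sketch, using approximate literature values for the spectrum endpoints of the two decay chains:

```python
# Prompt-signal windows from E_nu = E_measured + 0.782 MeV, as in the text.
# The spectrum endpoints below are approximate values for the two chains.

IBD_THRESHOLD = 1.806  # MeV, minimum antineutrino energy for inverse beta decay
OFFSET = 0.782         # MeV, difference between E_nu and the prompt energy

def prompt_window(e_nu_max: float) -> tuple:
    """Return the (min, max) prompt energies in MeV for a given endpoint."""
    return (IBD_THRESHOLD - OFFSET, e_nu_max - OFFSET)

print(prompt_window(3.27))  # 238U chain: about (1.02, 2.49) MeV
print(prompt_window(2.25))  # 232Th chain: about (1.02, 1.47) MeV
```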

The prompt–delayed sequence of the inverse beta decay provides a strong tag for electron antineutrinos, well known since the pioneering experiment of Clyde Cowan and Fred Reines in 1956. There is a correlation in space and time between the prompt and delayed signals. The correlated time depends on the properties of the scintillator and is in the order of 200–250 μs. The correlated distance between the two signals reflects the spatial resolution of the detector (around 10 cm at 1 MeV) and is driven by the gamma ray’s Compton interactions; in practice the two signals almost always lie within 1 m of each other.

Any electron-antineutrinos besides the ones produced within the Earth, and any event that can mimic a prompt–delayed signal with a neutron in the final state, can be a source of background. In particular, consider electron-antineutrinos produced by nuclear power reactors. Their energy spectrum partially overlaps the one for geo-neutrinos but extends towards higher energies, up to about 10 MeV. Some 400 power reactors exist, mainly in North America, Europe, West Russia and Japan. Therefore, depending on the location of the underground laboratories, this background can produce a significant interference with the detection of geo-neutrinos.

CCneu4_03_11

Among other background sources there are (α,n) reactions resulting from contaminants in the scintillator, such as 210Po, and cosmogenic radioactive isotopes such as 9Li and 8He, which are produced by muons crossing the laboratory overburden. 9Li and 8He decay through beta-delayed neutron emission with T1/2 = 178.3 ms and 119 ms, respectively. A dead-time cut – vetoing 2 s after each detected muon crossing the liquid scintillator – rejects this background with an efficiency of 99.9%. A high level of radiopurity and a fiducial-mass cut reduce uncorrelated random coincidences, which can arise from impurities such as 210Bi, 214Bi and 208Tl.
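The quoted veto efficiency can be checked against the half-lives. Assuming a simple exponential decay law (a sketch of the argument, not the full background analysis):

```python
import math

# Fraction of cosmogenic 9Li/8He decays that escape a 2 s veto window
# opened after each crossing muon, under a pure exponential-decay assumption.

def surviving_fraction(half_life_s: float, veto_s: float) -> float:
    tau = half_life_s / math.log(2)  # mean lifetime from the half-life
    return math.exp(-veto_s / tau)   # probability of decaying after the veto

for name, t_half in (("9Li", 0.1783), ("8He", 0.119)):
    print(name, surviving_fraction(t_half, 2.0))
```

Both fractions come out well below 10–3, consistent with the 99.9% rejection efficiency quoted in the text.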

Detecting geo-neutrinos

CCneu5_03_11

The first attempt to detect geo-neutrinos was made by the KamLAND experiment in 2005, where a signal was detected at the 2σ level (Araki et al. 2005). Three years later the same experiment reported a second measurement at 2.7σ (Abe et al. 2008). In 2010 Borexino reported evidence of geo-neutrinos at 4.2σ (Bellini et al. 2010). This was followed by a measurement in KamLAND with the same significance (Inoue 2010 and Shimizu 2010). The KamLAND and the Borexino experiments both make use of a large mass of organic liquid scintillator shielded by a large-volume water Cherenkov detector and viewed by a large number of photomultipliers (around 2000). In KamLAND in particular, a fiducial mass of around 700 tonnes can be selected, whereas in Borexino the maximum target mass can be as much as 280 tonnes. The statistics of the KamLAND measurement are higher than in Borexino owing to the larger volume and longer exposure; on the other hand, the signal-to-noise ratio in the geo-neutrino spectral window is about 2 for Borexino and about 0.15 for KamLAND.

The interesting quantity is the flux of geo-neutrinos in a given location on the Earth’s surface. This depends on the spatial distribution of the heat-generating elements within the Earth. Geo-neutrinos can travel as much as some 12,000 km to the detector. Therefore, the measured flux of geo-neutrinos must include the effect of neutrino oscillations. It turns out that for geo-neutrinos, the global effect of oscillations is reduced to a constant suppression of the flux through an average survival probability, ⟨Pee⟩, of around 0.57.
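The origin of that constant suppression can be sketched with the standard two-flavour averaging: over baselines of thousands of kilometres the oscillatory term averages to one half. The mixing value used below is an assumed illustrative number, roughly consistent with solar-neutrino fits, and is not taken from the article:

```python
# Distance-averaged survival probability for electron antineutrinos:
# <Pee> ~ 1 - 0.5 * sin^2(2*theta12) once the oscillatory term averages out.
# sin^2(2*theta12) ~ 0.86 is an assumed illustrative value.

def averaged_survival(sin2_2theta12: float) -> float:
    """Distance-averaged electron-antineutrino survival probability."""
    return 1.0 - 0.5 * sin2_2theta12

print(averaged_survival(0.86))  # about 0.57, matching the suppression quoted above
```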

The number of observed geo-neutrino events in KamLAND is 106 +29/–28 (+89/–78) at 1σ (3σ), with 2135 live-days and a target mass of about 670 tonnes. Borexino has observed 9.9 +4.1/–3.4 (+14.6/–8.2) geo-neutrino events in 482 days and 225 tonnes at 1σ (3σ). The rates in TNU for the Borexino and KamLAND observations correspond to 64.8 +26.6/–21.6 and 38.3 +10.3/–9.9, respectively. In fits to the detected data in both experiments, the shapes of the geo-neutrino spectra are the same as in figure 1, assuming the chondritic Th/U mass ratio of 3.9. The combined KamLAND and Borexino observation has a significance of 5σ (Fogli et al. 2010). Figure 2 shows the allowed range for geo-neutrino rates in Borexino and KamLAND as a function of the Earth’s radiogenic heat. The minimum radiogenic heat of the Earth corresponds to the crust contribution alone.

CCneu6_03_11

The signal-to-noise ratio for reactor antineutrino background in the geo-neutrino energy range is a fundamental parameter for geo-neutrino observations. In Borexino in particular this ratio – neglecting other backgrounds – is around 1.3 because there are no nearby nuclear reactors. Indeed, at Gran Sasso the weighted distance to reactors <Lreac> is about 1000 km. By contrast, at Kamioka <Lreac> is around 200 km with a signal-to-noise ratio of about 0.2. Therefore, at present the significance of the Borexino measurement is limited only by the statistics (figure 3). This indicates that a spectroscopic measurement of the geo-neutrino signal is feasible, taking into account the overall low background rate.

In a few years a third detector, SNO+, with a weighted reactor distance <Lreac> of around 480 km should be operational. A combined analysis of the Borexino, KamLAND and SNO+ experiments could constrain the radiogenic heat of the mantle. In the long term, LENA – a super-massive detector of about 50 kilotonnes – could observe as many as 1000 geo-neutrinos a year. LENA would be located at the Centre for Underground Physics at Pyhäsalmi in Finland with <Lreac> of around 1000 km.

• The authors acknowledge some interesting discussions with 
W F McDonough, R L Rudnick and G Fiorentini.

Jülich welcomes the latest spin on physics

CCspi1_03_11

The international conference series on spin originated with the biannual Symposia on High Energy Spin Physics, launched in 1974 at Argonne, and the Symposia on Polarization Phenomena in Nuclear Physics, which started in 1960 at Basle and were held every five years. Joint meetings began in Osaka in 2000, with the latest, SPIN2010, being held at the Forschungszentrum Jülich, chaired by Hans Ströher and Frank Rathmann. The 19th International Spin Physics Symposium was organized by the Institut für Kernphysik (IKP), host of the 3 GeV Cooler Synchrotron, COSY – a unique facility for studying the interactions of polarized protons and deuterons with internal polarized targets. Research there is aimed at developing new techniques in spin manipulation for applications in spin physics, in particular for the new Facility for Antiproton and Ion Research (FAIR) at GSI, Darmstadt. The 250 or so talks presented at SPIN2010 covered all aspects of spin physics – from the latest results on transverse spin physics from around the world to spin dependence in fusion reactors.

The conference started with a review of the theoretical aspects of spin physics by Ulf-G Meißner, director of the theory division at IKP, who focused on the challenges faced by the modern effective field-theory approach to few-body interactions at low and intermediate energies. Progress here has been tremendous but old puzzles such as the analysing power, Ay, in proton-deuteron scattering, refuse to be fixed. These were discussed in more detail in the plenary talks by Evgeny Epelbaum of Bochum and Johan Messchendorp of Groningen. In the second talk of the opening plenary session, Richard Milner of the Massachusetts Institute of Technology (MIT) highlighted the future of experimental spin physics.

It is fair to say that the classical issue of the helicity structure of protons has decided to take a rest, in the sense that rapid progress is unlikely. During the heyday of the contribution of the Efremov-Teryaev-Altarelli-Ross spin anomaly to the Ellis-Jaffe sum rule, it was tempting to attribute the European Muon Collaboration “spin crisis” to a relatively large number of polarized gluons in the proton. Andrea Bressan of Trieste reported on the most recent data from the COMPASS experiment at CERN, on the helicity structure function of protons and deuterons at small x, as well as the search for polarized gluons via hard deep inelastic scattering (DIS) reactions. Kieran Boyle of RIKEN and Brookhaven summarized the limitations on Δg from data from the Relativistic Heavy Ion Collider (RHIC) at Brookhaven. The non-observation of Δg within the already tight error bars indicates that gluons refuse to carry the helicity of protons. Hence, the dominant part of the proton helicity is in the orbital momentum of partons.

The extraction of the relevant generalized parton distributions from deeply virtual Compton scattering was covered by Michael Düren of Gießen for the HERMES experiment at DESY, Andrea Ferrero of Saclay for COMPASS and Piotr Konczykowski for the CLAS experiment at Jefferson Lab. Despite impressive progress, there is still a long road ahead towards data that could offer a viable evaluation of the orbital momentum contribution to Ji’s sum rule. The lattice QCD results reviewed by Philipp Hägler of Munich suggest the presence of large orbital-angular momenta, Lu ≈ –Ld ≈ 0.36 (1/2), which tend to cancel each other.

The future of polarized DIS at electron–ion colliders was reviewed by Kurt Aulenbacher of Mainz. The many new developments range from a 50-fold increase in the current of polarized electron guns to a 1000-fold increase in the rate of electron cooling.

Transversity was high on the agenda at SPIN2010. It is the last, unknown leading-twist structure function of the proton – without it the spin tomography of the proton would be forever incomplete. Since the late 1970s, everyone has known that QCD predicts the death of transverse spin physics at high energy. It took quite some time for the theory community to catch up with the seminal ideas of J P Ralston and D E Soper of some 30 years ago on the non-vanishing transversity signal in double-polarized Drell-Yan (DY) processes; it also took a while to accept the Sivers function, although the Collins function fell on fertile ground. Now, the future of transverse spin physics has never been brighter. During the symposium, news came of the positive assessment by CERN’s Super Proton Synchrotron Committee with respect to the continuation of COMPASS for several more years.

CCspi2_03_11

Both the Collins and Sivers effects have been observed beyond doubt by HERMES and COMPASS. With its renowned determination of the Collins function, the Belle experiment at KEK paved the way for the first determination of the transversity distribution in the proton, which turns out to be similar in shape and magnitude to the helicity density in the proton. Mauro Anselmino reviewed the phenomenology work at Turin, which was described in more detail by Mariaelena Boglione. Non-relativistically, the tensor/Gamow-Teller (transversity) and axial (helicity) currents are identical. The lattice QCD results reported by Hägler show that the Gamow-Teller charge of protons is indeed close to the axial charge.

The point that large transverse spin effects are a feature of valence quarks has been clearly demonstrated in single-polarized proton–proton collisions at RHIC by the PHENIX experiment, as Brookhaven’s Mickey Chiu reported. The principal implication for the PAX experiment at FAIR from the RHIC data, the Turin phenomenology and lattice QCD is that the theoretical expectations of large valence–valence transversity signals in DY processes with polarized antiprotons on polarized protons are robust.

The concern of the QCD community about a contribution of the orbital angular momentum of constituents to the total spin is nothing new to the radioactive-ion-beam community. Hideki Ueno of RIKEN reported on the progress in the production of spin-aligned and polarized radioactive-ion beams, where the orbital momentum of stripped nucleons shows itself in the spin of fragments.

The spin-physics community is entering a race to test the fundamental QCD prediction of the opposite sign of the Sivers effect in semi-inclusive DIS and DY on polarized protons. As Catarina Quintans from Lisbon explained, COMPASS is well poised to pursue this line of research. At the same time, ambitious plans to measure AN in DY experiments with transverse polarization at RHIC, which Elke-Caroline Aschenauer of Brookhaven presented, have involved scraping together a “yard-sale apparatus” for a proposal to be submitted this year. Paul Reimer of Argonne and Ming Liu of Los Alamos discussed the possibilities at the Fermilab Main Injector.

Following the Belle collaboration’s success with the Collins function, Martin Leitgab of Urbana-Champaign reported nice preliminary results on the interference fragmentation function. These cover a broad range of invariant masses in both arms of the experiment.

In his summary talk, Nikolai Nikolaev, of Jülich, raised the issue of the impact of hadronization on spin correlation. As Wolfgang Schäfer observed some time ago, the beta decay of open charm can be viewed as the final step of the hadronization of open charm. In the annihilation of e+e– to open charm, the helicities of heavy quarks are correlated and the beta decay of the open charm proceeds via the short-distance heavy quark; so there must be a product of the parity-violating components in the dilepton spectrum recorded in two arms of an experiment. However, because the spinning D* mesons decay into spin-zero D mesons, the spin of the charmed quark is washed out and the parity-violating component of the lepton spectrum is obliterated.

The PAX experiment to polarize stored antiprotons at FAIR featured prominently during the meeting. Jülich’s Frank Rathmann reviewed the proposal and also reported on the spin-physics programme of the COSY-ANKE spectrometer. Important tests of the theories of spin filtering in polarized internal targets will be performed with protons at COSY, before the apparatus is moved to the Antiproton Decelerator at CERN – a unique place to study the spin filtering of antiprotons. Johann Haidenbauer of Jülich, Yury Uzikov of Dubna and Sergey Salnikov of the Budker Institute of Nuclear Physics reported on the Jülich- and Nijmegen-model predictions for the expected spin-filtering rate. There are large uncertainties with modelling the annihilation effects but the findings of substantial polarization of filtered antiprotons are encouraging. Bogdan Wojtsekhowski of Jefferson Lab came up with an interesting suggestion for the spin filtering of antiprotons using a high-pressure, polarized 3He target. This could drastically reduce the filtering time but the compatibility with the storing of the polarized antiprotons remains questionable.

Kent Paschke of Virginia gave a nice review on nucleon electromagnetic form factors, where there is still a controversy between the polarization transfer and the Rosenbluth separation of GE and GM. He and Richard Milner of MIT discussed future direct measurements of the likely culprit – the two-photon exchange contribution – at Jefferson Lab’s Hall B, at DESY with the OLYMPUS experiment at DORIS and at VEPP-III at Novosibirsk.

Spin experiments have always provided stringent tests of fundamental symmetries and there were several talks on the electric dipole moments (EDMs) of nucleons and light nuclei. Experiments with ultra-cold neutrons could eventually reach a sensitivity of dn ≈ 10–28 e⋅cm for the neutron EDM, while new ideas on electrostatic rings for protons could reach a still smaller dp ≈ 10–29 e⋅cm. The latter case, pushed strongly by the groups at Brookhaven and Jülich, presents enormous technological challenges. In the race for high precision versus high energy, such upper bounds on dp and dn would impose more stringent restrictions on new physics (supersymmetry etc.) than LHC experiments could provide.

Will nuclear polarization facilitate a solution to the energy problem? There is an old theoretical observation by Russell Kulsrud and colleagues that the fusion rate in tokamaks could substantially exceed the rate of depolarization of nuclear spins. While the spin dependence of the 3HeD and D3H fusion reactions is known, the spin dependence of the DD fusion reaction has never been measured. Kirill Grigoriev of PNPI Gatchina reported on the planned experiment on polarized DD fusion. Even at energies in the 100 keV range, DD reactions receive substantial contributions from higher partial waves and, besides possibly meeting the demands of fusion reactors, such data would provide stringent tests of few-body theories – as of 2010, the existing theoretical models predict quintet suppression factors that differ by nearly an order of magnitude.

• The proceedings will be published by IOP Publishing in Journal of Physics: Conference Series (online and open-access). The International Spin Physics Committee (www.spin-community.org) decided that the 20th Spin Physics Symposium will be held in Dubna in 2012.
