

The return of quarkonia

Polarization parameter λ

Since the revolutionary discovery of the J/ψ meson, quarkonia – bound states of heavy quark–antiquark pairs – have played a crucial role in understanding fundamental interactions. Being the hadronic-physics equivalent of positronium, they allow detailed study of some of the basic properties of quantum chromodynamics (QCD), the theory of strong interactions. Yet, despite the apparent simplicity of these states, the mechanism behind their production remains a mystery after decades of experimental and theoretical effort (Brambilla et al. 2011). In particular, the angular decay distributions of the quarkonium states produced in hadron collisions – which should provide detailed information on their formation and quantum properties – remain difficult to reproduce and present a seemingly irreconcilable disagreement between the measurements and the QCD predictions.
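The polarization parameter λ that these measurements extract characterizes the polar anisotropy of the dilepton decay distribution. In its standard parametrization, with θ the emission angle of the positive lepton with respect to a chosen polarization axis:

```latex
W(\cos\theta) \propto \frac{1}{3+\lambda}\,\bigl(1+\lambda\cos^{2}\theta\bigr),
\qquad
\lambda = \begin{cases}
  +1 & \text{fully transverse polarization,}\\
  \phantom{+}0 & \text{isotropic decay,}\\
  -1 & \text{fully longitudinal polarization.}
\end{cases}
```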

Given the success of the Standard Model, why has this intriguing situation not attracted more attention in the high-energy-physics community? The reason may be that this problem belongs to the notoriously obscure and computationally cumbersome “non-perturbative” side of the Standard Model. While the failure to reproduce an experimental observable that is perturbatively calculable in the electroweak or strong sector would be interpreted as a sign of new physics, phenomena requiring a non-perturbative treatment – such as those related to the long-distance regime of the strong force – are less likely to trigger an immediate reaction.

It can also be argued that, until recently, doubts existed regarding the reliability of the experimental data, given some contradictions among results and the incompleteness of the analysis strategies (Faccioli et al. 2010). Similar doubts also existed about the usefulness of the data as a test of theory, given their limited extension into the “interesting” region of high transverse-momentum (pT). The recently published, precise and exhaustive polarization measurements of Υ from the CDF and CMS experiments (CDF collaboration 2012 and CMS collaboration 2013a), which extend to pT of around 40 GeV, have significantly changed this picture, building a robust and unambiguous set of results to challenge the theoretical predictions.


Quarkonium production has been the subject of ambitious theoretical efforts aimed at fully and systematically calculating how an intrinsically non-perturbative system (the cc̄ or bb̄ state) is produced in high-energy collisions and – potentially – at providing Standard Model references for fully fledged precision studies. The nonrelativistic QCD (NRQCD) framework consistently fuses perturbative and non-perturbative aspects of the quarkonium production process, exploiting the notion that the heavy quark and antiquark move relatively slowly when bound as a quarkonium state (Bodwin et al. 1995). This approach introduces into the calculations a mathematical expansion in the quark's squared velocity, v², supplementing the usual expansion in the strong coupling constant αs of the hard-scattering processes.
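Schematically, this double expansion rests on the NRQCD factorization formula, written here in its standard general form rather than for any specific process:

```latex
\sigma\bigl(A+B \to H+X\bigr)
  = \sum_{n}\,
    \hat{\sigma}\bigl(A+B \to Q\bar{Q}[n]+X\bigr)\,
    \bigl\langle \mathcal{O}^{H}[n] \bigr\rangle ,
```

where the short-distance cross-sections σ̂ are computed perturbatively in αs and the long-distance matrix elements ⟨O[n]⟩ scale as definite powers of v. Since v² is roughly 0.3 for charmonium and 0.1 for bottomonium, the velocity expansion converges more quickly for the heavier system.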

The non-perturbative ingredients in these calculations are the long-distance matrix elements (LDME) that describe the transitions from point-like quark–antiquark objects, which can also be coloured (“colour-octet” states), to the colourless observable quarkonia. In principle these could be calculated using non-perturbative models but the current approach leaves them as free parameters of a global fit to some kinematic spectra of quarkonium production. This approach successfully reproduces the differential pT cross-sections, which has been interpreted as a plausible indication that the underlying assumptions are correct.
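Because the LDMEs multiply the short-distance cross-sections linearly, the global-fit step is, in caricature, a linear least-squares problem. The sketch below uses invented pT shapes, coefficients and uncertainties purely to illustrate that structure; it is not a real NRQCD fit:

```python
# Toy illustration of an LDME-style extraction: the measured spectrum is
# modelled as a linear combination of fixed pT shapes, so the "matrix
# elements" are linear fit coefficients. All numbers are invented.
import numpy as np

rng = np.random.default_rng(0)
pt = np.linspace(10.0, 40.0, 16)          # GeV, toy bins

# Stand-ins for perturbative short-distance coefficients of two channels.
sdc = np.column_stack([pt**-4.0, pt**-6.0])

true_ldme = np.array([0.5, 40.0])         # invented "matrix elements"
data = sdc @ true_ldme
data *= 1.0 + 0.005 * rng.standard_normal(data.size)   # 0.5% toy noise

fitted, *_ = np.linalg.lstsq(sdc, data, rcond=None)
print(fitted)    # close to the invented inputs [0.5, 40.0]
```

In the real fits the coefficients come from next-to-leading-order calculations and the data carry correlated systematic uncertainties, but the linear dependence on the LDMEs is the same.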

The next step in the validation of the NRQCD framework is to make other predictions without changing the previously fitted matrix elements and compare them with independent measurements. The framework clearly predicts that S-wave quarkonia (J/ψ, ψ(2S) and the Υ(nS) states) directly produced in parton–parton scattering at pT much higher than their mass are transversely polarized – that is, their angular momentum vectors are aligned like the spin of a real photon. Specifically, considering their decay into μ+μ–, this means that the decay leptons are preferentially emitted along the meson’s direction of motion. The measurements made by CDF and CMS contradict this picture dramatically: the Υ states always decay almost isotropically, meaning that they are produced with no preferred orientation of their angular momentum vectors.

One aspect to keep in mind is that sizeable but not yet well measured fractions of the S-wave quarkonia (orbital angular momentum L=0) are produced from feed-down decays of P-wave states (L=1) leading to more complex polarization patterns. In particular, it is conceivable that the transverse polarization of the directly produced Υ(1S) mesons, say, is washed away by a suitable level of longitudinal polarization brought by the Υ(1S) mesons produced in χb decays. Such potential “conspiracies” illustrate how intertwined the studies of S- and P-wave states are, showing that a complete understanding of the underlying physics requires a global analysis of the whole family.
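Such a dilution can be made quantitative: when several processes contribute with fractions f_i and polarization parameters λ_i, the observed λ is a weighted combination of the individual ones. The rule below is the standard one; the fractions in the example are invented for illustration:

```python
# Illustration of polarization dilution by feed-down. The combination rule
# below is the standard frame-independent way of averaging the lambda
# parameters of processes contributing with fractions f_i.
def effective_lambda(fractions, lambdas):
    num = sum(f * l / (3.0 + l) for f, l in zip(fractions, lambdas))
    den = sum(f / (3.0 + l) for f, l in zip(fractions, lambdas))
    return num / den

# Invented mix: 60% direct production, fully transverse (lambda = +1),
# plus 40% chi_b feed-down, fully longitudinal (lambda = -1).
mix = effective_lambda([0.6, 0.4], [1.0, -1.0])
print(mix)   # about -0.14: nearly isotropic despite fully polarized inputs
```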

Few measurements are so far available on the production and polarization of P-wave quarkonia (χc and χb), which are experimentally challenging because the main detection channels involve radiative decays producing low-energy photons. In this respect the Υ(3S) resonance, only affected by feed-down decays from the recently discovered χb(3P) state, a presumably small contribution, offers a clearer comparison between predictions and measurements: the verdict is that there is striking disagreement, as the left-hand figure above shows.

A more decisive assessment of the seriousness of the theory difficulties is provided by measurements of the polarization of high-pT charmonia. Such data probe a domain of high values of the ratio of pT to mass, where the NRQCD prediction is supposed to rest on firmer ground. Furthermore, the heavier charmonium state, ψ(2S), is free from feed-down decays and so its decay angular distribution exclusively reflects the polarization of S-wave quarkonia directly produced in parton–parton scattering, therefore representing a cleaner test of theory. The results for the ψ(2S) shown recently by the CMS collaboration at the Large Hadron Collider Physics Conference, reaching up to pT of 50 GeV, are in disagreement with the theoretically expected transverse polarization, as the right-hand figure indicates (CMS collaboration 2013b). This challenges the assumed hypothesis that long- and short-distance aspects of the strong force can be separated in calculations on these QCD phenomena. The ultimate “smoking-gun signal” will come from measurements of the polarization of directly produced J/ψ mesons. These probe higher pT/mass ratios and lower heavy-quark velocities than the studies of ψ(2S) but at additional cost in the necessary experimental discrimination of the J/ψ mesons from χc decays.


Definite judgements will have to wait for more thorough scrutiny of the theoretical ingredients. An explicit proof that perturbative and non-perturbative effects can be factorized – already existing for several hard-scattering processes in QCD – has yet to be formally provided for the case of quarkonium production. At the same time, the method to determine the colour-octet transition-matrix elements using measured pT spectra must be improved. For example, the existing NRQCD global fits use differential cross-sections measured with acceptance corrections that are evaluated assuming unpolarized production, ignoring the large uncertainty that the experiments assign to the lack of prior knowledge about quarkonium polarization (the acceptance determinations strongly depend on the shape of the dilepton decay distributions). Paradoxically, the fit results lead to the prediction of strong transverse polarization. Moreover, while the NRQCD predictions are considered robust only at sufficiently high pT, the fits assign equal weight to data collected at high pT and those collected at pT values that are similar to the quarkonium mass, which drive the results because of their higher precision. Finally, it could be that the higher-order corrections in the perturbative part of the calculations (currently performed at next-to-leading order in αs) are sizable and not yet well accounted for in current theoretical uncertainties, or that the LDME expansion in the heavy-quark velocity should be reconsidered.
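The acceptance issue is easy to illustrate: for a simple cut |cosθ| ≤ c on the dilepton decay angle, the accepted fraction of events follows from integrating the normalized 1 + λ cos²θ distribution, and it shifts appreciably between the isotropic and transverse hypotheses. The cut value below is invented:

```python
# Why acceptances depend on the assumed polarization: fraction of decays
# passing a |cos(theta)| <= c cut, from integrating the normalized
# (1 + lambda*cos^2(theta)) distribution. The cut value c is invented.
def acceptance(lam, c):
    return (3.0 * c + lam * c**3) / (3.0 + lam)

c = 0.6
iso = acceptance(0.0, c)     # unpolarized assumption
tra = acceptance(1.0, c)     # fully transverse production
print(iso, tra)              # 0.6 versus 0.504: a shift of about 16%
```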

The solution to the quarkonium-polarization problem remains unknown but it seems a safe bet that it will open new perspectives over a whole class of processes. It could unveil an improved Standard Model capable of providing testable predictions for high-momentum production of a large category of non-perturbative hadronic objects. In any case, it will surely stimulate profound rethinking of how such phenomena can be described and predicted.

Beauty in Bologna


Some 100 physicists, including experts from all around the world, converged on Bologna on 8–12 April for the 14th International Conference on B Physics at Hadron Machines (Beauty 2013). Hosted by the Istituto Nazionale di Fisica Nucleare (INFN) and by the local physics department, the meeting took place in the prestigious Giorgio Prodi lecture hall, at the heart of a magnificent complex in the city centre.

The Beauty conference series aims to review results in the field of heavy-flavour physics and address the physics potential of existing and upcoming B-physics experiments. The major goal of this research at the high-precision frontier is to perform stringent tests of the flavour structure of the Standard Model and search for new physics, where strongly suppressed, “rare” decays and the phenomenon of CP violation – i.e. the non-invariance of weak interactions under combined charge-conjugation (C) and parity (P) transformations – play central roles. New particles may manifest themselves in the corresponding observables through their contributions to loop processes and may lead to flavour-changing neutral currents that are forbidden at tree level in the Standard Model. These studies are complementary to research at the high-energy frontier conducted by the general-purpose experiments ATLAS and CMS at the LHC, which aim to produce and detect new particles directly.

During the past decade e+e– B factories have been the main pioneers in the field of B physics, complemented by the CDF and DØ experiments at the Tevatron, which made giant leaps in the exploration of decays of the Bs0 meson. Exploiting the highly successful operation of the LHC, the experimental field of quark-flavour physics is being advanced by the purpose-built LHCb experiment, as well as by ATLAS and CMS. In the coming years, a new e+e– machine will join the B-physics programme, following the upgrade of the KEKB collider in Japan and of the Belle detector there. This field of research will therefore continue to be lively for many years, with the exciting perspective of reaching the ultimate precision in various key measurements.

The participants at Beauty 2013 were treated to reports on a variety of impressive new results. CP violation has recently been established by LHCb in the Bs0 system with a significance exceeding 5σ by means of the Bs0 → K–π+ channel. The ATLAS collaboration reported its first flavour-tagged study of Bs0 → J/ψφ decays and the corresponding result for the Bs0–B̄s0 mixing phase φs, which is in agreement with previous LHCb analyses. LHCb presented the first combination of several measurements of the angle γ of the unitarity triangle from pure tree-level decays. In the field of charm physics, a new LHCb analysis of the difference of the CP asymmetries in the D0 → π+π– and D0 → K+K– channels does not support previous measurements that pointed towards a surprisingly large asymmetry. The CDF collaboration reported on the observation of D0–D̄0 mixing, confirming the previous measurement by LHCb. Concerning the exploration of rare decays, LHCb presented the first angular analysis of Bs0 → φμ+μ–.

Di-muons and more

In addition to this selection of highlights, one of the most prominent and rare B decays, the Bs0 → μ+μ– channel, was the focus of various discussions and presentations at Beauty 2013. In the Standard Model, this decay originates from quantum-loop effects and is strongly suppressed. Recently, LHCb was able, for the first time, to observe evidence of Bs0 → μ+μ– at the 3.5σ level. The reported (time-integrated) branching ratio of 3.2 (+1.5/–1.2) × 10–9 agrees with the Standard-Model prediction. Although the current experimental error is still large, this measurement places important constraints on physics beyond the Standard Model. It will be interesting to monitor future experimental progress.


With recently proposed new observables complementing the branching ratio, the measurement of Bs0 → μ+μ– with increased precision will continue to be vital in the era of the LHC upgrade. ATLAS and CMS can also make significant contributions in the exploration of this decay. Important information will additionally come from stronger experimental constraints on B0 → μ+μ–, which has a Standard-Model branching ratio about 30 times smaller than that for Bs0 → μ+μ–; the current experimental upper bound is about one order of magnitude above the Standard-Model expectation. Assuming no suppression through new physics, B0 → μ+μ– should be observable at the upgraded LHC.

Altogether, there were 60 invited talks at Beauty 2013 in 12 topical sessions and 11 posters were displayed. In addition to the searches for new physics in the so-called “golden channels”, the talks covered many other interesting measurements, as well as progress in theory. Results on heavy-flavour production and spectroscopy at the B factories, the Tevatron and the ALICE, ATLAS, CMS and LHCb experiments were presented. Although the primary focus of the conference was B physics, two sessions were devoted entirely to CP violation in top, charm and kaon physics. There were also presentations on the status of lepton-flavour violation and models of physics beyond the Standard Model, as well as talks on the status and prospects for future B-physics experiments, SuperKEKB/Belle II and the LHCb upgrade. Moreover, each session featured a theoretical review talk. Guy Wilkinson of Oxford University gave the exciting summary talk that concluded the conference.

The charming environment of the old city centre, dating from the Middle Ages, inspired informal physics discussions during tours through beautiful squares and churches. The programme included a visit to the Bologna Museum of History, followed by the conference dinner, with some jazz music to liven up the evening. The food lived up to the reputation of the traditional Bolognese cuisine and was particularly appreciated.

The 14th Beauty conference marked, for the first time, the dominance of the LHC experiments in the heavy-flavour sector. The field is now entering a high-precision phase for B physics, with the LHC and SuperKEKB promising to enrich it with many new measurements throughout this decade. The forthcoming increase in the beam energy of the LHC will double the b-quark production rate, strengthening its role in the exciting quest for physics beyond the Standard Model.

Quarks and Beauty: An Encounter at the Airport

Ten years ago, the Beauty 2003 conference took place in Pittsburgh. I had already been working on B physics for some years and I thought this would be an opportunity to learn what was happening in the field and talk to some of the experts. In particular, the programme included a talk by Ed Witten that I was keen to hear. Above all, the conference was being hosted by Carnegie-Mellon University, where I had studied physics in the 1960s. I was looking forward to visiting the campus after decades and meeting my mentor, Lincoln Wolfenstein, who was one of the organizers.

I was based at the University of Aachen but found out that there was a convenient flight from Brussels to Pittsburgh and, as a courtesy, the university proposed that one of its cars could drop me at the airport. On arrival in Brussels, I checked in and proceeded towards immigration. There was a long line of passengers heading to the US, who had to wait for special security clearance. After some time, a young woman representing the airline came to me to ask some questions. I told her I was going to Pittsburgh for a conference. She checked my papers confirming my conference registration and hotel reservation. Then she asked me what the conference was about. To avoid going into detailed explanations, I just said: “It is about elementary particles. About quarks.” She looked at me with raised eyebrows that suggested a degree of scepticism, so I decided to explain more about quarks.

“All of the matter you see around you is made of atoms. The centre of the atom is a tiny nucleus. The nucleus itself contains tinier constituents called quarks. There are two main varieties, called up-quark and down-quark. There are some rare varieties, too, which are heavier and unstable. One of them is called the beauty-quark. That is the one the conference is about.” I paused to see if she was registering what I said. She had a bemused look, not sure if I was being serious. I thought it was the nomenclature of quarks that confused her. So I said, helpfully: “These names up, down, beauty are sort of arbitrary. There are some people who call the beauty-quark bottom. Not a nice name, in my opinion. I much prefer beauty.”

At this stage she was distinctly nervous and went to fetch one of her superiors. This was an older woman with a no-nonsense manner. She asked to see the conference papers that I had in my hand. She glanced at the first page, which was a copy of the conference poster with the name “Beauty 2003” printed in bold letters. She immediately exclaimed: “It’s a conference on cosmetics! Why didn’t you say so?” Without waiting for my reaction, she picked up my hand-baggage and hustled me past the line of waiting passengers to the top of the queue, where I could proceed to passport control. She wished me a pleasant flight and disappeared.

I did not have the chance to tell her that the beauty-quark is not a cosmetic but rather a laboratory that might shed light on some of the deep mysteries of nature, such as why we exist and why time runs forwards.

• Lalit M Sehgal, Aachen.

For Lincoln Wolfenstein, an expert in the phenomenology of weak interactions, who turned 90 in February.

Neutrino telescopes point towards exotic physics

The IceCube lab

It is more than six years since Uppsala University was host to the first Workshop on Exotic Physics with Neutrino Telescopes. Since then, the large neutrino telescopes IceCube and ANTARES have been completed and indirect searches for dark matter, monopoles and other aspects of physics beyond the Standard Model are proceeding at full strength. Indeed, some theoretical models have already been called into question by recent results from these detectors. Meanwhile, searches for dark-matter candidates and indications of physics beyond the Standard Model in experiments at the LHC have set stringent constraints on many models, complementing those derived from the neutrino telescopes. The time was therefore ripe for a second workshop, with the Centre for Particle Physics of Marseilles (CPPM) as host, bringing together 47 experts on 3–6 April.

Dark matter

Review talks on supersymmetric dark-matter candidates and the status of experimental searches opened the first day’s sessions. Supersymmetry – still a well-motivated candidate for physics beyond the Standard Model – has been put to its first serious tests at the LHC. The discovery there of a Higgs boson at a mass of 126 GeV can be seen either as just another confirmation of the Standard Model or as a first glimpse of physics beyond it. The lack of evidence so far for supersymmetry from direct searches at the LHC pushes the lower limits on supersymmetric-particle masses towards the tera-electron-volt scale and has implications for the dark-matter candidates arising in supersymmetric models. The current preferred mass-range for the lightest, stable neutralino is in the region of hundreds of giga-electron-volts. This is good news for neutrino telescopes, which – by design – are sensitive to high-energy particles. The downside is that the predicted rates from annihilation of neutralinos accumulated in heavy celestial objects are low if the constraints from the Wilkinson Microwave Anisotropy Probe and the LHC are taken into account. Only a handful of minimal supersymmetric Standard Model variants predict rates in cubic-kilometre neutrino telescopes that are of the order of 100 events per year or higher.

DeepCore

However, the neutralino in minimal supersymmetry is not the only viable candidate for dark matter. In models with R-parity violation, a long-lived but unstable gravitino with a mass between a few and a few hundred giga-electron-volts could be a component of the dark matter in the halo of galaxies. Neutrinos of any flavour can be produced in gravitino decay and can be detected by neutrino telescopes. A feature of gravitino dark matter is that it would leave no signal in direct-detection experiments because the cross-section for the interaction between a gravitino and normal matter is suppressed by the Planck mass to the fourth power.

Models with extra dimensions of sizes between 10–15 m and 10–3 m can also provide dark-matter candidates. Extra dimensions can be accommodated (or even required) in supersymmetry, string theory or M-theory, where they give rise to branons – weakly interacting and massive fluctuations of the field that represents the 3D brane on which the standard world lives. As stable and weakly interacting objects, branons make a good candidate for dark matter, following the usual scenario: relic branons left over after a freeze-out period during the evolution of the universe accumulate gravitationally in the halos of galaxies, where they annihilate into Standard Model particles that can be detected by gamma-ray telescopes, surface arrays or neutrino telescopes.

current flux limits on relativistic (β > 0.6) monopoles

From the experimental side, the IceCube, ANTARES, Baikal, Baksan and Super-Kamiokande collaborations presented their latest results on the search for dark matter from different potential sources – the Sun, the Galaxy or nearby galaxies. Their search techniques are similar and based on looking for an excess of neutrinos over the known atmospheric-neutrino background from the direction of the sources. DeepCore, the denser array in the centre of IceCube, which was not part of the original design, has proved extremely useful in lowering the energy threshold of the detector. It has opened up the possibility of pursuing physics topics that would otherwise be impossible with a detector designed for tera-electron-volt neutrino astrophysics. Using the surrounding strings of IceCube as a veto, starting and contained tracks can be defined, turning IceCube into a 4π detector with an energy threshold of around 10 GeV, with access to the Galactic centre and the whole Southern Sky.

However, none of the experiments report any excess, and upper limits on the neutrino flux and on the cross-section for interactions between weakly interacting massive particles (WIMPs) and nucleons have been calculated over an ample range of WIMP masses, from about 1 GeV (Super-Kamiokande) to 10 TeV (IceCube). An example of the long-term search capability, as well as consistency in data analysis, was presented for the Baksan experiment. Although it is the smallest of the detectors mentioned above, it has accumulated 24 years of live time between 1978 and 2009.

Monopoles, nuclearites and more

Monopoles and heavy, highly ionizing particles leave a unique signal in a neutrino telescope: a strong light-yield along the path of the particle, which is much more intense than the usual track-pattern of a minimum-ionizing muon. If the particle is nonrelativistic, then the separation of such a signal from relativistic muons traversing the detector is even easier. However, dedicated online or offline triggers are needed because for a nonrelativistic particle, light is deposited in the detector over a time of up to tens of milliseconds, instead of a few microseconds for a relativistic muon.
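The timescales behind this trigger requirement follow from simple kinematics; the kilometre scale below is an assumed, IceCube-like detector size:

```python
# The trigger-time argument in numbers: time to cross a detector of
# kilometre scale (an assumed, IceCube-like size) at velocity beta*c.
C = 299_792_458.0                     # speed of light, m/s

def crossing_time_s(beta, length_m=1000.0):
    return length_m / (beta * C)

print(crossing_time_s(1.0))           # relativistic muon: ~3 microseconds
print(crossing_time_s(1e-4))          # slow monopole: tens of milliseconds
```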

The best limit for fast (β > 0.75) monopoles, at a level of about 3 × 10–18 cm–2 s–1, was presented by the IceCube collaboration using data from its 40-string configuration, although the ANTARES limit – at a level of around 7 × 10–17 cm–2 s–1 – remains the best so far in the range 0.65 < β < 0.75. However, the sensitivity of the full IceCube detector could extend down to β = 0.60 and reach a level of between 2 × 10–18 cm–2 s–1 and 10–17 cm–2 s–1 in a one-year exposure, depending on the assumptions about the monopole spin. Results are expected soon, when the ongoing data analysis is finalized.
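The order of magnitude of such limits can be checked on the back of an envelope: with zero observed events and negligible background, the 90%-confidence Poisson upper limit is 2.3 events, and the flux limit is that number divided by the exposure. The effective area and live time below are assumed round numbers, not the actual IceCube values:

```python
# Order-of-magnitude check on the quoted monopole flux limits: with zero
# observed events and negligible background, the 90% CL Poisson upper
# limit is 2.3 events, and the flux limit is 2.3 divided by the exposure.
# Effective area and live time are assumed round numbers, not real values.
N90 = 2.303                                  # 90% CL limit, 0 events seen
area_cm2 = 1.0e10                            # ~1 km^2 effective area
solid_angle_sr = 4.0 * 3.141592653589793     # full sky
live_time_s = 3.15e7                         # ~1 year
flux_limit = N90 / (area_cm2 * solid_angle_sr * live_time_s)
print(flux_limit)                            # of order 10^-19 to 10^-18
```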

The Super-Kamiokande collaboration presented a novel way to search for monopoles using the Sun as the target. The idea is that super-heavy monopoles that have been gravitationally trapped in the Sun will induce proton decay along their orbits. Neutrinos with an energy of tens of mega-electron-volts will then be emitted by the decays of the muons and pions produced as the protons decay. This is a low-energy signal that is well below the threshold of large-scale neutrino telescopes but for which Super-Kamiokande has sensitivity. Indeed, this experiment provides the best limit so far on the flux of super-heavy monopoles in the range 10–5 < β < 10–2. At the other end of the kinematic spectrum, radio-Cherenkov detectors such as RICE and ANITA provide the best limits for ultrarelativistic monopoles of intermediate mass, at the level of 10–19 cm–2 s–1.

Another bright signature, although from a different process, is produced by slowly moving heavy nuclearites. These massive stable lumps of up, down and strange quarks could be detected in neutrino telescopes through the blackbody radiation emitted by the overheated matter along their path. From the analysis of 310 days of live time in the years 2007–2008, the ANTARES collaboration reported a flux limit at the level of 10–17 cm–2 s–1 sr–1 for nuclearite masses larger than 1014 GeV and β around 10–3. Indeed, the limit improves previous results from the MACRO experiment by a factor of between three and an order of magnitude, depending on the nuclearite mass.


The atmosphere, acting as a target for ultra-high-energy cosmic rays, can be a useful source for searches of physics beyond the Standard Model. The interaction of a cosmic ray of energy around 10¹¹ GeV with a nucleon in the atmosphere takes place at a much higher centre-of-mass energy than is achievable in accelerator laboratories and a wealth of physics can be extracted from such collisions. Supersymmetric particles can be produced in pairs and, except for the lightest, they can be charged. Even if unstable, they can, because of the boost in the interaction, reach the depths of a detector and emit Cherenkov light as they traverse an array. The signature is two minimum-ionizing, parallel, coincident tracks separated by more than 100 m. These types of searches are being carried out by the two large neutrino-telescope collaborations, IceCube and ANTARES.

ANTARES neutrino telescope

The same interactions of cosmic rays with the atmosphere can also be used to probe non-standard neutrino interactions arising from the effects of tera-electron-volt gravity and/or extra dimensions. At high energies, neutrino interactions with matter may become stronger and the atmosphere can become opaque to neutrinos with energies of peta-electron-volts. A signature in a neutrino telescope would be an absence of regular neutrinos with ultra-high energies accompanied by an excess of muon bundles at horizontal zenith angles. The same effect would take place with a cosmogenic neutrino flux – that is, the flux of neutrinos produced by the interactions of ultra-high-energy cosmic rays with the cosmic microwave background radiation. In the absence of a discovery so far, this flux can be assumed to be at a level compatible with gamma-ray constraints from the Fermi Gamma-ray Space Telescope. The neutrino-nucleon cross-section will depend on the number of extra dimensions, ND, and a lack of events over the expected flux can be transformed into a limit on ND. However, the effect in neutrino telescopes with volumes of a cubic kilometre or so is not big. For values of ND not excluded by the LHC, fewer than one event a year is estimated for IceCube. Only with the larger radio arrays is the expected number of events of the order of 10 per year.

The two recent peta-electron-volt events announced by the IceCube collaboration have already been used to set stringent limits on the violation of Lorentz invariance. If strict Lorentz invariance does not hold, then neutrino bremsstrahlung of electron–positron pairs (ν → νe+e–) is possible, so extragalactic neutrinos would rapidly lose energy via such a process. This would lead to a depletion of the ultra-high-energy neutrino flux at the Earth. Assuming that the IceCube events are, indeed, extragalactic (that is, they have travelled distances of the order of megaparsecs from the sources to the Earth) and that the extragalactic high-energy neutrino flux is at most at the level of the current IceCube limit of 2 × 10–8 cm–2 s–1 sr–1, a limit can be set on Lorentz-invariance violation, parameterized by the factor δ, defined as (dE/dp)² – 1. Under these assumptions, the bound obtained from the two IceCube events is δ < 10–18, which is orders of magnitude smaller than the current best limit of 10–13.
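With this definition a superluminal neutrino (δ > 0) can radiate e+e– pairs above a threshold energy; the threshold quoted below is the standard Cohen–Glashow result:

```latex
\delta \equiv \left(\frac{\partial E}{\partial p}\right)^{2} - 1,
\qquad
\nu \to \nu\, e^{+}e^{-} \ \ \text{allowed for} \ \ E \gtrsim \frac{2m_{e}}{\sqrt{\delta}} .
```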


High-energy atmospheric muons and neutrinos present a background to many of the topics discussed in the workshop. Even for conventional production, the absolute normalization of the atmospheric lepton spectrum is not well understood – in particular the contribution from prompt charm decays. Calculations of a rarely considered atmospheric-lepton component from the decays of unflavoured mesons (η, η’, ρ, ω, φ) were presented at the workshop. These mesons decay rapidly to μ+μ– pairs and in very-high-energy cosmic-ray interactions the products of these decays can be the dominant muon flux at energies above 10⁶ GeV, forming a background that must be taken into account in exotic searches.

One of the unexpected developments in the field since the first ideas of building neutrino telescopes has been their use in neutrino-oscillation physics. On one hand, the detectors can probe oscillation physics at energies not reachable by the smaller detectors. On the other, an aggressive plan to lower the energy threshold of IceCube and the proposed KM3NeT array to the few-giga-electron-volt region is underway, and IceCube has already produced physics results with its low-energy subarray, DeepCore. Plans to build megatonne water-Cherenkov detectors with a giga-electron-volt energy threshold – PINGU at the South Pole and ORCA in the Mediterranean – were also discussed in the workshop. These detectors consist of about 20–50 strings of optical modules with an inter-string separation of 20 m, to be compared, for example, with the 125 m inter-string separation of IceCube or the 70 m inter-string separation of DeepCore. Such detectors may address the issue of the neutrino mass hierarchy at a relatively low cost and on a short timescale because the technology exists already and the deployment techniques are the same as in IceCube and ANTARES.

Are some atomic nuclei pear shaped?


Most atomic nuclei that exist naturally are not spherical but have the shape of a rugby ball. While state-of-the-art theories are able to predict this behaviour, the same theories have predicted that for some particular combinations of protons and neutrons, nuclei can also assume an asymmetrical shape like a pear, with more mass at one end of the nucleus than the other. Now an international team studying radium isotopes at CERN’s ISOLDE facility has found that some atomic nuclei can indeed take on this unusual shape.

Most nuclear isotopes predicted to have pear shapes have for a long time been out of reach of experimental techniques. In recent years, however, the ISOLDE facility has demonstrated that heavy, radioactive nuclei, produced in high-energy proton collisions with a uranium-carbide target, can be selectively extracted before being accelerated to 8% of the speed of light. The beam of nuclei is directed onto a foil of isotopically pure nickel, cadmium or tin where the relative motion of the heavy accelerated nucleus and the target nucleus creates an electromagnetic impulse that excites the nuclei.

By studying the details of this excitation process it is possible to infer the nuclear shape. This method has now been used successfully to study the shape of the short-lived isotopes ²²⁰Rn and ²²⁴Ra. The data show that while ²²⁴Ra is pear shaped, ²²⁰Rn does not assume the fixed shape of a pear but rather vibrates about this shape.

The findings from the teams at ISOLDE contradict some nuclear theories and will help others to be refined. The experimental observation of nuclear pear shapes is also important because it can help in experimental searches for atomic electric dipole moments (EDMs). The Standard Model of particle physics predicts that the value of the atomic EDM is so small that it lies well below the current observational limit. However, many theories that try to extend the model predict values of EDMs that should be measurable. Testing these theories requires improved measurements, the most sensitive of which use exotic atoms whose nuclei are pear shaped.

The new measurements will help to direct the searches for EDMs currently being carried out in North America and in Europe, where new techniques are being developed to exploit the special properties of radon and radium isotopes. The expectation is that the data from the nuclear-physics experiment at ISOLDE can be combined with results from atomic-trapping experiments that measure EDMs to make the most stringent tests of the Standard Model.

ALPHA presents novel investigation of the effect of gravity on antimatter

The ALPHA collaboration at CERN has made the first direct analysis of how antimatter is affected by gravity. The ALPHA experiment was the first to trap atoms of antihydrogen, holding them in place with a strong magnetic field for up to 1000 s. Although the experiment's main goal is not to study gravity, the team realized that the data they had collected might be sensitive to gravitational effects. Specifically, they searched for the free fall (or rise) of antihydrogen atoms released from the trap, which allowed them to place direct limits on the ratio of the gravitational to the inertial mass of antimatter, F = Mg/M.

Measuring a total of 434 atoms, they found that, in the absence of systematic errors, F must be less than 75 at a statistical significance level of 5%; the worst-case systematic errors increase this limit to 110. A similar search places somewhat tighter bounds on a negative F, that is, on antigravity. Refinements of the technique, coupled with larger numbers of cold-trapped antiatoms, should allow future measurements to place tighter bounds on F and approach the interesting region around 1.
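The scaling behind such a bound can be sketched with elementary kinematics: an anti-atom escaping the trap in time t falls (or rises) a distance d = ½Fgt², so a maximum displacement compatible with the data translates into a limit on F. The numbers below are invented for illustration and are not ALPHA's actual analysis inputs.

```python
# Kinematic sketch: bound on F = Mg/M from a tolerated vertical
# displacement d_max during an escape time t (illustrative numbers only).
g = 9.81  # m/s^2, local gravitational acceleration

def fall_distance(F, t):
    """Vertical displacement of an anti-atom with gravity scaled by F."""
    return 0.5 * F * g * t * t

def f_bound(d_max, t):
    """Largest F compatible with a displacement below d_max."""
    return 2.0 * d_max / (g * t * t)

# Assumed inputs: ~1 ms escape time, ~0.37 mm tolerated displacement.
F_limit = f_bound(d_max=3.7e-4, t=1e-3)  # comes out near the quoted 75
```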

Meanwhile, the antimatter programme at CERN is expanding. AEgIS and GBAR, two experiments currently under construction, will focus on measuring how gravity affects antihydrogen.

CMS hunts for low-mass dark matter

Astronomical observations – such as the rotation velocities of galaxies and gravitational lensing – show that more than 80% of the matter in the universe remains invisible. Deciphering the nature of this “dark matter” remains one of the most interesting questions in particle physics and astronomy. The CMS collaboration recently conducted a search for the direct production of dark-matter particles (χ), with especially good sensitivity in the low-mass region that has generated much interest among scientists studying dark matter.

Possible hints of a particle that may be a candidate for dark matter have already begun to appear in the direct-detection experiments; most recently the CDMS-II collaboration reported the observation of three candidate events in its silicon detectors with an estimated background of 0.7 events. This result points to low masses, below 10 GeV/c², as a region that should be particularly interesting to search. This mass region is where the direct-detection experiments start to lose sensitivity because they rely on measuring the recoil energy imparted to a nucleus by collisions with the dark-matter particles. For a low-mass χ, the kinetic energy transferred to the nucleus in the collision is small, and the detection sensitivity drops as a result.
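The statistical pull of such a result can be illustrated with a simple Poisson calculation: the probability for a background of 0.7 expected events to fluctuate up to three or more observed events. This is only a back-of-the-envelope check, not the collaboration's likelihood analysis.

```python
import math

def poisson_p_ge(n_obs, b):
    """P(N >= n_obs) for a Poisson-distributed count with mean b."""
    return 1.0 - sum(math.exp(-b) * b**k / math.factorial(k)
                     for k in range(n_obs))

# Numbers from the text: 3 candidate events on a background of 0.7.
p = poisson_p_ge(3, 0.7)  # ~0.034: a ~3% background-only probability
```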

The CMS collaboration has searched for hints of these elusive particles in “monojet” events, where the dark-matter particles escape undetected, yielding only “missing momentum” in the event. A jet of initial-state radiation can accompany the production of the dark-matter particles, so a search is conducted for an excess of these visible companions compared with the expectation from Standard Model processes. The results are then interpreted within the framework of a simple “effective” theory for their production, where the particle mediating the interaction is assumed to have high mass. An important aspect of the search by CMS is that there is no fall in sensitivity for low masses.


The monojet search requires at least one jet with more than 110 GeV of energy and has the best sensitivity if there is more than 400 GeV of missing momentum. Events with additional leptons or multiple jets are vetoed. After event selection, 3677 events were found in the recent analysis, with an expectation from Standard Model processes of 3663 ± 196 events. Electroweak processes dominate this expectation: either pp → Z+jets, with the Z decaying to two neutrinos, or pp → W+jets, where the W decays into a lepton and a neutrino and the lepton escapes detection.
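A quick check of these numbers shows how unremarkable the excess is. Treating it as a simple counting experiment and adding the Poisson fluctuation of the expectation to the quoted systematic uncertainty in quadrature (a common approximation, not the full CMS statistical treatment) gives a deviation well below one standard deviation:

```python
import math

# Counting-experiment check: 3677 observed vs 3663 +/- 196 expected.
n_obs, n_exp, sigma_syst = 3677, 3663.0, 196.0

# Statistical (Poisson) and systematic uncertainties in quadrature.
sigma_tot = math.sqrt(n_exp + sigma_syst ** 2)
significance = (n_obs - n_exp) / sigma_tot  # ~0.07 standard deviations
```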

With no significant deviation from the expectation from the Standard Model, CMS has set limits on the production of dark matter, as shown in the figures of the χ–nucleon cross-section versus χ mass. The limits show that CMS has good sensitivity in the low-mass regions of interest, for both spin-dependent and spin-independent interactions.

CP violation observed in the decays of B0s mesons

In March 2012, the LHCb collaboration reported an observation of CP violation in charged B-meson decays, B± → DK±. Now, just over a year later, the collaboration has announced a similar observation in the decays of another B meson, in this case the B0s meson, composed of a beauty antiquark (b̄) bound with a strange quark (s). The observation of CP violation in the decay B0s → K⁻π⁺, with a significance of more than 5σ, marks the first time that CP violation has been found in the decays of B0s mesons – only the fourth type of meson in which this effect has been seen. It is an important milestone for LHCb because the precise study of B0s decays is sensitive to possible physics beyond the Standard Model.


The study of CP violation in charmless charged two-body B decays provides stringent tests of the Cabibbo–Kobayashi–Maskawa picture of CP violation in the Standard Model. However, the presence of hadronic contributions means that several measurements from such decays are needed to exploit flavour symmetries and disentangle the different contributions. In 2004, the BaBar and Belle collaborations, at SLAC and KEK respectively, discovered direct CP violation in the decay B0 → K⁺π⁻, and a model-independent test was proposed to check the consistency of the observed size of the effect with the Standard Model. The test consists of comparing CP violation in B0 → K⁺π⁻ with that in B0s → K⁻π⁺. The B factories at KEK and SLAC could not accumulate large enough samples of B0s decays and, despite much effort by the CDF collaboration at Fermilab's Tevatron, CP violation had until now not been seen in B0s → K⁻π⁺ with a significance exceeding 5σ.


Using a data sample corresponding to an integrated luminosity of 1.0 fb⁻¹, collected by the experiment in 2011, the LHCb collaboration measured the direct CP-violating asymmetry for B0s → K⁻π⁺ decays, ACP(B0s → K⁻π⁺) = 0.27 ± 0.04 (stat.) ± 0.01 (syst.), with a significance of more than 5σ. In addition, the collaboration improved the determination of direct CP violation in B0 → K⁺π⁻ decays, ACP(B0 → K⁺π⁻) = –0.080 ± 0.007 (stat.) ± 0.003 (syst.), which is the most precise measurement of this quantity to date. The four plots in figure 2 show different components of the K⁺π⁻ invariant mass. The upper plots indicate the well established difference in the decay rates of the B0 meson and its antiparticle. The enlargements in the lower plots reveal that a difference is also visible around the mass of the B0s meson. The measured values are in good agreement with the Standard Model expectation.
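Schematically, a raw asymmetry of this type is built from the signal yields of the two CP-conjugate decays; real analyses then correct for production and detection asymmetries. A minimal sketch with invented yields:

```python
# Raw CP-asymmetry estimator from signal yields (illustrative only;
# the yields below are invented, not LHCb's).
def acp(n_conjugate, n_decay):
    """A_CP = (N(conjugate mode) - N(mode)) / (N(conjugate) + N(mode))."""
    return (n_conjugate - n_decay) / (n_conjugate + n_decay)

a = acp(635, 365)  # 0.27, the same size as the quoted B0s asymmetry
```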

Only the data sample collected in 2011 was used to obtain these results, so LHCb will improve the precision further with the total data set now available, which more than trebled with the excellent performance of the LHC during 2012.

Luminosity-independent measurement of the proton–proton total cross-section at 8 TeV

The TOTEM collaboration has published the first luminosity-independent measurement at the LHC of the total proton–proton cross-section at a centre-of-mass energy of 8 TeV. This follows the collaboration’s published measurement of the same cross-section at 7 TeV, which demonstrated the reliability of the luminosity-independent method by comparing several approaches.

The method requires the simultaneous measurements of the inelastic and elastic rates, as well as the extrapolation of the latter down to a four-momentum transfer squared, |t| = 0. This is achieved with the experimental set-up consisting of two telescopes, T1 and T2, to detect charged particles produced in inelastic proton–proton collisions, and Roman Pot stations to detect elastically scattered protons at very small angles.

The analysis at 8 TeV was performed on two data samples recorded in July 2012 during special fills of the LHC with the magnets set to give the parameter β* = 90 m. During these fills, the Roman Pots were inserted close to the beam, allowing the detection of around 90% of the nuclear elastic-scattering events. Simultaneously, the inelastic scattering rate was measured by the T1 and T2 telescopes.

By applying the optical theorem, the collaboration determined a total proton–proton cross-section of 101.7 ± 2.9 mb, which is in good agreement with the extrapolation from lower energies. The method also allows the derivation of the luminosity-independent elastic and inelastic cross-sections: σel = 27.1 ± 1.4 mb and σinel = 74.7 ± 1.7 mb. The two measurements are consistent in terms of detector performance, showing comparable systematic uncertainties, and they are both in good agreement with the extrapolation of the lower-energy measurements.
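The luminosity-independent method can be sketched as follows: via the optical theorem, σtot is obtained from the elastic and inelastic event counts and the extrapolated elastic rate at |t| = 0, with no luminosity input. The function below is a schematic rendering with an assumed value of ρ (the ratio of real to imaginary part of the forward elastic amplitude) and toy rates; the published numbers are used only for the closure check σtot = σel + σinel.

```python
import math

# Schematic luminosity-independent total cross-section (optical theorem):
# sigma_tot = 16*pi*(hbar*c)^2 / (1 + rho^2) * (dN_el/dt at t=0) / (N_el + N_inel)
HBARC2 = 0.3894  # (hbar*c)^2 in mb GeV^2

def sigma_tot(dNel_dt0, n_el, n_inel, rho=0.14):
    """Total cross-section in mb; dNel_dt0 is in events per GeV^2."""
    return 16.0 * math.pi * HBARC2 / (1.0 + rho * rho) * dNel_dt0 / (n_el + n_inel)

# Toy rates, purely illustrative:
toy = sigma_tot(dNel_dt0=1e6, n_el=1e5, n_inel=3e5)  # ~48 mb

# Closure check with the published 8 TeV numbers (mb):
s_tot, s_el, s_inel = 101.7, 27.1, 74.7
assert abs(s_el + s_inel - s_tot) < 0.2
```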

Chasing new physics with electroweak penguins

The recent identification of the new particle discovered at the LHC as a Higgs boson with a mass of 125 GeV/c² completes the picture of particles and forces described by the Standard Model. However, it does not mark the end of the story as, unfortunately, the Standard Model is an incomplete description of nature. Puzzles still remain, for example, in explaining the existence of dark matter and the matter–antimatter asymmetry. The answers to these puzzles may lie in the existence of as yet undiscovered particles that would have played a key role in the early, high-energy, phase of the universe and whose existence would help to complete the description of nature in particle physics. The question then is: at what energy scale would these new particles appear?

Particle physics provides no certain knowledge about this scale but the hope is that the new particles might be produced directly in the high-energy proton–proton collisions of the LHC. However, new particles could also be observed indirectly through the effects of their participation as virtual particles in rare decay processes. By studying such processes, experiments can probe mass scales that are much higher than those accessible directly through the energy available at the LHC. This is because quantum mechanics and Heisenberg’s uncertainty principle allow virtual particles to have masses that are not constrained by the energy of the system. Searches based on virtual particles are limited by the precision of the measurements, rather than the energy of the collider.

Rare potential

One promising place to look for contributions from new virtual particles is in the rare transitions of b quarks to s quarks in which a muon pair (dimuon) is produced: b → sμ⁺μ⁻. Described by the Feynman diagrams shown in figure 1 (overleaf), these involve what are known as "flavour-changing neutral currents" because the initial quark changes flavour without changing charge. In the Standard Model, transitions of this type are forbidden at the lowest perturbative order – that is, at "tree-level", where the diagrams have only two vertices. Instead, they are mediated as shown in figure 1 by higher-order diagrams known as "electroweak penguin" and "box" diagrams. For this reason the Standard Model process is rare, which enhances the potential to discover new high-mass particles.

Studies of flavour-changing neutral currents have paved the way for discoveries in particle physics in the past, specifically in the decays of K mesons, where s quarks change to d quarks. Investigations of mixing between the mass eigenstates of the neutral kaon system and of rare K-meson decays led to the prediction of the existence of a second u-like quark (the charm quark, c), at a time when only three quarks were known (u, d and s). It was 10 years before the existence of the c quark was confirmed directly. Similarly, the observation of CP violation in neutral kaons led to the prediction of the third generation of quarks (b and t). Now, the study of flavour-changing neutral-current processes related to the third generation of quarks – in particular the rare b → sμ+μ transitions – could soon provide similar evidence for the existence of new particles.

Several b → sμ⁺μ⁻ transitions have already been observed by the Belle, BaBar and CDF experiments at KEK, SLAC and Fermilab, respectively. So far, the results have been limited by the small size of the data sets but, with the LHC, a new era of precision has begun. The collider is the world's largest "factory" for producing particles that contain b quarks: in one year, it produced about 10¹² b hadrons in the LHCb experiment, while running at a centre-of-mass energy of 7 TeV with an instantaneous luminosity in the experiment of 4 × 10³² cm⁻² s⁻¹. ATLAS and CMS have also recently joined the game, showing their first results on the B0 → K*0μ⁺μ⁻ decay at the BEAUTY 2013 conference (ATLAS collaboration 2013 and CMS collaboration 2013).

The LHCb detector is characterized by excellent vertex and momentum resolution (coming from its tracking systems) and impressive particle-identification capabilities (from its two ring-imaging Cherenkov detectors). Combined with the large b-hadron production rate, these features allow LHCb to reconstruct clean signals of rare b-hadron decays (figure 2). These processes have branching fractions below 10⁻⁶ and occur at most once in every 100 million collisions.

The branching fractions of these decays are sensitive to new physics but their interpretation is unfortunately complicated. The b quark has hadronized, so the observations relate to hadronic rather than quark-level processes. A lack of detailed understanding of the hadronic system limits the usefulness of the branching-fraction measurements in the search for new physics.

Angles and asymmetries

Fortunately, the branching fractions of these decays are not the only handles for investigating new particle contributions. It is often much more instructive to look at the angular distribution of the particles coming from the decay. However, such angular analyses are experimentally challenging because they require a detailed understanding of how both the geometry of the detector and the reconstruction of the event bias the angular distribution of the particles.

The decays B0 → K*0μ⁺μ⁻ and B0s → φμ⁺μ⁻ have been shown to be highly sensitive to a variety of new physics scenarios (LHCb collaboration 2013a and 2013b). These decays are characterized by three angles: θK, which describes the K* or φ decay; θl, which describes the dimuon decay; and Φ, the angle between the K* or φ and the dimuon decay planes.

The angular distribution of the particles depends on the properties of the underlying theory. For instance, two features of the Standard Model drive the angular distribution: the photon exchanged in the penguin diagram of figure 1 is transversely polarized, while the charged-current interaction (the W exchange) is purely left-handed. The angle in the dimuon system also has an intrinsic forward–backward asymmetry that arises from interference between the different diagrams. The forward–backward asymmetry can be studied as a function of the mass of the dimuon system, which can be anywhere between twice the muon’s mass and the difference between the mass of the B and the mass of the K* or φ.

In the Standard Model, the forward–backward asymmetry has a characteristic behaviour, changing sign at a dimuon mass of around 2 GeV/c². It turns out that this point can be predicted with only a small theoretical uncertainty. Figure 3 shows LHCb's measurement of the forward–backward asymmetry in the decay B0 → K*0μ⁺μ⁻. In addition, the angle Φ can be used to test nature's left-handedness, through an observable called AT(2).
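Conceptually, the forward–backward asymmetry is just a counting observable: AFB = (NF − NB)/(NF + NB), where an event is "forward" when the muon's cos θl is positive. A toy sketch (the event list is invented):

```python
# Toy forward-backward asymmetry from a list of cos(theta_l) values.
def forward_backward_asymmetry(cos_theta_l):
    n_f = sum(1 for c in cos_theta_l if c > 0)  # forward events
    n_b = len(cos_theta_l) - n_f                # backward events
    return (n_f - n_b) / (n_f + n_b)

toy_events = [0.3, -0.1, 0.7, 0.2, -0.5, 0.9, -0.2, 0.4]
afb = forward_backward_asymmetry(toy_events)  # (5 - 3) / 8 = 0.25
```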

So far, measurements of both the forward–backward asymmetry and AT(2) show good agreement with the predictions of the Standard Model. While there is no evidence for any disagreement, it is nevertheless important to emphasize that the room for new physics is still large given the statistical uncertainty of the present measurements.

Another way to decrease the theoretical uncertainty associated with the hadronic transitions is to form asymmetries between specific decay modes – for example, CP asymmetries between particle and antiparticle decays. In the Standard Model, the decay B0 → K*0μ⁺μ⁻ and its CP conjugate are expected to have the same branching fraction to about 1 part in 1000. With the large LHC data samples, LHCb has verified this at the level of 4% (LHCb collaboration 2013e).

Another example concerns so-called isospin asymmetries between decays that differ only in the type of spectator quark (u or d), labelled q in figure 1. The isospin asymmetry between B0 and B+ decays is defined as:

AI = [B(B0 → K(*)0μ⁺μ⁻) – (τ0/τ+) B(B+ → K(*)+μ⁺μ⁻)] / [B(B0 → K(*)0μ⁺μ⁻) + (τ0/τ+) B(B+ → K(*)+μ⁺μ⁻)]

This is formed from the branching fractions of the B0 and B+ decays and the ratio τ0/τ+ of the lifetimes of the B0 and the B+. In the Standard Model, the spectator quark is expected to play only a limited role in the dynamics of the system, so isospin asymmetries are predicted to be tiny. Experimentally, AI is measured as a double ratio with respect to the decay channels B0 → K(*)0 J/ψ and B+ → K(*)+ J/ψ, which give the same final states after the J/ψ decays to μ⁺μ⁻ and are well known from previous measurements.
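In the standard form of this combination (assumed here), the lifetime ratio converts the branching fractions into decay rates; the inputs below are invented for illustration:

```python
# Sketch of the isospin asymmetry A_I between B0 and B+ decays, with the
# lifetime ratio tau0/tau+ converting branching fractions into rates
# (illustrative inputs, not measured values).
def isospin_asymmetry(br_b0, br_bplus, tau_ratio):
    num = br_b0 - tau_ratio * br_bplus
    den = br_b0 + tau_ratio * br_bplus
    return num / den

ai = isospin_asymmetry(br_b0=1.0e-7, br_bplus=1.3e-7, tau_ratio=0.93)
# negative, as a relatively larger B+ rate would imply
```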

Isospin asymmetries have been measured for both B → K*μ⁺μ⁻ and B → Kμ⁺μ⁻ by the BaBar, Belle, CDF and LHCb experiments. All of these measurements are in good agreement with each other and favour a value for AI(B → K*μ⁺μ⁻) that is close to zero and a negative value for AI(B → Kμ⁺μ⁻). The LHCb experiment observes a negative isospin asymmetry in the latter channel at the level of four standard deviations (from zero), as figure 4 shows (LHCb collaboration 2012). This unexpected result is yet to be explained. Indeed, most extensions of the Standard Model do not predict a significant dependence on the charge or flavour of the spectator quark.

Looking to the future

The LHCb experiment already has on tape a data set roughly three times larger than that used in its results published so far. Even with only 1 fb⁻¹ of integrated luminosity analysed to date, LHCb has larger samples than all previous experiments combined in most of the channels shown in figure 2. Furthermore, while the current selected data sets contain hundreds of events, the samples will be of the order of tens of thousands of events once the experiment has been upgraded. With these larger data sets the LHCb collaboration will be able to chase progressively smaller deviations from the Standard Model. This will allow it to probe ever higher mass scales, far beyond those that can be accessed by searching directly for the production of new particles at the LHC. A new era in precision measurements of flavour-changing neutral currents is now opening.

 

John Ellis on the origin of penguins

The penguin diagram

“Mary K [Gaillard], Dimitri [Nanopoulos], and I first got interested in what are now called penguin diagrams while we were studying CP violation in the Standard Model in 1976 … The penguin name came in 1977, as follows.

In the spring of 1977, Mike Chanowitz, Mary K and I wrote a paper on GUTs [grand unified theories] predicting the b quark mass before it was found. When it was found a few weeks later, Mary K, Dimitri, Serge Rudaz and I immediately started working on its phenomenology.

That summer, there was a student at CERN, Melissa Franklin, who is now an experimentalist at Harvard. One evening, she, I and Serge went to a pub, and she and I started a game of darts. We made a bet that if I lost I had to put the word penguin into my next paper. She actually left the darts game before the end, and was replaced by Serge, who beat me. Nevertheless, I felt obligated to carry out the conditions of the bet.

For some time, it was not clear to me how to get the word into this b-quark paper that we were writing at the time … Later … I had a sudden flash that the famous diagrams look like penguins. So we put the name into our paper, and the rest, as they say, is history.”

John Ellis in Mikhail Shifman’s “ITEP Lectures in Particle Physics and Field Theory”, hep-ph/9510397. Reproduced here courtesy of symmetry magazine.

AMS measures antimatter excess in space


The international team running the Alpha Magnetic Spectrometer (AMS) has announced the first results in its search for dark matter. They indicate the observation of an excess of positrons in the cosmic-ray flux. The results were presented by Samuel Ting, the spokesperson of AMS, in a seminar at CERN on 3 April, the date of publication in Physical Review Letters.

The AMS results are based on an analysis of some 2.5 × 10¹⁰ events, recorded over a year and a half. Cuts to reject protons, as well as electrons and positrons produced in the interactions of cosmic rays in the Earth's atmosphere, reduce this to around 6.8 × 10⁶ positron and electron events, including 400,000 positrons with energies between 0.5 GeV and 350 GeV. This represents the largest collection of antimatter particles detected in space.

The data reveal that the fraction of positrons increases from 10 GeV to 250 GeV, with the slope of the increase reducing by an order of magnitude over the range 20–250 GeV. The data also show no significant variation over time, or any preferred incoming direction. These results are consistent with the positrons’ origin in the annihilation of dark-matter particles in space but they are not yet sufficiently conclusive to rule out other explanations.
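The measured quantity is simply the positron fraction per energy bin, f = N(e⁺)/(N(e⁺) + N(e⁻)). A sketch with invented bin contents, showing a fraction that rises with energy as AMS reports:

```python
# Positron fraction per energy bin (bin contents invented for illustration).
def positron_fraction(n_pos, n_ele):
    return n_pos / (n_pos + n_ele)

bins_gev = [10, 50, 100, 250]      # illustrative bin centres
n_pos    = [600, 90, 40, 12]       # toy positron counts
n_ele    = [9400, 810, 310, 78]    # toy electron counts
fractions = [positron_fraction(p, e) for p, e in zip(n_pos, n_ele)]
# fractions rise monotonically with energy in this toy example
assert all(a < b for a, b in zip(fractions, fractions[1:]))
```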

The AMS detector is operated by a large international collaboration led by Nobel laureate Samuel Ting. The collaboration involves some 600 researchers from China, Denmark, Finland, France, Germany, Italy, Korea, Mexico, the Netherlands, Portugal, Spain, Switzerland, Taiwan and the US. The detector was assembled at CERN, tested at ESA’s ESTEC centre in the Netherlands and launched into space on 16 May 2011 on board NASA’s Space Shuttle Endeavour. Designed to study cosmic rays before they interact with the Earth’s atmosphere, the experiment is installed on the International Space Station. It tracks incoming charged particles such as protons and electrons, as well as antimatter particles such as positrons, mapping the flux of cosmic rays with unprecedented precision.

An excess of antimatter within the cosmic-ray flux was first observed around two decades ago in experiments flown on high-altitude balloons and has since been seen by the PAMELA detector in space and the Large Area Telescope on the Fermi Gamma-ray Space Telescope. The origin of the excess, however, remains unexplained.

One possibility, predicted by theories involving supersymmetry, is that positrons could be produced when two particles of dark matter collide and annihilate. Assuming an isotropic distribution of dark-matter particles, these theories predict the observations made by AMS. However, the measurement by AMS does not yet rule out the alternative explanation that the positrons originate from pulsars distributed around the galactic plane. Moreover, supersymmetry theories also predict a cut-off at higher energies above the mass range of dark-matter particles and this has not yet been observed.

AMS is the first experiment in space to measure at the 1% level of accuracy – a level of precision that should allow it to determine whether the observed positrons have their origin in dark matter or in pulsars. The experiment will further refine the measurement's precision over the coming years and clarify the behaviour of the positron fraction at energies above 250 GeV.
