LHC results top the bill in Paris

More than 1100 physicists gathered in the Palais des Congrès conference centre in Paris on 22–28 July to attend the 35th International Conference on High Energy Physics (ICHEP), the world’s largest conference on particle physics. As the first meeting in the series to announce results from the LHC, it caught the attention not only of physicists but also of media around the world and the president of the host country, France.

President Nicolas Sarkozy addressed the conference on 26 July, at the official opening of the plenary sessions. In a spirited speech, he exhorted the particle-physics community to continue its quest to understand the nature of the universe, and stated his belief that investment in fundamental research is critical for the progress of mankind.

News from the LHC experiments had already reached the physicists during the three days of parallel sessions with which the ICHEP meetings traditionally begin. One of the items of breaking news from the ATLAS and CMS experiments was the first observation of top quark candidates at the LHC. The top, the heaviest elementary particle observed to date, had previously been produced only at Fermilab’s Tevatron collider in the US.

Another hotly anticipated presentation at ICHEP concerned the CDF and DØ experiments at the Tevatron. The two experiments have not yet spotted the Higgs boson but have further limited the territory in which it may be hiding. So, the Higgs is still out there waiting to be found, and the LHC experiments have shown at ICHEP that they are well on the way to joining the hunt.

With their first measurements the LHC experiments are rediscovering the particles that lie at the heart of the Standard Model – an essential step before moving on to make discoveries. The quality of the results presented at ICHEP bears witness both to the excellent performance of the LHC and to the high quality of the data in the experiments. The LHC, which is still in its early days, is making steady progress towards its ultimate operating conditions. By the time of the conference, the luminosity had already risen by a factor of more than a thousand since the end of March – and has since risen further still (Multibunch injection provides a quick fill).

The rapid progress with commissioning the LHC beam has been matched by the speed with which the data on billions of collisions have been processed by the Worldwide LHC Computing Grid. This allows data from the experiments to be analysed at collaborating centres around the world, resulting in a truly international experience.

• A feature-length report on ICHEP 2010 will appear in a future edition of the CERN Courier. For details on all of the talks, see www.ichep2010.fr.

Copernicium enters the periodic table

On 12 July, a ceremony at GSI celebrated the entry of copernicium into the periodic table of elements with a symbolic christening for the new element. Copernicium is 277 times heavier than hydrogen and the heaviest element officially recognized in the periodic table. It is named in honour of the astronomer Nicolaus Copernicus.

Element 112 was discovered at GSI in 1996 by an international team of scientists led by Sigurd Hofmann. The element has officially carried the name copernicium and the symbol Cn since 19 February 2010. Naming the element after Copernicus follows the long-standing tradition of choosing an accomplished scientist as eponym.

The team of scientists at GSI, from Germany, Finland, Russia and Slovakia, produced the new element for the first time in February 1996, by firing zinc ions onto a lead foil. The fusion of the nuclei of the two elements produced one atom of element 112. Although it is stable for only a fraction of a second, the team identified the new element through the radiation emitted during its decay. Further independent experiments at other research facilities confirmed the discovery of element 112 and in 2009 the International Union of Pure and Applied Chemistry officially recognized the existence of element 112 and acknowledged the GSI team’s discovery by inviting them to propose a name.

OPERA catches its first tau-neutrino

The OPERA collaboration has announced the observation of the first candidate tau-neutrino (ντ) in the muon-neutrino (νμ) beam sent through the Earth from CERN to the INFN’s Gran Sasso Laboratory 730 km away. The result is an important final piece in a puzzle that has challenged science for almost half a century.

The puzzle surrounding neutrinos originated in the 1960s when the pioneering experiment by Ray Davis detected fewer neutrinos arriving at the Earth from the Sun than solar models predicted. A possible solution, proposed in 1969 by Bruno Pontecorvo and Vladimir Gribov, was that oscillatory changes between different types of neutrinos could be responsible for the apparent neutrino deficit. Conclusive evidence that electron-neutrinos, νe, from the Sun change type en route to the Earth came from the Sudbury Neutrino Observatory in 2002, a few years after the Super-Kamiokande experiment found the first evidence for oscillations in νμ created by cosmic rays in the atmosphere. Accelerator-based experiments have since observed the disappearance of νμ, confirming the oscillation hypothesis, but until now there have been no observations of the appearance of a ντ in a νμ beam.

OPERA’s result follows seven years of preparation and more than three years of beam provided by CERN. The neutrinos are generated at CERN when a proton beam from the Super Proton Synchrotron strikes a target, producing pions and kaons. These quickly decay, giving rise mainly to νμ that pass unhindered through the Earth’s crust towards Gran Sasso. The appearance and subsequent decay of a τ in the OPERA experiment would provide the telltale sign of νμ to ντ oscillation through a charged-current interaction.

Detecting the τ decay is a challenging task, demanding particle tracking at micrometre resolution to reconstruct the topology: either a kink – a sharp change (>20 mrad) in direction occurring after about 1 mm – as the original τ decays into a charged particle together with one or more neutrinos, or the vertex for the decay mode into three charged particles plus a neutrino.
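
The kink criterion is simple geometry: the angle between the parent-track and daughter-track directions measured in the emulsion. A minimal sketch follows (this is not OPERA's reconstruction code, and the track vectors are made-up illustrative values):

```python
import numpy as np

def kink_angle(parent_dir, daughter_dir):
    """Angle (rad) between the parent-track and daughter-track directions."""
    u = parent_dir / np.linalg.norm(parent_dir)
    v = daughter_dir / np.linalg.norm(daughter_dir)
    # Clip guards against rounding slightly outside [-1, 1]
    return np.arccos(np.clip(np.dot(u, v), -1.0, 1.0))

# Hypothetical track segments (direction vectors, arbitrary units)
parent = np.array([0.0, 0.00, 1.0])   # incoming tau candidate
daughter = np.array([0.0, 0.03, 1.0])  # charged daughter after ~1 mm

theta = kink_angle(parent, daughter)
print(f"kink = {theta * 1e3:.1f} mrad -> candidate: {theta > 20e-3}")
```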

The OPERA apparatus has two identical Super Modules, each containing a target section and a large-aperture muon spectrometer. The target consists of alternate walls of lead/emulsion bricks – 150,000 bricks in total – and modules of scintillator strips for the target tracker. The nuclear-emulsion technique allows the collaboration to measure the neutrino-interaction vertices with high precision. The scintillators provide an electronic trigger for neutrino interactions, localize the particular brick in which the neutrino has interacted, and perform a first tracking of muons within the target. The relevant bricks are then extracted from the walls so that the film can be developed and scanned using computer-controlled scanning microscopes.

The collaboration has identified the first candidate ντ in a sample of events from data taken in 2008–2009, corresponding to 1.89 × 10¹⁹ protons on the target at CERN. The sample contains 1088 events, including 901 that appear to be charged-current interactions. The search through these has yielded one candidate with the characteristics expected for the decay of a τ into a charged hadron (h), neutral pions (π0) and a ντ. Indeed, the kinematical analysis suggests the decay τ⁻ → ρ⁻ντ. The event has a significance of 2.36 σ of not being a background fluctuation for the τ decay to h⁻(π0)ντ.

This candidate event is an important first step towards the observation of νμ → ντ oscillations through the direct appearance of the ντ. That claim will require the detection of a few more events, but so far the collaboration has analysed only 35% of the data taken in 2008 and 2009, and ultimately it should have five times as much data as at present.

DØ sees anomalous asymmetry in decays of B mesons

The DØ collaboration at Fermilab has reported evidence of a violation of matter–antimatter symmetry (“CP symmetry”) in the behaviour of neutral mesons containing b quarks. Studying collisions in which B mesons decay semi-leptonically into muons, the team finds about 1% more collisions producing two negatively charged muons than collisions producing two positively charged muons. The collisions in the DØ detector start from a CP-symmetric proton–antiproton initial state, and the CP asymmetry expected from the Standard Model is predicted to be much smaller than the one observed. An asymmetry of 1% is therefore completely unexpected.

The properties of B mesons, created in collisions where a bb̄ quark pair is produced, are assumed to be responsible for this asymmetry. Mesons containing b quarks are known to oscillate between their particle (B = b̄d or b̄s) and antiparticle (B̄ = bd̄ or bs̄) states before they decay into a positively charged muon (for the B) or a negatively charged muon (for the B̄). If a B meson oscillates before its decay, its decay muon has the “wrong sign”, i.e. its charge is identical to the charge of the muon from the other b decay. Having 1% more negatively charged muon pairs therefore implies that these mesons oscillate into their matter state slightly more often than into their antimatter state.
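
Schematically, the underlying quantity is a like-sign dimuon charge asymmetry. In the illustrative definition below (DØ’s published observable includes background and detector corrections on top of this), N⁺⁺ and N⁻⁻ count events with two positive or two negative muons; with N⁻⁻ about 1% larger than N⁺⁺, A comes out negative at the per-mille-to-per-cent level:

```latex
A \;\equiv\; \frac{N^{++} - N^{--}}{N^{++} + N^{--}}
```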

The DØ detector has two magnets, a central solenoid and a muon-system toroid, which determine the curvature and charge of muons. By regularly reversing the polarities of these magnets the collaboration can eliminate most effects coming from asymmetries in the detection of positively and negatively charged muons. This feature is crucial for reducing systematic effects in this measurement.

Another known source of asymmetry arises in muons produced in the decays of charged kaons. Kaons contain strange quarks and the interaction cross-sections of positively and negatively charged kaons with the matter making up the DØ detector differ significantly: more interactions are open to the K⁻, which contains a strange quark, than to the K⁺, which contains a strange antiquark. In detailed studies the collaboration has derived the contribution of this effect almost entirely from the data, making the measurement of the asymmetry in B-meson decays largely independent of external assumptions and simulation.

The final result is 3.2 σ from the Standard Model expectation, corresponding to a probability of less than 0.1 per cent that this measurement is consistent with any known effect. The analysis was based on an integrated luminosity of 6.1 fb⁻¹ and the plan is to increase the accuracy of this measurement by adding significantly more data and improving future analysis methods.
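
As a reference point, the correspondence between a 3.2 σ deviation and the quoted probability can be checked with the Gaussian tail integral (a one-sided convention is assumed in this sketch):

```python
import math

def one_sided_p(n_sigma: float) -> float:
    """One-sided Gaussian tail probability of fluctuating by n_sigma or more."""
    return 0.5 * math.erfc(n_sigma / math.sqrt(2.0))

print(f"p(3.2 sigma) = {one_sided_p(3.2):.2e}")  # ~6.9e-04, below 0.1 per cent
```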

Workshop looks deep into the proton and QCD

The International Workshop on Deep Inelastic Scattering and Related Subjects began as a forum for discussing results on deep inelastic scattering (DIS) from the electron–proton collider HERA. However, it quickly became successful at bringing together theorists and experimentalists to discuss results from all collider experiments, both on the latest measurements of proton structure and on QCD dynamics in general. This year the brand-new measurements of inclusive properties of proton–proton interactions at the LHC found a natural niche for discussion at the 18th workshop, DIS 2010, held in Florence on 19–23 April.

Volcanic disruptions

The cloud of volcanic ash present over most of Europe on the weekend before the workshop caused many flight cancellations and around 140 participants were unable to reach Florence in person; notably there were almost no participants from the UK or the US. On the other hand, more than 200 participants from mainland Europe embarked on long and often adventurous journeys to reach the conference site, a 16th-century cloister in the old part of the city. Owing to the late arrivals, the first day of plenary talks started a little later than planned – immediately with a coffee break, followed by an introduction from the director of INFN Florence, Pier Andrea Mandò.

The programme continued with a full agenda of plenary talks that set the scene and introduced a wealth of experimental results and recent developments in theory. Monica Turcato of Hamburg University and Katja Krueger of Heidelberg University presented the highlights from the ZEUS and H1 experiments at HERA. Horst Fischer of Albert-Ludwigs-Universität Freiburg reviewed the results on spin from all experiments. Thomas Gehrmann of Universität Zürich and Stefano Forte of Università di Milano reported on the recent progress in perturbative QCD. The session ended with the highlights from the ATLAS and CMS experiments at the LHC, with Thorsten Wengler of Manchester University and Ferenc Sikler of KFKI RMKI, Budapest, having the honour of showing the first published results on charged-particle spectra at 900 GeV and 2.36 TeV, as well as the first preliminary distributions at 7 TeV.

The opening day ended with a welcome cocktail, during which the conveners of the seven parallel sessions set a plan for installing EVO videoconferencing facilities to allow remote participation for those unable to get there, and reshuffled their programmes. Paul Laycock of Liverpool University was appointed convener of the Future of DIS working group “on the fly”, so relieving the organizers of a difficult situation.

The following two and a half days were dedicated to parallel sessions, which were held in the cloister’s painted rooms and library. The working groups covered a broad programme: parton densities; small-x, diffraction and vector mesons; QCD and final states; heavy flavours; electroweak physics and searches; spin physics; and the future of DIS.

The final two days of the conference began early, at 8.00 a.m., with plenary talks by US speakers over EVO. These included reports on: the rich physics of the CDF and DØ experiments at the Tevatron, by Massimo Casarsa and Qizhong Li of Fermilab; heavy-ion physics at RHIC, by Bill Christie of Brookhaven; and DIS results at Jefferson Lab, by Dave Gaskell. The plenary session on Friday had CERN’s Mike Lamont as a special guest, who reported on the status of the LHC accelerator and its performance. The conveners of the seven working groups summarized their sessions, splitting their reports into theoretical and experimental parts. Halina Abramowicz of Tel Aviv University concluded the workshop, pointing out how the different topics such as parton densities, low-x, diffraction, jets, heavy flavours and spin physics are all tools for improving understanding of the structure of the proton and its implications for the LHC.

Bright horizons

The combined results of ZEUS and H1 on neutral-current and charged-current cross-sections, used as input to fits of the parton distributions in the proton, have led to an impressive accuracy (1–2%), which allows a 5% uncertainty in the prediction of W and Z production at central rapidities at the LHC. The recent inclusion in the fits of combined data on charm reveals that the QCD evolution is sensitive to the treatment of heavy flavours and that the choice of the charm mass plays an important role in the predictions for the LHC. H1 and ZEUS are now focusing on extending the precision inclusive measurements to high and low photon virtualities, Q², and to high Bjorken x. Also on the way is the completion of jet and heavy-flavour measurements based on the full HERA statistics (0.5 fb⁻¹ per experiment). Together, these will provide stringent tests of QCD at all Q² and will further constrain the proton parton distributions.

Meanwhile, CDF and DØ now have 7 fb⁻¹ each on tape and are sensitive to processes with cross-sections below 1 pb. Such a harvest provides a number of outstanding electroweak and QCD results: the running coupling αs has been measured at the highest pT ever, and the combined W-mass measurement from the Tevatron is more precise than the direct measurements at LEP. The combined Tevatron analysis excludes a Standard Model Higgs in the mass range 163 < MHiggs < 166 GeV at 95% confidence level. More results are on the horizon, with 10 fb⁻¹ expected by the end of 2011.

The newborn LHC experiments are performing well and are taking their first look at the particle spectra provided by nature at previously unexplored centre-of-mass energies. A few weeks after the first collisions, distributions at 7 TeV were already available. Figure 1 shows the multiplicity of charged particles as a function of the centre-of-mass energy from different measurements, including from ALICE at 7 TeV. Figure 2, where the average transverse momentum as a function of the charged-particle multiplicity of ATLAS data at 7 TeV is compared with various Monte Carlos (MCs), seems to point to the inadequacy of the models at this energy.

With increasing centre-of-mass energy, the momentum fraction of the partons can be small and the probability of multiparton interactions increases. Looking in detail at the event topology with the available LHC data is already informative: comparing the forward energy flow from minimum-bias events at different √s provides a new, independent constraint on the underlying event models. For example, figure 3 shows the ratio of energy flow measured by CMS at 7 TeV and 0.9 TeV as a function of the rapidity, compared with the Pythia MC.

Exclusive reactions – mainly at HERMES, Jefferson Lab and RHIC – allow the extraction of the generalized parton distributions. This was defined at the workshop as “a major new direction in hadron physics”, aimed at the 3D mapping of the proton and, more generally, of the nucleon.

In all, with results from Belle at KEK, BaBar at SLAC, COMPASS at CERN – as well as from Jefferson Lab, RHIC, the Tevatron, HERA and LHC experiments – QCD was seen at work over a range of studies from e⁺e⁻ annihilation to muon scattering, and from DIS to heavy ions and up to the energy frontier of the LHC. In this stimulating contest, theory is preparing for present and future challenges with the first next-to-next-to-leading-order (NNLO) calculations of precision observables and NNLO parton distributions. An example of the interplay between the precision of the available data and the theoretical predictions is given in figure 4, which shows a compilation of all of the αs measurements presented in the QCD session of the workshop.

The two main future projects, the LHeC electron–proton collider and the EIC electron–ion collider, were discussed extensively in the session on the future of DIS. The interest manifested by 350 or so registrations for the workshop promises a bright future for the field as well as for the DIS workshop series. The next workshop will be held at Jefferson Lab in April 2011 – a site in the US will be the ideal place to discuss future facilities.

• The workshop was organized by the University and INFN Florence, and by the University of Piemonte Orientale. We would like to thank the sponsors: INFN, DESY, CERN, Jefferson Laboratory, Brookhaven National Laboratory and CAEN Viareggio. Special thanks go to our co-organizers Giuseppe Barbagli, Dimitri Colferai and Massimiliano Grazzini, to all of the students and postdocs of our universities who helped out, and to the founder of the workshop series, Aharon Levy.

QCD scattering: from DGLAP to BFKL

Most particle physicists will be familiar with two famous abbreviations, DGLAP and BFKL, which are synonymous with calculations of high-energy, strong-interaction scattering processes, in particular nowadays at HERA, the Tevatron and, most recently, the LHC. The Dokshitzer-Gribov-Lipatov-Altarelli-Parisi (DGLAP) equation and the Balitsky-Fadin-Kuraev-Lipatov (BFKL) equation together form the basis of current understanding of high-energy scattering in quantum chromodynamics (QCD), the theory of strong interactions. The celebration this year of the 70th birthday of Lev Lipatov, whose name appears as the common factor, provides a good occasion to look back at some of the work that led to the two equations and its roots in the theoretical particle physics of the 1960s.

Quantum field theory (QFT) lies at the heart of QCD. Fifty years ago, however, theoreticians were generally disappointed in their attempts to apply QFT to strong interactions. They began to develop methods to circumvent traditional QFT by studying the unitarity and analyticity constraints on scattering amplitudes, and extending Tullio Regge’s ideas on complex angular momenta to relativistic theory. It was around this time that the group in Leningrad led by Vladimir Gribov, which included Lipatov, began to take a lead in these studies.

Quantum electrodynamics (QED) provided the theoretical laboratory to check the new ideas of particle “reggeization”. In several pioneering papers Gribov, Lipatov and co-authors developed the leading-logarithm approximation to processes at high energies; this later played a key role in perturbative QCD for strong interactions (Gorshkov et al. 1966). Using QED as an example, they demonstrated that QFT leads to a total cross-section that does not decrease with energy – the first example of what is known as Pomeron exchange. Moreover, they checked and confirmed the main features of Reggeon field theory in the particular case of QED.

By the end of the 1960s, experiments at SLAC had revealed Bjorken scaling in deep inelastic lepton–hadron scattering. This led Richard Feynman and James Bjorken to introduce nucleon constituents – partons – that later turned out to be nothing other than quarks, antiquarks and gluons. Gribov became interested in finding out if Bjorken scaling could be reproduced in QFT. As examples he studied both a fermion theory with a pseudoscalar coupling and QED, in the kinematic conditions where there is a large momentum transfer, Q², to the fermion. The task was to select and sum all leading Feynman diagrams that give rise to the logarithmically enhanced (α log Q²)ⁿ contributions to the cross-section, at fixed values of the Bjorken variable x = Q²/(s + Q²) between zero and unity, where s is the invariant energy of the reaction.

At some point Lipatov joined Gribov in the project and together they studied not only deep inelastic scattering but also the inclusive annihilation of e⁺e⁻ to a particle, h, in two field-theoretical models, one of which was QED. They showed that in a renormalizable QFT, the structure functions must violate Bjorken scaling (Gribov and Lipatov 1971). They obtained relations between the structure functions that describe deep inelastic scattering and those that describe jet fragmentation in e⁺e⁻ annihilation – the Gribov-Lipatov reciprocity relations. It is interesting to note that this work appeared at a time before experiments had either detected any violation of Bjorken scaling or observed any rise with momentum transfer of the transverse momenta in “hard” hadronic reactions, as would follow from a renormalizable field theory. This paradox led to continuous and sometimes heated discussions in the new Theory Division of the Leningrad (now Petersburg) Nuclear Physics Institute (PNPI) in Gatchina.

Somewhat later, Lipatov reformulated the Gribov-Lipatov results for QED in the form of evolution equations for parton densities (Lipatov 1974). These differed from the real thing, QCD, only by colour factors and by the absence of the gluon-to-gluon-splitting kernel, which was later provided independently by Yuri Dokshitzer at PNPI, and by Guido Altarelli and Giorgio Parisi, then at the École Normale Supérieure and IHES, Bures-sur-Yvette, respectively (Dokshitzer 1977, Altarelli and Parisi 1977). Today the Dokshitzer-Gribov-Lipatov-Altarelli-Parisi (DGLAP) evolution equations are the basis for all of the phenomenological approaches that are used to describe hadron interactions at short distances.

The more general evolution equation for quasi-partonic operators that Lipatov and his co-authors obtained allowed them to consider more complicated reactions, including high-twist operators and polarization phenomena in hard hadronic processes.

Lipatov went on to show that the gauge vector boson in Yang-Mills theory is “reggeized”: with radiative corrections included, the vector boson becomes a moving pole in the complex angular momentum plane near j=1. In QCD, however, this pole is not directly observable by itself because it corresponds to colour exchange. More meaningful is an exchange of two or more reggeized gluons, which leads to “colourless” exchange in the t-channel, either with vacuum quantum numbers (when it is called a Pomeron) or non-vacuum ones (when it is called an “odderon”). Lipatov and his collaborators showed that the Pomeron corresponds not to a pole, but to a cut in the plane of complex angular momentum.

A different approach

The case of high-energy scattering required a different approach. Here, in contrast to the DGLAP approach – which sums up higher-order αs contributions enhanced by the logarithm of virtuality, ln Q² – contributions enhanced by the logarithm of energy, ln s, or by the logarithm of a small momentum fraction, x, carried by gluons, become important. The leading-log contributions of the type (αs ln(1/x))ⁿ are summed up by the famous Balitsky-Fadin-Kuraev-Lipatov (BFKL) equation (Kuraev et al. 1977, Balitsky and Lipatov 1978). Compared with DGLAP, this is a more complicated problem because the BFKL equation actually includes contributions from operators of higher twists.
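
Schematically, the two resummations can be contrasted side by side (a schematic form only; the coefficients cₙ and dₙ stand in for all process-dependent detail):

```latex
\text{DGLAP:}\;\sum_n c_n \left(\alpha_s \ln Q^2\right)^n
\qquad\text{versus}\qquad
\text{BFKL:}\;\sum_n d_n \left(\alpha_s \ln \tfrac{1}{x}\right)^n
```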

In its general form the BFKL equation describes not only the high-energy behaviour of cross-sections but also the amplitudes at non-zero momentum transfer. Lipatov discovered beautiful symmetries in this equation, which enabled him to find solutions in terms of the conformal-symmetric eigenfunctions. This completed the construction of the “bare Pomeron in QCD”, a fundamental entity of high-energy physics (Lipatov 1986). An interesting new property of this bare Pomeron (which was not known in the old reggeon field theory) is the diffusion of the emitted particles in ln kt space.

Later, in the 1990s, Lipatov together with Victor Fadin calculated the next-to-leading-order corrections to the BFKL equation, obtaining the “BFKL Pomeron in the next-to-leading approximation” (Fadin and Lipatov 1998). Independently, this was also done by Marcello Ciafaloni and Gianni Camici in Florence (Ciafaloni and Camici 1998). Lipatov also studied higher-order amplitudes with an arbitrary number of gluons exchanged in the t-channel and, in particular, described odderon exchange in perturbative QCD. The significance of this work was, however, much greater. It led to the discovery of the connection between high-energy scattering and the exactly solvable two-dimensional field-theoretical models (Lipatov 1994).

More recently Lipatov has taken these ideas into the hot new field in theoretical physics: the anti-de Sitter/conformal field theory correspondence (AdS/CFT) – a hypothesis put forward by Juan Maldacena in 1997. This states that there is a correspondence – a duality – between the description of the maximally supersymmetric N=4 modification of QCD on the standard field-theory side and, on the “gravity” side, the spectrum of a string moving in a peculiar curved anti-de Sitter background – a seemingly unrelated problem. However, Lipatov’s experience and deep understanding of re-summed perturbation theory have enabled him to move quickly into this new territory, where he has developed and tested new ideas, considering first the BFKL and DGLAP equations in the N=4 theory and computing the anomalous dimensions of various operators. The high symmetry of this theory, in contrast to standard QCD, allows calculations to be made at unprecedentedly high orders and the results then compared with the “dual” predictions of string theory. It also facilitates finding the integrable structures in the theory (Lipatov 2009).

In this work, Lipatov has collaborated with many people, including Vitaly Velizhanin, Alexander Kotikov, Jochen Bartels, Matthias Staudacher and others. Their work is establishing the duality hypothesis almost beyond doubt. This opens a new horizon in studying QFT at strong couplings – something that no one would have dreamt of 50 years ago.

• The author thanks Victor Fadin and Mikhail Ryskin for helpful comments.

ALICE reveals first results at 7 TeV

The ALICE collaboration has submitted its first paper with results from LHC proton collisions at a centre-of-mass energy of 7 TeV. The results confirm that the charged-particle multiplicity appears to be rising with energy faster than expected.

The results are based on the analysis of a sample of 300,000 proton–proton collisions the ALICE experiment collected during the first runs of the LHC with stable beams at a centre-of-mass energy, √s, of 7 TeV, following the first collisions at this energy on 30 March. The collaboration compares them with data collected earlier at √s=0.9 TeV and √s=2.36 TeV, which they have re-analysed since their earlier publication, with the same normalization as for the new data.

The events used in the analysis have at least one charged particle in the central pseudorapidity region, |η| < 1. The selection leaves 47,000, 35,000 and 240,000 events for analysis at 0.9 TeV, 2.36 TeV and 7 TeV, respectively. At 7 TeV, the collaboration measures a pseudorapidity density of primary charged particles, dNch/dη = 6.01 ± 0.01 (stat.) +0.20/−0.12 (syst.). This corresponds to an increase of 57.6% ± 0.4% (stat.) +3.6/−1.8% (syst.) relative to collisions at 0.9 TeV.
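
A quick consistency check of the quoted numbers, using central values only (the asymmetric systematics are ignored in this sketch):

```python
dndeta_7tev = 6.01   # measured dN_ch/deta at 7 TeV
increase = 0.576     # quoted 57.6% rise relative to 0.9 TeV

# Implied 0.9 TeV density: 6.01 / 1.576 ~ 3.81
print(f"dN_ch/deta at 0.9 TeV ~ {dndeta_7tev / (1 + increase):.2f}")
```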

This increase is significantly higher than expected from calculations with the commonly used models, confirming the observations made earlier at 2.36 TeV. In addition, the ALICE collaboration finds that the shape of the multiplicity distribution is not reproduced well by the standard simulations. These results have already triggered interest in the cosmic-ray community.

Gran Sasso becomes a workshop WONDERland

The INFN’s Gran Sasso National Laboratory provides the world’s largest underground infrastructure for astroparticle physics. It currently hosts four operational dark-matter experiments – CRESST, DAMA-LIBRA, WArP and XENON – and was therefore a fitting venue for WONDER, the Workshop On Next Dark-matter Experimental Research. Designed to generate fruitful discussions about the future of the exciting field of dark-matter physics, the workshop was held on 22–23 March and attracted around 100 participants.

As is well known, “dark matter” is the name given to 23% of the “inventory” of the universe, the existence of which is indicated by several experimental facts, the first and most famous being the anomalous rotation curves of galaxies. Although some alternative models still survive to explain these unexpected effects, the most fascinating explanation – at least for particle physicists – is the existence of stable, massive particles that interact only weakly with ordinary matter and permeate all galaxies, including ours. Supersymmetry provides a nice theoretical framework for such an explanation, and the lightest supersymmetric particle, the neutralino, could be a viable candidate for dark matter. First, however, experimental evidence is needed to pin down the characteristics of the “dark” particles, which are often referred to as “WIMPs” – weakly interacting massive particles. The question is: how to identify the particles?

One way is to look for the production of WIMPs in collisions at the LHC at CERN. Other “indirect” techniques look for likely signatures of annihilations of WIMPs occurring in the Sun, Earth or galactic halo; these could appear, for example, as anomalous neutrino or gamma-ray fluxes. A third method is to observe the direct interactions of WIMPs with ordinary matter. Underground laboratories are the ideal place to carry out this quest. Anywhere else on the surface of the Earth, the overwhelming cosmic radiation would drown out the tiny signal (if it exists), making the search as hopeless as trying to spot a distant star in daylight.

Even amid the “cosmic silence” at the heart of a mountain (as at Gran Sasso), dark-matter experiments struggle to attain the best sensitivity with elaborate techniques and, above all, by trying to reduce the residual gamma and neutron backgrounds to unprecedentedly low levels.

DAMA-LIBRA, one of the first experiments at Gran Sasso, does in fact observe a significant modulation signal in its high-purity sodium-iodide scintillators, of the kind that the motion of the Earth through the dark-matter halo would be expected to cause. The DAMA collaboration presents this signal as evidence for the discovery of dark matter and the scientific community awaits a confirmation, possibly with new, different techniques. The problem is that, until now, the other experiments seem to rule out DAMA-LIBRA’s result, although the comparison between different techniques is far from straightforward. Theoretical models still survive that reconcile all current experimental results with a positive discovery by DAMA-LIBRA.

Among today’s technologies, detectors employing cryogenic noble liquids occupy a pre-eminent position. These seem to allow for excellent signal-to-background discrimination, coupled with the possibility to build massive detectors. The Gran Sasso National Laboratory provided a natural location to discuss the future of these searches because it hosts three experiments, other than DAMA-LIBRA, that are competing for the discovery of dark matter, namely CRESST, WArP and XENON. The race is particularly interesting between the latter two of these because they use the same “double-phase” technique, but with different targets. XENON employs 160 kg of its homonymous noble element in liquid form, while WArP has a similar amount of liquid argon, a medium with which research groups at INFN have considerable expertise.

Carlo Rubbia, the spokesperson of the WIMP Argon Programme (WArP), opened the workshop with an excellent and comprehensive overview of the experimental landscape. This was followed by theoretical talks that helped to set up the general framework of the field. With regard to experimental activities, preliminary results from the XENON100 detector provided a highlight of the workshop. About 11 days of data have been analysed and were presented by the XENON spokesperson Elena Aprile, from Columbia University. The data show an extremely low background – the lowest ever reached – and raise even stronger expectations for future results.

Claudio Montanari of INFN presented the status of WArP, which has just started data-taking, while Wolfgang Seidel of the Max Planck Institute talked about interesting results from CRESST, a detector made from scintillating calcium-tungstate crystals. Activities beyond Gran Sasso were also discussed. Masaki Yamashita of Kamioka/Tokyo presented Xmass, a particularly promising detector based on liquid xenon, which is close to its commissioning phase in the Kamioka mine in Japan. Newer techniques also seem to be interesting and promising. These include the directional detectors that Neil Spooner of Sheffield University described and, in particular, the bubble chambers COUPP and PICASSO, which Nigel Smith of SNOLAB discussed in his extensive overview of dark-matter activities around the world.

Two stimulating talks were dedicated to the problem of backgrounds, especially from neutrons. Frank Calaprice of Princeton University and Vitaly Kudryavtsev of Sheffield University described these issues. The final session covered, in depth and in a critical manner, the issues of backgrounds, sensitivity and stability for each group of techniques.

Overall, the workshop revealed an extremely lively field, with existing detectors producing new results, others about to enter their commissioning phase, advanced projects being proposed for new underground facilities and intense theoretical activity. We all “wonder” if a discovery is just round the corner.

Borexino gets a first look inside the Earth

The Borexino Collaboration has announced the observation of geoneutrinos at the underground Gran Sasso National Laboratory of the Italian Institute for Nuclear Physics (INFN). The data reveal, for the first time, an antineutrino signal well above background with the energy spectrum expected for radioactive decays of uranium and thorium in the Earth.

The Borexino Collaboration, comprising institutes from Italy, the US, Germany, Russia, Poland and France, operates a 300-tonne liquid-scintillator detector designed to observe and study low-energy solar neutrinos. Technologies developed by the collaboration have enabled them to achieve very low background levels in the detector, which were crucial in making the first measurements of solar neutrinos below 1 MeV. The central core of Borexino now has the lowest background available for such observations and this has been key to the detection of geoneutrinos.

Geoneutrinos are antineutrinos produced in the radioactive decays of naturally occurring uranium, thorium, potassium and rubidium. Decays from these radioactive elements are believed to contribute a significant but unknown fraction of the heat generated inside the Earth. This heat produces convective movements in the mantle, which influence volcanic activity and the tectonic-plate movements that induce seismic activity, as well as the geo-dynamo that creates the Earth’s magnetic field.

The importance of geoneutrinos was pointed out by Gernot Eder and George Marx in the 1960s and in 1984 a seminal study by Laurence Krauss, Sheldon Glashow and David Schramm laid the foundation for the field. In 2005, the KamLAND Collaboration reported an excess of low-energy antineutrinos above background in their detector in the Kamioka mine in Japan. Owing to a high background from internal radioactivity and antineutrinos emitted from nearby nuclear power plants, the KamLAND Collaboration reported that the excess events were an “indication” of geoneutrinos.

With 100 times lower background than KamLAND, the Borexino data reveal a clear low-background signal for antineutrinos, which matches the energy spectrum of uranium and thorium geoneutrinos. The lower background is a consequence both of the scintillator purification and the construction methods developed by the Borexino Collaboration to optimize radio-purity, and of the absence of nearby nuclear-reactor plants.

The origin of the known 40 TW of power produced within the Earth is one of the fundamental questions of geology. The definite detection of geoneutrinos by Borexino confirms that radioactivity contributes a significant fraction, possibly most, of this power. Other sources of power are possible, the main one being cooling from the primordial condensation of the hot Earth. A powerful natural geo-nuclear reactor at the centre of the Earth has been suggested, but is ruled out as a significant energy source by the absence of the high rate of antineutrinos associated with such a geo-reactor that should have been observed in the Borexino data.

Although radioactivity can account for a significant part of the Earth’s internal heat, measurements with a global array of geoneutrino detectors above continental and oceanic crust will be needed for a detailed understanding. By exploiting the unique features of the geoneutrino probe, future data from Borexino, KamLAND and the upcoming SNO+ detector in Canada should provide a more complete understanding of the Earth’s interior and the source of its internal heat.

Black holes and qubits

Quantum entanglement lies at the heart of quantum information theory (QIT), with applications to quantum computing, teleportation, cryptography and communication. In the apparently separate world of quantum gravity, the Hawking effect of radiating black holes has also occupied centre stage. Despite their apparent differences it turns out that there is a correspondence between the two (Duff 2007; Kallosh and Linde 2006).

Whenever two disparate areas of theoretical physics are found to share the same mathematics, it frequently leads to new insights on both sides. Indeed, this correspondence turned out to be the tip of an iceberg: knowledge of string theory and M-theory leads to new discoveries about QIT, and vice versa.

Bekenstein-Hawking entropy

Every object, such as a star, has a critical size, determined by its mass, called the Schwarzschild radius. A black hole is any object smaller than this. Once something falls inside the Schwarzschild radius, it can never escape. This boundary in space–time is called the event horizon. So the classical picture of a black hole is that of a compact object whose gravitational field is so strong that nothing – not even light – can escape.

Yet in 1974 Stephen Hawking showed that quantum black holes are not entirely black but may radiate energy. In that case, they must possess the thermodynamic quantity called entropy. Entropy is a measure of how disorganized a system is and, according to the second law of thermodynamics, it can never decrease. Noting that the area of a black hole’s event horizon can never decrease, Jacob Bekenstein had earlier suggested such a thermodynamic interpretation implying that black holes must have entropy. This Bekenstein–Hawking black-hole entropy is given by one quarter of the area of the event horizon.
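
For reference, both quantities have compact closed forms (standard results, with M the mass, A the horizon area and kB the Boltzmann constant):

```latex
r_s = \frac{2GM}{c^2},
\qquad
S_{\mathrm{BH}} = \frac{k_B c^3 A}{4\hbar G} = \frac{A}{4}
\quad \text{in natural units } (G = \hbar = c = k_B = 1).
```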

Entropy also has a statistical interpretation as a measure of the number of quantum states available. However, it was not until 20 years later that string theory provided a microscopic explanation of this kind for black holes.

Bits and pieces

A bit in the classical sense is the basic unit of computer information and takes the value of either 0 or 1. A light switch provides a good analogy; it can either be off, denoted 0, or on, denoted 1. A quantum bit or “qubit” can also have two states but whereas a classical bit is either 0 or 1, a qubit can be both 0 and 1 until we make a measurement. In quantum mechanics, this is called a superposition of states. When we actually perform a measurement, we will find either 0 or 1 but we cannot predict with certainty what the outcome will be; the best we can do is to assign a probability to each outcome.

There are many different ways to realize a qubit physically. Elementary particles can carry an intrinsic spin. So one example of a qubit would be a superposition of an electron with spin up, denoted 0, and an electron with spin down, denoted 1. Another example of a qubit would be the superposition of the left and right polarizations of a photon. So a single qubit state, usually called Alice, is a superposition of Alice-spin-up 0 and Alice-spin-down 1, represented by the line in figure 1. The most general two-qubit state, Alice and Bob, is a superposition of Alice-spin-up–Bob-spin-up 00, Alice-spin-up–Bob-spin-down 01, Alice-spin-down–Bob-spin-up 10 and Alice-spin-down–Bob-spin-down 11, represented by the square in figure 1.

Consider a special two-qubit state that is just 00 + 01. Alice can only measure spin up but Bob can measure either spin up or spin down. This is called a separable state; Bob’s measurement is uncorrelated with that of Alice. By contrast, consider 00 + 11. If Alice measures spin up, so too must Bob, and if she measures spin down so must he. This is called an entangled state; Bob cannot help making the same measurement. Mathematically, the square in figure 1 forms a 2 × 2 matrix and a state is entangled if the matrix has a nonzero determinant.
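
This determinant test is easy to verify directly. A minimal sketch follows (for a normalised pure two-qubit state, 2|det| is the standard concurrence; the helper name is made up for illustration):

```python
import numpy as np

def is_entangled(amplitudes) -> bool:
    """Two-qubit pure state [c00, c01, c10, c11]: entangled iff det != 0."""
    m = np.asarray(amplitudes, dtype=complex).reshape(2, 2)
    return abs(np.linalg.det(m)) > 1e-12

print(is_entangled([1, 1, 0, 0]))   # 00 + 01: separable -> False
print(is_entangled([1, 0, 0, 1]))   # 00 + 11: entangled -> True
```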

This is the origin of the famous Einstein–Podolsky–Rosen (EPR) paradox put forward in 1935. Even if Alice is in Geneva and Bob is millions of miles away in Alpha Centauri, Bob’s measurement will still be determined by that of Alice. No wonder Albert Einstein called it “spooky action at a distance”. EPR concluded rightly that if quantum mechanics is correct then nature is nonlocal, and if we insist on local “realism” then quantum mechanics must be incomplete. Einstein himself favoured the latter hypothesis. However, it was not until 1964 that CERN theorist John Bell proposed an experiment that could decide which version was correct – and it was not until 1982 that Alain Aspect actually performed the experiment. Quantum mechanics was right, Einstein was wrong and local realism went out the window. As QIT developed, the impact of entanglement went far beyond the testing of the conceptual foundations of quantum mechanics. Entanglement is now essential to numerous quantum-information tasks such as quantum cryptography, teleportation and quantum computation.

Cayley’s hyperdeterminant

As a high-energy theorist involved in research on quantum gravity, string theory and M-theory, I paid little attention to any of this, even though, as a member of staff at CERN in the 1980s, my office was just down the hall from Bell’s.

My interest was not aroused until 2006, when I attended a lecture by the Hungarian physicist Peter Levay at a conference in Tasmania. He was talking about three qubits – Alice, Bob and Charlie – for which there are eight possibilities: 000, 001, 010, 011, 100, 101, 110, 111, represented by the cube in figure 1. Wolfgang Dür and colleagues at the University of Innsbruck have shown that three qubits can be entangled in several physically distinct ways: tripartite GHZ (Greenberger–Horne–Zeilinger), tripartite W, biseparable A-BC, separable A-B-C and null, as shown in the left-hand diagram of figure 2 (Dür et al. 2000).

The GHZ state is distinguished by a nonzero quantity known as the 3-tangle, which measures genuine tripartite entanglement. Mathematically, the cube in figure 1 forms what in 1845 the mathematician Arthur Cayley called a “2 × 2 × 2 hypermatrix” and the 3-tangle is given by the generalization of a determinant called Cayley’s hyperdeterminant.
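
For readers who want to experiment, the hyperdeterminant can be written out explicitly and evaluated numerically. A minimal sketch follows; sign conventions vary in the literature, so the 3-tangle is taken here as 4|Det|, which gives 1 for the normalised GHZ state and 0 for the W state:

```python
import numpy as np

def cayley_hyperdet(a):
    """Cayley's hyperdeterminant of a 2x2x2 hypermatrix a[i, j, k]."""
    a = np.asarray(a, dtype=float)
    return (a[0,0,0]**2 * a[1,1,1]**2 + a[0,0,1]**2 * a[1,1,0]**2
          + a[0,1,0]**2 * a[1,0,1]**2 + a[1,0,0]**2 * a[0,1,1]**2
          - 2 * (a[0,0,0]*a[0,0,1]*a[1,1,0]*a[1,1,1]
               + a[0,0,0]*a[0,1,0]*a[1,0,1]*a[1,1,1]
               + a[0,0,0]*a[1,0,0]*a[0,1,1]*a[1,1,1]
               + a[0,0,1]*a[0,1,0]*a[1,0,1]*a[1,1,0]
               + a[0,0,1]*a[1,0,0]*a[0,1,1]*a[1,1,0]
               + a[0,1,0]*a[1,0,0]*a[0,1,1]*a[1,0,1])
          + 4 * (a[0,0,0]*a[0,1,1]*a[1,0,1]*a[1,1,0]
               + a[0,0,1]*a[0,1,0]*a[1,0,0]*a[1,1,1]))

ghz = np.zeros((2, 2, 2)); ghz[0,0,0] = ghz[1,1,1] = 1/np.sqrt(2)
w = np.zeros((2, 2, 2)); w[0,0,1] = w[0,1,0] = w[1,0,0] = 1/np.sqrt(3)

print(4 * abs(cayley_hyperdet(ghz)))  # 1.0: genuine tripartite entanglement
print(4 * abs(cayley_hyperdet(w)))    # 0.0: the W class has zero 3-tangle
```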

The reason this sparked my interest was that Levay’s equations reminded me of some work I had been doing on a completely different topic in the mid-1990s with my collaborators Joachim Rahmfeld and Jim Liu (Duff et al. 1996). We found a particular black-hole solution that carries eight charges (four electric and four magnetic) and involves three fields called S, T and U. When I got back to London from Tasmania I checked my old notes and asked what would happen if I identified S, T and U with Alice, Bob and Charlie so that the eight black-hole charges were identified with the eight numbers that fix the three-qubit state. I was pleasantly surprised to find that the Bekenstein–Hawking entropy of the black holes was given by the 3-tangle: both were described by Cayley’s hyperdeterminant.

Octonions and super qubits

According to supersymmetry, for each known boson (integer spin 0, 1, 2 and so on) there is a fermion (half-integer spin 1/2, 3/2, 5/2 and so on), and vice versa. CERN’s Large Hadron Collider will be looking for these superparticles. The number of supersymmetries is denoted by N and ranges from 1 to 8 in four space–time dimensions.

CERN’s Sergio Ferrara and I have extended the STU model example, which has N = 2, to the most general case of black holes in N = 8 supergravity. We have shown that the corresponding system in quantum-information theory is that of seven qubits (Alice, Bob, Charlie, Daisy, Emma, Fred and George), undergoing at most a tripartite entanglement of a specific kind as depicted by the Fano plane of figure 3.

The Fano plane has a strange mathematical property: it describes the multiplication table of a particular kind of number: the octonion. Mathematicians classify numbers into four types: real numbers, complex numbers (with one imaginary part A), quaternions (with three imaginary parts A, B, D) and octonions (with seven imaginary parts A, B, C, D, E, F, G). Quaternions are noncommutative because AB does not equal BA. Octonions are both noncommutative and nonassociative because (AB)C does not equal A(BC).
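
The multiplication that the Fano plane encodes can be generated programmatically with the Cayley–Dickson construction, which builds complexes from reals, quaternions from complexes and octonions from quaternions. A minimal sketch under one standard sign convention follows; the units are indexed e0–e7 here rather than by the article's A–G labels:

```python
import numpy as np

def conj(x):
    """Cayley-Dickson conjugate of a coefficient vector (length a power of 2)."""
    if len(x) == 1:
        return x.copy()
    n = len(x) // 2
    return np.concatenate([conj(x[:n]), -x[n:]])

def mul(x, y):
    """Cayley-Dickson product: (a, b)(c, d) = (ac - d*b, da + bc*)."""
    if len(x) == 1:
        return x * y
    n = len(x) // 2
    a, b = x[:n], x[n:]
    c, d = y[:n], y[n:]
    return np.concatenate([mul(a, c) - mul(conj(d), b),
                           mul(d, a) + mul(b, conj(c))])

e = np.eye(8)  # octonion basis e0..e7 (e0 is the real unit)

print(mul(e[1], e[2]), mul(e[2], e[1]))  # e3 and -e3: noncommutative
lhs = mul(mul(e[1], e[2]), e[4])         # (e1 e2) e4 = e7
rhs = mul(e[1], mul(e[2], e[4]))         # e1 (e2 e4) = -e7
print(np.allclose(lhs, -rhs))            # True: nonassociative
```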

Real, complex and quaternion numbers show up in many physical contexts. Quantum mechanics, for example, is based on complex numbers and Pauli’s electron-spin operators are quaternionic. Octonions have fascinated mathematicians and physicists for decades but have yet to find any physical application. In recent books, both Roger Penrose and Ray Streater have characterized octonions as one of the great “lost causes” in physics. So we hope that the tripartite entanglement of seven qubits (which is just at the limit of what can be reached experimentally) will prove them wrong and provide a way of seeing the effects of octonions in the laboratory (Duff and Ferrara 2007; Borsten et al. 2009a).

In another development, QIT has been extended to super-QIT with the introduction of the superqubit, which can take on three values: 0 or 1 or $. Here 0 and 1 are “bosonic” and $ is “fermionic” (Borsten et al. 2009b). Such values can be realized in condensed-matter physics, such as the excitations of the t-J model of strongly correlated electrons, known as spinons and holons. The superqubits promise totally new effects. For example, despite appearances, the two-superqubit state $$ is entangled. Superquantum computing is already being investigated (Castellani et al. 2010).

Strings, branes and M-theory

If current ideas are correct, a unified theory of all physical phenomena will require some radical ingredients in addition to supersymmetry. For example, there should be extra dimensions: supersymmetry places an upper limit of 11 on the dimension of space–time. The kind of real, four-dimensional world that supergravity ultimately predicts depends on how the extra seven dimensions are rolled up, in a way suggested by Oskar Kaluza and Theodor Klein in the 1920s. In 1984, however, 11-dimensional supergravity was knocked off its pedestal by superstring theory in 10 dimensions. There were five competing theories: the E8 × E8 heterotic, the SO(32) heterotic, the SO(32) Type I, and the Type IIA and Type IIB strings. The E8 × E8 seemed – at least in principle – capable of explaining the elementary particles and forces, including their handedness. Moreover, strings seemed to provide a theory of gravity that is consistent with quantum effects.

However, the space–time of 11 dimensions allows for a membrane, which may take the form of a bubble or a two-dimensional sheet. In 1987 Paul Howe, Takeo Inami, Kelly Stelle and I showed that if one of the 11 dimensions were a circle, we could wrap the sheet round it once, pasting the edges together to form a tube. If the radius becomes sufficiently small, the rolled-up membrane ends up looking like a string in 10 dimensions; it yields precisely the Type IIA superstring. In a landmark talk at the University of Southern California in 1995, Ed Witten drew together all of this work on strings, branes and 11 dimensions under the umbrella of M-theory in 11 dimensions. Branes now occupy centre stage as the microscopic constituents of M-theory, as the higher-dimensional progenitors of black holes and as entire universes in their own right.

Such breakthroughs have led to a new interpretation of black holes as intersecting black-branes wrapped round the seven curled dimensions of M-theory or six of string theory. Moreover, the microscopic origin of the Bekenstein-Hawking entropy is now demystified. Using Joseph Polchinski’s D-branes, Andrew Strominger and Cumrun Vafa were able to count the number of quantum states of these wrapped branes (Strominger and Vafa 1996). A p-dimensional D-brane (or Dp-brane) wrapped round some number p of the compact directions (x4, x5, x6, x7, x8, x9) looks like a black hole (or D0-brane) from the four-dimensional (x0, x1, x2, x3) perspective. Strominger and Vafa found an entropy that agrees with Hawking’s prediction, placing another feather in the cap of M-theory. Yet despite all of these successes, physicists are glimpsing only small corners of M-theory; the big picture is still lacking. Over the next few years we hope to discover what M-theory really is. Understanding black holes will be an essential prerequisite.

Falsifiable predictions?

The partial nature of our understanding of string/M-theory has so far prevented any kind of smoking-gun experimental test. This has led some critics of string theory to suggest that it is not true science. This is easily refuted by studying the history of scientific discovery; the 30-year time lag between the EPR idea and Bell’s falsifiable prediction provides a nice example (see Further reading). Nevertheless it cannot be denied that such a prediction in string theory would be welcome.

In the string literature one may find D-brane intersection rules that tell us how N branes can intersect over one another and the fraction of supersymmetry (susy) that they preserve (Bergshoeff et al. 1997). In our black-hole/qubit correspondence, my students Leron Borsten, Duminda Dahanayake, Hajar Ebrahim, William Rubens and I showed that the microscopic description of the GHZ state, 000 + 011 + 101 + 110, is that of the N = 4, 1/8-susy case of D3-branes of Type IIB string theory (Borsten et al. 2008). We denoted the wrapped circles by crosses and the unwrapped circles by noughts; 0 corresponds to XO and 1 to OX, as in table 1. So the number of qubits here is three because the number of extra dimensions is six. This also explains where the two-valuedness enters on the black-hole side. To wrap or not to wrap; that is the qubit.

Repeating the exercise for the N < 4 cases and using our dictionary, we see that string theory predicts the three-qubit entanglement classification of figure 2, which is in complete agreement with the standard results of QIT. Allowing for different p-branes wrapping different dimensions, we can also describe “qutrits” (three-state systems) and, more generally, “qudits” (d-state systems). Furthermore, for the well-documented cases of 2 × 2, 2 × 3, 3 × 3, 2 × 2 × 3 and 2 × 2 × 4, our D-brane intersection rules are also in complete agreement. However, for higher entanglements, such as 2 × 2 × 2 × 2, the QIT results are partial, not known, or else contradictory. This is currently an active area of research in QIT because experimentalists can now control entanglement with a greater number of qubits. One of our goals is to use the allowed wrapping configurations and D-brane intersection rules to predict new qubit-entanglement classifications.

So the esoteric mathematics of string and M-theory might yet find practical applications.

 
