
DØ sees anomalous asymmetry in decays of B mesons

The DØ collaboration at Fermilab has reported evidence of a violation of matter–antimatter symmetry (“CP symmetry”) in the behaviour of neutral mesons containing b quarks. Studying collisions in which B mesons decay semi-leptonically into muons, the team finds about 1% more events with two negatively charged muons than events with two positively charged muons. The collisions in the DØ detector start from a CP-symmetric proton–antiproton initial state, and the CP asymmetry predicted by the Standard Model is much smaller than the one observed. An asymmetry of 1% is therefore completely unexpected.

The properties of B mesons, created in collisions in which a bb̄ quark pair is produced, are assumed to be responsible for this asymmetry. Neutral mesons containing b quarks are known to oscillate between their particle state (B⁰ = b̄d or B⁰s = b̄s) and antiparticle state (B̄⁰ = bd̄ or B̄⁰s = bs̄) before they decay into a positively charged muon (for the B) or a negatively charged muon (for the B̄). If a B meson oscillates before its decay, its decay muon has the “wrong sign”, i.e. its charge is identical to the charge of the muon from the other b decay. Having 1% more negatively charged muon pairs therefore implies that the oscillation of a B meson into its antiparticle occurs slightly more often than the reverse process.
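The like-sign asymmetry described above is simply a normalized difference of pair counts, A = (N⁻⁻ − N⁺⁺)/(N⁻⁻ + N⁺⁺); a minimal sketch, where the event counts are purely hypothetical and chosen only to reproduce a 1% excess:

```python
def dimuon_asymmetry(n_minus_minus, n_plus_plus):
    """Like-sign dimuon charge asymmetry A = (N-- - N++) / (N-- + N++)."""
    return (n_minus_minus - n_plus_plus) / (n_minus_minus + n_plus_plus)

# Hypothetical counts: ~1% more mu- mu- pairs than mu+ mu+ pairs
a = dimuon_asymmetry(101_000, 99_000)
print(f"A = {a:.4f}")  # exactly +0.0100 for these counts
```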

The DØ detector has two magnets, a central solenoid and a muon-system toroid, which determine the curvature and charge of muons. By regularly reversing the polarities of these magnets the collaboration can eliminate most effects coming from asymmetries in the detection of positively and negatively charged muons. This feature is crucial for reducing systematic effects in this measurement.

Another known source of asymmetry arises from muons produced in the decays of charged kaons. Kaons contain strange quarks, and the interaction cross-sections of positively and negatively charged kaons with the matter making up the DØ detector differ significantly: more interaction channels are open to the K⁻, which contains a strange quark, than to the K⁺, which contains a strange antiquark. In detailed studies the collaboration has derived the contribution of this effect almost entirely from the data, making the measurement of the asymmetry in B-meson decays largely independent of external assumptions and simulation.

The final result lies 3.2 σ from the Standard Model expectation, corresponding to a probability of less than 0.1 per cent that the measurement is a fluctuation of known effects. The analysis was based on an integrated luminosity of 6.1 fb⁻¹, and the collaboration plans to increase the accuracy of the measurement by adding significantly more data and improving the analysis methods.
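A 3.2 σ deviation can be translated into a Gaussian tail probability with the complementary error function; a sketch, assuming the one-sided tail convention:

```python
import math

def one_sided_p_value(n_sigma):
    """One-sided Gaussian tail probability for an n-sigma deviation."""
    return 0.5 * math.erfc(n_sigma / math.sqrt(2))

p = one_sided_p_value(3.2)
print(f"p = {p:.5f}")  # roughly 7e-4, i.e. below 0.1 per cent
```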

CERN Council opens the door to greater integration

At its 155th session, on 18 June, the CERN Council opened the door to greater integration in particle physics when it unanimously adopted the recommendations of a working group that was set up in 2008 to examine the role of the organization in the light of increasing globalization in particle physics.

“This is a milestone in CERN’s history and a giant leap for particle physics,” said Michel Spiro, president of the CERN Council. “It recognizes the increasing globalization of the field, and the important role played by CERN on the world stage.”

The key points agreed at the meeting were:

• All states shall be eligible for membership, irrespective of their geographical location;
• A new associate membership status is to be introduced to allow non-member states to establish or strengthen their institutional links with the organization;
• Associate membership shall also serve as the obligatory pre-stage to full membership;
• The existing observer status will be phased out for states, but retained for international organizations.

International co-operation agreements and protocols will be retained. “Particle physics is becoming increasingly integrated at the global level,” explained CERN’s director-general Rolf Heuer. “The decision contributes towards creating the conditions that will enable CERN to play a full role in any future facility, wherever in the world it might be.”

CERN currently has 20 member states: Austria, Belgium, Bulgaria, the Czech Republic, Denmark, Finland, France, Germany, Greece, Hungary, Italy, Netherlands, Norway, Poland, Portugal, Slovakia, Spain, Sweden, Switzerland and the UK. India, Israel, Japan, the Russian Federation, the US, Turkey, the European Commission and UNESCO have observer status. Applications for membership from Cyprus, Israel, Serbia, Slovenia and Turkey have already been received by the CERN Council, and are currently undergoing technical verification. At future meetings, Council will determine how to apply the new arrangements to these states.

In other business, Council recognized that further work is necessary on the organization’s medium-term plan, in order to maintain a vibrant research programme through a period of financial austerity, and endorsed CERN’s new code of conduct.

Full details of the new membership arrangements can be found in Council document CERN/2918, which is available at http://indico.cern.ch/getFile.py/access?resId=1&materialId=0&contribId=35&sessionId=0&subContId=0&confId=96020.

Further commissioning improves luminosity

By the end of June the LHC was making good progress towards delivering the first 100 nb⁻¹ of integrated luminosity at an energy of 3.5 TeV per beam. This followed some two weeks devoted to beam commissioning, with the goal of achieving stable collisions at 3.5 TeV with the design intensity of 8 × 10¹¹ protons per bunch. The first days of July saw machine fills for physics with six bunches per beam at this nominal intensity, providing a further boost to the goal of reaching 1 fb⁻¹ before the end of 2011.

The first collisions at 3.5 TeV between bunches at nominal intensity were achieved on 26 May, following earlier tests on ramping the energy with bunches at this intensity. However, to make progress towards further stable running, the accelerator team needed to perform a variety of commissioning studies to establish the appropriate baseline for operating the LHC in these conditions.

These studies involved establishing the optimal reference settings for both ramping the energy and for a “squeeze” to β* of 3.5 m, prior to bringing the beams into collision. (The squeeze reduces the beam size at the interaction points and is described by the parameter β*, which gives the distance from the interaction point to the place where the beam is twice the size.) The settings include the all-important collimator positions, a key part of the machine protection system, and this alone involved 108 setup operations.
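The squeeze parameter can be connected to the transverse spot size at the interaction point via σ* = √(εβ*), with geometric emittance ε = ε_N/γ; a rough sketch, where the normalized emittance and the proton-energy conversion are assumed, illustrative values not taken from the article:

```python
import math

def ip_beam_size(beta_star_m, eps_norm_m, gamma):
    """RMS beam size sigma* = sqrt(eps_geo * beta*), with eps_geo = eps_norm / gamma."""
    eps_geo = eps_norm_m / gamma
    return math.sqrt(eps_geo * beta_star_m)

gamma_3p5tev = 3.5e12 / 0.938e9          # Lorentz factor E / (m_p c^2), ~3700
sigma = ip_beam_size(3.5, 3.75e-6, gamma_3p5tev)  # beta* = 3.5 m, assumed eps_N
print(f"sigma* ~ {sigma*1e6:.0f} um")    # a few tens of microns
```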

The work also involved commissioning the transverse damper – basically, an electrostatic deflector – to subdue instabilities in the nominal bunches as they are ramped to 3.5 TeV.

By 26 June the teams were ready with a new sequence to ramp, squeeze and collapse the separation at the interaction points, bringing three bunches per beam at nominal intensity into collision at 3.5 TeV. With a physics run at an instantaneous luminosity of 5 × 10²⁹ cm⁻² s⁻¹, the integrated luminosity delivered to the experiments since 30 March quickly doubled, rising to more than 30 nb⁻¹. A few days later, on 7 July, the machine ran with seven bunches per beam at nominal intensity and achieved a new luminosity record of 10³⁰ cm⁻² s⁻¹. This is one more step towards the goal for 2010 of 10³² cm⁻² s⁻¹, which will require 800 nominal bunches per beam.
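These luminosity figures are consistent with the standard round-beam estimate L = f n_b N²/(4πσ*²); a sketch in which the revolution frequency, bunch population and spot size are assumed, illustrative values rather than machine settings quoted in the text:

```python
import math

def luminosity(f_rev_hz, n_bunches, protons_per_bunch, sigma_star_cm):
    """Head-on luminosity for round Gaussian beams: L = f n_b N^2 / (4 pi sigma*^2)."""
    return f_rev_hz * n_bunches * protons_per_bunch**2 / (4 * math.pi * sigma_star_cm**2)

# Assumed: LHC revolution frequency ~11245 Hz, 3 bunches of 1e11 p, sigma* ~ 60 um
L = luminosity(11245, 3, 1e11, 6e-3)
print(f"L ~ {L:.1e} cm^-2 s^-1")  # order 10^29, as quoted in the text
```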

• Sign up to follow CERN’s Twitter feed for the latest LHC news. See http://twitter.com/cern/.

First LHC results aired in Hamburg

“Physics at the LHC 2010”, which took place at DESY in Hamburg on 7–12 June, was the first large conference to discuss 7 TeV collision data from the LHC. Covering all fields of LHC physics, it attracted 270 participants, among them many young postdocs and students.

On the opening day, Steve Myers, CERN’s director for accelerators and technology, gave an overview of the LHC’s status and the steps required to increase the luminosity. The spokespersons of the four big experiments, Fabiola Gianotti (ATLAS), Guido Tonelli (CMS), Jürgen Schukraft (ALICE) and Andrei Golutvin (LHCb), summarized the commissioning of their experiments and the progress in understanding the detectors, and also flashed up the first physics results.

The main message from these presentations was that the LHC is progressing well and that the experiments are well prepared. Data-taking is going smoothly, triggers and reconstruction are working well and detectors are rapidly being understood. Data processing on the LHC Computing Grid is also performing as expected.

There was special emphasis at the conference on the operation and performance of detectors and in the afternoon sessions young researchers reported on experiences in all of the experiments. The reports showed that many design performances have already been achieved or are within close reach. One by-product of understanding the detectors is a “detector tomography”, which has been performed using mainly photon conversions; this has allowed several shortcomings of the detector simulations to be identified and removed.

The pay-off for the years of hard work that led to this excellent knowledge of the detectors has been a quick turnaround time for physics results. After only a few weeks of high-energy data-taking at 7 TeV in the centre of mass, with an integrated luminosity of about 16 nb⁻¹ delivered to each experiment, all four collaborations have rediscovered almost the full Standard Model particle spectrum – except for the top quark, which is just around the corner.

Among the first LHC physics highlights are the observations of W and Z bosons, and of high-p_T jets. In several presentations the audience was reminded of how long the community waited for single weak bosons to be produced in the early days of the Spp̄S. Now, dozens of W and Z bosons have already been reported by ATLAS and CMS in different decay channels. However, there is still a long way to go to match the excellent work done in the electroweak sector by the experiments at Fermilab’s Tevatron.

The political support for the LHC in Germany was touched upon on the third day of the conference. In their messages, Georg Schütte, state secretary in the German Ministry for Education and Research, and Bernd Reinert, state secretary for science and research of the state of Hamburg, expressed the keen interest of the funding bodies for further support and exploitation of the LHC.

Looking at the prospects from the scientific point of view, Mike Lamont of CERN sketched the plans for the LHC and emphasized the goal of collecting 1 fb⁻¹ of proton-collision data per experiment at 7 TeV before the end of 2011 (plus two heavy-ion runs). With this integrated luminosity, the LHC will already compete with the Tevatron in a number of fields. It would be sensitive to W’ and Z’ bosons with masses up to 1.9 TeV and 1.5 TeV, respectively, and low-mass supersymmetry would also be in reach. However, the Higgs – if this is indeed nature’s choice – will most likely take longer to be discovered.

The last day of the conference was dedicated to overview talks from other fields (astroparticle physics, dark-matter physics) and concluded with an excellent experimental summary by CERN’s Peter Jenni and a visionary overview of theory by David Gross, the 2004 Nobel laureate in physics. Gross reflected on 20 predictions made in 1993 – a good fraction of which have already come true. There is reason to hope that at least a few others (among them the discoveries of the Higgs, supersymmetry and the origin of dark matter, and the transformation of string theory into a real predictive theory) will also come true. There are exciting times ahead.

Two-orbit energy recovery linac operates at Novosibirsk free-electron laser facility

Over the past 30 years, the Budker Institute of Nuclear Physics in Novosibirsk has developed many free-electron lasers (FELs). The most recent one, which has been in operation since 2003, is a continuous-wave terahertz FEL based on a single-orbit energy-recovery linac (ERL), which is the world’s most intense radiation source at terahertz wavelengths. The laboratory is now making progress in constructing a four-orbit 40 MeV electron ERL to generate radiation in the range 5–250 μm. Already operating with two orbits, this is the world’s first multiturn ERL.

FELs provide coherent radiation in the wavelength range from 0.14 nm to 1 mm. They use the phenomenon of stimulated radiation from relativistic electrons moving in an undulator – a special magnet that creates a periodic alternating field such that the electron trajectory remains close to a straight line (the undulator axis). Travelling through an undulator, electrons amplify a collinear electromagnetic wave if the latter has the wavelength λ = d/(2γ²), where d is the undulator period and γ is the particle’s Lorentz factor (its total energy divided by its rest energy).
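For the 40 MeV Novosibirsk beam, the resonance condition above gives wavelengths of a few microns; a sketch, in which the undulator period is an assumed illustrative value:

```python
def fel_wavelength(undulator_period_m, beam_energy_mev, m_e_mev=0.511):
    """Simplified FEL resonance wavelength lambda = d / (2 gamma^2),
    i.e. the weak-undulator limit of lambda = d (1 + K^2/2) / (2 gamma^2)."""
    gamma = beam_energy_mev / m_e_mev
    return undulator_period_m / (2 * gamma**2)

# 40 MeV electrons with an assumed 6 cm undulator period
lam = fel_wavelength(0.06, 40.0)
print(f"lambda ~ {lam*1e6:.1f} um")  # a few microns; lower energies give longer wavelengths
```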

Unfortunately, the maximum efficiency of an FEL – the fraction of the electron-beam power converted into light – is only about 1%, which makes energy recovery a desirable feature. The simplest realization of energy recovery for an FEL is to install it in a straight section of a storage ring. Such storage-ring FEL facilities exist, but the power of their radiation does not exceed a few watts. The intrinsic limitation on the power is caused by multiple interactions of the same electrons with the light, which increase the energy spread of the beam. To achieve high light power it is better to use a fresh beam each time, which is just what ERLs can do.

The Novosibirsk ERL has a rather complicated magnetic system, which makes use of a common accelerating structure (figure 1). This differs from other ERL-based FEL facilities in that it uses low-frequency (180 MHz) non-superconducting RF cavities, with continuous-wave operation. The existing terahertz FEL uses one orbit, which lies in the vertical plane. This FEL generates coherent radiation, tunable in the range 120–240 μm. It produces a continuous train of 40–100 ps pulses at a repetition rate of 5.6–22.5 MHz. The maximum average output power is 500 W, with a peak power of more than 1 MW. The minimum measured linewidth is 0.3%, which is close to the Fourier-transform limit. A beamline directs the radiation from the FEL in the accelerator hall to the user hall. It is filled with dry nitrogen and separated from the accelerator vacuum by a diamond window, and from the air by polyethylene windows. Radiation is delivered to six stations, two of which are used for the measurement of radiation parameters, and the other four by users, typically biologists, chemists and physicists.

The other four orbits of the final ERL lie in the horizontal plane. The beam is directed to these orbits by switching on two round magnets. The electrons will pass through the RF cavities four times, to reach 40 MeV. After the fourth orbit the beam will be used in an FEL, before being decelerated four times. A bypass with another FEL is installed at the second orbit (20 MeV). When the bypass magnets are switched on, the beam passes through this FEL. The length of the bypass has been chosen to provide the delay necessary in this case to give deceleration on the third pass through the accelerating cavities.

Two of the four horizontal orbits were assembled and commissioned in 2008. The electron beam was accelerated twice and then decelerated down to the low injection energy, successfully demonstrating the world’s first multiorbit ERL operation. The first lasing of the FEL on the bypass was achieved in 2009, providing radiation in the wavelength range 40–80 μm. At first a significant (several per cent) increase in beam loss occurred during lasing. Sextupole correctors were therefore installed in some of the quadrupoles to make the 180° bends achromatic to second order, which increased the energy acceptance for the reused electron beam. The output power is about 0.5 kW at an ERL average current of 9 mA. The output of this new FEL is near 70 μm, so this power is also a world record for that wavelength range.
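The case for energy recovery can be seen from the numbers above: 9 mA circulating at the 20 MeV bypass energy corresponds to 180 kW of beam power for roughly 0.5 kW of light, so without recovery the RF system would have to supply that full power continuously. A quick check:

```python
def beam_power_kw(current_ma, energy_mev):
    """Continuous-wave electron-beam power P = I * V, returned in kW."""
    return current_ma * 1e-3 * energy_mev * 1e6 / 1e3

p_beam = beam_power_kw(9, 20)    # 180 kW circulating beam power
efficiency = 0.5 / p_beam        # ~0.3% of beam power extracted as light
print(f"{p_beam:.0f} kW beam, {100*efficiency:.2f}% converted to radiation")
```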

The beamline to deliver radiation from the new FEL to existing user stations has been assembled and commissioned. Thus, the world’s first two-orbit ERL is now operating for a far infrared FEL. In the meantime, the assembly of the third and fourth ERL orbits is in progress.

X-rays reveal missing matter in hot gas

X-ray observations suggest that about half of the ordinary, baryonic matter in the universe is in the form of hot, diffuse gas. The detection of absorption lines from the Sculptor Wall by the two leading European and American X-ray satellites is the strongest evidence yet that the “missing matter” in the nearby universe is in the form of diffuse gas, located in the web of large-scale structures.

The composition of the universe has been precisely determined in the past decade by cosmological studies, in particular by the analysis of the fluctuations in the map of the cosmic microwave background radiation (CERN Courier May 2008 p8). The matter–energy content of today’s nearby universe is dominated by dark energy and dark matter, with less than 5% in the form of ordinary, baryonic matter. Observable matter in the form of stars and gas inside galaxies accounts for only about one half of the nucleons (protons and neutrons) expected from cosmology. The other half remains elusive.

The vast amount of hot, X-ray emitting gas that pervades clusters of galaxies often contains several times the mass of the actual galaxies (CERN Courier July 2003 p13). This suggests that the “missing matter” could be of similar nature, possibly slightly cooler (105–107 K) and less dense so that it remains almost undetectable. An alternative to a direct detection of such a warm–hot intergalactic medium (WHIM) is to search for the absorption of X-rays from a background source shining through this gas. A strong claim for the detection of absorption lines that could be attributed to the WHIM on the line of sight to the bright blazar Markarian 421 was published five years ago (CERN Courier March 2005 p13). However, as Taotao Fang – the lead author of the new study – points out, this earlier work was the subject of quite some debate. The main issue is that the location of the intervening gas was determined through a blind search. Therefore, the quite high significance of the X-ray absorption lines is reduced by the fact that the position of these lines was not known a priori and that they could appear at any redshift between the background source and the Earth.
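The penalty incurred by a blind search can be illustrated with a simple trials-factor correction, p_global ≈ 1 − (1 − p_local)^N for N independent search bins; a sketch, where the bin count is purely illustrative:

```python
def global_p_value(p_local, n_trials):
    """Trials-factor ('look-elsewhere') correction for a blind search
    over n_trials independent bins: p_global = 1 - (1 - p_local)^n."""
    return 1.0 - (1.0 - p_local)**n_trials

# A locally impressive p = 0.001 looks far weaker after 100 independent trials
print(f"{global_p_value(0.001, 100):.3f}")  # ~0.095
```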

The new study by Fang of the University of California, Irvine, and colleagues is more robust because it searches for absorption at the position of a known foreground structure that is likely to have associated WHIM.

The team focused on a nearby filamentary structure – the Sculptor Wall – containing thousands of galaxies and chose a suitable background blazar called H2356-309 as the source of X-rays. Blazars are a type of active galaxy with a relativistic jet that points towards Earth and produces a featureless radiation spectrum. This makes it easier to detect feeble absorption lines from atoms along the line of sight. Long observations of H2356-309 with NASA’s Chandra X-ray observatory and ESA’s XMM-Newton satellite allowed Fang and collaborators to obtain a significant (4 σ) detection of an absorption line originating from highly ionized oxygen (O VII) at a redshift coinciding with the distance of the Sculptor Wall and consistent with the predicted temperature and density of the WHIM.

This detection by two independent satellites is the best evidence yet that the missing baryons are in the form of WHIM. The detection in the Sculptor Wall, stretching across tens of millions of light-years, suggests that the WHIM follows the filamentary, large-scale structure of the universe outlined by the distribution of galaxies and dark matter (CERN Courier September 2007 p11).

Workshop looks deep into the proton and QCD

The International workshop on Deep Inelastic Scattering and Related Subjects began as a forum for discussing results on deep inelastic scattering (DIS) from the electron–proton collider, HERA. However, it has quickly become successful at bringing together theorists and experimentalists to discuss results from all collider experiments, both in terms of the latest developments in measurements of the proton structure and in QCD dynamics in general. This year the brand-new measurements of inclusive properties of proton–proton interactions at the LHC found a natural niche for discussion in the 18th workshop, DIS 2010, held in Florence on 19–23 April.

Volcanic disruptions

The cloud of volcanic ash present over most of Europe on the weekend before the workshop caused many flight cancellations and around 140 participants were unable to reach Florence in person; notably there were almost no participants from the UK or the US. On the other hand, more than 200 participants from mainland Europe embarked on long and often adventurous journeys to reach the conference site, a 16th-century cloister in the old part of the city. Owing to the late arrivals, the first day of plenary talks started a little later than planned – immediately with a coffee break, followed by an introduction from the director of INFN Florence, Pier Andrea Mandò.

The programme continued with a full agenda of plenary talks that set the scene and introduced a wealth of experimental results and recent developments in theory. Monica Turcato of Hamburg University and Katja Krueger of Heidelberg University presented the highlights from the ZEUS and H1 experiments at HERA. Horst Fischer of Albert-Ludwigs-Universität Freiburg reviewed the results on spin from all experiments. Thomas Gehrmann of Universität Zürich and Stefano Forte of Università di Milano reported on the recent progress in perturbative QCD. The session ended with the highlights from the ATLAS and CMS experiments at the LHC, with Thorsten Wengler of Manchester University and Ferenc Sikler of KFKI RMKI, Budapest, having the honour of showing the first published results on charged-particle spectra at 900 GeV and 2.36 TeV, as well as the first preliminary distributions at 7 TeV.

The opening day ended with a welcome cocktail, during which the conveners of the seven parallel sessions set a plan for installing EVO videoconferencing facilities to allow remote participation for those unable to get there, and reshuffled their programmes. Paul Laycock of Liverpool University was appointed convener of the Future of DIS working group “on the fly”, so relieving the organizers of a difficult situation.

The following two and a half days were dedicated to parallel sessions, which were held in the cloister’s painted rooms and library. The working groups covered a broad programme: parton densities; small-x, diffraction and vector mesons; QCD and final states; heavy flavours; electroweak physics and searches; spin physics; and the future of DIS.

The final two days of the conference began early, at 8.00 a.m., with plenary talks by US speakers over EVO. These included reports on: the rich physics of the CDF and DØ experiments at the Tevatron, by Massimo Casarsa and Qizhong Li of Fermilab; heavy-ion physics at RHIC, by Bill Christie of Brookhaven; and DIS results at Jefferson Lab, by Dave Gaskell. The plenary session on Friday had CERN’s Mike Lamont as a special guest, who reported on the status of the LHC accelerator and its performance. The conveners of the seven working groups summarized their sessions, splitting their reports into theoretical and experimental parts. Halina Abramowicz of Tel Aviv University concluded the workshop, pointing out how the different topics such as parton densities, low-x, diffraction, jets, heavy flavours and spin physics are all tools for improving understanding of the structure of the proton and its implications for the LHC.

Bright horizons

The combined results of ZEUS and H1 on neutral-current and charged-current cross-sections, used as input to fits of the parton distributions in the proton, have led to an incredible accuracy (1–2%), which allows a 5% uncertainty in the prediction of W and Z production at central rapidities at the LHC. The recent inclusion in the fits of combined data on charm reveals that the QCD evolution is sensitive to the treatment of heavy flavours and that the choice of the charm mass plays an important role in the predictions for the LHC. H1 and ZEUS are now focusing on extending the precision inclusive measurements to high and low photon virtualities, Q², and to high Bjorken x. Also on the way is the completion of jet and heavy-flavour measurements based on the full HERA statistics (0.5 fb⁻¹ per experiment). Together, these will provide stringent tests of QCD at all Q² and will further constrain the proton parton distributions.

Meanwhile, CDF and DØ now have 7 fb⁻¹ each on tape and are sensitive to processes with cross-sections below 1 pb. Such a harvest provides a number of outstanding electroweak and QCD results: the running coupling α_s has been measured at the highest p_T ever, and the combined W-mass measurement from the Tevatron is more precise than the direct measurements at LEP. The combined Tevatron data exclude a Standard Model Higgs in the range 163 < M_Higgs < 166 GeV at 95% confidence level. More results are on the horizon, with 10 fb⁻¹ expected by the end of 2011.

The newborn LHC experiments are performing well and are taking their first look at the particle spectra provided by nature at previously unexplored centre-of-mass energies. A few weeks after the first collisions, distributions at 7 TeV were already available. Figure 1 shows the multiplicity of charged particles as a function of the centre-of-mass energy from different measurements, including from ALICE at 7 TeV. Figure 2, where the average transverse momentum as a function of the charged-particle multiplicity of ATLAS data at 7 TeV is compared with various Monte Carlos (MCs), seems to point to the inadequacy of the models at this energy.

With increasing centre-of-mass energy, the momentum fraction of the partons can be small and the probability of multiparton interactions increases. Looking in detail at the event topology with the available LHC data is already informative: comparing the forward energy flow from minimum-bias events at different √s provides a new, independent constraint on the underlying event models. For example, figure 3 shows the ratio of energy flow measured by CMS at 7 TeV and 0.9 TeV as a function of the rapidity, compared with the Pythia MC.

Exclusive reactions – mainly at HERMES, Jefferson Lab and RHIC – allow the extraction of the generalized parton distributions. This was defined at the workshop as “a major new direction in hadron physics”, aimed at the 3D mapping of the proton and, more generally, of the nucleon.

In all, with results from Belle at KEK, BaBar at SLAC, COMPASS at CERN – as well as from Jefferson Lab, RHIC, the Tevatron, HERA and the LHC experiments – QCD was seen at work over a range of studies, from e⁺e⁻ annihilation and muon scattering through DIS to heavy ions and up to the energy frontier of the LHC. In this stimulating context, theory is preparing for present and future challenges with the first next-to-next-to-leading order (NNLO) calculations of precision observables and NNLO parton distributions. An example of the interplay between the precision of the available data and the theoretical predictions is given in figure 4, which shows a compilation of all of the α_s measurements presented in the QCD session of the workshop.

The two main future projects, the LHeC electron–proton collider and the EIC electron–ion collider, were discussed extensively in the session on the future of DIS. The interest manifested by 350 or so registrations for the workshop promises a bright future for the field as well as for the DIS workshop series. The next workshop will be held at Jefferson Lab in April 2011 – a site in the US will be the ideal place to discuss future facilities.

• The workshop was organized by the University and INFN Florence, and by the University of Piemonte Orientale. We would like to thank the sponsors: INFN, DESY, CERN, Jefferson Laboratory, Brookhaven National Laboratory and CAEN Viareggio. Special thanks go to our co-organizers Giuseppe Barbagli, Dimitri Colferai and Massimiliano Grazzini, to all of the students and postdocs of our universities who helped out, and to the founder of the workshop series, Aharon Levy.

PAX promotes beams of polarized antiprotons

The physics potential for QCD experiments with high-energy polarized antiprotons is enormous but until now many experiments have been impossible owing to the lack of a high-luminosity beam. This situation could change with the advent of a stored beam of polarized antiprotons and the realization of a double-polarized, high-luminosity antiproton–proton collider. The collaboration for Polarized Antiproton Experiments (PAX) has already formulated the physics programme that would be possible with such a facility (PAX collaboration 2006). Following studies with proton beams, it is now planning to make the first measurements with polarized beams at CERN’s Antiproton Decelerator (AD), which is currently the world’s only stand-alone antiproton storage facility.

The experimental approach adopted by the PAX collaboration to produce a beam of polarized antiprotons is based on spin filtering, a technique that exploits the spin dependence of the strong interaction (Oellers et al. 2009). The total cross-section, σ, depends on the relative orientation of the spins of the colliding particles, i.e. σ(↑↑)≠σ(↑↓). The method was shown to work in the 1990s with protons in a 23 MeV beam stored in the Heidelberg Test Storage Ring, which passed through a polarized hydrogen gas target (Rathmann et al. 1993).
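In a simple two-state model, the spin dependence σ(↑↑) ≠ σ(↑↓) filters the beam: the two spin populations are depleted at different rates and the beam polarization grows as P(t) = tanh(t/τ), with 1/τ set by the spin-dependent cross-section, the target polarization, the target areal density and the revolution frequency. A sketch, with all parameter values purely illustrative:

```python
import math

def filtered_polarization(t_s, sigma1_cm2, target_pol, areal_density_cm2, f_rev_hz):
    """Beam polarization from spin filtering in a two-state model:
    P(t) = tanh(t / tau), with 1/tau = sigma1 * P_target * d_t * f_rev."""
    rate = sigma1_cm2 * target_pol * areal_density_cm2 * f_rev_hz
    return math.tanh(rate * t_s)

# Illustrative values: sigma1 ~ 50 mb, P_T ~ 0.8, d_t ~ 5e13 cm^-2, f ~ 1 MHz
p_beam = filtered_polarization(2 * 3600, 50e-27, 0.8, 5e13, 1e6)
print(f"P after two hours ~ {p_beam:.3f}")  # build-up is slow: of order 1% per hour here
```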

In contrast to the proton–proton system, the experimental basis for predicting the build-up of polarization in a stored antiproton beam by spin filtering is practically nonexistent. It is therefore a high priority to perform a series of dedicated spin-filtering experiments using stored antiprotons together with a polarized target, which the PAX collaboration is aiming to undertake at the AD ring at CERN (PAX collaboration 2009b). Figure 1 illustrates schematically the proposed experimental set-up.

Expected build-up

The AD is a unique facility at which stored antiprotons in the appropriate energy range are available with characteristics that meet the requirements for the first antiproton polarization build-up studies. In 2009, the European Research Council awarded an Advanced Grant to the Jülich group to pursue these studies at the AD. Once an experimental proton–antiproton data base is available, work can begin to design a dedicated polarized antiproton ring.

The Jülich group has made predictions for the spin-dependent cross-sections for the expected build-up of polarization in an antiproton beam (PAX collaboration 2009). In addition, a group from the Budker Institute for Nuclear Physics, Novosibirsk, has recently generated estimates on the basis of a Nijmegen proton–antiproton potential. These indicate that antiproton beam polarizations of 0.15–0.20 (spin filtering with transverse target orientation) and 0.35–0.40 (longitudinal) might be expected (Dmitriev et al. 2010).

For efficient commissioning of the equipment required for measurements at the AD, the PAX collaboration is preparing polarization build-up studies using stored protons at the Cooler Synchrotron (COSY) at Jülich (PAX collaboration 2009a). Because the spin-dependence of the proton–proton interaction is well known at energies where electron cooling is available at COSY (up to 130 MeV), details of the polarization build-up process can also be studied.

Beautiful techniques

The polarized internal target (figure 2), consisting of an atomic beam source and a Breit-Rabi type target polarimeter, has been successfully operated with an openable storage cell. Such an openable cell constitutes an important development for the investigations with stored antiprotons at the AD: when the beam is injected into the AD with a momentum of around 3.5 GeV/c, any restriction of the machine acceptance reduces the number of stored antiprotons during the spin-filtering studies. Only after cooling and deceleration to the experimental energies of interest, around 50–500 MeV, would the storage cell be closed.

The storage-cell technique works beautifully, as figure 3 shows, with the target polarization unaffected by the opening and closing of the storage cell (Barschel 2010). This constitutes a major milestone because for the first time, both high polarization and density have been achieved with an openable storage cell. While this is crucial for investigations of the spin-dependence of the proton–antiproton interaction at the AD, many other experiments employing internal storage-cell targets can also benefit from this development.

The quadrupole magnets for the low-β insertion of PAX at COSY were installed during the summer shutdown in 2009. During beam-time in early 2010, the β-functions at the location of the PAX quadrupoles were measured for the non-zero dispersion setting, by varying the magnet currents. The calculated and measured values at the location of the quadrupoles match nicely, as figure 4 shows. The model calculations suggest that β-functions of βx around 0.38 m and βy around 0.36 m were reached. The measured beam lifetimes at COSY did not depend on whether the low-β section was powered on or not. More accurate values of βx and βy at the centre of the storage cell will be determined once the target chamber has been installed later this year.

In the second half of 2010, the PAX collaboration would like to perform machine studies at COSY to obtain a better insight into the actual limitations of the beam lifetime. The plan is then to carry out the first spin-filtering measurements at COSY with transversely polarized protons early in 2011.

The installation at the AD will consist of a set of additional quadrupole magnets, the internal target and a detection system surrounding the openable storage cell (figure 5). The PAX proposal for the AD is currently awaiting approval (PAX collaboration 2009b). It would be advantageous if the six additional quadrupole magnets could be installed without modification of the current AD lattice (i.e. while the central AD quadrupole magnet in that section remains in place). Subsequent machine studies to commission the low-β section would ensure that the proposed experimental set-up for the spin-filtering studies is compatible with the other physics pursued at the AD. Once satisfactory operation of the equipment has been achieved, the first measurements of the polarization build-up in proton–antiproton scattering will be possible. A Siberian snake will need to be installed at a later stage, as figure 1 indicates, and the AD electron cooler upgraded to provide cooled antiproton beams with an energy of up to 500 MeV.

QCD scattering: from DGLAP to BFKL

Most particle physicists will be familiar with two famous abbreviations, DGLAP and BFKL, which are synonymous with calculations of high-energy, strong-interaction scattering processes, in particular nowadays at HERA, the Tevatron and, most recently, the LHC. The Dokshitzer-Gribov-Lipatov-Altarelli-Parisi (DGLAP) equation and the Balitsky-Fadin-Kuraev-Lipatov (BFKL) equation together form the basis of current understanding of high-energy scattering in quantum chromodynamics (QCD), the theory of strong interactions. The celebration this year of the 70th birthday of Lev Lipatov, whose name appears as the common factor, provides a good occasion to look back at some of the work that led to the two equations and its roots in the theoretical particle physics of the 1960s.

Quantum field theory (QFT) lies at the heart of QCD. Fifty years ago, however, theoreticians were generally disappointed in their attempts to apply QFT to strong interactions. They began to develop methods to circumvent traditional QFT by studying the unitarity and analyticity constraints on scattering amplitudes, and extending Tullio Regge’s ideas on complex angular momenta to relativistic theory. It was around this time that the group in Leningrad led by Vladimir Gribov, which included Lipatov, began to take a lead in these studies.

Quantum electrodynamics (QED) provided the theoretical laboratory to check the new ideas of particle “reggeization”. In several pioneering papers Gribov, Lipatov and co-authors developed the leading-logarithm approximation to processes at high energies; this later played a key role in perturbative QCD for strong interactions (Gorshkov et al. 1966). Using QED as an example, they demonstrated that QFT leads to a total cross-section that does not decrease with energy – the first example of what is known as Pomeron exchange. Moreover, they checked and confirmed the main features of Reggeon field theory in the particular case of QED.

By the end of the 1960s, experiments at SLAC had revealed Bjorken scaling in deep inelastic lepton-hadron scattering. This led Richard Feynman and James Bjorken to introduce nucleon constituents – partons – that later turned out to be nothing other than quarks, antiquarks and gluons. Gribov became interested in finding out if Bjorken scaling could be reproduced in QFT. As examples he studied both a fermion theory with a pseudoscalar coupling and QED, in the kinematic conditions where there is a large momentum transfer, Q², to the fermion. The task was to select and sum all leading Feynman diagrams that give rise to the logarithmically enhanced (α log Q²)ⁿ contributions to the cross-section, at fixed values of the Bjorken variable x = Q²/(s + Q²) between zero and unity, where s is the invariant energy of the reaction.
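In modern notation, the leading-logarithm programme described above amounts to summing a series of the following schematic form (the coefficient functions cₙ here are purely illustrative placeholders):

```latex
% Bjorken variable (masses neglected) and the leading-logarithm
% series summed in the Gribov-Lipatov programme (schematic form):
x = \frac{Q^{2}}{s + Q^{2}}, \qquad 0 < x < 1,
\qquad
F(x, Q^{2}) \;=\; \sum_{n} c_{n}(x)\,\bigl(\alpha \ln Q^{2}\bigr)^{n}
```

Each power of the coupling α is accompanied by a logarithm of the large momentum transfer, so all orders contribute comparably and must be resummed rather than truncated.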

At some point Lipatov joined Gribov in the project and together they studied not only deep inelastic scattering but also the inclusive annihilation of e⁺e⁻ to a particle, h, in two field-theoretical models, one of which was QED. They showed that in a renormalizable QFT, the structure functions must violate Bjorken scaling (Gribov and Lipatov 1971). They obtained relations between structure functions that describe deep inelastic scattering and those that describe jet fragmentation in e⁺e⁻ annihilation – the Gribov-Lipatov reciprocity relations. It is interesting to note that this work appeared at a time before experiments had either detected any violation of Bjorken scaling or observed any rise with momentum transfer of the transverse momenta in “hard” hadronic reactions, as would follow from a renormalizable field theory. This paradox led to continuous and sometimes heated discussions in the new Theory Division of the Leningrad (now Petersburg) Nuclear Physics Institute (PNPI) in Gatchina.

Somewhat later, Lipatov reformulated the Gribov-Lipatov results for QED in the form of evolution equations for parton densities (Lipatov 1974). This differed from the real thing, QCD, only by colour factors and by the absence of the gluon-to-gluon-splitting kernel, which was later provided independently by Yuri Dokshitzer at PNPI, and by Guido Altarelli and Giorgio Parisi, then at the École Normale Supérieure and IHES, Bures-sur-Yvette, respectively (Dokshitzer 1977, Altarelli and Parisi 1977). Today the Dokshitzer-Gribov-Lipatov-Altarelli-Parisi (DGLAP) evolution equations are the basis for all of the phenomenological approaches that are used to describe hadron interactions at short distances.
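Schematically, and in the generic textbook form rather than as a quotation from the original papers, the DGLAP evolution of a quark density q(x, Q²) coupled to the gluon density g(x, Q²) reads:

```latex
% DGLAP evolution (schematic): the change of the quark density with
% the logarithm of virtuality is driven by the splitting functions
% P_qq (quark emits a gluon) and P_qg (gluon splits into a quark pair).
\frac{\partial q(x, Q^{2})}{\partial \ln Q^{2}}
  = \frac{\alpha_{s}(Q^{2})}{2\pi}
    \int_{x}^{1} \frac{dz}{z}
    \left[ P_{qq}(z)\, q\!\left(\frac{x}{z}, Q^{2}\right)
         + P_{qg}(z)\, g\!\left(\frac{x}{z}, Q^{2}\right) \right]
```

An analogous equation governs the gluon density; the gluon-to-gluon kernel appearing there is precisely the piece supplied by Dokshitzer, Altarelli and Parisi.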

The more general evolution equation for quasi-partonic operators that Lipatov and his co-authors obtained allowed them to consider more complicated reactions, including high-twist operators and polarization phenomena in hard hadronic processes.

Lipatov went on to show that the gauge vector boson in Yang-Mills theory is “reggeized”: with radiative corrections included, the vector boson becomes a moving pole in the complex angular momentum plane near j=1. In QCD, however, this pole is not directly observable by itself because it corresponds to colour exchange. More meaningful is an exchange of two or more reggeized gluons, which leads to “colourless” exchange in the t-channel, either with vacuum quantum numbers (when it is called a Pomeron) or non-vacuum ones (when it is called an “odderon”). Lipatov and his collaborators showed that the Pomeron corresponds not to a pole, but to a cut in the plane of complex angular momentum.

A different approach

The case of high-energy scattering required a different approach. In this case, in contrast to the DGLAP approach – which sums up higher-order αₛ contributions enhanced by the logarithm of virtuality, ln Q² – contributions enhanced by the logarithm of energy, ln s, or by the logarithm of a small momentum fraction, x, carried by gluons, become important. The leading-log contributions of the type (αₛ ln(1/x))ⁿ are summed up by the famous Balitsky-Fadin-Kuraev-Lipatov (BFKL) equation (Kuraev et al. 1977, Balitsky and Lipatov 1978). Compared with DGLAP, this is a more complicated problem because the BFKL equation actually includes contributions from operators of higher twists.
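At leading order this resummation predicts a power-like growth of the gluon density towards small x (the numerical value below assumes a fixed coupling αₛ ≈ 0.2, an illustrative choice only):

```latex
% Leading-order BFKL growth of the gluon density at small x,
% with N_c = 3 colours and a fixed coupling alpha_s ~ 0.2:
xg(x, Q^{2}) \;\sim\; x^{-\omega_{0}},
\qquad
\omega_{0} = \frac{4 N_{c}\,\alpha_{s}}{\pi}\,\ln 2
           = \frac{12\,\alpha_{s}}{\pi}\,\ln 2 \;\approx\; 0.5
```

This steep rise is considerably tamed by the next-to-leading-order corrections discussed below, which is part of why those calculations mattered so much.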

In its general form the BFKL equation describes not only the high-energy behaviour of cross-sections but also the amplitudes at non-zero momentum transfer. Lipatov discovered beautiful symmetries in this equation, which enabled him to find solutions in terms of the conformal-symmetric eigenfunctions. This completed the construction of the “bare Pomeron in QCD”, a fundamental entity of high-energy physics (Lipatov 1986). An interesting new property of this bare Pomeron (which was not known in the old reggeon field theory) is the diffusion of the emitted particles in ln kₜ space.

Later, in the 1990s, Lipatov together with Victor Fadin calculated the next-to-leading-order corrections to the BFKL equation, obtaining the “BFKL Pomeron in the next-to-leading approximation” (Fadin and Lipatov 1998). Independently, this was also done by Marcello Ciafaloni and Gianni Camici in Florence (Ciafaloni and Camici 1998). Lipatov also studied higher-order amplitudes with an arbitrary number of gluons exchanged in the t-channel and, in particular, described odderon exchange in perturbative QCD. The significance of this work was, however, much greater. It led to the discovery of the connection between high-energy scattering and the exactly solvable two-dimensional field-theoretical models (Lipatov 1994).

More recently Lipatov has taken these ideas into the hot new field in theoretical physics: the anti-de Sitter/conformal-field-theory correspondence (AdS/CFT) – a hypothesis put forward by Juan Maldacena in 1997. This states that there is a correspondence – a duality – between the description of the maximally supersymmetric N=4 modification of QCD from the standard field-theory side and, from the “gravity” side, the spectrum of a string moving in a peculiar curved anti-de Sitter background – a seemingly unrelated problem. However, Lipatov’s experience and deep understanding of re-summed perturbation theory have enabled him to move quickly into this new territory, where he has developed and tested new ideas, considering first the BFKL and DGLAP equations in the N=4 theory and computing the anomalous dimensions of various operators. The high symmetry of this theory, in contrast to standard QCD, allows calculations to be made at unprecedentedly high orders and the results then compared with the “dual” predictions of string theory. It also facilitates finding the integrable structures in the theory (Lipatov 2009).

In this work, Lipatov has collaborated with many people, including Vitaly Velizhanin, Alexander Kotikov, Jochen Bartels, Matthias Staudacher and others. Their work is establishing the duality hypothesis almost beyond doubt. This opens a new horizon in studying QFT at strong couplings – something that no one would have dreamt of 50 years ago.

• The author thanks Victor Fadin and Mikhail Ryskin for helpful comments.

Lake Views: This World and the Universe

by Steven Weinberg, Belknap Press/Harvard University Press. Hardback ISBN 9780674035157, $25.95.

This book collects essays and book reviews written by Steven Weinberg between 2000 and 2008. They were written in his study at home, from where the author can see Lake Austin. In 25 chapters he covers an impressive range of subjects, from military history to his review of Richard Dawkins’ book The God Delusion, passing through fundamental physics, missile defence and the boycott of Israeli academics, and even offering some advice to young students and postdoctoral fellows.

As with previous books, one is captivated by the depth and breadth of his knowledge, the elegance of his prose and his intellectual honesty. Each chapter opens with a preamble in which he explains the origin of the article – whether it was commissioned by a journal or delivered as an address to a learned society – and closes with an afterword revealing some of the reactions his views have elicited.

An important part of the book is dedicated to the current theory of multiverses and string landscapes. To a certain extent all of these developments were inspired by his remarkable work in the late 1980s (explained in the book) where he used anthropic reasoning to understand (if not explain) the possible value of the cosmological constant, also known as the dark energy of the universe. It is quite remarkable that the values derived from the observations carried out by groups studying galactic redshifts, as well as from the Wilkinson Microwave Anisotropy Probe satellite, are in good agreement with the values favoured by his analysis. The sections of the book describing this work, dealing with Einstein’s famous blunder, are a masterpiece of insight and deep mastery of physics.

In other chapters, covering the humanities or religion, he takes his usual “rationalist, reductionist, realist and devoutly secular” viewpoint. Unlike Dawkins, his discourse is not that of a “born-again atheist” (my quotes); rather, he explains his point of view in a relaxed manner not devoid of humour. The effect of these chapters is probably much stronger in US society, where religion plays a far bigger role than it does in Europe, where a large number of scientists, humanists, politicians and ordinary citizens would readily agree with his discourse. He raises provocation to the level of an art.

Another theme addressed in these essays is the ongoing discussion with philosophers and theologians over whether science explains only the “how” and not the “why” of things. He makes it very clear that the laws of nature have no purpose, and that the only legitimate purpose of science is to understand the basic laws that rule the universe. Finality is not the aim of science, but that does not make science a lesser element in the human endeavour to understand the universe we live in.

Weinberg has not lost his punch – far from it. This book is thought-provoking, informative, challenging and fun to read. A single fault: it is too short.
