Successful test of proton–ion collisions in LHCb

Unlike the other three large experiments at the LHC, LHCb did not participate in the heavy-ion runs in 2010 and 2011. This was because the forward region covered by the experiment, which corresponds to angles below 20° with respect to the beam axis, has been optimized for the study of heavy quarks in the proton–proton (pp) collisions that the LHC provides for most of the year. In the usual heavy-ion environment, i.e. the collisions of two lead-ion beams (PbPb), the density of tracks in this region would be so high that LHCb’s tracking detectors would be saturated with hits. However, the decision to run with proton–lead-ion (pPb) collisions for the next heavy-ion run opened the door for LHCb to participate, as the track occupancies would be more similar to those of the usual pp running.

In the recent test of this mode of operating the LHC, collisions were provided for a few hours in the early morning of 13 September (Celebrations, challenges and business as usual). The LHCb detector worked perfectly during the test, as the figure shows, and it was exciting for the LHCb team to see this new category of events being recorded. The data were analysed quickly and clean signals were reconstructed for decays of KS and Λ particles (long-lived hadrons containing the strange quark). The signals were found to be even cleaner than equivalent ones extracted from pp data. This was to some extent expected, because the luminosity was low during the test run, so there were only single primary pPb interactions, whereas in pp running there may be four or more primary interactions in the same event in LHCb. However, once the signals had been normalized to the number of primary vertices, there still remained an enhancement of a factor of three or so in the pPb data compared with pp.

This is a first indication of the interesting physics that can be studied in the full pPb run, currently scheduled for early in 2013. With its high-precision vertex reconstruction and powerful particle-identification capabilities, the LHCb experiment should provide extra information to complement the measurements from the other experiments. In particular, the production rates of heavy-flavour states such as the J/ψ and Υ, or charmed particles, will be of interest in the forward region.

Using the LHC as a photon collider

The protons and nuclei accelerated by the LHC are surrounded by strong electric and magnetic fields. These fields can be treated as an equivalent flux of photons, making the LHC the world’s most powerful collider not only for protons and lead ions but also for photon–photon and photon–hadron collisions. This is particularly so for beams of multiply charged heavy ions, where the number of photons is enhanced by almost four orders of magnitude compared with singly charged protons (the photon flux is proportional to the square of the ion charge).
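
As a quick back-of-the-envelope check of that factor (our arithmetic, using only the Z² scaling quoted above):

\[ \frac{N_\gamma(\mathrm{Pb})}{N_\gamma(p)} \approx Z^2 = 82^2 \approx 6.7 \times 10^{3}, \]

i.e. close to four orders of magnitude for lead, with Z = 82.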

The ALICE collaboration has recently taken advantage of this effect in a study of coherent photoproduction of J/ψ mesons in lead–lead (PbPb) collisions. The J/ψ is detected through its dimuon decay in the muon arm of the ALICE detector, which also provides the trigger for these events. The relevant collisions typically occur at impact parameters of several tens of femtometres, well beyond the range of the strong force, so the nuclei usually remain intact and continue down the beam pipe. The photonuclear origin of the J/ψ is therefore ensured by requiring that the detector is otherwise devoid of particles, that there is exactly one positive and one negative muon candidate, and that the J/ψ has very low transverse momentum. The appearance of these events (see figure) stands in sharp contrast to central heavy-ion collisions, where thousands of particles are produced.

These interactions carry an interesting message about the partonic substructure of heavy nuclei. Exclusive photoproduction of heavy vector mesons is believed to be a good probe of the nuclear gluon distribution. The cross-section measured in a heavy-ion collision Pb+Pb → Pb+Pb+J/ψ is a convolution of the equivalent photon spectrum with the photonuclear cross-section for γ+Pb → J/ψ+Pb. The latter process can be modelled as the colourless exchange of two gluons.
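
Schematically, and in standard equivalent-photon notation (the explicit formula is ours, not the collaboration’s):

\[ \sigma(\mathrm{Pb+Pb} \to \mathrm{Pb+Pb}+J/\psi) = \int \mathrm{d}\omega\, \frac{\mathrm{d}N_\gamma}{\mathrm{d}\omega}\, \sigma_{\gamma+\mathrm{Pb} \to J/\psi+\mathrm{Pb}}(\omega), \]

where ω is the photon energy and dN_γ/dω is the equivalent photon flux of one of the nuclei.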

At the rapidities (y around 3) studied in ALICE, J/ψ photoproduction is sensitive mainly to the gluon distribution at values of Bjorken-x of about 10⁻². Although the experimental error is rather large, the conclusion from ALICE is that the data favour models that include strong modifications to the nuclear gluon distribution, known as nuclear shadowing.
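
The quoted x value follows from two-body kinematics. For coherent photoproduction at rapidity y, the gluons are probed at (a standard estimate with illustrative numbers, not taken from the paper)

\[ x = \frac{M_{J/\psi}}{\sqrt{s_{NN}}}\, e^{\pm y} \approx \frac{3.1\ \mathrm{GeV}}{2760\ \mathrm{GeV}}\, e^{\pm 3}, \]

which gives x ≈ 2 × 10⁻² for one photon direction (the other sign corresponds to x ≈ 6 × 10⁻⁵).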

ALICE takes a first look at diffraction

Diffractive processes represent more than 25% of the cross-section for inelastic proton–proton collisions at the LHC. So, it is only natural that the ALICE collaboration is interested in a field that started with optical phenomena but which today provides access to non-perturbative QCD processes, at the heart of ALICE’s scientific programme. There are also practical reasons why diffraction cannot be ignored: for instance, when normalizing data to specific event classes such as non-single-diffractive (NSD) or inelastic (INEL), or when measuring precisely the proton–proton inelastic cross-section, which is used by ALICE as input for model calculations to determine the number of nucleon–nucleon binary collisions in heavy-ion collisions.

Diffractive reactions in particle physics are characterized by an exchange that has the quantum numbers of the vacuum – the pomeron – and leaves a “rapidity gap”, devoid of particles. Experimentally, there is no possibility to distinguish large rapidity gaps caused by pomeron exchange from those caused by other colour-neutral exchanges (e.g. secondary Reggeons). The ALICE collaboration therefore used the occurrence of large rapidity gaps as the definition of diffraction (figure 1). Single-diffraction (SD) processes are those that have a large rapidity gap from the leading proton, limited on the other side by the requirement on the diffracted mass, MX < 200 GeV/c²; other inelastic events are considered NSD events. Double-diffraction (DD) processes are defined as NSD events with a gap in pseudorapidity, Δη > 3.
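
The mass cut and the gap width are two sides of the same coin: for single diffraction the gap size is set by the fractional momentum loss ξ = M_X²/s of the proton, via the standard relation

\[ \Delta\eta \simeq \ln\frac{1}{\xi} = \ln\frac{s}{M_X^2}, \]

so at √s = 7 TeV the cut MX < 200 GeV/c² corresponds to gaps of roughly ln(7000²/200²) ≈ 7 units (our illustrative numbers, not figures from the analysis).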

ALICE profited from particularly favourable circumstances in its study of diffraction: a detector sensitive to particles with low transverse momentum, down to about 20 MeV/c; data taken at low luminosity, so that corrections for event overlap in the same proton bunch-crossing are small; and, above all, the presence in the collaboration of Martin Poghosyan, a developer of diffraction models and a former co-worker of the late Aleksei Kaidalov, which gave ALICE privileged access to the details of the Kaidalov-Poghosyan (KP) model.

The challenge was to study diffraction while being unable to observe either the non-diffracted proton or events in which the diffracted system escapes the acceptance of the detector. Nevertheless, the ALICE detectors cover a sufficient range in pseudorapidity (8.8 units, from –3.7 to 5.1, for collisions at the origin of the co-ordinate system) to have ample sensitivity to the SD and DD processes. Two independent observables were identified that are sensitive to diffraction: the ratio of the numbers of SD-like (activity on one side of the detector only) to NSD-like (activity on both sides of the detector) events; and the width distribution of the pseudorapidity gap for events of NSD type.

Obtaining the relative rates of diffractive processes from these two observables required the use of a model, so ALICE chose the KP model to estimate the fraction of unseen events. In practice, the diffracted-mass distribution is the only relatively unknown ingredient – the kinematics of diffractive collisions and the fragmentation of the diffracted system are known with sufficient precision. Experimental data and recent models that include higher-order pomeron terms show that the variation of the diffracted-mass distribution with centre-of-mass energy is slow, which gives confidence that extrapolation to LHC energies does not add a large uncertainty.

The sensitivity to models was studied by considering different forms of the diffracted-mass distribution. The systematic error was obtained from extreme cases, varying the KP model by ±50% at the low-mass threshold and using the Donnachie-Landshoff model as the other limit. A van der Meer scan of the transverse profiles of the beams provided the luminosity, L. The simulation, adjusted using the KP model and the observed relative rates of diffraction, provided the acceptance and efficiency factor, A, which relates the trigger rate, R(t), to the inelastic cross-section, σINEL, through R(t) = A × σINEL × L. Combining the relative rates of diffractive processes with the inelastic cross-sections, ALICE obtained the SD and DD cross-sections at three centre-of-mass energies, √s = 0.9, 2.76 and 7 TeV, as shown in figure 2. (σINEL at √s = 0.9 TeV was not measured by ALICE; instead, σINEL = 52.5 +2.0/–3.3 mb was used.)
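
Rearranged, the relation quoted above gives the measurement principle at a glance (our rearrangement of the formula in the text):

\[ \sigma_{\mathrm{INEL}} = \frac{R(t)}{A\,L}, \]

with the van der Meer scan supplying L, the tuned simulation supplying A, and the measured trigger rate R(t) then fixing the cross-section.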

This first measurement of SD and DD cross-sections at the LHC confirms that these processes evolve only slowly from the centre-of-mass energies of the Intersecting Storage Rings to 7 TeV at the LHC. The analysis by ALICE also shows the importance of including diffraction correctly to describe precisely the acceptance and efficiency of the detector for minimum bias triggers.

Top-quark production gets a boost

The top quark is especially interesting at the LHC because it is the most massive fundamental particle known, suggesting an intimate association with electroweak symmetry breaking and possible new-physics scenarios.

The top quark decays via two channels: t → Wb → lνb or t → Wb → qqb. When a tt pair is created with energy roughly equal to its rest mass, the decay products appear well separated in the detector. With the higher energies at the LHC, however, the pair is often produced with a “boost” in momentum, so the decay products have extra momentum along the directions of the top and antitop, and are found in opposite hemispheres of the detector.

While higher energies allow the experiments at the LHC to probe for new physics as never before, they also bring new challenges. For example, what if the top quark is so boosted that the three jets from the decay t → Wb → qqb merge to the point where they are indistinguishable from each other and appear as one large jet? With the high energy of the LHC, this boosted situation happens quite often and must be accounted for when reconstructing top-quark decays. Analyses involving top quarks or other “boosted objects” at the LHC now include approaches that allow for these effects.
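
A common rule of thumb (generic to boosted-object analyses, not specific to ATLAS) makes the merging condition quantitative: the decay products of a particle of mass m and transverse momentum pT are spread over an angular scale

\[ \Delta R \approx \frac{2m}{p_T}, \]

so a top quark with pT ≈ 700 GeV already fits its three jets inside ΔR ≈ 0.5, the radius of a typical jet cone.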

The special techniques for measuring boosted top quarks are particularly important when searching for new resonances, where a new heavy particle decaying primarily into tt pairs could be observed as a bump in the relevant invariant mass spectrum. The higher the mass of the new particle, the more likely it is that the top-quark decay products will merge in the detector.

ATLAS recently performed searches for tt resonances in final states with one or no leptons. In the former case, the lepton is allowed to be much closer to the b quark than in non-boosted analyses. In the other hemisphere of the detector, a wide massive jet with underlying structure is required. Using these boosted techniques, the mass reach of the search for a new heavy gauge boson increased by nearly 700 GeV.

With the expected energy upgrade of the LHC, the frequency of boosted final states will increase and even more sophisticated methods will be needed to search for physics beyond the Standard Model. The future is certainly boosted.

Updating the strategy for particle physics

On 10–12 September, some 500 physicists attended an open symposium in Krakow for the purpose of updating the European Strategy for Particle Physics, which was adopted by CERN Council in 2006. The meeting provided an opportunity for the global particle-physics community to express views on the scientific objectives of the strategy in light of developments over the past six years. With the aid of a local organizing committee, it was arranged by a preparatory group chaired by Tatsuya Nakada (see Viewpoint Charting the future of European particle physics).

Theorists calculate the route to carbon-12

The triple-alpha reaction rate that produces carbon-12 in stars and other energetic astronomical phenomena has been a tricky subject for nuclear theorists for some time. Initially, Fred Hoyle proposed that there should be a 0+ resonance close to the 3α threshold to explain the observed abundance of carbon-12 in stars, a prediction that was later confirmed experimentally. However, if there is not enough energy in the stellar environment to reach the narrow resonances involved, then a direct three-body capture becomes the favoured path.

In the Nuclear Astrophysics Compilation of Reaction Rates (NACRE), the direct triple-alpha capture rate was extrapolated from the two-step resonant capture, which dominates above about 10⁸ K, to temperatures well below 10⁸ K (C Angulo et al. 1999). However, this estimate has proved inadequate and nuclear theorists began trying to solve the problem more directly.

Recently, a team at Kyushu University in Japan made use of the continuum-discretized coupled-channel (CDCC) method, which expands the full three-body wave function in terms of the continuum states of the two-body subsystem – in this case beryllium-8 (Ogata et al. 2009). This method is challenging in the case of the triple-alpha reaction problem because the charged-particle reaction occurs at large distances and is dominated by Coulomb interactions. The results reflected these challenges, as the predicted rates showed an increase of 20 orders of magnitude when compared with NACRE, and caused the red-giant phase in low- and intermediate-mass stars to disappear in theoretical models of stellar evolution. Additionally, studies of helium ignition in accreting white dwarfs and accreting neutron stars showed that the CDCC rate is barely consistent with observations of Type Ia supernovae and type I X-ray bursts, respectively.

To skirt some of these difficulties, the nuclear theory group at the National Superconducting Cyclotron Laboratory at Michigan State University combined the Faddeev hyperspherical harmonics and the R-matrix method (HHR) to obtain a full solution to the three-body triple-alpha continuum (Nguyen et al. 2012). The researchers find that the HHR method agrees well with NACRE above 7 × 10⁷ K. However, below that temperature the calculations revealed a pronounced increase of the rate accompanied by a completely different temperature dependence. Though the results do show a strong enhancement at these low temperatures, it is not as strong as that seen in the CDCC result.

This finding turns out to have crucial repercussions for astrophysics. When the new results are used in stellar evolution simulations within the MESA (Modules for Experiments in Stellar Astrophysics) code, the red-giant phase in the stellar evolution of low- and intermediate-mass stars survives. The team plans to carry out further astrophysical studies to understand the implications of the new rate in explosive scenarios in the near future.

Closer to the Milky Way’s supermassive black hole

Astronomers have been tracking the motion of stars at the very centre of the Galaxy for the past 20 years. One star in particular has attracted a great deal of attention by completing a full orbit round the supermassive black hole at the centre with a period of about 16 years. Now a team using the two Keck telescopes in Hawaii has detected another much fainter star with an even shorter period of only 11.5 years. This second star will help test Albert Einstein’s theory of general relativity in the strong gravitational field of the black hole.

The Sun is located at the periphery of a disc-shaped spiral galaxy, commonly known as the Galaxy. Seen from Earth, the light from thousands of millions of stars in this galaxy forms a bright stripe across the night sky. This is the Milky Way. However, most of the stars remain hidden behind an inhomogeneous web of gas and dust, forming the interstellar medium. While this absorbing material obscures the view of the galactic centre in visible light, at infrared wavelengths the dust becomes much more transparent. The advent of infrared cameras in the 1990s therefore allowed the detection of the Galaxy’s most centrally located stars for the first time.

Europeans and Americans competed to follow the motion of these stars with the goal of ascertaining the existence of a supermassive black hole in the Galaxy. The Europeans started in 1992 by using the New Technology Telescope and then the Very Large Telescope of the European Southern Observatory (ESO) in Chile, and since 1995 the Americans have used the Keck Observatory in Hawaii. Both groups used a star, S0-2, that orbits the radio source Sagittarius A* in 15.9 years to derive the mass and the distance of the black hole coinciding with this source. Around four years ago, they obtained a result of about 4 million solar masses located some 27,000 light-years away.
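
The mass determination rests on Kepler’s third law applied to the stellar orbit. A rough worked example with rounded numbers (ours, not the published fit):

\[ \frac{M}{M_\odot} = \frac{(a/\mathrm{AU})^3}{(P/\mathrm{yr})^2} \approx \frac{1000^3}{15.9^2} \approx 4 \times 10^{6}, \]

for an orbital semi-major axis a of about 1000 AU and the 15.9-year period of S0-2, consistent with the quoted 4 million solar masses.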

Now a group of astronomers, led by Leo Meyer and Andrea Ghez from the University of California, Los Angeles, has found a second star that is even closer to the supermassive black hole, with a period of only 11.5 years. Being 16 times fainter than S0-2, this new source, S0-102, was difficult to detect with the twin Keck telescopes in the crowded central region of the Galaxy. It is mainly thanks to adaptive optics that this star could be identified and tracked. Adaptive optics, first employed in 2004 at the Keck Observatory, is a technique used to correct in real time the deformation of the image induced by turbulence in the atmosphere. The wave-front deformation is measured by observing a bright star or an artificial “guide star”, generated in the upper atmosphere by a powerful sodium laser. The correction is implemented by changing the inclination and the shape of a deformable mirror on time scales of a few milliseconds.

The detection of a second star orbiting close to the supermassive black hole of the Galaxy will enable tests of general relativity in a gravitational potential that is two orders of magnitude stronger than at the surface of the Sun. The effect of curved space–time manifests itself as a deviation from the Keplerian orbits of the stars and a relativistic red shift of their emission. The gravitational red shift could become measurable at the next closest approach to the black hole, in 2018 for S0-2 and three years later for S0-102. The deformation of the elliptical orbits induced by curved space–time will be more difficult to identify and might await the next generation of 30-m class telescopes. The presence of the second star will in any case be instrumental in breaking the degeneracy inherent in the measurement of curved space–time with a single star.
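
The order of magnitude of the expected signal can be estimated from the weak-field redshift formula (an illustrative estimate, assuming a pericentre distance of roughly 120 AU for S0-2 and using GM_⊙/c² ≈ 1.5 km):

\[ z_{\mathrm{grav}} \approx \frac{GM}{rc^2} \approx \frac{4\times10^{6} \times 1.5\ \mathrm{km}}{1.8\times10^{10}\ \mathrm{km}} \approx 3 \times 10^{-4}. \]

The corresponding value at the surface of the Sun is about 2 × 10⁻⁶, which is the two-orders-of-magnitude difference mentioned above.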

Quark Matter goes to Washington

The Quark Matter conferences, held roughly every 18 months, form the most important series of meetings in relativistic heavy-ion physics. The latest and 23rd in the series took place on 13–18 August at the Omni Shoreham hotel, a historic landmark in downtown Washington, DC. The meeting attracted around 700 participants from all around the world who discussed an unprecedented amount of new heavy-ion data from experiments at both Brookhaven National Laboratory (BNL) and CERN. This rich harvest of high-quality experimental results from the PHENIX and STAR collaborations at BNL’s Relativistic Heavy-Ion Collider (RHIC) and the ALICE, ATLAS and CMS collaborations at CERN’s LHC is providing a deep insight into the behaviour of quarks and gluons under the extreme conditions of high temperature and density.

The opening ceremony included presentations by Bart Gordon, former chair of the US House of Representatives Committee on Science and Technology, Timothy Hallman, associate director of Science for Nuclear Physics of the US Department of Energy, and Samuel Aronson, director of BNL. Urs Wiedemann of CERN provided an overview of the current status of relativistic heavy-ion physics, followed by highlights from the experiments presented by Takao Sakaguchi of PHENIX, Xin Dong of STAR, Karel Safarik of ALICE, Barbara Wosiek of ATLAS and Gunther Roland of CMS. The welcome reception was held at the Smithsonian Institution’s spectacular National Portrait Gallery.

Understanding the quark–gluon plasma

Quantum chromodynamics (QCD) – the theory describing the interactions of quarks and gluons – is believed to be responsible for 99% of the mass of the visible universe, with the Higgs boson responsible for the remaining 1%. It has become clear that this mass originates mainly from the self-interaction of gluons, which at short distances is governed by asymptotic freedom. Yet the dynamics of gluon interactions in the large-distance, strong-coupling regime, which is responsible for quark confinement and the existence of atomic nuclei, remains mysterious. It is intimately linked to the complicated and poorly understood structure of the QCD vacuum.

QCD is believed to be responsible for 99% of the mass of the visible universe

The understanding of matter is often advanced by the study of phase transitions in macroscopic systems; thus heavy-ion physics aims towards a better understanding of QCD by creating a “macroscopic” domain of excited vacuum populated by a hot quark–gluon fireball. Advancing the understanding of the quark–gluon plasma also helps in better understanding the origins of the universe, because the conditions created in heavy-ion collisions, albeit fleetingly, are similar to those that existed a few microseconds after the Big Bang. In addition, because both QCD and the electroweak sector of the Standard Model are described by non-Abelian gauge theories, understanding the QCD plasma will provide valuable insight into the dynamics of matter at temperatures above the electroweak phase transition, which are not accessible in the laboratory. This is important because, for example, the topological “sphaleron” transitions in the electroweak plasma could be responsible for the baryon asymmetry of the present-day universe.

A major advance in the physics of the QCD plasma made possible by the data from RHIC – and now also from the LHC – was the realization that at experimentally accessible temperatures the plasma behaves as a liquid with small dissipation, quantified by small values of shear and bulk viscosities. This implies that there exists a range of temperatures above the deconfinement phase transition in which the plasma does not at all resemble the quasi-ideal gas of quarks and gluons that is expected at high temperatures as a result of asymptotic freedom. At strong coupling, the non-Abelian plasma possesses small shear viscosity, as exemplified by the supersymmetric plasma that is amenable to study by holographic methods based on string theory. In the latter case, the ratio of shear viscosity to entropy density at strong coupling reaches the value of 1/(4π), which was conjectured to be the universal lower bound for any fluid. The physics underlying this bound is of a quantum nature because at strong coupling the mean free path approaches the de Broglie wavelength of the constituents, making the quasi-particle picture inapplicable.
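
Restoring physical units, the conjectured bound (the Kovtun–Son–Starinets form, quoted for orientation) reads

\[ \frac{\eta}{s} \ge \frac{\hbar}{4\pi k_B} \approx 6 \times 10^{-13}\ \mathrm{K\,s}; \]

in the natural units used in the field it is simply η/s ≥ 1/(4π).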

The new data presented at Quark Matter 2012 have strengthened the case for the “perfect liquid” and made the physical picture more detailed. The data on hadron spectra and azimuthal correlations from RHIC and the LHC point towards the presence of well localized quantum fluctuations at the early stage of the collision that induce excitations in the quark–gluon liquid. The azimuthal distributions of hadrons are conveniently parameterized by their Fourier coefficients vn. For large n, these coefficients signal the presence of localized fluctuations at the early stage of the collision; their values should be sensitive to the shear viscosity of the liquid.
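
Explicitly, the parameterization is the standard Fourier expansion of the azimuthal yield (notation as commonly used in the field):

\[ \frac{\mathrm{d}N}{\mathrm{d}\varphi} \propto 1 + 2\sum_{n=1}^{\infty} v_n \cos\!\left[n(\varphi - \Psi_n)\right], \]

where Ψ_n is the symmetry-plane angle of the nth harmonic.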

Ordinarily, the “elliptic flow” v2 dominates over higher harmonics because it reflects decompression of the elliptical shape of the produced fireball. However, all harmonics in the most-central heavy-ion collisions become similar, as illustrated in figure 1 by data from the CMS collaboration in the 0.2% most-central lead–lead (PbPb) collisions at the LHC. A comparison of the data with hydrodynamical calculations shows that the shear viscosity of the liquid is quite close to the conjectured quantum bound, although its precise value depends on the choice of initial conditions.

The initial conditions in heavy-ion collisions are determined by the structure of nuclear wave-functions at small Bjorken-x and the dynamics of their interaction. Significant progress in the understanding of QCD at small x has been made in recent years, triggered by the data from RHIC and the LHC. The quantum evolution in QCD and the high density of partons in Lorentz-contracted nuclei (which can be described as the “colour glass condensate”) lead to the emergence of strong colour fields that dominate the early moments of heavy-ion collisions. The data on the collective flow and other observables suggest that thermalization occurs very early on – within 1 fm/c of the beginning of the collision. The dynamics of this “early thermalization” is not yet entirely understood but several promising theoretical developments were reported at the conference.

One of the proposed signatures of the colour glass condensate is the disappearance of back-to-back di-jet correlations in the forward rapidity region of deuteron–gold collisions at RHIC, owing to the emergence of a semi-classical gluon field at small Bjorken-x. Both the PHENIX and STAR collaborations reported on observations of this effect at RHIC.

The QCD medium can be studied using hard probes to investigate its response to external localized perturbations. The RHIC experiments observed the strong quenching of high-transverse-momentum hadrons and jets that had been proposed as a signature of hot and dense quark–gluon matter. The LHC has significantly extended the kinematic reach in the studies of jets. All LHC experiments found that the strong suppression of jets persists up to high jet energies.

There is no clear sign of the dependence of jet-energy loss on the colour charge of the parton

The mechanism behind the jet-energy loss is still not clear: does it depend on the colour charge of the leading parton (quark or gluon)? Is it suppressed for heavy-quark jets, as expected for medium-induced gluon radiation as a result of the “dead cone” effect? Are the dynamics of energy loss adequately described by perturbative QCD, or do they call for new strong-coupling methods? These questions can be answered only after more detailed data are acquired on jet shapes and flavour-tagged jets.

An interesting effect of the modification of the jet-fragmentation function in PbPb collisions was reported at the conference by the ATLAS and CMS collaborations. Figure 2 shows the ATLAS result. In addition to the enhancement of hadron production at small values of the jet-energy fraction z, there is also a sizeable dip at intermediate values of z, which has yet to be understood.

As for the flavour-tagged jets, high-energy b- and c-tagged jets are seen by the LHC experiments to be quenched similarly to the inclusive jets, which are dominated by gluons. At present there is no clear sign of the dependence of jet-energy loss on the colour charge of the parton. At transverse momenta below 8 GeV, there is a hint of weaker quenching for D mesons than for light hadrons, as reported by the ALICE collaboration. The electrons from heavy-flavour decays, which receive a significant contribution from beauty decays at high transverse momenta, have been found by ALICE to be quenched less than the D mesons from charm decays, as figure 3 shows. This suggests that the quenching of bottom quarks is weaker than that of charm quarks.

The PHENIX collaboration presented the first data on heavy-meson quenching from decay electrons obtained by using their new silicon vertex detector. In accord with the expectations from theory, D mesons are observed to be suppressed less than light hadrons. However, the PHENIX collaboration found surprising hints of a significantly stronger suppression of B mesons.

An important baseline for jet quenching is provided by the colourless probes – the photons, Z and W bosons. Indeed, the ATLAS and CMS collaborations reported the production of Z bosons with no sign of suppression up to transverse momenta of about 100 GeV. This implies that the observed suppression of jets is, indeed, a result of the colour dynamics.

Heavy quarkonium has been proposed as a probe of deconfinement – the Debye screening in the quark–gluon plasma (QGP) is expected to make quarkonium formation impossible. Strong suppression of J/ψ production was observed at CERN’s Super Proton Synchrotron – and then at RHIC and at the LHC. Studies of heavy quarkonium have now been extended to the bottomonium family, with the expected hierarchy of suppression, as shown in figure 4a from the CMS collaboration: it is more difficult to dissolve states with larger binding energies and smaller radii.

Nevertheless, the observed suppression stems from a complicated interplay of final- and initial-state effects, as suggested by the recent PHENIX data on J/ψ production in asymmetrical copper–gold (CuAu) collisions presented at the conference (figure 4b). The J/ψ suppression at rapidities in the Cu-fragmentation region is found to be stronger than in the Au-fragmentation region. This is inconsistent with a final-state effect alone because the density of produced particles is larger in the Au-fragmentation region. On the other hand, J/ψ production in central CuAu collisions in the Cu-fragmentation region selects a high-density region in the small-x wave function of the Au nucleus. The rescattering of heavy quarks in this dense gluon system before the formation of the QGP is expected to reduce the probability for J/ψ formation.

Fluctuations, broken symmetries and the critical point

An important goal of heavy-ion physics is to map the QCD phase diagram. A prominent feature of this phase diagram is the possible existence of a critical point at finite baryon density at the end of the first-order phase-transition curve. The signature of the critical point is the enhancement of fluctuations, including the fluctuations of net baryon number. Experimental access to high baryon densities in heavy-ion collisions requires decreasing the collision energy at RHIC. The search for the critical point, and for the disappearance of signatures of deconfined matter, was the goal of the recent scan of the beam energy at RHIC.

Both the STAR and PHENIX collaborations reported results from the RHIC Beam Energy Scan at the conference. The STAR experiment sees an intriguing deviation of the higher moments of net-proton fluctuations from the Poisson baseline and from the expectations based on Monte Carlo models at centre-of-mass energies below 20 GeV. A measurement with higher statistics and a finer step in collision energy will be needed to tell whether this observation does point to the existence of the critical point. As the energy of the collision was decreased, several signatures of the plasma phase were found to disappear, as reported by STAR. These include the suppression of high-transverse-momentum hadrons, the constituent-quark-number scaling of the elliptic flow and the fluctuations of charge separation, which is a consequence of the chiral magnetic effect.
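
For orientation, the “Poisson baseline” is the expectation for uncorrelated emission (a standard statistical benchmark, not the STAR analysis itself): if protons and antiprotons are produced independently with Poisson statistics, the net-proton number follows a Skellam distribution, for which

\[ S\sigma = \frac{\mu_p - \mu_{\bar p}}{\mu_p + \mu_{\bar p}}, \qquad \kappa\sigma^2 = 1, \]

where S and κ are the skewness and kurtosis, σ is the standard deviation and μ denotes the mean multiplicities; deviations from these values signal non-trivial fluctuations.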

Theoreticians have proposed the existence of quantum fluctuations of topological origin in the early stage of heavy-ion collisions, which generate chirality similarly to the electroweak sphalerons that generate baryon number at much higher temperatures. In the presence of the strong magnetic field generated by the colliding heavy ions, the fluctuations in net chirality can lead to fluctuations in the electric-charge separation because of the “chiral magnetic effect”. The resulting observable is the event-by-event fluctuation in the electric-charge separation relative to the reaction plane, signalling the fluctuating electric dipole moment of the plasma. The effect can be accessed experimentally by measuring the difference in the fluctuations of the parity-odd harmonics of azimuthal distributions for hadrons of the same and opposite charge. The effect has been seen at RHIC by the STAR and PHENIX experiments; an effect of similar strength was also reported by the ALICE collaboration.

However, because the observable is parity-even, it can receive contributions from more mundane effects. An alternative conventional explanation has been put forward based on the combination of correlations between opposite electric charges and the elliptic flow. Usually, the elliptic flow is correlated with the magnetic field by the geometry of the collision and both vanish in central collisions. However, the new RHIC data on uranium–uranium (UU) collisions allow separation of the two effects. Because of the deformed shape of the uranium nucleus, central collisions produce a deformed fireball leading to a sizeable elliptic flow; yet the number of spectators detected by the Zero Degree Calorimeter is small, so the magnetic field must be greatly suppressed. Thus, it should be possible to establish whether the observed fluctuations in charge asymmetries are driven by the elliptic flow or by the magnetic field.

Preliminary data from STAR presented at the conference indicate that the difference in the fluctuations of the asymmetry for the same- and opposite-charge hadrons vanishes in central UU collisions (figure 5), suggesting that these fluctuations are driven by the magnetic field. Another important result on this topic reported by STAR was the difference between the elliptic flows of positive and negative pions in AuAu collisions at 200 GeV, which is found to be linearly dependent on the charge asymmetry in the event, as expected on the basis of the chiral magnetic effect. New, refined data are necessary to reach a definitive conclusion on this issue. Topological transitions in QCD generating chirality are analogous to the electroweak sphaleron transitions that generated the baryon asymmetry of the universe shortly after the Big Bang; understanding them better is therefore important.

Broad connections

The conference highlighted the broad connections of relativistic heavy-ion physics to condensed-matter physics, string theory, cosmology and astrophysics. For example, the small viscosity of the QGP makes it similar to such seemingly distant objects as ultracold atoms and graphene, where the charge carriers are chiral and the effective coupling is large. The non-dissipative chiral magnetic current appears to exist also in Weyl semimetals and opens possibilities for the creation of a new generation of electronic devices.

The conference made clear the need for dedicated future facilities, several of which were discussed, including: the Electron–Ion Collider needed for a precision study of small-x gluon wave-functions of nuclei and of the spin structure of the proton; the Large Hadron–Electron Collider at CERN, which would advance the high-energy, high-momentum-transfer frontier of deep-inelastic scattering; the Facility for Antiproton and Ion Research under construction at GSI in Darmstadt; and the Nuclotron-based Ion Collider facility, currently under construction in Dubna. The case for the latter two facilities was advanced by the first results from the beam-energy scan at RHIC that were reported at the conference.

Summaries of the results presented were provided by three pairs of rapporteurs, each pair composed of a theorist and experimentalist: Boris Hippolyte of the Institut Pluridisciplinaire Hubert Curien and Dirk Rischke of the University of Frankfurt on global variables and correlations; Jorge Casalderrey-Solana of the University of Barcelona and Alexander Milov of the Weizmann Institute of Science on high-transverse-momenta and jets; and Charles Gale of McGill University and Lijuan Ruan of BNL on heavy flavours, quarkonia and electroweak probes. The wealth of new data and the resulting leap in the theoretical understanding of QCD matter were possible only because of the successes of the two complementary experimental programmes at RHIC and the LHC.

Deep-inelastic scattering enters the LHC era

The unusually early date for the 20th International Workshop on Deep-Inelastic Scattering and Related Subjects proved not to be a problem. The trees were all in blossom in Bonn during DIS 2012, which was held there on 26–30 March, and the sun shone for most of the week. As is the tradition for these workshops, the first day consisted of plenary talks, with the ensuing three days devoted to parallel sessions, followed by a final day of summary talks from the seven working groups. Almost all of the 300 participants also gave talks: there were as many as 275 contributions, not including the summaries. For the first time, the number of results from the LHC experiments at CERN was larger than from DESY’s HERA collider, which shut down in 2007. Given such a large number of contributions, it is not possible to do justice to them all, so the following report presents only a few rather subjective highlights.

With the move from dominantly electron–proton collisions to more and more results coming from hadron colliders, the workshop started with an “Introduction to deep-inelastic scattering: past and present” by Joël Feltesse of IRFU/CEA/Saclay. Talks on theory and on experiment followed, which covered the full breadth of the topics presented in more detail in the parallel sessions. With running at Fermilab’s Tevatron coming to an end in 2011, results with the complete data set are now being released by the CDF and DØ collaborations. There were also several results from the LHC experiments based on the complete data set for 2011. The emphasis in many of the theory presentations was on calculating processes to higher orders and on parton density function (PDF) and scale uncertainties.

Structure functions and PDFs

Measurements relevant to the determination of the PDFs in the nucleon were reported, based on combined data from the HERA experiments, H1 and ZEUS, from the LHC experiments, ATLAS, CMS and LHCb, and from the Tevatron experiments, CDF and DØ. New experimental results have come – in particular from the LHC – on Drell-Yan production, including W and Z bosons, and from HERA and the LHC on jet production, including jets with heavy flavour. In addition, analyses of deep-inelastic scattering (DIS) data on nuclei were presented at the workshop.

There has been substantial progress in the development of tools for PDF fitting

There has been substantial progress in the development of tools for PDF fitting, including the so-called HERAfitter package. This package is designed to include all types of data in a global fit and can be used by both experimentalists and theorists to compare different theoretical approaches within a single framework. The FastNLO package can calculate next-to-leading-order (NLO) jet cross-sections on grids that can then be used for comparisons of data and theory, as well as in PDF fitting. Figure 1 shows a comparison of data and theory for many different energies and processes.

Looking at the current status of fit results, the conclusion is that the determination of the PDFs still gives rise to some controversy but that there is progress in understanding the differences, as Amanda Cooper-Sarkar of Oxford University explained. All of the groups presented PDFs up to next-to-next-to-leading order (NNLO) in the Dokshitzer-Gribov-Lipatov-Altarelli-Parisi formalism. Extensions of the formalism into the Balitsky-Fadin-Kuraev-Lipatov regime and into the high-density regime of nuclei are in progress. The H1 and ZEUS collaborations have also measured the longitudinal structure function, FL. However, the precision is still not good enough to discriminate between predictions of the gluon density and different models.

Measurements of cross-sections of diffractive processes in DIS open the opportunity to probe the parton content of the colourless exchange, the goal being to determine diffractive PDFs of the nucleon. The H1 and ZEUS collaborations selected diffractive processes in DIS either by requiring the detection of a scattered proton in dedicated proton spectrometers (ZEUS-LPS or H1-FPS) at small angles to the direction of the proton beam, or without proton detection but instead requiring a large rapidity gap between the production of a jet or vector meson and the proton beam direction. Figure 2 shows reduced cross-sections obtained from LPS and FPS data (and also combined), which were presented at the workshop. The LHC experiments have also started to contribute to diffraction studies. The ATLAS collaboration reported on an analysis of diffractive events selected by a rapidity gap.

Searches and tests

At the time of the conference, the LHC experiments had only tantalizing hints of an excess in the mass region around 125 GeV using the data from 2011. It was nevertheless impressive that many results could be shown using the full 5 fb⁻¹ of data that had been collected that year. The Higgs searches were the only ones to show any real sign of new particles. All others saw no significant indications and could only set upper limits. Experiments at both the LHC and the Tevatron have now measured WW, ZZ and WZ production with cross-sections that are consistent with Standard Model expectations, calculated to NLO and higher.

As Feltesse reminded participants in his talk, measurements of hadronic final states in DIS were the cradle for the development of the theory of strong interactions, QCD. Such measurements remain key for testing QCD predictions. New results were presented from HERA and the LHC, in which the QCD analyses have reached an impressive level of precision. While leading-order-plus-parton-shower Monte Carlos provide a good description of the data in general, a number of areas can be identified where the description is not good enough. Higher-order generators are needed here and it is important that appropriate tunes are used.

In general, NLO QCD predictions give a good description of the data. However, the uncertainty in the theory because of missing higher-order calculations is almost everywhere much larger than the experimental errors. Moreover, it was shown that in several cases the fragmentation process of partons into hadrons is not well described by NLO QCD calculations.

A central issue is the value and precision of the strong coupling constant, αS, and its running as a function of the energy scale. Many results were presented that improve the precision and show that the energy dependence is well described by QCD calculations.
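
The energy dependence being tested is the renormalization-group running of the coupling; at one loop (shown for illustration),

\[ \alpha_S(Q^2) = \frac{\alpha_S(\mu^2)}{1 + b_0\, \alpha_S(\mu^2)\ln(Q^2/\mu^2)}, \qquad b_0 = \frac{33 - 2n_f}{12\pi}, \]

with n_f the number of active quark flavours.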

There has been a great deal of progress in calculations of heavy-quark production

There has been a great deal of progress in calculations of heavy-quark production. A particular highlight is the first complete NNLO QCD prediction for the pair-production of top quarks in the quark–antiquark annihilation channel. There is also a wealth of data from HERA, the LHC, the Tevatron and the Relativistic Heavy-Ion Collider (RHIC) on the production both of quarkonia and of open charm and beauty. The precision with which the Tevatron experiments can measure the masses of both the top quark and the W boson is particularly impressive. Although the LHC experiments have more events of both sorts by now, it will still take some time before the systematic uncertainties are understood well enough to achieve similar levels of precision.

The X, Y, Z states discovered in recent years have been studied by the experiments at B factories, the LHC and the Tevatron. Their theoretical interpretations are still a challenge. The LHCb experiment has performed the world’s best measurements of the properties of the Bc meson and b baryons and has made important contributions in other areas where its ability to measure particles in the forward direction is important.

Experiments that use polarized beams in DIS on polarized targets are relevant for studying the spin structure of nucleons. New results were presented from HERMES at HERA and COMPASS at CERN’s Super Proton Synchrotron, as well as from experiments at RHIC and Jefferson Lab. A tremendous amount of data has been collected and is now being analysed. Current results confirm that neither the quarks nor the gluons carry much of the nucleon spin. This leaves angular momentum. However, a picture describing the nucleon as a spatial object carrying angular momentum has yet to be settled.
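
The bookkeeping behind this statement is the nucleon spin sum rule, here in the Jaffe–Manohar decomposition (quoted for orientation):

\[ \frac{1}{2} = \frac{1}{2}\Delta\Sigma + \Delta G + L_q + L_g, \]

where ΔΣ is the quark-spin contribution (measured to be only about 30%), ΔG the gluon-spin contribution and L_q, L_g the quark and gluon orbital angular momenta.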

The conceptual design report for a future electron–proton collider using the LHC together with a new electron accelerator, known as the LHeC, was released a couple of months after DIS 2012. This was the main topic of the last plenary talk at the workshop. In the parallel sessions, a broad spectrum of options for the future was discussed, covering the upgrades of the LHC machine and detectors, the upgrade plans at Jefferson Lab and RHIC, as well as proposed new accelerators such as an electron–ion collider, the EIC. One of the central aims is to understand better the 3D structure of the proton in terms of generalized parton distribution functions.

DIS 2012 participants once again profited from lively and intense discussions. The conveners of the working groups worked hard to put together informative and interesting parallel sessions. They also organized combined sessions for topics that were relevant for more than one working group. For relaxation, the workshop held the conference dinner in the mediaeval castle “Burg Satzvey”, which was a big success. Many of the participants also went on one of several excursions on offer. Next year, DIS moves south and will take place on 22–26 April in Marseilles.

SLAC at 50: honouring the past and creating the future

In the early 1960s, a 4-km-long strip of land in the rolling hills west of Stanford University was transformed into the longest, straightest structure in the world – a linear particle accelerator. It was first dubbed Project M and affectionately known as “the Monster” by the scientists at the time. Its purpose was to explore the mysterious subatomic realm.

Fifty years later, more than 1000 people gathered at SLAC National Accelerator Laboratory to celebrate the scientific successes generated by that accelerator and the ones that followed, and the scientists who developed and used them. The two-day event on 24–25 August, for employees, science luminaries and government and university leaders, was more than a tribute to the momentous discoveries and Nobel prizes made possible by the minds and machines at SLAC. It also provided a look ahead at the lab’s continuing evolution and growth into new frontiers of scientific research, which will keep it at the forefront of discovery for decades to come.

A history of discovery

The original linear-accelerator project, approved by Congress in 1961, was a supersized version of a succession of smaller accelerators, dubbed Mark I to Mark IV, which were built and operated at Stanford University and reached energies of up to 730 MeV. The “Monster” would accelerate electrons to much higher energies – ultimately to 50 GeV – for ground-breaking experiments in creating, identifying and studying subatomic particles. Stanford University leased the land to the federal government for the new Stanford Linear Accelerator Center (SLAC) and provided the brainpower for the project. This set the stage for a productive and unique scientific partnership that continues today, supported and overseen by the US Department of Energy.

Soon after the new accelerator reached full operation, a research team that included physicists from SLAC and Massachusetts Institute of Technology (MIT) used the electron beam in a series of experiments starting in 1967 that provided evidence for hard scattering centres within the proton – in effect, the first direct dynamical evidence for quarks. That research led to the awarding of the 1990 Nobel Prize in Physics to Richard Taylor and Jerome Friedman of SLAC and Henry Kendall of MIT.

SLAC soon struck gold again with discoveries that were made possible by another major technical feat – the Stanford Positron Electron Asymmetric Ring, SPEAR. Rather than aiming the electron beam at a fixed target, the SPEAR ring stored beams of electrons and positrons from the linear accelerator and brought them into steady head-on collisions.

In 1974, the Mark I detector at SPEAR, run by a collaboration from SLAC and Lawrence Berkeley National Laboratory, found clear signs of a new particle – but so had an experiment on the other side of the US. In what became known as the “November Revolution” in particle physics, Burton Richter at SLAC and Samuel Ting at Brookhaven National Laboratory announced their independent discoveries of the J/ψ particle, which consists of a paired charm quark and anticharm quark. They received the Nobel Prize in Physics for this work in 1976. Only a year after the J/ψ discovery, SLAC physicist Martin Perl announced the discovery of the τ lepton, a heavy relative of the electron and the first of a new family of fundamental building blocks. He went on to share the Nobel Prize in Physics in 1995 for this work.

These and other discoveries that reshaped understanding of matter were empowered by a series of colliders and detectors. The Positron–Electron Project (PEP), a collider ring with a diameter almost 10 times larger than that of SPEAR, ran during the years 1980–1990. The Stanford Linear Collider (SLC), completed in 1987, focused electron and positron beams from the original linac into micron-sized spots for collisions at a total energy of 100 GeV. Making thousands of Z bosons in its lifetime, the SLC hosted a decade of seminal experiments. It also pioneered the concepts behind the current studies for a linear electron–positron collider to reach energies in the region of 1 TeV.

PEP was followed by the PEP-II project, which included a set of two storage rings and operated in the years 1998–2008. PEP-II featured the BaBar experiment, which created huge numbers of B mesons and their antimatter counterparts. In 2001 and 2004, BaBar researchers and their Japanese colleagues at KEK’s Belle experiment announced evidence supporting the idea that matter and antimatter behave in slightly different ways, confirming theoretical predictions of charge-parity violation.

Synchrotron research and an X-ray laser

Notably, new research areas and projects at SLAC have often evolved as the offspring of the original linear accelerator and storage rings. Researchers at Stanford and SLAC quickly recognized that electromagnetic radiation generated by particles circling in SPEAR, while considered a nuisance to the particle collision experiments, could be extracted from the ring and used for other types of research. They developed this synchrotron radiation – in the form of beams of X-ray and ultraviolet light – as a powerful scientific tool for exploring samples at a molecular scale. This early research blossomed as the Stanford Synchrotron Radiation Project (SSRP), a set of five experimental stations that opened to visiting researchers in 1974.

Its modern descendant, the Stanford Synchrotron Radiation Lightsource (SSRL), now supports 30 experimental stations and about 2000 visiting researchers a year. SPEAR – or more precisely, SPEAR3 following a series of upgrades – became dedicated to SSRL operations 20 years ago. This machine, too, has allowed Nobel-prize-winning research. Roger Kornberg, professor of structural biology at Stanford, received the Nobel Prize in Chemistry in 2006 for work detailing how the genetic code in DNA is read and converted into a message that directs protein synthesis. Key aspects of that research were carried out at the SSRL.

Cutting-edge facilities

Meanwhile, sections of the linear accelerator that defined the lab and its mission in its formative years are still driving electron beams today as the high-energy backbone of two cutting-edge facilities: the world’s most powerful X-ray free-electron laser, the Linac Coherent Light Source (LCLS), which began operating in 2009; and FACET, a test bed for next-generation accelerator technologies. LCLS-II, an expansion of the LCLS, should begin construction next year. It will draw electrons from the middle section of the original linear accelerator and use them to generate X-rays for probing matter with high resolution at the atomic scale.

The late Wolfgang “Pief” Panofsky, who served as the first director of SLAC from 1961 until 1984, often noted that big science is powered by a ready supply of good ideas. He referred to this as the “innovate or die” syndrome. In 1983, Panofsky wrote that he had been asked since the formation of the lab, “How long will SLAC live?” The answer was and still is: “about 10 to 15 years, unless somebody has a good idea. As it turns out, somebody always has had a good idea which was exploited and which has led to a new lease on life for the laboratory.”

Under the leadership of its past two directors – Jonathan Dorfan, who helped launch the BaBar experiment and the astrophysics programme, and Persis Drell, who presided over the opening of the LCLS – SLAC’s scientific mission has grown and diversified. In addition to its original focus on particle physics and accelerator science, SLAC researchers now delve into astrophysics, cosmology, materials and environmental sciences, biology, chemistry and alternative energy research. Visiting scientists still come by the thousands to use lab facilities for an even broader spectrum of research, from drug design and industrial applications to the archaeological analysis of fossils and cultural objects. Much of this diversity in world-class experiments is based on continuing modernizations at the SSRL and the unique capabilities of the LCLS.

SLAC’s scientists and engineers continue to collaborate actively in international projects – designing machines and building components, running experiments and sharing data with other accelerator laboratories in the US and around the globe, including in China, France, Germany, Italy, Japan, Korea, Latin America, Russia, Spain and the UK. The lab’s long-standing collaboration with CERN provided an important spark in the formative years of the World Wide Web and led to SLAC’s launch of the first web server in the US. SLAC is also playing an important role in the ATLAS experiment at CERN’s LHC. In the area of synchrotron science, collaborations with US national laboratories and with overseas labs such as DESY in Germany and KEK in Japan have contributed greatly to the development of advanced tools and methodologies, with enormous scientific impact.

Expertise in particle detectors has even elevated the lab’s research into outer space. SLAC managed the development of the Large Area Telescope, the main instrument on board the Fermi Gamma-ray Space Telescope, which was launched into orbit in 2008 and continues to make numerous discoveries. The lab has also earned a role in building the world’s largest digital camera for an Earth-based observatory, the Large Synoptic Survey Telescope, with construction scheduled to begin in 2014 for eventual operation on a mountaintop in Chile.

Richter, who served as SLAC director from 1984 to 1999, has said that the fast-evolving nature of science necessitates a changing path and pace of research. “Labs can remain on the frontiers of science only if they keep up with the evolution of those frontiers,” remarks Richter. “SLAC has evolved over its first 50 years and is still a world leader in areas beyond what was thought of when it was first built. It is up to the scientists of today to keep it moving and keep it on some perhaps newly discovered frontiers for the next 50.”

This article is based on the one published on the SLAC News Centre.

Stanford University, in California, already has a leading position as far as linear accelerators are concerned. It operates a whole family of linacs, several of which are used for medical purposes. The 200 ft machine [Mark III] in operation there produces 700 MeV electrons and its energy will be stepped up to 1050 MeV.

Late in May, Stanford made the scientific headlines – again with a linac.

Addressing a science research symposium in Manhattan, President Eisenhower announced that he would recommend to the US Congress the financing of a “large new electron linear accelerator … a machine two miles long, by far the largest ever built”.

This machine, intended for Stanford University, would be one of the most spectacular atom smashers ever devised. Two parallel tunnels would have to be driven for two miles into the rock of a small mountain in the vicinity of Palo Alto. Such natural cover would, of course, stop any dangerous radiation. One of the tunnels, the smaller in diameter, would house the accelerator proper, while the bigger one would be used for maintenance purposes.

The proposed new linac for Stanford would initially produce 15 BeV (GeV) electrons; it is announced that this energy could later be raised to 40 BeV. It is believed that the machine would take six years to build, at a cost of 100 million dollars. Approval of the project, which can be given only after Congressional hearings, depends on a decision due in July.

• From the first issue of CERN Courier.
