Zurich workshop faces the LHC’s precision challenge

With the imminent start-up of the LHC, particle physics is about to enter a new regime, which should provide solutions to puzzles such as the origin of electroweak symmetry-breaking and the possible existence of supersymmetry. The LHC will produce head-on collisions between protons or heavy ions, but at the fundamental level these come down to interactions between partons, that is, quarks and gluons. For this reason, essentially all the interesting new reactions will be initiated by quantum chromodynamics (QCD) hard scattering, and any claim for new physics will require a precise understanding of known Standard Model processes.

To prepare for the “precision challenge” at the LHC, the particle-theory groups of ETH Zurich and Zurich University organized a workshop on High Precision for Hard Processes (HP2). The three-day workshop took place on the ETH campus in early September 2006, involving about 65 participants. These included 15 diploma and doctoral students, indicating that precision calculation for the LHC is a lively field that attracts many young researchers.

HP2 addressed the precision challenge with reviews of results from Fermilab’s Tevatron, expectations at the LHC and measurements of parton distributions. A few benchmark reactions, such as single-inclusive jet production and vector-boson production, will already be accessible with very low luminosity at the LHC. These can provide precise constraints on the proton structure, which is relevant to all hadron-collider reactions.

Likewise, precision studies, such as investigating the properties of the top quark, demand a better description of the full characteristics of an event. This will require improved jet algorithms and a better understanding of soft physics in hard interactions. Searches will often involve multiparticle final states, calling for a precise description of high-multiplicity processes.

Searches for physics beyond the Standard Model will have to cover a variety of theoretical scenarios of electroweak symmetry-breaking. These include supersymmetry, higher-dimensional, Higgs-less or composite-Higgs models, and many other alternatives. Distinguishing models on the basis of experimental observations could become very difficult, since the signatures often look similar. The study of this variety of new models therefore calls for flexible tools that allow the prediction of all observable consequences of a new model simultaneously. With leading-order event-generator programmes now including generic new-physics scenarios, we are clearly on the way to more systematic studies.

While these leading-order studies should give a first overview of the general features of signal and background processes, allowing the design of cuts and the optimization of search strategies, they will often be insufficient when precision is required. This will be the case either when discriminating among similar models or when signals are spread out over a continuous background.

Until now, next-to-leading order (NLO) calculations have been carried out on a case-by-case basis. Several speakers presented new results at the workshop, including Higgs-plus-2-jet production through QCD processes and vector-boson fusion; top quark plus 1 jet production; scalar-quark effects in Higgs production; and corrections to the decay of the Higgs boson into four fermions.

All of these calculations were major, time-consuming projects, and it is becoming increasingly clear that the large number of phenomenologically relevant reactions for LHC physics calls for automated techniques for NLO computations. While generating the appropriate real and virtual Feynman diagrams can be automated, the evaluation of the one-loop diagrams poses a major bottleneck, since standard one-loop methods applied to multi-particle processes result in over-complicated and numerically unstable results. The search for an efficient and automated method for NLO calculations was a major focus of the workshop.

Techniques proposed for the automated computation of one-loop virtual corrections to multi-particle processes range from purely numerical to fully analytical methods. In the purely numerical approaches, one searches for isolated, potentially singular contributions to the loop integrals at the level of the integrand, and subtracts them using universal subtraction factors. Semi-numerical techniques aim to perform a partial simplification of the one-loop integrals to a not-necessarily minimal basis, which ensures maximum numerical stability. The purely analytical methods aim for an expression in a minimal basis. The workshop heard of much progress in this direction.

A substantial number of talks addressed the application of twistor-space techniques. Originally proposed as a new method to understand better the mathematical structure of tree-level amplitudes, the twistor-space formulation has proven fruitful for simplifying loop amplitudes. The twistor-based coefficients are a crucial ingredient in reconstructing one-loop amplitudes from their cuts. In supersymmetric theories, this procedure yields the full one-loop amplitude for any process. In the Standard Model, however, the rational parts of the amplitudes escape the cut construction.

Fortunately, this is not a major difficulty. Exploiting the detailed analytical properties of the amplitudes or generalizing the unitarity-cut method from four dimensions to general dimensions, the rational parts can also be calculated in a systematic manner. Applications of these new tools range from the phenomenology of Higgs bosons and one-loop multi-parton amplitudes to loop corrections in supergravity amplitudes.

For a number of benchmark processes, typically of low multiplicity, even NLO accuracy is not sufficient. The workshop heard progress reports on next-to-next-to-leading-order (NNLO) calculations of the Drell–Yan process and of three-jet event-shapes in e⁺e⁻ annihilation. New methods to perform NNLO calculations and to predict the singularity structure of QCD amplitudes at NNLO and beyond are paving the way for further progress in this area.

Very often, particular terms become large at any order in perturbation theory, necessitating an all-order resummation. For the leading logarithmic corrections to arbitrary processes, this can be performed using parton showers. Over the last few years we have seen major progress in this area, with NLO corrections being included into the parton showers on a process-by-process basis. Additionally, a number of important hard processes have been included recently in the MC@NLO event-generating program. There have also been suggestions for new implementations of parton showers that aim at more efficient systematic methods for the inclusion of the NLO corrections.

Sub-leading corrections need to be determined on a case-by-case basis, and speakers reported on new results for Higgs and W pair production. On the more formal side, these resummation approaches can be used to predict dominant terms at three loops, and to obtain an improved understanding of universal soft behaviour and of the high-energy limit of QCD. While resummation approaches have long been considered independent of higher-order calculations, the workshop clearly illustrated that the two areas can have a fruitful interchange of ideas and methods.

In summarizing the highlights from the conference, Zvi Bern from the University of California at Los Angeles emphasized that theory is taking up the LHC’s precision challenge. Progress on many frontiers of high-precision calculations for hard processes will soon yield a variety of improved results for reactions at the LHC, providing experimental groups with the best possible tools for precision studies and new physics searches.

The next HP2 meeting will be in Buenos Aires, Argentina, in early 2009, when there should be plenty of discussion on the first data from the LHC.

CMS: a super solenoid is ready for business

Figure: Assembly of the solenoid.

For seven years, Point 5 on the LHC has been the site of intense activity, as the CMS detector has taken shape above ground at the same time as excavation of the experimental cavern below. Last year saw an important step in the preparations on the surface, as the huge CMS superconducting solenoid – the S in CMS – was cooled down, turned on and tested.

The CMS coil is the largest thin solenoid, in terms of stored energy, ever constructed. With a length of 12.5 m and an inner diameter of 6 m, it weighs 220 tonnes and delivers a maximum magnetic field of 4 T. A segmented 12 500 t iron yoke provides the return path for the magnetic flux. Such a complex device necessarily requires extensive tests to bring it into stable operation – a major goal of the CMS Magnet Test and Cosmic Challenge (MTCC) that took place in two phases between July and November 2006.

From the start, the idea was to assemble and test the CMS magnet – and the whole detector structure – on the surface prior to lowering it 90 m below ground to its final position. The solenoid consists of five modules that make up a single cylinder (figure 1), while the yoke comprises 11 large pieces that form a barrel with two endcaps. There are six endcap disks and five barrel wheels, and their weight varies from 400 t for the lightest to 1920 t for the central wheel, which includes the coil and its cryostat.

The CMS solenoid has several innovative features compared with previous magnets used in particle-physics experiments. These are necessary to cope with the high number of ampere-turns needed to generate the 4 T field – 46.5 MA around a 6 m diameter. The most distinctive feature is the four-layer coil winding, reinforced to withstand the huge forces at play. The niobium–titanium conductor is in the form of Rutherford cable co-extruded with pure aluminium and mechanically reinforced with aluminium alloy (figure 2). The layers of this self-supporting conductor bear 70% of the magnetic stress of 130 MPa, while the cylindrical support structure, or mandrel, takes the remaining 30%.
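
As a rough cross-check (an idealized infinite-solenoid estimate, not a statement about the actual coil design), the quoted field and length already imply ampere-turns of this magnitude:

\[
NI \;=\; \frac{B\,L}{\mu_0} \;=\; \frac{4\ \mathrm{T} \times 12.5\ \mathrm{m}}{4\pi\times10^{-7}\ \mathrm{T\,m\,A^{-1}}} \;\approx\; 4.0\times10^{7}\ \text{ampere-turns},
\]

somewhat below the quoted 46.5 MA, as expected: a finite-length coil needs more current to reach the same central field.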

Constructing the coil has been a tour de force in international collaboration involving suppliers in several countries. The basic element, the superconducting wire, originated with Luvata in Finland, and passed to Switzerland, where Brugg Kabelwerk made the Rutherford cable, and Nexans co-extruded it with pure aluminium. The cable then went to Techmeta in France for electron-beam welding onto two sections of high-strength aluminium alloy to allow the conductor to support the high magnetic stress. Finally, ASG Superconductors in Italy wound the coils for the five sections of the solenoid, which travelled individually by sea, river and land to Point 5 for assembly into a single coil. The division into sections, and the chosen outer diameter of 7.2 m, ensured that transport could be by road without widening or rebuilding.

Figure: A model of the CMS solenoid coil.

By August 2005 the solenoid was ready to be inserted into the cryostat that keeps it at its operating temperature of 4.5 K (figure 3). Cooling requires a helium refrigeration plant with a capacity of 800 W at 4.5 K and 4500 W in the range 60–80 K. The cryoplant was first commissioned with a temporary heat load to simulate the coil and its cryostat, and then early in 2006 the real coil was ready for cool-down. In an exceptionally smooth operation the temperature of the 220 t cold mass was lowered to 4.5 K over 28 days.

The next stage was to close the magnet yoke in preparation for the MTCC. The massive elements of the yoke move on heavy-duty air pads, with grease pads for the final 100 mm of approach. Once an element touches the appropriate stop, it is pre-stressed with 80 t against the adjacent element to ensure a good contact before switching on the magnet. To assure good alignment, a precise reference network of some 70 points was set up in the assembly hall, with the result that all elements could be aligned to within 1 mm of the ideal coil axis. The first closure of the whole yoke took some 30 days, and was completed on 24 July (CERN Courier September 2006 cover and p7). The MTCC could now begin.

Testing the magnet took place in two phases, with the initial tests in August 2006 and further tests and field mapping in October. The cosmic challenge, to test detectors and data-acquisition systems with cosmic rays, took place simultaneously. Each step in current ended with a fast discharge into external dump-resistor banks. Depending on the current level at the time of the fast discharge, it could take up to three days to re-cool the coil.

A key requirement for any superconducting magnet system is protection against the high thermal gradients that occur in the coil if it suddenly switches from superconducting to normal (resistive) conduction, with an abrupt loss of magnetic field and release of the stored energy – a quench. The aim is to dissipate the energy into as large a part of the cold mass as possible. For this reason the CMS solenoid is coupled inductively to its external mandrel, so that in the case of a quench eddy currents in the mandrel heat up the whole coil, dissipating the energy throughout the whole cold mass.

The tests showed that when the magnet discharges, the dump resistance warms up by as much as 240 degrees. At the same time the internal electrical resistance of the coil increases, to as much as 0.1 Ω after a fast discharge at 19 kA.

The tests also showed that after a fast discharge at 19 kA the average temperature of the whole cold mass rises to 70 K, with a maximum temperature difference of 32.3 degrees measured between the warmest part, on the inside of the central section of the coil, and the coldest part, on the outside of the mandrel. It then takes about two hours for the temperature to equalize across the whole coil. About half of the total energy (1250 MJ) dissipates as heat in the external dump resistor, which takes less than two hours to return to its normal temperature.
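
These figures are consistent with the standard magnetic-energy relation. Taking the quoted 1250 MJ as roughly half of the stored energy, so E ≈ 2.5 GJ, and a discharge current of 19 kA, the implied coil inductance (a derived figure, not quoted here) is

\[
E = \tfrac{1}{2}\,L\,I^{2} \;\Rightarrow\; L = \frac{2E}{I^{2}} \approx \frac{2\times2.5\times10^{9}\ \mathrm{J}}{(1.9\times10^{4}\ \mathrm{A})^{2}} \approx 14\ \mathrm{H}.
\]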

Monitoring the magnet’s mechanical parameters was also an important feature of the tests, to check for example the coil shrinkage and the stresses on the coil-supporting tie rods during cool-down. The measured values proved to be in excellent agreement with calculations. Powering the cold mass step-by-step also allowed measurements of any misalignment of the coil. This showed that the axial displacement of the coil’s geometric centre is less than 0.4 mm, indicating a magnetic misalignment of less than 2 mm in the positive z direction.

A major goal of Phase II of the MTCC was to map and reconstruct the central field volume with 5 × 10⁻⁴ precision. The measurements took place in three zones, with flux-loop measurements in the steel yoke, check-point measurements near the yoke elements, and a detailed scan of the entire central volume of the detector – essentially the whole space inside the hadron calorimeter.

Figure: The solenoid.

Measuring the average magnetic flux density in key regions of the yoke by an integration technique involved 22 flux loops of 405 turns each, installed around selected plates. The flux loops enclosed areas of 0.3–1.58 m² on the barrel wheels and 0.48–1.1 m² on the endcap disks. They measure the variations of the magnetic flux induced in the steel when the field in the solenoid is changed during the fast (190 s time constant) discharge. A system of 76 3D B-sensors developed at NIKHEF measured the field at the steel–air interfaces of the disks and wheels, to adjust the 3D magnetic model and reconstruct the field inside the iron yoke elements, which are part of the muon absorbers.
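
The flux-loop principle is simply Faraday's law: integrating the induced voltage over the discharge gives the flux change directly, whatever the detailed time profile of the field collapse,

\[
\int V\,\mathrm{d}t \;=\; N\,\Delta\Phi \;=\; N\,A\,\Delta B .
\]

With illustrative numbers (assumed here, not measured CMS values) of N = 405 turns, A ≈ 1 m² and ΔB ≈ 2 T in the steel, the integrated signal is of order 800 V s, or a mean voltage of a few volts over a discharge with a 190 s time constant.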

A special R&D programme with sample disks made of the CMS yoke steel from different melts investigated whether the measurements of the average magnetic flux density in the yoke plates could be made with an accuracy of a few per cent using flux loops. These studies showed that the contribution of eddy currents to the voltages induced in the test flux loop is negligible.

The precise measurement of the magnetic field in the tracking volume inside the CMS coil used a field-mapper designed and produced at Fermilab. This incorporated 10 more 3D B-sensors, developed at NIKHEF and calibrated at CERN to 5 × 10⁻⁴ precision for a nominal field of 4 T. To map a cylindrical volume inside the coil, this instrument moved along rails installed inside the hadronic barrel calorimeter, stopping in 5 cm steps at points where the sensor arms could be rotated through 360°, taking measurements at predefined angles in azimuth steps of 7.5°. Figure 4 shows the final position of the mapper before closure of the positive endcap. It mapped a cylindrical volume 1.724 m in radius and 7 m long.

The CMS magnet has six NMR probes near the inner wall of the superconducting coil cryostat to monitor the magnetic field. These were also used in the field-mapping to measure the field on the coil axis and on the cylindrical surface of the measured volume.

The actual field-mapping in October involved a series of measurements at 0 T, 2 T, 3 T, 3.8 T (twice to study systematics), 3.5 T and 4 T. The flux-loop measurements were made during fast discharges of the coil at various current values. While the detailed analysis of the data is still ongoing, preliminary studies are very encouraging. The field distribution behaves very much as the simulation predicted – though more detailed simulation of the extra iron in the feet of CMS is necessary to account for it fully.

Figure: The solid steel yoke.

The azimuthal component of the field is nominally zero, but the measurements show that it takes on small values with a sinusoidal dependence on the rotation angle. This is now fully understood as coming from small tilts of the plane in which the mapper moves with respect to the nominal z axis of the magnetic field; this couples the magnetic-field components and also gives rise to the small variations seen in the radial component. In addition, the team now understands some even smaller variations in the sinusoidal behaviour of the azimuthal field as a function of the z step, following a careful survey of the tilts induced on the mapper arms as the mapper traversed the length of the coil on its (almost) straight rails.

The electrical tests of the solenoid have demonstrated its excellent performance, the functioning of its ancillary systems and its readiness for smooth operation. The detailed mapping was equally successful, and final analysis is now underway to provide the best possible parameterization of the field for the analysis of real data in autumn 2007. As soon as the tests were over, the huge sections of yoke were pulled apart again, and the descent to the cavern could at last begin.

Many institutes participating in CMS took part in the design, construction and procurement of the magnet, as members of the CMS Coil Collaboration, including CEA Saclay, ETH Zurich, Fermilab, INFN Genoa, ITEP Moscow, the University of Wisconsin and CERN. 

U70 proton synchrotron goes ahead with stochastic extraction

Figure: Beam current and spill current.

The new advanced slow stochastic extraction (SSE) system at the U70, the 70 GeV proton synchrotron at the Institute for High Energy Physics (IHEP) in Protvino, has operated successfully during normal running in 2006. The aim is to use the technique to produce longer, more uniform spills than can be achieved with the standard extraction, which uses magnetic optics to move particles onto the transverse resonance.

The first feasibility tests of SSE at the U70 took place in late 2004. These tests yielded natural stochastic spills, driven by radiofrequency noise whose power spectrum was kept invariant throughout the extraction time. Such spills inherently had no flat-top in their DC content, however, and so were not useful for users. Since then, the beam physicists and engineers at IHEP have continued their efforts towards an operational SSE scheme and have developed some sophisticated dedicated circuitry, which they beam-tested during runs in 2005–2006.

The core of the new system consists of a feedback loop that modulates the amplitude of the operational noise in response to the spill current signal, which is monitored by a beam-loss monitor located downstream of the electrostatic septum deflector. Being a DC-coupled feedback system with a finite base-band bandwidth, it is designed both to flatten and to smooth the stochastic spills.
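
As an illustration of how such a DC-coupled loop can flatten a spill, here is a minimal toy simulation in Python that modulates the noise amplitude in response to the measured spill current. All parameters are illustrative assumptions, not IHEP machine values.

    import numpy as np

    dt = 1e-3                 # time step (s)
    T = 2.0                   # desired spill length (s)
    n = int(T / dt)
    N = 1e12                  # circulating intensity (protons), toy value
    setpoint = N / n          # flat spill that empties the ring over T
    amp = 2e-3                # noise amplitude -> extraction fraction per step
    gain = 1e-14              # integral feedback gain, toy value

    rng = np.random.default_rng(0)
    spill = np.zeros(n)
    for i in range(n):
        # extraction rate grows with the noise power driving the resonance;
        # the random factor stands in for the residual stochastic ripple
        rate = amp * N * (1.0 + 0.05 * rng.standard_normal())
        extracted = min(max(rate, 0.0), N)
        N -= extracted
        spill[i] = extracted
        # DC-coupled feedback: integrate the spill error into the amplitude
        amp = max(amp + gain * (setpoint - extracted), 0.0)

    print(f"spill flatness (std/mean): {spill.std() / spill.mean():.3f}")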

Figure: Distribution of the spill-ripple magnitude and the amplitude Fourier spectrum.

The team has now used this system at the U70. The figures show that it has achieved the primary design goal of obtaining low-ripple, flat-topped spills lasting 2–3 s, with noticeable progress in the quality of slow spills. The persistent AC ripple observed in the past in the extracted current now shows up as a random signal. It turns out that this ripple cannot be suppressed by the feedback control used, owing to the limited base-band bandwidth of the third-order transverse-resonance transfer function involved in the overall closed-loop gain product.

The new SSE scheme routinely serviced the U70 during the entire run of 2006, yielding slow spills 1.1 s long, and exhibited relatively robust and reliable behaviour consistent with the design aims. Further improvements of the SSE set-up promise better functioning of the U70 for external fixed-target experiments in the near future.

Axions create excitement and doubt at Princeton

The lightweight axion is one of the major candidates for dark matter in the universe, along with weakly interacting massive particles. It originally arrived on the scene about 30 years ago to explain CP conservation in QCD, but there has never been as much theoretical and experimental activity in axion physics as there is today. Last year, the PVLAS collaboration at INFN Legnaro reported an intriguing result, which might indicate the detection of an axion-like particle (ALP) and which has triggered many further theoretical and experimental activities worldwide. The international workshop Axions at the Institute for Advanced Study, held at Princeton on 20–22 October 2006, brought together theorists and experimentalists to discuss current understanding and plans for future experiments. The well organized workshop and the unique atmosphere at Princeton provided ideal conditions for fruitful discussions.

In 2006, the PVLAS collaboration reported a small rotation of the polarization plane of laser light passing through a strong rotating magnetic dipole field. Though small, the detected rotation was around four orders of magnitude larger than predicted by QED (Zavattini et al. 2006). One possible interpretation involves ALPs produced via the coupling of photons to the magnetic field.

Combining the PVLAS result with upper limits achieved 13 years earlier by the BFRT experiment at Brookhaven National Laboratory (Cameron et al. 1993) yields values of the ALP’s mass and its coupling strength to photons of roughly 1 meV and 2 × 10⁻⁶ GeV⁻¹, respectively (Ahlers et al. 2006). If the PVLAS result is verified, these two values challenge theory, because a standard QCD-motivated axion with a mass of 1 meV should have a coupling constant seven orders of magnitude smaller. Another challenge to the particle interpretation of the PVLAS result comes from the upper limit measured recently at CERN with the axion solar helioscope CAST, which should have clearly seen such ALPs. However, this apparent contradiction holds true only if such particles are produced in the Sun and can escape to reach the Earth.
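
The size of this clash can be seen from the standard QCD-axion relations, in which both the mass and the photon coupling scale inversely with the Peccei–Quinn scale f_a (order-one model factors are dropped in this sketch):

\[
m_a \simeq 6\ \mu\mathrm{eV}\times\frac{10^{12}\ \mathrm{GeV}}{f_a}, \qquad g_{a\gamma\gamma} \sim \frac{\alpha}{2\pi f_a},
\]

so a mass of 1 meV corresponds to f_a ≈ 6 × 10⁹ GeV and hence g_aγγ ~ 2 × 10⁻¹³ GeV⁻¹, indeed about seven orders of magnitude below the PVLAS–BFRT value.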

So far there is no direct experimental evidence for conventional axions. The first sensitive limits were derived about two decades ago from astrophysics data (mainly from the evolution of stars, where axions produced via the Primakoff effect would open a new energy-loss channel so stars would appear older than they are), and also from experiments searching for axions of astrophysical origin (cavity experiments and CAST for example) and accelerator-based experiments. The conclusions were that QCD-motivated axions with masses in the micro-electron-volt to milli-electron-volt range seem to be most likely – if they exist at all.

The combined PVLAS–BFRT result would fit well into these expectations if the coupling constant were not too large by orders of magnitude. Theoreticians have tried to deal with this problem and develop models in line with the ALP interpretation of the PVLAS data and astrophysical observations. There may be some possibilities involving “specifically designed” ALP properties. However, to the authors’ understanding, such attempts fail if the conclusion announced at the workshop persists: according to preliminary new PVLAS results, the new particle is a scalar, whereas conventional axions are pseudoscalars. Consequently either the interpretation of the data or the experimental results must be reconsidered.

Although the PVLAS collaboration has measured the Cotton–Mouton effect – birefringence of a gas in a dipole magnetic field – for various gases with unprecedented sensitivity, the workshop openly considered possible systematic uncertainties. While experimental tests rule out many uncertainties, others are still to be checked. For example, the relatively large scatter of individual PVLAS measurements and the influence of the indirect effects of magnetic fringe fields remain to be understood. The PVLAS collaboration is therefore planning further detailed analyses.

In search of ALPs

One clear conclusion is the need for more experimental data. A “smoking gun” proof of the PVLAS particle interpretation would be the production and detection of ALPs in the laboratory. In principle the BFRT collaboration has already attempted this in an approach called “light shining through a wall”. In the first part of such an experiment, light passes through a magnetic dipole field in which ALPs would be generated; a “wall” then blocks the light. Only the ALPs can pass this barrier to enter a second identical dipole magnet, in which some of them would convert back into photons (figure 1). Detection of these reconverted photons would then give the impression of light shining through a wall. The intensity of the light would depend on the fourth power of the magnetic field strength and the orientation of the light polarization plane with respect to the magnetic dipole field.
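
The quoted B⁴ dependence follows from the coherent photon–ALP conversion probability in each magnet; in natural units, and in the coherent limit where the momentum transfer q satisfies qL ≪ 1,

\[
P_{\gamma\to a} \simeq \left(\frac{g\,B\,L}{2}\right)^{2}, \qquad
P_{\gamma\to a\to\gamma} = P_{\gamma\to a}\,P_{a\to\gamma} \simeq \left(\frac{g\,B\,L}{2}\right)^{4} \;\propto\; B^{4},
\]

where g is the ALP–photon coupling, B the field and L the magnet length. Only one polarization component takes part in the conversion: the component parallel to B for a pseudoscalar, perpendicular for a scalar, which is what makes the polarization dependence diagnostic.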

The PVLAS collaboration and other groups are planning a direct experimental verification of the ALP hypothesis. Table 1 provides an overview of some of the approaches presented at the workshop. Besides PVLAS, the ALPS, BMV and LIPSS experiments should take data in 2007. BMV and OSQAR (as well as the Taiwanese experiment Q&A) will check directly the rotation of the light polarization plane that PVLAS claims. The BMV collaboration aims for such a measurement in late 2007.

Research during the coming year should therefore clarify the PVLAS claim in much greater detail. The measurement of a new axion-like particle would be revolutionary for particle physics and probably also for our understanding of the constituents of the universe. However, considering the theoretical difficulties described above, a different scenario might emerge. Within a year from now we might be confronted both with an independent confirmation of the PVLAS result on the rotation of the light polarization plane, and simultaneously with only upper limits on ALP production by the light shining through a wall approaches. This situation would require new theoretical models.

The planned experiments listed in Table 1 do not have the sensitivity to probe conventional QCD-inspired axions. In the near future, CAST will be the only set-up to touch the predictions for solar-axion production. The workshop in Princeton, however, heard about other promising experimental efforts to search directly for axions or other unknown bosons with similar properties. These studies use state-of-the-art microwave cavities – for example, as in ADMX in the US, which is looking for dark-matter axions – or pendulums to search for macroscopic forces mediated by ALPs.

On the theoretical side, as we mentioned above, attempts to interpret the PVLAS result have generated some doubts on the existence of a new ALP. Perhaps micro-charged particles inspired by string theory might provide a more natural explanation of the PVLAS result. Researchers are thus discussing novel ideas of how to turn experimental test benches for accelerator cavity development into sensitive set-ups to test for micro-charged particles. However, as Ed Witten explained in the workshop summary talk, string theories also predict many ALPs, so perhaps we are on the cusp of discovering an entire new sector of pseudoscalar particles.

In summary, it is clear that small-scale non-accelerator-based particle-physics experiments can have a remarkable input to particle physics. Stay tuned for further developments.
The authors wish to thank the Princeton Institute for Advanced Study for the warm hospitality, and especially Raul Rabadan and Kris Sigurdson for their perfect organization of the workshop.

String theory meets heavy ions in Shanghai

The field of ultra-relativistic heavy-ion collisions held its 19th international quark-matter conference, QM2006, in Shanghai on 14–20 November 2006. More than 600 physicists discussed the latest experimental and theoretical advances in the study of quantum chromodynamics (QCD) at extreme values of temperature, density and low parton fractional momentum (“low-x”).

The wealth of data from the six years of operation of the RHIC collider at Brookhaven, together with that from the fixed-target programme at CERN, is leading to a developing paradigm of the matter produced in high-energy nucleus–nucleus collisions as a “perfect liquid” with negligible viscosity. Far from the ideal parton-gas limit, the lattice QCD calculations for several quantities, such as the equation of state and the quarkonia spectral functions, reveal a non-trivial structure up to temperatures three times higher than the critical QCD temperature – the region that experiments are studying. The current theoretical and experimental efforts centre on characterizing in detail the unanticipated properties of this strongly interacting medium.

Strings and the perfect fluid

One of the most important indications of the formation of thermalized collective QCD matter in heavy-ion collisions is the observation of hydrodynamic flow fields in the form of large anisotropies in the final particle yields with respect to the reaction plane. If the medium expands collectively, the pressure gradients present in non-central collisions – with an initial lens-shaped overlap area – result in momentum anisotropies in the final state. Fluid-dynamics calculations indicate that such gradients must develop very early in the collision, during the high-density partonic phase, in order to reproduce the strong “elliptic flow” seen in the data.
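
Quantitatively, the anisotropy is characterized by the Fourier coefficients of the particle yield with respect to the reaction-plane angle Ψ_RP,

\[
\frac{\mathrm{d}N}{\mathrm{d}\phi} \;\propto\; 1 + \sum_{n\geq 1} 2\,v_{n}\cos\!\big[n\,(\phi-\Psi_{\mathrm{RP}})\big],
\]

with the second coefficient, v₂, defining the elliptic flow.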

Two new experimental results presented at the conference supported this theoretical expectation. The PHOBOS and STAR collaborations at RHIC have observed large dynamical fluctuations of the elliptic-flow strength, which are fully compatible with those expected from event-by-event variations in the initial collision geometry alone. These results confirm that the strength of the collective flow is driven by the initial spatial anisotropy of the medium. The PHENIX collaboration presented data on the azimuthal anisotropy of electrons coming from the decay of charm and beauty mesons (figure 1). They have observed momentum anisotropies as large as 10%, indicating that charm quarks interact collectively and participate in the common flow of the medium. Both results clearly suggest a robust hydrodynamical response developing during the early partonic phase of the reaction.

On the theory side, progress was reported on hydrodynamical approaches including computationally expensive descriptions of the full three-dimensional evolution of the plasma, as well as, for the first time, viscosity corrections. These calculations indicate that the dimensionless viscosity-to-entropy-density ratio, η/s, has to be very small in order to reproduce the liquid-like properties seen in the data. If the viscosity is negligibly small then the produced medium is the most perfect fluid ever observed, in striking contrast with the ideal-gas behaviour at high temperatures that asymptotic freedom predicts. Determining the transport properties of such a medium in the region not far from the critical temperature is, however, a difficult task: perturbative expansions break down in this region while finite-temperature lattice techniques are not well adapted for studying real-time quantities.

Techniques developed in the context of string theory, where strong-coupling regimes are accessible to computation, can help to circumvent this difficulty. The meeting in Shanghai heard of new approaches that make use of the correspondence between anti-de-Sitter (AdS) and conformal field theories (CFT) to estimate the properties of strongly coupled systems in N = 4 supersymmetric Yang–Mills theory from calculations in dual weakly coupled gravity. The somewhat controversial hope is that these theories, though different from QCD, capture the relevant dynamics in the range of interest for phenomenology at RHIC. One of the first observables computed by these methods is the η/s ratio, which is conjectured to exhibit a universal lower bound of 1/4π.
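
Restoring physical units, the conjectured bound reads

\[
\frac{\eta}{s} \;\ge\; \frac{1}{4\pi}\,\frac{\hbar}{k_{B}} \;\approx\; 0.08\,\frac{\hbar}{k_{B}},
\]

an order of magnitude or more below typical perturbative estimates for a weakly coupled quark–gluon plasma.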

The conference also reviewed results on the heavy-quark diffusion coefficient, the jet-quenching parameter, and photon and dilepton emission rates. The application of AdS/CFT techniques, which the field has received with both enthusiasm and criticism, is nonetheless providing new insight into dynamical properties of strongly interacting systems that cannot be directly treated by either perturbation theory or lattice methods. At the same time this approach is opening novel directions for phenomenological studies and experimental searches.

Jet quenching

The study of hadron spectra at high transverse momentum in heavy-ion collisions – the apparent modifications of which are generically known as jet quenching – was again one of the main conference topics. Speakers from experiments at both RHIC and CERN’s SPS presented new results on inclusive high-pT hadron suppression and on two- and three-particle correlations between a leading trigger particle and the associated hadrons. Two clear experimental facts summarize the findings: the strong suppression of the yields of leading hadrons indicates that fast quarks and gluons lose a sizeable amount of energy when traversing dense matter; and the two- and three-particle correlation studies indicate that this energy reappears as softer particles far from the initial parton direction, both in azimuth and in rapidity.

When the transverse momenta of the trigger and the associated particles are both similar and of the order of a few giga-electron-volts, the two-particle-correlation signal around the direction opposite to the trigger particle presents a dip in central collisions, in striking contrast with the typical Gaussian-like shape observed in proton–proton and deuteron–gold collisions (figure 2). The three-particle-correlation data indicate a cone-like emission rather than a deflected-jet topology. One proposal is that conical structures result from shock waves or Cherenkov radiation produced by highly energetic partons traversing the medium. Such observations could thus help to constrain the value of the speed of sound or the dielectric constant of the plasma. A more conservative explanation proposes exclusive one-gluon bremsstrahlung as the origin of the enhanced “Mercedes-like” topologies that experiments observe. The differential studies of the transverse-momentum and centrality dependences presented at the conference further constrain the models.

Interesting intrajet correlations also appear in the near side, that is at angles in the trigger particle hemisphere. Owing to the trigger bias the near-side signal is sensitive to interactions that originate close to the surface of the hot dense fireball, while the opposite-side particle production reflects the most probable longer path through this medium. The data from the STAR collaboration (up to pTtrig = 9 GeV/c) show that, although the near-side azimuthal correlations remain basically unchanged from proton–proton to central gold–gold collisions, the pseudo-rapidity distribution is substantially broadened in the gold–gold case and presents a ridge structure above which an almost unaffected Gaussian shape appears. The dynamical origin of this effect is not yet understood but, interestingly, it indicates a coupling of the longitudinally expanding underlying event with the jet development.

With increased integrated luminosities, heavy-quark probes are becoming more and more important at RHIC. The latest PHENIX data indicate a large suppression of decay electrons from high-pT charm and beauty mesons. The amount of suppression is very similar to that of the light-quark mesons, which is difficult to accommodate in most jet-quenching models, since QCD distinctly predicts a lower gluon-radiation probability for heavy quarks than for massless quarks or gluons.

There are also new results for direct photon production in gold–gold collisions up to transverse momenta of 20 GeV/c. Comparison of the gold–gold and proton–proton yields indicates that the QCD factorization theorem also holds for hard scatterings with heavy ions. The quality of the data is such that small deviations from the proton–proton reference can be traced back to isospin corrections and nuclear modifications of the parton distribution function in the nucleus. With improved statistics it will certainly be possible to use such data in global-fit analyses to constrain the nuclear parton distributions.

Last but not least, the conference saw the first measurements of direct photons emitted back-to-back with high-pT hadrons. With reduced uncertainties such correlations will provide an important calibrated measure of the energy lost by the original parton.

J/ψ melting and lattice QCD

The suppression of charmonium bound states, in particular the J/ψ, was proposed 20 years ago as a smoking-gun signature for quark–gluon plasma formation, and this is still the experimental observable that offers the most direct connection with lattice QCD. The first high-statistics results from RHIC were presented in Shanghai. Surprisingly, they show that the amount of suppression is almost identical to that found at the SPS (figure 3). The medium created at RHIC is expected to be much denser and hotter than that at the SPS and most models predicted a stronger depletion.

A natural explanation of the similarity of the suppression at the two energies (√sNN = 17 and 200 GeV) is put forward by recent lattice data, which indicate that the J/ψ (but not the χc and ψ’) survives up to temperatures twice the critical one. The increase in temperature from the SPS to RHIC would not be enough to dissolve the J/ψ, which is then only indirectly suppressed through the lack of feed-down contributions from the dissolved χc and ψ’ states. Alternative explanations point out that the recombination of charm and anticharm quarks in the thermal bath (up to 10 charm–anticharm pairs are produced in central gold–gold collisions at RHIC) could compensate almost exactly for the additional suppression. The influence of initial-state effects, those present already in proton–nucleus collisions, is not yet completely settled. More proton–nucleus (at the SPS) and deuteron–gold (at RHIC) reference data, as well as the study of different charmonium states in gold–gold collisions, will be needed to unravel the origin of the suppressed J/ψ yields.

Towards the LHC

The emerging picture in ultra-relativistic heavy-ion collisions is that of the formation of a strongly interacting medium with negligibly small viscosity – a perfect liquid – and with energy densities as high as 30 GeV/fm³. These characteristics emerge, respectively, from the ideal-hydrodynamics description of collective elliptic flow and from the large energy loss suffered by energetic quarks and gluons traversing the system. The detailed study of the transport properties of this medium and the potential observation of the anticipated weakly interacting quark–gluon plasma will require key measurements in lead–lead collisions at 5.5 TeV at the LHC. The higher initial temperatures, the greater duration of the quark–gluon plasma phase and the much more abundant production of hard probes expected at the LHC are likely to result in indisputable probes of the deconfined medium that depend much less on details of the later hadronic phase.

A whole morning session at the conference looked at “Heavy-Ion Physics in the Next 10 Years”. The imminence of the LHC start-up – with an active nucleus–nucleus programme being developed by the ALICE, CMS and ATLAS collaborations – guarantees an exciting future for the physics of high-density QCD.

Scientists at the D0 experiment discover new path to the top

On 8 December, scientists from the D0 experiment at Fermilab’s Tevatron announced the first evidence for top quarks produced singly, rather than in pairs. The top quark has played a prominent role in the physics programme at the Tevatron ever since it was discovered there nearly 12 years ago. Just before the discovery in 1995, D0 collaborators were already turning their attention to the electroweak production of single top quarks, with theorists suggesting that the cross-section should be large enough to observe in the Tevatron’s proton–antiproton collisions.

A top quark is expected to be produced by itself only once in every 2 × 10¹⁰ proton–antiproton collisions, through the electroweak processes shown in figure 1. Although the cross-section is not much smaller than for top-quark pair production, the signature for single top production is easily mimicked by other background processes that occur at much higher rates.
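
The quoted rarity is consistent with simple cross-section arithmetic. Taking an inelastic proton–antiproton cross-section of roughly 60 mb at the Tevatron and a combined single-top cross-section of about 3 pb (both round numbers assumed here for illustration),

\[
\frac{\sigma_{\mathrm{inel}}}{\sigma_{\mathrm{single\ top}}} \;\approx\; \frac{60\times10^{-3}\ \mathrm{b}}{3\times10^{-12}\ \mathrm{b}} \;=\; 2\times10^{10}.
\]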

To stand a chance of observing the electroweak process, D0 physicists had to develop sophisticated selection procedures, resulting in around 1400 candidates selected from the thousands of millions of events recorded over the past four years (corresponding to 1 fb⁻¹ of collision data). The team expected only about 60 true single top events among all these candidates, so had to exploit every piece of information to establish the presence of the events.

The researchers used three different techniques (boosted decision trees, matrix-element-based likelihood discriminants and Bayesian neural networks) to combine many discriminating features in ways that enable single top quark events to be recognized. In this way they effectively reduced the multidimensional system to a single, powerful variable.
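
As a schematic illustration (not the D0 analysis code) of how a boosted decision tree collapses many discriminating variables into a single output, here is a minimal Python sketch on toy data; the three Gaussian "features" merely stand in for real discriminants such as jet kinematics and b-tagging information.

    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier

    rng = np.random.default_rng(42)
    n_sig, n_bkg = 2000, 2000   # toy statistics, not the real signal/background mix

    # toy signal and background: three overlapping discriminating variables
    sig = rng.normal(loc=(1.0, 0.5, 0.8), size=(n_sig, 3))
    bkg = rng.normal(loc=(0.0, 0.0, 0.0), size=(n_bkg, 3))

    X = np.vstack([sig, bkg])
    y = np.concatenate([np.ones(n_sig), np.zeros(n_bkg)])

    # the BDT combines the variables into one powerful discriminant
    bdt = GradientBoostingClassifier(n_estimators=200, max_depth=3).fit(X, y)
    score = bdt.predict_proba(X)[:, 1]

    print("mean BDT score, signal:    ", round(float(score[y == 1].mean()), 3))
    print("mean BDT score, background:", round(float(score[y == 0].mean()), 3))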

With agreement among the three measurements, the D0 team finds the cross-section for single top quark production to be 4.9 ± 1.4 pb, consistent with the Standard Model prediction (D0 Collaboration 2006). They estimate the chance of measuring this value as the result of a background fluctuation at less than 1 in 2800 (3.4σ). This result establishes the first evidence for single top quark production.
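
The two quoted numbers can be cross-checked against each other: the one-sided Gaussian tail probability beyond 3.4σ is about 3.4 × 10⁻⁴, i.e. roughly 1 in 3000, matching the quoted 1 in 2800 to rounding. In Python:

    from scipy.stats import norm

    # one-sided tail probability beyond 3.4 standard deviations
    p = norm.sf(3.4)
    print(f"p = {p:.2e}, about 1 in {1 / p:.0f}")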

The analysis also constrains the magnitude of |Vtb|, an important parameter of the Standard Model’s Cabibbo–Kobayashi–Maskawa (CKM) matrix, which describes how quarks can change from one type to another. If the CKM matrix describes the intermixing of three generations of quarks – with top and bottom forming the third generation – the value of |Vtb| should be close to one. Any departure from this value could be a sign of new physics, be it a new family of quarks or some unforeseen physical process. The D0 result provides the first opportunity for a direct measurement of |Vtb| and constrains it to lie between 0.68 and 1 with a 95% probability, consistent with the presence of only three generations of quarks.

In addition to its inherent success, this analysis is an important milestone in the D0 Collaboration’s continued search for the Standard Model Higgs boson. Higgs production is predicted to occur at rates smaller than single top quark production in the presence of substantial “irreducible” backgrounds (including single top). In this regard, D0 is developing a refined ability to “reduce the irreducible”, exemplified by this analysis and the recent evidence for the associated production of W and Z bosons. These high-level analyses and the detailed understanding of the growing data-set are becoming the backbone of D0’s search for the Higgs boson.

Recent ISOLDE results revisit parity violation

It was 50 years ago last October that Tsung-Dao Lee and Chen Ning Yang suggested that the invariance under mirror reflection that we experience in everyday physical laws – parity symmetry – might be violated on the microscopic scale by the weak interaction (Lee and Yang 1956). They made this truly revolutionary suggestion to solve the so-called θ-τ puzzle, which involved two decay modes, with final states of opposite parity, of what seemed to be a single particle (now known as the K meson). Lee and Yang formulated a description of the weak interaction that enabled parity to be violated, and were later awarded the Nobel prize for their theory.

Proving the theory

Just a few months later, in January 1957, Chien-Shiung Wu of Columbia University and collaborators Ernest Ambler, Raymond Hayward, Dale Hoppes and Ralph Hudson from the National Bureau of Standards in Washington announced that they had successfully confirmed the theory by observing parity non-conservation (PNC) in the beta-decay of a polarized sample of the radioactive 60Co nucleus (Wu et al. 1957). PNC has since become a cornerstone of the formulation of the weak interaction and the Standard Model of particle physics, even though its origin remains unexplained.

The experiment used a modified version of a facility at the Bureau of Standards in which the spin of all nuclei pointed in one direction, a feat that required cooling the nuclei in the presence of a magnetic field to a temperature of several millikelvin above absolute zero. Under these conditions, the nuclei decayed under the influence of the weak force and emitted beta particles (electrons) and antineutrinos. In fact, the antineutrinos could not be observed in this system, but the effect on the electrons was measurable. The team showed that, as suggested by the theory of Lee and Yang, the number of electrons emitted in one direction with respect to the nuclear spin direction was significantly greater than the number emitted in the opposite direction, clearly indicating that parity was violated. Had it been conserved, equal numbers of electrons would have been observed in both directions.

Parity violation is characteristic of the weak nuclear force, but the strong nuclear force and the electromagnetic force preserve parity. Since the 1950s, these three fundamental forces in nature have been combined in a single theory – the Standard Model. This model suggests that PNC can also manifest itself in processes that are dominated by the strong and electromagnetic interaction, via the weak interaction part in the nuclear Hamiltonian – but such effects are usually minute and very difficult to observe (Adelberger et al. 1985 and Desplanques et al. 1980).

In this article we will focus on measurements of PNC effects in bound nuclear systems, where parity mixing occurs between pairs of nuclear states of the same spin as a consequence of the weak part in the Hamiltonian. A PNC effect in a bound system can be written as:

\[
\text{PNC effect} \;\propto\; \frac{\left|\langle H_{\mathrm{PNC}}\rangle\right| K}{\Delta E}
\]

Here ⟨H_PNC⟩ is the matrix element of the weak Hamiltonian, ΔE is the energy difference between the parity-doublet levels, and K is a model-dependent amplification factor, related to the ratio between the reduced matrix element of the “normal” gamma decay and the “abnormal” (PNC-enabled) matrix element of the same multipolarity. Table 1 lists several such cases, indicating values of ⟨H_PNC⟩ and estimates of K for each transition.
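
This formula is first-order perturbation theory: the weak interaction admixes a small amount of the opposite-parity partner into each member of the doublet,

\[
|\tilde{\psi}_{-}\rangle \;=\; |\psi_{-}\rangle + \varepsilon\,|\psi_{+}\rangle,
\qquad
\varepsilon \;=\; \frac{\langle\psi_{+}|H_{\mathrm{PNC}}|\psi_{-}\rangle}{\Delta E},
\]

and the observable asymmetry scales as εK, with K large when the regular transition competing with the parity-forbidden one is itself strongly hindered.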

Among these cases, the mixing of the levels with spin I = 8 in the 180mHf nucleus provides a unique opportunity to study PNC in the electromagnetic and strong sectors, owing to the very large amplification. This amplification, of around 10⁹, arises from the details of the nuclear structure, such as the proximity of the 8⁺ and 8⁻ levels to each other, and the very different structure of the 8⁻ level with respect to the sequence of positive-parity levels below, to which it decays (figure 3). In the early 1970s, Kenneth Krane and collaborators at the Los Alamos Scientific Laboratory succeeded in observing parity mixing in the decay of 180mHf, when they measured a left–right asymmetry of about 1% in the emission of the 501 keV gamma transition (Krane et al. 1971a and 1971b).

Revisiting the evidence

This observation has been the only clear-cut demonstration of this type of parity violation until now and, as such, a group of us felt that the case deserved revisiting using the modern techniques of radioactive beams provided by the ISOLDE facility at CERN. During an experiment in October 2005, we produced a beam of 180mHf nuclei in their isomeric (metastable) 8⁻ level at ISOLDE and implanted it into a magnetized iron foil at around 20 mK inside the NICOLE low-temperature ³He–⁴He dilution refrigerator (figure 2).

By detecting the 501 keV decay gamma-rays (figure 4) in two horizontal germanium detectors situated outside the NICOLE refrigerator, fore and aft (0° and 180°) of the polarization direction, we could determine the left–right asymmetry of the decay – a direct consequence, and a direct proof, of PNC. We used another detector, below the beam line at 90° to the axis of polarization, to monitor the 0°/90° ratio that provides a measure of the nuclear polarization and temperature.

The results we obtained show the parity-violating effect in the 501 keV gamma transition (figure 1), in close agreement with the previous experiments. Analysis yields an asymmetry of about 1% (Stone et al. 2007).

So the present experiment re-establishes the case of 180mHf as the prime example of PNC in bound nuclear systems, a fitting tribute, 50 years on, to the work of the pioneering scientists.

SN1987A heralds the start of neutrino astronomy

Twenty years ago researchers observed neutrinos from the supernova SN1987A – the first detection of neutrinos from beyond our solar system. Underground detectors are now waiting to study the explosion and neutrino properties of the next nearby supernova.

In the early 1980s scientists built the first big detectors underground to search for nucleon decays. Grand unified theories (GUTs), proposed in the late 1970s, unify the strong, weak and electromagnetic interactions. They predict that quarks can be transformed into leptons and that even the lightest hadron, the proton, can decay to lighter particles, such as electrons, muons and pions. The predicted lifetime of the proton was then about 10³⁰ years, inspiring the construction of detectors weighing several thousand tonnes. The Irvine–Michigan–Brookhaven (IMB) detector in the US, which started data-taking in 1982, was a Cherenkov detector with 7000 tonnes of water viewed by 2048 5-inch photomultiplier tubes (PMTs) (figure 1). It was soon followed by the Kamiokande water Cherenkov detector in Japan, a 3000 tonne detector with 1000 20-inch PMTs, which started up in 1983 (figure 2). Unfortunately, these detectors could not detect a proton-decay signal, because the lifetime of the proton was ultimately predicted to be much longer than the early GUTs had indicated.

In 1984/5 the Kamiokande collaboration upgraded their detector to look for solar neutrinos. Previously, the only detector searching for solar neutrinos was the Homestake experiment of Ray Davis and colleagues, which observed a solar-neutrino flux of about a third of that predicted by the standard solar model. This was the famous “solar-neutrino problem”, and further experiments were needed to resolve the discrepancy. To detect solar neutrinos, the Kamiokande team installed new electronics to record the timing of each PMT. They also constructed an anticounter to reduce gamma-rays from the rock and improved the water purification to reduce the radon background. The IMB collaboration upgraded their 5-inch PMTs to 8-inch PMTs to lower the detector’s energy threshold.

Supernova!

On 23 February 1987 at 0735 (UT), when the Kamiokande detector was ready to detect solar neutrinos, it observed neutrinos from SN1987A. The progenitor of the supernova was a blue giant in the Large Magellanic Cloud, 170,000 light-years away. The Kamiokande detector observed 11 events and the IMB detector registered eight. Researchers at the Baksan underground experiment in Russia later analysed their data for the same period and found five events. The observed neutrino burst lasted about 13 s (figure 3).

The theory of stellar evolution predicts that the final stage of a massive star (typically more than eight solar masses) is a core collapse, leaving behind a neutron star or a black hole. As the temperature and density at the centre of a star increase, nuclear fusion produces heavier elements. This leads finally to an iron core of about one solar mass; further nuclear fusion is prevented because iron has the largest binding energy per nucleon of all elements. When the core becomes gravitationally unstable it triggers the supernova explosion.

The gravitational potential energy of the iron core gives the energy released by the core collapse, which is about 3 × 10⁵³ erg. Predictions indicated that neutrinos would carry away most of this energy, since other particles, such as photons, are easily trapped by the massive material of the star. Researchers used the energies and number of events observed by Kamiokande and IMB to estimate the energy released in neutrinos by SN1987A, which was found to agree very well with expectations. This result confirmed the fundamental mechanism of a supernova explosion.
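
The 3 × 10⁵³ erg figure is essentially the gravitational binding energy of the newly formed neutron star. For a uniform-density star with the typical values M ≈ 1.4 solar masses (2.8 × 10³³ g) and R ≈ 10 km assumed here,

\[
E \;\approx\; \frac{3}{5}\,\frac{G M^{2}}{R} \;\approx\; \frac{3}{5}\times\frac{6.7\times10^{-8}\,\mathrm{cm^{3}\,g^{-1}\,s^{-2}}\times\left(2.8\times10^{33}\ \mathrm{g}\right)^{2}}{10^{6}\ \mathrm{cm}} \;\approx\; 3\times10^{53}\ \mathrm{erg}.
\]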

There has been extensive work to simulate the explosion of a supernova, taking into account the detailed nuclear physics and with the recent addition of multi-dimensional calculations. However, no simulation has produced an explosion. Something seems to be missing and further investigation and more experimental data are needed. Although the observation of neutrinos from SN1987A confirmed the supernova scenario, the observed number of events was too small to reveal details of the explosion.

The next event

More recent underground detectors will give very valuable information when the next supernova burst occurs. The Super-Kamiokande detector has a photo-sensitive volume of 32,000 tonnes viewed by 11,129 20-inch PMTs. It can detect about 8000 neutrino events if a burst occurs at the centre of our galaxy (a distance of about 10 kpc). Super-Kamiokande should be able to measure precisely the time variation of the supernova temperature by detecting the interactions of emitted antineutrinos on free protons. Neutrino–electron scattering events, which are about 5% of the total events, should pinpoint the direction of the supernova.

The kilotonne-class liquid-scintillator detectors, LVD in the Gran Sasso National Laboratory and KamLAND in Japan, will give additional information, as they are sensitive at lower energies and contain carbon. The IceCube detector, currently being built at the South Pole, can detect a supernova neutrino signal as a coherent increase in the dark rate of its PMTs.

Although the supernova rate expected in our galaxy is only one every 20–30 years, a detection would provide an enormous amount of information. Scientists are proposing megatonne-class water Cherenkov detectors to detect proton decay and investigate neutrino physics, for example CP-violation in the lepton sector. If such detectors are built, they could observe a supernova in nearby galaxies every few years.

Supernovae have occurred throughout the universe since just after the Big Bang. The flux of all supernova neutrinos, known as supernova relic neutrinos (SRN), is intriguing. The expected SRN flux is a few tens per square centimetre per second. The first five years of data from Super-Kamiokande gave an upper limit on the flux about three times higher than this expectation. With improved detection techniques, it may soon be possible to observe SRN.

The neutrino data from SN1987A also yielded constraints on elementary-particle physics: a limit on the neutrino mass of less than 20 eV/c² (which in 1987 was competitive with laboratory limits) and a lower limit on the neutrino lifetime. Future supernova data could provide something new in elementary-particle physics. For example, if the neutrino-mass hierarchy is inverted and a nearby supernova is detected, the energy spectrum of the supernova neutrinos could reveal the hierarchy.
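
The mass limit follows from a simple time-of-flight argument (a sketch with round numbers). A neutrino of mass m and energy E ≫ mc² travelling a distance L arrives later than a light signal by

\[ \Delta t \simeq \frac{L}{2c}\left(\frac{mc^{2}}{E}\right)^{2}. \]

For L = 170,000 light-years and E ≈ 10 MeV, a mass of 20 eV/c² gives Δt ≈ 10 s, comparable to the observed spread of about 13 s; a substantially larger mass would have smeared the burst over a much longer interval.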
A conference to discuss supernova data from the past 20 years and what could be learned from a future supernova will be held at Waikoloa, Hawaii, on 23–25 February 2007.

Physicists gather for an extravaganza of beauty

The 11th International Conference on B-Physics at Hadron Machines (Beauty 2006) took place on 25–29 September 2006 at the University of Oxford. This was the latest in a series of meetings dating back to the 1993 conference held at Liblice Castle in the Czech Republic. The aim is to review results in B-physics and CP violation and to explore the physics potential of current and future-generation experiments, especially those at hadron colliders. As the last conference in the series before the start-up of the LHC, Beauty 2006 was a timely opportunity to review the status of the field, and to exchange ideas for future measurements.

CCEphy1_01-07

More than 80 participants attended the conference, ranging from senior experts in the heavy-flavour field to young students. The sessions were held in the physics department, with lively discussions afterwards. There were fruitful exchanges between the physicists from operating facilities and those from future experiments (LHCb, ATLAS and CMS), with valuable input from theorists.

The conference reviewed measurements of the unitarity triangle, the geometrical representation of quark couplings and CP violation in the Standard Model. The aim is to find a breakdown of the triangle through inconsistencies among the measurements of its sides and its angles α, β and γ (also known as φ2, φ1 and φ3), as determined through CP-violating asymmetries and related phenomena.
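
For reference, the triangle expresses the unitarity condition on the first and third columns of the Cabibbo–Kobayashi–Maskawa (CKM) matrix,

\[ V_{ud}V_{ub}^{*} + V_{cd}V_{cb}^{*} + V_{td}V_{tb}^{*} = 0, \]

with the angles conventionally defined as \( \alpha = \arg(-V_{td}V_{tb}^{*}/V_{ud}V_{ub}^{*}) \), \( \beta = \arg(-V_{cd}V_{cb}^{*}/V_{td}V_{tb}^{*}) \) and \( \gamma = \arg(-V_{ud}V_{ub}^{*}/V_{cd}V_{cb}^{*}) \).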

The statistics and the quality of the data from the first-generation asymmetric-energy e⁺e⁻ B-factories are immensely impressive. The BaBar and Belle experiments, at PEP-II and KEKB respectively, passed a significant milestone when they reached a combined integrated luminosity of 1000 fb⁻¹ (1 ab⁻¹), with 10⁹ bb̄ pairs now produced at the Υ(4S). The experiments are approved to run until 2008 and should double their data sets.

CCEphy2_01-07

The B-factories have studied with high precision the so-called golden mode of B-physics, the decay B⁰→J/ψKs. The CP asymmetry in this decay accesses sin 2β with negligible theoretical uncertainty, and the measured world-average value in this and related channels is now 0.675 ± 0.026. The four-fold ambiguity in the value of β can be reduced to two solutions by measuring cos 2β in channels such as B⁰→D*D*Ks. The results now strongly disfavour two of the solutions, although higher statistics and further theoretical effort are necessary to confirm this interpretation.
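
The ambiguity arises because sin 2β is unchanged under β → π/2 − β, β → β + π and β → 3π/2 − β, giving four solutions in [0, 2π); determining the sign of cos 2β eliminates two of them.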

A possible hint of physics beyond the Standard Model may be appearing in the measurement of sin 2β in b→s “penguin” decays (e.g. B⁰→φKs). The value averaged over a number of channels, sin 2β = 0.52 ± 0.05, shows a 2.6σ discrepancy with the charmonium measurement. More data are needed to resolve this tension, and further studies of these penguin processes at the LHC are eagerly awaited.

CCEphy3_01-07

BaBar and Belle have also produced important results on the angles α and γ. The γ measurements are particularly interesting, as it had generally been assumed that this parameter was beyond the reach of the B-factories. The angle is measured through the interference of tree-level B±→D(*)K± and B±→D̄(*)K± decay amplitudes. This strategy is intrinsically clean and leads to a combined result of γ = 60° with asymmetric errors of +38° and −24°. The errors are still large, and a precise measurement of γ is out of reach at the B-factories. However, the LHCb experiment at CERN will improve the error on γ to less than 5°, with contributions from measurements in the Bu, Bd and Bs sectors.

Year of the Tevatron

Despite the great successes of the B-factories, Beauty 2006 focused on B-physics at hadron machines, and 2006 has been the “Year of the Tevatron”. The CDF and D0 experiments at Fermilab’s Tevatron have not only demonstrated the proof of principle of B-physics at hadron machines, but have also made measurements that are highly competitive with, and complementary to, those of the B-factories, in particular through the unique access that hadron machines have to the Bs sector. These results point the way to the LHC, where there should be 100 times more statistics.

The highlight of the conference was the first 5σ observation of Bs oscillations, presented by the CDF collaboration. They reported the mass difference between the mass eigenstates, Δms, as 17.77 ± 0.10 (stat) ± 0.07 (syst) ps⁻¹, in agreement with Standard Model expectations. Data from hadronic channels, such as Bs→Dsπ, have greatly enhanced the statistical power of the analysis; this measurement relies on the precision vertex detector. Combining the measurements of Δms and Δmd allows the ratio of CKM matrix elements |Vtd|/|Vts| to be extracted with around 5% systematic uncertainty (with input from lattice theory), fixing the third side of the unitarity triangle with the same precision.
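
The extraction rests on the standard relation between the two mixing frequencies, with the hadronic uncertainties collected in the lattice parameter ξ:

\[ \frac{\Delta m_{d}}{\Delta m_{s}} = \frac{m_{B_d}}{m_{B_s}}\,\frac{1}{\xi^{2}}\left|\frac{V_{td}}{V_{ts}}\right|^{2}, \qquad \xi = \frac{f_{B_s}\sqrt{B_{B_s}}}{f_{B_d}\sqrt{B_{B_d}}}. \]

Lattice calculations supply ξ to a few per cent, which dominates the quoted systematic uncertainty on |Vtd|/|Vts|.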

The study of rare processes dominated by loop effects provides an important window on new physics, since new heavy-particle exchanges could contribute significantly to them. The Tevatron experiments are searching intensively for the very rare decay Bs→μμ, which is expected to have a branching ratio of order 10⁻⁹ in the Standard Model but is significantly enhanced in many supersymmetric extensions. The Tevatron is currently sensitive at the 10⁻⁷ level and is striving to improve this reach. The LHC experiments will explore down to the Standard Model value.

Towards the LHC

With the start-up of the LHC, B-physics will enter a new phase. Preparations for the experiments are now well advanced, as are the B-triggers needed to enrich the recorded sample in signal decays. Talks at the conference described the status of the detectors and their first running scenarios. The LHC pilot run scheduled for late 2007 will yield little physics-quality data, but will be invaluable for commissioning, calibrating and aligning the detectors. The first real statistics for physics measurements should accumulate in summer 2008. Key goals in the first two years of operation will be the first measurement of CP violation in the Bs system; a measurement of the Bs mixing phase in Bs→J/ψφ approaching the Standard Model value (around 2°); the likely first observation of the decay Bs→μμ; studies of the angular distributions in the channels Bu,d→K*μμ, which are sensitive to new physics; and precise measurements of the angles α and γ. LHCb will cover a wide span of measurements, whereas ATLAS and CMS will focus on channels that can be selected with a (di-)muon trigger.

Participants at the conference made a strong science case for continued B-physics measurements beyond the baseline LHC programme, to elucidate the flavour structure of any new physics discovered. On the timescale of 2013, the LHCb collaboration is considering the possibility of upgrading the experiment to run at 10 times the present design luminosity, accumulating around 100 fb⁻¹ over five years. In addition, there are two proposals on a similar timescale for asymmetric e⁺e⁻ “Super Flavour Factories” with luminosities of around 10³⁶ cm⁻²s⁻¹ – SuperKEKB and a linear-collider-based design (ILC-SFF) – each giving some 50 ab⁻¹ of data by around 2018. The LHCb upgrade and the e⁺e⁻ flavour factories largely complement each other in their physics goals.

Social activities enabled discussions to continue outside the conference room. Keble College provided accommodation and hosted the banquet, at which Peter Schlein, the founder of the conference series and its chair for the first 10 meetings, was thanked for his efforts over the years and for his pioneering contributions to B-physics at hadron machines.

The conference was extremely lively: B-physics continues to flourish and has an exciting future ahead. The B-factories and the Tevatron have led the way, but there is still much to learn. Heavy flavour results from ATLAS, CMS and, in particular, LHCb seem certain to be a highlight of the LHC era.

Antiprotons could help fight against cancer

A pioneering experiment at CERN with potential for cancer therapy has produced its first results. Exploiting the unique capability of CERN’s Antiproton Decelerator to produce an antiproton beam at the right energy, the Antiproton Cell Experiment (ACE) has shown that antiprotons are four times more effective than protons for cell irradiation.

Cancer therapy is a battle against collateral damage: the aim is to destroy the tumour while sparing the healthy tissue around it. Unwanted exposure of healthy tissue can cause side effects and reduce quality of life; it is also believed to increase the chance of secondary cancers developing. In radiation therapy there is therefore an ongoing quest to reduce the dose delivered to tissue outside the primary tumour volume.

CCEnew1_12-06

Hadron therapy began in 1946 with Robert Wilson’s seminal paper, “Radiological Use of Fast Protons”. Heavy charged particles (hadrons) largely spare healthy tissue because most of their energy is deposited at the end of their flight path – the Bragg peak – with little before it and almost none beyond. However, the question remains of how to maximize the concentration of energy on the target.
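
As a rough guide to where the Bragg peak sits, the range of protons in water follows the empirical Bragg–Kleeman rule (an illustrative rule of thumb; the coefficients are approximate):

\[ R \approx \alpha E^{p}, \qquad \alpha \approx 2.2\times10^{-3}\ \mathrm{cm\,MeV^{-p}}, \quad p \approx 1.8, \]

so a proton of about 50 MeV stops after roughly 2 cm, the range used in the ACE measurement described below.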

The first speculations that antiprotons could offer a significant gain in targeting tumours through the extra energy released by annihilation date back more than 20 years (Gray and Kalogeropoulos 1984). Now the ACE collaboration has tested this idea by directly comparing the effectiveness of cell irradiation using protons and antiprotons.

To simulate a cross-section of tissue inside a body, the experiment uses tubes filled with live hamster cells suspended in gelatine. These are irradiated with beams of protons or antiprotons, at a variety of intensities, with a range of about 2 cm in water. After irradiation the gelatine is extruded from the tubes and cut into 1 mm slices. These are dissolved in growth medium and the cells are placed in Petri dishes in an incubator. After a few days, colonies of healthy offspring from the surviving cells become visible to the naked eye. This gives a measure of cell survival along the beam path for the different dose levels. Cell survival is plotted for the entrance and Bragg-peak regions as a function of particle fluence, and the ratio of the doses giving 20% survival in the two regions is extracted.
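
In outline, the last step amounts to interpolating each survival curve to find the dose at which survival drops to 20%, then taking the ratio of the two doses. The following minimal Python sketch, with purely hypothetical survival numbers (not ACE data), illustrates the procedure:

import numpy as np

# Hypothetical survival fractions versus dose (Gy) for the entrance
# and Bragg-peak regions; illustrative numbers only, not ACE data.
dose          = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
surv_entrance = np.array([0.85, 0.65, 0.48, 0.34, 0.24, 0.16])
surv_peak     = np.array([0.50, 0.22, 0.09, 0.04, 0.015, 0.006])

def dose_at_survival(dose, surv, target=0.20):
    # Survival falls monotonically with dose, so interpolate dose
    # against log(survival), reversed so the x-values increase.
    return np.interp(np.log(target), np.log(surv[::-1]), dose[::-1])

d_entrance = dose_at_survival(dose, surv_entrance)
d_peak = dose_at_survival(dose, surv_peak)

# The ratio of entrance to peak dose for equal (20%) cell killing:
# larger values mean the damage is more concentrated in the peak.
print(f"entrance dose at 20% survival: {d_entrance:.2f} Gy")
print(f"peak dose at 20% survival:     {d_peak:.2f} Gy")
print(f"dose ratio (entrance/peak):    {d_entrance / d_peak:.2f}")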

CCEnew2_12-06

Comparing beams of protons and antiprotons that cause identical damage at the entrance to the target, the results show that the damage inflicted on cells at the end of the beam is four times higher for antiprotons (Holzscheiter et al. 2006). The method directly samples the total effect of the beams on the cells, combining the enhanced energy deposition in the vicinity of the annihilation point with the higher biological effectiveness of this extra energy (delivered by nuclear fragments). Equivalently, for the same damage at the Bragg peak, an antiproton beam inflicts significantly less damage than a proton beam on the healthy cells along the entrance channel.

While antiprotons may seem unlikely candidates for cancer therapy, the initial results from ACE indicate that these antimatter particles could lead to more effective radiation therapy. There is no doubt, however, that the first clinical application is still at least a decade away.
