KEK achieves first operation of crab cavities

A team of accelerator physicists at KEK has achieved effective head-on collisions of electrons and positrons while retaining the crossing angle. They accomplished this at the KEKB collider using new devices called "crab cavities". This success will pave the way to increasing KEKB’s luminosity – already the world’s highest – to an unprecedented level.

At KEKB, the electron and positron bunches cross at an angle of 22 mrad (1.3°). This non-zero crossing angle is one of the machine’s novel design features: it provides effective beam separation at the collision point without a high level of background in the detector. To boost the luminosity further, however, it is necessary to tilt the bunches of electrons and positrons so that they collide effectively head-on while still crossing at an angle.

To accomplish this goal, the team at KEKB built several "crab cavities", special superconducting radio-frequency cavities that tilt each bunch sideways – somewhat like the way a crab walks. The concept was first suggested by R Palmer almost 30 years ago for linear electron–positron colliders, and K Oide and K Yokoya proposed the use of crab cavities in storage rings around 10 years later (Oide and Yokoya 1989). This was followed by designs and prototype models of the crab cavity by K Akai as part of a collaboration between the KEK and Cornell laboratories around 1992. Detailed engineering and prototyping were then done at KEKB by K Hosoyama’s team, with the first full-size cavities being developed after a long struggle and installed in January 2007 (figures 1 and 2). Commissioning at KEKB started in February and continued until the end of June (Abe et al. 2007).

The crab cavities achieved a beam–beam tune shift at low currents comparable to the record achieved at LEP-II at CERN (figure 3). This quantity, which is proportional to the luminosity divided by the product of the beam currents, is a measure of luminosity potential. The team was recently able to operate the machine at high beam currents (1300 mA in the low-energy positron beam and 700 mA in the high-energy electron beam) at a luminosity above 10³⁴ cm⁻² s⁻¹.

These results from the first round of commissioning demonstrate the potential of the crab cavities, which according to simulations may eventually improve the luminosity by a factor of two. More commissioning runs and R&D will enable further increase in performance.

The KEKB collider operates at the Υ(4S) resonance and is used for studies of matter–antimatter asymmetry with beauty quarks and searches for new physics by the Belle experiment (CERN Courier June 2006 p22). For the future, a super B-Factory using crab cavities to achieve luminosities two orders of magnitude higher than existing accelerators is under discussion in Japan, and a competing proposal is also being discussed in Italy. Such a machine has the potential to discover physics beyond the Standard Model in rare decays. Crab cavities will also play a role in achieving high luminosity at other machines with a crossing angle, including the proposed International Linear Collider, upgrades of the LHC at CERN, and future synchrotron light sources.

NSCL discovers the heaviest known silicon isotope to date

Researchers at the National Superconducting Cyclotron Laboratory (NSCL) at Michigan State University have produced the heaviest silicon isotope ever observed. The recent identification of 44Si expands the chart of known isotopes and lays the groundwork for the future study of rare, neutron-rich nuclei.

Beyond a certain range of combinations of protons and neutrons, nuclei cannot form at all: any additional nucleons are unbound and immediately leave the nucleus. Pursuit of this limit, known as the drip line, has proved to be a scientific and technical challenge – particularly when it comes to neutron-rich nuclei. While the proton drip line has been mapped out for much of the chart of nuclei, the neutron drip line is known only up to oxygen (Z = 8). Producing isotopes at or near the neutron drip line remains a long-standing goal in experimental nuclear physics. For example, 43Si was detected for the first time at Japan’s Institute of Physical and Chemical Research (RIKEN) in 2002 (Notani et al. 2002). That same year, researchers at the GANIL laboratory in France detected the neutron-rich isotopes 34Ne and 37Na (Lukyanov et al. 2002).

In the 44Si experiment conducted at the NSCL Coupled Cyclotron Facility in January, a primary beam of 48Ca was accelerated to 142 MeV/u and directed at a tungsten target. Downstream from the target, the beam was filtered through NSCL’s A1900 fragment separator. Eventually, some 20 different isotopes (including three nuclei of 44Si) hit a set of detectors that could identify each ion as it arrived (Tarasov et al. 2007).

The study was intended to document the yield of isotopes containing 28 neutrons that lie between 48Ca (the nuclei in the beam) and 40Mg to extrapolate the expected yields in this region. 40Mg is yet to be observed, and according to some theories should be on the drip line. Knocking out only protons from 48Ca could create these isotopes, although this is a difficult feat because of the larger number of neutrons in the beam nuclei. The production of 44Si is therefore an even greater feat, given that the collision must also transfer two neutrons from the tungsten target to the beam nucleus as it speeds past. The observation of 44Si in the A1900 fragment separator stretches the limits of its single-stage separation. The excessive number of particles that come along with the rare nuclei can swamp the detectors used to identify the beam in the separator. The next-generation technique will use two-stage separation, delivering fewer particles to the detectors as more are filtered out travelling down the beamline.

Researchers are developing new two-stage separators that could run experiments with higher initial beam intensities, which offer a better chance of generating the sought-after, near-dripline nuclei. Preliminary testing on a new two-stage separator at NSCL has delivered promising results. Also, a new device has just been constructed at RIKEN in Japan, and one is planned for GSI in Germany. Nuclear scientists at NSCL hope that two-stage separation will help uncover the next generation of rare isotopes.

ATLAS toroid endcaps and LHCb beam pipe take their final steps

In the early hours of 13 June, the first of the two gigantic toroid magnet endcaps touched the ATLAS cavern floor. The second endcap followed suit on 12 July. With this delicate operation complete, the ATLAS collaboration has now finished lowering all of the large heavy pieces of detector into the cavern.

The last steps were not without challenges, particularly for the first endcap. These included removing a 5 m portion between the top of the main door and the roof to fit the 13 m high, 240 tonne endcap into the building. Once inside, it was lifted by a mobile crane and secured to two gantry cranes on either side of the entry shaft.

Another issue was that the endcaps were higher than the 2 × 140 tonne overhead travelling crane used to lower them down to the cavern floor. In order to secure an endcap to this crane, it first had to be suspended by the two gantry cranes and lowered 5 m to the correct height using a system of jacks. At the end of the 80 m journey down the shaft, each endcap was placed between the barrel part of the detector and the wheels of the endcap muon chambers with a precision of 2 mm and a margin of 10 cm on either side.

The LHCb collaboration has meanwhile completely installed, interconnected, pumped down and baked out all four sections of the LHCb beam pipe, which includes a section that connects to the vacuum vessel containing the VErtex LOcator (VELO). The largest of the four conical sections is composed of stainless steel and the others are made of beryllium to minimize background in the experiment. One of the more challenging tasks was the installation of the longest section of beryllium beam pipe (6 m) through the RICH2 detector in January, which used temporary rails to guide it through the inner tube with a leeway of only 4 cm. In February, a crane was used to lift the 160 kg stainless steel section and position it in the middle of the iron walls of the muon system on supports that align it with the beamline.

After all of the installation work, the next step was to pump the beam pipe down to a pressure of 10⁻⁷ mbar. During the following bake-out and non-evaporable getter (NEG) activation, the VELO was heated to 150 °C and the NEG reached 220 °C to obtain an ultra-high vacuum inside the beam pipe. Once the bake-out was complete, the pressure had gone down to 10⁻¹¹ mbar.

First LHC sector and inner triplets pass the latest tests

On 13 July, an inner triplet assembly of quadrupole magnets successfully completed a pressure test in the LHC tunnel, after installation of metal cartridges to reinforce internal support structures that broke in an assembly during an earlier pressure test in March. The triplet, which included three quadrupole magnets and the associated cryogenic and power distribution box (DFBX), met all test specifications at the requisite pressure of 25 atm for one hour.

The triplets will focus particle beams prior to particle collisions at the four interaction regions in the LHC. The pressure test is designed to verify the accelerator components in conditions that will occur during LHC operations. To withstand the asymmetrical forces generated, the Q1 and Q3 magnets at either end of the triplet assembly had each been fitted with a set of four metal cartridges to limit movement of the magnets inside their cryostats. The cartridges have a compound design consisting of an aluminium-alloy tube and an Invar rod to allow them to function over a broad range of temperatures.

To address design flaws that emerged during the March pressure test, a team from CERN, Fermilab, KEK and the Lawrence Berkeley National Laboratory also made changes to the DFBXs and the attachment of the triplet to the tunnel floor. These changes passed the test on 13 July.

Fermilab, in collaboration with CERN and KEK, supplied eight sets of triplets – one for either side of each of the interaction regions, plus one spare set. About half of the quadrupole magnets were repaired by the end of July, with the remaining repairs estimated to take six weeks to complete. This will be followed by the installation of assemblies and interconnections between quadrupole magnets, DFBXs and the rest of the accelerator. The inner triplets will then become part of the different sectors of the LHC and will be tested as part of the pressure tests of all sectors.

In the meantime, electrical tests have continued on the first sector to be commissioned (sector 7-8), which was initially cooled down in April. On 25 May, the dipole circuits were successfully powered up to several thousand amps, followed by the quadrupole circuits on 20 June. These currents were still below the nominal values. Depending on the type of superconducting magnet, the nominal current of the electrical circuits varies between 60 A and 12 kA. During the tests, however, some circuits were powered up to the nominal current and quenches were triggered deliberately to test the protection system as well as the system for extracting the stored energy in the magnets.

These power tests were the culmination of several weeks of electrical tests on sector 7-8. More than 100 electrical circuits for the superconducting magnets were checked one by one. An overall test, where all of the circuits were powered up overnight, also took place to ensure that they perform correctly over a prolonged period. Finally, a power cut was simulated at point 8 for the teams to verify that all of the systems were being supplied with power from the scheduled source – whether the normal or back-up power supply. Sector 7-8 will now be warmed up so that the triplet magnets to the left of point 8 can be connected up and some consolidation work can take place.

Dark Side of the Universe. Dark Matter, Dark Energy, and the Fate of the Cosmos

By Iain Nicolson, Canopus. Hardback ISBN 0954984633, £19.95.

If you are a particle physicist interested in cosmology, this book is for you. It gives a broad, clear and precise overview of our current understanding of dark matter and dark energy – the invisible actors governing the fate of the universe.

It is a challenge to try to make these apparently obscure concepts familiar to any motivated reader without a scientific background. But the author, Iain Nicolson, has been entirely successful in his enterprise. With a pleasant balance between text and colourful illustrations, he guides the reader through a fascinating, invisible and mysterious world that manifests its presence by shaping galaxies and the universe itself.

The book starts with an introduction to key concepts in astrophysics and the development of classical cosmology. It then describes the observational evidence for dark matter in galaxies and clusters of galaxies, showing that massive extremely dim celestial bodies cannot account for the missing mass. Particle physics is not neglected, with a description of our understanding of ordinary “baryonic” matter and the quest for detecting exotic weakly interacting massive particles (WIMPs). An entire chapter is also devoted to the idea that modified Newtonian dynamics (MOND) could be an alternative to the existence of dark matter. The second half of the book is devoted to cosmological observations and arguments that suggest the existence of dark energy – an even more mysterious ingredient of the universe. The pieces assemble through these chapters to reveal a universe that is flattened out by inflation and that is essentially made of cold dark matter, with dark energy acting as a cosmological constant.

This new cosmology is generally accepted as the standard model and gives the full measure of the dark side of the universe. The visible matter studied by astronomers appears to be just the tip of the iceberg (less than 1%), and even the baryonic matter familiar to physicists accounts for only about 5% of the mass–energy content of the universe. The remaining 95% is unknown territory, which the book invites us to explore using all techniques available. This will be the major challenge for physics in the 21st century.

LHCb prepares for a RICH harvest of rare beauty

When the LHC starts up at CERN, it will provide proton collisions at higher energies than any previous accelerator and at high collision rates. While these conditions should reveal new high-energy phenomena, such as the Higgs mechanism and supersymmetry, they will at the same time open a different window onto new physics through the study of rare processes among existing particles in the Standard Model. This is the territory that the Large Hadron Collider beauty (LHCb) experiment will explore.

By undertaking precision studies of the decays of particles that contain heavy flavours of quarks (charm and beauty), LHCb will stringently test our knowledge of the Standard Model. In addition, these studies will search for new particles beyond the Standard Model through their virtual effects – just as the mass of the top quark was known well before it was directly observed. The results will provide a profound understanding of the physics of flavour and will cast more light on the subtle difference between matter and antimatter that is manifest in CP violation.

Good particle identification is a fundamental requirement

The LHCb detector looks very different from the average hadron collider detector – indeed, it looks more like a fixed-target detector (figure 1) – because of its focus on heavy-flavour particles. This choice of detector geometry is motivated by the fact that, at high energies, B (and D) hadrons and their antiparticles are produced predominantly at low angles and in the same "forward" cone. The detector geometry is optimized to detect these forward events efficiently.

LHCb’s physics programme depends on being able to distinguish between the particle species produced, so good particle identification is a fundamental requirement. The LHCb detector contains calorimeters and muon chambers to identify electrons, photons and muons. But to separate pions, kaons and protons in selected decays, a different, powerful technique comes into play. This is the ring imaging Cherenkov (RICH) detector, first proposed at CERN in 1977 by Jacques Séguinot and Tom Ypsilantis, who was a member of the LHCb collaboration until his death in 2000.

The basic idea is that when a charged particle passes through a medium faster than the speed of light in that medium, it will emit Cherenkov radiation (named after the 1958 Nobel prize winner Pavel Cherenkov, who was the first to characterize the radiation rigorously). The effect is a shock wave of light, similar to the sonic boom of an aircraft travelling faster than the speed of sound.

This radiation is emitted at an angle to the direction of motion of the particle, forming a cone of light around the particle’s track. The angle of emission, θ, depends on the velocity of the particle but not on its mass, with cosθ = 1/nβ, where n is the refractive index of the medium and β is the velocity relative to the velocity of light in free space, c. Combining this velocity information with a measurement of the momentum of the particle (using tracking detectors and a known magnetic field) yields the mass of the particle and therefore its identity.
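
As a purely numerical illustration of this relation – a sketch with assumed values, not LHCb code – the few lines below invert cosθ = 1/nβ and p = γmβc to recover a mass hypothesis from a measured ring angle and a tracked momentum. The function names, the refractive index n = 1.0014 and the 10 GeV/c track are all illustrative assumptions:

import math

def beta_from_cherenkov(theta_rad, n):
    # Cherenkov relation: cos(theta) = 1 / (n * beta)
    return 1.0 / (n * math.cos(theta_rad))

def mass_from_ring(p_gev, theta_rad, n):
    # From p = gamma * m * beta * c (natural units): m = p * sqrt(1/beta^2 - 1)
    beta = beta_from_cherenkov(theta_rad, n)
    if beta >= 1.0:
        raise ValueError("reconstructed beta >= 1 is unphysical")
    return p_gev * math.sqrt(1.0 / beta ** 2 - 1.0)

# A 10 GeV/c track radiating at about 51 mrad in a gas with n = 1.0014
# reconstructs to roughly 0.14 GeV/c^2, i.e. a pion; a kaon of the same
# momentum would emit at a markedly smaller angle (around 19 mrad).
print(mass_from_ring(10.0, 0.051, 1.0014))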

The simplest Cherenkov detectors are threshold devices that only produce a signal if the velocity of a charged particle exceeds the minimum necessary to produce Cherenkov radiation in a particular medium, or “radiator”. Taken together with a momentum measurement, this allows particles that are heavier than a certain mass to be separated from lighter ones. Such detectors have been employed in many experiments since the 1950s, for example in the classic detection of the antiproton at Berkeley – an experiment in which the young Ypsilantis participated.

Rings and radiators

The RICH detector is a far more sophisticated development. In a RICH device, the cone of Cherenkov light emitted in the radiator is detected on a position-sensitive photon detector. This allows the reconstruction of a ring or disc, the radius of which depends on the emission angle, θ, and hence on the velocity of the particle. In the RICH used by LHCb, the photons are collected by a spherical mirror and focused onto an array of photon detectors at the focal plane (figure 2 shows the principle in LHCb’s RICH1 detector). Because the radiation is focused, the photons form a ring with a radius that depends on the emission angle, θ, but not on where the light is emitted along the particle track.

The choice of which radiator to use is crucial, as every medium has a restricted velocity range over which it can usefully identify particles. Too low a velocity, and the particle will produce no light; too high, and the Cherenkov angle for all particle species will saturate to a common value, making identification impossible. It was therefore important for LHCb to choose a medium, or combination of different media, that would be effective over the full momentum range of interest – from around 1 GeV/c, up to and beyond 100 GeV/c. To achieve this coverage, the experiment uses a combination of three radiators – aerogel, perfluoro-n-butane (C4F10) and carbon tetrafluoride (CF4).

Silica aerogel is a colloidal form of solid quartz, but with an extremely low density and a relatively high refractive index (1.01–1.10), which makes it perfect for the lowest-momentum particles (of the order of a few GeV/c). One of the key design issues for LHCb was the use of aerogel in ring-imaging mode. This was a new idea, inspired by the development of much higher-quality, very clear aerogel (figure 3). Previously, the material had only been used in threshold counters. To cover the regions of medium and high momentum, LHCb uses a combination of C4F10 and CF4 radiators for momenta from around 10 GeV/c to around 65 GeV/c, and from around 15 GeV/c to more than 100 GeV/c, respectively.
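
The momentum ranges quoted above follow from the Cherenkov threshold condition β > 1/n: a particle of mass m only radiates once its momentum exceeds m/√(n² - 1). The short sketch below is illustrative only – the refractive indices are indicative round numbers, not the exact LHCb operating points – but it shows how the thresholds for pions, kaons and protons map onto the low-, medium- and high-momentum regions covered by the three radiators:

import math

# Indicative refractive indices only (the aerogel range is quoted as 1.01-1.10
# in the text; the gas values are typical round numbers, not exact LHCb values).
RADIATORS = {"aerogel": 1.03, "C4F10": 1.0014, "CF4": 1.0005}
MASSES_GEV = {"pion": 0.140, "kaon": 0.494, "proton": 0.938}

def threshold_momentum(mass_gev, n):
    # Cherenkov light is emitted only for beta > 1/n, i.e. p > m / sqrt(n^2 - 1)
    return mass_gev / math.sqrt(n ** 2 - 1.0)

for radiator, n in RADIATORS.items():
    summary = ", ".join(
        f"{name}: {threshold_momentum(m, n):.1f} GeV/c"
        for name, m in MASSES_GEV.items()
    )
    print(f"{radiator} (n = {n}): {summary}")

With these assumed indices the kaon thresholds come out near 2, 9 and 16 GeV/c, consistent with aerogel handling the few-GeV/c region and the two gases taking over from around 10 and 15 GeV/c respectively.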

The early design of the system had three separate detectors, one for each radiator, but for a variety of reasons it proved more practical to combine the aerogel and C4F10 radiators into a single device with wide acceptance. This is the RICH1 detector, which is located upstream to detect the low-momentum particles (figure 1). The CF4 radiator is housed in RICH2, downstream of the tracking system and the LHCb magnet. This has an acceptance that is limited to the low-angle region where there are mostly high-momentum particles.

One challenge in both cases was to minimize the amount of material within the detector acceptance. Therefore, the designs were changed at an early stage to tilt the focusing mirrors slightly and to introduce secondary flat mirrors that bring the Cherenkov radiation right out of the detector acceptance. This allows for a smaller photon-detector area and a more compact system.

A more radical redesign took place later, as the engineering designs for the various subdetectors became more realistic. It became clear that LHCb had too much material and needed re-designing. The challenge was also to improve the trigger performance by increasing the precision of the momentum measurement, and this required increasing the magnetic field in the region of the VErtex LOcator (VELO) and the trigger tracker (TT) between RICH1 and the dipole magnet (see figure 1).

While RICH2 remained relatively unaffected, RICH1 underwent a major redesign. To protect the sensitive photon detectors from the greatly increased magnetic field, extremely heavy iron shielding had to be added to the apparatus. Accommodating these shields in the very congested region of LHCb’s experimental area near RICH1 was a major challenge.

Seeing the light

Particles produced in the collisions in LHCb will travel through the mirrors of RICH1 prior to reaching measurement components further downstream. To reduce the amount of scattering, RICH1 uses special lightweight spherical mirrors constructed from a carbon-fibre reinforced polymer (CFRP), rather than glass. There are four of these mirrors, each made from two CFRP sheets moulded into a spherical surface with a radius of 2700 mm and separated by a reinforcing matrix of CFRP cylinders. The overall structure contributes about 1.5% of a radiation length to the material budget of RICH1. As RICH2 is located downstream of the tracking system and magnet, glass could be used for its spherical mirrors, which in this case are composed of hexagonal elements (see cover).

Perhaps surprisingly, the "flat" secondary mirrors in the RICH detectors are not truly flat. Producing completely flat, but thin, mirrors is a difficult technological challenge because it is hard to maintain their rigidity over a long period of time. Instead, giving the mirrors a small amount of curvature (a radius of curvature greater than 600 m in RICH1 and around 80 m in RICH2) increases their structural integrity. The small distortions that this curvature introduces to the images of the Cherenkov ring can be corrected with software during data analysis, and therefore do not degrade the final performance of the system.

The experiment requires 484 tubes in total

Both RICH detectors use hybrid photon detectors (HPDs) to measure the positions of the emitted Cherenkov photons. The HPD is a vacuum photon detector in which a photoelectron, released when an incident photon converts within a photocathode, is accelerated by a high voltage of typically 10–20 kV onto a reverse-biased silicon detector. The tube focuses the photoelectron electrostatically – with a demagnification factor of around five – onto a small silicon detector array.

The LHCb collaboration has developed a novel dedicated pixel–HPD for the RICH detectors, working in close co-operation with industry. Here, the silicon detector is segmented into 1024 “super” pixels, each 500 μm × 500 μm in area and arranged as a matrix of 32 rows and 32 columns. When a photoelectron loses energy in silicon, it creates electron-hole pairs at an average yield of one for every 3.6 eV of deposited energy. The nominal operating voltage of LHCb’s HPDs is –20 kV, corresponding to around 5000 electron-hole pairs released in the silicon. Careful design of read-out electronics and interconnects to the silicon detector results in a high efficiency for detecting single photoelectrons. The experiment requires 484 tubes in total – 196 for RICH1 and 288 for RICH2 – to cover the four detection surfaces.
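
As a back-of-the-envelope check of these figures – a sketch using only the numbers quoted above, not a detector calibration – the signal from a single photoelectron follows from the accelerating voltage and the 3.6 eV pair-creation energy:

# One photoelectron accelerated through 20 kV deposits about 20 keV in the silicon.
ACCELERATING_VOLTAGE_KV = 20.0   # nominal HPD operating voltage quoted in the text
PAIR_CREATION_ENERGY_EV = 3.6    # average energy per electron-hole pair in silicon

kinetic_energy_ev = ACCELERATING_VOLTAGE_KV * 1e3
pairs = kinetic_energy_ev / PAIR_CREATION_ENERGY_EV
print(f"~{pairs:.0f} electron-hole pairs per photoelectron")   # about 5500

This simple ratio gives roughly 5500 pairs, of the same order as the quoted figure of around 5000; in practice some of the photoelectron’s energy is deposited before it reaches the sensitive silicon, which is presumably why the working number is slightly lower.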

Testing times

To verify the quality of the HPDs and the associated components in the low-level data acquisition (DAQ), the LHCb collaboration has conducted a series of RICH test-beam exercises, most recently during September 2006 in the North Area at CERN’s Prévessin site. In the test beam, the apparatus consisted of a gas vessel filled with either nitrogen (N2) or C4F10 as the radiator medium, together with a housing for the photo-detectors that was separated from the gas enclosure by a transparent quartz window. The test beam from the SPS consisted mainly of pions, with small contributions from electrons, kaons and protons, and had a 25 ns bunch structure, the same as will be provided by the LHC.

Columns of 16 HPDs observed the Cherenkov radiation emitted by the particles as they traversed the gas enclosure. The ring of Cherenkov light illuminated either one HPD, when using N2 as radiator, or up to four neighbouring HPDs with the C4F10 radiator (figure 4). The resulting data were recorded using final versions of the DAQ electronics and pre-production releases of the LHCb online software environment. An early version of the LHCb online monitoring kept a check on the status of the test set-up and the quality of the recorded data.

The analysis of the recorded test-beam data using the full LHCb reconstruction and analysis software involved a significant effort, but the results made it worthwhile. The tests verified the design specifications of the HPDs in a “real life” environment, with the measurement of properties such as the photoelectron yield and the resolution of the Cherenkov angle reconstructed from the data. Using the official LHCb software framework for the analyses also allowed the quality of the software to be verified with real data, so the team could spot any issues not seen in earlier simulation studies. The evaluation of the beam-tests indicates so far that all the hardware and software components involved in the tests match – or exceed – expectations, successfully passing an important milestone on the way to the start-up of the LHCb experiment.

The full LHCb detector has been extensively modelled in a detailed simulation, based on the Geant4 software package, taking into account all important aspects of the geometry and materials together with a full description of the optics of the RICH detectors. This has provided a platform for the development of sophisticated analysis software to reconstruct the events and provide excellent particle identification. Figure 5 shows an example of the complex event environment that LHCb will face in collisions at the LHC. To disentangle the event, the analysis performs a combined likelihood fit to all known tracks in the event. By considering all tracks and radiators in a single fit, the algorithm naturally accounts for the dominant background to a given ring, namely the neighbouring rings.

Figure 6 illustrates just how powerful this technique is. Here, using the detailed Geant4 simulations, the mass peak for the decay Bs → KK is shown, together with the background contributions from other two-body decays. Without the kaon identification capabilities provided by the RICH detectors, the Bs signal is swamped by background. Such efficient hadron identification will be a crucial component in the successful analysis of LHCb data.

Currently, the RICH group is fully focused on the commissioning of the RICH detectors at the experimental area at Point 8 on the LHC ring. The RICH2 detector is completely installed and the HPDs and readout systems are being commissioned. The magnetic shielding and radiator enclosure for RICH1 are in place, and installation of all HPDs and optics will be completed later this year. Commissioning of the detector control and safety systems, together with the readout and DAQ systems, is also progressing at full speed. Everything is on track to have the system fully functional and ready for action for first data in 2008.

Heavy-ion workshop looks to the future

When the LHC starts up, heavy-ion physics will enter an era where high transverse-momentum (pT) processes contribute significantly to the nucleus–nucleus cross-section. The LHC will produce very hard, strongly interacting probes – the attenuation of which can be used to study the properties of the quark–gluon plasma (QGP) – at sufficiently high rates to make detailed measurements. At the LHC, high rates are for the first time expected at energies at which jets can be fully reconstructed against the high background from the underlying nucleus–nucleus event.

To prepare for the new high pT and jet analysis challenges, the physics department at the University of Jyväskylä, Finland, organized the five-day Workshop on High pT Heavy-Ion Physics at the LHC. More than 60 participants attended the workshop, ranging from senior experts in heavy-ion physics to doctoral students. It brought together physicists from operating facilities – mainly RHIC at the Brookhaven National Laboratory (BNL) – as well as from future LHC experiments (ALICE, ATLAS and CMS), and included valuable contributions from theorists. Jyväskylä in early spring, coupled with reindeer-meat dinners and animated student lectures in the evening, created a superb atmosphere for many discussions of physics, even outside of the official programme.

Mike Tannenbaum of BNL gave an opening colloquium which looked back to the 1970s. He listed old results that raised the same questions that are the focus of today’s discussions. Many recent questions in high-pT physics can be traced back to the 1970s at CERN, with proton–proton (pp) collisions at the ISR, which were followed in the early 1980s by proton–antiproton collisions at the SPS. This was when jet physics was born and the first methods of jet analysis were developed. It was reassuring to learn that many CERN results remain valid and that recent thinking is really based on those early understandings. On the other hand, many ideas still remain in a premature state. Only the high-luminosity experiments at RHIC and the LHC are – or will be – able to investigate certain phenomena and measure their effects more precisely. These pp data are therefore very important, not merely because they serve as a baseline for understanding results in heavy-ion collisions.

Striking gold at RHIC

Several presentations at the workshop reviewed results from RHIC on single-particle spectra and two-particle correlations at high pT. Striking effects have been observed in central gold–gold collisions. Among the most prominent are the suppression of high-pT particles and the suppression of back-to-back correlations. These results show that the jet structure is strongly modified in dense matter, consistent with perturbative QCD calculations of partonic energy loss via induced gluon radiation.

The first photon data have shown no nuclear effects up to 10–12 GeV/c, in line with the general expectation that photons (with no colour charge) have no final-state interaction with the deconfined matter that is produced. However, the recent measurement by the PHENIX experiment indicates unexpected suppression, by a factor of two, of photon production in the region above 15 GeV/c – this is almost as large as in the case of light mesons (figure 1). This surprising observation ignited great excitement at the workshop, leading to further discussion of what the possible consequences for LHC physics might be. Any detailed study, however, should await the release of the final data.

Data on heavy flavours from RHIC experiments have also provided puzzles. The measured suppression of heavy-flavour pT spectra, which is close to that of light flavours, cannot be explained by radiative energy loss alone and requires a contribution from elastic scattering. Further issues can be addressed by analysis of dijet topology or by the use of two- or multi-particle correlation techniques. Several experimental and theoretical presentations given at the meeting examined the possibility of using multi-particle correlation and photon–hadron correlations to study the partonic pT distributions, fragmentation functions, jet shape and other parton properties sensitive to the details of parton interactions with excited nuclear matter.

Another series of talks investigated the features of the parton coalescence process, which is supported by a large amount of experimental data on particle spectra and asymmetrical flow production. On the other hand, jet-orientated analysis of different data (for example, Ω-charged hadron correlations) does not show the behaviour expected from quark coalescence. Therefore, further work is needed to understand this puzzling situation before the new experiments begin at the LHC.

Towards the LHC

Among the four large LHC experiments, ALICE is the one that is optimized for heavy-ion physics. The CMS and ATLAS collaborations have also established a heavy-ion programme, which will certainly strengthen the field. The workshop heard about the capabilities of the three experiments for jet reconstruction and analysis of jet structure. The large background from the underlying event is a challenge for all experiments, requiring the development of new techniques for background subtraction. The strength of the ATLAS and CMS experiments is their full calorimetric coverage, and therefore large measured jet rates, which will allow them to measure jets in central lead–lead collisions up to 350 GeV and to perform Z0-jet correlation studies. ALICE will use the combination of its central tracking system and an electromagnetic calorimeter to measure jets. The smaller acceptance of the detector will limit the energy range to about 200 GeV. The strength of ALICE lies in its low-pT and particle identification capabilities. These allow ALICE to measure fragmentation functions down to small momentum fractions and to determine the particle composition of jets (figure 2).

A consistent theoretical approach to describe jet measurements in heavy-ion collisions can only be obtained through detailed Monte Carlo studies of jet production and in-medium modifications. They are needed to optimize the data analysis and to discriminate between different models. Some new event-generators adapted for the challenges of LHC physics (PyQuench, HydJet, HIJING-2) were also discussed during the meeting.

The workshop also examined the recent interest in understanding strongly interacting particles using conjectures from string and higher-dimensional physics. Stan Brodsky of SLAC gave a summary of his understanding of the many QCD effects that appear in kinematical regions not testable by perturbative QCD, where anti-de Sitter space/conformal field-theory models could come into consideration. In the duality picture, due to Juan Maldacena, the strongly interacting quark and gluon fields produced in heavy-ion collisions can be treated as a projection onto a higher-dimensional black-hole horizon. The equation of motion on the black-hole horizon could become analytically solvable, in contrast to the vastly complicated numerical (lattice) approach in non-perturbative QCD. The new experiments at LHC energies may shed more light on the role of extra dimensions in curved space and could initiate a revolution in the description of strongly interacting matter.

The next workshop on this topic will be in Budapest in March 2008 and will offer the opportunity to display the latest theoretical results before the LHC is running with pp collisions at 14 TeV.

DIS 2007: physics at HERA and beyond

Exceptionally beautiful weather, Munich’s Holiday Inn hotel and the Gasteig, a modern cultural centre, combined to provide a pleasant and stimulating atmosphere for DIS 2007, the 15th International Workshop on Deep-Inelastic Scattering (DIS) and Related Subjects. Held on 16–20 April, the workshop united more than 300 physicists from around the world, including an encouraging number of students. The programme contained reviews of progress in DIS and QCD, as well as presentations of the latest results from HERA, the Tevatron, Jefferson Lab, RHIC and fixed-target experiments. It also covered related theoretical topics and future experimental opportunities.

With two full days of plenary sessions and six streams of parallel sessions on the other three days, the meeting followed the traditional style of DIS workshops. The parallel sessions covered structure functions and low-x physics, electroweak measurements and physics beyond the Standard Model, heavy flavours, hadronic final states, diffraction and spin physics. A special session that looked to the future of DIS was particularly topical in view of the shutdown at DESY of HERA, the world’s only electron–proton collider, at the end of June.

Yuri Dokshitzer, of the University of Paris VI and VII, opened the scientific programme with a review of recent developments in perturbative QCD (pQCD). He explained his motto “1-loop drill, 2-loop thrill, 3-loop chill” and expressed the hope that higher-order corrections can be calculated with the help of N = 4 super-Yang–Mills quantum field theory.

Latest results

Appetizing glimpses of the many new results from the two collider experiments at HERA featured in talks by Christinel Diaconu from the Centre for Particle Physics in Marseille, and by Massimo Corradi of INFN Bologna, for the H1 and ZEUS experiments, respectively. Both experiments have accumulated a total of 0.5 fb⁻¹ at a proton beam energy of 920 GeV, and analyses of the entire data sample are in full swing. The first H1 and ZEUS combined analysis of xF3 was a clear highlight of the conference (figure 1). This is the structure function that is dominated by photon–Z interference and is sensitive to the valence quarks at low Bjorken-x.

Further highlighted results included new data on neutral-current and charged-current inclusive scattering, jets and heavy-flavour production. These data will serve as input for the next generation of more precise fits for parton distribution functions (PDFs) for the proton – essential for studying physics at the LHC at CERN.

Since mid-March the proton beam energy at HERA has been lowered to 460 GeV to enable, in conjunction with the high-energy data at 920 GeV, a model-free determination of the longitudinal structure function FL. This measurement is essential for a direct extraction of the gluon distribution within the proton and as a consistency check of DIS theory. Beyond the Standard Model, H1 continues to see, with the full statistics at high energy, the production of isolated leptons at a level of 3 σ above the expectation. In contrast, ZEUS sees no deviation from the Standard Model.

With the Tevatron proton–antiproton collider at Fermilab performing well, Giorgio Chiarelli of INFN Pisa was able to show a sample of beautiful new results from the CDF and DØ experiments. For this conference, he presented data corresponding to up to 2 fb⁻¹, covering neutral B-meson oscillations, electroweak physics, jets, searches and results on the production of the top quark, with a new world average for its mass of 170.9 ± 1.8 GeV/c². This new (low) value is interesting since, together with the mass of the W particle, it favours the minimal supersymmetric model.

William Zajc from Columbia University addressed current understanding of particle production in heavy-ion collisions, as studied at RHIC at Brookhaven National Laboratory (BNL). He highlighted several interesting experimental observations, such as "away-side" jet suppression, that cannot be described within current models, but which may be interpreted as a signal for the production of a nearly perfect quark–gluon fluid with very low viscosity.

Turning to spin physics, Jörg Pretz from the University of Bonn gave an overview with emphasis on the nucleon spin puzzle. He presented recent data on helicity distributions for quarks (Δq) and gluons (ΔG), from the HERMES experiment at DESY and COMPASS at CERN, respectively, as well as direct measurements of ΔG from RHIC. He also showed the first combined results on transversity using data from both HERMES and COMPASS as well as from the Belle experiment at KEK. In a related overview of the rich programme at Jefferson Lab, Zein-Eddine Mezani of Temple University in Philadelphia covered measurements of unpolarized and polarized structure functions and transversity, as well as deeply virtual Compton scattering and generalized parton distributions.

Theoretical input

Andreas Vogt of Liverpool University spoke about progress and challenges in determining and understanding the PDFs of the proton in next-to-leading order (NLO) and next-to-NLO. An important improvement in the extraction of PDFs, implemented by the Coordinated Theoretical–Experimental Project on QCD (CTEQ), is the inclusion of the effects of charm-mass suppression in DIS, which results in an increase in the PDFs for the u and d quarks. A dramatic consequence is an increase by about 8% of the W/Z cross-sections expected at the LHC. Rates of W/Z events are foreseen to serve as precision "luminosity meters" for the LHC data-taking.

Gustav Kramer of Hamburg University discussed recent developments in heavy-flavour production and explained the various heavy-flavour schemes used for pQCD calculations. He stressed the importance of interpolating schemes with variable-flavour number and massive heavy quarks (like the general-mass variable-flavour-number scheme) and showed successful comparisons of calculations with data from HERA and the Tevatron.

To allow comparison with experiment, pQCD calculations usually need to be implemented in Monte Carlo generators. Zoltan Nagy from CERN covered this important subject and critically reviewed the various approximations of current implementations of parton showers and their matching to leading order or NLO matrix elements. Nagy expressed concern that current Monte Carlo tools might fail at the LHC and he argued for the development of a new shower concept that allows the shower to be matched to Born and NLO matrix elements.

Raju Venugopalan from BNL covered small-x physics and the expected non-linear effects beyond the conventional Dokshitzer–Gribov–Lipatov–Altarelli–Parisi evolution. He discussed the question of saturation in the context of various models (e.g. colour glass condensate) and data from HERA and RHIC. He also pointed to excellent opportunities at a possible future electron–ion collider (EIC) or even at a “super-HERA” collider such as a large hadron–electron collider (LHeC).

Peter Weisz and Johanna Erdmenger, both from MPI Munich, discussed non-perturbative aspects of QCD. Weisz presented recent algorithmic advances and various results in lattice QCD, indicating progress in the simulation of dynamical quarks beyond the quenched approximation. Erdmenger looked at new approaches that connect string theory and QCD by establishing a connection between a strong coupling (non-perturbative) theory, such as N = 4 SYM (“QCD”), and a “dual” weak coupling theory, such as supergravity. Such a relation – the anti-de Sitter/conformal field theory correspondence – can provide new tools to address problems within QCD.

The seven threads of parallel sessions contained a total of 260 talks. Despite the wonderful weather, the sessions had very good attendance, with many lively and fruitful discussions. The spontaneous formation of two additional topical sessions was very much in the spirit of the workshop. One of these was on αS measurements from HERA and LEP, and one was on the complications involved when dealing with a variable number of quark flavours in QCD fits. On the last day the convenors, usually a theorist and an experimentalist for each working group, summarized the parallel sessions.

Life after HERA

Concluding a special session on the future of DIS, Joel Feltesse of DAPNIA gave a detailed and critical view of future opportunities in DIS. In his opinion, DIS will not stop with the end of data-taking at HERA. There is Jefferson Lab with its upgrade to 12 GeV, and new machines, such as the EICs at Jefferson Lab and BNL, are on the horizon. An LHeC at CERN would offer an attractive physics programme, particularly if the LHC provides an additional physics case for it. The workshop itself concluded with a talk from Graham Ross from Oxford University. He discussed open questions beyond the Standard Model, which provide motivation for the next round of high-energy physics experiments at the LHC.

For the coming years, much careful analysis remains to be done with the data from HERA to achieve the best possible precision. This is expected to yield valuable information for the understanding of QCD and of the data to be produced at the LHC. HERA’s final legacy will be an important asset to high-energy physics. Although the LHC will, we hope, find the Higgs boson and “explain” the mass of gauge bosons, quarks and leptons, it remains the case that the mass of hadronic matter – about 99% of the mass of the visible universe – is entirely dominated by effects due to the strong interaction between gluons and quarks. Deep-inelastic scattering is the tool to study these interactions. It remains to be seen how much progress will be achieved in the future without new data from an electron–hadron collider.

VERITAS telescopes celebrate first light

For three days in late April, collaboration members gathered with invited guests and the general public to celebrate the completion of the Very Energetic Radiation Imaging Telescope Array System (VERITAS). Located near Mount Hopkins in southern Arizona, the new array joins HESS in Namibia, MAGIC in the Canary Islands and CANGAROO-III in Australia in the exploration of the gamma-ray skies at energies from 100 GeV to beyond 10 TeV. The First Light Fiesta included a one-day scientific symposium, a well-attended public lecture, public tours of the new detector and a formal inauguration ceremony followed by an outdoor banquet.

Cherenkov astronomy

VERITAS is the latest stage in the evolution of very-high-energy (VHE) gamma-ray astronomy, a field where many aspects are closer to particle physics than to traditional astronomy. The basic idea is to use the Earth’s atmosphere as the "front end" of the detector, much like a calorimeter in a collider experiment. At high energies, gamma rays initiate extensive air showers in the upper atmosphere, and relativistic particles in these showers radiate Cherenkov photons that penetrate to ground level. An imaging detector located anywhere in the light pool can use the size and pattern of hits in its camera to reconstruct the energy and direction of the shower and, by extension, the primary particle that spawned it. This is the principle of the imaging atmospheric Cherenkov telescope (IACT). The effective area of the detector is the size of the light pool, which is of the order of 100,000 m².

The main background comes from charged cosmic rays, energetic protons and light nuclei, which typically outnumber gamma rays by a factor of more than 1000. These can be rejected by using differences in the morphology of gamma-initiated and hadron-initiated showers that are manifest in the image at the camera’s focal plane. Indeed, it is the cosmic-ray rejection power afforded by multiple views of the shower that has motivated the construction of the modern arrays of IACTs.

The basic technique traces back to the early 1950s, when pioneering measurements were made using instruments built with war-surplus searchlight mirrors. Following a long learning curve, the Whipple collaboration announced the detection of the first VHE source, the Crab Nebula, in 1989. The detector used a 10 m mirror and a pixellated camera, allowing the use of imaging to improve the signal-to-noise ratio. The Crab Nebula, a strong and steady source with a spectrum extending to beyond 50 TeV, has since become the “standard candle” in the field.

During the 1990s, more imaging Cherenkov detectors were built as interest in high-energy gamma-ray astronomy intensified around the world. This was partly because the Compton Gamma-Ray Observatory had been placed in orbit and was discovering dozens of sources at giga-electron-volt energies. Notable among the second-generation detectors was HEGRA, an array of five small telescopes constructed on La Palma in the Canary Islands, which demonstrated the power of “stereo” observations.

Towards the end of the decade, plans began for a third generation of detectors, exploiting the advantages of arrays of large reflectors viewed by fine-grained cameras. The VERITAS collaboration, with members from institutes in the US, Canada, Ireland and the UK, combined the original Whipple group with new collaborators from gamma-ray, cosmic-ray and particle-physics research. Together, they proposed a detector for southern Arizona, built a prototype at the Whipple Base Camp in the summer of 2003 and obtained funding for a four-telescope array later that year.

The final array consists of four IACTs, each of which uses a 12 m-diameter mirror to focus light onto a camera comprising 499 close-packed 29 mm photomultiplier tubes (PMTs). Each mirror is tessellated, with 350 identical hexagonal facets mounted on a steel frame. PMT pulses are digitized by custom-built 500 MS/s flash analogue-to-digital converters and readout is initiated by a three-level trigger, which starts with discriminators on each channel, proceeds to pattern recognition in each camera and finally makes an array-based decision.

First and future light

Although the First Light ceremony was held in April, VERITAS has been making observations in a variety of configurations since 2003, as each telescope has been commissioned. The first stereo observations were made in 2006, when the second telescope was completed in time to detect the blazar Mrk 421 in an active state. More importantly, VERITAS detected a similar source, Mrk 501, during a quiescent phase with a flux of only 0.8 gammas per minute. Such a measurement had not been possible with only one VERITAS telescope. During the 2006–2007 observing season, with two and then three telescopes, VERITAS has measured phase-dependent variable VHE flux from the micro-quasar candidate LSI +61303, and has detected VHE gamma rays from the giant radio galaxy M87, as well as the distant active galaxy 1ES 1218+30.4. Analyses of these and other topics are well in hand for the summer conferences, and the collaboration presented its preliminary findings at the First Light symposium.

The fourth telescope was completed in early 2007 and the array is now the most sensitive gamma-ray telescope in the northern hemisphere. It is able to make a 5 σ detection of a source with a flux level a tenth that of the Crab Nebula in under an hour (the original Whipple detection of the Crab Nebula required more than 50 hours). In the energy range from 100 GeV to 30 TeV, VERITAS’s effective area rises from around 30,000 m² to well over 100,000 m² and its energy resolution is 10–20%. Single-event angular resolution is better than 0.14°, and sources with reasonable flux will be located to better than 100 arc-seconds. The 3.5° field of view, with off-axis acceptance above 65% out to 1° from the centre, will allow sky surveys as well as the mapping of extended sources.

In contrast to collider experiments, where data on different physics topics are accumulated simultaneously with different triggers, telescopes are pointed instruments and a scheduling committee decides where they point. For the first two years of observations, VERITAS will spend half of the available hours on four Key Science Projects (KSPs). The remaining time will be given over to observations proposed by groups within the collaboration.

One KSP is a survey of part of the Milky Way visible from the northern hemisphere, which will search for new sources with fluxes greater than about 5% of the Crab Nebula. Another KSP is an indirect search for dark matter. WIMPs could cluster in gravitational wells such as nearby dwarf galaxies or globular clusters and then annihilate, producing a continuum of gamma rays that may be strong enough to be seen by VERITAS. Although less direct than a search for supersymmetric particles at an accelerator, the gamma-ray technique targets a larger range of candidate masses.

Another KSP concerns galactic sources such as pulsar-wind nebulae and supernova remnants (SNRs), while yet another deals with extragalactic sources known as active galactic nuclei (AGNs). SNRs are interesting because they could possibly be the source of most galactic cosmic rays. With the new-generation detectors, their morphologies can be resolved and this will aid in the understanding of particle acceleration models. Gamma rays from AGNs are thought to originate in their relativistic plasma jets, which are powered by accretion of host-galaxy material by a supermassive black hole. These sources are notoriously time-variable, so the plan is to conduct multi-wavelength campaigns using contemporaneous X-ray, optical and radio observations to uncover the physics processes at work in these high-energy objects.

All observations will be pre-empted when a gamma-ray burst (GRB) occurs in a visible part of the sky and VERITAS turns its attention to it. The telescope is connected to a network that relays GRB detections from spacecraft and the array can slew to any part of the sky at a rate of 1°/s.

Later this year the Gamma-ray Large Area Space Telescope (GLAST) should join the hunt for high-energy gamma rays from a vantage point in orbit around the Earth. With its wide field of view and sensitivity in energy from 20 MeV to more than 100 GeV, it will provide complementary data and increase the scientific reach of the new ground-based observatories. After many years of design, construction and commissioning, the VERITAS collaboration anticipates a rewarding future.

Carbon ions pack a punch

Hirohiko Tsujii is director of the Research Center for Charged Particle Therapy, at the National Institute of Radiological Sciences (NIRS), in Chiba, Japan. He is known internationally for his work on treating cancer with carbon ions and is the first doctor to have treated patients using hadron therapy in a clinical environment. Japan is the first country to have a heavy-ion accelerator for medical purposes, built as part of a national 10-year strategy for cancer control. Since the Heavy Ion Medical Accelerator in Chiba (HIMAC) opened in 1994, the facility has provided treatment for more than 3000 patients with various cancers and has resulted in a significant increase in the number of survivors after treatment. Recently, the Committee of Senior Officials for Scientific and Technical Research (COST) and the European Network for Research in Light-Ion Hadron Therapy (ENLIGHT) invited Tsujii as guest of honour at the COST-ENLIGHT workshop on hadron therapy, held at CERN on 3–4 May.

Tsujii has three decades of experience in developing hadron therapy as a novel treatment for cancer. The deposited radiation dose for charged hadrons (protons and heavier ions) rises to a peak near the end of the particle’s range. The aim with hadron therapy is to use this effect to irradiate tumours, while sparing healthy tissue better than with X-rays. “Before working at NIRS, I was involved with proton-beam therapy at Tsukuba University,” he says. Tsujii also worked on research for pion treatment in the US, where the use of pions in cancer therapy was pioneered at Los Alamos in co-operation with New Mexico University. “The biological effect was not as high as expected and it was also claimed that pions could produce a very nice distribution in the human body,” he explains. “However, compared with hadron therapy, such as with protons or carbon ions, the distribution was not that good. Eventually it was decided to stop the study that used pions.”

Japan is a major pioneer of hadron therapy. Each year, 650,000 people in the country are diagnosed with cancer and the number is expected to increase to 840,000 by 2020. Deep-seated tumours are the most challenging type of cancer and Tsujii has developed a special interest in treating them. Tumours found in the lungs, cervix, head and neck, liver, prostate, or bone and soft tissue, for example, are often treated with hadron therapy as they can be difficult to operate on and conventional radiotherapy is not always as effective.

“The reason we at NIRS decided to use carbon ions rather than protons is that it is the most balanced particle. It has the property of providing a constant treatment to the tumour and also has a higher biological effect on the tumour," explains Tsujii. This means that the carbon-ion beam can be more focused on the tumour, resulting in the greatest cell damage to the tumour with less injury to the surrounding healthy tissue. "Of course, as the mass of the particle increases there is a higher relative biological effectiveness (RBE). But the ratio of RBE between the peak and the plateau [before the peak] gets worse when using a particle with a higher mass. Therefore, when considering the biological effect, the carbon ion is the most balanced."

After treating more than 3000 patients, Tsujii feels that it has been a good decision to use carbon ions in cancer treatment. "There was a lot of discussion in deciding what particle would be best. We decided to choose carbon ions and, for the time being, I am satisfied with this decision." It took several years to arrive at the optimum level of treatment with carbon ions. The local control for almost all types of tumours is 80–90%, and after choosing the optimal level of treatment the local control is expected to be more than 90%.

“Another point that I want to focus on is the use of ‘hypofractionated’ radiotherapy," says Tsujii. A patient treated with photons – X-ray treatment – will, on average, require about 30–40 fractions (doses) over 6–7 weeks. With carbon ions, the treatment can be given in a single day (just one dose or fraction) for stage I lung cancer, while cervical and prostate cancer or other large tumours require only 16–20 fractions against around 40 fractions using conventional treatment. "It is important to note that there is a minimal toxicity to healthy cells. At the beginning we had some severe toxicity, but we analysed the treatment and techniques, and completely overcame the problem we had when we initially started the studies."

As chair of the Particle Therapy Co-operative Group, an international group that coordinates all hadron therapy (such as protons and carbon ions), Tsujii expects carbon-ion therapy to become an increasingly popular choice for oncologists. Even the name of this group suggests the changes taking place: once known as the Society for Proton Beam Therapy, it now reflects the increased development of high-energy radiation treatment with carbon ions.

“I believe that many parts of radiotherapy will be replaced by carbon therapy – it is just simpler in terms of smaller fractions to apply to the patient, compared with photons. It is a rather complicated procedure with carbon ions, but as each part of the procedure is established, once it is decided, the necessary technique is fixed. This means that we can apply the more reliable technique to the patient’s treatment,” says Tsujii.

For small tumours, the results with carbon ions or photons may be similar, such as in early-stage lung cancer, where the tumour is smaller than 3 cm in diameter. If the tumour is larger, then carbon ions prove to be the better treatment. "We are especially interested in the treatment of tumours in the pelvis or spinal area, which are often difficult to treat with surgery, and we have focused on treating bone and soft-tissue sarcoma – large tumours of 10–15 cm diameter – and we are very satisfied with the improved local control and longer survival rates," says Tsujii.

Tsujii has not seen a single case of radiation-induced cancer among the patients treated since starting the carbon-ion treatment for cancer 13 years ago. There is a possibility of some cancer being induced by carbon-ion irradiation, but the distribution close to the target area is much better than in traditional treatment. However, the risks of developing radiation-induced cancer are probably similar for both treatments.

The cost of building these kinds of facilities is something that many governments are considering, including Germany and Italy (CERN Courier December 2006 p17). Japan has started a new carbon-ion therapy facility and two proton therapy facilities at a cost of around €100 m, while in Germany and Italy, new facilities with dual capabilities for using carbon ions and protons are expected to open in 2008, at a cost of €90 m each. Tsujii’s pioneering work seems certain to be expanded to other parts of the world.
