Axions create excitement and doubt at Princeton

The lightweight axion is one of the major candidates for dark matter in the universe, along with weakly interacting massive particles. It originally arrived on the scene about 30 years ago to explain CP conservation in QCD, but there has never been as much theoretical and experimental activity in axion physics as there is today. Last year, the PVLAS collaboration at INFN Legnaro reported an intriguing result, which might indicate the detection of an axion-like particle (ALP) and which has triggered much further theoretical and experimental activity worldwide. The international workshop Axions at the Institute for Advanced Study, held at Princeton on 20–22 October 2006, brought together theorists and experimentalists to discuss current understanding and plans for future experiments. The well-organized workshop and the unique atmosphere at Princeton provided ideal conditions for fruitful discussions.

In 2006, the PVLAS collaboration reported a small rotation of the polarization plane of laser light passing through a strong rotating magnetic dipole field. Though small, the detected rotation was around four orders of magnitude larger than predicted by QED (Zavattini et al. 2006). One possible interpretation involves ALPs produced via the coupling of photons to the magnetic field.

Combining the PVLAS result with upper limits achieved 13 years earlier by the BFRT experiment at Brookhaven National Laboratory (Cameron et al. 1993) yields values of the ALP’s mass and its coupling strength to photons of roughly 1 meV and 2 × 10⁻⁶ GeV⁻¹, respectively (Ahlers et al. 2006). If the PVLAS result is verified, these two values challenge theory, because a standard QCD-motivated axion with a mass of 1 meV should have a coupling constant seven orders of magnitude smaller. Another challenge to the particle interpretation of the PVLAS result comes from the upper limit measured recently at CERN with the axion solar helioscope CAST, which should have clearly seen such ALPs. However, this apparent contradiction holds true only if such particles are produced in the Sun and can escape to reach the Earth.
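
To see why, recall the standard QCD-axion relations between mass and photon coupling (a rough sketch – the numerical coefficients are model-dependent):

  m_a \simeq 6\,\mu\mathrm{eV}\times\frac{10^{12}\,\mathrm{GeV}}{f_a}, \qquad g_{a\gamma\gamma}\sim\frac{\alpha}{2\pi f_a}.

For m_a ≈ 1 meV these give f_a ≈ 6 × 10⁹ GeV and hence g_aγγ ~ 10⁻¹³ GeV⁻¹, indeed some seven orders of magnitude below the PVLAS–BFRT value of 2 × 10⁻⁶ GeV⁻¹.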

So far there is no direct experimental evidence for conventional axions. The first sensitive limits were derived about two decades ago from astrophysics data – mainly from the evolution of stars, where axions produced via the Primakoff effect would open a new energy-loss channel, making stars appear older than they are – and also from experiments searching for axions of astrophysical origin (cavity experiments and CAST, for example) and from accelerator-based experiments. The conclusion was that QCD-motivated axions with masses in the micro-electron-volt to milli-electron-volt range are the most likely – if they exist at all.

The combined PVLAS–BFRT result would fit well into these expectations if the coupling constant were not too large by orders of magnitude. Theoreticians have tried to deal with this problem and develop models in line with the ALP interpretation of the PVLAS data and astrophysical observations. There may be some possibilities involving “specifically designed” ALP properties. However, to the authors’ understanding, such attempts fail if the conclusion announced at the workshop persists: according to preliminary new PVLAS results, the new particle is a scalar, whereas conventional axions are pseudoscalars. Consequently either the interpretation of the data or the experimental results must be reconsidered.

Although the PVLAS collaboration has measured the Cotton–Mouton effect – the birefringence of a gas in a dipole magnetic field – for various gases with unprecedented sensitivity, the workshop openly considered possible systematic uncertainties. While experimental tests rule out many of them, others are still to be checked. For example, the relatively large scatter of individual PVLAS measurements and the influence of indirect effects of magnetic fringe fields remain to be understood. The PVLAS collaboration is therefore planning further detailed analyses.

In search of ALPs

One clear conclusion is the need for more experimental data. A “smoking gun” proof of the PVLAS particle interpretation would be the production and detection of ALPs in the laboratory. In principle the BFRT collaboration has already attempted this in an approach called “light shining through a wall”. In the first part of such an experiment, light passes through a magnetic dipole field in which ALPs would be generated; a “wall” then blocks the light. Only the ALPs can pass this barrier to enter a second identical dipole magnet, in which some of them would convert back into photons (figure 1). Detection of these reconverted photons would then give the impression of light shining through a wall. The intensity of the light would depend on the fourth power of the magnetic field strength and the orientation of the light polarization plane with respect to the magnetic dipole field.
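
This scaling follows from the textbook estimate for coherent photon–ALP conversion in a transverse field of strength B over length L (valid in the small-mass limit; a generic expression, not a calculation specific to any one experiment):

  P_{\gamma\to a}\simeq\left(\frac{gBL}{2}\right)^{2}, \qquad P_{\gamma\to a\to\gamma}=P_{\gamma\to a}\,P_{a\to\gamma}\simeq\frac{(gBL)^{4}}{16},

so the regenerated-photon rate grows as the fourth power of the field, and only the polarization component that couples to the ALP (parallel to B for a pseudoscalar, perpendicular for a scalar) takes part.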

The PVLAS collaboration and other groups are planning a direct experimental verification of the ALP hypothesis. Table 1 provides an overview of some of the approaches presented at the workshop. Besides PVLAS, the ALPS, BMV and LIPSS experiments should take data in 2007. BMV and OSQAR (as well as the Taiwanese experiment Q&A) will check directly the rotation of the light polarization plane that PVLAS claims. The BMV collaboration plans such a measurement in late 2007.

Research during the coming year should therefore clarify the PVLAS claim in much greater detail. The measurement of a new axion-like particle would be revolutionary for particle physics and probably also for our understanding of the constituents of the universe. However, considering the theoretical difficulties described above, a different scenario might emerge. Within a year from now we might be confronted both with an independent confirmation of the PVLAS result on the rotation of the light polarization plane, and simultaneously with only upper limits on ALP production by the light shining through a wall approaches. This situation would require new theoretical models.

The planned experiments listed in Table 1 do not have the sensitivity to probe conventional QCD-inspired axions. In the near future, CAST will be the only set-up able to probe the predictions for solar-axion production. The workshop in Princeton, however, heard about other promising experimental efforts to search directly for axions or other unknown bosons with similar properties. These studies use state-of-the-art microwave cavities – for example, as in ADMX in the US, which is looking for dark-matter axions – or pendulums to search for macroscopic forces mediated by ALPs.

On the theoretical side, as mentioned above, attempts to interpret the PVLAS result have generated some doubts about the existence of a new ALP. Perhaps milli-charged particles inspired by string theory might provide a more natural explanation of the PVLAS result. Researchers are thus discussing novel ideas for turning experimental test benches for accelerator-cavity development into sensitive set-ups to test for milli-charged particles. However, as Ed Witten explained in the workshop summary talk, string theories also predict many ALPs, so perhaps we are on the cusp of discovering an entire new sector of pseudoscalar particles.

In summary, it is clear that small-scale non-accelerator-based particle-physics experiments can make a remarkable contribution to particle physics. Stay tuned for further developments.

The authors wish to thank the Princeton Institute for Advanced Study for the warm hospitality, and especially Raul Rabadan and Kris Sigurdson for their perfect organization of the workshop.

CMS collaboration takes on a cosmic challenge

The strategy for building the CMS detector is unique among the four major experiments for the LHC at CERN. The collaboration decided from the beginning that the large units of the detector would be assembled in a surface hall before complete sections were lowered into the underground cavern. At the time, the main driving factor was the need to cope with the late delivery of the underground cavern, a consequence of running the previous accelerator, LEP, together with civil-engineering works that were complicated by the geology of the terrain. Another goal was to minimize the large underground assembly operations, which would inevitably take more time and be more complex and risky in the confined space of the cavern. As construction and assembly progressed above ground, however, it became clear that there would be a valuable opportunity for system integration and commissioning on the surface.

The complexity of CMS and the other LHC experiments is unprecedented. For this reason, the collaboration believed that the early combined operation of the various subsystems would be an important step towards a working experiment capable of taking data as soon as the LHC provides colliding beams. Initial plans focused on testing the state-of-the-art 4 T solenoid. This would require closing the yoke, already substantially instrumented with muon chambers. Since final elements of other subsystems would also be available by this stage, installed in their final locations, the idea of staging a combined system test in the surface hall became an attractive possibility.

Such a test also required the presence of the full magnet control system and scaled-down versions of the detector control, data-acquisition (DAQ) and safety systems. After much brainstorming and pragmatic criticism, the idea developed into the “cosmic challenge”, for which the overall benchmark of success was the recording, and ultimate reconstruction, of cosmic-muon tracks passing through all sub-detectors simultaneously. This objective alone placed a big demand on the compatibility and interoperability of the sub-detectors, the magnet, the central DAQ, the control and monitoring systems and the offline software. The groups working on the Electromagnetic Calorimeter (ECAL) and the Tracker decided to find the resources to contribute active elements, rather than passive mechanical structures. This was a major factor in the positive feedback that eventually led virtually all of the systems needed to operate CMS in the LHC pilot run to participate in the Magnet Test and Cosmic Challenge (MTCC).

In more detail, the objectives of the cosmic challenge were to: check closure tolerances, movement under a magnetic field, and the muon alignment system; check the field tolerance of yoke-mounted components; check the installation and cabling of the ECAL, the Hadron Calorimeter (HCAL), and Tracker inside the coil; test combined sub-detectors in 20° slice(s) of CMS with the magnet, using as near as possible final readout and auxiliary systems to check noise and interoperability; and last but not least, trigger and record cosmic muons and try out operational procedures.

In addition, the cosmic tests had to make no significant impact on progress in assembling the detector, and had to take place in the shadow of the work on commissioning and field-mapping the magnet. The tests also had to complement the trigger-system (high rate) tests taking place in the electronics-integration centre. Moreover, the aim was to use final systems as far as possible, that is with no (or very few) specific developments for the cosmic test. Another important aspect was to build a fully functional commissioning and operations team of experts from a collaboration that brings together more than 2000 people from laboratories worldwide, transcending linguistic and cultural backgrounds.

In order not to interfere with assembly work, electronics racks and control rooms for the tests were installed just outside the surface-assembly building in a large control barrack recovered from the OPAL experiment at LEP. Substantial investments were nonetheless needed in the surface hall, general and sub-system infrastructure, the triggering system, some temporary power supply systems, and in the tracker “slice” that was specially made for the cosmic challenge within a full replica of the final containment tube.

As the project progressed, the collaboration began to recognize its importance as a first test of intra-collaboration communication and remote participation, and the original scope expanded to include more substantial objectives for offline as well as online systems. A series of Run Workshops, culminating in a readiness review in June 2006, established the final objectives of the project. Weekly Run Meetings, open to all of CMS and eventually becoming daily, also ensured coordination. Ultimately the diligent work of hundreds of people, aided by a little good fortune, transformed the cosmic challenge into a cosmic success for CMS.

Four sub-detectors took part in the challenge. The silicon tracker system comprised 75 modules of the Tracker Inner Barrel in two partially populated layers, 24 modules of the Tracker Outer Barrel, and 34 modules of the Tracker Endcap system in two partially populated “petals”. By normal standards these 133 modules were a substantial system, comparable to any silicon detector used at LEP. It is worth remarking that this represents only 1% of the final CMS system, by far the largest ever built using silicon detectors. In addition, there were two barrel supermodules comprising 3400 lead tungstate crystals of the ECAL, or about 5% of the total; eight barrel sectors (22%), four endcap sectors (11%), and four sectors of the outer barrel section of the HCAL. For muon detection there were three (out of 60) muon barrel sectors, consisting of drift tube (DT) and resistive plate chambers (RPC), together with cathode strip chambers (CSCs) forming endcap muon chambers – in all, 8% of the total system.

As was the case for the sub-detectors, all common support systems were tested in close to final versions, using in most cases production hardware and software. The first priority was the definition and implementation of elements of the Detector Safety System. The teams had also to integrate sub-detectors with the central Detector Control System and a scaled-down version of the trigger system. The tests used the central DAQ with its final architecture and approximately 1% of the final computing power, and successfully operated the integrated run control, event builder, event filter, data storage and transfer to the CERN Advanced Storage manager (CASTOR). Throughout the whole exercise a fully functional event display enabled a simple and quick feedback on the status of different sub-detectors.

Other important organizational components of the operations were the consistent use of an electronic logbook, webcams, video-conferencing tools and Wiki-based documentation, as well as web-based monitoring, which was extensively tested. The challenge involved data transfers from Tier-0 (at CERN) to some Tier-1 centres (at CNAF/Bologna, PIC/Barcelona and Fermilab) through the Physics Experiment Data Export (PhEDEx) system, exercising fast offline analysis and remote monitoring at the CERN Meyrin site as well as at the Fermilab Remote Operations Center.

There were two distinct phases of the cosmic challenge: the first phase in July and August 2006 was parasitic to the commissioning of the magnet. During this phase around 25 million “good” events were recorded with, at least, DT triggers and the ECAL and Tracker slice in the readout. Of these, 15 million events were at a stable magnetic field of at least 3.8 T – close to the maximum field of 4 T. A few thousand of the events corresponded to the benchmark where a cosmic ray was recorded in all four CMS sub-detector systems – Tracker, the ECAL, the HCAL and muon system – with nominal magnetic field. The image of the first of these events rapidly became a symbol of the success of the cosmic challenge and the demonstration of the CMS detector “as built”. During the challenge, data-taking efficiency reached more than 90% for extended periods. Data transfer to some Tier-1 centres, online event display, quasi-online analysis on the Meyrin site, and fast offline data-checking at Fermilab were some of the highlights of Phase I, which in this way offered a first taste of the full running experience. One example of an encouraging result was the good agreement between the predicted and measured cosmic muon spectra, both in momenta and angular distributions, using the new CMS software, CMSSW.

Phase II took place during October and November, after an efficiently executed “cosmic shutdown”, during which the tracker slice and the ECAL were removed and replaced with a field-mapper. While not as glamorous as the first phase, Phase II provided a wealth of solid information relevant to commissioning and operating CMS as an instrument for physics. For this phase, the team corrected and tested several minor faults found in Phase I in the magnet, detectors and central systems, and took more data with the HCAL and muon systems. Phase II recorded about 250 million events for studies of calibration, alignment and efficiency. The measurements made of the effect of the magnetic field on the response of the HCAL and on the drift paths in the muon barrel DTs were particularly crucial. Integration work on aspects of the trigger also allowed data to be recorded with some final systems.

Less than two weeks after the end of the magnet tests, the CMS detector was fully re-opened so the major elements could begin to be lowered into the experiment cavern. Meanwhile, work on analysing the millions of cosmic-ray events recorded in the cosmic challenge continues in many of the institutes in the collaboration. Now, as attention turns to completing the remaining assembly and installation of the muon, tracking and ECAL systems, the whole collaboration is looking forward eagerly, and with confidence, to re-assembling the detector underground and repeating the exciting and successful accomplishments of 2006, but this time with tracks from collisions of LHC beams.

PSI pixels spin off into research with X-rays

When the CMS experiment begins recording data at the LHC, the first components to detect particles produced in the head-on proton–proton collisions will be those in the layers of silicon pixels that form the inner part of the CMS tracker (figure 1). The pixel detectors in the barrel part of the cylindrical tracker are the responsibility of a Swiss collaboration based at the Paul Scherrer Institute (PSI). These specially developed detectors have to fulfil extreme requirements. In addition, their high performance in tests has resulted in similarly designed but simpler detectors for investigations into protein crystallography at the Swiss Light Source and radiography at the Swiss Spallation Neutron Source. These detectors are already in operation and a start-up company has been formed to supply them to other synchrotron light sources.

Researchers at PSI began developing a hybrid pixel detector system 12 years ago that would be suitable for the very high rates of particle tracks expected at the LHC. Roland Horisberger, project manager of the pixel project, recalls that at the time such a pixel system appeared very futuristic and posed many questions. Nevertheless, researchers from PSI, the universities of Basel and Zurich, and the ETH Zurich gathered together and formed a pixel competence centre to develop a pixel vertex detector for CMS.

Pixels at work

This new detector is in essence a very large digital camera for recording the tracks of ionizing particles. It comprises 65 million silicon pixels, each 100 μm × 150 μm, which are micro-bump-bonded to special complementary metal-oxide semiconductor (CMOS) read-out chips. This enables researchers to record precisely the position and time of a penetrating particle track. The read-out chip detects the hit pixels and records their analogue pulse height. It is then possible, using charge division, to achieve an excellent position resolution of 10–15 μm, while limiting the data transfer to the hit pixels only. For this purpose each pixel is equipped with a fast charge-sensitive amplifier with an analogue sample-and-hold circuit and a discriminating comparator circuit to perform the hit “decision”.
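
As an illustration of position interpolation by charge division, the following sketch computes a charge-weighted position from the signals of two neighbouring pixels. It is a minimal toy model, not the CMS reconstruction code, and the pitch and charge values are invented for the example.

  # Toy model of position interpolation by charge division between
  # two adjacent pixels (all values are illustrative only).
  def interpolated_position(x_left, pitch, q_left, q_right):
      # x_left: centre of the left-hand pixel (micrometres)
      # pitch: distance between the two pixel centres (micrometres)
      # q_left, q_right: charges collected by the two pixels
      eta = q_right / (q_left + q_right)  # fraction of charge in the right pixel
      return x_left + eta * pitch         # charge-weighted position estimate

  # A 70%/30% charge split across a 100 um pitch puts the hit
  # 30 um from the centre of the left pixel:
  print(interpolated_position(0.0, 100.0, 7000.0, 3000.0))  # -> 30.0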

The pixel barrels contain as many as 768 silicon sensor modules, each of which has a sensitive area of about 10 cm² and consists of a matrix of 160 pixels × 416 pixels. The charge produced by the 66,560 pixels is conducted through an equal number of micro-bump-bonded contacts to 16 read-out chips, each containing 4160 cells, where the charge signals are amplified and processed.

These “hybrid” pixel detectors depend on a special high-density connection technique, which was developed in co-operation with the Laboratory for Micro- and Nanotechnology at PSI. The contact between pixel and microchip – the bump – is a 17 μm solder ball of indium, a metal with a low melting point. The technique, known as bump-bonding, was taken from industry and miniaturized further to achieve the desired small bump ball size (figure 2). The work requires that the bump-bonding is achieved with a precision of 1–2 μm.

At CMS the pixel modules are placed close to the beam pipe at radii of 4, 7 and 11 cm. They provide the three innermost charged-particle tracking points of the experiment and should enable the reconstruction of the secondary displaced vertices arising from b-quark decays, a crucial signature for the discovery of new physics processes. At the design luminosity of the LHC the enormous particle flux of nearly 10¹⁰ particles per second will create 120 GB of data every second. The intensive bombardment creates an extreme radiation load on the detectors and the associated on-board electronics. Yet tests at the PSI proton accelerator have shown that this does not significantly affect the functioning of the detectors.

From the LHC to protein crystals

The detector technology developed for CERN measures particle tracks for high-energy particle physics, but at the Swiss Light Source the same technology operates as a very sensitive digital X-ray camera known as the PILATUS 6M detector (for “Pixel Apparatus for the SLS”; figure 3). It consists of 60 modules with 6 million pixels, making up an active area of 43 cm × 45 cm. Adapted to the needs of experiments at synchrotrons, the detector operates in single-photon counting mode – each incoming photon is counted and the number for each pixel is stored digitally.

The process has no electrical background interference, so it achieves an extremely high dynamic range. Very weak and very intense signals can therefore be measured at the same time in a single image, and exposure times can be selected freely between 1 ms and several hours. The CMOS chips and the sensors are radiation tolerant, and the PILATUS 6M detectors have a dynamic range of 20 bits, a highest sensitivity in the energy range of 3–30 keV and a read-out time of a few milliseconds.
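
The 20-bit dynamic range corresponds simply to the depth of the digital counter behind each pixel:

  2^{20} = 1\,048\,576 \approx 10^{6} \ \text{counts per pixel per exposure}.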

When this equipment was being developed the focus was on its application for protein crystallography. Understanding the molecular structure of a crystal requires knowing the intensities of all the reflections as accurately as possible. Researchers can use this information to calculate the actual arrangement of the atoms and molecules in the protein, but the quality of data used to decode the molecular structure is crucial.

In these experiments the researchers fire a tightly focused X-ray beam onto a protein crystal. This results in images that are patterns formed by thousands of scattered Bragg reflections. The advantage of pixel detectors is that they can deal with the incoming data in the most efficient way. The reflections at the centre of the image are extremely intense, with more than 1 million X-ray photons per second hitting just a few pixels, while at large scattering angles towards the edge the reflections arise from only a few dozen photons. Molecular biologists are excited about the excellent data quality they have obtained so far with the PILATUS 6M detector.

The PILATUS 100K detector is a smaller system that was developed in parallel. This system consists of a matrix of approximately 500 pixels × 200 pixels and enables information to be recorded even faster and with greater precision than with comparable commercial detectors. The system is currently used for materials-science research at synchrotrons and improves insight into several research areas, such as the surface properties of materials. To meet growing demand, a spin-off company was recently founded; its CEO, Christian Broennimann, leads a team of four. The market for these detectors lies mainly in the field of synchrotron radiation.

Meanwhile, the precision work on the individual modules for the CMS experiment continues. The barrel pixel modules are currently being fabricated at PSI at a rate of four to six a day, with a total of 720 modules to be delivered ready for service in late autumn 2007. The PSI pixels will then be on the look-out for passing particles and playing their part in the search for new physics at the LHC.

LUMI’06 takes strides towards LHC upgrade

Over the past two years, studies to upgrade the LHC have made great progress under the joint auspices of the European CARE accelerator network on High-Energy High-Intensity Hadron Beams (HHH) and the US LHC Accelerator Research Program (US-LARP). These efforts recently culminated in the third topical workshop of the CARE-HHH-APD network, LUMI’06, which was held in Valencia on 16–20 October 2006. About 70 members of CARE and LARP and their associated institutes attended, including 13 participants from major US laboratories and two from KEK in Japan.

LUMI’06 was devoted to the beam dynamics of the LHC luminosity upgrade and to high-intensity effects limiting the performance of both the LHC accelerator complex at CERN and the Facility for Antiproton and Ion Research (FAIR) at GSI. More specifically, the double objective of LUMI’06 was to establish a forward-looking baseline scenario for the LHC luminosity upgrade and to concur on a scientific rating of alternative scenarios for the upgrade of the CERN accelerator complex, while also assessing the performance of the GSI FAIR synchrotrons.

The workshop concluded an exciting year of intense HHH networking activity, in which several other workshops and conferences were devoted to various LHC upgrade issues, treating topics such as crystal collimation and channelling, rapid switching devices, superconducting magnet design, magnet optimization, super-ferric storage-ring approaches and beam dynamics in high-brightness hadron beams. Throughout the year, in preparation for LUMI’06, there had also been great progress on the development of a web repository for accelerator-physics codes, on code benchmarking and on the construction of a database for superconducting cables and magnets.

Accomplishing key goals

A highlight of experimental studies just before the workshop was the first successful test of crystal reflection with a 400 GeV proton beam at CERN in the SPS North Area by the H8-RD22 collaboration. The demonstration of an extremely high effective field, together with more than 95% extraction efficiency, opens up a new perspective for the upgrade of the LHC collimator system. Such an improvement is certainly welcome, in view of the known obstacles on the way to reaching the nominal LHC performance.

Several speakers at LUMI’06, including CERN’s Ralph Assmann, Rudiger Schmidt and Gianluigi Arduini, surveyed the various difficulties and limitations of the nominal LHC and of the existing CERN complex – related, for example, to collimation, machine protection and the injectors – and they pointed out the challenges that need to be overcome to reach the LHC design luminosity of 10³⁴ cm⁻²s⁻¹. Nevertheless, after five days of intense discussions, the workshop participants displayed great optimism about the upgrade goal of boosting the LHC peak luminosity by another factor of 10 beyond nominal towards 10³⁵ cm⁻²s⁻¹.

A key objective that LUMI’06 successfully accomplished was to select the most promising upgrade paths and, possibly, improve them or identify new ones. The workshop considerably reduced the number of alternative scenarios for the upgrade of the interaction region by arguing against all layouts with strong separation dipoles between the collision points and the low-beta quadrupoles closest to them. A primary argument in favour of the “quadrupole-first” solutions is the different level of difficulty and implied development timescale. In particular, at present nobody in the world is actively prototyping strong superconducting dipole magnets.

In considering the technology on which to base the new low-beta quadrupoles there are two alternatives – namely “pushed” NbTi and Nb₃Sn – that the workshop decided to pursue in parallel until the first results become available from long Nb₃Sn prototype magnets to be built in the US. This should be within the next two or three years. CERN’s Tom Taylor in particular proposed an intriguing “hybrid” solution, combining both NbTi and Nb₃Sn technologies.

Two novel concepts that would greatly enhance the luminosity potential of an LHC upgrade foresee complementing the interaction-region upgrade with additional slim superconducting dipole magnets (D0) or quadrupole doublets (Q0), which would be embedded deeply inside the upgraded detectors. Together with other measures, such elements may allow squeezing the beta functions at the collision point by a factor of seven, as opposed to two, beyond nominal, down to a β* of around 8 cm. Extensive studies are needed for the accelerator and detectors before these novel schemes can be soundly judged for viability.
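
The attraction of a smaller β* follows from the usual peak-luminosity formula for round Gaussian beams (schematic, with F the crossing-angle and hour-glass reduction factor):

  \mathcal{L}=\frac{N_b^{2}\,n_b\,f_{\mathrm{rev}}\,\gamma}{4\pi\,\varepsilon_n\,\beta^{*}}\,F,

so that, at fixed bunch charge and emittance, squeezing β* from the nominal 55 cm down to about 8 cm would by itself provide roughly the factor of seven mentioned above.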

The compensation of long-range beam–beam effects by a current-fed, metre-long wire running parallel to the beam is by now almost established as a valuable and inexpensive complementary tool for enhancing performance. At LUMI’06, Fermilab’s Vladimir Shiltsev proposed the additional use at the LHC of electron lenses, both for head-on beam–beam compensation and as a halo collimator. Large-angle “crab” cavities for interaction-region layouts with large crossing angles were rejected in view of numerous technical challenges, which several speakers identified, including Brookhaven’s Rama Calaga and Ramesh Gupta, and CERN’s Rogelio Tomas and Joachim Tuckmantel. Participants appreciated the high risk involved in choosing a crossing geometry that would rely fully on their functionality. In contrast, simpler small-angle crab cavities were recognized as a potentially powerful tool for realizing very small beta functions in conjunction with the detector-integrated dipole D0. KEK’s Kazuhito Ohmi presented simulations of LHC emittance growth with crab cavities and feedback. The next milestone will be the results of the first-ever crab-cavity operation in a collider, at the KEKB electron–positron machine. Expected soon, these results will have a big impact on the further pursuit of crab cavities at hadron colliders.

Figure 1 shows two example layouts of an upgraded LHC interaction region, accommodating several of the advanced elements discussed during the workshop. Advantages of the first scheme, with a detector-integrated slim dipole located about 3 m from the interaction point, are the reduced number of long-range collisions and the absence of geometric luminosity loss. The second scheme relaxes the triplet-quadrupole requirements and decreases the chromaticity. A combination of the two schemes – that is, an interaction-region layout containing both D0 and Q0 – is another possibility, which would combine all the advantages.

Tackling the beam-parameter frontier

The workshop also made significant progress at the beam-parameter frontier. In the past, parameter sets suffered either from an unacceptable number of events per crossing or from an electron-cloud heat load that far exceeded the available cooling capacity. LUMI’06 approved two compromise solutions, with 25 ns and 50 ns bunch spacing, which the authors presented (table 1). For these new sets of beam parameters the number of events per crossing stays near the maximum acceptable value, while the predicted electron heat load remains safely below the projected cooling capability.

The 25 ns option is accompanied by an 8 cm β*, which requires a D0 magnet inside the detector, Nb₃Sn large-aperture quadrupoles and a low-angle crab cavity. The 50 ns option has β* = 25 cm, for which optics solutions exist based on either technology for the quadrupoles. In addition it needs only the wire compensation of long-range beam–beam effects. Since LUMI’06, the two biggest LHC experiments, CMS and ATLAS, have indicated a preference for the scenario with 50 ns spacing. LUMI’06 rejected the original baseline upgrade scenario with 12.5 ns bunch spacing – half the nominal – since accelerator physicists, cryogenics experts and detector physicists now generally agree that this spacing would produce an insurmountable heat load. Indeed, at this bunch spacing the well-known heating from image currents in the resistive wall and from synchrotron radiation already requires the entire local cooling capacity, leaving no reserve for the electron cloud, which is predicted to be the dominant heat source.
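
The pile-up figures behind this compromise follow from a simple estimate (taking an illustrative inelastic cross-section of about 80 mb and ignoring the abort gap):

  N_{\mathrm{ev}}=\frac{\mathcal{L}\,\sigma_{\mathrm{inel}}}{n_b f_{\mathrm{rev}}}\approx\frac{10^{35}\,\mathrm{cm^{-2}s^{-1}}\times 8\times10^{-26}\,\mathrm{cm^{2}}}{4\times10^{7}\,\mathrm{s^{-1}}}\approx 200

events per crossing at 25 ns spacing, and about twice that at 50 ns for the same luminosity.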

For the LHC injector upgrade, LUMI’06 has endorsed the Linac4/Superconducting Proton Linac upgrade, as well as PS2, a normal-conducting replacement for CERN’s venerable Proton Synchrotron (PS) with twice the PS circumference. However, the workshop also made it clear that these new accelerators alone may not overcome existing intensity limits in the Super Proton Synchrotron (SPS) and that complementary SPS “enhancements” are likely to be required. Several participants challenged the alternative to the normal-conducting PS2, namely a fast cycling superconducting PS2+. Issues of concern here include the distributed beam losses in a cold machine, heating from the fast ramp, technological development risks, missing physics arguments and lack of human resources. In addition, preliminary simulations presented by Miguel Furman of LBNL indicate that the electron cloud could be a serious problem for new superconducting injector rings.

In summary, the LUMI’06 workshop developed novel scenarios for the upgrade of the LHC interaction regions, eliminated a number of previous options and proposed novel sets of beam parameters better tailored to a higher-luminosity LHC. The workshop also discussed the supporting upgrades to the CERN accelerator complex, including the replacement of the PS, which may be necessary for boosting the integrated LHC luminosity as well as the peak luminosity. With substantial participation from US-LARP, the European and US upgrade activities were successfully re-aligned and a general consensus emerged on the future steps to be taken. According to the present schedule, the LHC interaction regions will be upgraded by around 2014. The interaction-region and beam-parameter upgrades should increase the peak luminosity several times over. However, harvesting the full gain in integrated luminosity will almost certainly require accompanying upgrades to the CERN injector complex, improving the turnaround time and removing intensity bottlenecks.

• The HHH Networking Activity is supported by the European Community Research Infrastructure Activity under the European Union’s Sixth Framework Programme “Structuring the European Research Area” (CARE, contract number RII3-CT-2003-506395).

• Dedicated to the memory of Francesco Ruggiero.

String theory meets heavy ions in Shanghai

The field of ultra-relativistic heavy-ion collisions held its 19th international quark-matter conference, QM2006, in Shanghai on 14–20 November 2006. More than 600 physicists discussed the latest experimental and theoretical advances in the study of quantum chromodynamics (QCD) at extreme values of temperature, density and low parton fractional momentum (“low-x”).

The wealth of data from the six years of operation of the RHIC collider at Brookhaven, together with that from the fixed-target programme at CERN, is leading to a developing paradigm of the matter produced in high-energy nucleus–nucleus collisions as a “perfect liquid” with negligible viscosity. Far from the ideal parton-gas limit, the lattice QCD calculations for several quantities, such as the equation of state and the quarkonia spectral functions, reveal a non-trivial structure up to temperatures three times higher than the critical QCD temperature – the region that experiments are studying. The current theoretical and experimental efforts centre on characterizing in detail the unanticipated properties of this strongly interacting medium.

Strings and the perfect fluid

One of the most important indications of the formation of thermalized collective QCD matter in heavy-ion collisions is the observation of hydrodynamic flow fields in the form of large anisotropies in the final particle yields with respect to the reaction plane. If the medium expands collectively, the pressure gradients present in non-central collisions – with an initial lens-shaped overlap area – result in momentum anisotropies in the final state. Fluid-dynamics calculations indicate that such gradients must develop very early in the collision, during the high-density partonic phase, in order to reproduce the strong “elliptic flow” seen in the data.
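
Quantitatively, the azimuthal distribution of the produced particles is expanded in harmonics of the angle relative to the reaction plane Ψ_RP, and the elliptic flow is the second Fourier coefficient:

  \frac{dN}{d\phi}\propto 1+2v_{2}\cos\left[2(\phi-\Psi_{\mathrm{RP}})\right]+\dots, \qquad v_{2}=\langle\cos 2(\phi-\Psi_{\mathrm{RP}})\rangle.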

Two new experimental results presented at the conference supported this theoretical expectation. The PHOBOS and STAR collaborations at RHIC have observed large dynamical fluctuations of the elliptic-flow strength, which are fully compatible with those expected from event-by-event variations in the initial collision geometry alone. These results confirm that the strength of the collective flow is driven by the initial spatial anisotropy of the medium. The PHENIX collaboration presented data on the azimuthal anisotropy of electrons coming from the decay of charm and beauty mesons (figure 1). They have observed momentum anisotropies as large as 10%, indicating that charm quarks interact collectively and participate in the common flow of the medium. Both results clearly suggest a robust hydrodynamical response developing during the early partonic phase of the reaction.

On the theory side, progress was reported on hydrodynamical approaches including computationally expensive descriptions of the full three-dimensional evolution of the plasma, as well as, for the first time, viscosity corrections. These calculations indicate that the dimensionless viscosity-to-entropy-density ratio, η/s, has to be very small in order to reproduce the liquid-like properties seen in the data. If the viscosity is negligibly small then the produced medium is the most perfect fluid ever observed, in striking contrast with the ideal-gas behaviour at high temperatures that asymptotic freedom predicts. Determining the transport properties of such a medium in the region not far from the critical temperature is, however, a difficult task: perturbative expansions break down in this region while finite-temperature lattice techniques are not well adapted for studying real-time quantities.

Techniques developed in the context of string theory, where strong-coupling regimes are accessible to computation, can help to circumvent this difficulty. The meeting in Shanghai heard of new approaches that make use of the correspondence between anti-de Sitter (AdS) and conformal field theories (CFT) to estimate the properties of strongly coupled systems in N = 4 supersymmetric Yang–Mills theory from calculations in the dual, weakly coupled gravity theory. The somewhat controversial hope is that these theories, though different from QCD, capture the relevant dynamics in the range of interest for phenomenology at RHIC. One of the first observables computed by these methods is the η/s ratio, which is conjectured to exhibit a universal lower bound of 1/4π.
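
In physical units the conjectured Kovtun–Son–Starinets bound reads

  \frac{\eta}{s}\geq\frac{1}{4\pi}\,\frac{\hbar}{k_{B}},

a value far below that of any ordinary fluid, which is what makes the “perfect liquid” characterization quantitative.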

The conference also reviewed results on the heavy-quark diffusion coefficient, the jet-quenching parameter, and photon and dilepton emission rates. The application of AdS/CFT techniques, which the field has received with both enthusiasm and criticism, is nonetheless providing new insight into dynamical properties of strongly interacting systems that cannot be directly treated by either perturbation theory or lattice methods. At the same time this approach is opening novel directions for phenomenological studies and experimental searches.

Jet quenching

The study of hadron spectra at high transverse momentum in heavy-ion collisions – the apparent modifications of which are generically known as jet quenching – was again one of the main conference topics. Speakers from experiments at both RHIC and CERN’s SPS presented new results on inclusive high-pT hadron suppression and on two- and three-particle correlations between a leading trigger particle and the associated hadrons. Two clear experimental facts summarize the findings: the strong suppression of the yields of leading hadrons indicates that fast quarks and gluons lose a sizeable amount of energy when traversing dense matter; and the two- and three-particle correlation studies indicate that this energy reappears as softer particles far from the initial parton direction, both in azimuth and in rapidity.

When the transverse momenta of the trigger and the associated particles are both similar and of the order of a few giga-electron-volts, the two-particle-correlation signal around the direction opposite to the trigger particle presents a dip in central collisions, in striking contrast with the typical Gaussian-like shape observed in proton–proton and deuteron–gold collisions (figure 2). The three-particle-correlation data indicate a cone-like emission rather than a deflected-jet topology. One proposal is that conical structures result from shock waves or Cherenkov radiation produced by highly energetic partons traversing the medium. Such observations could thus help to constrain the value of the speed of sound or the dielectric constant of the plasma. A more conservative explanation proposes exclusive one-gluon bremsstrahlung as the origin of the enhanced “Mercedes-like” topologies that experiments observe. The differential studies of the transverse-momentum and centrality dependences presented at the conference further constrain the models.
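
In the shock-wave picture the cone angle is set by the Mach relation (assuming the parton travels at essentially the speed of light):

  \cos\theta_{M}=\frac{c_{s}}{v_{\mathrm{parton}}}\approx c_{s},

which is why the measured emission angle could constrain the speed of sound of the medium.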

Interesting intrajet correlations also appear on the near side, that is at angles in the trigger-particle hemisphere. Owing to the trigger bias, the near-side signal is sensitive to interactions that originate close to the surface of the hot, dense fireball, while the opposite-side particle production reflects the most probable, longer path through this medium. The data from the STAR collaboration (for trigger transverse momenta up to 9 GeV/c) show that, although the near-side azimuthal correlations remain basically unchanged from proton–proton to central gold–gold collisions, the pseudo-rapidity distribution is substantially broadened in the gold–gold case and presents a ridge structure above which an almost unaffected Gaussian shape appears. The dynamical origin of this effect is not yet understood but, interestingly, it indicates a coupling of the longitudinally expanding underlying event with the jet development.

With increased integrated luminosities, heavy-quark probes are becoming more and more important at RHIC. The latest PHENIX data indicate a large suppression of decay electrons from high-pT charm and beauty mesons. The amount of suppression is very similar to that of the light-quark mesons, which is difficult to accommodate in most jet-quenching models, since QCD distinctly predicts a lower gluon-radiation probability for heavy quarks compared with massless quarks or gluons.

There are also new results for direct photon production in gold–gold collisions up to transverse momenta of 20 GeV/c. Comparison of the gold–gold and proton–proton yields indicates that the QCD factorization theorem also holds for hard scatterings with heavy ions. The quality of the data is such that small deviations from the proton–proton reference can be traced back to isospin corrections and nuclear modifications of the parton distribution function in the nucleus. With improved statistics it will certainly be possible to use such data in global-fit analyses to constrain the nuclear parton distributions.

Last but not least, the conference saw the first measurements of direct photons emitted back-to-back with high-pT hadrons. With reduced uncertainties such correlations will provide an important calibrated measure of the energy lost by the original parton.

J/ψ melting and lattice QCD

The suppression of charmonium bound states, in particular the J/ψ, was proposed 20 years ago as a smoking-gun signature for quark–gluon plasma formation, and this is still the experimental observable that offers the most direct connection with lattice QCD. The first high-statistics results from RHIC were presented in Shanghai. Surprisingly, they show that the amount of suppression is almost identical to that found at the SPS (figure 3). The medium created at RHIC is expected to be much denser and hotter than that at the SPS and most models predicted a stronger depletion.

A natural explanation of the similarity of the suppression at the two energies (√s_NN = 17 and 200 GeV) is put forward by recent lattice data, which indicate that the J/ψ (but not the χc and ψ’) survives up to temperatures twice the critical one. The increase in temperature from the SPS to RHIC would not be enough to dissolve the J/ψ, which is then only indirectly suppressed owing to the lack of feed-down contributions from the dissolved χc and ψ’ states. Alternative explanations point out that the recombination of charm and anticharm quarks in the thermal bath (up to 10 charm–anticharm pairs are produced in central gold–gold collisions at RHIC) could compensate almost exactly for the additional suppression. The influence of initial-state effects – those present already in proton–nucleus collisions – is not yet completely settled. More proton–nucleus (at the SPS) and deuteron–gold (at RHIC) reference data, as well as the study of different charmonium states in gold–gold collisions, will be needed to unravel the origin of the suppressed J/ψ yields.

Towards the LHC

The emerging picture in ultra-relativistic heavy-ion collisions is that of the formation of a strongly interacting medium with negligibly small viscosity – a perfect liquid – and with energy densities as high as 30 GeV/fm³. These characteristics emerge, respectively, from the ideal-hydrodynamics description of collective elliptic flow and from the large energy loss suffered by energetic quarks and gluons traversing the system. The detailed study of the transport properties of this medium and the potential observation of the anticipated weakly interacting quark–gluon plasma will require key measurements in lead–lead collisions at 5.5 TeV at the LHC. The higher initial temperatures, the greater duration of the quark–gluon plasma phase and the much more abundant production of hard probes expected at the LHC are likely to result in indisputable probes of the deconfined medium that depend much less on details of the later hadronic phase.

A whole morning session at the conference looked at “Heavy-Ion Physics in the Next 10 Years”. The imminence of the LHC start-up – with an active nucleus–nucleus programme being developed by the ALICE, CMS and ATLAS collaborations – guarantees an exciting future for the physics of high-density QCD.

Amazing particles and light

Promising developments in hadronic physics, microwave superconductivity, free-electron lasers and efficient energy-recovery techniques in accelerators were beckoning me – after 25 colourful years at Berkeley, including two spent at CERN. I was also concerned about the longevity of a profession in which I had personally invested. I had seen the attrition of talents, many of whom I mentored, to other professions, driven by socio-economic realities of large particle accelerators. This inspired me to motivate accelerator-science practitioners to diversify their portfolio by developing the small and mezzo-scale engines that would drive emerging nano- and bio-sciences. Today, on the eve of another personal transition as I prepare to take the helm at the UK’s Cockcroft Institute, new developments and challenges once again invite comment.

I observe a few key developments contributing at the frontier of “discovery”, while others attest to “innovation” and “diversification”. These include: development of electron, proton and ion beams of unprecedented precision based on normal and superconducting material technology and advanced feedback control; diversification and growth of synchrotron radiation sources worldwide; evolution of sophisticated table-top laser-plasma acceleration techniques with necessary control to produce giga-electron-volt electron beams; demonstration of self-amplified spontaneous emission for the planned X-ray free-electron lasers; demonstration of efficient energy use and recovery in superconducting linacs; and production of ultra-short femtosecond flashes of electrons, infrared light and X-rays for studies of ultra-fast phenomena – to name but a few.

I also admit to occasional sombre worries that perhaps accelerators will be just a passing moment in history. But I was always awakened by the realization that particle accelerators have been and must continue to be singularly distinctive instruments of discovery and innovation, in various measures. What we are witnessing is a mere partitioning of the balance between these values in the context of the evolving human condition. We have consolidated the “discovery” sector and diversified the “innovation” sector. The fundamental value of accelerators, articulated in my 2002 Viewpoint, remains invariant: they package and focus energy and information in patterns of space–time bursts to serve a multitude of human pursuits – hence their universal, timeless appeal. Amazing particles and light, carrying focused energy and information in special staccato-fashion, beam into matter and life, illuminating what our eyes do not see and manipulating what our hands cannot.

Throughout the 20th century, fundamental discoveries were enabled by bold conception and realization of ever-larger particle accelerators, which today must be consolidated into just a few carefully selected facilities so large that they can only be supported internationally. Hence the emergence of but a few grand future machines: the Large Hadron Collider, the X-ray free-electron lasers, a potential International Linear Collider (ILC- or CLIC-based), and neutrino/muon facilities. This consolidation is a must for mastering the global resources necessary to discover fundamentals at the core of the physical world: hidden dimensions, symmetries and structures; origins of mass, dark matter and dark energy; unification of gravity; and exotic states of matter.

In parallel with that consolidation, we continue to anticipate tremendous diversification in the innovation sector of clever techniques and merger of technologies in creating unique bursts of particles and light. These efforts will lead not only to novel affordable scientific devices (for example, energy-recovery and laser-plasma-based compact high-brightness particle and light sources), but also to an increasing set of affordable instruments and processes that more directly enrich our everyday lives (such as novel medical imaging, diagnostics, therapy and radiation oncology; micro-machined instruments for use in medicine, scientific research, information technology and space exploration; designer nano-materials; and knowledge of complex protein structures for drug discovery).

The vision is one of discovering the secrets of the hidden energy and matter in the universe’s evolution; of understanding the protein as the molecular engine of life through studying its energetics and structural folding; of innovating new eco- and bio-friendly materials for human use; and of eliminating radioactive waste and dependence on fossil fuels. Extraordinarily clever particle accelerators drive this at all scales from “small” to “mezzo” to “grand”.

Is this just a dream? Inspired by US poet Carl Sandburg, I respond: “Nothing happens, unless first a dream.”

Quantum Optics: an Introduction

By Mark Fox, Oxford University Press. Hardback ISBN 9780198566724, £49.95 ($89.50). Paperback ISBN 9780198566731, £24.95 ($44.50).

This is a modern text on quantum optics for advanced undergraduate students. It provides explanations based primarily on intuitive physical understanding, rather than mathematical derivations. There is a strong emphasis on experimental demonstrations of quantum-optical phenomena, in both atomic and condensed-matter physics. Other topics include squeezed light, Hanbury Brown–Twiss experiments, laser cooling, Bose–Einstein condensation, quantum computing, entangled states and quantum teleportation. The book also includes worked examples and exercises.

The New Physics for the 21st Century

By Gordon Fraser (ed.), Cambridge University Press. Hardback ISBN 9780521816007, £30 ($60).

Seventeen years ago a book called The New Physics illuminated – vividly for the layperson and sensibly for the student – a series of scientific advances and philosophical obsessions, and it trailed them as signposts for the future. As so often happens, the future went off in a somewhat different direction. While Paul Davies was editing the first volume, physicists wondered loudly and publicly about dark matter and cosmic strings; black holes and the end of time; grand unified theories and cosmic inflation; the new window on the universe to be opened by the yet-to-be-launched Hubble Space Telescope; and the claim by the Nobel prize-winner Luis Alvarez that an asteroid had crashed into the planet 65 million years ago and ended both the Cretaceous period and the dinosaurs. In fact, Alvarez and his planet-bruising bolide never got a mention in the Davies volume, but at the time there seemed quite a lot else to be getting on with.

What a difference the decades make. In the past 17 years, experimental physicists have delivered a fifth state of matter in the Bose–Einstein condensate; slowed light down first to walking speed and then to a complete standstill; dropped the idea that time might run backwards and instead proposed interminable heat death in an ever expanding cosmos; demonstrated quantum entanglement and teleportation; mapped the fluctuations in the cosmic background radiation; introduced branes and apparently dropped cosmic strings; and discovered dark energy in a big way – so big that it accounts for three-quarters of everything. Nanotechnology emerged as both engineering obsession and practical investment, amid royal alarm in the UK about global death by grey goo. The Hubble telescope went up with a faulty mirror, and NASA launched its International Space Station but seemed to run out of steam. Global warming – physics at a practical level for most people – announced its arrival with a procession of record temperatures globally, and the debate about the Cretaceous catastrophe flowered into a much larger argument about asteroid impact-warning and deflection.

Physics never seemed so glamorous, but student numbers continued to fall and university departments continued to close. The old New Physics didn’t look so new and now CERN’s own Gordon Fraser has produced a companion volume of 19 essays, just as substantial, just as wide-ranging, and in some cases just as much fun.

Physics is not easy (it is after all done by PhDs, not dilettantes) but each essay begins comprehensibly and even enticingly, before diving quite briskly into mathematics, hard argument and occasionally hostile language. (Did Michael Green, writing about superstring theory, really have to head a section “beyond the naive perturbative approximation”?) Chris Quigg looks at particle physics and puts the Large Hadron Collider handsomely in its scientific context. But Fraser plays no special favourites. Nanoscience is there, and the Grid, and there are welcome surveys of biophysics and medical physics; the last essay is a reminder that without the physics of imaging, some neuroscience would be little more than voodoo.

All the classical preoccupations – cosmology, astronomy, gravity and the quantum world – get a fresh look. Robert Cahn’s survey of the physics of materials is a big help for the benighted. Ugo Amaldi ends the volume with a handsome canter through the connections between physics and society, and echoes many of the themes tackled in the book’s previous 18 chapters. The bad news is that physics still has an image problem. The good news is that this time Alvarez gets a mention, although not for bolide impacts, dinosaurs or the present concerted international effort to identify and track near-Earth objects. No, he gets a mention for not solving the world’s energy crisis: to be fair, for admitting that, for a few exhilarating moments, he thought that he had solved the world’s fuel problems for all time by fusing a proton with a deuteron to form helium-3. This anecdote appears, a little unkindly, under the heading “usable knowledge”.

Experimental Techniques for Low-Temperature Measurements: Cryostat Design, Material Properties, and Superconductor Critical-Current Testing

By Jack W Ekin, Oxford University Press. Hardback ISBN 9780198570547, £65 ($125).

This extensively illustrated book presents a step-by-step approach to the design and construction of low-temperature measurement apparatus. The main text describes cryostat design techniques, while an appendix provides a handbook of materials-property data for carrying out designs. Tutorial aspects include construction techniques for measurement cryostats, operating procedures, vacuum technology and safety. Many recent developments in the field not previously published are covered in this volume.

LHC on course for 2007 start-up

As the final countdown begins towards the scheduled start-up of the Large Hadron Collider at CERN later this year, work on the machine and the experiments has seen a series of achievements during the closing weeks of 2006. The cool-down of the first complete sector – an eighth of the machine – has already begun and installation of the magnets should be completed in March.

CCEnew1_01-07

At the end of October, the final sector of the cryogenic distribution line, sector 1-2, passed pressure and helium leak tests at room temperature, “completing the circle” for at least one major component of the LHC. The line will circulate helium in liquid and gas phases, at different temperatures and pressures, to provide the cryogenic conditions for the superconducting magnets. The test marked the end of a key part of the project that has had to overcome major difficulties, including manufacturing faults.

Then on 10 November the first complete sector – sector 7-8 – became operational, with the magnets, cryogenic line, the vacuum chambers and the distribution feedboxes all fully interconnected. The interconnection work had required several thousand electrical, cryogenic and insulating connections to be made on the 210 interfaces between the magnets in the arc, the 30 interfaces between the special magnets and the interfaces with the cryogenic line. Although representing only an eighth of the LHC, the fully equipped sector from points 7 to 8 will be the world’s largest operating cryogenic system.

Production of the LHC’s main magnets has finally finished, with a celebration at CERN on 27 November. In all 1232 main dipole and 392 main quadrupole magnets have been manufactured in an unprecedented collaboration effort between CERN and European industry.

The LHC experiments are also continuing to make good progress. On 8 November, the giant ATLAS barrel toroid magnet reached its nominal field of 4 T, with a current of 21 kA in the superconducting coils. At the same time, the first sections of the CMS detector had begun to arrive in the experimental cavern, 100 m below ground. The first forward hadronic (HF) calorimeter, weighing 250 tonnes, led the way on 2 November, with the second HF following a week later. The first end-cap disc, the 410 tonne YE+3, made its 10 h descent on 30 November, followed by YE+2 on 12 December. The third end-cap disc, YE+1, weighing in at nearly 1300 tonnes, was the heaviest piece so far to be lowered, taking 11 h on 9 January.

These milestones were a major feature of a confident report on the LHC to CERN Council at its 140th meeting on 15 December. The meeting also saw the election of Torsten Åkesson of Lund University as president of Council from 1 January 2007, taking over from Enzo Iarocci. On the same date, Sigurd Lettow replaced André Naudi as CERN’s chief financial officer.
