
Can invading black holes explain GRBs?

X-ray observations of gamma-ray bursts (GRBs) by the Swift satellite suggest that the central engine can be active for up to a few hours. A new theoretical study shows that this is difficult to explain in the standard scenario of jet formation and instead proposes a different mechanism that would work not only for collapsing stars but also for stars invaded by a black hole companion in a binary system.

The collection of hundreds of GRB afterglows by NASA’s Swift satellite since its launch in November 2004 is an observational breakthrough in the characterization of these powerful stellar explosions. The typical X-ray-afterglow emission is characterized by a rapid fading in the first minutes followed by a shallow decay lasting up to a few hours and a somewhat steeper decay afterwards. In addition, many GRB afterglows show X-ray flares superimposed on this general trend (CERN Courier October 2005 p11). While those features are consistent with the cannonball model, they were unexpected in the frame of the standard fireball model (CERN Courier December 2005 p20). Despite these difficulties, the latter remains the favoured model for long GRBs.

In this context, the intermediate shallow decay and the presence of X-ray flares are interpreted as evidence of ongoing activity of the central engine for several hours after the prompt GRB. This is a problem for models in which the ultra-relativistic jet at the origin of the GRB phenomenon is powered by the annihilation of neutrinos in a disc of matter that forms around a nascent black hole at the heart of a collapsing star. Indeed, the neutrino-heating mechanism requires a high mass-accretion rate onto a rapidly spinning black hole – a process that cannot be sustained for more than a few minutes.

A new theoretical study by Maxim V Barkov and Serguei S Komissarov of the Department of Applied Mathematics at the University of Leeds proposes a way around the problem of prolonged activity in the standard “collapsar” model. They demonstrate that the jets of long GRBs can also be powered by a magnetic process, such as the Blandford–Znajek mechanism. This mechanism, proposed in 1977, taps the rotational energy of the spinning black hole to power the jet. Compared with the neutrino-driven GRB model, it has the advantage of accounting for prolonged jet activity with a somewhat weaker constraint on the spin of the black hole; on the other hand, it requires a strong magnetic field at the black-hole horizon. One way to relax the magnetic-field requirement is to start with a neutrino-driven supernova explosion that opens jet channels for the subsequent magnetically driven GRB.

A particularly interesting possibility discussed by Barkov and Komissarov is the case of a close binary system composed of a Wolf–Rayet star – a massive, dense star that has blown away its outer layer of hydrogen – and a black hole. The black hole could lose momentum in the wind of the companion star and ultimately spiral into the star’s centre, devouring it from the inside. The Milky Way contains one known binary system of this kind, Cyg X-3, which has an orbital period of about 5 hours. Its black hole might eventually consume the Wolf–Rayet star, disrupt it and produce a GRB that observers in a remote galaxy could record billions of years from now.

Further reading

M V Barkov and S S Komissarov 2009 submitted to MNRAS; preprint at http://arxiv.org/abs/0908.0695.

The EIC’s route to a new frontier in QCD

Understanding the fundamental structure of matter requires determining how the quarks and gluons of QCD are assembled to form hadrons – the family of strongly interacting particles that includes protons and neutrons, which in turn form atomic nuclei and hence all luminous matter in the universe. Leptons have proved to be an incisive probe of hadron structure because their electroweak interaction with the hadronic constituents is well understood. Experiments to probe the quarks and gluons within the hadrons require high-intensity, high-energy lepton beams incident on nucleons; and if the leptons and nucleons are polarized, then measurements of spin-dependent observables are possible, so casting light on the spin structure of the hadrons.

Current experiments with polarized leptons focus predominantly on the valence quarks. To learn more about the sea quarks and gluons, physicists who study hadron structure have identified a high-luminosity polarized electron–ion collider (EIC) as the next-generation facility for exploring the fundamental structure of matter. The proposed EIC would be unique: it would be the first machine to collide highly energetic electrons with nuclei, and the first to collide high-energy beams of polarized electrons with polarized protons and, possibly, a few other polarized light nuclei. It would be designed to achieve at least 100 times the integrated luminosity of the world’s first electron–proton collider, HERA, over a comparable operating period.

The EIC would offer unprecedented opportunities to study, with precision, the role of gluons in the fundamental structure of matter. Without gluons, matter as we know it would not exist. Gluons collectively provide a binding force that acts on a quark’s colour charge but – unlike the photons of QED – they also possess a colour charge, so can self-interact. These self-interactions mean that gluons are the dominant constituents of matter, making QCD equations extremely difficult to solve analytically. Recent theoretical breakthroughs indicate that analytic solutions may be possible for systems in which gluons collectively behave like a very strong classical field – the so-called “colour glass condensate (CGC)”. This state has weak colour coupling despite the high gluon density and is characterized by a “saturation” momentum scale, Qs, which is related to the gluon density. QCD also predicts a universal saturation scale where all nuclei, baryons and mesons have a component of their wave function with identical behaviour, implying that they all evolve into a unique form of hadronic matter.

The discovery of CGC would represent a major breakthrough in the understanding of the role of gluons in QCD under extreme conditions. To probe the CGC optimally requires collisions of high-energy electrons and heavy ions (with large atomic number, A) resulting in large centre-of-mass energy (i.e. small gluon momentum fraction, x). The EIC would allow exploration of this novel regime of QCD because the use of heavy nuclei in experiments amplifies the gluon densities significantly over electron–proton collisions at comparable energies. Figure 1 shows the dependence of the saturation scale Qs² on x and A and indicates the region that would be accessible to the EIC.
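The scaling behind figure 1 can be sketched with a commonly used parametrization of the saturation scale, Qs² ≈ Q0²·(x0/x)^λ·A^(1/3): the nuclear enhancement goes as A^(1/3), and decreasing x increases Qs² as a power law. The parameter values below (Q0² = 1 GeV², x0 = 3×10⁻⁴, λ = 0.3) are illustrative assumptions in the spirit of standard saturation fits, not values taken from the article.

```python
def qs_squared(x, A, q0_sq=1.0, x0=3e-4, lam=0.3):
    """Illustrative saturation scale Qs^2 (GeV^2).

    Qs^2 = Q0^2 * (x0/x)^lam * A^(1/3).
    Parameter values are rough assumptions for illustration only.
    """
    return q0_sq * (x0 / x) ** lam * A ** (1.0 / 3.0)

# A heavy nucleus (A ~ 197) boosts Qs^2 by A^(1/3) ~ 5.8 over a
# proton (A = 1) at the same x -- the amplification that makes
# electron-ion collisions an efficient probe of saturation.
print(qs_squared(1e-4, 197))
print(qs_squared(1e-4, 1))
```

The A^(1/3) enhancement is why colliding electrons with heavy ions reaches the saturation regime at much more modest energies than electron–proton collisions would require.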

The ability to collide spin-polarized proton and light-ion beams with polarized electrons (and possibly also positrons) would give the EIC unprecedented access to the spatial and spin structure of protons and neutrons in the gluon-dominated region, complementary to the existing polarized-proton collider, RHIC, at Brookhaven National Laboratory (BNL). Figure 2 illustrates how the EIC would extend greatly the kinematic reach and precision of polarized deep-inelastic measurements compared with present (and past) polarized fixed-target experiments at SLAC, CERN, DESY and Jefferson Lab.

The polarizations measured so far for the sea quarks and gluons are consistent with zero, albeit with large uncertainties. This is surprising, because the quark spins contribute only about 30% of the proton’s spin, so the remainder must come from elsewhere. The EIC is ideally suited to resolving this puzzle: it would measure with precision the contribution of the quarks and gluons to the nucleon’s spin deep in the non-valence region (figure 3) and also study their transverse position and momentum distributions, which are thought to be associated with partonic orbital angular momentum. This could provide tomographic images of the nucleon’s internal landscape beyond the valence-quark region, which will be probed with the 11 GeV electron beam at Jefferson Lab’s Continuous Electron Beam Accelerator Facility (CEBAF). Both measurements are essential for understanding the constitution of nucleon spin.

Excited by these prospects, physicists came together in 2006 to form the Electron–Ion Collider Collaboration (EICC) to promote the consideration of such a machine in the US. They have developed the scientific case for an EIC with a centre-of-mass energy in the 30–100 GeV range and a luminosity of about 10³³ cm⁻² s⁻¹. The flagship US nuclear-physics laboratories BNL and Jefferson Lab have developed preliminary conceptual designs based on their existing facilities, namely RHIC and CEBAF, respectively. These early concepts have since evolved into significantly more advanced designs (figure 4). Options include the possibility of realizing electron–nucleus and polarized electron–proton collisions at lower energies and at lower initial cost. Considerable effort is under way to achieve the highest luminosities – up to 10³⁵ cm⁻² s⁻¹ – which would maximize access to the physics and help make the strongest possible case for the EIC.
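The quoted instantaneous luminosities translate directly into integrated luminosity, the quantity that sets event counts. Assuming a canonical 10⁷ s operating year (a conventional accelerator-physics assumption, not a figure from the article), 10³³ cm⁻² s⁻¹ corresponds to roughly 10 fb⁻¹ per year:

```python
FB_INV_IN_CM2 = 1e39     # 1 fb^-1 = 10^39 cm^-2, since 1 fb = 10^-39 cm^2
SECONDS_PER_YEAR = 1e7   # canonical operating year (assumption)

def integrated_lumi_fb(inst_lumi_cm2_s, seconds=SECONDS_PER_YEAR):
    """Integrated luminosity in fb^-1 accumulated over `seconds`
    at the given instantaneous luminosity (cm^-2 s^-1)."""
    return inst_lumi_cm2_s * seconds / FB_INV_IN_CM2

print(integrated_lumi_fb(1e33))  # -> 10.0 fb^-1 per year
print(integrated_lumi_fb(1e35))  # -> 1000.0 fb^-1 per year
```

The factor-of-100 spread between the baseline and the high-luminosity goal thus separates a 10 fb⁻¹-per-year programme from a 1000 fb⁻¹-per-year one.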

Future prospects

The scientific argument for the EIC has been discussed in the US nuclear-physics community since its first formal presentation at the Nuclear Science Advisory Committee’s (NSAC) 2002 long-range planning exercise and most recently in a similar exercise held in 2007. The result is that the EIC has been embraced as embodying the vision for reaching the next QCD frontier. The community recognizes that the EIC would provide unique capabilities for the study of QCD well beyond those available at existing facilities worldwide and would be complementary to those planned for the next generation of accelerators in Europe and Asia. NSAC has recommended that resources be allocated to develop the necessary accelerator and detector technology for the EIC.

Two separate proposals for EICs are being considered in Europe. In the LHeC, a 70–140 GeV electron beam would collide with the existing 7 TeV LHC proton beam, giving a centre-of-mass energy of about 1.4 TeV. Such a high energy would enable the study of gluons and their collective behaviour at the highest possible densities (the lowest possible x). It would also allow exploration of possible physics beyond the Standard Model with a lepton probe at very high Q². The other European EIC proposal is motivated by the spin structure of the nucleon. The Electron–Nucleon Collider (ENC) would make use of the High-Energy Storage Ring (HESR) and the PANDA detector at the proposed Facility for Antiproton and Ion Research (FAIR) at GSI. The centre-of-mass energy proposed for this facility is around 14 GeV, which lies between those of the fixed-target experiments HERMES at DESY and COMPASS at CERN. The primary goal of the ENC is to explore the 3D structure of nucleons, including the transverse-momentum distributions and generalized parton distributions of the quarks.
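The quoted LHeC figure follows from the standard collider relation √s ≈ √(4·E₁·E₂) for head-on collisions with beam masses neglected; a quick check reproduces the 1.4 TeV value for the 70 GeV electron option:

```python
import math

def sqrt_s(e_lepton_gev, e_hadron_gev):
    """Centre-of-mass energy (GeV) for head-on asymmetric collisions,
    with beam particle masses neglected: sqrt(s) = sqrt(4*E1*E2)."""
    return math.sqrt(4.0 * e_lepton_gev * e_hadron_gev)

print(sqrt_s(70.0, 7000.0))   # -> 1400.0 GeV, i.e. 1.4 TeV
print(sqrt_s(140.0, 7000.0))  # ~ 1980 GeV, close to 2 TeV for the
                              #   higher-energy electron option
```

The same formula shows why a collider reaches far higher √s than a fixed-target experiment of comparable beam energy, for which √s grows only with the square root of the beam energy.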

Since 2007 the EICC has met approximately every six months at Stony Brook University in New York, Hampton University in Virginia, Lawrence Berkeley National Laboratory and, most recently, at GSI. The next meeting is scheduled to take place at Stony Brook University in January 2010. The directors of BNL and Jefferson Lab have formed an EIC International Advisory Committee (EICAC) to help prepare the case for the project in the US. The EICAC met for the first time in Washington DC in February 2009 and will meet again in November at Jefferson Lab. The EICC is working towards the consideration of the EIC by NSAC as a priority for new construction in its next long-range plan anticipated in 2012 or 2013.

• Detailed information on EICC and EICAC meetings is available at http://web.mit.edu/eicc.

Accelerator R&D gets a collaborative boost

Figure: design of a compact crab cavity.

A sustained effort to develop the potential and performance of particle accelerators is a key ingredient of experimental particle physics. The same is true of nuclear physics, light sources and the myriad other applications of accelerators in research and industry. These fields often share ambitious R&D challenges – hence the idea of a common venture that brings together partners in Europe and beyond and focuses on top-priority accelerator R&D issues.

The Co-ordinated Accelerator Research in Europe (CARE) project, which was overseen by the European Steering Group for Accelerator R&D (ESGARD), pioneered collaborative work in accelerator R&D on a European scale. The five-year project grouped together 22 partners and more than 60 associated institutes and was co-funded by the EU’s Framework Programme 6. Combining networking and R&D, CARE had three main goals: to optimize the use of existing infrastructures; to collaborate on new state-of-the-art technologies; and to develop links between accelerator physicists and particle physicists.

CARE’s networking activities pooled European expertise on efficient and cost-effective methods of producing intense, high-energy electron, proton, muon and neutrino beams. These were the Electron Linear Accelerator Network (ELAN) and the networks for High-Energy High-Intensity Hadron Beams (HHH) and Beams for European Neutrino Experiments (BENE). R&D activities within CARE included:

• Superconducting Radio Frequency (SRF) to investigate superconducting cavity technology with a gradient exceeding 35 MV/m
• PHIN, an activity for photoinjector technology for two-beam acceleration concepts, new-generation light sources and novel acceleration techniques
• High Intensity Pulsed Proton Injectors (HIPPI) to study normal and superconducting structures for the acceleration of high-intensity proton beams, as well as challenging beam-chopping magnets and beam dynamics
• Next European Dipole (NED) for research into cable technology for reaching high magnetic fields (>15 T) using high current densities (>1500 A/mm²)

When CARE came to an end on 31 December 2008, the co-ordinator, Roy Aleksan of the Commissariat à l’énergie atomique (CEA), announced that the project had produced 129 deliverables and more than 700 scientific publications, including 18 PhD theses.

From CARE to EuCARD

Beyond the scientific outcome, CARE created a favourable environment for future European projects in accelerator R&D. ESGARD triggered the preparation of a new European project, taking into account the new priorities in accelerator R&D. The result is the European Co-ordination for Accelerator Research & Development (EuCARD), a four-year project co-funded by the EU’s Framework Programme 7 (FP7), which involves 37 partners from 45 European accelerator laboratories, universities, research centres and industries. In response to the EC’s request, this project’s mandate includes a contribution to the emergence of lasting structures in the accelerator field, beyond the duration of European projects.

The EuCARD project started on 1 April 2009 and is co-ordinated by CERN, with Jean-Pierre Koutchouk as project co-ordinator, Ralph Assmann as deputy and Svetlomir Stavrev as administrative manager. Its management bodies are the Governing Board and the Steering Committee. The Governing Board represents the project partners and has elected Tord Ekelöf of Uppsala University as chair. The Steering Committee represents the project’s activities, with all work-package co-ordinators and deputies as members. A co-ordination office at CERN offers central support to the community, benefitting from the active involvement of CERN’s EU Office.

The collaborative R&D programme includes 21 “tasks” grouped under five themes (work packages), described in more detail below. Most studies are deeply rooted in the work plans and funding of the collaborating laboratories, thereby providing a robust environment. The EuCARD contribution brings the added value of collaborative work between accelerator laboratories, universities, specialized institutes and private companies.

Five themes

The theme “High-field magnets”, led by Gijs de Rijk of CERN and François Kircher of the CEA, involves 13 partners. The primary goal is a new jump in achievable magnetic fields, rated as the top priority by the EC project review. This includes the study, design and construction of a model 13 T niobium–tin (Nb₃Sn) accelerator dipole. The study and construction of a very ambitious inner coil “booster” to be added to this dipole aims to reach significantly higher fields, possibly 20 T. Potential applications are test stations for superconducting cables (e.g. FRESCA at CERN); phase II of the LHC upgrade; wigglers and undulators; and all accelerators requiring more compactness. Two associated studies will investigate the use of high-temperature superconductors for superconducting links and of Nb₃Sn for short-period helical undulators for the International Linear Collider (ILC) positron source.

Figure: field of a 14 T model dipole.

The theme “Collimation and materials”, led by Ralph Assmann of CERN and Jens Stadlmann of GSI, involves nine partners. Robust and efficient collimation is a necessity and a challenge for both the LHC and the future Facility for Antiproton and Ion Research (FAIR) at GSI. This theme is recent in accelerator sciences and includes the successive steps necessary to allow collimator implementation: beam modelling; energy deposition calculations; behaviour of materials, especially under shock waves (accidental beam loss); radiation damage; and implementation and testing of warm and cold collimators.

The theme “Linear colliders”, led by Grahame Blair of Royal Holloway, University of London, and Erik Jensen of CERN, involves 11 partners. It has two aspects: technologies for the Compact Linear Collider Study (CLIC), with two-beam acceleration; and the stabilization and beam delivery issues that are common to CLIC and the ILC. The studies include: the design and construction of improved power extraction and transfer structures for CLIC Test Facility 3; the demonstration of higher-order mode (HOM) damping in the presence of alignment errors; breakdown simulations and diagnostics instrumentation; and precise synchronization devices with 20 fs resolution. Stabilization will be investigated for the linac and final focus, using purposely built mock-ups, with targets of 1 nm and 0.1 nm respectively. Beam delivery methods and instrumentation for emittance preservation will be investigated on the Accelerator Test Facility 2 in Japan and on PETRA III at DESY.

The theme “Superconducting radio frequency technologies” is led by Olivier Napoly of the CEA and Olivier Brunner of CERN and it involves 15 partners. This is the largest work package of EuCARD and covers various aspects of the superconducting technology applied to the production of RF fields and related topics. It will study new technologies being investigated for superconducting thin-film deposition. Using bulk material, accelerating cavities will be developed for hadron linacs and investigations carried out on couplers, mostly with a view to reliable and industrial cleaning. New, advanced telecommunication computing-architecture technology will be applied to low-level RF, with an application at the free-electron laser facility, FLASH, at DESY. On the same machine, investigations will be done on the HOM signals used as beam-diagnostic devices. An improvement programme for the superconducting RF gun at the Electron Linac for beams with high Brilliance and low Emittance (ELBE), Forschungszentrum Dresden-Rossendorf, covers beam characterization and the preparation and characterization of photo-cathodes.

The fifth theme, “Innovative accelerator concepts”, is led by Marica Biagini of INFN and Rob Edgecock of the UK’s Rutherford Appleton Laboratory (RAL) and involves five partners. It supports promising innovative concepts under development in several laboratories, such as the crab-waist crossing scheme, non-scaling fixed-field alternating-gradient accelerators and plasma-wave acceleration. The EuCARD contributions range from feasibility studies to beam diagnostics for a better evaluation and understanding of their upcoming implementations.

A large community

EuCARD also involves networks, which are grouped under three headings. The network for neutrino facilities (NEu2012), led by Vittorio Palladino of INFN and Silvia Pascoli of the Institute for Particle Physics Phenomenology, Durham University, aims to structure the European neutrino community for a coherent approach to the upgrade of existing infrastructures and/or a road map to new ones. The network liaises with EUROnu, the FP7 Design Study for A High Intensity Neutrino Oscillation Facility in Europe, and with worldwide studies. It is already active, participating in or contributing to the organization of all major neutrino-physics events.

The accelerator science networks (AccNet), which are led by Frank Zimmermann of CERN and Alessandro Variola of CNRS-LAL, divide into two specialized networks, though some topics such as crab cavities span them both. The accelerator-performance (EuroLumi) network continues and extends the activity of CARE-HHH on the LHC, FAIR and other accelerator upgrades, interfacing with the US LHC Accelerator Research Program. It bridges the gap between accelerator physics, accelerator technology and experimental physics, with the goal of defining optimized upgrades. The RF technologies (RFTech) network, which covers both normal and superconducting RF, encompasses all aspects of RF technology, such as klystron development, RF power distribution, cavity design, low-level RF and costing tools.

Finally, the network for scientific communication and outreach is led by Ryszard Romaniuk of Warsaw University of Technology and Kate Kahle of CERN. An important aspect of European projects and motivation for EU funding is communication, dissemination and outreach, so as to strengthen the European research area and facilitate future collaborative ventures. This network is already active, creating a website, publication portal and database, and a project newsletter. It is looking into the possibility of publishing a series of booklets on accelerator sciences and the co-ordinators welcome contact from potential authors.

EuCARD partners

Research accelerators are by nature open to a large community of users. To stimulate and support wide use of accelerator-related R&D facilities, EuCARD operates two schemes under EU rules for “Transnational Access”: the Muon Ionization Cooling Experiment (MICE), with a muon beam of around 200 MeV/c and ionization-cooling facility under development at RAL; and the High Radiation Material test facility (HiRadMat), a pulsed irradiation facility under development at CERN. In both cases more details for potential users can be found at http://cern.ch/EuCARD/activities/access.

As soon as the project started in April, the EuCARD activities entered an active phase, achieving the planned early milestones. Several articles have already been published, thanks to preparatory work that several partners began in advance. The co-ordinator and Steering Committee look forward to the collaborative work ahead.

• For more about EuCARD and its scientific events, visit http://cern.ch/eucard.

A night to remember

Remember the night of 24 November 1959? Of course I do. I was sitting in the canteen eating supper with John Adams, as we had done many times that fall. There was not a wide choice of food in those days – spaghetti or ravioli or, occasionally, fried eggs – but our thoughts were not on the meal. We had hardly spoken, our spirits were low, then John lit his pipe and said, “Well, now that we’ve finished eating, we might as well walk over and see if anything is happening.” As we went in the direction of the PS buildings, I asked him, “Shall we go to the Main Control Room or over to the Central Building? Chris Schmelzer said that Wolfgang Schnell has that radial phase-control thing working.” John pulled on his pipe, “Probably doesn’t matter, it may not do much good.” Our hopes had been dashed fairly often. Then, after a few more steps, he added, “Let’s go to the Central Building and see what they’re up to.” It was about quarter to seven.

Trudging along, I thought back over the past weeks, back to 16 September when, during the Accelerator Conference at CERN, Adams had made the electrifying announcement that protons injected into the PS had gone one turn round the magnet ring. Since that time, attempts to put the PS into operation had brought a few triumphant moments but most of the time we had been discouraged, puzzled by the beam’s behaviour, frustrated by faulty equipment or, after quick trials of this remedy or that, in despair over the lack of success. The protons just didn’t want to be accelerated.

I had to go back soon to help on the AGS. Pressure for high-energy protons in the United States was mounting even higher with the imminent production of European ones, so I had already booked passage to sail home. For some time I had been saying to everyone that we must get the protons through “transition” before I left. Now it was 24 November, I must leave Geneva the following day, but the prospects were bleak. Would this beam-night be any different?

Although the PS had been ready to accept protons from the linac in September, a great deal of final testing had not been completed and installation and cabling was going on in the ring and the Main Control Room. Consequently, for the first few weeks, beam tests could be scheduled only for Tuesdays and Thursdays from six to ten in the evening; during the final weeks of my stay there was also some time on Friday evenings. During these sessions, our spirits ranged from high to low as the beam behaved somewhat as expected or baffled us completely.

Early in October, the programmed part of the r.f. system was ready for trial. Schmelzer and Hans Geibel were in the Central Building and Pierre Germain was peering at scopes in the Main Control Room. Linac said beam was ready and inflector working. Hine was in the MCR, looking at the injected beam, adjusting quadrupoles, changing inflector voltage, rushing from one scope to another. The beam isn’t spiralling properly… wait… all right, go ahead r.f…. Central Building says it’s on, programme on. Yes, beam is being captured… it’s accelerated… but lost after a few milliseconds. Changes in the r.f. programming… is the beam better… yes, now it goes for 10 milliseconds… no, it’s 15… now it’s gone again. But we went home satisfied – some beam had been captured, there had been some acceleration.

More evenings with trials of the r.f. programme followed. The r.f. system had been designed to run with a frequency programme to a few GeV, then to switch over to an automatic system with a phase-lock and with errors in the beam’s radial position fed back to the r.f. amplitude for correction. When this automatic system was ready, it was tried with switching-in much earlier than planned and this did succeed in accelerating the beam somewhat longer. But then it was lost, usually in a series of steps and all gone after a few tens of milliseconds. I don’t remember if we reached 2 or 3 GeV on an occasional pulse, but certainly no more. The behaviour of the beam remained erratic and unstable. What was wrong?

Measurements of the beam’s position on the radial pickup electrodes were hastily plotted by Adams to show that the closed orbit was off in some places, but only by a few centimetres, surely not enough to prevent some beam from going to transition. The rate of rise of the magnetic field was varied to look for eddy-current troubles. Colin Ramm and the Magnet Group rushed round the ring in the daytime, searching for stray fields or remanence effects. Jean Gervaise scanned the survey data for possible errors in magnet positions while Jack Freeman hunted for signs of beam disappearances with radiation monitors. More trials of the r.f., with and without phase-lock, more diagnostic equipment hurriedly inserted, more measurements. But the protons made no progress.

A broad green trace

During those Tuesday and Thursday evenings in October and early November, many of the PS builders gathered round the tables in the centre of the Main Control Room. At one stage, to save (or prevent?) people from going home to eat and being late for the scheduled 6 p.m. start-up, Hine arranged cold meats, cheese and bread to be sent to the MCR. As I recall this was not a rousing success. There were periods of frantic activity. But there were also long periods of waiting. We sat at the tables and waited and waited. One night, just as beam came on, all of the lights went out – trouble at the CERN main power house – and we groped our way out in darkness, Adams striking matches all the way.

I had a desk in Mervyn Hine’s office where, in the mornings, particularly after beam-nights, one after another would come in – Johnsen, Hereward, Schoch, Schmelzer, sometimes Adams, many others – and the talk would start. Are the closed-orbit deviations causing serious trouble? Is the linac emittance all right? What about the missing bunches, caused by the poor performance of the inflector? Every Monday morning, in the PS Conference Room, there was a meeting of the “Running-in Committee”, starting at 9 a.m. sharp and lasting until well after 1 p.m., or even 2 p.m. Discussions and arguments – on and on.

Occasionally, on a Sunday, I would go along the lake to visit my good friends, Kjell and Aase Johnsen, and we would recall the days in 1953 when the first designs for the PS were being worked out by groups in various places (Harwell, Paris, Heidelberg, Bergen etc.) all under the leadership of Odd Dahl in Bergen. John Blewett and I had spent some months in Bergen in the summer of 1953 and, during that time, Johnsen had been working on the behaviour of the beam at transition energy (where there is no phase stability). His calculations had given us the first confidence that beam could be accelerated through this dangerous region.

Many of these things were in my thoughts as Adams and I approached the Central Building. I was depressed about having to leave the next day, with the protons still balking. I had wanted so much to see this machine operate successfully before I left. All through the years, I had been so involved with CERN and its PS that I had felt a glow of pride with each milestone passed during construction. More than ever, over these past weeks, I had felt that it was partly my machine too. John interrupted my thoughts with, “Well, Hildred, we haven’t done much during your stay. It’s hardly been worthwhile, you haven’t learnt…”. I broke in, “Wolfgang thinks this radial phase-control will really work, he’s very optimistic, and maybe…”. But I knew that no-one else had great hopes for any improvement. Even Schmelzer had thought it was hardly worth the effort, but Schnell had gone ahead over the last couple of weeks wiring it up for a quick test. Just a few days before, I had been down in the basement lab, listening to his enthusiasm. The idea was to use the radial-position signal from the beam to control the r.f. phase instead of the amplitude. With this system, the sign of the phase had to be reversed at transition and, in his haste, Schnell had built this part into a Nescafe tin, the only thing of the right size.

Adams opened the door to the Central Building. For a moment the lights blinded us, then we saw Schmelzer, Geibel and Rosset – they were smiling. Schnell walked towards us and, without a word, pulled us over to the scope. We looked… there was a broad green trace… What’s the timing… why, why the beam is out to transition energy? I said it out loud – “TRANSITION!”

Just then a voice came from the Main Control Room. It was Hine, sounding a bit sharp (he was running himself ragged, as usual, and more frustrated than anyone), “Have you people some programme for tonight, what are you planning to do? I want to…”. Schnell interrupted, “Have you looked at the beam? Go and look at the scope.” A long silence… then, very quietly, Hereward’s voice, “Are you going to try to go through transition tonight?” But Schnell was already behind the racks with his Nescafe tin, Geibel was out in front checking that the wires went to the right places, not the usual wrong ones. Quickly, quickly, it was ready. But the timing had to be set right. Set it at the calculated value… look at the scope… yes, there’s a little beam through… turn the timing knob (Schnell says that I yelled this at him, I don’t remember)… timing changed, little by little … the green band gets longer… no losses. Is it… look again… we’re through… YES, WE’RE THROUGH TRANSITION!

How far? What’s the energy? Something below 10 GeV because the magnet cycle is set for lower fields and a one-second repetition rate for testing. Hurried call to Georgijevic in the Power House. Change the magnet cycle to full field. Beam off while we wait. The long minutes drag by. Will the beam come on again? This is just the time for that dratted inflector to go off again, or the high-voltage set to arc over. Hurry up, Power House!

I remember Schnell murmuring, “I promised you we’d get through transition.” But we were all rather awed by it. No one spoke – Schmelzer lit a cigar, Adams relit his pipe, we waited.

Finally, the call came through – magnet on again, pulsing to top field. Call the linac for beam. Beam on, it’s injected, inflector holding, beam spiralling, r.f. on, all set as before, with the blessed phase-control and the Nescafe tin. Change timing on the scopes, watch them and hold your breath. One second (time for acceleration) is a long time. The green band of beam starts across the scope… steadily, no losses… to transition… through it… on, on how far will it go… on, on IT’S ALL THE WAY! Can it be? There it goes again, all the way as before… and again… and again. Beautiful, smooth, constant, no-loss green band… Look again at the timing… all the way… it must be 25 GeV! I’m told that I screamed, the first sound, but all I remember is laughing and crying and everyone there shouting at once, pumping each other’s hands, clapping each other on the back while I was hugging them all. And the beam went on, pulse after pulse.

Did someone change the timing?

Slowly, we came back to Earth. John Adams was first. Looking very calm, he went to the phone to ring up the director-general, C J Bakker, to tell him the news but Bakker didn’t seem to grasp it right away. (Could it be that John was just a little incoherent?) Schmelzer was beaming, for once even his cigar forgotten, cold on the ashtray. Schnell looked supremely happy, he was the hero of the hour. Gradually, I collected my wits enough to write out a telegram to Brookhaven that Geibel dashed off to send immediately. We went over to the Main Control Room and found Hine calling round to locate some sort of counter for checking the energy. Johnsen was saying, heatedly, “Did someone change the timing on this scope? I just turned away from it for a moment and here is the beam going out…” How could it be 25 GeV without poleface windings on? But all of the scopes showed the same smooth, green trace, one-second long – it really was 25 GeV. Even more unbelievable, the signal on the pickup electrodes gave an intensity of about 10¹⁰ protons a pulse. No, that can’t possibly be right, we’re lucky if it’s 10⁹. Check and recheck… look at the calibrations… yes, that number is right, 10¹⁰.

The rest of that evening has been described many times. People came flooding in, I don’t know who told them the news. Polaroid pictures of the scope traces were passed around for signatures on the back, cherished souvenirs. Bottles appeared, by magic, including the famous bottle of vodka given to Adams by Nikitin (PS and LEP: a walk down memory lane). Bakker arrived with a bottle of gin under his arm. Bernardini bounded in, hugged Adams and Hine, launched into a description of what he wanted to do as a first experiment, then lapsed into pure Italian. Miss Steel and the secretaries were there, smiling happily – they had had to put up with our complaints and bad humours. I remember Colin Ramm muttering, “Where do we go from here? What about two or three hundred GeV?” (He was ahead of the times.) I left shortly before midnight to pack my suitcases.

Early next morning (at 2 a.m. New York time) I had a phone call from John Blewett offering congratulations from Brookhaven and asking questions. My telegram had come as a bombshell and the word had spread rapidly across the United States. What had brought success? I told him about the phase-control system and, since it was similar to the one being built for the AGS, it was a relief to know that this was just what the protons liked.

Then out to the Lab for final goodbyes, over to the auditorium to hear Adams tell the story to all of CERN, my PS friends grinning proudly but no one happier than I.

• Hildred Blewett (1911–2004) joined Brookhaven National Laboratory at its start in 1947 and in the early 1950s became one of the team who collaborated on the design of CERN’s first high-energy accelerator, the Proton Synchrotron (PS), while also working on the similar machine proposed for Brookhaven, the Alternating Gradient Synchrotron (AGS). In the summer of 1959 she was invited to CERN to observe the commissioning and start-up of the PS, several months before the AGS would be ready.

When LEP, CERN’s first big collider, saw beam


On 13 November 1989, heads of state, heads of government and ministers from the member states assembled at CERN together with more than a thousand invited guests for the inauguration of the Large Electron–Positron (LEP) collider (PS and LEP: a walk down memory lane). Precisely one month earlier, on 13 October, large audiences had packed CERN’s auditorium and also taken advantage of every available closed-circuit TV to see the presentation of the first results from the four LEP experiments, ALEPH, DELPHI, L3 and OPAL – results that more or less closed the door on the possibility that a fourth type of neutrino could join those that were already known. This milestone came only two months after the first collisions on 13 August and three months after beam had circulated around LEP for the first time.

Champagne corks had already popped the previous summer, soon after 11.55 p.m. on 12 July 1988, when four bunches of positrons made the first successful journey between Point 1, close to CERN’s main site at Meyrin (Switzerland), and Point 2 in Sergy (France) – a distance of 2.5 km through much of the first of eight sectors of the 27-km LEP ring. It was a heady moment and the culmination of several weeks of final hardware commissioning. Elsewhere, the tunnel was still in various stages of completion, the last part of the difficult excavation under the Jura having been finished only five months earlier.

A year to do it all

Steve Myers led the first commissioning test and a week later he reported to the LEP Management Board, drawing the following conclusions: “It worked! We learnt a lot. It was an extremely useful (essential) exercise – exciting and fun to do. The octant behaved as predicted theoretically.” This led to the observation that, “LEP will be more interesting for higher-energy physics than for accelerator physics!”. However, he also warned, “We should not be smug or complacent because it worked so well! Crash testing took 4 months for about a tenth of LEP; at the same rate of testing the other nine tenths will require 36 months.” Yet the full start-up was already pencilled in for July 1989, in only 12 months’ time.

The following months saw a huge effort to install all of the equipment in the remaining 24 km of the tunnel – magnets, vacuum chambers, RF cavities, beam instrumentation, control systems, injection equipment, electrostatic separators, electrical cabling, water cooling, ventilation etc. This was followed by the individual testing of 800 power converters and connecting them to their corresponding magnets while carefully ensuring the correct polarity. In parallel, the vacuum chambers were baked out at high temperature and leak-tested. The RF units, which were located at interaction-regions 2 and 6, were commissioned and the cavities conditioned by powering them to the maximum of 16 MW. Much of this had to be co-ordinated carefully to avoid conflicts between testing and installation work in the final sector, sector 3-4. At the same time a great deal of effort – with limited manpower – went into preparing the software needed to operate the collider, in close collaboration with the accelerator physicists and the machine operators.

The goal for the first phase of LEP was to generate electron–positron collisions at a total energy of around 90 GeV, equivalent to the mass of the Z0, the neutral carrier of the weak force. It was to be a veritable Z0 factory, delivering Z0s galore to make precision tests of the Standard Model of particle physics – which it went on to do with outstanding success.

To “mass produce” the Z0s required beams not only of high energy, but also of high intensity. To deliver such beams required four major steps. The first was the accumulation of the highest possible beam current at the injection energy of 20 GeV, from the injection chain. (This was itself a major operation involving the purpose-built LEP Injection Linac (LIL) and Electron–Positron Accumulator (EPA), the Proton Synchrotron (PS), the Super Proton Synchrotron (SPS) and, finally, transfer lines – curving not only horizontally but also vertically, as LEP and the SPS were at different heights – to inject electrons and positrons in opposite directions.) The second step was to ramp up the accumulated current to the energy of the Z0, with minimal losses. Then, to improve the collision rate at the interaction regions the beam had to be “squeezed”, by reducing the amplitude of the betatron oscillations (beam oscillations about the nominal orbit) to a minimum value. Finally, the cross-section of the beam had to be reduced at the collision points.
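
The payoff of the final “squeeze” step can be seen from the standard luminosity formula for head-on Gaussian beams, L = k N⁺N⁻ f_rev / (4π σx σy). A minimal sketch with made-up bunch parameters – the numbers below are illustrative, not LEP’s actual values:

```python
import math

C = 26659.0                      # ring circumference in metres (~27 km)
f_rev = 299792458.0 / C          # revolution frequency, ~11.2 kHz

def luminosity(n_bunches, n_plus, n_minus, sigma_x, sigma_y):
    """Head-on collider luminosity for Gaussian beams, in cm^-2 s^-1."""
    return n_bunches * n_plus * n_minus * f_rev / (4 * math.pi * sigma_x * sigma_y)

# Illustrative (made-up) bunch parameters, NOT LEP's real ones:
L1 = luminosity(4, 1.0e11, 1.0e11, 0.02, 0.001)   # beam sizes in cm

# The "squeeze" reduces beta* at the collision point; the beam size scales
# as sqrt(beta*), so halving beta* in both planes doubles the luminosity.
L2 = luminosity(4, 1.0e11, 1.0e11, 0.02 / math.sqrt(2), 0.001 / math.sqrt(2))
print(f"L = {L1:.2e} cm^-2 s^-1, squeeze gain = {L2 / L1:.1f}x")
```

The point of the sketch is the scaling, not the absolute number: the squeeze multiplies the collision rate without adding a single particle to the beams.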

The first turn

In June 1989 the LEP commissioning team began testing the accelerator components piece by piece, while the rest of CERN’s accelerator complex continued as normal. Indeed, the small team found themselves running the largest accelerator ever built in what was basically a back room of the SPS Control Room at Prévessin.

The plan was to make two “cold check-outs” – without beam – on 7 and 14 July, with the target of 15 July for the first beam test. The cold check-out involved operating all of the accelerator components under the control of the available software, which proved important for debugging the complete system of hardware and software for energy ramping in particular. On 14 July, however, positrons were already available from the final link in the injection chain – the SPS – and so the second series of tests turned into a “hot check-out”. Over a period of 50 minutes, under the massed gaze of a packed control room, the commissioning team coaxed the first beam round a complete circuit of the machine – one day ahead of schedule.

In the days that followed, the team began to commission the RF, essential for eventual acceleration in LEP. The next month proved crucial but exciting as it saw the transition from a single turn round the machine to a collider with beams stored ready for physics.

By 18 July the first RF unit was in operation, with the RF timed in correctly to “capture” the beam for 100 turns round the machine. Two days later, the Beam Orbit Monitoring system was put into action, which allowed the team to measure and correct the beam’s trajectory. Measurements showed that the revolution frequency was correct to around 100 Hz in 352 MHz, or equivalently, that LEP’s 27 km circumference was good to around 8 mm. Work then continued on measuring and correcting the “tune” of the betatron oscillations, so that by 23 July a positron beam was able to circulate with a measured lifetime – derived from the observed decay of the beam current – of 25 minutes. Then, following a day of commissioning yet more RF units, the first electrons were successfully injected to travel the opposite way round the machine on 25 July.
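
The 8 mm figure follows from simple proportionality: the RF frequency is locked to a harmonic of the revolution frequency, so a frequency agreement of 100 Hz in 352 MHz translates into the same fractional uncertainty on the circumference. A quick back-of-envelope check, taking C ≈ 26.66 km:

```python
C = 26659.0      # LEP circumference in metres (about 27 km)
f_rf = 352.0e6   # RF frequency in Hz
delta_f = 100.0  # level of agreement of the frequency, in Hz

# To first order the fractional errors match: delta_C / C = delta_f / f.
delta_C = C * delta_f / f_rf   # metres
print(f"circumference good to about {delta_C * 1000:.1f} mm")
```

This reproduces the “around 8 mm” quoted in the text.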

Now it was time to try to accumulate more injected beam in the LEP bunches and to see how this affected the vacuum pressure in the beam pipe. By 1 August the team was observing good accumulation rates and measured a record current of 500 μA for one beam. This was the first critical step towards turning LEP into a useful collider. The next would be to ramp up the energy of the beam.

The late evening of 3 August saw the first ramp from the injection energy of 20 GeV, step by step up to 42.5 GeV, when two RF units tripped. On the third attempt – at 3.30 a.m. on 4 August – the beam reached 47.5 GeV with a measured lifetime of 1 hour. Three days later, both electrons and positrons had separately reached 45.5 GeV. Then 10 August saw the next important step towards a good luminosity in the machine – an energy ramp to 47.5 GeV followed by a squeeze of the betatron oscillations.

In business

On 12 August LEP finally accumulated both electrons and positrons. The next day the beams were ramped and squeezed to 32 cm, yielding stable beams of 270 μA per beam. It was time to turn off the electrostatic separators that allowed the two beams to coast without colliding. The minutes passed and then, just after 11 p.m., Aldo Michelini, the spokesperson of the OPAL experiment, reported seeing the first collision. LEP was in business for physics.

So began a five-day pilot-physics run that lasted until 18 August. During this time various technical problems arose and the four experiments collected physics data for a total of only 15 hours. Nevertheless, the maximum luminosity achieved of 5 × 10²⁸ cm⁻² s⁻¹ was important for “debugging” the detector systems and allowed for the detection of around 20 Z0 particles at each interaction region.

A period of machine studies followed, allowing big improvements to be made in the collider’s performance and resulting in a maximum total beam current of 1.6 mA at 45.5 GeV with a squeeze to 20 cm. Then, on 20 September, the first physics run began, with LEP’s total energy tuned for five days to the mass peak for the Z0 and sufficient luminosity to generate a total of some 1400 Z0s in each experiment. A second period followed, this time with the energy scanned through the width of the Z0 at five different beam energies – at the peak and at ±1 GeV and ±2 GeV from the peak. This allowed the four experiments to measure the width of the Z0 and so announce the first physics results, on 13 October, only three months after the final testing of the accelerator’s components.
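
The principle of the scan can be sketched with a relativistic Breit–Wigner lineshape: measuring the cross-section at the peak and at offsets around it pins down the width of the Z0. A toy illustration, using rough present-day values for the Z mass and width and, for simplicity, taking the offsets in √s (an assumption of this sketch, not the experiments’ actual fit):

```python
def z_lineshape(sqrt_s, m_z=91.19, gamma_z=2.49):
    """Relativistic Breit-Wigner for the Z resonance, equal to 1 at the peak."""
    s = sqrt_s ** 2
    return s * gamma_z ** 2 / ((s - m_z ** 2) ** 2 + s ** 2 * gamma_z ** 2 / m_z ** 2)

# Five scan points: the peak and offsets of +/-1 and +/-2 GeV around it.
peak = z_lineshape(91.19)
ratios = {off: z_lineshape(91.19 + off) / peak for off in (-2, -1, 0, 1, 2)}
for off, r in sorted(ratios.items()):
    print(f"sqrt(s) = {91.19 + off:.2f} GeV: {r:.2f} of peak cross-section")
```

Because the cross-section falls steeply away from the peak, even a handful of scan points constrains the width tightly – which is exactly what made the neutrino-counting result possible so soon.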

By the end of the year LEP had achieved a top luminosity of around 5 × 10³⁰ cm⁻² s⁻¹ – about a third of the design value – and the four experiments had bagged more than 30,000 Z0s each. The Z0 factory was ready to gear up for much more to come.

• Based on several reports by Steve Myers, including his paper at the second EPAC meeting, in Nice on 12–16 June 1990.

PS and LEP: a walk down memory lane


Roy Glauber casts a light on particles


When Roy Glauber was a 12-year-old schoolboy he discovered the beauty of making optical instruments, from polarizers to telescopes. His mathematical skills stem from those early school days, when a teacher encouraged him to begin studying calculus on his own. When he progressed to Harvard in 1941 he was already a couple of years ahead and had absorbed a fair fraction of graduate-level studies by 1943, when he was recruited into the Manhattan Project at the age of 18. It was then that the erstwhile experimentalist began the transition to theoretician. Finding the experimental work rather less demanding than theory – “It seemed to depend on how to keep a good vacuum in a counter,” he recalls, “and I didn’t think I would do it any better” – he asked to join the Theory Division and was set to work on solving neutron-diffusion problems.

Following the war, Glauber gained his BSc and PhD from Harvard and after apprenticeships with Robert Oppenheimer in Princeton and Wolfgang Pauli in Zurich, he stood in for Richard Feynman for a year at Caltech and then settled back at Harvard in 1952. By this time, he says, “all of the interest was in nuclear physics studied through scattering experiments”. With increasing energies becoming available at particle accelerators, the wavelength associated with the incident particles was decreasing to nuclear dimensions and below. Viki Weisskopf and colleagues had already developed the cloudy crystal-ball model of the nucleus, which successfully described averaged neutron cross-sections, and Glauber believed that the idea could be extended. “I had this conviction that it ought to be possible to represent the nucleus as a semi-translucent ball, from 20 MeV up,” he recalls. However, what the optical models lacked, in Glauber’s view, “was a proper quantitative derivation based on the scattering parameters of individual nucleons”.

Inspired by work on electron diffraction by molecules that he had pursued at Caltech, Glauber began to think about how to apply optical Fraunhofer-diffraction theory to higher-energy nuclear collisions – in a sense, bringing about a fusion of two of his interests. At higher energies, he argued, individual collisions could be treated diffractively and allow nuclear calculations to be based on the familiar ground of optical-diffraction theory.

The result was a generalized nuclear diffraction theory, in which he introduced charges and internal co-ordinates that did not exist in the optical case, such as spin and isospin, and dealt with scattering from nuclei that contained many nucleons by treating arbitrary numbers of successive collisions. The key was to consider energy transfers that were small compared with the incident energy. This was a reasonable assumption at higher energies and it led to a useful approximation method that provided a mathematical development of the original optical model, and allowed treatment of the preponderance of inelastic transitions.
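
A concrete consequence of the optical analogy: in the black-disc limit the elastic amplitude is proportional to J1(qR)/qR, so the diffractive minima fall at the zeros of the Bessel function J1, and larger nuclei push the first dip to smaller momentum transfer. A rough sketch, using the standard estimate R ≈ 1.2 A^(1/3) fm (an assumption of this illustration, not a value from the article):

```python
HBAR_C = 0.1973        # GeV * fm
J1_FIRST_ZERO = 3.8317 # first zero of the Bessel function J1

def first_minimum_t(mass_number):
    """|t| (GeV^2) of the first diffractive dip for a black disc of
    radius R = 1.2 * A^(1/3) fm (a rough textbook radius estimate)."""
    radius = 1.2 * mass_number ** (1.0 / 3.0)   # fm
    q = J1_FIRST_ZERO * HBAR_C / radius         # momentum transfer, GeV
    return q * q

print(f"|t| of first dip, He-4  : {first_minimum_t(4):.3f} GeV^2")
print(f"|t| of first dip, Pb-208: {first_minimum_t(208):.4f} GeV^2")
```

The A-dependence of the dip position is the nuclear analogue of a larger aperture producing a narrower diffraction pattern.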


The theory turned out to work quite well for proton–deuteron and proton–helium collisions in experiments at the Cosmotron at Brookhaven. “You could see single and double scattering in the deuteron and helium,” he explains, “and shadowing” – where target nucleons lie in the shadow of others. However, at the time there were no studies of heavier nuclei.

Glauber made the first of many visits to CERN in 1964 and arrived for a six-month sabbatical in February 1967. “It was a most dramatic time for me,” he recalls. The group led by Giuseppe Cocconi had begun measurements of proton scattering from nuclear targets using the first extracted-proton beam from the PS. They made a series of measurements at 19.3 GeV/c but, with the resolution of the spectrometer limited to 50 MeV, they could not separate elastic from inelastic scattering. Glauber realized that, extended to inelastic scattering, the theory would cover essentially all nuclear excitations in which there was no production of new particles. Taken together, the calculated elastic and inelastic cross-sections agreed exactly with what Cocconi’s group was measuring. Glauber presented the results of his work with Giorgio Matthiae of Cocconi’s group at a meeting in Rehovot in the spring of 1967. “We were doing quantitative high-energy physics for a change,” he says.

The work at CERN with Cocconi’s group left a big impression on Glauber: “It was something wonderful and inspiring.” He became “hooked on CERN”, returning many times for summers and sabbaticals, working on models for elastic scattering for experiments at the ISR and for UA4 on the SPS proton–antiproton collider. However, by the 1990s – the era of the Large Electron–Positron (LEP) collider – his visits became less frequent. “I found I had nothing new to say about LEP cross-sections,” he admits.

Today there is renewed interest in Glauber’s work, in particular among physicists involved with heavy-ion collisions. His early calculations of multiple diffraction laid the foundations for ideas that are central (in more ways than one) to studies in which nuclei collide at very high energies. The basic formalism of overlapping nucleons can be used to calculate the “centrality” of a collision – in other words, how head-on it is. However, other work in the field of optical theory also finds relevance in the unusual environment of heavy-ion collisions – in this case Glauber’s work on a quantum theory of optical coherence, which led to his share of the Nobel prize in 2005.

This work again dates back to the late 1950s and the discovery by Robert Hanbury-Brown and Richard Twiss of correlations in the intensities measured by two separated photon detectors observing the same light source. Their ultimate aim had been to extend their pioneering work on intensity interferometry at radio wavelengths to the optical region, so as to measure the angular sizes of stars – which they went on to do for Sirius and others. However, they first set up an experiment in the laboratory to reassure themselves that the technique would work at optical wavelengths. The result was surprising: light quanta have a significant tendency to arrive in pairs, with a coincidence rate that approaches twice that of the random background level. Extending the idea led to predictions that a laser source, with its narrow bandwidth, should show a large correlation effect. Glauber was sceptical, so he embarked on a proper quantum-theoretical treatment of the statistics of photon detection.

“Correlated pairs are characteristic of unco-ordinated chaotic emission from lots of sources,” he explains, “where the statistics are Gaussian. This is not a characteristic of light from a laser where all of the atoms know quite well what the other atoms are doing.” He realized correctly that this co-ordination means that there should be no Hanbury-Brown–Twiss correlation for a laser source and he went on to lay down the theoretical ground work for the field of quantum optics – the work that led to the Nobel prize.
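
The distinction can be checked numerically: for chaotic light the field amplitude is Gaussian, so the intensity is exponentially distributed and ⟨I²⟩/⟨I⟩² = 2, while a perfectly coherent source gives exactly 1. A toy Monte Carlo sketch of the zero-delay intensity correlation:

```python
import random

random.seed(1)
N = 200_000

# Chaotic light: the field is a sum of many independent emitters, so its
# complex amplitude is Gaussian and the intensity I = X^2 + Y^2 is
# exponentially distributed.  Coherent light: the intensity is constant.
chaotic = [random.gauss(0, 1) ** 2 + random.gauss(0, 1) ** 2 for _ in range(N)]
coherent = [2.0] * N   # any constant intensity will do

def g2(intensities):
    """Zero-delay normalized intensity correlation <I^2> / <I>^2."""
    mean = sum(intensities) / len(intensities)
    mean_sq = sum(i * i for i in intensities) / len(intensities)
    return mean_sq / (mean * mean)

print(f"chaotic : g2 = {g2(chaotic):.2f}")   # close to 2: HBT bunching
print(f"coherent: g2 = {g2(coherent):.2f}")  # exactly 1: no extra correlation
```

The factor of two for the chaotic source is precisely the Hanbury-Brown–Twiss pair excess; its absence for the constant-intensity source is Glauber’s point about the laser.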

There are similarities between the statistics in the detection of photons (bosons) and those of the detection of pions (also bosons) in heavy-ion collisions. The energetic collision should be like a thermal light source, with correlated pion emission akin to the Hanbury-Brown–Twiss correlations, allowing the possibility of measuring the size of the source, as in the astronomical studies. Experiments do find such an effect but they do not see the full factor of two above the random background and the reason is yet to be properly understood. While the width of the measured peak may relate to the radius of the source, “we don’t have a theory of the radiation process that explains fully the correlation”, says Glauber, “no real quantitative explanation. Perhaps other things are upsetting the correlations.”

The LHC will explore further the realm of heavy-ion collisions and push on with measurements of the proton–proton total cross-section, a focus of the TOTEM experiment. While these links remain between his work and CERN, Glauber observes that the laboratory has changed a great deal since his first visits, but he is still “very devoted to the place as an ideal”. What then, does he hope in general for the LHC? “Pray to find a surprise,” he says. “It may be difficult to design an experiment to detect what you least expect, but we really need some surprises.”

• For Roy Glauber’s colloquium at CERN on 6 August, see http://indico.cern.ch/conferenceDisplay.py?confId=62811.

NA60: in hot pursuit of thermal dileptons


Heavy-ion collisions at ultrarelativistic energies explore the transition from ordinary matter to a plasma of deconfined quarks and gluons – a state of matter that probably existed in the first few microseconds of the universe. Early experiments of this kind began 25 years ago at CERN, at the Super Proton Synchrotron (SPS), and at Brookhaven, at the Alternating Gradient Synchrotron, followed by the Relativistic Heavy Ion Collider in 2000 – and now the LHC at CERN is preparing for heavy-ion collisions in 2010. Studies of the hadrons produced have given insight into numerous aspects of the medium formed in the collisions, including collective behaviour and thermalization. They have also indicated that the temperatures reached at beam energies above about 40A GeV may already exceed the critical temperature Tc for deconfinement into a quark–gluon plasma.

Electromagnetic probes such as photons and dileptons (ℓ⁺ℓ⁻ pairs) have long held the promise of a more direct insight. Escaping without final-state interactions, they can reveal the entire space–time evolution of the produced medium, from the early partonic (quark–gluon) phase to the final freeze-out of hadrons, when all interactions cease. In the case of dileptons, experimental difficulties associated with low signal-to-background ratios (from high multiplicity densities), the superposition of nonthermal sources and a lack of sufficient luminosity have hindered clear insight in the past. Nevertheless, experiments at CERN observed an encouraging excess above known sources: CERES/NA45 in the mass region below 1 GeV, NA38/NA50 in the region above 1 GeV and HELIOS/NA34-3 in both mass regions. The very existence of an excess gave a strong boost to theory, leading to hundreds of publications, and provoked a number of open questions.


For masses below 1 GeV, thermal dilepton production is dominated by the hadronic phase and mediated mainly by the light vector meson ρ (770 MeV). With its strong coupling to μ⁺μ⁻ and a lifetime of only 1.3 fm – much shorter than that of the “fireball” produced – the ρ is the key test particle for “in-medium” changes of hadron properties such as mass and width close to the transition where chiral symmetry is restored, as Robert Pisarski first proposed. However, questions about how the ρ changes in the medium – does it shift in mass or broaden? – remained open. Above 1 GeV, thermal dileptons could be produced as “Planck-like” continuum radiation in both the early partonic and late hadronic phases, so offering access to the expected deconfinement transition, as first Edward Shuryak, and later Keijo Kajantie and many others, have pointed out. However, the origin of the dilepton excess observed above 1 GeV was not clear. Does it arise from the enhanced production of open charm or from thermal radiation? Is it from partonic or hadronic sources? The status of thermal dilepton production in both mass regions at RHIC is even less clear.
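
The quoted ρ lifetime follows directly from the uncertainty relation, cτ = ħc/Γ. A one-line check, taking the usual value of about 149 MeV for the full width of the ρ(770):

```python
HBAR_C = 197.33    # MeV * fm
GAMMA_RHO = 149.0  # MeV: full width of the rho(770), a PDG-level figure

c_tau = HBAR_C / GAMMA_RHO   # lifetime expressed as a path length
print(f"c*tau(rho) = {c_tau:.2f} fm")
```

The result, about 1.3 fm, is what makes the ρ such a sensitive in-medium probe: it decays well inside the fireball.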

Novel detectors

The NA60 experiment at CERN’s SPS was built specifically to follow up on these open questions. By taking a big step forward in technology this third-generation experiment has achieved completely new standards of data quality in the field. Approved in 2000, it took data on indium–indium collisions at 158A GeV for just one running period, in 2003. Briefly, the apparatus complements the muon spectrometer (MS) previously used by NA10/NA38/NA50 with a novel radiation-hard, silicon-pixel vertex telescope (VT), placed inside a 2.5 T dipole magnet between the target region and the hadron absorber (Arnaldi et al. 2009a). The VT tracks all of the charged particles before they enter the absorber and determines their momenta independently of the MS, free from multiple-scattering effects and the energy-loss fluctuations that occur in the absorber. The associated read-out pixel chips were originally developed for the ALICE and LHCb experiments.

The matching of the muon tracks in the VT and the MS, in both co-ordinate and momentum space, greatly improves the dimuon mass resolution in the region of the vector mesons ρ, ω, and φ, reducing it from approximately 80 MeV to around 20 MeV. It also significantly reduces the combinatorial background from π and K decays and makes it possible to measure the muon offset with respect to the primary interaction vertex, thereby allowing the tagging of dimuons from simultaneous semileptonic decays of DD̄ pairs – that is, open charm. The additional bend by the dipole field gives a much greater acceptance for opposite-sign dimuons at low mass and low transverse momentum than was possible in all previous dimuon experiments. Finally, the selective dimuon trigger and the radiation-hard vertex tracker, with its high read-out speed, allowed the experiment to run at high rates for extended periods, enabling a high luminosity.

Low mass to high mass


Starting with the low mass region, M < 1 GeV, figure 1 shows the net dimuon mass spectrum from NA60, integrated over centrality, after subtraction of the two main background sources: combinatorial background and fake matches between the two spectrometers (Arnaldi et al. 2006 and 2008). The plot contains about 440,000 dimuons in this mass region and exceeds previous results by up to three orders of magnitude in effective statistics, depending on mass. The spectrum is dominated by the known sources: the electromagnetic two-body decays of the η, ω and φ resonances, which are completely resolved for the first time in nuclear collisions, and the Dalitz decays of the η, η’ and ω. While the peripheral, “p–p like” data – the very glancing collisions – are quantitatively described by the sum of a “cocktail” of these contributions together with the ρ and open charm, this is not true for the more centrally weighted – more “head on” – total data shown in figure 1. This is because of the underlying dilepton excess observed previously.

Now, for the first time, the high data quality allows this excess to be isolated without any assumptions about its nature and without fits. The cocktail of decay sources is subtracted from the total data using local criteria that are based solely on the measured mass distribution itself; the ρ is not subtracted. Figure 2 shows the excess for one region in centrality (Arnaldi et al. 2006 and 2009b). The peaked structure seen here appears for all centralities, broadening strongly for the more central collisions, but remaining centred on the nominal pole position of the ρ. At the same time, the total yield relative to the cocktail ρ increases with centrality, becoming up to six times larger than for the most peripheral collisions.

All of this is consistent with an interpretation of the dilepton excess as arising predominantly from π⁺π⁻ annihilation via intermediate ρ mesons, which are continuously regenerated throughout the hadronic phase of the expanding fireball. (This is the “ρ-clock”, which “ticks” at the rate of the ρ’s lifetime and is presumably the most accurate way to measure the lifetime of the fireball). It is important to point out that the data as plotted, i.e. without any acceptance correction and pT selection, can be directly interpreted as the space–time averaged spectral function of the ρ, owing to a fortuitous cancellation of the mass and pT dependence of the acceptance filtering by the photon propagator and Bose factor associated with thermal dilepton emission (Damjanovic et al. 2007).


Figure 2 also shows the two main theoretical scenarios for the in-medium spectral properties of the ρ: dropping mass, suggested by Gerald Brown and Mannque Rho, and broadening as proposed by Ralf Rapp, Jochen Wambach and colleagues. The dropping-mass scenario, which ties hadron masses directly to the value of the chiral condensate (with vanishing values as chiral restoration is approached), leads to a shifted and broadened distribution that is clearly ruled out. The unmodified ρ, defined as the full amount of regenerated ρ mesons without any in-medium spectral changes (“vacuum ρ”), is also clearly ruled out. Only the broadening scenario, based on a hadronic many-body approach, describes the data well, up to about 0.9 GeV where processes other than 2π annihilation set in, as described below.

The results from NA60 thus end a decades-long controversy about the spectral properties of hadrons close to the QCD phase boundary. In general terms, chiral restoration should restore the degeneracy between chiral partners such as the vector ρ and the axialvector a1, which are normally split by 0.5 GeV. Whether this happens by moving masses or by a complete “melting” with full overlap of the two partners has always been open to debate, but the question is now answered for the ρ – and with it probably for all light hadrons. Meanwhile, a more explicit connection between chiral-symmetry restoration and the hadron “melting” observed is under discussion by Rapp, Wambach and others.

Turning now to the mass region above 1 GeV, the use of the silicon VT has allowed NA60 to measure the offset between the muon track and the primary interaction vertex and thereby disentangle, for the first time in nuclear collisions, prompt dimuons from offset pairs from D-meson decays (Arnaldi et al. 2009a). The results are perfectly consistent with no enhancement of open charm relative to the level expected from scaling up the results from NA50 for masses above 1 GeV in proton–nucleus collisions. The dilepton excess, previously observed by NA34-3 and NA38/NA50, is therefore solely prompt, with an enhancement over Drell–Yan processes by a factor 2.3±0.08. This excess can be isolated, rather as for masses below 1 GeV, by subtracting the expected known sources, here Drell–Yan and open charm, from the total data. The resulting mass spectrum is quite similar to the shape of open charm and much steeper than that for Drell–Yan.

A true thermal spectrum

In the absence of resonances, the signature of any thermal source should be a Planck-like radiation spectrum. Now a 25-year-old dream has become reality with NA60’s measurement of such a spectrum in high-energy nuclear collisions, isolated from all other sources. Figure 3 shows the mass spectrum of the excess dileptons for the complete range 0.2 <M <2.6 GeV, corrected for experimental acceptance and normalized absolutely to the charged-particle rapidity density (Arnaldi et al. 2009a). The shape is mainly a pure exponential, indicative of a flat spectral function as in the black-body case, except for the slight modulation around the nominal pole position of the ρ.


The figure also shows recent theoretical results from the three major groups working in this field. The general agreement between the data and these theoretical results, which are not normalized to the data, but are calculated absolutely, is remarkable, both for the spectral shapes and the absolute yields, and strongly supports the term “thermal”. At the level of the detailed description of the dominant dilepton sources, all three groups agree on π+π– annihilation for M <1 GeV, one doing somewhat better than the others below 0.5 GeV through additional secondary sources and a larger contribution from ρ–baryon interactions. Above 1 GeV, 2π processes become negligible, and other hadronic processes such as 4π (including vector–axialvector mixing) and partonic processes such as quark–antiquark annihilation, qq̄ → l+l–, take over.

All three models explicitly differentiate between the hadronic and partonic processes. But while the spectral shape and total yield for M >1 GeV are described about equally well, the fraction of partonic processes relative to the total varies from 25% to more than 85% depending on the model. The large variations are from differences both in the underlying spectral functions and the fireball dynamics, which at least partially compensate each other in the total yields. However, the space–time trajectories are not the same for genuine partonic and hadronic processes, the former being “early” (i.e. from the initial temperature Tinit to Tc) and the latter only “late” (i.e. from Tc to thermal freeze-out at temperature Tf). The question therefore arises whether these differences leave a measurable imprint on the dileptons that could reveal the dominant source.

The answer is “yes”. Unlike real photons, lepton pairs are characterized by two variables: mass and transverse momentum pT. Quite different from mass, pT not only contains contributions from the spectral functions, but also encodes the key properties of the expanding fireball: temperature and transverse expansion (“radial flow”). The latter causes a blue-shift of pT, which is well known from hadron production. However, in contrast to hadrons, which receive the full flow reached at the moment of decoupling, dileptons are continuously emitted during the evolution of the fireball and so reflect the space–time integrated temperature-flow history in their final pT spectra. Because flow builds up monotonically during this evolution – being small in the early partonic phase (in particular at SPS energies, owing to the “soft point” in the equation of state) and increasingly large in the late hadronic phase – the final pT spectra keep a memory of the time ordering of the different dilepton sources, thereby offering a diagnostic tool for the emission region.

The variable commonly used here is mT = (pT2 + M2)1/2 and all mT spectra for the dilepton excess are found to be nearly exponential (Arnaldi et al. 2008, 2009a, 2009b). The full information can therefore be reduced to one parameter, the inverse slope Teff, obtained by fitting the spectra with the expression: 1/mT dN/dmT ∝ exp(–mT/Teff). Figure 4 shows the mass dependence of Teff for the complete mass range 0.2 <M <2.6 GeV. It also includes the hadron data for π and for η, ω, φ obtained as a by-product of the cocktail-subtraction procedure. A separate value is added for the ρ peak visible in figure 2, which is generally interpreted as the “freeze-out ρ” without in-medium effects. It is obtained by disentangling the peak from the underlying continuum through a side-window method.
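That fitting procedure can be sketched in a few lines. The following is a minimal illustration on synthetic data, not NA60's analysis code; the function name and the numerical values are hypothetical.

```python
import numpy as np

# Sketch: extract the inverse slope Teff from an mT spectrum by fitting
# 1/mT dN/dmT ∝ exp(-mT/Teff), i.e. a straight line in log space.
def extract_teff(mt, dn_dmt):
    y = np.log(dn_dmt / mt)          # log of (1/mT) dN/dmT, linear in mT
    slope = np.polyfit(mt, y, 1)[0]  # slope = -1/Teff
    return -1.0 / slope

# Hypothetical spectrum: pairs of mass M = 1.5 GeV with a true Teff of 0.25 GeV.
M, teff_true = 1.5, 0.25
pt = np.linspace(0.0, 2.0, 40)
mt = np.sqrt(pt**2 + M**2)                  # mT = (pT^2 + M^2)^(1/2)
dn_dmt = mt * np.exp(-mt / teff_true)       # ideal exponential mT spectrum
print(round(extract_teff(mt, dn_dmt), 3))   # → 0.25
```

For real, binned data with statistical errors one would of course use a weighted fit, but the exponential form means the single parameter Teff already captures the full shape.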

Taken together, the dilepton data and the hadron data suggest the following interpretation. The parameter Teff is roughly described by a temperature part and a radial-flow part: Teff ≈ T + Mv2, where v is the average flow velocity. The general rise of Teff with mass up to about 1 GeV is therefore consistent with the expectations for radial flow. Maximal flow (about half of the speed of light) is reached for the ρ, owing to its maximal coupling to pions, while all other hadrons freeze out earlier. The dilepton values rise nearly linearly up to the pole position of the ρ, but always stay well below the ρ line (dotted). This is exactly what would be expected for radial flow of an in-medium, hadron-like source (here π+π– → ρ) decaying continuously into dileptons. The average temperature associated with this region is 130–140 MeV.
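An illustrative back-of-envelope check of this relation (the values of T and v below are assumptions drawn from the ranges quoted in the text, not fitted results):

```python
# Illustrative check of the radial-flow relation Teff ≈ T + M v^2.
def teff_flow(mass_gev, temp_gev, v_over_c):
    return temp_gev + mass_gev * v_over_c**2

# Assuming T ≈ 0.135 GeV and <v> ≈ 0.5c, Teff rises roughly
# linearly with the pair mass M, as seen in the low-mass data.
for m in (0.2, 0.5, 0.77):   # dimuon masses up to the rho pole (GeV)
    print(f"M = {m:.2f} GeV: Teff ≈ {1000 * teff_flow(m, 0.135, 0.5):.0f} MeV")
```

With these assumed numbers the ρ pole mass gives a Teff of roughly 330 MeV, of the order of the freeze-out-ρ value, while low masses stay close to the temperature itself.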

For M >1 GeV, i.e. beyond the 2π region, the dilepton values fall suddenly by about 50 MeV down to a level of 200 MeV – an effect that is even more abrupt for the pure in-medium continuum (Arnaldi et al. 2009b). The trend set by a hadron-like source in the low-mass region makes it extremely difficult to reconcile such a fast transition with emission sources that continue to be of predominantly hadronic origin above 1 GeV. A much more natural explanation is a transition to a mainly early, i.e. partonic source with processes such as qq̄ → l+l– for which flow has not yet built up. The observed slope parameter Teff of around 200 MeV, which is essentially independent of M in this region, is then perfectly reasonable and reflects the average thermal values in the fireball evolution between a Tinit of around 220–250 MeV and a Tc of about 170 MeV. All in all, these findings on Teff may well represent a further breakthrough, pointing to a partonic origin of the observed thermal radiation for M >1 GeV and thus, rather directly, to deconfinement at SPS energies.

One final point further underlines the thermal-radiation character of the observed excess dileptons. The study of the dimuon angular distributions in NA60 has yielded complementary information on the production mechanism and the distribution of the annihilating particles, again a first in the field of nuclear collisions (Arnaldi et al. 2009c). Because of the lack of sufficient statistics for higher masses the study is restricted to the region M <1 GeV, but it finds that all coefficients describing the distributions (the “structure function parameters” λ, μ and ν, related to the spin-density matrix elements of the virtual photon) are zero and projected distributions in |cosθ| and |φ| are uniform (figure 5). This is a non-trivial result: the annihilation of partons or pions along the beam direction would lead to λ = +1, μ = ν = 0 (the well-known lowest-order Drell–Yan case) or λ = –1, μ = ν = 0, corresponding to transverse and longitudinal polarization of the virtual photon, respectively. The absence of any polarization is consistent with the interpretation of the excess dimuons as thermal radiation from a randomized system, as Paul Hoyer first suggested.
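To see why vanishing coefficients imply flat projections, the standard form of such an angular distribution can be evaluated directly. The sketch below uses assumed notation and illustrative values; it is not NA60's analysis code.

```python
import math

# Standard dilepton decay angular distribution (assumed convention):
#   dN/dcosθ dφ ∝ 1 + λ cos²θ + μ sin2θ cosφ + (ν/2) sin²θ cos2φ
def ang_dist(cos_theta, phi, lam, mu, nu):
    s2 = 1.0 - cos_theta**2                                    # sin²θ
    return (1.0 + lam * cos_theta**2
            + mu * 2.0 * cos_theta * math.sqrt(s2) * math.cos(phi)
            + 0.5 * nu * s2 * math.cos(2.0 * phi))

# λ = μ = ν = 0 (no polarization): flat everywhere, as NA60 observes.
print(ang_dist(0.0, 0.0, 0, 0, 0), ang_dist(0.9, 1.2, 0, 0, 0))  # → 1.0 1.0
# λ = +1 (lowest-order Drell–Yan) would instead modulate with cosθ.
print(ang_dist(0.0, 0.0, 1, 0, 0), ang_dist(0.9, 0.0, 1, 0, 0))  # → 1.0 1.81
```

Only the fully unpolarized case is independent of both angles, which is why measuring uniform |cosθ| and |φ| projections directly constrains all three coefficients to zero.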

To summarize, the NA60 experiment, a latecomer at the SPS, has provided answers to all of the major questions left over by previous dilepton experiments: on the spectral function of the ρ in connection to the chiral transition; on the origin of the excess dileptons for M >1 GeV in connection to the deconfinement transition; and on the thermal-radiation character of all excess dileptons. In addition, there has been major progress on charmonia. The answers are probably as clear as they could be at this stage of the field, but they will surely benefit from further progress in theory.

LEP – The Lord of the Collider Rings at CERN, 1980–2000: The Making, Operation and Legacy of the World’s Largest Scientific Instrument

By Herwig Schopper, Springer. Hardback ISBN 9783540893004 €39.95 (£36.99, $59.95). Online version ISBN 9783540893011.


Herwig Schopper’s energy and vitality remain undimmed, even though he turned 85 this year (CERN honours Schopper at 85). His book surveys the two decades of the Large Electron–Positron (LEP) collider, extending far beyond his own reign as CERN director-general in the years 1981–88.

From the outset, Schopper criticizes historians who have spurned his offer of first-hand but anecdotal input, preferring conventional archives and minutes. He contends that such lack of imagination can obscure the full picture. Thus the book is at its best when he relates how CERN’s history was moulded rather than recorded. Nobody was taking minutes when Schopper had working breakfasts with influential council delegates. Another example is his nomination as CERN’s director-general, where Italy was initially pushing for its own candidate. The sequel came later, when he carefully stage-managed an extension to his mandate to oversee the construction of LEP through to completion.

Fierce debate centred on the parameters of LEP: its circumference, tunnel diameter, precise footprint and the energy of its beams. Overseeing LEP called for a high level of scientific statesmanship. It was the largest civil-engineering project in Europe prior to the Channel Tunnel. As well as the technical challenge of building such a large underground ring at CERN, close to the Jura mountains, there was the diplomatic and demographic challenge of doing so beneath an international border, running close to and under suburbs and villages.

Closer to home was the thorny problem of catering for the physicists clamouring to use the new machine. How many detectors would be needed? Who would build and operate them? Who would lead the teams? With so much at stake, and so much enthusiasm, there was a lot of pushing and shoving to scramble aboard.

Schopper inherited the proton–antiproton collider in CERN’s Super Proton Synchrotron ring and while LEP was being planned and built he presided over the laboratory during the historic discovery of the W and Z particles – the carriers of the electroweak force. He recalls how this fast-moving research called for some skilful moves. In the middle of all this, the UK’s prime minister Margaret Thatcher dropped in, accompanied by her husband – “an elder (sic) gentleman whom she treated with astonishing kindness,” writes Schopper.

Experience had shown that LEP had to be presented from the outside as an integral part of CERN’s basic programme. However, this meant that no new money would be available. CERN’s research activities had to be pruned, a decision that did not go down well everywhere. Equally controversial were some deft moves on CERN’s balance sheets, transferring money between columns earmarked for operations and investments.

While planning and construction of the machine was hectic, it was usually predictable, but in the middle of it all, CERN was caught unawares when the UK, one of its major contributors, suddenly threatened to pull out completely. To counter the threat, CERN had to undergo painful invasive examination by an external committee. Its final recommendations were difficult to swallow but left CERN leaner and sharper. Schopper’s inside account of this period is most revealing.

Probably the biggest LEP controversy came right at the end. With its beam energy boosted to the limit in 2000, LEP was beginning to show tantalizing hints of the long-awaited Higgs particle. But the CERN juggernaut is irresistible. Before it had completed its act, LEP was kicked off the stage by the LHC proton collider for which the tunnel had been presciently designed right from the start. Schopper describes the resulting criticism and points out that it would indeed be ironic if the LHC found the Higgs inside the energy range that was still being explored by LEP.

Making decisions is not easy: long-term advantages can demand short-term sacrifices. Political popularity is another luxury, but highly visible VIP visits do seem to boost an organization’s self-esteem. Most titillating is when Schopper puts LEP aside and reveals what went on behind the scenes to get the Pope, the Dalai Lama and other VIPs to visit CERN. The initial machinations and detailed planning for the visits of French presidents and prime ministers had to be abandoned when their last-minute changes called for frantic improvisation.

The cumbersomely titled The Lord of the Collider Rings is a valuable addition to particle-physics literature but it is mainly written for insiders. The names of people, machines and physics measurements tumble onto the page with little introduction. Schopper acknowledges that some of the illustrations are not optimal. This makes the book look as though it were hastily assembled and gives the CERN reader a sense of déjà vu, which is underlined by a statutory presentation of the Standard Model.

There are a few minor errors. Schopper naturally prefers the Germanic Wilhelm von Ockham to William of Occam, of eponymous razor fame, who was English (but died in Bavaria). Physics World is published by the UK Institute of Physics, not the “British Physical Society”. Furthermore, there is little mention of the Stanford Linear Collider, which briefly trod on LEP’s toes in 1989.

Schopper’s anecdotes and insider views are certainly better entertainment – and possibly more incisive – than a dry formal history. After his LEP revelations, one now looks forward to what his successors at CERN will say about the groundwork for the LHC (historians, please take note).

The Large Hadron Collider: a Marvel of Technology

By Lyndon Evans (ed), EPFL Press. Paperback ISBN 97829400222346, €45 (SFr69).


Edited by Lyn Evans, the LHC project leader, this book outlines in a well balanced manner the history, physics and technologies behind the most gigantic scientific experiment at CERN: the LHC accelerator and its detectors. The book describes the highlights of the LHC’s construction and the technologies developed and used for both the accelerator and the experiments. The 16 chapters are all written by leaders of activities within the LHC project. The timing is perfect because the book is on the shelf just in time for the anticipated start of LHC-physics data-taking.

There are thousands of people at CERN – from universities and collaborating institutions around the globe – who have accompanied the LHC project over the past two decades or joined during the construction phase. In this book they will find a superb record and detailed account of their own activities and the many aspects and challenges that their colleagues involved in the LHC construction had to face and solve. It features excellent photos that illustrate many of the ingenious technological inventions and show the detailed LHC infrastructure, components and experimental equipment installed both in the tunnel and above ground.

Interested readers will learn about the scientific questions and theory behind the LHC. The book presents in detail the scale, complexity and challenges inherent in the realization of this wonder of technology. Readers will gain an insight into the managerial and organizational aspects of long-term planning in present-day, large-scale science projects. They will learn much about superconductivity and superconducting magnets; industrial-scale cryogenic plants and cryogenics; ultra-high vacuum techniques; beam physics, injection, acceleration and dumping; as well as environmental protection and security aspects around the LHC. They will also read about the complex political processes behind the approval, funding, purchasing and construction of these enormous scientific experiments.

Colleagues involved in new, large-scale scientific projects in Europe – e.g. ITER, XFEL, FAIR, ESS – would be well advised to read this book for the benefit of their respective projects. Many unforeseen problems faced during project execution, which required unconventional flexible measures to be adopted, are openly presented and discussed, with mention of the lessons to be learnt.

A significant part of the book is devoted to the description of the four major LHC experiments by their respective spokespersons and to the LHC data analysis and the Grid. The introduction is written by T S Virdee and provides a good overview of particle-detection basics, detector developments and challenges at the LHC. This section of the book is dedicated not only to the thousands of scientists, engineers and technicians involved in preparing LHC detectors worldwide but also – an interesting idea – to the agencies that funded the LHC detectors to a large extent.

In summary, this book comes at the right time and should be on the shelf of all friends of the LHC because it represents a nicely balanced record of the historical developments, technical challenges and scientific background. It is packed with many, many photos of the LHC taken during construction and assembly.
