Jefferson Lab starts its 12 GeV physics upgrade

The US Department of Energy’s (DOE) Thomas Jefferson National Accelerator Facility in Newport News, Virginia, has awarded four contracts as it begins a six-year construction project to upgrade the research capabilities of its 6 GeV, superconducting radio-frequency (SRF) Continuous Electron Beam Accelerator Facility (CEBAF).

The resulting 12 GeV facility – with upgraded experimental halls A, B and C and a new Hall D – will provide new experimental opportunities for Jefferson Lab’s 1200-member international nuclear-physics user community.

The contracts are the first to be awarded following DOE’s recent approval of the start of construction. The DOE Office of Nuclear Physics within the Office of Science is the principal sponsor and funding source for the $310 million upgrade, with support from the user community and the Commonwealth of Virginia.

Under a $14.1 million contract, S B Ballard Construction, of nearby Virginia Beach, will build Hall D and the accelerator tunnel extension that reaches it, as well as the new roads and utilities to support it. Hall D civil construction is expected to last from spring 2009 until late summer 2011.

Two further contracts are for materials for Hall D’s particle detectors and related electronics. Under a $3.3 million contract, Kuraray of Japan will produce nearly 3200 km of plastic scintillation fibres for the new hall’s largest detector – a barrel calorimeter approximately 4 m long and nearly 2 m in outer diameter.

This calorimeter will detect and measure the positions and energies of photons produced in experiments. Its precision will allow the reconstruction of the details of particle properties, motion and decay. Under a $200,000 contract, Acam-Messelectronic GmbH of Germany will provide some 1440 ultraprecise, integrated time-to-digital converter chips for reading out signals from particles in experiments.

Lastly, a $1.5 million contract has gone to Ritchie-Curbow Construction of Newport News for a building addition needed for doubling the refrigeration capacity of CEBAF’s central helium liquefier, which enables superconducting accelerator operation at 2 K.

CEBAF already offers unique capabilities for investigating the quark–gluon structure of hadrons. Since operations began in the mid-1990s, more than 140 experiments have been completed.

The experiments have already led to a better understanding of a variety of aspects of the structure of nucleons and nuclei, as well as the nature of the strong force. These include the distributions of charge and magnetization in the proton and neutron; the distance scale where the underlying quark and gluon structure of strongly interacting matter emerges; the evolution of the spin-structure of the nucleon with distance; the transition between strong and perturbative QCD; and the size of the constituent quarks.

The beautiful programme of parity violation in electron scattering has permitted the precise determination of the strange quark’s contribution to the proton’s electric and magnetic form factors, with results that are in excellent agreement with the latest results from lattice QCD. This programme has placed new constraints on possible physics beyond the Standard Model.

New opportunities

Careful study in recent years by users and by the US Nuclear Science Advisory Committee has shown that a straightforward and comparatively inexpensive upgrade that builds on CEBAF’s existing assets would yield tantalizing new scientific opportunities.

The DOE study Facilities for the Future of Science: A Twenty-Year Outlook recommended the 12 GeV upgrade as a near-term priority. This 20-year plan used plain language to explain why. Speaking of quarks, it read: “As yet, scientists are unable to explain the properties of these entities – why, for example, we do not seem to be able to see individual quarks in isolation (they change their natures when separated from each other) or understand the full range of possibilities of how quarks can combine together to make up matter.”

The 12 GeV upgrade will enable important new thrusts in Jefferson Lab’s research programme, which generally involve the extension of measurements to higher values of momentum-transfer, probing correspondingly smaller distance scales. Moreover, many experiments that can run at a currently accessible momentum-transfer will run more efficiently at higher energy, consuming less beam time.

For the first time nuclear physicists will probe the quark and gluon structure of strongly interacting systems to determine whether QCD gives a full and complete description of hadronic systems. The 12 GeV research programme will offer new scientific opportunities in five main areas. First, in searching for exotic mesons, in which gluons are an unavoidable part of the structure, researchers will explore the complex vacuum structure of QCD and the nature of confinement. Second, extremely high-precision studies of parity violation, developed to study the role of hidden flavours in the nucleon, will enable exploration of particular kinds of physics beyond the Standard Model on an energy scale that cannot be explored even with the proposed International Linear Collider.

The combination of luminosity, duty factor and kinematic reach of the upgraded CEBAF will far surpass anything previously available for this kind of research. This opens up a third opportunity: a previously unattainable view of the spin and flavour dependence of the distributions of valence partons – the heart of the proton, where its quantum numbers are determined. The upgrade will also allow a similarly unprecedented look into the structure of nuclei, exploring how the valence-quark structure is modified in a dense nuclear medium. These studies will yield a far deeper understanding, with far-reaching implications for all of nuclear physics and nuclear astrophysics.

Lastly, the generalized parton distributions (GPDs) will allow researchers to engage in nuclear tomography for the first time – discovering the true 3D structure of the nucleon. The GPDs also offer a way to map the orbital angular momentum carried by the various flavours of quark in the proton.

New equipment

The CEBAF accelerator consists of two antiparallel 0.6 GV SRF linacs linked by recirculation arcs. With up to five acceleration passes, it serves three experimental halls with simultaneous, continuous-wave beams – originally with a final energy of up to 4 GeV, but now up to 6 GeV, thanks to incremental technology improvements. Independent beams are directed to the three existing experimental halls, each beam with fully independent current, a dynamic range of 10⁵, high polarization and “parity quality” constraints on energy and position.

The new Hall D will be built at the end of the accelerator, opposite the present halls. Experimenters in Hall D will use collimated beams of linearly polarized photons at 9 GeV produced by coherent bremsstrahlung from 12 GeV electrons passed through a crystal radiator. To send a beam of that energy to that location requires a sixth acceleration pass through one of the two linacs. This means adding a recirculation beamline to one of the arcs. It also requires augmenting the accelerator’s present 20 cryomodules per linac with five higher-performing ones per linac. Each 25-cryomodule linac will then represent 1.1 GV of accelerating capacity. The maximum energy for five passes will rise to 11 GeV for the three original halls, with experimental equipment upgraded in each.
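
The pass arithmetic behind these numbers is simple to check. Below is a minimal sketch (in Python, for illustration only) that neglects the small injector energy – an assumption made here for simplicity, not a figure from the project documents:

```python
# Back-of-the-envelope energy arithmetic for the upgraded CEBAF.
# Assumption (for illustration): the small injector energy is neglected.

LINAC_VOLTAGE_GV = 1.1  # each 25-cryomodule linac after the upgrade

def beam_energy(full_passes, extra_linac_pass=False):
    """Energy in GeV after passing through both linacs `full_passes` times,
    optionally with one further pass through a single linac (as for Hall D)."""
    energy = full_passes * 2 * LINAC_VOLTAGE_GV
    if extra_linac_pass:
        energy += LINAC_VOLTAGE_GV
    return energy

print(f"{beam_energy(5):.1f} GeV")                         # ~11 GeV for Halls A, B and C
print(f"{beam_energy(5, extra_linac_pass=True):.1f} GeV")  # ~12 GeV for the new Hall D
```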

As of early 2009, not only have the first contracts been awarded, but solicitations have been issued for about 40% of the total construction cost. CEBAF’s upgrade is the highest-priority recommendation of the Nuclear Science Advisory Committee’s December 2007 report The Frontiers of Nuclear Science: A Long Range Plan. “Doubling the energy of the JLab accelerator,” the report states, “will enable three-dimensional imaging of the nucleon, revealing hidden aspects of its internal dynamics. It will test definitively the existence of exotic hadrons, long-predicted by QCD as arising from quark confinement.” Efforts to realize this new scientific capability are now well underway.

Franco Bonaudi: wise spirit of the early CERN

Franco Bonaudi, who died on 21 December 2008, was one of the first electronic engineers to work for CERN. In July 1952, two years before the organization was formally created, he was sent from Rome by Edoardo Amaldi to Liverpool, to learn about synchrocyclotrons. Speaking only some basic English, he arrived in Liverpool with Frank Krienen, assistant to Cornelis Bakker, the newly appointed team leader for the 600 MeV Synchrocyclotron (SC) that was to be CERN’s first accelerator facility. Bonaudi got on so well with his hosts and his new boss that, as well as perfecting his English and learning about accelerators, he acquired valuable training in dealing with industrial firms, as Krienen had earlier worked in the research laboratories of Philips at Hilversum.

Krienen and Bonaudi left Liverpool when the CERN staff started to gather close to the Geneva site where the new European laboratory was to be built. Most of the SC team were housed in barracks at Geneva’s airport, but Bonaudi, with Joop Vermeulen, was soon dispatched to a hut on the Meyrin site to oversee the construction of the accelerator. The staff of the infant CERN numbered around 150 at that time; they were of many different nationalities and nearly all strangers to the region. Communicating in poor English, they worked together as family and friends – the first CERN telephone directory contained private numbers. Bonaudi said of that period: “We made real friendships”. At the same time, they rapidly completed their professional task and the first beam circulated in the SC on 1 August 1957.

Meanwhile, a much larger undertaking was progressing well, with the construction of the 24 GeV PS. Before the machine saw its own first beam on 24 November 1959, Bonaudi had become leader of the Apparatus Layout Group and so taken his first steps from machine builder towards experimental support, which was to become his greatest strength. Theo Kröwerath, a charismatic figure who progressed from driving a tank to being responsible for CERN’s Transport Group, still remembers the trips to suppliers that he made with Bonaudi at that time and believes that he owes his professional success to those, like Bonaudi, whose approach epitomized the spirit of the early CERN. This ethos was grounded in a tremendous respect for the work of all members of a team, whether they were engineers, physicists, technicians, mechanics or crane drivers, with – an added speciality of CERN – the more nationalities in a group, the better.

In 1963, with construction of the SLAC 20 GeV linear accelerator just beginning, Bonaudi went to California for a year to help to design the experimental areas. He shared an office with David Coward, who remembers that Bonaudi’s experience proved invaluable. He made significant contributions to designs of radiation shielding, for both personnel and experimental equipment, and to the design of the distribution of utilities throughout the SLAC experimental areas. He also actively participated in the physics meetings that helped to establish the nascent SLAC experimental physics programme. At the same time, Bonaudi made friends for life and helped to initiate a successful series of exchanges of physicists and engineers.

Back at CERN, Bonaudi was asked to design the experimental areas for the Intersecting Storage Rings (ISR), based on the space and facilities indicated by some initial ideas for experiments. Construction began in 1966, with CERN’s Meyrin site extended into France to accommodate the machine. Bonaudi, as head of the ISR General Layout Group, was responsible for building the halls for experiments and the tunnels for the whole machine.

The group included its own civil engineering section and later had sections for both electrical cabling and power distribution. Taking over responsibility for the control and signal cabling – invariably underestimated for physics and machines alike – revealed another insight into Bonaudi’s way of solving problems. When cabling teams fell behind schedule, he would invite all of the members of the group to join him on a cable-pulling weekend. Such was his popularity that it was always a huge success, with everybody knowing that “the boss” would be working harder than anyone, on the worst part of the task. His unspoken motto was: “Let’s get the job done as simply and quietly as possible.”

From the ISR to LEP

With the ISR construction satisfactorily completed and first beams colliding in January 1971, Bonaudi converted his construction team into the ISR Experimental Support Group, which offered extensive assistance to the many and diverse experiments. It is worth noting that, while it was necessary to excavate a few pits to create more space under the collision regions for some of the later, larger experiments, the halls proved to be correctly dimensioned, though some of the built-in flexibility, such as demountable machine piers, was never needed.

Both John Adams and Léon Van Hove, as executive and research directors-general respectively, recognized Bonaudi’s success at the ISR. As a result, he became a much appreciated member of the directorate with responsibility for the entire CERN accelerator complex. His mandate included the period when the SPS was converted into a proton–antiproton collider, following the proposal by Carlo Rubbia. Bonaudi chaired the committee of accelerator experts that defined the final layout of the whole project and chose stochastic cooling, invented by Simon van der Meer at the ISR, to produce the low-emittance, 3.5 GeV beams of antiprotons. This latter choice, together with the decision to accelerate the antiproton beam in the PS before injection into the SPS – an essential point that Bonaudi recognized and finally decided – was the key to the success of the Nobel Prize-winning project.

After completing his three-year term in the directorate, Bonaudi was delighted to be invited to join the UA2 experiment and work hands-on with particle detectors, namely the central calorimeter, from testing to data taking. Pierre Darriulat, who was the spokesman of the UA2 experiment, recalls that Bonaudi’s colleagues on UA2 liked him a lot and respected him highly for his wisdom. On many occasions when a difficult decision had to be made, his advice was taken and followed. In Darriulat’s words: “He visibly enjoyed the exciting research atmosphere and the contacts with younger colleagues, and his relations with the members of the collaboration were of a very close and profound friendship.”

Bonaudi’s wisdom was soon required again by CERN for the LEP project. He was invited to join the project management team with responsibility for the experimental areas. The four deep underground areas in CERN’s first project to be classified as an Installation Nucléaire de Base by the French government required a new approach to safety throughout the construction and installation phase. Bonaudi took these aspects seriously, and the low accident rates during the project show how successful he was.

The director-general of the time, Herwig Schopper, notes that Bonaudi’s responsibilities for the infrastructure of the LEP experiments – which involved getting the complicated detectors and all of the necessary services installed in time – represented a formidable challenge. It was made particularly tricky by the changing time-schedule that arose from difficulties with the LEP tunnelling. “If the experiments were ready to take data at the turn-on of the machine, it was in great part thanks to the untiring efforts of Franco,” says Schopper. Both Schopper and Emilio Picasso, the LEP project leader, stress the importance of Bonaudi’s presence on the LEP Management Board. “Franco’s regular contributions at meetings were always appreciated for the competence of his intervention, and we always followed his suggestions,” Picasso recalls. “I also greatly appreciated that, thanks to him and his group, the collaboration with the physics community was smooth and successful.”

Once the LEP beams were successfully circulating in 1989, Bonaudi’s attention returned to particle detectors, this time taking on the task of scientific secretary of the Detector Research and Development Committee, which advised the director-general on the numerous detector R&D projects being launched for the future high-luminosity LHC. While some might see this as a routine task, for Bonaudi it was an opportunity to work with friends and colleagues from the detector community, as he prepared for retirement from CERN in March 1993.

Retirement meant more time to devote to helping people in other ways, and Bonaudi immediately became involved in training and education, in particular in his home city of Turin, where he was appointed a member of the Academy of Science in 1991. He gave lectures on detectors and accelerators at both Turin University and the Politecnico, where he had completed his own studies in 1950. Even before retiring, in 1988 he became an active member of the scientific committee of the Associazione Sviluppo Piemonte (ASP: the Association for the Development of Piedmont), taking care of the relationship between ASP and CERN.

Throughout the 1990s, Bonaudi gave seminars and lectures to complement courses at Turin University on accelerators and detectors. In 1991 he was one of the founding organizers of the successful school for young Italian researchers and doctoral students, Giornate di studio dei Rivelatori. Emilio Chiavassa, professor of physics at Turin, recalls: “Franco was not only a promoter and organizer of the school, but actively participated every year with enthusiasm and competence.” Now in its 19th edition, the school, held on 10–13 February, was subtitled “Scuola F Bonaudi” in his memory. In addition, he gave many courses on accelerator physics and detectors at the Politecnico, where he was, says Piero Quarati, “a reference point for all the engineering students who went to work at CERN for their laurea or PhD”. Elsewhere, Bonaudi was sought after as a member of several advisory committees, notably at the INFN-Frascati Laboratory.

Andrew Hutton, director of the Accelerator Division at the US Thomas Jefferson Laboratory, who chaired the DAΦNE Machine Advisory Committee, particularly appreciated his experience at the interface between the accelerator builders and the experimenters. “While Franco had the technical understanding of both groups,” says Hutton, “more importantly, he had the personality to be able to bridge the mutual incomprehension between them. Franco always aimed to help everyone see the best way forward and to understand the point of view of the other side, so everyone left his meetings with the sense that they had gained something – a rare talent.” Bonaudi organized a series of meetings between the DAΦNE accelerator builders and the future experimenters, bringing to the Machine Advisory Committee the results of the consensus that he had engineered. “In the committee he was always low key, adding a word here and there to facilitate the discussions,” remembers Hutton. “What I came to realize only later was that he was aware that I had never chaired a committee like this before, and he was steering me away from pitfalls and mistakes without anyone, including me, being aware of it.”

This ability was also appreciated outside Italy. For a number of years Bonaudi was invited ad personam to be a member of more than one advisory committee for the European Southern Observatory (ESO), and he also chaired a working group. Per Olof Lindblad, who was the representative of the ESO Council on another group, recalls that Bonaudi made particularly constructive contributions concerning the roles of a project scientist and the need for a project manager for the Very Large Telescope.

Bonaudi’s concern for others was always evident, whether driving the elderly and needy for a Swiss charitable organization or actively participating in the Middle East Scientific Collaboration (MESC). Eliezer Rabinovici, professor of physics at the Racah Institute of Physics, the Hebrew University, Jerusalem, recalls: “The group of scientists and interested people that gathered under the umbrella of the MESC was very colourful. We prepared together the activities highlighted by the very special meetings in Dahab and Turin.” In particular, the meeting in Turin was the first occasion when the idea of the Synchrotron-light for Experimental Science and Applications in the Middle East – SESAME, the synchrotron radiation laboratory created under the auspices of UNESCO in Jordan – was introduced to a Middle Eastern audience.

Franco Bonaudi’s contribution to CERN is obvious and inestimable. He helped to shape the successful, world-renowned research organization that we know today. His influence went far beyond the boundaries of CERN and, no matter where, all of his colleagues remember him as a great friend, the wisest of men, who will be sorely missed. It was a privilege to know and work with him.

LHeC: novel designs for electron–quark scattering

Abdus Salam, at the Rochester Conference in Tbilisi in 1976, considered the idea of “unconfined quarks” and leptons as a single form of matter, in contrast to the distinctions between them in the Standard Model. Some 30 years later, it is appropriate to ask if a high-performance electron–proton collider could be built to investigate such ideas, complementing the LHC and a pure lepton collider at the tera-electron-volt energy scale. Entering this unexplored territory for electron–quark scattering is a challenging prospect, but one that could yield vast rewards.

On 1–3 September 2008, some 90 physicists met at Divonne, near CERN, for the inaugural meeting of the ECFA–CERN Large Hadron Electron Collider (LHeC) workshop on electron–proton (ep) and electron–ion (eA) collisions at the LHC. The workshop will initially run for two years, and a diverse mixture of accelerator scientists, experimentalists and theorists will produce a conceptual-design report. This will assess the physics potential of an electron beam interacting with LHC protons and ions, as well as details of the electron-beam accelerator, the interaction region, detector requirements and the impact on the existing LHC programme. HERA, at DESY, was the previous ep machine at the energy frontier. By the time it ceased operation in 2007, its 15-year programme had led to many new insights into strong and electroweak interactions, provided much of the current knowledge of the parton densities of the proton and placed important constraints on physics beyond the Standard Model.

Physics potential

A new era of high energy and intensity for proton beams is now beginning with the switch-on of the LHC. Preliminary estimates suggest that the addition of an electron beam could yield ep collisions at a luminosity of the order of 10³³ cm⁻² s⁻¹ and a centre-of-mass energy of 1.4 TeV (J Dainton et al. 2006). This would probe distance scales below 10⁻¹⁹ m (figure 1). In comparison, the best performance ever achieved at HERA was a luminosity of 5 × 10³¹ cm⁻² s⁻¹ at an ep centre-of-mass energy of 318 GeV (figure 2).
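
For head-on, ultrarelativistic beams the centre-of-mass energy is √s ≈ √(4EeEp), which reproduces the figures quoted here. A quick sketch, with beam energies as given in this article (particle masses and any crossing angle neglected):

```python
from math import sqrt

def cm_energy_gev(e_lepton_gev, e_proton_gev):
    """sqrt(s) ~ sqrt(4*Ee*Ep) for head-on, ultrarelativistic ep beams."""
    return sqrt(4 * e_lepton_gev * e_proton_gev)

print(f"{cm_energy_gev(27.5, 920):.0f} GeV")  # HERA: ~318 GeV
print(f"{cm_energy_gev(70, 7000):.0f} GeV")   # LHeC ring option: ~1400 GeV (1.4 TeV)
print(f"{cm_energy_gev(100, 7000):.0f} GeV")  # LHeC linac option: ~1700 GeV, approaching 2 TeV
```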

The large luminosity and energy increases set the LHeC apart from other future ep colliders previously considered. If realized, it would lead to the first precise study of lepton–quark interactions at the tera-electron-volt scale and would have considerable discovery potential. The workshop in Divonne began with remarks from CERN’s chief scientific officer, Jos Engelen, who expressed CERN’s interest and support for the study. Encouragement from ECFA was sent via its chair, Karlheinz Meier of Heidelberg, and the involvement of the nuclear-physics community was highlighted by Guenther Rosner from Glasgow, now chair of the Nuclear Physics European Collaboration Committee (NuPECC). In his opening lecture, Guido Altarelli of Rome introduced the wide-ranging possibilities of ep physics at the “terascale” and urged that ep/eA collisions must happen at some point during the LHC’s lifetime. The chair of the LHeC steering group, Max Klein of Liverpool, then summarized previous promising work on the topic and the aims of the new workshop.

Following the opening session, the meeting split into smaller groups to discuss specialized issues in more detail, with each group reporting its findings at the conclusion of the meeting.

An LHeC would be uniquely sensitive to the physics of massive electron–quark bound states and to other exotic processes involving excited or supersymmetric fermions. Beyond the search for new particles such as these, the LHeC would complement the LHC in the investigation of the Standard Model and in understanding new physics. Light Higgs bosons would be produced dominantly through WW fusion and could be precisely studied in decay modes such as bb̄, which is expected to be problematic at the LHC. At the LHeC, top quarks would be produced copiously, both singly and in pairs, in the relatively clean environment offered by ep scattering.

With LHeC data the parton densities of the proton could be measured at momentum-transfer-squared beyond 10⁶ GeV² and at small fractions of the proton momentum (with Bjorken-x below 10⁻⁶), which are previously unexplored regions (figure 3). The kinematic range covered would match that required for a full understanding of parton–parton scattering in LHC proton–proton (pp) collisions. LHeC data would constrain each of the quark flavours separately for the first time, giving unrivalled sensitivity to the heavy quarks and to the gluon density over several orders of magnitude in x. In the process the strong coupling constant could be measured to unprecedented precision.
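
These kinematic frontiers follow from the standard deep-inelastic-scattering relation Q² = xys. A short sketch using the 1.4 TeV centre-of-mass energy quoted earlier (the Q² = 1 GeV² reference value is an illustrative choice, not a number from the article):

```python
# DIS kinematics: Q^2 = x * y * s, so the smallest accessible Bjorken-x at a
# given Q^2 is reached as the inelasticity y -> 1.
SQRT_S_GEV = 1400.0
s = SQRT_S_GEV ** 2  # in GeV^2

def x_min(q2_gev2, y=1.0):
    return q2_gev2 / (y * s)

print(f"{x_min(1.0):.1e}")  # ~5e-07 at Q^2 = 1 GeV^2: below 1e-6, as quoted
print(x_min(1e6) <= 1.0)    # True: Q^2 beyond 1e6 GeV^2 is kinematically reachable
```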

Such an ep collider would provide an unrivalled laboratory for the study of strong-interaction dynamics. It would access a low-x region where quarks that are usually “asymptotically free” meet an extremely high background density of partons. Various novel effects are predicted, including a well-supported conjecture that, in protons at LHC energies, pairs of the densely packed partons begin to recombine into single quarks or gluons.

An LHeC would also allow for the scattering of leptons off heavy ions, which also has outstanding potential because all current knowledge of nuclear-parton distributions has been obtained in fixed-target experiments. The LHeC would extend the x and Q² ranges explored by up to four orders of magnitude, offering an understanding of the initial partonic states in LHC heavy-ion collisions and amplifying the sensitivity to the new physics of ultradense partonic systems. Electron–deuteron scattering would allow the first exploration of neutron structure at collider energies, leading to further unique studies of parton densities and to tests of long-proposed relationships between diffraction and nuclear shadowing.

Accelerator challenges

To realize these wide-ranging physics possibilities, the main challenge lies in bringing the LHC’s protons or heavy ions into collision at high luminosity with a new electron beam, without inhibiting the ongoing hadron–hadron collision experiments. Working groups are pursuing two basic layouts of the electron accelerator for the conceptual design report, in order to understand fully the advantages and consequences of each.

An electron beampipe in the same tunnel as the LHC has the advantage of high luminosity, beyond 10³³ cm⁻² s⁻¹, at energies of 50–70 GeV for reasonable power consumption (figure 4). According to a preliminary study, synchronous ep and pp LHC operation appears to be possible. This set-up would require by-pass tunnels of several hundred metres around existing experiments. These ducts could be used to host the RF infrastructure and could be excavated in parallel with normal LHC operations. Injection to an electron ring could be provided by the Superconducting Proton Linac (SPL), which is under consideration as part of the LHC injection upgrade. A further option for an initial phase of the LHeC is to use multiple passes in the SPL for the full electron acceleration, which could produce energies of around 20 GeV.

An alternative solution for the electron beam is a linear accelerator (linac) with somewhat reduced luminosity but with an installation that is decoupled from the existing LHC ring (figure 5). The linac could use RF cavity technology under development for the proposed International Linear Collider, in either pulsed or continuous-wave mode. Power and cost permitting, it could produce energies of 100 GeV or more and provide electron–quark collisions at a centre-of-mass energy approaching 2 TeV.

Detailed calculations of the LHeC electron-beam optics have led to proposals for the layout of the interaction region, which is also a major consideration for the detector design. The highest projected luminosities, which are required to probe the hardest of ep collisions, may be achieved by placing beam-focusing magnets close to the interaction point. However, measurements at small angles to the beampipe are also important for the study of the densest partonic systems at low x and the hadronic final state at high x. Among the many interesting ideas, one proposed design involves instrumenting the focusing magnets for energy measurements. A first detector study for ep and eA physics at the LHeC includes high-precision tracking and high-resolution calorimetry, which would lead to a new level of precision in ep collider experiments.

Following an interim report presented to ECFA at the end of November 2008, the conceptual design work on an ep/eA collider at the LHC continues, with a second major workshop meeting scheduled for 7–8 September 2009. If realized, this facility would become an integral part of the quest to understand fully the new terascale physics that will emerge as the LHC era unfolds.

Planck satellite takes off to chart the universe

ESA’s Planck spacecraft is the first European satellite dedicated to the study of the cosmic microwave background (CMB) radiation. Due to be launched on 29 April aboard an Ariane 5 rocket from ESA’s launch site in Kourou, French Guiana, the satellite has the primary goal of determining the cosmological parameters of the universe and surveying astronomical sources. Scientists are hopeful that it should also answer many other fundamental and astrophysical questions.

The satellite will orbit at the second Lagrangian point (L2) of the Earth–Sun system, 1.5 million km from the Earth (figure 1). From this position, Planck will explore the unknowns of the cosmic background radiation – the relic radiation that carries with it many secrets of the history and evolution of the universe. During the 380,000 years following the Big Bang, the dramatic events that steered the evolution of the universe, its geometry and its properties were imprinted on what became the CMB.
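
The quoted distance can be recovered from the Hill-sphere approximation for L2, r ≈ R(M_Earth/3M_Sun)^(1/3) – a standard estimate, not anything specific to the mission:

```python
# Approximate Earth-Sun L2 distance via the Hill-sphere estimate
# r ~ R * (m_earth / (3 * m_sun))**(1/3); constants rounded.
R_EARTH_SUN_KM = 1.496e8  # one astronomical unit, in km
MASS_RATIO = 3.003e-6     # m_earth / m_sun

r_l2_km = R_EARTH_SUN_KM * (MASS_RATIO / 3) ** (1.0 / 3.0)
print(f"{r_l2_km:.2e} km")  # ~1.5e6 km beyond the Earth, as quoted
```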

The CMB today permeates the universe and has an average temperature of 2.725 K, though observations have revealed slightly colder and hotter spots known as anisotropies. Highly accurate studies of where these anisotropies are and what produced them may allow researchers to decode a wealth of information about the properties of the universe. Planck’s task – 13.7 billion years after the Big Bang – is not an easy one, however, because the radiation signal is feeble and is embedded in all of the other galactic and extragalactic signals, each emitting at different frequencies.

Following in WMAP’s footsteps

The first two scientific missions to map the CMB and its anisotropies were NASA’s Cosmic Background Explorer (launched in 1989) and the Wilkinson Microwave Anisotropy Probe (WMAP, launched in 2001). The data from these two satellites confirmed that the universe is flat, that its expansion is accelerating and that only 4% consists of known forms of matter. Nevertheless, given the lower accuracy of previous experiments, many questions remain concerning the nature of dark energy (73% of the universe) and dark matter (23%), as well as the processes that marked the infancy of the universe.

Planck comes eight years after WMAP and is designed to improve significantly on those results. The satellite is equipped with both the Low Frequency Instrument (LFI) and the High Frequency Instrument (HFI). “Together, the two instruments will scan the universe in nine frequency channels, with a sensitivity that is 10 times better than that of WMAP,” says Reno Mandolesi of the Italian Institute of Space Astrophysics and Cosmic Physics in Bologna (IASF-BO/INAF). He is also the principal investigator of the consortium that built the LFI. “However, the main improvement of Planck, with respect to previous missions, is in the suppression and control of systematic effects. The HFI and LFI employ two different detection techniques and this drastically reduces the systematic effects. Both instruments operate at cryogenic temperatures, at which the intrinsic noise coming from the devices is reduced to a minimum,” he adds.

The systematic effects can also be controlled by an appropriate choice of orbit and sky-scanning strategy. “WMAP was the first satellite to orbit round L2 and Planck will fly in a similar orbit. From L2 the noise from the Earth is drastically reduced,” confirms Mandolesi. Also, from this position the satellite’s telescope can always be protected from illumination from the Earth, the Sun and the Moon, thanks to the optimal design and observational strategy.

The LFI is an array of 22 radiometers, each one made of an antenna to capture the signal and cryogenically cooled (20 K) electronics – a combination of ultralow-noise amplifiers and high-electron-mobility transistors – for read-out. “Low noise-temperature fluctuations in the amplifiers are a crucial factor in the measurement,” says Mandolesi. “The LFI radiometers meet the requirements for both noise and bandwidth, with low power consumption at all frequencies – and they establish world-record low-noise performances in the 30–70 GHz range. This is particularly important considering that the main noise sources come from our own galaxy and have their minimum around the 70 GHz frequency,” he explains.

The HFI is an array of 48 bolometric detectors that is placed at the focal plane of the Planck telescope. These will measure the energy of the incident CMB radiation in six frequency channels between 100 and 857 GHz, with sensitivity in the lower frequencies close to the fundamental limit set by the photon statistics of the background. The HFI was designed and built by a consortium of scientists led by Jean-Loup Puget of the Institut d’Astrophysique Spatiale in Orsay. The detectors operate at the cryogenic temperature of 0.1 K, obtained using a cryochain of sorption, mechanical and dilution coolers.
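
For orientation, the peak of a 2.725 K blackbody sits near 160 GHz (Wien’s displacement law in its frequency form), which is why Planck’s nine channels – from the LFI’s 30–70 GHz to the HFI’s 100–857 GHz – bracket this value. A one-line check, using the standard frequency-form Wien coefficient:

```python
# Wien's displacement law in frequency form: nu_peak ~ 58.79 GHz/K * T.
T_CMB_K = 2.725
nu_peak_ghz = 58.79 * T_CMB_K
print(f"{nu_peak_ghz:.0f} GHz")  # ~160 GHz, within the HFI band
```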

Signals from the CMB are polarized in two types of mode, known as E-modes and B-modes. The E-modes have already been measured (Kovac et al. 2002; Page et al. 2007). All of the LFI channels and four of the HFI channels can measure the intensity of the CMB radiation as well as its linear polarization. “By combining the signals measured by the LFI and HFI, Planck might be able to discover the B polarization mode, which is linked to the existence of the primordial gravitational waves,” says Mandolesi.

“In some cosmological models it could even be possible to find signatures that might correspond to scenarios with extra dimensions of the universe. Also, the mass and quantum fluctuations that occurred at 10⁻³⁵ s after the Big Bang, and might have affected cosmic inflation, can be explored by studying the polarization modes of the CMB with high accuracy. Furthermore, Planck’s excellent sensitivity might allow the discovery of interesting physics hidden behind the non-Gaussian distribution of the temperature anisotropies predicted by many cosmological models,” he explains.

Planck will start collecting physics data after a three-month period of commissioning in orbit. Six months later the scientific teams will start the analysis, aimed at the early release of a catalogue of compact sources, the first to be made at so many frequencies. It is expected to become public about 15 months after the launch. A core team of about 100 scientists supporting the Data Processing Centre in Trieste will carry out the processing and analysis of LFI data. The HFI data will be processed by a distributed system involving several institutes in France and the UK. The satellite will accomplish two complete surveys of the sky over 14 months and the hope is that this will be extended to four surveys.

• For more about Planck, see www.esa.int/SPECIALS/Planck/.

Knowledge transfer: from creation to innovation

There have been many studies of the economic returns to member states from CERN’s purchasing. The most recent published report clearly indicates that European industry values highly the benefits that result from technological learning. In the LHC experiments almost half of the participants are from non-member states of CERN, and many contracts are placed in non-affiliated countries. Thus the spillover of technological learning from high-energy physics now extends worldwide. However, CERN’s potential may well be underutilized when it comes to industry.

CERN could probably enhance the spectrum of its technological impact by paying more attention to the management of technological learning and by fostering more-explicit exchanges of new technological developments beyond the needs of the purchasing sphere alone. This is particularly important today because developments in high-energy physics can take up to 20 years before their impact is felt outside the field.

The basis for the study

A recent study by Helsinki University and CERN, entitled Knowledge Creation and Management in the Five LHC Experiments at CERN: Implications for Technology Innovation and Transfer, has made a detailed analysis of knowledge transfer in the context of the LHC experiments ALICE, ATLAS, CMS, LHCb and TOTEM. This both conceptualizes and confirms with quantitative data CERN’s role in the creation of knowledge.

Each year hundreds of young people join CERN as students, fellows, associates or staff members taking up their first employment. This continuous flow of people – who come to CERN, are trained by working with CERN’s experts and then return to their home countries – provides a useful example of knowledge and technology transfer through people. The new study provides evidence that the social process of participation in meetings, the acquisition of skills in different areas and the development of interests through interaction with colleagues are key elements in the learning process.

The study analysed 291 replies to a questionnaire handed out during LHC-experiment collaboration meetings, which asked questions concerning individual perception and assessment of knowledge acquisition and transfer, as well as the means used to communicate knowledge. The respondents consisted of CERN users (79%) and staff members (21%), with 80% of them physicists. Some 70% of respondents were younger than 40.

Figure 1 illustrates the model for the pattern of knowledge acquisition and transfer on which the study was based. It is important to underline that within this scheme, and indeed in general, only individuals can create knowledge, which expands from tacit to explicit through social interaction. Tacit knowledge is essentially individual and cannot necessarily be communicated and shared in a systematic or logical manner. It has to be converted into words or numbers that are understood easily enough to be shared and become explicit. However, not all individual tacit knowledge becomes explicit. Explicit knowledge is more formal and systematic: it can be expressed in words and numbers, and is easily communicated and shared in the form of written and spoken language, hard data, scientific formulae and codified procedures.

For organizations to be effective in the process of knowledge transfer they must provide a context in which individuals can hold both formal and informal discussions to steer new ideas as well as foster collective learning. Economists and sociologists see this knowledge generation as being particularly important because it underlies societal and technological innovation and is of relevance to the industrial and wider world. One of CERN’s core assets is individual and organizational learning – the latter being the social process where a group of people collectively enhance their capacities to produce an outcome. The creation of organizational knowledge amplifies the knowledge that is created by individuals who spread it at the group level through dialogue, discussion, experience sharing or observation.

Learning benefits

Large experiments, such as those at the LHC, form the hub of an institutional and organizational network. The interactions between individuals – both among teams and within teams that share a common interest – as well as between experiments, are important routes for knowledge transfer, according to the study (figures 2a and 2b).

Such interactions are enabled by the organizational structure of the collaboration and by the frequent use of modern communication tools, such as e-mail and websites. Furthermore, the results indicate that knowledge acquisition in the multicultural environment plays a mediating role in the interaction between social capital constructs (social interaction, relationship quality and network ties) and outcomes related to competitive advantage (invention development and technological distinctiveness). In short, the fertile environment of the LHC experiments fosters a dynamic, interactive and simultaneous exchange of knowledge both inside and outside the collaborations (figure 2c).

Individuals can create and expand knowledge through the social process, which also involves industry at various phases of project development. The study was unable to assess the interaction with industry completely because it was carried out towards the end of the installation phase, when R&D was over and most of the important, challenging orders had already been placed. There was, therefore, not much need of follow-up and contact with industry. Nevertheless, the respondents generally agreed that they had benefited from relationships with and knowledge of industry (figure 2d). It was clear that only a select group of people had been in charge of relations with industry; the scarcity of data (for the reasons explained previously) did not allow their profile to be characterized.

The study also assessed the personal outcomes of knowledge transfer, which were found to be substantial in all of the experiments. These were evaluated in terms of the widening of scientific interests and knowledge; the expansion of social networks; and the enhancement of scientific skills at many different levels (planning, data analysis, paper writing), together with the acquisition of new technical and technological skills. These positive outcomes span a wide age-range, demonstrating a benefit to both young and experienced physicists. The domains of useful technological learning ranged from physics to detector technologies, electronics, information technologies and management. The many innovative developments can be categorized as follows: 41% in detector technologies; 33% in computing; 25% in electronics; and 1% in other areas.

The results also show the importance of management in large physics collaborations (94 of 291 respondents had a management and co-ordination role in addition to their physics or engineering functions). Almost 50% of the respondents underlined the positive effect on their career of having performed managerial functions.

The development of these personal skills, which fall into four categories (learning technical skills, learning scientific skills, improving social networking, and increasing employment potential in the labour market), should be managed, used and catalysed to improve opportunities in the labour market for individuals working in high-energy-physics environments. The researchers who responded to the study also showed a certain amount of entrepreneurship, with a positive approach towards going to work for companies or towards creating their own company (~6%). Of those who would consider going to work for a company, about half are below the age of 55. These results should encourage further research into how best to foster learning and innovation in “big science” enterprises.

Dark-matter research arrives at the crossroads

There is overwhelming evidence that the universe contains dark matter made from unknown elementary particles. Astronomers discovered more than 75 years ago that spiral galaxies, such as the Milky Way, spin faster than allowed by the gravity of known kinds of matter. Since then there have been many more observations that point to the existence of this dark matter.

Gravitational lensing, for example, provides a unique probe of the distribution of luminous-plus-dark matter in individual galaxies, in clusters of galaxies and in the large-scale structure of the universe. The gravitational deflection of light depends only on the gravitational field between the emitter and the observer, and it is independent of the nature and state of the matter producing the field, so it yields by far the most precise determinations of mass in extragalactic astronomy. Gravitational lensing has established that, like spiral galaxies, elliptical galaxies are dominated by dark matter.
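
That independence from the state of the matter is explicit in Einstein’s deflection formula, α = 4GM/(c²b), which involves only the mass M and the impact parameter b. As a classic sanity check, a light ray grazing the Sun is deflected by about 1.75 arcsec:

```python
# Einstein's light-deflection formula: alpha = 4*G*M / (c^2 * b).
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg
R_SUN = 6.96e8     # solar radius (impact parameter for a grazing ray), m

alpha_rad = 4 * G * M_SUN / (C ** 2 * R_SUN)
print(f"{alpha_rad * 206265:.2f} arcsec")  # ~1.75 arcsec
```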

Strong evidence that most of the dark matter is non-baryonic in nature comes from the observed heights of the acoustic peaks in the angular power spectrum of the cosmic microwave background measured by the Wilkinson Microwave Anisotropy Probe, because the peaks are sensitive to the fraction of mass in baryons. It turns out that only about 4% of the mass of the universe is in baryons, whereas about 20% is in non-baryonic dark matter – a finding that is also in line with inferences from primordial nucleosynthesis.

A host of candidates

This leaves some pressing questions. What is the microscopic nature of this non-baryonic dark matter? Why is its mass fraction today about 20%? How dark is it? How cold is it? How stable is it?

Progress in finding the answers to such questions provided the focus for the 2008 DESY Theory Workshop, held from 29 September to 2 October. Organized by Manuel Drees of Bonn, it sought to combine results from a range of experiments and confront them with theoretical predictions. It is clear that the investigation of the microscopic nature of dark matter has recently entered a decisive phase. Experiments are being carried out around the globe to try to identify traces of the mysterious dark-matter particles. Since the different theoretical candidates appear to have quite distinctive signatures, there are good reasons to expect that from a combination of all of these efforts a common picture will materialize within the next decade.

Theoretical particle physicists have proposed a whole host of candidates for the constituents of non-baryonic dark matter, with fancy names such as axions, axinos, gravitinos, neutralinos and lightest Kaluza–Klein partners. The best-motivated of these occur in extensions of the Standard Model that have been proposed to solve other problems besides the dark-matter puzzle. The axion, for example, arose in extensions that aim to solve the strong CP problem. It later turned out to be a viable dark-matter candidate if its mass is in the micro-electron-volt range. Gravitinos and neutralinos, on the other hand, are the superpartners of the graviton and the neutral bosons, respectively. They arise in supersymmetric extensions of the Standard Model, which aim at a solution of the hierarchy problem and at a grand unification of the strong and electroweak interactions. In fact, neutralinos are natural candidates for dark matter because they have cross-sections of the order of electroweak interactions and their masses are expected to be of the order of the weak scale (i.e. around 100 GeV). As a result, their relic density from freeze-out in the early universe is just right to account for the observed amount of dark matter.
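
The statement that the relic density comes out “just right” is the so-called WIMP miracle. The textbook freeze-out estimate, Ωh² ≈ 3 × 10⁻²⁷ cm³ s⁻¹ / ⟨σv⟩, is a rough rule of thumb, quoted here for illustration rather than as a result from the workshop:

```python
# Rough freeze-out estimate: Omega * h^2 ~ 3e-27 cm^3 s^-1 / <sigma*v>.
def relic_density_h2(sigma_v_cm3_per_s):
    return 3e-27 / sigma_v_cm3_per_s

# A typical weak-scale annihilation cross-section lands near the observed ~0.1:
print(f"{relic_density_h2(3e-26):.2f}")  # ~0.10
```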

Neutralinos belong to the class of weakly interacting massive particles (WIMPs). Such particles seem to be more or less generic in extensions of the Standard Model at the tera-electron-volt scale, but their stability (or a long enough lifetime) has to be imposed. This is not necessary for super-weakly interacting massive particles (superWIMPs), such as sterile neutrinos, gravitinos, hidden-sector gauginos and the axino. For example, unstable but long-lived gravitinos in the 5–300 GeV mass range are viable candidates for dark matter and provide a consistent thermal history of the universe, including successful Big Bang nucleosynthesis.

Detecting dark matter

Owing to their relatively large elastic cross-sections with atomic nuclei, WIMPs such as neutralinos are good candidates for direct detection in the laboratory, yielding up to one event per day per 100 kg of target material. The expected WIMP signatures are nuclear recoils, which should occur uniformly throughout the detector volume at a rate that shows an annual modulation of a few per cent. Intriguingly, the DAMA experiment in the Gran Sasso National Laboratory has seen evidence for such an annual modulation.

However, there is some tension with other direct-detection experiments. Theoretical studies have revealed that an interpretation in terms of a low-mass (5–50 GeV) WIMP is marginally compatible with the current limits from other experiments. In contrast to DAMA, which looks just for scintillation light, most of the latter exploit at least two observables out of the set (phonons, charge, light) to reconstruct the nuclear recoil energy.

Many different techniques based on cryogenic detectors (e.g. the Cryogenic Dark Matter Search), noble liquids (e.g. the XENON Dark Matter Project) or even bubble chambers are currently employed to search for WIMPs via direct detection. Detectors with directional sensitivity (e.g. the Directional Recoil Identification From Tracks experiment) may not only have better signal-to-background discrimination but may also be capable of measuring the local dark-matter phase-space distribution. In summary, these direct experiments are currently probing some of the theoretically interesting regions for WIMP candidates. The next generation of experiments may enter the era of WIMP (astro)physics.

The axion is another dark-matter candidate for which there are ongoing direct-detection experiments. Both the Axion Dark Matter Experiment (ADMX) in the US and the Cosmic Axion Research with Rydberg Atoms in a Resonant Cavity (CARRACK) experiment in Japan exploit a cooled cavity inside a strong magnetic field to search for the stimulation of a cavity resonance from dark-matter axion–photon conversion in the microwave frequency region corresponding to the expected axion mass. While they differ in their detector technology – ADMX uses microwave telescope technology whereas CARRACK employs Rydberg-atom technology – both experiments are designed to cover the 1–10 μeV mass range. Indeed, if dark matter consists just of axions then it should soon be found in these experiments. The CERN Axion Solar Telescope, meanwhile, is looking for axions produced in the Sun.
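
The connection between the axion mass and the cavity tuning is simply ν = m_a c²/h, which shows why a 1–10 μeV axion corresponds to a microwave resonance. A one-line conversion:

```python
# Photon frequency corresponding to an axion of mass m_a: nu = m_a * c^2 / h.
H_EV_S = 4.1357e-15  # Planck constant in eV*s

def cavity_freq_ghz(mass_microev):
    return mass_microev * 1e-6 / H_EV_S / 1e9

print(f"{cavity_freq_ghz(1):.2f} GHz")   # ~0.24 GHz for a 1 micro-eV axion
print(f"{cavity_freq_ghz(10):.2f} GHz")  # ~2.4 GHz for a 10 micro-eV axion
```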

There are also, of course, possibilities for indirect detection. Dark matter may not be absolutely dark. In fact, in regions where the dark-matter density is high (e.g. in the Earth, in the Sun, near the galactic centre, in external galaxies), neutralinos or other WIMPs may annihilate to visible particle–antiparticle pairs and lead to signatures in gamma-ray, neutrino, positron and antiproton spectra. Moreover, superWIMPs (e.g. gravitinos) may also leave their traces in cosmic-ray spectra if they are not absolutely stable.

Interestingly, the Payload for Antimatter Matter Exploration and Light-Nuclei Astrophysics (PAMELA) satellite experiment recently observed an unexpected rise in the fraction of positrons at energies of 10–100 GeV, thereby confirming earlier observations by the High Energy Antimatter Telescope balloon experiment. In addition, the Advanced Thin Ionization Chamber balloon experiment has reported a further anomaly in the electron-plus-positron flux, which can be interpreted as the continuation of the PAMELA excess to about 800 GeV. The quantification of these excesses is still quite uncertain, not least because of relatively large systematic uncertainties. It is well established that they cannot be explained by the standard mechanism, namely the secondary production of positrons arising from collisions between cosmic-ray protons and the interstellar medium within our galaxy. However, a very conventional astrophysical source for them could be nearby pulsars.

On a more speculative level, these observations have inspired theorists to search for pure particle-physics models that accommodate all results. Generically, interpretations in terms of WIMP annihilation seem to be disfavoured, because they require a huge clumpiness of the Milky Way dark-matter halo, which is at variance with recent numerical simulations of the latter. This constraint is relaxed in superWIMP scenarios, where the positrons may be produced in the decay of dark-matter particles (e.g. gravitinos).

It is clear that one of the keys to understanding the origin of the excess in the positron fraction is the accurate, separate measurement of positron and electron fluxes, which can be done with further PAMELA data and with the Alpha Magnetic Spectrometer satellite experiment. Furthermore, distinguishing different interpretations of the observed excesses requires a multimessenger approach (i.e. to search for signatures in the radio range, synchrotron radiation, neutrinos, antiprotons and gamma rays).

Fortunately the Fermi Gamma-Ray Space Telescope is in orbit and taking data. Together with other cosmic-ray experiments it will probe interesting regions of parameter space in WIMP and superWIMP scenarios of dark matter.

Dark matter at colliders

Clearly, at colliders the existence of a dark-matter candidate can be inferred only indirectly from the apparent missing energy, associated with the dark-matter particles, in the final state of the collision. However, such a measurement can be made with precision and under controlled conditions. To extract the properties, such as the mass, of dark-matter particles, these final-state measurements have to be compared with predictions from theoretical models. In a supersymmetric extension of the Standard Model, for example, with the neutralino as the lightest superpartner, experiments at the LHC would search for signatures from the cascade decay of gluinos and squarks into gluons, quarks, leptons and neutralinos. This would show up as large missing transverse energy in events with some jets and leptons. The endpoints in kinematic distributions could then be used to determine the dark-matter candidate’s mass, which could be compared with the mass eventually determined by measurements of recoil energy in direct-detection experiments.
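
For the often-cited dilepton step of such a cascade, χ̃₂⁰ → l̃ l → χ̃₁⁰ l l, the endpoint of the dilepton invariant-mass spectrum has a standard closed form. The sketch below uses purely illustrative masses, not predictions of any model discussed here:

```python
from math import sqrt

def m_ll_edge(m_chi2, m_slepton, m_chi1):
    """Kinematic endpoint of the dilepton invariant-mass spectrum in the
    two-body cascade chi2 -> slepton + l -> chi1 + l + l (masses in GeV)."""
    return sqrt((m_chi2**2 - m_slepton**2) * (m_slepton**2 - m_chi1**2)) / m_slepton

# Illustrative (hypothetical) mass spectrum:
print(f"{m_ll_edge(180.0, 130.0, 100.0):.1f} GeV")  # ~79.6 GeV endpoint
```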

This complementarity between direct, indirect and collider searches for dark matter is essential. Although collider experiments might identify a dark-matter candidate and precisely measure its properties, they will not be able to distinguish a cosmologically stable particle from one that is long-lived but unstable. In turn, direct detection cannot tell definitively what kind of WIMP has been observed. Moreover, in many superWIMP dark-matter scenarios direct detection is impossible, while detection at the LHC may be feasible. For example, if the lightest superpartner is a gravitino (or hidden gaugino) and the next-to-lightest is a charged lepton, experiments at the LHC may search for the striking signature of a displaced vertex plus an ionizing track.

In many cases, however, precision measurements from a future electron–positron collider seem to be necessary to exploit fully the collider–cosmology–astrophysics synergy. In addition, “low-energy photon-collider” experiments – such as the Axion-Like Particle Search at DESY, the GammeV experiment at Fermilab and the Optical Search for QED vacuum birefringence, axions and photon regeneration at CERN, in which the interactions of intense laser beams with strong electromagnetic fields are probed – may give valuable insight into the existence of very light, axion-like dark-matter candidates.
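To sketch why such “light-shining-through-a-wall” set-ups are sensitive to these particles (the standard lowest-order estimate, not a number quoted by any of the experiments named above): a photon of energy ω traversing a transverse magnetic field B over a length L oscillates into an axion-like particle of mass $m_a$ and two-photon coupling g with probability

$$P(\gamma \to a) \;=\; \left(\frac{gBL}{2}\right)^2\left[\frac{\sin(qL/2)}{qL/2}\right]^2, \qquad q = \frac{m_a^2}{2\omega},$$

in natural units. Regenerating a photon behind the wall costs the same factor again, so the signal scales as $(gBL)^4$ – hence the appeal of strong accelerator magnets combined with intense laser beams.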

In summary, there is evidence for non-baryonic dark matter that is not made of any known elementary particle. We are still at the exploratory stage of determining its microscopic nature. Many ideas are currently being explored in theory and in experiment, and more will come. Nature has given us a few clues that we need to exploit. The data coming soon from accelerators, and from direct and indirect detection experiments, will be the final arbiter.

Relativity: A Very Short Introduction

by Russell Stannard, Oxford University Press. Paperback ISBN 9780199236220, £7.99.

CCboo1_03_09

In the series of Very Short Introductions by Oxford University Press there have been nuggets and non-nuggets. The book Relativity is definitely a nugget. We can all do the simple maths and use Pythagoras’s theorem, but I have always found it difficult – even from Albert Einstein’s popular little book – to gain a “more intuitive” understanding of relativity. Russell Stannard’s text is the best that I have read.

He begins with the familiar: simultaneity, constancy of the speed of light, the paradox of the twin astronauts and so on. In each case he goes straight to the heart of the phenomenon – and each time I felt that I came out with a deeper understanding and a better appreciation of how simple it all is. In this short work Stannard has collected all of the best analogies that I have come across, while also managing to keep the reader smiling with some tongue-in-cheek remarks. There are a number of mathematical expressions sprinkled throughout the text, and they are not beyond the abilities of the interested layperson. The drawings and formulae are good, with artwork that is vastly better than in some of the other volumes in the series.

However, OUP has still not got it entirely right. For example, the square-root symbol – important in this particular text – is rendered as a plain V. Weird. In all, this is a pleasant book to read. It reminds one of how strange reality really is and how difficult it is for us humans to make simple mental models. This book is to be recommended.

Cosmic Impressions: Traces of God in the Laws of Nature

by Walter Thirring, Templeton Foundation Press. Paperback ISBN 9781599471150, $19.95.

CCboo2_03_09

This is a translation from the German of Kosmische Impressionen: Gottes Spuren in den Naturgesetzen, so, in principle, I should have little to add to the excellent review by Herwig Schopper (CERN Courier March 2005 p48). The book is a presentation of the universe, its history and its laws, covering cosmology, physics, chemistry and biology. It describes the fantastic progress of our knowledge from the end of the 19th century to the present. Thirring’s point of view is that the structure of the universe is so beautiful, and the conditions of our existence on Earth so miraculously set, that it is difficult not to see the signs of a superior architect behind it all. Whether or not you agree with the author (I do), this volume is extremely informative for everybody. It also contains colourful accounts of the encounters between Thirring (not only a witness but an important player) and the great men who made these incredible changes to our views of the world. (For more details, see Schopper’s review.)

The book makes it clear to all, including atheists, that naive positivism à la Auguste Comte is dead. First, there is the probabilistic nature of quantum mechanics: take a uranium nucleus and you cannot predict whether it will decay tomorrow or in a million years. Even in purely classical mechanics, you cannot predict the evolution of a complex system beyond the “Lyapunov time”, however small the uncertainty in the initial conditions. Moreover, in nature you find spontaneously broken symmetries, which break in an unpredictable way. Many people, such as Murray Gell-Mann, think that the universe can be projected randomly onto certain states at random times. We are therefore far from the “clockmaker” God of Descartes. Despite all of this, however, the predictivity of physics has never been as fantastic as it is now: the calculated value of the magnetic moment of the electron, given to 12 digits, agrees perfectly with the experimental value.
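To put a formula behind the “Lyapunov time” remark (a standard illustration, not something taken from Thirring’s book): in a chaotic system, a small uncertainty δx(0) in the initial conditions grows as

$$\delta x(t) \;\sim\; \delta x(0)\,e^{t/\tau_L}, \qquad t_{\mathrm{pred}} \;\sim\; \tau_L \ln\frac{L}{\delta x(0)},$$

where $\tau_L$ is the Lyapunov time and L the scale at which the prediction becomes useless. Because the prediction horizon grows only logarithmically as δx(0) shrinks, no finite measurement precision pushes it arbitrarily far: halving the initial uncertainty buys a fixed increment of only $\tau_L \ln 2$.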

Like Schopper, I can only recommend this book. The author makes a considerable effort to avoid technicalities. As a scientist, I am not in a position to say whether someone without a scientific background could follow it, but I believe that it is ideally suited to engineers, especially accelerator engineers, who aren’t always aware of the beautiful endeavour to which they contribute so much.

The Thermodynamic Universe: Exploring the Limits of Physics

by B G Sidharth, World Scientific. Hardback ISBN 9789812812346, £29 ($58).

This text examines developments that are leading to a paradigm shift and a new horizon for physics, at a time when the underlying principle of reductionism is being questioned. It presents a new paradigm of fuzzy space–time, based on some 100 published journal papers and two recent books. This work correctly predicted epoch-turning observations – for example, that the universe is accelerating, driven by dark energy with a small cosmological constant – when the prevalent line of thinking was the exact opposite. Several other highlighted features, including a unified description of gravitation and electromagnetism via fluctuations, are presented as being in complete agreement with experiment.

Mémoires d’un Déraciné, Physicien, Citoyen du Monde

by Georges Charpak, Odile Jacob. Paperback ISBN 9782738121844, €23.

Eighty-five years and at least three lives’ worth of living unfold in the three sections of these memoirs by Georges Charpak, with contributions from François Vannucci, Roland Omnès and Richard L Garwin.

Uprooted as a child from his native town on the Polish–Ukrainian border during the anti-Semitic persecutions of the Russian civil war, Charpak narrates the tribulations of a central-European immigrant in the first part of the book, entitled “Déraciné” (Uprooted). This is the account of his incredible early destiny: from his arrival in France at the age of seven, through his brilliant secondary studies in Paris, to his engagement in the struggle against Fascism, his subsequent imprisonment and, finally, his survival of deportation to Dachau.

Charpak’s career as a physicist “started at age 24 and was more complex than that of most young French scientists”. This sets off the second part, “Physicien”, which is entirely devoted to physics and – through the account of his career – to a golden age of physics. After liberation, he first joined the Ecole des Mines (“not the right choice,” he says on p24) before finally moving to the laboratory of Frédéric Joliot-Curie at the Collège de France, where he specialized in particle detection. A detailed account follows of all the steps in the invention of the multiwire chamber, from Joliot-Curie’s lab through Charpak’s career at CERN to applications in medicine, complete with images, anecdotes and original documents. “One of the ambitions of the book,” Charpak writes on the back cover, “is to show the extraordinary construction of particle physics in the space of one century”. For this reason he asked Vannucci to write an in-depth but accessible explanation of the meaning of the Standard Model, which is included in this section.

Another objective of the book is to “throw light on the imminent threat to all the treasures accumulated by civilizations over thousands of years, if we do not change radically the way that mankind manages its material and spiritual richness, its creativity and the education we give to children”. The last part, “Citoyen du monde” (Citizen of the world), written with Richard Garwin, details another chapter of Charpak’s life, devoted to the teaching of science to the young and to the cause of total nuclear disarmament – “le danger toujours plus pressant … non seulement pour la paix mais pour la survie même de l’humanité” (the ever more pressing danger, not only for peace but for the very survival of mankind).

Personal anecdotes provide another enjoyable feature of the book. As Charpak says, he “did not hesitate to describe … his short-term dreams”, such as his research on fossil sound in ancient objects and his attempts to sell to Hollywood a comedy screenplay inspired by Dr Strangelove.
