
Perspectives for nuclear-physics research in Europe


In Vienna, Austria, in December 2001 the Nuclear Physics European Collaboration Committee (NuPECC) started to prepare a new long-range plan for nuclear physics in Europe. NuPECC’s goal was to produce “a self-contained document that reflects on the next five years and provides vision into the next 10-15 years”. The previous long-range plan had been published as a report, “Nuclear Physics in Europe: Highlights and Opportunities”, in December 1997.

NuPECC first defined the active areas of nuclear physics that were to be addressed. Working groups were formed, spanning all the subfields of nuclear physics and its applications: nuclear structure; phases of nuclear matter; quantum chromodynamics; nuclear physics in the universe; and fundamental interactions and applications. Convenors for each of these groups were appointed and two liaison members of NuPECC were assigned to each of them. The working groups were then asked to provide recommendations for possible future directions and a prioritized list of the facilities and instrumentation needed to address them.

The next step in the process was a town meeting, organized at GSI Darmstadt on 30 January – 1 February 2003, to discuss the long-range plan. Prior to this, the preliminary reports of the groups had been posted on the NuPECC website. The town meeting was well attended with around 300 participants, including many young scientists, and the following summarizes the general trends and exciting ideas about modern nuclear physics that were presented at the meeting and given in the report.

Progress in nuclear research

At a deeper level, nuclear physics is the physics of hadrons. Recent developments in lattice quantum chromodynamics (QCD) calculations have raised a great deal of interest in hadron spectroscopy. According to QCD, gluon-rich hadrons can be formed, as well as hybrid states combining quark and gluonic excitations. There is also interest in quark dynamics, since in hadrons the polarization of gluons and the orbital angular momentum of quarks play an important role, together with a large transverse quark polarization. The measurement of generalized parton distributions – generalizations of the usual distributions describing the momentum or helicity of the quarks in the nucleon – now receives much attention, as it will improve our knowledge of hadron structure. Quark confinement and phenomena in the non-perturbative regime of QCD will be addressed in the future.

Phase transitions of nuclear matter are being investigated in two regimes: at the Fermi energy, where a liquid-gas phase transition is expected, and at very high energies and/or densities, where a quark-gluon plasma (QGP) is expected. In the first regime, interesting isospin effects play a role in the formation of exotic isotopes; in the second, the deconfinement of quarks is expected at very high temperatures, and colour superconductivity at low temperatures and very high densities.


A long-term, fundamental goal of nuclear physics is to explain low-energy phenomena starting from QCD. As a first step, the connection could be made through QCD-motivated effective field theories, and this should go hand in hand with experimental investigations that test these models. Recent developments have revived interest in nuclear structure: besides new equipment and refined detection methods, it is now possible to use exotic beams of unstable nuclei, and the increase in computing capacity allows ab initio calculations with two- and three-body forces up to mass 12. Experimentally, the field of research can now be broadened from the roughly 300 stable nuclei to the approximately 6000 atomic nuclei that are predicted to exist. This means that a number of questions can be addressed, such as what happens at extreme neutron-to-proton (N/Z) ratios, at high excitation energy, at extreme angular momentum, or at very heavy mass – conditions considerably more extreme than those investigated so far. Phenomena to be addressed include neutron halo structures, super-heavy elements, new magic numbers, hyperdeformation and many other exotic forms of atomic nuclei.

In the past 20 years nuclear astrophysics has developed into an important subfield of nuclear physics. It is a truly interdisciplinary field, concentrating on primordial and stellar nucleosynthesis, stellar evolution, and the interpretation of cataclysmic stellar events such as novae and supernovae. It combines astronomical observation and astrophysical modelling with research into meteoritic anomalies, and with measurements and theory in nuclear physics. With the use of new methods, as well as the availability of radioactive-ion-beam (RIB) accelerators, astrophysically relevant nuclear reactions are already being measured. In future, this research will be intensified with the new generation of RIB facilities.

In the past, research on symmetries and fundamental interactions (and the physics beyond the Standard Model) has made large steps with the development of techniques that facilitate precision measurements. In this subfield, research on the properties of neutrinos (mass measurement), time-reversal and charge-parity violation (through measurements of electric-dipole moments of molecules, atoms and nucleons as well as correlations between electrons and neutrinos in ß-decay), and the determination of fundamental constants, is in progress.

Finally, there has been progress in the applications of nuclear-physics techniques and methods. These cross over into several disciplines, such as life sciences, medical technology, environmental studies, archaeology, future energy supplies, art, solid-state and atomic physics, and civilian safety.

Research facilities

Several new research facilities are now being developed or built. The most ambitious is the International Accelerator Facility for Beams of Ions and Antiprotons (IAFBIA) at GSI in Darmstadt (see IAFBIA box), which will be available for experiments after 2010. Nuclear-structure and related studies at extreme N/Z ratios require RIB facilities, which can be realized either by the in-flight fragmentation (IFF) technique, as adopted for the IAFBIA, or by the isotope-separation online (ISOL) method (see figure 1). In Europe there is a plan to build the European ISOL facility, EURISOL, which would be ready after 2013. Intermediate steps are the ISOL facilities already operational at CERN, GANIL and Louvain-la-Neuve, the REX-ISOLDE and SPIRAL2 upgrades, and the future facilities SPES in Legnaro and MAFF in Munich.


Recommendations

The first of NuPECC’s recommendations is to exploit fully the existing, competitive lepton, proton, stable-isotope and radioactive ion-beam facilities and instrumentation. In addition to their physics-research potential, they will serve as important training sites where major beam-production development and detector R&D can be performed over the next 5 to 10 years. In its previous long-range plan, NuPECC gave high priority to the ALICE experiment at CERN, which has an extensive programme to investigate the QGP within the large and active heavy-ion programme at the Large Hadron Collider (LHC). A huge European effort is already under way to build the ALICE detector in time for the LHC, and NuPECC strongly recommends its timely completion to allow early and full exploitation from the start of LHC operation.

Support of the university-based nuclear-physics groups, including their local infrastructure, is seen by NuPECC as essential for the success of the programmes at the present facilities and at future large-scale projects. Furthermore, NuPECC recommends that efforts should be taken to strengthen local theory groups in order to guarantee the development needed to address the challenging basic issues that exist or may arise from new experimental observations. NuPECC also recognizes the positive role played by the ECT* centre in Trento in nuclear theory, especially in its mission of strengthening unifying contacts between nuclear and hadron physics. In addition, NuPECC recommends that efforts to increase literacy in nuclear science among the general public be intensified.

Priorities for the future

The specific recommendations and priorities follow from the new experimental facilities and advanced instrumentation that have been proposed, or are under construction, to address the challenging basic questions posed by nuclear science. NuPECC supports, as the highest priority for a new construction project, the building of the IAFBIA. This international facility (see IAFBIA box) will provide new opportunities for research in the different subfields of nuclear science. Designed to produce high-intensity radioactive ion beams using the IFF technique, the facility is highly competitive, surpassing in certain respects similar facilities that are planned or under construction in the US and Japan. With the experimental equipment available at low and high energies, and with the New Experimental Storage Ring with its internal targets and electron collider ring, the facility will be a world leader in nuclear-structure and nuclear-astrophysics research, in particular with short-lived exotic nuclei far from the valley of stability. The high-energy, high-intensity stable heavy-ion beams will facilitate the exploration of compressed baryonic matter with new penetrating probes. The high-quality cooled antiproton beams in the High-Energy Storage Ring, in conjunction with the planned detector system, PANDA, will provide the opportunity to search for new hadron states predicted by QCD, and to explore the interactions of charmed hadrons in the nuclear medium. In short, this facility is broadly supported, since it will provide almost all fields of nuclear science with new research opportunities.

After the construction of the IAFBIA, NuPECC recommends that the highest priority be the construction of the advanced ISOL facility, EURISOL. The ISOL technique for producing radioactive beams is clearly complementary to the IFF method. First-generation ISOL-based facilities have produced their first results and have been shown to work convincingly. The next-generation ISOL-based RIB facility, EURISOL, aims to increase, beyond 2013, the variety of radioactive beams and their intensities by orders of magnitude over what is available at present, for scientific disciplines including nuclear physics, nuclear astrophysics and fundamental interactions. EURISOL will employ a high-power (several MW) proton/deuteron (p/d) driver accelerator. Many possible projects, such as a neutrino factory, an antiproton facility, a muon factory and a neutron spallation source, could benefit from such a p/d driver, and synergies with closely and less closely related fields of science are abundant. Considering the wide interest in such an accelerator, NuPECC proposes joining with other interested communities to carry out the research and technological development (RTD) and design work necessary to realize the high-power p/d driver in the near future.

NuPECC also gives a high priority to the installation at the Gran Sasso underground laboratory of a compact, high-current 5 MV accelerator for light ions, equipped with a high-efficiency 4π array of germanium detectors. Such a facility will enhance the uniqueness of the present facility at Gran Sasso, and its potential to measure astrophysically important reactions down to relevant stellar energies.

On a longer timescale, the full exploration of non-perturbative QCD – for example, unravelling hadron structure and performing precision tests of various QCD predictions – will require a high-intensity, high-energy lepton-scattering facility. NuPECC considers physics with a high-luminosity multi-GeV lepton-scattering facility to be very interesting and of high scientific potential. However, the construction of such a facility would require worldwide collaboration, so NuPECC recommends that the community pursue this research from an international perspective, incorporating it into existing or planned large-scale facilities.

To exploit the current and future facilities fully and most efficiently, advanced instrumentation and detection equipment will be required to carry on the various programmes. The AGATA project for the construction of a 4π array of highly segmented germanium detectors for γ-ray tracking will benefit research programmes in the subfields of nuclear science at the various facilities in Europe. NuPECC gives its full support to the construction of AGATA, and recommends that the R&D phase be pursued with vigour.

• For more information about NuPECC, see www.nupecc.org.

The importance of funding outreach


Science and technology play an increasingly important role in our everyday lives, and many of life’s decisions now depend on some sort of scientific or technical knowledge. At the same time, advances in modern science occur quickly as each subject evolves and entirely new subjects are created, so it is often difficult for the general public and for teachers to keep up with scientific discoveries and technological innovations. However, science can be made more accessible and interesting to students, teachers and the public if they are exposed to the exciting ideas and discoveries of the latest research, for example in the field of high-energy particle physics.

Research in particle physics involves advanced technology, such as the large-scale use of superconductivity, precision particle detectors, and state-of-the-art electronics and computing systems. The technology of particle accelerators and detectors can also be applied to medicine and many other areas of science and industry, bringing alive the “appliance of science” to everyday life. Moreover, research has led to advances in information technology, such as the World Wide Web, which can bring about a “high tech” approach to learning about science. Aspects of classical physics, such as electromagnetism, optics and kinematics, can also be given a new lease of life through examples from modern physics, as compared with traditional teaching.

It is now generally agreed that education and awareness in science must be strengthened in modern society. Indeed, during the past few years increasing efforts have been made to improve awareness among the general public – especially young people in schools – of the importance of natural science to everyone. Scientific outreach, which promotes awareness and an appreciation of current research, has become an essential task for the research community and for many scientists.

As a result of an increased awareness of the importance of outreach activities, the European particle-physics community created the European Particle Physics Outreach Group (EPOG) in 1997 to promote outreach activities in particle physics. EPOG members represent the particle-physics communities of the 20 member states of CERN and, more recently, the US, together with the major laboratories of CERN, DESY and INFN. The group has received its mandate from the High Energy Particle Physics (HEPP) division of the European Physical Society (EPS) and the European Committee for Future Accelerators (ECFA). EPOG aims to help make scientific results and discoveries accessible to schools and the general public, and to introduce modern science into the school curricula.

Since its inception, the members of EPOG have both learned from each other and worked together on joint activities. There are many particle physicists active within their own countries who are working on a variety of initiatives, such as the development of new teaching materials, the translation of materials, workshops and masterclasses for both students and teachers, and visits to CERN. This work is often undertaken on a voluntary basis, with little or no official funding, and is dependent on the goodwill of the hardworking contributors and their institutions.

To be really successful though, outreach activities have to be done in a professional manner. Leading scientists who have a specialist knowledge in their subjects can form a powerful team with educators and those familiar with modern techniques in disseminating information to large groups of people. However, as we are competing with television and other leisure pursuits, outreach activities also require proper funding to be able to produce an attractive and engaging image of the natural sciences.

For this reason EPOG, together with ECFA and the EPS HEPP division, has written to a number of science research councils and other funding bodies in various countries to encourage them to recognize scientific outreach as an important and natural part of the research process, and to make financing available to the scientists for professional outreach activities. As we say in the letter, we realize that in some countries the importance of scientific outreach activities has already been recognized and is regarded as a natural part of the research activity. A particularly good example is the awareness in the US, which has resulted in organized funding. In many other countries, however, this is still not the case, and we believe that proper funding is crucial for an increased interest in and awareness of science and technology.

In summary, it is important that outreach activities are taken seriously by the bodies that fund our research. They should be recognized as a natural and logical part of research, and as an important link between research and society. With appropriate funding we could have the opportunity to make our mark and, who knows, to make a real difference.

Le miroir aux neutrinos (The Neutrino Mirror)

by François Vannucci, Odile Jacob. ISBN 2738113311, €23.50.


Neutrinos have excited scientists since 1930 and have allowed some important discoveries: Gargamelle’s 1973 observation of neutral currents in fact constituted the first manifestation of the Z boson, and as such marked the experimental foundation of the Standard Model. More recently, the beautiful phenomenon of neutrino oscillations has demonstrated that the Standard Model needs to be enlarged to account for neutrino masses. In a nutshell, neutrinos are in the spotlight.

For this reason it is very pleasant to see one of our colleagues undertake to communicate to a broad public his enthusiasm and excitement for these particles that are so hard to detect. The “mirror” through which Vannucci invites us to discover these neutrinos is, in the end, that of his own personality. The reader finds a typically French character, profoundly cultured, who revels in the company of literary quotes that mirror his thoughts and enrich them with a touch of melancholic beauty. Marcel Proust and Oscar Wilde top his list of favourite authors, which extends from Saint Augustine to Daniel Pennac, via Jean-Paul Sartre and the medical dictionary. Sometimes a schoolboy’s wink, and often a sensuous shiver, express themselves through these quotations – testimony to the fact that science speaks not only to the brain but also to the heart. I am not sure that I have grasped what these quotations are supposed to explain, but they certainly carry a form of emotion.

The book tells the story of neutrinos, at a level that is meant to be accessible to pupils in the final years of high school (15-18 years old), as well as scientifically cultivated adults. It begins with a discussion of perception and detection, first of ordinary objects and then of particles. Then we arrive at Pauli and his “radioactive ladies and gentlemen”, followed immediately by UA1 and the discovery of the W. (Sartre and Le Verrier are quoted…but no word of Carlo Rubbia. This will soothe the feelings of all those who felt they should have appeared.) Then we go back to the experiments to measure the neutrino mass followed by neutrinoless double-beta decay, and the detection of the first neutrino interactions by Fred Reines. As one can see, the experiments that have established the properties of neutrinos are listed thematically and not necessarily historically, something that I appreciated.

With occasional irony towards his colleagues (or himself?), Vannucci takes us around the experiments that made history in neutrino physics; those that were right and those that were wrong, those that made us understand and those that got us confused. This is followed by a discussion on uncertainties and the scientific method. I am not sure I agree fully when what we don’t know yet but are striving to know and will hopefully understand (“the big bang cannot be considered a physical event”), is compared with medieval legends (“angels, archangels and cherubim of the middle ages”). However, do read carefully and you will find the definition of the “miroir aux alouettes”, which inspired the title of the book and is taken from a quotation in…a dictionary.

It is not obvious for whom this book is best suited. For whom would I buy it? It seems more for our fathers – and mothers – or our colleagues than for teenagers, who may be discouraged by the unlikely mix of literature and science.

The Global Approach to Quantum Field Theory

by Bryce DeWitt, Oxford University Press (vols I and II). Hardback ISBN 0198510934, £115 ($230).


It is difficult to describe, or even summarize, the huge amount of information contained in this two-volume set. Quantum field theory (QFT) is the most basic language in which to express the fundamental laws of nature. It is a difficult language to learn, not only because of its technical intricacies but also because it contains so many conceptual riddles – even more so when the theory is considered in the presence of a gravitational background. Applied field-theory techniques for concrete computations of cross-sections and decay rates are scarce in this book, probably because they are adequately explained in many other texts. The driving force of these volumes is to provide, from the beginning, a manifestly relativistically invariant construction of QFT.

Early in the book we come across objects such as Jacobi fields, Peierls brackets (as a replacement for Poisson brackets), the measurement problem, Schwinger’s variational principle and the Feynman path integral, which form the basis of much that follows. One advantage of the global approach is that it can be formulated in the presence of gravitational fields. Various loose ends in standard expositions of QFT are clearly tied up in this book, and one finds plenty of jewels throughout: for instance, a thorough analysis of the measurement problem in quantum mechanics and QFT, something that is hard to find elsewhere. The treatment of symmetries is unusual: DeWitt introduces local (gauge) symmetries early on, while global symmetries follow at the end as a residue or bonus. This is a very modern point of view that is spelt out fully in the book. In the Standard Model, for example, the global symmetry B-L (baryon minus lepton number) appears only after we consider the most general renormalizable Lagrangian consistent with the underlying gauge symmetries. In most modern approaches to the unification of the fundamental forces, global symmetries are quite accidental; string theory is an extreme example, where all symmetries are related to gauge symmetries.

There are many difficult and elaborate technical areas of QFT that are very well explained in the book, such as heat kernel expansions, quantization of gauge theories, quantization in the presence of gravity and so on. There are also some conceptually difficult and profound questions that DeWitt addresses head on with authority and clarity, including the measurement problem mentioned previously and the Everett interpretation of quantum mechanics and its implications in quantum cosmology. There is also a cogent and impressive study of QFT in the presence of black holes, their Hawking emission, the final-state problem for quantum black holes and a long etcetera.

The book’s presentation is very impressive. Conceptual problems are elegantly exhibited and there is an inner coherent logic of exposition that could only come from someone who had long and deeply reflected on the subject, and made important contributions to it. It should be said, however, that the book is not for the faint hearted. The level is consistently high throughout its 1042 pages. Nonetheless it does provide a deep, uncompromising review of the subject, with both its bright and dark sides clearly exposed. One can read towards the end of the preface: “The book is in no sense a reference book in quantum field theory and its applications to particle physics…”. I agree with the second statement but strongly disagree with the first.

Das große Stephen Hawking Lesebuch, Leben und Werk (The Big Stephen Hawking Reader)

by Hubert Mania (ed.), Rowohlt Verlag. Hardback ISBN 3498044885, €17.90.


The Big Stephen Hawking Reader includes excerpts from books written by Hawking, as well as information about his life and work. This naturally divides the book into two parts: the first half is a short biography of Hawking interspersed with sections explaining the basic physics of his work. In this way it not only introduces Hawking himself, but also his thoughts and ideas.

Mania admits in the prologue that he wrote the biography from a “respectful distance”, honouring Hawking’s wish to be remembered for his work and not his “involuntary presence in the gossip columns”. Because of this, Mania sometimes leaves out things that could shed a less favourable light on Hawking. For example, Hawking’s treatment of his first wife is only mentioned very briefly. Nevertheless there are some nice anecdotes about Hawking, such as when he was thinking about A Brief History of Time. “If he was going to neglect his research to write a popular book, then it should be profitable for him.”

The second half of the book is made up of excerpts from A Brief History of Time, The Illustrated A Brief History of Time and Einstein’s Dream. The chapters are well chosen and understandable with the help of Mania’s comments.

Even if Mania’s book is sometimes a little sketchy, I enjoyed reading it and would recommend it to anyone who wants a short introduction to Stephen Hawking’s life and work – and it whets the appetite for more books about and by this well known scientist.

Interactive Quantum Mechanics

by Siegmund Brandt, Hans Dieter Dahmen and Tilo Stroh, Springer-Verlag. Hardback ISBN 0387002316, €69.95 (£54.00, $69.95).


“Physical intuition” is a precious commodity for all physicists. Richard Feynman, when asked once what his intuition was concerning a certain problem, is said to have replied that he didn’t have any because he hadn’t done the calculation yet. Common sense is frequently a poor guide, even in the classical domain, but there our intuition can be built up with the help of reasoned interpretations of phenomena we can experience directly, and by the performance of many relatively simple and realistic calculations. Gaining intuition about the quantum world is much harder: we have little, if any, direct perception of it, and few realistic problems are mathematically easy to solve. Thus, students have a hard time “thinking physically” when faced with quantum problems.

Surely computers ought to be able to help. Quite complicated problems can be quickly solved numerically, and – most importantly – the results can be presented in a variety of graphical forms. Indeed, several recent undergraduate texts on quantum mechanics have included disks demonstrating the solutions of standard problems. These generally have a modest capability for the student to “play with the parameters”, but there has been nothing more radically interactive. This book is the first (so far as I know) to fill this gap.
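To give a flavour of the kind of calculation such teaching software automates (this sketch is illustrative only and is not taken from the book or from the IQ program), the bound states of a particle in an infinite square well can be found numerically by diagonalizing a finite-difference Hamiltonian:

```python
# Illustrative sketch: solve the 1D time-independent Schroedinger equation
# for an infinite square well by finite differences (units hbar = m = 1,
# well width L = 1). Not code from INTERQUANTA/IQ.
import numpy as np

N = 500                       # number of interior grid points
L = 1.0
dx = L / (N + 1)

# Kinetic-energy operator -(1/2) d^2/dx^2 discretized as a tridiagonal
# matrix; the wavefunction vanishes at the well walls.
main = np.full(N, 1.0 / dx**2)
off = np.full(N - 1, -0.5 / dx**2)
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

energies, states = np.linalg.eigh(H)   # eigenvalues in ascending order

# The exact eigenvalues are E_n = (n * pi)^2 / 2 for n = 1, 2, ...
E1_exact = np.pi**2 / 2
print(energies[0], E1_exact)  # numerical vs exact ground-state energy
```

A program such as IQ goes well beyond this by letting the user vary the potential and grid interactively and by rendering the resulting wavefunctions graphically.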

In fact, it might be more accurate to describe it as a computer program on a CD, accompanied by extended notes, rather than as a book accompanied by a CD. The program, called “INTERQUANTA” or IQ for short, has a self-explanatory user interface written in Java. It is easy to install and simple to work with – the instructions are even suitable for computer illiterates like myself. It can be used passively, to watch (and listen to) demonstrations that illustrate the main points in the text, but in the second, interactive mode the user is offered considerable freedom in designing the problems to be solved and the ways in which the answers may be displayed. As the authors put it, users can enter a “computer laboratory in quantum mechanics”.

Eight physics topics are treated in as many chapters: free-particle motion, bound states and scattering, first in one and then in three dimensions; two-particle systems in one dimension; and special functions of mathematical physics. Each chapter begins with a section called “Physical Concepts”, in which the relevant concepts and formulae are assembled without proof. Each section of the text is carefully keyed to a corresponding part of IQ, and the graphical outputs are well designed and easy to read. More than 300 numerical exercises are included to stimulate the reader’s exploration, and many contain useful prompts encouraging the reader to suggest a physical explanation for particular results. A final chapter contains hints for the solution of some exercises, and an appendix provides a systematic guide to IQ.

IQ contains much useful material, and the authors are to be congratulated on having produced something rather novel that is so user friendly. But I believe its value would be greatly enhanced if the range of topics were to be significantly extended. For example, all the presentations are static, yet there are many fascinating and important time-dependent phenomena in quantum theory for which a “movie” would be a valuable aid to understanding. And it is a pity that the whole vital area of perturbation theory is omitted, where there is ample scope for numerical instruction. A program that included topics such as these would surely be a major resource for both students and teachers.

Chercheurs entre rêve et réalité (Scientists at the rim of reality)

a film by Samy Brunett (in French or English versions), Blue in Green Productions. DVD or VHS PAL €20.00. (Available directly from samy.brunett@village.uunet.be.)


Bringing the fundamental physics research of today, or of the last 30-40 years, within the reach of the general public is a very difficult task. It is verging on the impossible, to be perfectly honest, as it requires some prior general scientific knowledge on the part of the public and a great command of the subject on the part of the author(s). Having said that, the difficulty of the task does not imply that it should not be attempted, in fact quite the reverse. And this is what Samy Brunett does in his DVD on the theory of everything.

The approach chosen for Brunett’s DVD is to use interviews and discussions with a number of young and not-so-young physicists who are working for or at CERN, some of whom are well known and some of whom are not (in any case, the general public is unlikely to tell the difference). These physicists are fairly representative of their profession, which is already a point in the DVD’s favour, as they all speak with enthusiasm and passion.

The interviews are preceded by a number of computer-generated images, not all of which are entirely appropriate to the subject matter, which is, quite simply, the origins of the universe – a different kettle of fish altogether from the famous magic potion of Asterix the Gaul! Once the introduction is over, however, we go on to meet the actual people working in physics. Those of us who recognize it may lament the rather drab setting of the CERN cafeteria, but the sentiments are well expressed and quite convincing, whether uttered by young physicists or by the leading lights of the field.

Of course from a single viewing of this DVD alone, the “man in the street” is not going to aspire to understand what “we” mean today by the theory of everything, extra dimensions, the very early universe or particle mass, especially since the links between the different subjects are not always clear. However, its great merit – and certainly not the only one – is that it has been made and that it allows the viewer to get an idea of today’s leading figures in fundamental research.

In the current wave of commemorative events (the recent centenaries of the discovery of radioactivity and of Einstein’s first articles, for example, or CERN’s 50th anniversary this year), this kind of modern technology-based publication can do nothing but good for the rather stale image of this discipline of ours that is so difficult to popularize.

Measurement and Control of Charged Particle Beams

by M G Minty and F Zimmermann, Springer. Hardback ISBN 3540441875, €74.85 (£54.00, $79.95).


This is a specialist book written primarily for the high-energy particle-accelerator community, in which Minty and Zimmermann present a contemporary view of charged-particle beam measurement and control in high-energy physics (HEP) machines. With an eye on the next generation of such machines, the authors cover, in some detail, the pioneering work being carried out around the world on electron-positron linear colliders.

The subject matter and references are laudably taken from worldwide resources. The references are given in abundance and the authors have provided an admirable service by trawling through the ever-more voluminous proceedings of conferences and schools to list the key papers. There are 172 figures, which are frequently of “live” examples taken from the world’s foremost HEP laboratories, and the authors have also taken care to expand the theory in the more advanced or less well known areas. Each chapter is backed up by exercises with solutions that provide the authors with a useful vehicle for more theoretical explanations and alternative views that could not be conveniently integrated into the text. Newcomers to HEP machines, however, should heed the warning on page 5 that the reader is expected to know basic optics and, one might add, advanced applied mathematics as well.

The experienced reader can omit the introductory chapter 1, while newcomers would be better served by building their knowledge through the more basic references given by the authors. Thereafter, the book reads smoothly, starting with single-particle optics and moving progressively through emittance, photoinjectors, collimation, longitudinal optics, longitudinal manipulation, injection and extraction, polarization and cooling. In general, the authors start by reviewing how to measure the parameters in a particular category and then continue with how to control those parameters.

Chapter 2 starts with the measurement of transverse optical parameters. Many of the techniques described are relatively recent and depend on the tremendous advances that have been made in digital electronics and online computing power. The use of “multiknobs” is described. This concept has existed for many decades in the form of tune and chromaticity control using two independent corrector families, but it can be greatly extended using matrix techniques for quasi-linear systems and powerful matching routines for the more non-linear cases.
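To make the multiknob idea concrete, here is a minimal sketch, not taken from the book, of the classic two-knob case: a hypothetical measured 2 × 2 response matrix R links two corrector-family strengths to the observable changes (tune, chromaticity), and inverting R yields settings that move one observable while leaving the other fixed. The matrix values below are invented for illustration.

```python
def invert_2x2(R):
    """Invert a 2x2 matrix [[a, b], [c, d]] via the cofactor formula."""
    (a, b), (c, d) = R
    det = a * d - b * c
    if abs(det) < 1e-12:
        raise ValueError("response matrix is (near-)singular")
    return [[d / det, -b / det],
            [-c / det, a / det]]

def knob_settings(R, target):
    """Corrector strengths producing the target (dtune, dchroma) changes."""
    Rinv = invert_2x2(R)
    return [Rinv[0][0] * target[0] + Rinv[0][1] * target[1],
            Rinv[1][0] * target[0] + Rinv[1][1] * target[1]]

# Hypothetical measured response of (tune, chromaticity) to the two families.
R = [[0.8, 0.3],
     [0.2, 0.9]]

# A "tune knob": shift the tune by 0.01 while holding chromaticity constant.
dk = knob_settings(R, [0.01, 0.0])
```

The same logic, with the 2 × 2 inverse replaced by a pseudo-inverse of a larger measured response matrix, is the quasi-linear extension the chapter describes; strongly non-linear cases need iterative matching instead.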

Chapter 3 addresses the important subject of closed-orbit correction, where the reader will be brought up to date with jargon such as “corrector ironing” (page 87). This chapter also includes newer topics such as wake-field bumps, dispersion-free steering, orbit feedback and dynamic orbits excited by an alternating-current dipole. Chapter 4 deals with the difficult task of emittance measurement and tackles both the transverse and longitudinal planes, bringing in equilibrium emittances and the control of damping partition numbers. The next chapter briefly breaks the mould of the earlier ones by reviewing low-emittance photoinjectors and the production of flat beams using a solenoid, which together are of great importance to linear colliders.

Chapter 6 takes up collimation, but with only seven pages the reader sees relatively little of this critical subject. Collimation is important in low-energy high-intensity machines, high-energy superconducting machines and in electron-positron linear colliders. In each of these cases the problems and parameters are different. The collimation proposals for linear colliders would have fitted well into the context of this book, as the authors are clearly preparing for the next generation, while high-efficiency collimation for machines like the LHC is arguably an even more important topic that could have been included.

The book then returns to the basic mould with excellent accounts of the measurement of longitudinal optics parameters in chapter 7, followed logically by the manipulation of the longitudinal phase space in chapter 8. One small disappointment is that the tomographic measurement of longitudinal phase space, although mentioned in one of the examples and referenced, is not treated in a separate section as a diagnostic tool in its own right.

Chapter 9 could arguably be more suited to a book on lattice design and contains somewhat surprising excursions into septum and kicker magnet design. However, the reader will no doubt find extraction by resonance islands and bent crystals highly interesting. Chapter 10 on polarization fills a gap in the literature and the authors have accordingly paid more attention to theory. The practicalities of the harmonic correction of depolarizing resonances, adiabatic spin flipping, tune jumps and Siberian snakes of complete and partial types are all addressed. The final chapter describes the fascinating topics of stochastic cooling, electron cooling, laser cooling, ionization cooling, crystal beams and beam echoes, many of which merit their own monograph.

In summary, this book is a very welcome and valuable addition to the accelerator literature. As noted by the authors, there is relatively little material in the book specifically for low-energy machines, but industrial users may still find it useful to read the book and adapt or develop the ideas rather than apply them directly.

Supernovae and Gamma Ray Bursters

by Kurt Weiler (ed), Springer. Hardback ISBN 3540440534, €89.95 (£69.00, $115.80).


The association of gamma-ray bursts (GRBs) with supernova (SN) explosions has been suspected since the discovery of GRBs by the Vela satellites in 1967. However, observational evidence for a GRB-SN association was first found accidentally after the discovery of GRB afterglows 30 years later. The prompt search for an optical afterglow of GRB980425 led to the discovery, two days after the burst, of a relatively nearby supernova, SN1998bw, whose sky position and estimated explosion time were consistent with those of GRB980425. The physical association between them suggested that GRB980425 was produced by a highly relativistic and narrowly collimated jet viewed off axis and ejected by SN1998bw, which appeared to be unusually energetic for a supernova because it was viewed near axis.

These conclusions were not immediately reached by the majority of the GRB and SN communities, who were more accustomed to spherical models of SN explosions and GRB “fireballs”. So the above evidence for a GRB-SN association was at first dismissed as being either an accidental sky coincidence between a distant GRB and a close SN, or as a physical association between a new type of faint GRB (see p293 in the book) and a new type of SN, much more energetic than ordinary supernovae (SNe) and dubbed “hypernovae” (see pp243-281).

These interpretations began to erode, however, as observational data on optical afterglows of other GRBs accumulated. The data indicated that GRBs take place in star-forming regions in host galaxies, and their afterglows, in particular those of the relatively close GRBs, show evidence that long-duration GRBs are produced by jetted ejecta in SN explosions akin to SN1998bw, and not in fireballs produced by the merger of neutron stars in binary systems due to gravitational-wave emission. But it was the dramatic spectroscopic discovery on 8 April 2003 of SN2003dh in the late afterglow of GRB030329 that convinced the majority of the GRB and SN communities that ordinary, long-duration GRBs are produced in SN explosions akin to SN1998bw.

The book Supernovae and Gamma Ray Bursters edited by Kurt Weiler, an expert on radio SNe, appeared shortly before the discovery of GRB030329 and SN2003dh. It contains a collection of contributions on SNe and GRBs. Many of the contributions are very informative, well written and satisfy the stated aim of Springer’s Lecture Notes in Physics: “intended for broader readership, especially graduate students and non-specialist researchers wishing to familiarize themselves with the topic concerned.”

The first half of the book is devoted to SNe and includes contributions such as “Supernova Rates” by Enrico Cappellaro and “Measuring Cosmology with Supernovae” by Saul Perlmutter and Brian Schmidt. The second half is devoted to GRBs and includes contributions such as “Observational Properties of Cosmic GRBs” by Kevin Hurley, “X-ray Observations of GRB Afterglows” by Filippo Frontera and “Optical Observations of GRB Afterglows” by Elena Pian.

However, the book as a whole is unbalanced, both in its coverage of SNe and GRBs and in its coverage of the possible GRB-SN association, which presumably was its main aim. The first half covers in great detail (seven out of ten chapters) the fireworks from the interaction of the SN ejecta with the SN environment, but lacks a detailed discussion of our current knowledge of the different SN progenitors, the mechanisms that explode them and the compact remnants that are left over. It is needless to emphasize the importance and relevance of these subjects to the GRB-SN association. The second part of the book, which is devoted to GRBs and their afterglows, focuses mainly (four chapters extending over 100 pages) on the observations of GRB afterglows, with only a single chapter (16 pages long) on the prompt gamma-ray emission. Our current “theoretical understanding” of GRBs is summarized in a single chapter, which is exclusively devoted to the presentation of the party line – the fireball model – as if it were a dogma. It does not discuss the model’s severe problems, nor possible future tests of the model. It does not even mention alternative models, such as the “Cannonball Model”, which is ab initio based on a GRB-SN association, is falsifiable, and is much more predictive and successful than the fireball models.

In summary, the book contains some useful summaries of observational data on SNe and GRBs, but sheds no light on the production mechanism of SNe and GRBs, nor on the GRB-SN association.

The Millennium Problems

by Keith Devlin, Granta Books. Hardback ISBN 1862076863, £20.00 ($29.95).


On 24 May 2000 in a lecture hall at the Collège de France in Paris, Michael Atiyah from the UK and John Tate from the US announced that a $1 million prize would be granted to those who first solved one of the seven most difficult open problems in mathematics. These are known as the “Millennium Problems”, and the whole idea was an initiative of the Clay Mathematics Institute (CMI), which was established one year earlier by magnate Landon Clay. The list of problems was selected by a committee of top mathematicians, including – along with Atiyah and Tate – Arthur Jaffe, the current director of the CMI, Alain Connes, Andrew Wiles and Edward Witten, the only physicist, who is also a Fields medallist.

One hundred years earlier, also in Paris, David Hilbert had given his famous address laying out an agenda for the mathematics of the 20th century. He proposed a total of 23 problems; a few turned out to be simpler than anticipated, or were too imprecise to admit a definite answer, but most were genuine, difficult and important problems that brought instant fame and glory (if not necessarily wealth) to those who solved them.

Some of the differences between the two sets of problems should be mentioned. While Hilbert’s set provided a guideline for mathematics research, the Millennium Problems provide a description of the current frontier of knowledge in the subject. The other important difference is that among the CMI set, two are inspired by deep physics problems: fluid dynamics and the structure of gauge theories.

In this book, Keith Devlin, a well known mathematician who writes excellent books and articles for a lay audience, takes up the daunting task of explaining these problems as best as can be done to an audience assumed to have no mathematical sophistication beyond the high-school curriculum – and all this in only about 200 pages! Although such an ambitious aim is nigh on surreal, the results are quite satisfactory. The book is able to communicate to a large extent the depth and importance of the problems, and to give a glimpse of the deep elation whoever solves them will feel. For anyone involved in research, the $1 million prize is almost beside the point.

The author chooses to present the problems from the “simplest” to the most arcane. The first is the Riemann hypothesis. Deeply related to the distribution of prime numbers, this is the only one of the problems proposed by Hilbert that has not been settled. Technically, one needs to find the location of the zeros of a certain function (Riemann’s zeta function). If proven true, there are literally hundreds of important results in number theory that would follow, and it is quite likely that a poll among mathematicians would identify this as the most important open problem in mathematics.
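As a purely illustrative aside, not part of Devlin’s text: the zeta function is easy to approach numerically where its defining series converges, for real s > 1, even though the hypothesis itself concerns zeros of the analytic continuation on the critical line Re(s) = 1/2, which this simple sum cannot reach. A minimal sketch recovering Euler’s classic value ζ(2) = π²/6:

```python
import math

def zeta_partial(s, terms=100_000):
    """Partial sum of the Dirichlet series zeta(s) = sum 1/n^s, valid for s > 1.

    The truncation error is roughly 1/terms, so this is only a crude
    illustration of the function's behaviour on the real axis.
    """
    return sum(1.0 / n**s for n in range(1, terms + 1))

approx = zeta_partial(2.0)
exact = math.pi**2 / 6  # Euler, 1735
```

Locating the nontrivial zeros, by contrast, requires analytically continuing ζ to the whole complex plane, which is where the genuine difficulty of the problem begins.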

The second problem has a physics flavour to it. All the basic interactions in the Standard Model of particle physics are gauge interactions. If we consider the idealized situation of a pure gauge theory, we would like to have a mathematically sound proof that the quantum theory exhibits confinement, and that the mass of the first excited state is definitely positive (there is a mass gap). Most physicists would agree with these properties; however, a real proof may provide completely new methods in quantum field theory that may bring a revolution similar to the invention of calculus by Leibniz and Newton.

The third problem is related to computational complexity, where one can ask, among those functions or propositions that can be computed, which are “easy” and which are “hard”. Problems that can be solved in polynomial time with respect to the length of the input (in bits) are assigned to complexity class P, while the class NP contains problems whose proposed solutions can be checked in polynomial time but which are considered hard because, so far, every known algorithm for solving them requires exponential time. Among the latter, one of the most famous is the travelling-salesman problem. Nobody knows whether the classes P and NP are equivalent. This is a central problem in computational theory and its resolution may have far-reaching technological consequences.
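To see why the travelling-salesman problem is the standard poster child for NP, consider the only approach guaranteed to find the shortest tour that anyone knows: try every ordering of the cities. A minimal sketch (the toy distance matrix is invented for illustration):

```python
from itertools import permutations

def tsp_brute_force(dist):
    """Exact travelling-salesman tour by trying all (n-1)! orderings.

    dist is a symmetric n x n distance matrix; the tour starts and ends
    at city 0. The factorial running time is precisely why the problem
    sits in the 'hard' camp: checking that a given tour has length L is
    fast, but no known algorithm finds the best tour in polynomial time.
    """
    n = len(dist)
    best_len, best_tour = float("inf"), None
    for perm in permutations(range(1, n)):
        tour = (0,) + perm + (0,)
        length = sum(dist[tour[i]][tour[i + 1]] for i in range(n))
        if length < best_len:
            best_len, best_tour = length, tour
    return best_len, best_tour

# Toy instance with 4 cities.
dist = [[0, 1, 4, 2],
        [1, 0, 5, 3],
        [4, 5, 0, 1],
        [2, 3, 1, 0]]
length, tour = tsp_brute_force(dist)  # shortest tour has length 9
```

Four cities mean only 6 tours to check, but 20 cities already mean about 10^17, which is the exponential wall the P-versus-NP question asks us to either breach or prove unbreachable.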

The fourth problem again has a physics flavour, and is related to the Navier-Stokes equation describing the fluid flow of an incompressible fluid with viscosity. It is difficult to exaggerate the importance of this equation in the design of aeroplanes and ships. Although there are plenty of approximation and numerical methods, as in the case of gauge theories, we are still lacking a deep mathematical methodology that will allow us to understand in detail the space of solutions for given initial data. To appreciate the difficulty of the problem, a solution would imply a detailed understanding of the phenomenon of turbulence – no small accomplishment.

The fifth problem is at the root of modern topology: the Poincaré conjecture. Roughly speaking, topology is the study of properties of spaces that are stable under arbitrary continuous deformations. Thus a tetrahedron or cube can be continuously deformed to a sphere, while it is impossible to do the same with the surface of a doughnut. It is a very interesting and legitimate question to ask for the complete set of topological invariants in a given dimension. In the case of a two-dimensional sphere it is clear that we cannot lasso it. This property is known as simple-connectedness and it characterizes the sphere topologically. Poincaré asked whether a similar property would completely characterize a three-dimensional sphere. During the 20th century we achieved a topological classification for spaces whose dimension is different from three – curiously, the very dimension in which Poincaré’s conjecture was formulated. Progress in the past two decades seems to indicate that it may be settled in the positive in a few years’ time.

The last two problems are truly arcane. They go under the names of the Birch and Swinnerton-Dyer, and the Hodge conjectures. To exhibit the difficulty in explaining the first, it should be noted that definite support for it came from the settling of Fermat’s last theorem by Wiles and the subsequent progress in the so-called Taniyama-Shimura conjectures. The problem is deeply rooted in the study of rational solutions to special types of equations (elliptic curves), which in turn are contained in the apparently unfathomable world of Diophantine equations. The Hodge conjecture is, according to Devlin, the one most difficult to formulate for a lay audience. Its resolution would provide deep insights into analysis, topology and geometry, but its mere formulation requires advanced knowledge in all three subjects.

Devlin comes out with flying colours in his effort to make these fascinating problems accessible to a wide audience. It is inevitable that there are some dark corners and a few inaccuracies in such a challenging task. However, for anyone interested in the frontiers of mathematics and scientific knowledge, this book provides enthralling reading.
