by Siegmund Brandt, Hans Dieter Dahmen and Tilo Stroh, Springer-Verlag. Hardback ISBN 0387002316, €69.95 (£54.00, $69.95).
“Physical intuition” is a precious commodity for all physicists. Richard Feynman, when asked once what his intuition was concerning a certain problem, is said to have replied that he didn’t have any because he hadn’t done the calculation yet. Common sense is frequently a poor guide, even in the classical domain, but there our intuition can be built up with the help of reasoned interpretations of phenomena we can experience directly, and by the performance of many relatively simple and realistic calculations. Gaining intuition about the quantum world is much harder: we have little, if any, direct perception of it, and few realistic problems are mathematically easy to solve. Thus, students have a hard time “thinking physically” when faced with quantum problems.
Surely computers ought to be able to help. Quite complicated problems can be quickly solved numerically, and – most importantly – the results can be presented in a variety of graphical forms. Indeed, several recent undergraduate texts on quantum mechanics have included disks demonstrating the solutions of standard problems. These generally have a modest capability for the student to “play with the parameters”, but there has been nothing more radically interactive. This book is the first (so far as I know) to fill this gap.
In fact, it might be more accurate to describe it as a computer program on a CD, accompanied by extended notes, rather than as a book accompanied by a CD. The program, called “INTERQUANTA” or IQ for short, has a self-explanatory user interface written in Java. It is easy to install and simple to work with – the instructions are even suitable for computer illiterates like myself. It can be used passively, to watch (and listen to) demonstrations that illustrate the main points in the text, but in the second, interactive mode the user is offered considerable freedom in designing the problems to be solved and the ways in which the answers may be displayed. As the authors put it, users can enter a “computer laboratory in quantum mechanics”.
Eight physics topics are treated in as many chapters: free-particle motion, bound states and scattering, first in one and then in three dimensions; two-particle systems in one dimension; and special functions of mathematical physics. Each chapter begins with a section called “Physical Concepts”, in which the relevant concepts and formulae are assembled without proof. Each section of the text is carefully keyed to a corresponding part of IQ, and the graphical outputs are well designed and easy to read. More than 300 numerical exercises are included to stimulate the reader’s exploration, and many contain useful prompts encouraging the reader to suggest a physical explanation for particular results. A final chapter contains hints for the solution of some exercises, and an appendix provides a systematic guide to IQ.
IQ contains much useful material, and the authors are to be congratulated on having produced something rather novel that is so user friendly. But I believe its value would be greatly enhanced if the range of topics were to be significantly extended. For example, all the presentations are static, yet there are many fascinating and important time-dependent phenomena in quantum theory for which a “movie” would be a valuable aid to understanding. And it is a pity that the whole vital area of perturbation theory is omitted, where there is ample scope for numerical instruction. A program that included topics such as these would surely be a major resource for both students and teachers.
a film by Samy Brunett (in French or English versions), Blue in Green Productions. DVD or VHS PAL €20.00. (Available directly from samy.brunett@village.uunet.be.)
Bringing the fundamental physics research of today, or of the last 30-40 years, within the reach of the general public is a very difficult task. It is verging on the impossible, to be perfectly honest, as it requires some prior general scientific knowledge on the part of the public and a great command of the subject on the part of the author(s). Having said that, the difficulty of the task does not imply that it should not be attempted, in fact quite the reverse. And this is what Samy Brunett does in his DVD on the theory of everything.
The approach chosen for Brunett’s DVD is to use interviews and discussions with a number of young and not-so-young physicists who are working for or at CERN, some of whom are well known and some of whom are not (in any case, the general public is unlikely to tell the difference). These physicists are fairly representative of their profession, which is already a point in the DVD’s favour, as they all speak with enthusiasm and passion.
The interviews are preceded by a number of computer-generated images, not all of which are entirely appropriate to the subject matter, which is, quite simply, the origins of the universe – a different kettle of fish altogether from the famous magic potion of Asterix the Gaul! However, having said this, once the introduction is over we do go on to meet the actual people working in physics. Those of us who recognize it may lament the rather drab setting of the CERN cafeteria, but the sentiments are well expressed and quite convincing, whether they are uttered by young physicists or by the leading lights of the field.
Of course from a single viewing of this DVD alone, the “man in the street” is not going to aspire to understand what “we” mean today by the theory of everything, extra dimensions, the very early universe or particle mass, especially since the links between the different subjects are not always clear. However, its great merit – and certainly not the only one – is that it has been made and that it allows the viewer to get an idea of today’s leading figures in fundamental research.
In the current wave of commemorative events (the recent centenaries of the discovery of radioactivity and of Einstein’s first articles, for example, or CERN’s 50th anniversary this year), this kind of modern technology-based publication can do nothing but good for the rather stale image of this discipline of ours that is so difficult to popularize.
by M G Minty and F Zimmermann, Springer. Hardback ISBN 3540441875, €74.85 (£54.00, $79.95).
This is a specialist book written primarily for the high-energy particle-accelerator community, in which Minty and Zimmermann present a contemporary view of charged-particle beam measurement and control in high-energy physics (HEP) machines. With an eye on the next generation of such machines, the authors cover, in some detail, the pioneering work being carried out around the world on electron-positron linear colliders.
The subject matter and references are laudably taken from worldwide resources. The references are given in abundance and the authors have provided an admirable service by trawling through the ever-more voluminous proceedings of conferences and schools to list the key papers. There are 172 figures, which are frequently of “live” examples taken from the world’s foremost HEP laboratories, and the authors have also taken care to expand the theory in the more advanced or less well known areas. Each chapter is backed up by exercises with solutions that provide the authors with a useful vehicle for more theoretical explanations and alternative views that could not be conveniently integrated into the text. Newcomers to HEP machines, however, should heed the warning on page 5 that the reader is expected to know basic optics and, one might add, advanced applied mathematics as well.
The experienced reader can omit the introductory chapter 1, while newcomers would be better served by building their knowledge through the more basic references given by the authors. Thereafter, the book reads smoothly, starting with single-particle optics and moving progressively through emittance, photoinjectors, collimation, longitudinal optics, longitudinal manipulation, injection and extraction, polarization and cooling. In general, the authors start by reviewing how to measure the parameters in a particular category and then continue with how to control those parameters.
Chapter 2 starts with the measurement of transverse optical parameters. Many of the techniques described are relatively recent and depend on the tremendous advances that have been made in digital electronics and online computing power. The use of “multiknobs” is described. This concept has existed for many decades in the form of tune and chromaticity control using two independent corrector families, but it can be greatly extended using matrix techniques for quasi-linear systems and powerful matching routines for the more non-linear cases.
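The idea behind a multiknob is easy to sketch. In the quasi-linear regime one measures or computes a response matrix relating the corrector settings to the observables; the columns of its inverse are then combinations of settings that each move a single observable while leaving the others untouched. A minimal illustration in Python – the matrix entries below are invented for the example, not taken from any real machine:

```python
import numpy as np

# Hypothetical 2x2 response matrix: rows are the observables (tune Q,
# chromaticity Q'), columns are the strengths of two corrector families.
R = np.array([[ 0.8, 0.3],
              [-1.2, 2.5]])

# The columns of R^-1 are the multiknobs: each moves one observable
# by one unit while leaving the other unchanged.
knobs = np.linalg.inv(R)

# Example: raise the tune by +0.01 at constant chromaticity.
dk = knobs[:, 0] * 0.01
print(R @ dk)   # -> approximately [0.01, 0.00]
```

With more observables than knobs, or with noisy measured responses, a pseudo-inverse or least-squares fit replaces the plain inversion; that is essentially where the matching routines for the less linear cases come in.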
Chapter 3 addresses the important subject of closed-orbit correction, where the reader will be brought up to date with jargon such as “corrector ironing” (page 87). This chapter also includes newer topics such as wake-field bumps, dispersion-free steering, orbit feedback and dynamic orbits excited by an alternating-current dipole. Chapter 4 deals with the difficult task of emittance measurement and tackles both the transverse and longitudinal planes, bringing in equilibrium emittances and the control of damping partition numbers. The next chapter briefly breaks the mould of the earlier ones by reviewing low-emittance photoinjectors and the production of flat beams using a solenoid, which together are of great importance to linear colliders.
Chapter 6 takes up collimation, but with only seven pages the reader sees relatively little of this critical subject. Collimation is important in low-energy high-intensity machines, high-energy superconducting machines and in electron-positron linear colliders. In each of these cases the problems and parameters are different. The collimation proposals for linear colliders would have fitted well into the context of this book, as the authors are clearly preparing for the next generation, while high-efficiency collimation for machines like the LHC is arguably an even more important topic that could have been included.
The book then returns to the basic mould with excellent accounts of the measurement of longitudinal optics parameters in chapter 7, followed logically by the manipulation of the longitudinal phase space in chapter 8. One small disappointment is that the tomographic measurement of longitudinal phase space, although mentioned in one of the examples and referenced, is not treated in a separate section as a diagnostic tool in its own right.
Chapter 9 is arguably better suited to a book on lattice design and contains somewhat surprising excursions into septum and kicker magnet design. However, the reader will no doubt find extraction by resonance islands and bent crystals highly interesting. Chapter 10 on polarization fills a gap in the literature and the authors have accordingly paid more attention to theory. The practicalities of the harmonic correction of depolarizing resonances, adiabatic spin flipping, tune jumps and Siberian snakes of complete and partial types are all addressed. The final chapter describes the fascinating topics of stochastic cooling, electron cooling, laser cooling, ionization cooling, crystal beams and beam echoes, many of which merit their own monograph.
In summary, this book is a very welcome and valuable addition to the accelerator literature. As noted by the authors, there is relatively little material in the book specifically for low-energy machines, but industrial users may still find it useful to read the book and adapt or develop the ideas rather than apply them directly.
The association of gamma-ray bursts (GRBs) with supernova (SN) explosions has been suspected since the discovery of GRBs by the Vela satellites in 1967. However, observational evidence for a GRB-SN association was first found accidentally after the discovery of GRB afterglows 30 years later. The prompt search for an optical afterglow of GRB980425 led to the discovery, two days after the burst, of a relatively nearby supernova, SN1998bw, whose sky position and estimated explosion time were consistent with those of GRB980425. The physical association between them suggested that GRB980425 was produced by a highly relativistic and narrowly collimated jet viewed off axis and ejected by SN1998bw, which appeared to be unusually energetic for a supernova because it was viewed near axis.
These conclusions were not immediately reached by the majority of the GRB and SN communities, who were more accustomed to spherical models of SN explosions and GRB “fireballs”. So the above evidence for a GRB-SN association was at first dismissed as being either an accidental sky coincidence between a distant GRB and a close SN, or as a physical association between a new type of faint GRB (see p293 in the book) and a new type of SN, much more energetic than ordinary supernovae (SNe) and dubbed “hypernovae” (see pp243-281).
These interpretations began to erode, however, as observational data on the optical afterglows of other GRBs accumulated. The data indicated that GRBs take place in star-formation regions in host galaxies, and the afterglows – in particular those of the relatively close GRBs – showed evidence that long-duration GRBs are produced by jetted ejecta in SN explosions akin to SN1998bw, and not in fireballs produced by the merger of neutron stars in binary systems due to gravitational-wave emission. But it was the dramatic spectroscopic discovery on 8 April 2003 of SN2003dh in the late afterglow of GRB030329 that convinced the majority of the GRB and SN communities that ordinary, long-duration GRBs are produced in SN explosions akin to SN1998bw.
The book Supernovae and Gamma Ray Bursters edited by Kurt Weiler, an expert on radio SNe, appeared shortly before the discovery of GRB030329 and SN2003dh. It contains a collection of contributions on SNe and GRBs. Many of the contributions are very informative, well written and satisfy the stated aim of Springer’s Lecture Notes in Physics series: “intended for broader readership, especially graduate students and non-specialist researchers wishing to familiarize themselves with the topic concerned.”
The first half of the book is devoted to SNe and includes contributions such as “Supernova Rates” by Enrico Cappellaro and “Measuring Cosmology with Supernovae” by Saul Perlmutter and Brian Schmidt. The second half is devoted to GRBs and includes contributions such as “Observational Properties of Cosmic GRBs” by Kevin Hurley, “X-ray Observations of GRB Afterglows” by Filippo Frontera and “Optical Observations of GRB Afterglows” by Elena Pian.
However, the book as a whole is unbalanced, both in its coverage of SNe and GRBs and in its coverage of the possible GRB-SN association, which presumably was its main aim. The first half covers in great detail (seven out of ten chapters) the fireworks from the interaction of the SN ejecta with the SN environment, but lacks a detailed discussion of our current knowledge of the different SN progenitors, the mechanisms that explode them and the compact remnants that are left over. There is no need to emphasize the importance and relevance of these subjects to the GRB-SN association. The second part of the book, which is devoted to GRBs and their afterglows, focuses mainly (four chapters extending over 100 pages) on the observations of GRB afterglows, with only a single chapter (16 pages long) on the prompt gamma-ray emission. Our current “theoretical understanding” of GRBs is summarized in a single chapter, which is exclusively devoted to the presentation of the party line – the fireball model – as if it were a dogma. It does not discuss the model’s severe problems, nor possible future tests of the model. It does not even mention alternative models, such as the “Cannonball Model”, which is ab initio based on a GRB-SN association, is falsifiable, and is much more predictive and successful than the fireball models.
In summary, the book contains some useful summaries of observational data on SNe and GRBs, but sheds no light on the production mechanism of SNe and GRBs, nor on the GRB-SN association.
by Keith Devlin, Granta Books. Hardback ISBN 1862076863, £20.00 ($29.95).
On 24 May 2000 in a lecture hall at the Collège de France in Paris, Michael Atiyah from the UK and John Tate from the US announced that a $1 million prize would be granted to whoever first solved any one of the seven most difficult open problems in mathematics. These are known as the “Millennium Problems”, and the whole idea was an initiative of the Clay Mathematics Institute (CMI), which had been established one year earlier by the magnate Landon Clay. The list of problems was selected by a committee of top mathematicians, including – along with Atiyah and Tate – Arthur Jaffe, the current director of the CMI, Alain Connes, Andrew Wiles and Edward Witten, the only physicist, who is also a Fields medallist.
One hundred years earlier, also in Paris, David Hilbert had given his famous address laying out an agenda for the mathematics of the 20th century. He proposed a total of 23 problems; a few turned out to be simpler than anticipated, or were too imprecise to admit a definite answer, but most were genuine, difficult and important problems that brought instant fame and glory (if not necessarily wealth) to those who solved them.
Some of the differences between the two sets of problems should be mentioned. While Hilbert’s set provided a guideline for mathematics research, the Millennium Problems provide a description of the current frontier of knowledge in the subject. The other important difference is that among the CMI set, two are inspired by deep physics problems: fluid dynamics and the structure of gauge theories.
In this book, Keith Devlin, a well known mathematician who writes excellent books and articles for a lay audience, takes up the daunting task of explaining these problems as best as can be done to an audience assumed to have no mathematical sophistication beyond the high-school curriculum – and all this in only about 200 pages! Although such an ambitious aim is nigh on surreal, the results are quite satisfactory. The book is able to communicate to a large extent the depth and importance of the problems, and to give a glimpse of the deep elation whoever solves them will feel. For anyone involved in research, the $1 million prize is almost beside the point.
The author chooses to present the problems from the “simplest” to the most arcane. The first is the Riemann hypothesis. Deeply related to the distribution of prime numbers, this is the only one of the problems proposed by Hilbert that has not been settled. Technically, one needs to find the location of the zeros of a certain function (Riemann’s zeta function). If it is proved true, hundreds of important results in number theory will follow, and it is quite likely that a poll among mathematicians would identify this as the most important open problem in mathematics.
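For reference, the function itself fits on one line. Where the series converges it reads

```latex
\zeta(s) \;=\; \sum_{n=1}^{\infty} \frac{1}{n^{s}}, \qquad \operatorname{Re} s > 1,
```

and the hypothesis asserts that every non-trivial zero of its analytic continuation to the rest of the complex plane lies on the critical line Re s = 1/2.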
The second problem has a physics flavour to it. All the basic interactions in the Standard Model of particle physics are gauge interactions. If we consider the idealized situation of a pure gauge theory, we would like to have a mathematically sound proof that the quantum theory exhibits confinement, and that the mass of the first excited state is definitely positive (there is a mass gap). Most physicists would agree that these properties hold; however, a real proof may provide completely new methods in quantum field theory and could bring a revolution similar to the invention of calculus by Leibniz and Newton.
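Stated compactly – and glossing over the hardest part, which is constructing the quantum theory rigorously in the first place – the mass-gap half of the problem asks for a proof that the spectrum of the Yang-Mills Hamiltonian contains nothing between the vacuum and some strictly positive energy:

```latex
\operatorname{spec}(H) \;\subset\; \{0\} \cup [\Delta, \infty)
\qquad \text{for some } \Delta > 0,
```

so that the lightest excitation has a definite, non-zero mass.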
The third problem is related to computational complexity, where one can ask, among those functions or propositions that can be computed, which are “easy” and which are “hard”. Problems that can be solved in polynomial time with respect to the length of the input (in bits) are assigned to complexity class P, while the class NP contains problems whose proposed solutions can be checked in polynomial time; for the hardest of these, every algorithm known so far requires exponential time to find a solution. Among the latter, one of the most famous is the travelling-salesman problem. Nobody knows whether the classes P and NP are equivalent. This is a central problem in computational theory and its resolution may have far-reaching technological consequences.
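A few lines of code make the asymmetry concrete. The sketch below – with a made-up distance table, purely for illustration – solves the travelling-salesman problem by exhaustion: checking any one proposed tour costs time proportional to n, but finding the best tour this way takes (n−1)! steps, and nobody knows a polynomial-time shortcut.

```python
import itertools

# Made-up symmetric distance table between n = 4 cities.
dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 3],
        [10, 4, 3, 0]]
n = len(dist)

def tour_length(order):
    # Verifying one candidate tour is cheap: O(n) additions.
    return sum(dist[order[i]][order[(i + 1) % n]] for i in range(n))

# Exhaustive search over all (n-1)! tours starting from city 0:
# fine for n = 4, hopeless for n = 60.
best = min(((0,) + p for p in itertools.permutations(range(1, n))),
           key=tour_length)
print(best, tour_length(best))
```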
The fourth problem again has a physics flavour, and is related to the Navier-Stokes equation describing the flow of an incompressible viscous fluid. It is difficult to exaggerate the importance of this equation in the design of aeroplanes and ships. Although there are plenty of approximation and numerical methods, as in the case of gauge theories, we are still lacking a deep mathematical methodology that will allow us to understand in detail the space of solutions for given initial data. To appreciate the difficulty of the problem, note that a solution would imply a detailed understanding of the phenomenon of turbulence – no small accomplishment.
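For completeness, the equation at stake reads, in standard notation (velocity field u, pressure p, constant density ρ, kinematic viscosity ν):

```latex
\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\,\mathbf{u}
  \;=\; -\,\frac{1}{\rho}\,\nabla p \;+\; \nu\,\nabla^{2}\mathbf{u},
\qquad \nabla\cdot\mathbf{u} = 0 .
```

The Millennium Problem asks whether, in three dimensions, smooth initial data always yield solutions that exist and remain smooth for all time – or for a counterexample showing that they do not.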
The fifth problem is at the root of modern topology: the Poincaré conjecture. Roughly speaking, topology is the study of those properties of spaces that are preserved under arbitrary continuous deformations. Thus a tetrahedron or cube can be continuously deformed to a sphere, while it is impossible to do the same with the surface of a doughnut. It is a very interesting and legitimate question to ask for the complete set of topological invariants in a given dimension. In the case of a two-dimensional sphere it is clear that we cannot lasso it: any loop on its surface can be shrunk to a point. This property is known as simple-connectedness and it characterizes the sphere topologically. Poincaré asked whether a similar property would completely characterize a three-dimensional sphere. During the 20th century a topological classification was achieved for spaces of every dimension other than three – curiously, the dimension in which Poincaré’s conjecture was formulated. Progress in the past two decades seems to indicate that it may be settled in the positive in a few years’ time.
The last two problems are truly arcane. They go under the names of the Birch and Swinnerton-Dyer, and the Hodge conjectures. To give an idea of the difficulty of explaining the first, it should be noted that definite support for it came from the settling of Fermat’s last theorem by Wiles and the subsequent progress on the so-called Taniyama-Shimura conjecture. The problem is deeply rooted in the study of rational solutions to special types of equations (elliptic curves), which in turn are contained in the apparently unfathomable world of Diophantine equations. The Hodge conjecture is, according to Devlin, the most difficult to formulate for a lay audience. Its resolution would provide deep insights into analysis, topology and geometry, but its mere formulation requires advanced knowledge in all three subjects.
Devlin comes out with flying colours in his effort to make these fascinating problems accessible to a wide audience. It is inevitable that there are some dark corners and a few inaccuracies in such a challenging task. However, for anyone interested in the frontiers of mathematics and scientific knowledge, this book provides enthralling reading.
Estonia’s parliament has recently approved special funding from the country’s state budget of some €100,000 annually for the period 2004-2010. The funds are to boost scientific co-operation between Estonia and CERN, which to date involves the following Estonian research institutions: the National Institute of Chemical Physics and Biophysics, the University of Tartu (notably its Institute of Physics), the Technical University of Tallinn and the Observatory of Tartu.
Estonia’s co-operation with CERN will now focus on a number of objectives: consolidation of participation in the CMS experiment at the Large Hadron Collider (LHC); participation in LHC Grid Computing and other information-technology projects at CERN; collaboration with research groups at CERN in theoretical and experimental particle physics, as well as material sciences; and the creation of an Estonian graduate school, with students trained at CERN. The school already plans to send six Estonian students to participate in CERN’s Summer Student Programme in 2004.
I was born in Dimboola, in the state of Victoria, Australia. Back then it was a town of about 2000 people, but it’s more like 1000 today. It is by the Wimmera River, which carries rainwater falling inside the Great Dividing Range of Australia northwards until it sinks into the sands. My mother, a schoolteacher, was very keen that her children should have an education in Melbourne, so we moved there when I was two years old; all of my schooling was in Melbourne. At Melbourne University I took a four-year course for a Bachelor of Arts (Honours Mathematics) and a Bachelor of Science (Physics), and then I took my PhD in Cambridge.
How did you become interested in science?
I was always interested in mathematics. Physics was a later interest, since it involved the use of mathematics.
What led you to Cambridge?
In 1946 I was awarded the Aitchison Travelling Scholarship of Melbourne University. I married at age 21 and took my wife with me [to Cambridge]. My supervisor there was Kemmer and my first aim was to learn how to use quantum mechanics. There wasn’t much knowledge of that in Melbourne in those days.
What sparked your interest in quantum mechanics?
Quantum mechanics was essential for research in physics. Paul Dirac’s The Principles of Quantum Mechanics was the book to study. Its first edition in 1930 was sparse in words and very difficult to read. The 1935 edition was rewritten but was unobtainable after the war. Dirac lectured from third-edition proofs in 1946 and I attended a second time in 1947, with my own copy. Mrs [Bertha Swirles] Jeffreys also gave very intelligible and useful lectures. Lectures were not required for postgraduate students, but we went along out of interest.
What was your PhD thesis work?
Its title was “Zero-zero transitions in nuclei”. Primarily it was a study of the transitions from the first excited level of oxygen, which has spin-parity 0+, to the ground state, which also has 0+, together with a number of other topics added as appendices.
Was your thesis entirely theoretical?
Yes, it was entirely theoretical but it stemmed from experiments by [Samuel] Devons at the Cavendish Laboratory. After two years at Cambridge, I ran out of money. We had a young child by that time so I took up a one-year post at the University of Bristol.
What came next?
I was a student assistant to Professor Mott. He began in nuclear physics in the early 1930s but many students at the Cavendish Laboratory consulted him (himself a student) about their solid-state physics research. He did this so well that he quickly became known as a solid-state physics expert. He never found time to take a PhD himself. However, he recognized the high quality of the research being done by the Cosmic Ray Group on the fourth floor of the Physics Department at Bristol University. He wished to know more about this work and perhaps even to take part in it. This was the group of C F Powell, who not long before had identified the pion as Yukawa’s nuclear-force meson. It was there that I learned about elementary particles first-hand, because they were the people finding them. Mott was in such demand in solid-state problems that I never managed to help him make the transition back to nuclear physics.
At Bristol I got involved in problems of cosmic-ray particles. I took a particular interest in the “tau meson”, which we call the K+ meson today. That tau meson decayed into three pions. I started collecting evidence about them and their decay configurations. Although I thought a lot about them, I did not do any work on them until I had completed my thesis in 1950, more than a year later.
This year at Bristol was vital for my development in many ways, a very important year for me, in my opinion. I was invited to join the department of Professor Peierls at Birmingham University. My first year there was mainly occupied with completing my thesis work. I was also learning how to use the quantum-electrodynamical methods of Feynman, which I used to generate a number of appendices to my thesis.
Did you stay at Birmingham after completing your thesis?
Yes, I wrote the thesis in the first year, then I was a research fellow and later a lecturer. It was a strong group, centred on Peierls. This was his style; Peierls supervised all of the students. He had a wide range of understanding in physics and in life.
I was very lucky. Dyson, who had worked in America showing that the theoretical formalisms of Feynman and Schwinger were equivalent, did so on a UK fellowship that required him to return to England for two years after his work there. He chose to work at Birmingham. He was in a fairly relaxed state then, because he’d done his most important work, and so he had time to talk with me now and then. His presence, and my contact with him, were of considerable importance for me.
I did my work then [in 1951] on the neutral pion decay, to a photon and an electron-positron pair [the “Dalitz pair”], before moving on to the tau-meson decay, for which I devised a convenient representation, the so-called “Dalitz plot”.
How did you come up with the Dalitz plot?
The Dalitz plot is a kind of map, summarizing all of the possible final configurations, each dot representing one event. I came at it from a geometrical perspective because I visualize geometry better than numbers. The idea was convenient then for all systems decaying into three particles. Tau-meson decay to three pions is particularly simple. With parity conservation (P), I used the plot to show that if the tau meson was also capable of decay to two pions, then the three-pion plot should show special features, which are absent in the data; and also to show that the tau meson had zero spin. If the K+ meson can decay to three-pion and two-pion states, then these two final states must have opposite parity. These facts were the first intimation that P might fail for weak decay interactions.
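To make the construction concrete: for a decay of a parent of mass M into daughters 1, 2 and 3, each event is plotted as a point whose coordinates are the squared invariant masses of two of the daughter pairs. Pure phase space populates the kinematically allowed region uniformly, so any structure in the density of dots signals dynamics – spin, parity or resonances. Here is a minimal sketch in Python using the standard kinematic-boundary formula, with approximate K+ → π+π+π− masses plugged in purely for illustration (the function names are invented for the example):

```python
import numpy as np

# Approximate masses in GeV for K+ -> pi+ pi+ pi- (the old "tau meson").
M, m1, m2, m3 = 0.494, 0.140, 0.140, 0.140

def m23sq_bounds(m12sq):
    # Kinematic limits of m23^2 at fixed m12^2 (standard relativistic
    # kinematics, evaluated in the (1,2) rest frame).
    m12 = np.sqrt(m12sq)
    e2 = (m12sq - m1**2 + m2**2) / (2.0 * m12)
    e3 = (M**2 - m12sq - m3**2) / (2.0 * m12)
    p2 = np.sqrt(max(e2**2 - m2**2, 0.0))
    p3 = np.sqrt(max(e3**2 - m3**2, 0.0))
    return (e2 + e3)**2 - (p2 + p3)**2, (e2 + e3)**2 - (p2 - p3)**2

def sample_dalitz(n, rng=np.random.default_rng(0)):
    # Accept-reject sampling: pure phase space is flat on the Dalitz plot.
    pts = []
    while len(pts) < n:
        m12sq = rng.uniform((m1 + m2)**2, (M - m3)**2)
        m23sq = rng.uniform((m2 + m3)**2, (M - m1)**2)
        lo, hi = m23sq_bounds(m12sq)
        if lo <= m23sq <= hi:
            pts.append((m12sq, m23sq))   # one dot = one event
    return np.array(pts)

points = sample_dalitz(2000)   # scatter-plot these to draw the Dalitz plot
```

A scatter plot of these points shows the uniform phase-space population against which real three-pion data would be compared.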
When did you visit Cornell University, from Birmingham?
I was at Birmingham University from 1949 to 1953. Then I was given two years’ leave to work in America, primarily at Cornell University in Ithaca, upstate New York, in the group of Professor Bethe, at his invitation. He was a tremendous stimulation. Our names appear together on one paper, but our contributions were made at different places and different times. My work was mostly on pion-nucleon scattering and the production of pions. I was also very fortunate to be able to work at a number of places for short periods. I spent one summer at Stanford University, another at the Brookhaven National Laboratory and one semester at the Institute for Advanced Study in Princeton.
And when did you go to the University of Chicago?
I joined the faculty of the University of Chicago and its Enrico Fermi Institute for Nuclear Studies in 1956. After Fermi died in 1954, a number of senior theoretical physicists left Chicago – Gell-Mann went to Caltech, Goldberger went to Princeton University, and there were others. Those appointed to senior posts at the University of Chicago then had a tremendous opportunity – to build up groups again and get things going, with the junior faculty still to be appointed. There were quite a number of good students there too, many from other countries.
My interest in hypernuclear events developed particularly well in Chicago because a young emulsion experimenter, Riccardo Levi-Setti, whose work I had known from his hypernuclear studies at Milan, came to the Institute for Nuclear Studies at this time. We each benefited from the other, I think, and we got quite a lot done.
Did all of this happen over just two years in Chicago?
No, I was connected to the University of Chicago for 10 years in all. I enjoyed Chicago. I thought it a very interesting place and a very fine university. I approved of the way the university did things, although the place wasn’t very fashionable with American physicists. At that time they tended to go to either the east coast or the west coast. Relatively few of them were interested in being in the middle of the country; perhaps more do these days.
After Chicago, you went to Oxford University.
Peierls became the Wykeham professor of theoretical physics in Oxford, where there had not really been any central department for this. There were some individual theoretical physicists, but only a small number. Peierls brought all that together, and he was very keen for me to go back with him to Oxford.
I became a research professor of the Royal Society. They have no buildings for research, but they had funds and could appoint some researchers to be in various universities. I was responsible for organizing particle-physics theory in Oxford. Besides quark-model work, I still did work on hypernuclear physics, much of this with Avraham Gal of Jerusalem.
Life became increasingly busy as the years went by. I was attached to the Rutherford High-Energy Laboratory, as it was called in those days. They had their own accelerator and I was their adviser on theoretical matters. That was quite a happy arrangement, also.
I’ve heard scientists call you the “father of QCD”. Do you think that’s fair to say?
Oh, no. I wouldn’t claim that. I first heard quark colours mentioned in a seminar by Gell-Mann. I just picked up the ball very quickly, since this concept immediately resolved some deep difficulties with the quark model that we had adopted in 1965. Of course, many people wouldn’t give any credence to the quark theory at that early stage, but I was always interested in it, and others came to Oxford to join in the work.
As time passed, heavier quarks, charm (c) and bottom (b), became established and we became interested in the spin correlations between the quark and antiquark jets from electron-positron annihilation events. Finally we came to the top quarks, for which these effects would probably be quite different.
What was your involvement in the discovery of the top quark?
Two groups at the Tevatron (Fermilab) were doing experiments at sufficiently high energies to find the top (t) quark, but little was known about their progress. We – myself and Gary Goldstein (at Tufts University) – thought about the problems of how one might identify tops and antitops from the decay processes that seemed most natural for them, and worked out a geometrical method by which experimental data could be used to deduce the top quark mass.
It was known that there was one event that seemed to have the features needed – this had been shown at a conference by the CDF group at Fermilab – but which the CDF experimenters would not accept as a possible top-antitop production and decay event. Since they wished to determine the top pair-production cross-section, they had laid down fiducial limits for such events. However, these limits were not always relevant for determining the existence and mass of the top quark. Knowledge of this one event made us think very hard about devising this method – empirical data drive the theoretical mind! We tried out our method, with the conclusion that, if this event were top-antitop production and decay, the top quark mass must be greater than about 130 GeV, an unexpectedly large value. But of course this one event might not have been a top-antitop event. This could only be decided on the basis of a large number of observed events, all of them being consistent with a unique mass, and this was the case when the two experimental groups came to conclude later that the top mass was about 180 GeV.
You’ve had a lot of good fortune and hard work along the way!
Yes, I know…I’m very aware of that. I have been lucky.
•Melanie O’Byrne, Thomas Jefferson Laboratory, talked to Richard Dalitz during the 8th International Conference on Hypernuclear and Strange Particle Physics, held at Jefferson Lab in October 2003. This article is based on the interview published in Jefferson Lab’s newsletter, On Target, in March 2004, and is published with the laboratory’s permission.
The second CHINA-CERN Workshop took place from 31 October to 1 November 2003 in Weihai City in the Shandong province, some 800 km south-east of Beijing. Co-organized by the National Natural Science Foundation of China (NSFC) – China’s main funding agency for the Large Hadron Collider (LHC) – and CERN, the workshop allowed the attending spokesmen to review the status of their collaborations with Chinese colleagues and funding agencies.
The first CHINA-CERN Workshop was held in 1999, and at that time China participated mainly in the CMS experiment, and to a lesser extent in ATLAS. Since then, however, Chinese scientists have joined all four major LHC collaborations, three of which are now formally funded by China. The second workshop was attended by 62 participants. The 13 non-Chinese members of the LHC collaborations included the spokesmen Michel Della Negra (CMS), Peter Jenni (ATLAS), Tatsuya Nakada (LHCb) and Jürgen Schukraft (ALICE). Representatives from the Chinese funding agencies – the NSFC, the Chinese Ministry of Science and Technology, the Ministry of Education, and the Chinese Academy of Sciences – acted as reviewers and organizers. Chinese institutions and universities were also represented by 36 participants from: the Central China Normal University; the China Institute of Atomic Energy; the Central China Science and Technology University; the Institute of High Energy Physics (IHEP); the Institute of Theoretical Physics (ITP) of the Chinese Academy of Sciences; Nanjing University; Peking University; Shandong University; Tsinghua University; and the University of Science and Technology in Hefei/Anhui.
The workshop consisted mainly of plenary presentations, and there were opening addresses from Peiwen Ji (NSFC), Diether Blechschmidt (CERN) and Tao Zhan, president of the host Shandong University. Zhan pledged continued support for the LHC programme at Shandong University, which has been a member of the ATLAS collaboration since 1999. The four spokesmen of the LHC collaborations then presented the status of their experiments, and representatives of the Chinese collaborators reported on their contributions to three LHC experiments: Guoliang Tong of IHEP reviewed work on the ATLAS experiment in China, and Chunhua Jiang and Yuanning Gao described progress on CMS and LHCb at IHEP and Tsinghua University, respectively.
The sessions continued with Yuqi Chen of ITP, who reported on the progress of theoretical studies in collider physics in China over the past two years, and Gang Chen of IHEP, who looked at the computing needs for future physics. Alexandre Nikitenko from Imperial College in the UK gave an outlook on the early physics reach of CMS, and Torsten Akesson of Lund presented the prospects for computing for ATLAS.
On the second day, reports on muon projects for ATLAS and CMS were given by George Mikenberg from the Weizmann Institute and Guenakh Mitselmakher of Florida, respectively. Chris Seez of Imperial College talked about the trigger system for CMS, and Antonio Pellegrino from NIKHEF reported on the outer tracking system for LHCb. Activities in China were presented by Guoming Chen of IHEP, who described his studies on Bc physics at CMS; Yong Ban and Sijin Qian, who reviewed the work at Peking University on the resistive plate chambers for CMS and on the CMS physics programme, respectively; and Chengguang Zhu, who reported on the production of the thin gap chambers for ATLAS in the host university of Shandong.
After almost one-and-a-half days of plenary sessions, Chinese physicists and their colleagues in the LHC experiments met in four parallel sessions – one for each experiment – to review progress, address problems and plan future work, especially for the upcoming LHC physics analysis. In the afternoon of the second day, there was a lively and broad discussion among all workshop participants on LHC computing, with Jürgen Knobloch of CERN’s Information Technology Division acting as convener. As a result of these meetings, the current situation and problems of computing and networking in China have become much clearer. As a next step, Chinese groups will have to find a solution to their problems with the help of CERN and supported by their funding agencies.
In summary, the 2003 CHINA-CERN Workshop provided an ideal forum to review the progress and commitment of China to the LHC programme. The venue and agenda were well prepared by the NSFC and CERN, and issues of common concern to all LHC experiments, such as computing and networking, were well addressed. The finding of appropriate solutions to such common issues is of key importance, not only to the LHC collaborations but also to their Chinese participants, who wish to harvest and analyse the overwhelming flow of physics data that the LHC experiments will provide as of 2007.
The workshop encouraged Chinese colleagues to participate more actively in various LHC conferences, especially in computing and LHC physics studies, so that ideas and research results can be promptly communicated within the whole collaboration community, and so that problems may be solved more effectively with help from experts at CERN and other institutes around the world.
In view of the success of the second China-CERN Workshop, it can be expected that similar workshops will be held in the future.
When the convention for the establishment of a European organization for nuclear research was signed in Paris on 1 July 1953, the 12 states who signed up to the formal establishment of CERN agreed that: “The basic programme of the organization shall comprise: (a) the construction of an international laboratory for research on high-energy particles; (b) the operation of the laboratory specified above.” (Article II, paragraph 3.) So CERN was born, and it is well known that over the past 50 years the laboratory has fulfilled its mandate extremely well in these respects. However, the paragraph continues with a third part: “(c) the organization and sponsoring of international co-operation in nuclear research, including co-operation outside the laboratory.” This part of CERN’s mission is less well known, and it seems to have been less strongly implemented by the member states.
As I begin my mandate as director-general of CERN, in the organization’s 50th anniversary year, it seems increasingly important that the member states should place more emphasis on this neglected aspect of CERN’s mission. In particular I believe that CERN should come to be recognized as the place where the European programme in particle physics is coordinated, shared and supported by all the European players in the field.
The connection between CERN and the rest of Europe is of the utmost importance, especially now that the European Commission (EC) is doing a great deal to help science. In March 2000 the Lisbon European Council endorsed the project of creating a European Research Area (ERA), as a central element of its strategy for Europe to become, by 2010, “the most competitive and dynamic knowledge-based economy in the world”. The aim is that connections in Europe in one discipline can help to strengthen the players, and that synergy between laboratories in different countries can avoid a wasteful duplication of effort in research and development.
Within this context we now have the opportunity for the EC to help us recover the “lost” part of CERN’s original mission. Building a research area across Europe requires coordination, and in particle physics this coordination should be the task of the CERN Council. In this way, the investment of the member states in CERN could be seen more overtly to be fed back into those states.
What steps can we now take? CERN’s co-operation with other European particle-physics laboratories should be strengthened and deepened, with more collaboration towards common goals. In line with the policy of the EC for structuring the ERA, CERN could participate with other laboratories in research and development and new infrastructure, and help to launch a variety of studies in co-operation with other laboratories. The programme of the CARE (Coordinated Accelerator Research in Europe) network, funded by the EC within Framework Programme 6, is an example of this kind of initiative.
For many years there has been collaboration between CERN and groups in the member states in detector development and data analysis, for example, which has been driven by an obvious necessity. However, collaboration in the accelerator domain has been less common, and competence in accelerators has become more concentrated at a few centres, such as CERN and DESY. The benefits back in the member states themselves have therefore not been as obvious as in the case of the physics collaborations, where there has been clearly defined work to be done within member states, with related local benefits.
The time now seems right for the accelerator domain to follow this example, with multilateral collaborations between CERN and other laboratories in the member states. This would be collaboration at a system level rather than at a component level, as has so far generally been the case. CERN can in this respect take on a specific role in coordinating the realization of the infrastructure. Interestingly, such collaborative work has occurred in the past, but mostly with non-member states such as Russia, rather than with the member states.
As an example of what might be possible in future, consider the CLIC (compact linear collider) project. This year the CLIC Test Facility 3 (CTF3) will be used to demonstrate the technical feasibility of the key concepts of the new radiofrequency power source for CLIC, and for further tests with high field gradients. This facility has already received some technical contributions from other laboratories: INFN (Italy), LAL (France), RAL (UK) and Uppsala (Sweden) in Europe, and SLAC in the US. Nevertheless, development work for CLIC could provide the right opportunity to set up a collaborative venture with a much larger group of European laboratories, to be blessed by the CERN Council.
So my vision is to see CERN, in particular at the level of the CERN Council, develop beyond its mission to supervise the CERN laboratory, and to develop a new – or rather old – objective to promote and steer the activities in particle physics across Europe. Remember that CERN belongs to the member states and also to their laboratories: CERN belongs to you!
Marietta Blau – Sterne der Zertrümmerung (stars of fragmentation) is the third in a new series devoted to scientists from Austrian history, following on from those about Hans Thirring and Ludwig Boltzmann.
In brief, Marietta Blau was born in Vienna in 1894 to a moderately well-to-do Jewish family, and was among the first women to study physics at the University of Vienna. In 1923 she joined the Radium Institute in Vienna, but was forced into exile in 1938. After five years in Mexico and 16 years in the US she returned to her native Austria in 1960, aged 66 and badly in need of medical treatment. She died of cancer in Vienna in 1970.
The book begins with a long and well-documented biographical chapter written by the editors. Here the history of the Radium Institute is described so vividly that I had the feeling I was actually moving around the building and meeting the people working there. The reader is presented with some interesting and at times surprising details.
For example, more than one-third of the researchers at the Radium Institute were women and the majority of Blau’s PhD students were female. However, this was not a general phenomenon during the 1930s. After leaving Austria, aided by Albert Einstein, Blau found refuge in Mexico as a staff member at the Polytechnic Institute in Mexico City between 1939 and 1944. One of the pictures in the book shows the teaching staff at the institute in 1940, and out of 58 people in the photo, Blau is the only woman. (It is interesting to compare this with our own time. A recent picture in CERN Courier shows the participants at a conference on supergravity, and three out of the 52 are women.)
Another surprising fact is that the majority of the researchers at the Radium Institute, both male and female and including Blau, were unpaid. Perhaps they were working there simply to have a meaningful life? The book quotes the famous Austrian physicist Lise Meitner as saying in 1963: “I believe that all young people think about how they would like their lives to develop. When I did so, I always arrived at the conclusion that life need not be easy; what is important is that it not be empty. And this wish I have been granted.”
The book also describes how Blau turned down academic job offers during her most productive years to take care of her sick mother, and her close collaboration with Hertha Wambacher (1903-1950), who, having originally studied chemistry, had turned to physics and chosen Blau as her supervisor. After Wambacher had finished her doctorate, the two women had an extensive and fruitful collaboration for the next six years, in particular in trying to improve the emulsion technique for detecting particles. However, much of their relationship remains a great mystery. Wambacher was a member of the Nazi party, but obituaries for her shed no light on this matter as they deal exclusively with her work.
Other topics covered in detail include Blau’s work before and after the Second World War, and there are reminiscences from those who had contact with Blau in the latter stages of her career when she was in her sixties. Blau’s last PhD student worked from 1960 to 1964 analysing an experiment done at CERN in which emulsions were exposed to a beam of protons. This was one of a large number of experiments at CERN at this time that used emulsions. At the end of the book there is a reprint of three of Blau’s papers, two of which were with Wambacher, and a list of all her publications.
Blau was an expert on nuclear emulsions, a detection technique with old roots. In 1937 she and Wambacher observed 31 “stars” in emulsions exposed to cosmic rays. The stars were made by collisions in the emulsions, which produced several particle tracks emanating from the collision point; one of the stars had no fewer than 12 tracks! The observation of these stars drew the attention of the scientific community to emulsions, which were considered by some to be rather out of date. However, to claim that the work by Blau and Wambacher was a prerequisite for the discovery of the pion is a gross exaggeration. Emulsions had to be enormously improved to achieve the required sensitivity. One of the authors in the book also claims that Blau must have been frustrated that Cecil Powell was awarded the Nobel prize for a discovery using her method, so much so that she erroneously attributed the first observation of the negative pion to Don Perkins. However, this speculation is unfounded; Blau was in fact giving a correct account of what had happened. Another point made concerns the nomination of Blau and Wambacher for the Nobel Prize in Physics; by itself a nomination is not a measure of the highest excellence, as every year there are a large number of them.
The literature on emulsion techniques is vast and it is very difficult to do justice to all those who have contributed to this field. Nonetheless, I was deeply touched by the portrait of this exceptional woman, described as shy, gentle and highly dedicated to her métier, but who had the misfortune to live in a hostile environment, a victim of a sick society.