The bubble chamber, which was invented by Donald Glaser in 1952, made its major contributions to particle physics over three decades, from the late 1950s until the 1980s. This period saw chambers of increasing size, particle beams of increasing energy, increasingly automated measuring machines and ever more powerful computers. The initial era was pioneered by groups in the US, in particular the Alvarez group at the Lawrence Berkeley Laboratory. Later, major contributions came from European groups, with CERN playing a central role. In Italy the bubble-chamber technique provided the opportunity to revitalize the field of particle physics, bringing together a large number of physicists from many Italian universities. This was coordinated by INFN, which also created a national centre called the Centro Nazionale Analisi Fotogrammi (CNAF) in Bologna.
It was against this background that the Bologna Academy of Sciences organized a meeting on 18 March entitled “30 years of bubble chamber physics”. Around 100 physicists from 28 different institutions attended the meeting, which was sponsored by the Bologna Academy of Sciences, the University of Bologna and the Department of Physics, and the INFN (CNAF and Sezione di Bologna). The programme included talks on the beginning of bubble chambers, the first instruments and the first results, the impact of bubble chambers on particle physics, and hydrogen, helium and heavy-liquid bubble chambers.
The early bubble chambers were very small, but over the years they increased in size by a factor of around one million, with the largest chambers containing 40 m3 of liquid. More than 100 bubble chambers were built throughout the world, and more than 100 million stereo pictures were taken. The 80 cm Saclay bubble chamber at CERN, the 2 m CERN bubble chamber and the Big European Bubble Chamber (BEBC) took more than half of these pictures.
The sociology of bubble-chamber collaborations is an interesting one. In the initial period, many small chambers took pictures that were analysed by in-house groups. Later, bigger bubble chambers were built and run by experts in large laboratories using refined beams at accelerators of increasing energy. These chambers were considered facilities that could be used by internal and external groups, and this increased the number of international collaborations, with several groups from different countries and around 20-50 physicists per experiment. The role of large laboratories like CERN was always a central one.
One of the earliest bubble-chamber papers, “Demonstration of parity nonconservation in hyperon decay”, which was published in 1957, was signed by physicists from four teams: the Columbia-BNL team that was headed by Jack Steinberger, the Bologna team headed by Giampietro Puppi, the Pisa team headed by the late Marcello Conversi and the Michigan team led by Donald Glaser (F Eisler et al. 1957).
It is worth recalling that in the beginning every team had to scan and measure bubble-chamber photographs with very primitive equipment. Eventually, digitized measuring tables were introduced, and one started to hear of “Mangiaspagos” in Italy, of more elaborate semiautomated or fully automated “Frankensteins” and “PEPRs” in the US, and of “MYLADYs” and “HPDs” in Europe. A large number of scanning staff was needed to cope with the increasing number of photographs, while the new machines made measurements and pre-measurements more precise.
Computer technology grew in parallel with the increase in size and automation of the bubble chambers. At the beginning of the bubble-chamber era, slide rules and electromechanical calculators were used. But soon the IBM 650 computer began to be used, and this was followed by even more powerful machines. Similarly, the measured co-ordinates of points along the tracks were initially punched onto cards manually, but then semiautomatic projectors took over this task. The installation of mainframe computing capacity was driven by the demands of bubble-chamber physics. For example, the CERN mainframe central computers increased their speed and capacity by a factor of more than 1000 during the bubble-chamber era.
The meeting also reviewed several areas of physics where bubble chambers have had an impact, for example, parity violation in hyperon decay, the weak neutral current, baryon resonances, charm particles and multihadronic production. In the round-table discussion on “The legacy of 30 years of bubble chamber physics”, several participants completed the overall view of the field, with an emphasis on topics such as the neutrino field and some of the special bubble chambers.
The main scientific legacy of the bubble chamber towards our understanding of the microworld of particle physics forms an impressive list that includes: strange particles, such as the omega-minus; meson and hadron resonances, leading to the hadron spectrum, SU(3) and constituent quarks; neutral weak currents and electroweak unification; and scaling in neutrino-nucleon deep inelastic scattering, leading to partons and therefore to dynamical quarks (“Bubbles 40” 1994).
The final session at the meeting dealt with particle physics and society, and with the popularization of science. In this respect, selected bubble-chamber pictures can provide a global and intuitive view of particle-physics phenomena. They allow an untutored audience to realize that our field is based on simple and intelligible experimental facts. A large number of photographs of bubble-chamber events was on show to participants as part of a small historical exhibit, which included an early propane chamber that was built in Padova in 1955, early instruments and the central part of a Mangiaspago measuring projector.
To paraphrase Tolstoy’s introduction to Anna Karenina, every developing country is developing in its own way. It is for each developing country to define its own needs and set its own agenda. So in this context, what is, or what should be, the relationship of CERN to developing countries? In what ways do they already benefit from the work at CERN, and how might they benefit from further collaboration?
CERN’s original raison d’être was to provide a vehicle for European integration and development, whilst also enabling smaller countries to participate in cutting-edge research, and to reduce the brain drain of young European scientists to the United States. Nowadays, CERN is internationally recognized for setting the standard of excellence in a very demanding field, and serves as a beacon of European scientific culture. CERN is open to qualified scientists from anywhere in the world, and beyond its 20 European member states currently has co-operation agreements with 30 countries. Prominent among these – beyond North America and Japan – are Brazil, China, India, Iran, Mexico, Morocco, Pakistan, Russia and South Africa, and more than 1000 people from these countries are listed in the database of scientists as using CERN for their experiments.
Experimental groups from developing nations are not asked to make large cash contributions to the construction of detectors, but rather to produce components. These are valued according to European prices, and if the developing countries can produce them more cheaply using local resources, then more power to their elbows. In Russia’s case, European and American funds were important in helping to convert military institutes into civilian work.
In addition to participating in experiments, some of these countries, notably Russia and India, have also contributed to the construction of accelerators at CERN. Russia and India are now making important contributions to the Large Hadron Collider (LHC) that is being constructed at CERN, and Pakistan has also offered to contribute. Again, CERN does not require these countries to pay any money towards the construction or operation of its accelerators. Indeed, CERN pays cash for the accelerator components that Russia and India provide, which these countries use to support their own scientific activities.
What, then, are the main benefits for developing countries in collaborating with CERN? It certainly provides them with a way to participate in research at the cutting edge, just as it always has for physicists from smaller European countries. In general, these users spend limited periods at CERN, preparing experiments, taking data and meeting other scientists. Thanks to the Internet, and to CERN’s World Wide Web in particular, particle physicists were the first to make remote collaboration commonplace, and this habit has spread to many other fields beyond the sciences. It is now relatively easy for scientists working on an experiment at CERN to maintain contact with their colleagues around the world, and they can even contribute to software development, data analysis and hardware construction from their home institute. The Web has enabled Indian experimentalists to access LEP data, and their theoretician colleagues to access the latest scientific papers from around the world, all while sitting at their home desks.
CERN is now also a leading player in European Grid computing initiatives. These will benefit many other scientific fields, for which applications are already being developed. Grid projects involve writing a great deal of software and middleware, which is split up into many individual work packages. CERN is keen to share the burden of preparing the Grid with developing countries. For example, several LHC Grid work packages have been offered to India, and other countries such as Iran and Pakistan have expressed an interest and would be welcome to join. In this way, such countries can become involved in developing the technology themselves, thus avoiding the negative psychological dependency on technological “handouts” (as in the “cargo cults” in New Guinea after the Second World War).
The everyday act of collaborating with colleagues in more developed nations exposes physicists from developing countries to the leading global standards in technology, research and education. Collaborating universities and research institutes are therefore provided with applicable standards of comparison and excellence, as well as training opportunities for their young scientists. These may be particularly valuable when educational values are threatened by a combination of increasing demand, insufficient resources and inefficiencies. One country where this is currently a concern is Pakistan. Its chief executive, Pervez Musharraf, has clearly stated his interest in encouraging scientific and technological development in Pakistan, and has exhorted other Islamic countries to do likewise.
How might such “ISO 9000” educational and academic standards be transferred to the wider society? Their value is limited if only a few élite institutions in each country benefit from the international contacts and they are not available throughout the educational system. This is essentially an issue for the internal organization within the country concerned, but CERN is happy to help out. The laboratory has archives of lectures in various formats available through the Web, offering resources for remote learning.
In India, for example, the benefits of collaborating with CERN increase to the extent that physicists from smaller universities outside the main research centres are brought into particle-physics research. In South Africa there are clear priorities in human development. However, a South African experimental group has joined the ALICE collaboration and CERN has welcomed a number of South Africans to its summer student programme, as well as a participant in its high-school teacher programme.
The information technologies that CERN has available should be of benefit to wider groups in developing societies. For example, could video archiving and data-distribution systems be used to disseminate public health information? This exciting idea was proposed to CERN by Rajan Gupta from the Los Alamos National Laboratory, and Manjit Dosanjh of CERN is now developing a pilot project in collaboration with the Ecole Superieure des Beaux Arts de Genève, supported by the foundation “Project HOPE” (see “Project HOPE” box).
This project will be demonstrated at the conference on The Role of Science in the Information Society (RSIS) that CERN is organizing in December 2003 as a side event of the World Summit on the Information Society (WSIS) (see “The Role of Science in the Information Society” below). Other sessions at this event will explore the potential of scientific information tools for aiding problems related to health, education, the environment, economic development and enabling technologies.
In 1946 Abdus Salam left his native Pakistan to pursue his scientific dreams in the West – dreams that were more than fulfilled with the award of the Nobel Prize for Physics in 1979. However, his dream of bridging the gap between rich and poor through science and technology remained largely unfulfilled, as Riazuddin has described. If the world can develop its information society properly, a future Salam might not have to leave his – or her – country in order to do research in fundamental physics at the highest level. Moreover, a country’s participation in research at CERN might benefit not only academics and students, but also society at large.
The Role of Science in the Information Society
On 10-12 December 2003, the first phase of the World Summit on the Information Society (WSIS) will take place in Geneva. The aim is to bring together key stakeholders to discuss how best to use new information technologies, such as the Internet, for the benefit of all. The International Telecommunication Union, under the patronage of UN secretary-general Kofi Annan, is organizing WSIS, and the second phase will take place in Tunis in November 2005.
The “information society” was made possible by scientific advances, and many of its enabling technologies were developed to further scientific research and collaboration. For example, the World Wide Web was invented at CERN to enable scientists from different countries to work together. It has gone on to help break down barriers around the world and democratize the flow of information.
For these reasons, science has a vital role to play at WSIS. Four of the world’s leading scientific organizations – CERN, the International Council for Science (ICSU), the Third World Academy of Sciences (TWAS) and UNESCO – have teamed up to organize a major conference on The Role of Science in the Information Society (RSIS), as a side event to WSIS. The conference will take advantage of CERN’s location close to Geneva to play a full role at the Summit.
Through an examination of how science provides the basis for today’s information society, and of the continuing role for science, the conference will provide a model for the technological underpinning of the information society of tomorrow. Parallel sessions will examine science’s future contributions to information and communication issues in the areas of education, healthcare, environmental stewardship, economic development and enabling technologies, and the conference’s conclusions will be discussed at the UNESCO round table on science at the Summit itself.
ICSU, TWAS and UNESCO have a long tradition of scientific, political and cultural collaboration across boundaries. CERN produces knowledge that is freely available for the benefit of science and society as a whole – the World Wide Web was made freely available to the global community and revolutionized the world’s communications landscape. Working together, these organizations are providing a meeting place for scientists of all disciplines, policy makers and stakeholders to share and form their vision of the developing information society.
The RSIS conference will take place on 8-9 December. Its conclusions will feed into the UNESCO round table at WSIS, and it will set goals and deliverables that will be reported on at Tunis in 2005. The scientific community’s commitment is long-term.
Participation at the RSIS conference will be by invitation and is limited to around 400. However, anyone who feels they have something to contribute to the debate can do so via a series of on-line forums that are accessible through the conference website. These forums will have the same themes as the parallel sessions at the conference and will be moderated by the session convenors. Their conclusions will provide valuable input to the conference itself, and as an added incentive, CERN is offering up to 10 expenses-paid invitations to the conference for those making the most valuable on-line forum contributions.
Within the framework of the CERN-Asia Fellows and Associates Programme, CERN offers three grants every year to postgraduates from East, Southeast and South Asia* who are under the age of 33, enabling them to participate in its scientific programme in the areas of experimental and theoretical physics and accelerator technologies. The appointment will be for one year, which might, exceptionally, be extended to two years.
Applications will be considered by the CERN Associates and Fellows Committee at its meeting on 18 November 2003. An application must consist of a completed application form, on which “CERN-Asia Programme” should be written; three separate reference letters; and a curriculum vitae including a list of scientific publications and any other information regarding the quality of the candidate. Applications, references and any other information must be provided in English only.
Application forms can be obtained from: Recruitment Service, CERN, Human Resources Division, 1211 Geneva 23, Switzerland. E-mail: Recruitment.Service@cern.ch, or fax: +41 22 767 2750. Applications should reach the Recruitment Office at CERN by 17 October 2003 at the latest.
The CERN-Asia Fellows and Associates Programme also offers a few short-term Associateship positions to scientists under 40 years of age who are on a leave of absence from their institute. These are open either to scientists who are nationals of the East, Southeast and South Asian* countries who wish to spend a fraction of the year at CERN, or to researchers at CERN who are nationals of a CERN Member State and wish to spend a fraction of the year at a Japanese laboratory.
* Candidates are accepted from: Afghanistan, Bangladesh, Bhutan, Brunei, Cambodia, China, India, Indonesia, Japan, Korea, the Laos Republic, Malaysia, the Maldives, Mongolia, Myanmar, Nepal, Pakistan, the Philippines, Singapore, Sri Lanka, Taiwan, Thailand and Vietnam.
by Bryce DeWitt, Oxford University Press. Hardback ISBN 0198510934, £115.
This work in two volumes covers classical field theory, quantum mechanics and all major theoretical aspects of quantum field theory, and shows how they are related. Fields are viewed as global entities in spacetime, rather than as systems evolving from one instant of time to the next. The book should be particularly useful for quantum field theorists (especially students), theoretical physicists and mathematicians with an interest in physics.
Just as in the diffraction of light, beams of elementary particles diffract off each other in scattering experiments at high energies. The resulting diffraction pattern contains crucial information on the nature of the strong force and, in particular, on the pomeron.
For more than 40 years now, particle physicists have been trying to understand the physics of particle scattering at high beam energies. Central to the theory is the notion of complex angular momentum pioneered by Tullio Regge, where single-particle exchange is generalized to the exchange of a collaboration of many particles that collectively look like a single particle carrying complex angular momentum. The pomeron, named after Isaak Pomeranchuk, is the collective exchange that is dominant at high-enough beam energies.
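As a minimal orientation for readers, the lines below sketch the standard Regge-pole formulas that underlie this picture; they are generic textbook phenomenology rather than the specific conventions of this book.

% A single Regge-pole exchange gives a two-body amplitude, at large s and fixed t,
\[ A(s,t) \sim \beta(t)\, s^{\alpha(t)}, \qquad \alpha(t) = \alpha(0) + \alpha' t , \]
% and, via the optical theorem, a total cross-section that behaves as
\[ \sigma_{\mathrm{tot}}(s) \propto s^{\alpha(0)-1} . \]
% The slow rise of hadronic total cross-sections then corresponds to a "soft"
% pomeron intercept slightly above unity, with \alpha_P(0) \approx 1.08 in the
% well-known Donnachie-Landshoff fit.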
This book carefully collects the key theoretical ideas and confronts them with the available data in a systematic way. Given that there is, as yet, no consensus on the exact nature of the pomeron and that the literature is often quite confused, such a well-written and accessible book as this is most welcome. The authors present an approach based firmly on the theory of Regge and make very good use of both perturbative and non-perturbative QCD to help develop and support their ideas. The authors have considerable expertise and experience, which they particularly bring to bear when presenting their ideas on the use of non-perturbative techniques to study the pomeron. They are also the principal advocates of the idea that there may be two pomerons, with one pomeron dominant in soft interactions and the other dominant in hard interactions. Within this framework they succeed in presenting a rather coherent picture of the physics, notwithstanding that there are a few areas where the theory remains to be developed.
The book is quite pedagogical and is written at a level suitable for those who already have a good grasp of the basic elements of quantum field theory and elementary particle physics. There is a self-contained introduction to S-matrix theory and Regge poles, which provides the necessary foundation for the remainder of the book. Although there is a brief introduction to QCD, a prior exposure to QCD as a quantum gauge field theory would be helpful, particularly if one is to appreciate fully the sections that present the authors’ ideas in non-perturbative QCD.
Over the past 10 years, data from the HERA and Tevatron colliders have allowed us to make substantial advances in our understanding of high-energy processes. Continued progress can be expected in the light of data from future colliders, and this book will, I suspect, continue to provide an excellent introduction to the subject.
by Alfredo Bellen and Marino Zennaro, Oxford University Press. Hardback ISBN 0198506546, £59.95.
The latest in a series on numerical mathematics and scientific computation, this book by Bellen and Zennaro provides an introduction to the Cauchy problem for delay differential equations. It is aimed at mathematicians, physicists, engineers and other scientists interested in this area of numerical methods.
by David Caldwell (ed.), Springer-Verlag. ISBN 3540410023, €79.95, £56.00.
When, almost 70 years ago, Wolfgang Pauli wrote “I have done a terrible thing, I have postulated a particle that cannot be detected,” he could not have anticipated that even now that particle, albeit detected, would continue to be the most elusive, and also the most astonishing, paradoxical and intriguing of elementary objects. We now know that it appears as (at least) three different species, possibly some of them massive, all uncharged, spinning and blind to strong interactions, and all playing the most crucial role in modern theories of the history and structure of the universe – from the smallest to the largest scales.
Indeed, few things in recent years have had as much of an impact on our view of particle physics as the recent impressive developments in neutrino physics. Experiments in this field are challenging due to the very small neutrino interaction cross-section. As Haim Harari put it: “Neutrino physics is largely an art of learning a great deal by observing nothing.” Today, new technologies and ideas have allowed us to conceive projects that may soon bring us to a much better understanding or even to a solution of the neutrino puzzle. Neutrino physics, at the interface of elementary particle physics and astrophysics, is currently one of the hottest subjects in physics.
The fast development of neutrino physics is – paradoxically – a reason for the small number of textbooks on the subject. Proceedings of conferences and schools are too advanced and detailed, and thus do not make up for this deficiency, while standard textbooks on particle physics cannot afford to treat neutrinos at length. We are therefore left with a “literature hole”, a hole that is well known to graduate and postgraduate university teachers. This book, edited by David Caldwell, seems to meet these needs. It comprises a set of purpose-written, up-to-date, advanced reviews, which also offer a comprehensive view of the field – a rare but fundamental feature of a textbook – and it is aimed at both specialists and beginners.
The book begins with a concise history of neutrino physics, followed by a theoretical discussion of the nature of massive neutrinos. The ensuing chapters review our experimental knowledge, interleaved with a guiding theoretical framework: measurements of the neutrino masses, their flux from the Sun and the atmosphere, studies of neutrinos at reactors and accelerators, and finally double beta decay searches. The next two chapters contain phenomenological and theoretical interpretations of this empirical knowledge, and the last three chapters refer to cosmological scales and review the neutrino’s role in supernovae, in the early universe and finally in astronomy. Each chapter starts with a very good introduction and closes with a superb summary. Reference lists, compiled separately for each chapter, are extensive and up to date as of the time the book was edited. They often contain popular and review articles.
The authors, one of whom is the editor, are recognized authorities on the topic of each chapter. The only surprise is their geographic bias: all 16 are from the United States and six of them come from California. A newcomer to the field may suspect that neutrino physics blossoms mainly along the west coast of America. The important role played by the neutrino experimental communities in Japan and Europe should have been better reflected in the choice of book contributors.
A more detailed presentation of forthcoming experiments and facilities, such as MINOS, ICARUS, OPERA, and neutrino factories, could have been included in the book. It is also unfortunate that the traditional role of neutrinos as probes of the structure of matter and interactions was completely neglected. The editor is aware of this shortcoming and states: “While they [neutrinos] have been important tools for studying particle properties, such as structure functions and the nature of weak interactions, at present this is not the thrust of most research and hence is not covered in this book.” While this is indisputably true, a short account of those efforts could have been given for completeness; after all they are still being undertaken, for example in the CCFR and NuTeV measurements of ν(ν̄)-nucleon cross-sections. An appendix with Web addresses for databases or for websites where the reader can find updated or more detailed information would also be useful.
The book appears to have been carefully proofread, but the index is surprisingly poor. The names of future experiments and facilities discussed in the text are missing and some of the page numbers are wrong.
Most of the book was completed before the results of the SNO experiment were published in mid-2001, but because of their anticipated importance publication was delayed so that the results could be summarized in an addendum. Since then, KamLAND has confirmed the disappearance of the electron antineutrinos. There is always a risk that reviews have a short lifetime, especially in a field that is developing as rapidly as neutrino physics. However, this book should be useful for a long time yet, both as a reference and a textbook, due to its comprehensive content, clear logic in ordering the material, and extremely good overview of most aspects of neutrino physics.
The new Kavli Institute for Particle Astrophysics and Cosmology has been inaugurated at SLAC. It is named after physicist and philanthropist Fred Kavli, whose Kavli Foundation pledged $7.5 million to establish the new institute. The institute, which will focus on recent developments in astrophysics, high-energy physics and cosmology, will eventually be located in a new building at SLAC between the research office building and the auditorium, and will open its doors in 2005. At the site of the future institute, Kavli unveiled a 2 m tall, steel and glass sculpture that incorporates a piece of SLAC history in the form of the window from the 1 m (40 inch) bubble chamber.
Roger Blandford, who will become the institute’s director in October, was one of the speakers at the ceremony. He said that initially he intends to follow a roadmap that balances theory, computational astrophysics and phenomenology on one side, and experimental astrophysics and high-energy observing on the other. It will draw upon existing strengths in theoretical physics and astrophysics, gravitational physics and underground physics at Stanford. As Blandford noted, “Part of the excitement of the field is that it is impossible to predict where it will be in five years’ time and what its scientific focus will be”.
In 1972, only 20 months after its construction had finally been agreed, the SPEAR electron-positron collider went into service on a parking lot at SLAC, and by spring 1973 had started to deliver its first physics data. From its humble beginnings, the machine went on to revolutionize particle physics, with two of the physicists who used it receiving Nobel prizes. It also pioneered the use of synchrotron radiation in a variety of fields in scientific research. In March this year, technicians began upgrading SPEAR, and now only the housing and control room remain of the original machine. Burt Richter, whose dogged determination led to the machine’s existence, likens SPEAR to a character in Alice in Wonderland. “It’s like the Cheshire cat,” he says, “there’s nothing left but its smile.”
SPEAR was elusive from the start. “The initial question was, how do you build such a machine?” says Martin Perl, who, like Richter, was to receive the Nobel prize for his work on the machine. “The idea of building an electron-positron collider was not in the mainstream back then.” Richter and others at Stanford first proposed building the Stanford Positron-Electron Asymmetric Rings (SPEAR) in 1964, at a time when hitting a fixed target with a beam was the standard way of doing high-energy physics. From 1964 to 1970, annual requests for funding to the US Atomic Energy Commission (AEC) were repeatedly rejected, even though Richter slashed the application from $20 million to $5 million. During one of the revisions to the proposal, the two planned rings became one and SPEAR was no longer asymmetric; but the name stayed. Finally, in 1970, SLAC’s director, W K H “Pief” Panofsky, spoke to the AEC’s comptroller, John Abbadessa, who said that if SPEAR was an experiment with no permanent buildings, it could be built out of SLAC’s normal operating budget.
Richter’s team had hoped to build the collider in two years; they finished four months ahead of schedule. “It certainly was the most fun I’d ever had building a machine,” says John Rees, one of the accelerator physicists involved. Moreover, the funding delay had actually worked to SLAC’s advantage in some ways, since they now had other colliding-beam storage rings to look to. “By that time, we’d learned enough from other people to be able to build the best machine,” explains Perl.
SPEAR had another advantage: a new kind of detector, called the SLAC-LBL Magnetic Detector or Mark I, which uniformly surrounded the interaction point. The design “flew in the face of conventional wisdom about how to build detectors for colliders,” says Marty Breidenbach of SLAC, who was a post-doc at the time.
SPEAR had a second interaction point devoted to more specialized experiments than the Mark I. “We wanted to give more independent physicists an opportunity to use this new and unique facility, and they all worked,” recalls Panofsky. “But they were basically less productive than the approach of having one detector looking at everything that came from the collisions and then later, whilst offline, unpickling everything to sort out what was important.” Since then, says Panofsky, “colliding machines all over the world have followed the pattern set by the general-purpose, solenoidal-type magnetic detectors, which were the Mark I and Mark II.”
From the beginning, some Stanford faculty members, including Sebastian Doniach, William Spicer and Arthur Bienenstock, realized SPEAR’s potential to produce useful synchrotron radiation, so they asked Panofsky and Richter to devise a way to allow X-rays out of SPEAR. The X-ray synchrotron radiation emitted by the circulating beams in the machine was much higher in intensity than anything available for structural analysis in many areas of research, from semiconductor materials to protein molecules. So Richter’s team attached an extra vacuum chamber to SPEAR and made provision for a hole in the shielding wall for the beamline. This was the start of the Stanford Synchrotron Radiation Project (SSRP).
The revolution begins
In the spring of 1973, SPEAR began to gather high-energy physics data. By the next year, the machine was measuring very erratic values of R, the ratio of hadron production to muon-pair production. These were the first signs of a new particle, which Richter’s team called the “psi” (Ψ). “Nobody dreamed that there was any state, particle, that was as narrow in width as the Ψ turned out to be,” says Richter. “So the first question was what the hell was wrong with the apparatus, is there something wrong with the computers, is there something wrong with the data taking?”
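For context, R has the standard definition and quark-parton value sketched below; these are textbook formulas added for orientation, not material from the original account.

% R compares hadron production to muon-pair production in e+e- annihilation:
\[ R = \frac{\sigma(e^+e^- \to \mathrm{hadrons})}{\sigma(e^+e^- \to \mu^+\mu^-)} \simeq 3 \sum_q e_q^2 , \]
% where the sum runs over the quark flavours accessible at the beam energy and
% the factor 3 counts colour: R = 2 with u, d and s alone, rising to 10/3 above
% the charm threshold. A very narrow resonance such as the psi shows up as a
% sharp spike, which looks erratic when the energy is scanned in coarse steps.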
No-one could find any such errors, and some researchers on the Mark I collaboration pushed to rescan the region. But by this time SPEAR had been upgraded and Bob Hofstadter, who was running an experiment at SPEAR’s other detector, wanted to move on to higher energies. Finally Richter decided to go ahead with rechecking the anomalous results, but only for one weekend in November 1974. At about 3.1 GeV the group began to see impossibly high particle production. “It didn’t take very long before the control room started to fill up with people, because the yield of these particles kept going up and up and up as we made tiny little changes in the energy of the machine,” recalls Richter. Word travelled fast. “We started getting calls from all over the country,” says Breidenbach. “There was no need to check anything – the signal was beyond any statistics. It was there. No-one had ever seen anything like it.”
One of the first physicists outside SLAC to learn of the discovery was Sam Ting of Brookhaven National Laboratory, who happened to be visiting SLAC the day after the psi’s discovery. Ting’s lab, it turned out, had detected the same particle using a different method, but hadn’t yet confirmed it to Ting’s satisfaction. He called it the J. Whatever the name, Ting’s results meant that the new particle had been observed in two experiments and the table of particles had to be revised. Around this time, Panofsky went to the AEC to see Abbadessa. “I said I wanted to announce the discovery of an unauthorized particle on an unauthorized machine,” Panofsky recalls. “He liked that.”
SPEAR meanwhile continued to yield breakthroughs. “We had a fantastic time for a year or so – we were writing close to a paper per week,” Breidenbach remembers. Subsequent experiments revealed that the J/Ψ was the bound state of a new quark – charm – with its antiquark. This was the first discovery of a new quark since Murray Gell-Mann and George Zweig had first put forward the ideas of the quark model in 1964, and it brought the number of known quarks to four. It also confirmed the theoretical ideas of Sheldon Glashow, John Iliopoulos and Luciano Maiani, which grouped four quarks in two “generations”. This breakthrough came to be known as the November Revolution. “And then the next year Martin Perl changed the rules of the game again,” Richter says.
The tau was discovered soon after the J/Ψ and with the same detector, but there the comparison ended. Perl wanted to test his idea that electrons and muons were just the beginning of a series of particles. Rather than designing an experiment to find the next-heaviest particle in the series, he teased out the tau from data recorded on Mark I during more general runs. The tau particle turned out to be part of a third generation of matter, which involves six quarks rather than the four known at the time. Richter and Ting won the Nobel prize in 1976, reflecting the physics community’s swift acceptance of the J/Ψ. The third generation turned out to be harder to verify than the second, but Perl was finally rewarded with his Nobel prize in 1995.
Synchrotron radiation for all
Even though it began as a parasitic operation, synchrotron radiation represented an unparalleled opportunity. Use of the SSRP quickly expanded from materials science to chemistry to structural biology. “No-one had ever had effective access to a broad spectrum ranging from the deep ultraviolet into the hard X-rays from a multi-GeV storage ring for these kinds of experiments,” says Keith Hodgson, now associate director of the SSRP’s successor, the Stanford Synchrotron Radiation Laboratory (a division of SLAC). “This really was one of the first of the modern synchrotron radiation research user facilities.” The National Science Foundation approved the SSRP grant proposal early in 1973, and soon a pilot beam was up and running. The SSRP team began accepting proposals for experiments, and recorded its first useable data in summer 1974.
The November Revolution later that year was a disaster for SSRP’s users, because after that the high-energy physicists began doing experiments in the 3.1 GeV region (1.55 GeV per beam), far below the 2.4 GeV per beam that SPEAR was capable of. “We had what we called the X-ray drought,” says Herman Winick, SSRP’s first full-time employee and deputy director. In 1978, the group solved this problem by installing “wiggler” magnets in the storage ring, the first time such magnets were used in synchrotron radiation experiments. Wiggler magnets cause particles to weave sharply back and forth as they travel through a storage ring, emitting focused synchrotron radiation with every turn. Not only did the wigglers enhance synchrotron emission and extend it to higher energies, but they also boosted luminosity for the high-energy physicists.
By the decade’s end, synchrotron radiation research was gaining the upper hand at SPEAR, with 50% of the machine experiment time devoted to X-ray research. In 1980 the Stanford Synchrotron Radiation Laboratory (SSRL), as the SSRP was by then known, received a National Institutes of Health grant to make its X-rays more accessible to structural biologists. Dramatic growth in demand and productivity was also seen in materials sciences and other areas, especially after SPEAR operation was transferred to the US Department of Energy (DOE) in 1982. In the following years, under the stewardship of the DOE Office of Basic Energy Sciences, the SSRL has grown to serve about 1800 users, who mount over 1000 individual experiments each year from a range of disciplines. In 1997 particle-physics experiments on SPEAR ended and the ring became devoted solely to synchrotron radiation research.
SPEAR revolutionized X-ray analysis just as it revolutionized high-energy physics. For example, to determine atomic structural information using crystallographic techniques, researchers must crystallize the material, record its diffraction pattern and invert that pattern to obtain the real space structure – all tricky endeavours. With the availability of SPEAR and synchrotron radiation, researchers began, for the first time, to use specific wavelengths of synchrotron radiation to directly solve the “phase” part of the experiment (the so-called “phase problem” in crystallography). This new technique, called multiple-wavelength anomalous dispersion phasing (MAD), has proved extremely valuable in solving large numbers of protein molecule structures, and today forms the basis of much of the work done in this field worldwide. X-rays from SPEAR found many other important applications, including solving the mysteries of unusual materials such as the high-temperature superconductors; identifying trace environmental contaminants, such as those found at the Rocky Flats Superfund site; and pinpointing the culprit in the erosion of the Vasa warship, a Swedish national treasure.
On 31 March 2003, the SSRL temporarily shut down as staff began stripping the historic ring of all its innards and replacing them with a third-generation machine that will take synchrotron radiation research to new heights. The upgrade will replace all storage ring magnets, the 235 m long vacuum system, 54 magnet support rafts, the RF system, power supplies, cable plant and floor foundation, and will result in significantly higher photon brightness and more stable photon beams. In the wonderland of science, SPEAR’s smile will linger for years to come – just like that of the Cheshire cat in Alice in Wonderland.
In 1938 the “mesotron” (now known as the muon, µ) was discovered in cosmic rays. After a few years of uncertainty, a justly famous experiment showed that the mesotron was not Hideki Yukawa’s meson (the pion, π), and soon after, in 1947, the π meson was itself discovered in cosmic-ray showers. That same year, the discovery of strange particles, again in cosmic rays, caused great excitement among physicists. It was in this particularly stimulating context that Charles Peyrou began his brilliant career as a physicist.
With strong support from Louis Leprince-Ringuet, Charles took part in the building of the first Wilson chamber at the Ecole Polytechnique and in its installation at an altitude of 1000 m at Largentière, near Briançon. In 1947 Charles used this chamber, which was equipped with a magnetic field, to measure the mass of the µ (mµ = (212 ± 5) me). However, he did not observe any mass close to 1000 me, as had been detected at Largentière in 1943 and which was no doubt the first observation of the K+ in the history of physics.
Charles also studied the kinematic properties of the showers (due to the multiple production of π mesons) observed in cosmic rays, but he understood as early as 1949 that the study of pions had become a matter for accelerators. By contrast, cosmic rays were still an excellent source of muons, and in 1951, together with André Lagarrigue at the Ecole Polytechnique, Charles used them to measure the electron spectrum from µ decay and to provide the first estimate of the Michel parameter, ρ, different from zero. The following year, using the same apparatus, he obtained a first indication that electrons and muons carry different lepton numbers, because no electrons were observed in the capture of the µ by nuclei (upper limit of 5%).
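As a reminder of what ρ parametrizes, the decay-electron spectrum has the standard Michel form given below (electron mass neglected); this is textbook material added for context, not part of the original measurement.

% Michel spectrum for the electron in muon decay, with x = 2E_e/m_mu:
\[ \frac{d\Gamma}{dx} \propto x^2 \left[ 3(1-x) + \frac{2\rho}{3}(4x - 3) \right] , \]
% so the shape of the spectrum directly determines rho. The V-A theory
% predicts rho = 3/4, and establishing rho different from zero was already a
% non-trivial constraint on the form of the weak interaction.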
Reflecting on the possible causes of the absence of K+ in the data taken at Largentière in 1947, 1948 and 1949, Charles realized that this failure could be due either to the K+ lifetime being much shorter than that of the muon, or to the fact that the energy of the primary cosmic-ray particles selected for the experiment was too low.
With the agreement of Bernard Gregory, Charles persuaded Leprince-Ringuet to set up an experiment at 2800 m on the Pic du Midi in the Pyrenees with two large superimposed Wilson chambers – a large magnetic chamber placed on top of a chamber fitted with copper plates. Nuclear reactions of cosmic rays occurred in a lead absorber placed immediately above the magnetic chamber, allowing short-lived secondary particles to be detected. It was hoped that the high altitude of the Pic du Midi would mean that a non-negligible fraction of the nuclear interactions would be of high energy. In 1953, after the experiment had been running for a few months, the first two examples of K mesons passing through the first chamber and stopping in the second were observed.
Following a series of results that gave credence to the existence of a whole spectrum of heavy mesons, the Pic du Midi experiment showed that the K± had a unique mass. In addition, the close similarity in range of the muons from K+ decay made it possible to affirm that the majority of K particles emitting a µ underwent a two-body decay, contrary to the view generally held at the time.
The Pic du Midi experiment produced many other interesting results until 1955, but in 1956 cosmic rays were overtaken by accelerators, at least for the study of elementary particles, and the Wilson chambers were replaced by bubble chambers.
The first hydrogen bubble chambers at CERN
On his arrival at CERN in 1957, four years before the commissioning of the PS, Charles embarked on the difficult but very promising task of building liquid-hydrogen bubble chambers. A first prototype hydrogen bubble chamber, the 10 cm chamber, was built at CERN in 1957 under Charles’s direction. It was first used in 1958 in an experiment in a 270 MeV π– beam from the SC, making it possible to analyse the elastic scattering π– + p → π– + p. The results were of no particular interest to Charles, who was, however, very proud of the quality of the tracks obtained in the first prototype.
The experience acquired with the 10 cm prototype made it possible in 1958 to start constructing a 30 cm chamber, this time with a 1.5 T magnetic field. This chamber was used for a few experiments at the SC in 1959 (π+ + p → π+ + π+ + n, π+ + p → π+ + π0 + p). With these data it proved possible to demonstrate the exceptional qualities of the chamber, such as efficiency, spatial precision with a “maximum detectable momentum” of 90 GeV/c, and ionization measurements. These characteristics made it a very useful detector, despite its small dimensions, when the PS first started up in 1960. The 30 cm chamber allowed successful exploration of the multi-GeV physics supplied by the first PS beams (16 GeV/c π– and 24 GeV/c protons), and Charles was to make an active contribution to the analysis and interpretation of these data.
Ever since his first research into multi-pion production in the hadronic interactions of cosmic rays, Charles had been particularly interested in these interactions. Now “his” 30 cm bubble chamber and the PS provided him with the opportunity to give free rein to his imagination, allowing him to develop methods of analysing these complex interactions, such as the “Peyrou plot” and the “principal axis”. Naturally, in these experiments he was also interested in the production of strange particles, including the first indications of the leading particle effect, angular correlations, etc.
An engineer and a physicist
The next step in the construction of hydrogen bubble chambers was an ambitious extrapolation as it entailed building a 2 m chamber rather than a 30 cm one, with all the new associated problems relating to cryogenics, the very important issue of safety, optics, etc. Charles approached all these technical problems with the same enthusiasm as he did for physics, and he succeeded in surrounding himself with an excellent team of engineers and technicians. An engineer himself, Charles was never the type to be condescending about technology. His all-embracing curiosity led him to take part in discussions on the austenitic transformations of steel at low temperature as readily as on the spin of the Λ. That was why his technical team respected him and was as devoted to him as his group of physicists.
In 1961 the PS started to deliver good-quality separated beams. As several more years of work were needed to complete the 2 m chamber, Charles, John Adams and Bernard Gregory had proposed in 1960 that Saclay’s 81 cm hydrogen bubble chamber should be installed at the PS. In 1961, a series of experiments using this chamber began that would provide essential data on the spectrum and properties of hadronic resonances: low-energy antiproton annihilation; 3, 3.6 and 5.7 GeV/c antiproton interactions; 4 and 6 GeV/c π+ scattering; 3 and 3.5 GeV/c K+ scattering; experiments on the formation of baryonic resonances of strangeness -1 with K– from 400 to 1200 MeV/c, and so on.
Special mention must be given to the original experiment carried out in 1962 with K– at rest, to study the relative Σ – Λ parity. This parity, long debated between the proponents – of whom Charles was one – and opponents of the Eightfold Way, had been exercising physicists’ minds for several years. Benefiting from the good performance of the Saclay 81 cm chamber, the new experiment accumulated 150 events of the type K– + p → Σ0 + π0, where the Σ0 undergoes the three-body decay Σ0 → Λ + e+ + e–. This unambiguously showed that the relative Σ – Λ parity is positive. In addition, this experiment made it possible to study the leptonic decays of the Σ+ and the Σ–. The absence of Σ+ → µ+ + ν + n and Σ+ → e+ + ν + n decays confirmed the validity of the then highly controversial ΔQ/ΔS = +1 rule.
The motivation for performing a wide range of experiments was naturally founded on scientific interest, but there was also an intent to meet the demands of many European universities. Charles attached considerable importance to this latter issue, for his interest in physics was equalled only by his interest in CERN and its users. Beneath an exterior that was sometimes regarded as imperial lay a fundamentally liberal temperament, convinced as he was that research depended on the unfettered initiative of the physicists. He applied this liberalism both within his group and outside the laboratory. This method of directing, with the dispersed effort that it entailed, might sometimes have had a negative impact on the efficiency with which the PS and the bubble chambers were run, but it did great service to European physicists, who at the time were not as accustomed to collaborative efforts as they are now.
Charles Peyrou did CERN a considerable service in establishing international collaborations. It was at his initiative that the Track Chamber Committee (TCC) was set up. Its purpose was to receive all the experimental proposals from physicists throughout the world, and then to filter these proposals because then, like now, demand outweighed the available resources. Charles often played a crucial role in this selection process, with his sound judgement of the merits of a given physics issue and certainly with his knowledge of the PS’s potential, the available beams and the detector performances. In this respect, it can be said that the fine results obtained using the 81 cm chamber often bore Charles’s stamp.
The year 1965 was marked by the commissioning of the 2 m chamber and the creation of the 10 GeV/c K– beam with RF separators. This beam was essential for producing the Ω–, which had been suggested by Murray Gell-Mann at the CERN conference in 1962, and which was to be the jewel in the crown of his SU(3) theory. The production of the Ω– required the construction of a K– beam of at least 3.2 GeV/c. Some months prior to the commissioning of the 2 m chamber, the first two Ω– were discovered at Brookhaven using a 200 cm (80 inch) chamber and a 5 GeV/c separated K– beam. In the period before the 2 m chamber started up, the 10 GeV/c K– beam was used at CERN in conjunction with the 1.5 m British bubble chamber in early 1965, and three Ω– were observed, corresponding to the decay Ω– → Λ + K–. (Analysis of the photos obtained in the 2 m chamber exposed to the 10 GeV/c K– beam was to provide 15 Ω–).
In 1970, Charles maintained the view, contrary to general opinion, that the 2 m chamber could be effectively used in certain instances to study weak interactions. (It was accepted at the time that weak interactions were the preserve of heavy-liquid chambers). He therefore encouraged data-taking, with the 2 m chamber, on the reaction K+ + p → K0 + p + π+ with 1.2 GeV/c incident K+. The aim was to study K0 decays. The quality of the kinematical measurements in the 2 m chamber made it possible to define accurately the K0 trajectory independently of its possible decay over a distance corresponding to several K0S lifetimes. This experiment was to produce original results on the ratio between the amplitude for K0S → π+π–π0 and the CP-conserving amplitude for K0L → π+π–π0, as well as on the ΔQ = ΔS rule for Ke3 and Kµ3.
Charles also played an important role in the saga of neutral currents. The search for neutral currents called for the rapid installation of Gargamelle, the heavy-liquid bubble chamber whose components were built at Saclay, in a neutrino beam at CERN. The bubble chamber was installed in record time despite some unforeseen accidents such as the fire that damaged the beam in 1969. Gargamelle’s first exposures to the neutrino beam were made in March 1971.
The discovery of neutral currents
It is interesting to note that, in his report of activities for 1972, Charles gave priority for the first time to the results obtained on weak interactions with the heavy-liquid chambers, which demonstrated the proportionality with energy of the cross-sections. He also noted that the theory of weak interactions predicted the existence of neutral currents that would be able to generate events of the type νµ + e– → νµ + e– (leptonic neutral currents).
No event of this kind was observed, but the sensitivity of this first phase of the experiment was still too weak for precise conclusions to be drawn. This limitation was eliminated the following year with a detailed analysis of the events corresponding to “hadronic” neutral-current candidates of the type νµ + nucleon → νµ + hadrons, whose cross-section is much larger than that of leptonic neutral currents but for which the background (due to uncontrolled incident neutrons) is much greater. The Gargamelle collaboration nevertheless concluded in July 1973 that neutral currents existed. This result was confirmed in spectacular fashion by the observation of two leptonic events several months later. The experiment made it possible to determine, for the first time, the mixing parameter in the Weinberg-Salam theory, sin²θW = 0.39 ± 0.05.
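For orientation, the standard quark-parton relations linking such neutral- to charged-current ratios to the mixing angle on an idealized isoscalar target are given below (the Llewellyn Smith formulas); they are quoted for context and are not the collaboration’s exact analysis.

% Neutral- to charged-current ratios for neutrinos and antineutrinos on an
% isoscalar target, neglecting sea quarks and experimental cuts:
\[ R_{\nu} = \frac{\sigma^{\mathrm{NC}}_{\nu}}{\sigma^{\mathrm{CC}}_{\nu}} = \frac{1}{2} - \sin^2\theta_W + \frac{20}{27}\sin^4\theta_W , \qquad R_{\bar{\nu}} = \frac{1}{2} - \sin^2\theta_W + \frac{20}{9}\sin^4\theta_W . \]
% Measuring both ratios fixes sin^2(theta_W); a value near 0.4, as quoted
% above, is what the early Gargamelle data implied.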
The discovery of the existence of neutral currents was initially received with a great deal of scepticism by several eminent scientists and, after an in-depth and unbiased study of the arguments put forward by Paul Musset and colleagues in the Gargamelle collaboration, Charles became one of the most eloquent defenders of this important discovery.
In the 1970s, convinced that future research at the SPS required the use of giant bubble chambers, Charles launched the construction of BEBC, a 4 m diameter hydrogen bubble chamber equipped with a superconducting magnet. This bubble chamber arrived at just the right time to supplement Gargamelle’s neutrino physics results. Exposed to 70 and 110 GeV/c K±, π±, p and p̄ beams, it also provided the opportunity for studying hadronic interactions at SPS energies.
However, BEBC could not reach the spatial resolution at the vertex required to observe short-lived particles such as the D0, D+ and Ds. In addition, identifying these particles required good identification of their decay products. Charles therefore encouraged members of his group to propose the construction of a detector assembly (the European Hybrid Spectrometer, EHS) which, combining a small rapid-cycling hydrogen chamber with electronic detectors, made it possible to detect and accurately measure the properties of these new particles. The EHS in fact supplied the first lifetime measurements, branching ratios and cross-sections for the production of charmed particles.
After the EHS, bubble chambers were replaced by other experiments that were equipped with high-precision vertex detectors and allowed the accumulation of information with the statistics required for particle physics in the 1980s. Charles always made himself available to give advice, make suggestions and give a critical response to new projects, bringing his curiosity and passion to the fore in equal measures.
All those who knew Charles Peyrou well will recognize his brilliant intelligence, his incisive and enquiring mind, his unusual, colourful and extrovert personality, his generosity and humanity, and his capacity to be passionately interested in science, history, music, the theatre, and in everything that made life fascinating.