“Throughout history, science and art have had a special relationship,” explained Michael Benson, director of communications at the London Institute. “Artists today are beginning to realize that science provides fertile territory for the imagination.”
Despite the differences between the two disciplines, science and art have played similarly crucial roles in human civilization, and throughout history great minds have embraced both – the most famous example being Leonardo da Vinci in Renaissance Europe.
However, although modern physics impacts on all aspects of daily life – from information technology and telecommunications to energy and medical imaging – today’s art world has responded little to the cultural upheavals of advancing science. No modern Leonardo has yet emerged.
The artists involved in the Signatures of the Invisible project – Roger Ackling (UK), Jérôme Basserode (France), Sylvie Blocher (France), Richard Deacon (UK), Bartholomeu dos Santos (Portugal), Patrick Hughes (UK), Ken McMullen (UK), Tim O’Riley (UK), Paola Pivi (Italy) and Monica Sand (Sweden) – have worked with scientists and technicians at CERN to create original works of art that reflect the ideas and techniques of modern physics.
Preliminary visits to CERN, which allowed the artists to meet physicists, visit experiments and discover the potential of CERN’s workshops, led to two years of exchanges and close collaboration, which resulted in Signatures of the Invisible. The exhibition will re-open at Geneva’s Centre d’Art Contemporain in January 2002 before travelling to venues in Stockholm, Lisbon, Paris, Strasbourg, Brussels, Tokyo, Australia (venue to be announced) and New York.
Strange particles were first seen in 1947 (1), in a cloud chamber in Blackett’s laboratory, triggered by hadron showers produced by cosmic rays. Soon afterwards other strange particles, then called V particles, were also seen in nuclear emulsions. Progress in our understanding of these new particles was slow, partly because the experimental possibilities were limited to cosmic-ray observations, and partly because the phenomena lay so far outside what was then known.
I remember seeing, in 1949, on a bulletin board at the Institute for Advanced Study in Princeton, a photomicrograph of a nuclear emulsion event showing what is now known as a K-meson decaying to three pions. We all saw it. There could be no doubt that something interesting was going on, very different from anything then known, but it was hardly discussed because no-one knew what to do with it.
The copious production of these particles, indicative of the strong interaction, was at odds with their long lifetimes, indicative of the weak interaction. Pais noted in 1952 (2) that this could be understood by inventing a feature of the strong interaction – a selection rule – that would permit their production but forbid their decay via the strong interaction. He implemented this in a mechanism that required the new heavy particles to be produced in pairs. This was extended some months later by Gell-Mann (3), who ingeniously combined the selection rule with the notion of isotopic spin, requiring that Pais’s pair be composed of a “strange” and an “antistrange” particle.
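In today’s language the rule is the conservation of a new additive quantum number, strangeness, by the strong interaction. As a minimal illustration – using the now-standard assignments S(K⁰) = +1 and S(Λ) = −1, which are not spelled out above –

$$\pi^- + p \;\to\; K^0 + \Lambda \qquad (\Delta S = 0:\ 0 + 0 = (+1) + (-1)),$$

$$\Lambda \;\to\; p + \pi^- \qquad (\Delta S = 1,\ \text{possible only via the weak interaction}).$$

Associated production conserves strangeness, while the decay of either partner alone cannot, so the decay proceeds only weakly, with a correspondingly long lifetime.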
The arrival of accelerators of sufficient energy facilitated the study of these new particles enormously. The Brookhaven Cosmotron accelerated protons to 3 GeV, six times the energy of the highest-energy cyclotron and sufficient to produce the new particles in collisions on nuclei; meanwhile, Ralph Shutt and colleagues had developed a new type of cloud chamber. The V particles produced in cosmic-ray showers had been observed in cloud chambers, but these were very inefficient for accelerator experiments because, once made sensitive by expanding the gas, they required about 1 min of relaxation before they could be expanded again, whereas the accelerator cycle was typically 1 s. The new “diffusion” cloud chamber, in contrast, was continuously sensitive and made it possible to demonstrate the production of strange particles in pairs (4) and verify the hypothesis of Pais and Gell-Mann (figure 1).
Two years later, in 1955, Gell-Mann and Pais (5) noticed that the neutral kaon should exist in two versions, one strange and the other antistrange, one the antiparticle of the other. In addition to the known neutral kaon, there should be another with the same mass but a much longer lifetime, different decays and opposite symmetry under space inversion.
This idea, which seems obvious now, was not obvious at the time. It was not easy for me to understand or to accept this proposal when I read it, but a few days later T D Lee succeeded in explaining it to me. Once understood, the idea could not be rejected.
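In outline – a sketch in standard modern notation rather than that of the original paper – the states of definite lifetime are the superpositions

$$K_1 = \frac{1}{\sqrt{2}}\left(K^0 + \bar{K}^0\right), \qquad K_2 = \frac{1}{\sqrt{2}}\left(K^0 - \bar{K}^0\right).$$

The symmetric combination K₁ can decay to two pions and is short-lived; the antisymmetric combination K₂ is forbidden from that mode, must find rarer channels, and therefore lives much longer.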
The experimental confirmation a year later by Lederman and Landé (6) marked a big step forward. It was also carried out at the Cosmotron, and used what was, to my knowledge, the largest cloud chamber ever, 1 m in diameter. The large size made it more likely that the long-lived kaon, with a decay path of the order of 10 m, would decay inside. The chamber had been built at the Nevis laboratory some years before, but had never found any use. This, to my knowledge, was also the end of the long and glorious career of the cloud chamber in particle physics.
In 1953 Donald Glaser invented the bubble chamber (7), which went on to dominate particle physics, especially strange particle research, for the next 20 years. He showed that energetic particle trajectories can be made visible by photographing the bubbles that form within a few milliseconds after particles have traversed a suitably superheated liquid (figure 2).
The advantage of the bubble chamber over the cloud chamber at accelerators was two-fold: the higher density of the liquid proportionally increased the number of interactions produced in it, and it was faster to reactivate, matching the frequency of the accelerator cycle.
Within a year, John Woods, in the group of Alvarez at Berkeley, succeeded in producing tracks in liquid hydrogen (8). The chamber was a metal cylinder to which glass plates were attached, using indium ribbons as seals. In addition to being a major cryogenic technical achievement, this also demonstrated the crucial fact that for use with accelerators, where the expansion can be timed with respect to the accelerator cycle, the bubble chamber environment need not be as ultra-clean as the glass vessel of Glaser, which permitted the liquid to survive in its superheated state for relatively long periods.
Three graduate students – John Leitner, Nick Samios and Mel Schwartz – and I began work at Nevis on the design of a practical experimental bubble chamber (9) to study strange particle production at the Cosmotron, I think early in 1954. By 1955 we had a 6 inch (15 cm) diameter liquid propane chamber, which was used at the Cosmotron in the first experiment with this new technique. The work profited a great deal from a generous collaboration, as well as friendship, with the inventor, who was working on a similar project at Brookhaven with his former student David Rahm (10).
Rapid action
Our main technical contribution at Nevis was the discovery of a rapid action three-way gas pressure valve, the “Barksdale” valve. This made it possible to recompress the liquid within milliseconds after the expansion, and so to reduce the undesirable thermal effects that result if the pressure remains low for longer times and greater quantities of liquid boil.
As work progressed, we were joined by R Budde from the newly established CERN laboratory, who had been sent to learn about the new technique. The chamber had a serious flaw, which we nevertheless accepted in order to get experimental results – the liquid became clouded and lost its transparency after a few hours of operation. It was then necessary to empty and to refill the chamber, with a consequent loss of time.
The experiment (11) used a pion beam of energy 1300 MeV, only slightly more than the minimum required to produce a strange particle pair. There was no magnetic field, so the particle momenta could not be measured. However, the information from the spatial directions of the observed particles, recorded stereoscopically, sufficed to permit the identification of Λ hyperon and neutral kaon decays, to distinguish collisions on hydrogen from those on carbon, and so to identify the processes we wanted to study (figure 3).
The lifetimes of most of these particles are of the order of 10⁻¹⁰ s, and consequently their path length is typically some centimetres. The several dozen events obtained gave the first quantitative measure of the production probabilities and angular distributions for negative pion on proton reactions giving a positive kaon and a Σ⁻, and a neutral kaon and a Λ. In retrospect, the most interesting result was a precocious glimpse of parity violation, soon to be at the centre of the particle physics stage.
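The centimetre scale quoted here follows directly from the lifetime: for a particle with a Lorentz factor of order unity,

$$\ell \;\approx\; \gamma\beta c\tau \;\sim\; \left(3\times10^{8}\ \mathrm{m\,s^{-1}}\right)\times\left(10^{-10}\ \mathrm{s}\right) \;\approx\; 3\ \mathrm{cm}.$$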
The development of bubble chambers went on apace. Within a year the 10 inch (25 cm) hydrogen chamber of Alvarez was in operation at the Bevatron, which was then, with 5 GeV protons, the world’s highest-energy accelerator, and which had permitted the discovery of the antiproton by Chamberlain, Segrè, Wiegand and Ypsilantis in 1955 (12). In 1959 this was superseded by the 72 inch (1.8 m) chamber, the workhorse of the Bevatron for more than a decade, which led to the discovery of several meson and hyperon resonances.
At Brookhaven the Shutt group made important technical advances. In 1958 its 20 inch (50 cm) chamber came into operation, followed in 1962 by the 80 inch (2 m) chamber. This went on to take 11 million photographs, and the results included the important discovery in 1964 of the triply strange Ω⁻ hyperon (13), confirming the SU(3) symmetry proposed by Gell-Mann to account for the multiplet structure and mass regularities of the observed strange particles – the symmetry that mothered the invention of the quark (figure 6).
At CERN a 30 cm hydrogen chamber came into operation in 1960, and the 2 m hydrogen chamber in 1964. This became the main CERN tool for the study of resonant and strange particle physics for a decade and kept hundreds of physicists busy and happy. Gargamelle, a very large heavy-liquid (freon) chamber constructed at Ecole Polytechnique in Paris, came to CERN in 1970. It was 2 m in diameter, 4 m long and filled with freon at 20 atm. With a conventional magnet producing a field of almost 2 T, Gargamelle in 1973 was the tool that permitted the discovery of neutral currents.
The publication of this volume on Léon Van Hove provides a welcome global view of his multifaceted contributions to science. He was CERN’s research director-general from 1976 to 1980, but some of his most important contributions date from his time outside CERN and are little known to the particle physics community. This book consists of reprints of his major scientific papers together with skilful presentations of their significance, as well as discussions of his impact as teacher and scientific statesman.
Léon Van Hove started his career with three years of underground university studies in wartime Brussels. His training and earliest research was in the field of mathematics. In the late 1940s, however, he turned to theoretical physics.
His first papers on statistical mechanics and quantum field theory were mathematically orientated. His rigorous and important papers in statistical mechanics in 1949 prepared the ground for the advances by Ruelle and by Fisher in the 1960s (R Balescu, T Petrosky and I Prigogine); he initiated the perturbation description of large quantum systems in two fundamental papers in 1955 and 1956 (N M Hugenholtz).
In the period 1951-1954 he turned, surprisingly and under the influence of Placzek, to phenomenological work on slow neutron scattering, and demonstrated how the space-time correlation function could be measured directly. His papers were a major stimulus to this field and had enormous influence on experiments and applications as well as on theory (N Gidopolous and S W Lovesey). The experimental work by Brockhouse and Shull that used his approach was awarded the Nobel prize in 1994, four years after Van Hove’s death in 1990.
Van Hove’s remarkable scientific change of direction and contributions to particle physics on being invited to head the CERN Theory Division in 1960 are described by several close collaborators: M Jacob on ultrarelativistic heavy-ion collisions, A Giovannini on multihadron production and J J J Kokkedee, W Kittel and A Bialas on high-energy collisions and internal hadron structure.
Van Hove was also an outstanding teacher, scientific administrator and policy maker. Close associates describe his activities in these diverse areas.
Of his Utrecht PhD students in the 1950s, we learn that several became outstanding physicists, for example the Nobel prizewinner M Veltman (N M Hugenholtz and Th W Ruijgrok). His activities as leader of the CERN Theory Division and as research director-general of CERN are described by F Bonaudi, M Jacob, E Gabathuler and V Soergel, while his period as director at the Max Planck Institute in Munich is covered by N Schmitz. M Bonnet describes his time as advisor to the European Space Agency and the special role he played in developing the Solar-Terrestrial Programme.
The fact that Léon Van Hove came from a field outside particle physics made him particularly sensitive to the potential of high-energy physics in non-traditional areas. For instance, he realized the scientific significance of ultrarelativistic heavy-ion physics at a time when it was still unpopular at CERN. He threw his scientific weight behind this initiative and even focused his own scientific research on it. His intuition has recently been vindicated by the discovery of quark-gluon plasma effects.
This scientific intuition also showed itself in the bold decisions – described both by colleagues and by Van Hove himself – leading to the construction of the antiproton-proton collider and the discovery of the W and Z particles. The inclusion of an autobiography is a further excellent initiative.
The book closes with a documentation of Van Hove’s opinions and attitudes on various issues, compiled in his own words from his speeches and private papers by his son Michel.
This volume gives a fascinating account of the scientific life of a physicist of many talents, and is highly recommended.
CERN’s major contributions to culture range wider than the discovery of the neutral current of the weak interaction or of the carriers of the weak nuclear force. CERN has shown how international collaboration in science works: different national attitudes complement and reinforce each other, but this has to be experienced at first hand to be appreciated.
The vitality of the more than 7000 researchers using CERN’s facilities creates a continuous exchange of ideas and people from all over the world. In addition to the advances in frontier science, the technology needed to carry out this research is often years ahead of what industry can provide.
Every year more than 600 new students, scientists and engineers participate in the various schemes over periods ranging from three months to three years. They all benefit from their experience of working in an international collaboration at the forefront of science and/or technology. Returning to their home institutes, they provide a seedbed for new developments.
A significant fraction (about 10%) of CERN’s personnel budget is spent on various fellow, associate and student programmes. As well as promoting the exchange of knowledge between scientists and engineers from all over the world, these programmes are vital elements in the high-level research and technology training of scientists from CERN’s 20 European member states and, to a lesser extent, from other nations too.
The programmes are popular and there are many applications from eager candidates. Their success is largely due to the strict criteria applied in the selection procedures.
The fellowship programme aims to provide advanced training in a research or technical domain to young university-level postgraduates (mostly with doctorates) from CERN member states.
Catering for a different need is the associates programme, which offers opportunities for established scientists and engineers from both member states and elsewhere to spend some time – typically a year – at CERN, on leave from their research, teaching, managerial or administrative duties. During their stay at CERN, associates are on detachment from their home institute.
Fulfilling another requirement are the various student programmes (summer students, technical students and doctoral students) for undergraduates and postgraduates in CERN member states. Doctoral and technical student programmes are currently restricted to candidates from applied sciences and engineering, but there is a move to extend this.
In the popular summer student programme, with a tradition going back to CERN’s early years, some 150 students, selected each year from many times that number of applicants, participate in CERN’s research programme under the supervision of CERN scientists and also attend a series of specially arranged lectures. Many leading scientists have benefited from this scheme early in their careers.
The trainee programme is new. Numerous member states have shown a very strong interest in using CERN as a training ground in a wide range of high-tech activities. In the past five years there has been a rapid development of special programmes based on bilateral co-operation agreements.
Member states Austria, Denmark, Finland, Norway and Sweden provide additional funds for the student programmes, while Israel, a CERN observer state, contributes to the associate programme. For another observer state, Japan, part of the interest on the nation’s financial contribution to the construction of the new LHC accelerator has, since 1996, been used to help fund a few fellows and short-term associates.
Member states Spain and Portugal provide grants that cover the insurance and living costs of the young people specializing in engineering and technology. An additional CERN contribution offsets the relatively high cost of living in the Geneva area.
At a regional level, about 25 young engineers and technicians spend some time at CERN within the framework of a special French Rhône-Alpes region programme. A few graduates and postgraduates from the Italian Piedmont are funded by the regional Association for the Development of Science and Technology. It is hoped that these special programmes will be integrated into the wider schemes and expanded.
Together, the various student and short-term visitor schemes transfer specialized knowledge and expertise, and make CERN’s mission and work in particle physics and further afield known to a wider public.
by Fayyazuddin and Riazuddin, 2nd edn, World Scientific, ISBN 9810238762 hbk, ISBN 9810238770 pbk.
The first edition of this book by the talented twins from Pakistan, which appeared in 1992, has been updated, with the chapters on neutrino physics, particle mixing and CP violation, and weak decays of heavy flavours having been rewritten. Heavy quark effective field theory and introductory material on supersymmetry and strings are also included.
by Richard Wigmans, Oxford University Press, ISBN 019 850296 6, 726pp, £85.
The role of calorimetry in high-energy physics has become increasingly important during the last 20 years. This is due to the increase in energy of the particle beams available at the major accelerators and to the need for hermetic detectors. The 1980s, in particular the second half of the decade, saw an important breakthrough in the understanding of the mechanisms underlying the development of hadronic cascades and their energy loss.
The theme around which this breakthrough took place is “compensation”: for a compensating calorimeter e/h = 1, where e represents the response to an electromagnetic shower and h the response to a non-electromagnetic, that is purely hadronic, shower of the same energy. For compensating calorimeters the energy measurement of electrons and hadrons of the same energy yields the same average response at all energies, at the same time giving optimal hadronic energy resolution. Compensation is also a prerequisite for linearity of the hadronic energy measurement.
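Written out explicitly – a standard formulation consistent with, though not quoted from, the book – the mean response to a hadron shower of energy E that deposits a fraction f_em of that energy electromagnetically is

$$R \;=\; \left[\,e\,f_{\mathrm{em}} + h\left(1 - f_{\mathrm{em}}\right)\right] E.$$

Since f_em fluctuates from shower to shower and its mean value grows with energy, R is strictly proportional to E, and the f_em fluctuations drop out of the resolution, only when e/h = 1.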
In practice, very few compensating calorimeters have been built for major experiments (one example is the calorimeter of the ZEUS experiment at HERA, discussed in the book), probably because, in practice, achieving compensation means making a concession to the electromagnetic energy resolution. None of the experiments planned at the Large Hadron Collider, for example, will employ a compensating calorimeter. The importance of the research into compensation is nevertheless very large in that it led to a much better understanding of calorimetry in general. The author of the book has made original and essential contributions to this field through his own research.
The book reflects the author’s deep and encyclopedic knowledge of the subject. This makes it a rich source of information that will be useful, for a long time to come, both for those designing calorimeters and for those analysing calorimeter data. At the same time, the book is not always successful in organizing and conveying all of this knowledge in a clearly structured and efficient way. Parts of it are rather narrative and long-winded.
The most important chapters are those on Shower Development, Energy Response, Fluctuations and Calibration; the chapter on Instrumental Aspects also contains essential information. The chapters on generic studies and on existing (or meanwhile dismantled) and planned calorimeter systems are interesting, but less necessary in a textbook. Moreover, the author does not always keep to the subject – calorimetry – leading to unnecessary excursions and, what is worse, outdated material. It would, on the other hand, have been interesting if the author, in his description of the calorimeters under construction for the Atlas experiment, had been a bit more explicit about what, in the light of the ideas developed earlier in the book, the optimal approach to (inter)calibrating this very complex calorimeter system would be.

The chapter on Calibration is probably the most essential part of the book, bringing together many of the fundamental issues of shower development, signal generation and detection. Reading this chapter, one gets the impression that it is in fact impossible to calibrate calorimeters, but the style chosen by the author merely emphasizes that the issue is subtle and that great care must be taken. The chapter contains information that is extremely worthy of consideration, culminating in the recommendation that, in the case of non-compensating calorimeters, individual (longitudinal) calorimeter sections should be calibrated with the same particles generating fully contained showers in each section – a recommendation that, in practice, cannot always be satisfied. In his ardour to emphasize the importance of the (inter)calibration of longitudinal calorimeter segments, the author even invokes decays, such as that of the neutral rho into two neutral pions, that do not exist in nature – we get the point and forgive him. It is, however, true that there are more places where the book would have profited from a critical final edit.
Calorimetry is a book that describes the essential physics of calorimetry, contains a wealth of information and practical advice, and is written by a leading expert in the field. The fact that the discussions sometimes do not follow the shortest path to the conclusion, and that perhaps the “textbook part” of this work should have been accommodated in a separate volume, does not make the book less important: it will be amply used by those trying to familiarize themselves with calorimetry, and in particular by those analysing the data of the very complex calorimeter systems of future experiments, such as those at the LHC.
by M Y Han (Duke), World Scientific Publishing, 168pp, ISBN 981 02 3704 9 hbk $34/£21, ISBN 981 02 3745 6 pbk $16/£10.
This is a readable little book on particle physics and is aimed at those with no previous exposure to the subject. It starts with the discovery of the electron in 1897 and works its way more or less historically up to the present. That means, of course, that it contains a lot about leptons and photons as well as the quarks and gluons of the title.
The guiding theme is the discovery of different kinds of conserved charges – first electric charge, then baryon number and the lepton numbers, and finally the more subtle kind of charges that are the source of the colour force between the quarks.
Like Stephen Hawking, the author manages to avoid all equations, except E = mc². The style is chatty and colloquial (American), which will have some non-native English readers running for their phrase books. For example, correct predictions are “right on the money”, and when the terminology seems comical the reader is exhorted to “get a grip on yourself”. Nevertheless, as one would expect from a leading contributor to the field, Han takes care to get things right even when using simple language, as for example in his discussion of spin.
The jacket says that the book will be “both accessible to the layperson and of value to the expert”. I imagine that the latter refers to its value in helping us to communicate with non-experts.
I have some misgivings about this book, mainly because its insistence on discussing only those charges that are (within current limits) absolutely conserved leaves the reader with the impression that nothing much is understood about the weak interaction. The author even says that the weak charges have yet to be identified; all of the beautiful developments of electroweak unification are omitted. There is also no mention of the exciting possibilities that lie in the near future, which makes the subject seem a bit moribund and musty. For example, we are told that the discovery of the pion in 1947 was “one of the last hurrahs” of cosmic-ray physics, whereas in fact that field continues to show astonishing vitality, with neutrino studies, ultrahigh-energy primaries and other fascinating phenomena promising a rich future.
by R A Bertlmann, Oxford University Press, ISBN 019 850762 3, pbk £29.95.
Field theory “anomalies” have long been a rich source of physics and mathematics, and they remain fascinating for physicists and mathematicians alike, as ongoing developments in string and brane theory show.
This book gives a comprehensive description of the many facets of this subject that were known before the mid-1980s. It is essentially self-contained and thus deserves to be called a textbook. Both mathematicians and physicists can learn from this volume.
With a modest knowledge of quantum mechanics, a mathematician can read about the history of the subject: the puzzle of the decay of the neutral pion into two gamma-ray photons; the inconsistencies of the perturbative treatment of gauge theories related to the occurrence of anomalies; the original Feynman graph calculations; and the theoretical constructions that introduced relationships with topology, up to the elementary versions of the index theorem for families.
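The prototype is the Adler-Bell-Jackiw (axial) anomaly, quoted here for orientation – sign and normalization depend on conventions – in which the axial current of a massless fermion of charge e fails to be conserved at one loop:

$$\partial_\mu j_5^{\mu} \;=\; \frac{e^{2}}{16\pi^{2}}\,\varepsilon^{\mu\nu\rho\sigma}F_{\mu\nu}F_{\rho\sigma}.$$

It is this anomalous divergence that ultimately fixes the π⁰ → γγ rate.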
The physicist will find all of the necessary equipment in elementary topology and differential geometry combined in constructions that are familiar to professional mathematicians. S/he will find thorough descriptions of the algebraic aspects that emerged from perturbation theory, both in the case of gauge theories and in the case of gravity, and an introduction to the way in which they tie up with index theory for elliptic operators and families thereof.
The book reads fluently and is written so clearly that one not only gets an overview of the subject, but also can learn it at an elementary level.
The bibliography is a rather faithful reflection of the physics literature and includes a few basic mathematical references, which give the reader the opportunity to learn more in whichever direction s/he chooses.
As mentioned, the subject is still developing in the direction of new mathematics and, possibly, new physics in the context of strings and branes. One may therefore regret that the book stops around the developments that took place in the mid-1980s.
The book already runs to more than 500 pages. It is essentially self-contained, and every topic dealt with is described in sufficient detail to allow a non-specialist to become acquainted with it, at least at an elementary level; the mathematical techniques do not go beyond elements of differential geometry, homology, cohomology and homotopy theory. Generalized cohomology theories, including K-theory, appear only in a phenomenological disguise, in connection with the description of the index theorem for families in the particular case relevant to gauge theories, but not as mathematical prerequisites.
As a consequence of the principle of maximal perversity, one may expect that physics will exhibit subtle effects describable in terms of the above-mentioned constructions. In such an event, there remains the hope for a corresponding textbook as understandable as this one, possibly written by the same author.
After A Quantum Legacy, the selected papers of Julian Schwinger, it is fitting that the next volume in this carefully edited series covers the work of Richard Feynman.
Now a cult figure, Feynman is fast becoming one of the most prolifically documented physicists of the past century. As well as his own popular works (Surely You’re Joking, Mr Feynman!, What Do You Care What Other People Think?) and his various lectures, there are biographies or biographical material by Gleick, Brown and Rigden, Mehra, Schweber, Sykes, and Gribbin and Gribbin.
Anecdotes about such a flamboyant character are easy to find, but the man’s reputation ultimately rests on his major contributions to science, which this book amply documents. Chapters, of various lengths, deal with his work in quantum chemistry, classical and quantum electrodynamics, path integrals and operator calculus, liquid helium, the physics of elementary particles, quantum gravity and computer theory. Each has its own commentary.
As a foretaste of things to come, the first chapter serves up just a single paper – “Forces in molecules” – written by Feynman at the age of 21, in his final year as an undergraduate at MIT. This result – the Hellmann-Feynman theorem – has played an important role in theoretical chemistry and condensed matter physics.
Chapter 2 begins with Feynman’s 1965 Nobel Lecture, goes on to include work with John Wheeler at Princeton, which explored the underlying assumptions about the interaction of radiation and matter, and concludes with the classic 1949 papers that presented his revolutionary approach to quantum electrodynamics.
The Nobel Lecture alone is worth reading – clearly a major early source of Feynman anecdote, such as the Slotnick episode. One is struck by Feynman’s ambivalent attitudes – his enormous regard for father figures such as Wheeler and Bethe on the one hand, and his clear disdain for many contemporaries on the other. Another good read in this chapter is Feynman’s paper presented at the 1961 Solvay meeting, and the ensuing discussion.
Chapter 3 deals with the detailed presentation of the path integral approach, which enabled Feynman to dissect electrodynamics and look at it from a fresh, uncluttered viewpoint.
From 1953 to 1958, Feynman looked for fresh pasture and produced a series of seminal papers on the atomic theory of superfluid helium, which are presented in Chapter 4.
Chapter 5 is split into two parts. The first, on weak interactions, includes the classic 1957 paper with Gell-Mann and some lecture notes from the 1960s exploring the consequences of SU(3) symmetry for weak interactions. The second part – by far the largest section of the book – deals with his approach to partons, quarks and gluons. Feynman began thinking about describing hadrons simply as an assembly of smaller parts – his partons – just when experiments were beginning to probe this inner structure. This is a good example of how Feynman, arriving at a fresh interest, would invariably strip problems down to their essential parts before reassembling them in a way that he, and many other people too, understood better.
Feynman’s interest in numerical computation went back to his time at Los Alamos, when he had to model the behaviour of explosions using only the mechanical calculators of the time. Coming back to the subject in the 1980s, he went on to pioneer the idea of quantum computers. Apart from the prophetic papers published here, this aspect of his work has been well documented in The Feynman Lectures on Computing (ed. A J G Hey and R W Allen, Perseus).
Selected Papers of Richard Feynman concludes with a full bibliography. Even without the burgeoning Feynman cult, such a selection of key papers is a useful reference. However, with almost 1000 pages, the book could perhaps have been better signposted: the selected papers are not listed in the initial contents, and the pages have no running heads to indicate where the chapters fall.
Phase 1 of the new joint project between the Japanese Atomic Energy Research Institute (JAERI) and the national KEK laboratory on high-intensity proton accelerators (see “Proton collaboration is under way in Japan”) has been given the go-ahead to begin construction.
Although formal approval of the budget has not yet been given, notice from the government means that Phase 1 of the project has effectively already been approved.
Phase 1 of the new project will include:
a 400 MeV normal-conducting linac;
a 3 GeV rapid-cycling proton synchrotron operating at 1 MW;
a 50 GeV PS operating at 0.75 MW;
a major part of the 3 GeV neutron/meson facility;
a portion of the 50 GeV experimental facility.
The total budget for Phase 1 is 1335 Oku Yen (1 Oku Yen is equal to 10⁸ yen, or approximately $860 000) and the work is expected to be completed within six years. The entire cost of the project, including Phase 2, is expected to be in the region of 1890 Oku Yen.
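For scale – simple arithmetic from the figures quoted above, at the stated exchange rate – the Phase 1 budget corresponds to

$$1335\ \text{Oku Yen} \;=\; 1335\times10^{8}\ \text{yen} \;\approx\; 1335\times\$860\,000 \;\approx\; \$1.15\ \text{billion},$$

with the full project, at 1890 Oku Yen, coming to roughly $1.6 billion. Similarly, if the quoted megawatt figures are beam powers, the implied average proton currents are

$$I_{3\,\mathrm{GeV}} \;=\; \frac{1\ \mathrm{MW}}{3\ \mathrm{GeV}} \;\approx\; 0.33\ \mathrm{mA}, \qquad I_{50\,\mathrm{GeV}} \;=\; \frac{0.75\ \mathrm{MW}}{50\ \mathrm{GeV}} \;=\; 15\ \mu\mathrm{A}.$$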