The Rio de Janeiro-based Centro Latinoamericano de Física (CLAF) celebrated its 40th anniversary on 26 March. Founded under the auspices of UNESCO following the CERN model, CLAF was established to promote research in physics and provide postgraduate training to young physicists in the region. The physicists Juan José Giambiagi of Argentina, José Leite Lopes of Brazil and Marcos Moshinsky of Mexico were instrumental in its creation. Today, CLAF has 13 member states.
The celebrations took the form of a week-long international meeting in Rio that focused on CLAF’s international collaborations and looked at the long-term future of the centre. Speakers included Faheem Hussain of the International Centre for Theoretical Physics in Trieste, Italy, which has been involved in a programme of joint PhD work with Latin American institutions since 1997. Vladimir Kadyshevsky, director of the Dubna-based Joint Institute for Nuclear Research, and Pavel Bogoliubov, who is responsible for the institute’s international relations, spoke of a 4-year-old programme to train Latin American graduate students in Russia, and of the 25 MeV microtron built at Dubna to form the basis of a proposed regional laboratory in Havana, Cuba. CERN’s Juan Antonio Rubio, who is responsible for the laboratory’s education and technology transfer activity as well as links with Latin America, spoke of the agreement signed in 2001 between CERN and CLAF to organize a joint biennial school in Latin America. Staying with education, Ramón Pascual, former rector of the Universidad Autónoma in Barcelona, signed an agreement at the meeting allowing Latin American students to take part in the European Joint Universities Accelerator School.
Research was discussed by Ana María Cetto, coordinator of the Latin American Scientific Networks, who spoke of a project to foster greater Latin American use of observatories in Chile and the possibility of extending the facilities at Mount Chacaltaya in Bolivia.
Looking to the future, CLAF support for the second CERN-CLAF school to be held in Mexico in 2003 was confirmed at the meeting, and CLAF announced the creation of a biennial school in medical physics and synchrotron radiation. The centre is also promoting improvements in postgraduate education through the ICTP programme and by coordinating a regional Masters degree programme. In research, CLAF will assume a stronger role in coordinating Latin American efforts in medical physics, condensed-matter physics and optics. It will also examine the possibility of building a proton accelerator for cancer treatment. The meeting concluded with a request to the governments of Latin America to increase their percentage of GDP spending on science and technology.
Industry pushes economic globalization to strengthen its market position. The process is driven by the need and desire to increase efficiency and reduce costs, but also by the wish to make the best use of different competencies in different countries. Take the European aircraft industry: parts of planes, such as the wings, tail unit, body and engines, are built in different regions of Europe and finally assembled at one plant. This has enabled distributed regional industries to jointly play a major role on the international market. Yet people fear being at the mercy of some anonymous pressure, and thus increasingly oppose globalization.
Large-scale facilities in science are also increasingly tackled on a global scale. Radio-astronomers around the world have united behind the idea of jointly building their next project, ALMA, a merger of the major millimetre-array projects into one global project. Particle physics has for quite some time moved in the same direction: the large experiments have always been a role model for the shared construction of large equipment. The LHC is built with components from around the world, like HERA before it.
Global challenge
To meet the challenges of the future, accelerator-based particle physics needs to become even more global than in the past. One possible concept, the Global Accelerator Network (GAN), was originally developed as a way to build a linear collider as an international collaboration, to make the best use of worldwide competence, ideas and resources, to maintain and foster the centres of excellence in accelerator physics around the world, and to root the linear collider as an international project firmly inside the national programmes (CERN Courier June 2000).
Global projects rely on collaboration. In the past, particle physicists have developed a culture of collaboration that has worked very successfully. Indeed, they had to do so to meet the scientific challenges. Collaborations function well if their leadership acknowledges the individuality and freedom of all the partners. They do not have a strong hierarchical structure, but are driven instead by a common scientific goal. They probably would not function with an industry-style management.
Therefore the question arises as to whether a model that works for experiments can be extended to accelerators. Or to put it differently: what is needed to make this model work for accelerators as well? These questions were studied by an ICFA working group in 2001, and are now being addressed within the framework of a series of workshops, the first of which, “Enabling the Global Accelerator Network”, took place in March at Cornell. This workshop dealt with technical aspects of the remote operation of facilities, which is a key ingredient of the shared operation of accelerators. No basic problems are expected here. In fact, the TESLA test facility has already been operated remotely from Italy and France.
On the other hand, it became clear at the workshop that the sociological aspects of such a joint endeavour are probably the true challenge. As the GAN concept is built on the principle of shared responsibility, the sharing of know-how and controls is also part of the concept. The laboratory at which the facility is located would therefore relinquish the project control it traditionally had to become one of the equal partners. Mutual trust is the critical element required in order for such a collaboration to be successful.
It is well known that distributed organizations need to build up and maintain trust. Sharing working time from the very beginning is a powerful agent in establishing this trust. This requires a mixture of face-to-face interactions and the use of appropriate communication and collaboration technologies. These interactions should start as early as possible, even during the planning and R&D phase. Mutual trust and interest will continue to grow during the build-up time of the project, and will have to be sustained through the transition from early commissioning to operation and scientific exploitation. Industry is developing many tools to support the full spectrum of situations, ranging from planned, structured activities (such as scheduled meetings) to unplanned interactions.
Trust and involvement of both institutions and individuals have to be maintained over a long time – the duration of the project being typically more than 20 years. Producing exciting science and meeting technological challenges will be the key ingredients for ensuring a long-term interest of all the partners. Working on the frontiers of technology creates the need for a continuous upgrade culture. This culture needs to be distributed around the world.
However, even if the necessary trust is established, we need to solve many questions of key relevance in order to guarantee the success of the project and the major investment it requires. These questions include the management structure and organizational forms. They again are closely related to trust – we cannot afford for scientists and engineers to become disenchanted and to walk away. We need to approach global collaboration on large scientific infrastructure projects with a lot of imagination and determination.
The future of particle physics is no longer determined by scientific challenges only.
by Bernard Pullman (late professor of Quantum Chemistry at the Sorbonne, and director, Institut de Biologie Physico-Chimique, France), Oxford University Press, ISBN 0195114477, £14.95 (€23). Translated from the original French, Editions Fayard, ISBN 2213594635.
“This book endeavours to describe the turbulent relationship between atomic theory and philosophy and religion over a period of 25 centuries,” states the preface – a daunting task by any standards. Pullman admits that he is neither a philosopher nor a man of religion, but a chemist “having long lived side-by-side with atoms”. As such, he achieves a great deal.
The book begins with the birth of the atomic theory – the “Greek miracle” of the 7th-5th centuries BC in Pullman’s words, when a few Hellenic thinkers shed the Greek pantheon in favour of a natural philosophy. This began with theories advocating various primordial substances – water (Thales), air (Anaximenes), fire (Heraclitus) and earth (Xenophanes) – from which all things come to be. The two fundamental concepts of atomism – impenetrable, indivisible (atomos) corpuscles and void through which they travel – were formulated around 450 BC by Leucippus and Democritus, and refined a century later by Epicurus and Lucretius to a logical structure that remained essentially unchanged for the next 2000 years. The book also touches on Hindu and Buddhist atomism, which evolved independently at about the same time, but had no impact on the atomic theory of the Western world.
The book then moves on to “a few scattered revivals” during the 1st-15th centuries AD. After describing the antiatomistic position of the Church as put forward by Basil of Caesarea, St Augustine and Thomas Aquinas (among others), some mediaeval Christian atomists make an appearance. These are divided into chroniclers (such as Isidore de Seville), sympathizers and proponents. The sympathizers include Adelard of Bath (a translator of scientific Arab texts) and Thierry of Chartres (a reviver of the works of antiquity). Among the proponents are Constantine the African, a physician from Carthage who explicitly defined atoms as the fundamental constituents of substances; William of Conches; and William of Ockham.
Jewish philosophy from the 9th to the 13th centuries is discussed. This was largely opposed to atomism, although Moses Maimonides (1135-1204) described the teaching of the Arab atomists. The schismatic Jewish sect of the Karaites (founded in the 8th century) adopted the atomic theory borrowed directly from teachings of Muslim philosophers and theologians.
While Greek atomism set out to free mankind from invisible powers, Arab atomism is decidedly religious in nature. The Arab atomic doctrine is expressed in the Kalam, a set of 12 propositions, one of which introduces the notion of “accidents”. These reside within atoms, and include characteristics such as life and intelligence, along with inanimate properties such as colour and odour.
Moving into the Renaissance and the age of enlightenment, Pullman describes the resurgence of atomic theory starting with Pierre Gassendi, who is counted among the Christian atomists along with the likes of Galileo, Bruno, Newton and Boyle. Gassendi criticized Aristotle and defended ancient atomists, especially Epicurus, whose teachings he tried to make acceptable to the Church. The doctrine of John Locke, who doubted any future experimental proof of the existence of these atoms, is labelled “agnostic atomism”. Pullman also discusses Maupertuis and Diderot, with their sensitive and intelligent atoms; Holbach, with his materialistic atoms; and Maxwell, who believed that atoms exist due to the action of a creator.
Christian antiatomists – philosophers or scientists who use religious arguments to reject the theory – include Descartes, who rejected the concept of void; and Leibniz with his metaphysical atoms (monads). Others mentioned are Roger Boscovitch, who tried to blend Leibniz’s monads with Newton’s laws of attraction and repulsion; George Berkeley, who rejected matter, material corpuscles and void; and Immanuel Kant, who is labelled an “atomist turned antiatomist”.
The final part of the book moves into the modern era with the advent of scientific atomism through the 19th and 20th centuries. Pullman begins with the demise of the 2000-year-old theory of four elements through the demonstrations of Lavoisier that water, and of Cavendish and Priestley that air, have a compound structure. Elements came to be defined as substances that could not be decomposed. Confusion over nomenclature followed until Cannizzaro formulated a distinction between atoms and molecules in 1860. Soon afterwards, Mendeleev arranged the first 63 elements in the periodic table.
Controversy, however, continued. Philosophers such as Hegel and Schopenhauer were opposed to atomism. So were die-hard antiatomists like Berthelot, Mach and Ostwald, and a few that Pullman calls “nostalgic philosophers”, such as Nietzsche, Marx and Bergson.
Nevertheless, atomic theory was almost universally accepted by the time J J Thomson discovered the electron in 1897, bringing the hypothesis of indivisible atoms to an end.
Pullman then brings us into the quantum age in 1900 with Planck’s famous constant. He guides us through Rutherford’s 1911 conclusion that atoms are mainly vacuum with a tiny nucleus surrounded by electrons, to Bohr’s 1913 observation that Planck’s constant leads to stable orbits in the atom and to discrete spectral lines. The rest of the modern atomic picture is carefully covered, with Chadwick’s 1932 discovery of the neutron; de Broglie’s postulation of the wave-like character of matter particles, and its subsequent confirmation by Davisson and Germer; and Schrödinger’s wave mechanics leading to serious conceptual difficulties among scientists.
Chemical bonding naturally plays a large part in the book, given that its author was a chemist. Covalent bonding, where electrons are shared between atoms, leads Pullman to an interesting analogy developed in the chapter “Society of atoms: marriage”, where he concludes that “as always in life, this implies the ability and even obligation both to give and to receive”.
In a closing chapter, Pullman delves into the nanoworld. Here he describes how the scanning-tunnelling microscope and the atomic-force microscope led to the visualization and manipulation of single atoms interacting with bulk surfaces, and how the complete isolation of single (charged) atoms surrounded by vacuum was accomplished using ion traps.
No-one can contest that the atoms conceived 2500 years ago as invisible, indivisible and impenetrable philosophical constructs have today become divisible and visible objects of reality. But are they really in human thought? They are certainly in the thoughts of scientists and philosophers, but I doubt they are uppermost in the minds of most people, as Pullman suggests when he claims that “quantum physics has stoked an interest in the ‘problem of God’ among a general public”. The book is let down by its index, which is difficult to use and occasionally inaccurate. That said, however, to read this book is a fruitful learning exercise, and it has a host of informative notes.
by Andrew Holmes-Siedle and Len Adams, 2nd edn (2002), Oxford University Press, ISBN 019850733X, £65 (€102).
This book is aimed at specialists – engineers and applied physicists – employing electronic systems and materials in radiation environments. Its prime role is to explain how to introduce tolerance to radiation into large electronic systems. The reader is expected to be familiar with the theory and operating principles of the various devices. The book mainly addresses components used in space, but also discusses issues specific to other fields, such as military and high-energy physics applications.
The book starts with a quick overview of radiation concepts, units and radiation detection principles, followed by a brief review of the various radiation environments likely to have a degrading effect on electronic devices and systems as encountered in space, energy production (fission and fusion), high-energy physics and military applications (nuclear weapons). This is followed by a chapter dedicated to a general description of the fundamental effects of radiation in materials and devices: atomic displacement and ionization, as well as the colourability of transparent materials, single-event phenomena and other transient effects.
Seven central chapters form the core of the handbook, addressing in detail the mechanisms responsible for the degradation of performance of various devices. Each chapter is dedicated to a class of devices: MOS; bipolar transistors and integrated circuits; diodes and optoelectronics such as phototransistors and CCDs; power semiconductors; various types of sensors; and miscellaneous electronic components. The physical problems of total-dose effects and how to predict the electrical changes caused in MOS devices are discussed, along with some of the best solutions to the radiation problem. The long-lived effects of various radiation types on bipolar transistors, which can be separated into surface and bulk mechanisms, are described, as is how these effects influence the radiation response of bipolar integrated circuits. The response of the many different types of diodes to radiation is thoroughly discussed in a dedicated chapter. Optoelectronic devices in a hostile environment are subject to multiple effects, and radiation can cause malfunctioning in a highly tuned, high-technology system. Silicon power devices used as regulators in the power subsystems of large space equipment, radiation-generating equipment and nuclear-power sources also suffer from radiation damage. One chapter is devoted to discussing the physics, chemistry and practical problems associated with windows, lenses, optical coatings and optical fibres. Another chapter concentrates on the effects of radiation on polymers and other organics, classifying the main forms of organic degradation under irradiation and summarizing some of the most important examples and problems encountered with polymers in engineering and science.
Two chapters are dedicated to aspects of radiation shielding of electronic devices and various computer methods for particle transport, essentially with reference to space applications (very thin shields). The three final chapters discuss radiation testing, equipment hardening and hardness assurance. Radiation testing is made unavoidable by the variability in the sensitivity of semiconductors and electronic devices to radiation, which makes it impossible to rely on theory alone to predict the effect on a device of a certain exposure to a given type of radiation. The authors provide guidelines on radiation sources that may be used in irradiation tests, in test procedures and in engineering standards. Finally, they discuss the technologies and methodologies employed in fabricating radiation-hard devices, as well as providing rules of hardening against various types of radiation and for various applications, including remote handling equipment and robots.
Each chapter ends with a summary of its most important points. Besides the usual subject index, a useful author index helps greatly in searching through the large number of references provided at the end of each chapter. With respect to the first edition (1993), the book has been enriched with many references to useful websites, including databases. Surprisingly, the old units rad, rem and curies are used throughout the book, although SI units are provided in brackets. The authors admit they thought hard about what to use, and finally opted for the old system.
It is unfortunate that this otherwise excellent volume contains, here and there, a number of typographical and punctuation errors, and mistakes in some formulae. In a few cases there are contradictory statements a few paragraphs apart. The impression is that the text was not proofread carefully enough before going to print. There are also a few statements that are clearly wrong, such as that X-rays and gamma rays leave no activity in the material irradiated (what about photonuclear reactions above a given threshold?); and others that are confusing, such as in discussing the whole-body dose limit for members of the public. In general, activation phenomena and related problems are also somewhat generally underestimated throughout the book.
Nevertheless, this volume contains a lot of valuable material and is not only a handbook, but also an excellent textbook.
by Jonathan Allday, Institute of Physics Publishing, ISBN 0750308060, £16.99 (€27).
This edition is a revised and updated version of the popular high-school introduction to particle physics and cosmology by a teacher at the King’s School, Canterbury.
An update on particle physics and closely related topics in Switzerland was presented at the University of Zurich in March to a subpanel of the European Committee for Future Accelerators, RECFA. Members were welcomed to the meeting by the university’s rector, Hans Weder. In his opening address, Professor Weder emphasized his belief in the importance of basic research. He said that the University of Zurich intends to be among the very best in basic research. As a theologian, he found particle physics “a most fascinating human endeavour”. He concluded: “You may be proud of your contribution to human culture.”
The position of the Swiss Government was described by Charles Kleiber, Secretary of State for education and science, who declared that “Switzerland believes in CERN” (see below). An overview of particle physics in Switzerland was given by Claude Amsler, followed by several talks on the various Swiss activities in particle physics, in Europe and elsewhere, as well as a few contributions to spin-offs, such as medical applications involving particle physics techniques.
RECFA delegates were impressed by the extent and quality of the activities. Switzerland, in spite of being a small country, is almost omnipresent at CERN. Swiss scientists are active in a large number of experiments all the way from the lowest energies – experiments with antihydrogen – to the highest-energy experiments preparing for the Large Hadron Collider (LHC). There is also a very strong community of theoretical particle physicists in Switzerland.
An interesting recent development in Switzerland concerns a proposal by the Forum of Swiss High Energy Physicists to construct a dedicated Swiss facility to meet the challenges of LHC computing. This is to be situated at the Swiss Center for Scientific Computing in the Italian-speaking Canton of Ticino. Switzerland also benefits from a large multidisciplinary national laboratory, the Paul Scherrer Institute (PSI). In addition to being a research laboratory, the PSI enables Swiss physicists to engage in activities beyond those that are possible at universities, such as building large equipment and having access to test-beams.
Are there then no clouds on the horizon for Swiss particle physics? The funding system is complicated, and post-doctoral fellows are expensive and not easy to find. However, Swiss particle physicists seem to have found their way through the funding labyrinths, and RECFA was pleased to find the Swiss particle physics community so strong and dynamic.
In telling RECFA delegates that Switzerland believes in CERN, the Swiss secretary of state for education and science, Charles Kleiber, said: “CERN is the world’s focal point for high-energy physics and therefore an invaluable asset for research in this field. Moreover, member states have invested heavily in CERN and it would simply be a waste of money not to continue to use it to the maximum extent possible. CERN motivates young students to study physics and serves as a first-class learning site by offering excellent training possibilities for the next generation of physicists. CERN is also ‘la part de rêve’ which is so necessary today. CERN disposes of motivated and competent personnel with an excellent record of success, but working also with great passion – and sometimes under very difficult conditions – on the future of CERN. Let’s protect and take advantage of the human resources available.”
Charles Kleiber is the new head of Switzerland’s delegation to CERN Council.
In the spring of 1960, CERN’s proton synchrotron (PS) was delivering its first beams. In the middle of this critical phase for European particle physics, CERN’s director-general, Cornelis Bakker, was killed in an aeroplane accident. Although CERN’s governing Council acted swiftly by appointing John Adams as acting director-general, this step necessarily prolonged the period that in retrospect may be characterized by the dominance of brilliant accelerator scientists.
At the same meeting in June 1960 which confirmed Adams’ appointment, the “modern” structure of research committees with at least as many members from outside as inside the laboratory was also approved, and the search for Bakker’s successor began. In any case, Adams would have to leave CERN to take up an important position in the UK. The discussion centred around two eminent scientists – Hendrik B G Casimir and Victor F Weisskopf. Weisskopf was already well known at CERN, having worked in the Theory Division from 1957 until 1958. With characteristic modesty he doubted his talents for such a position, but he expressed his willingness to act as a director of research. Casimir made it clear that his position with Philips would make it very difficult to take over the post of CERN director-general.
During the following months, a formal nomination procedure of candidates in the Scientific Policy Committee (where Weisskopf was formally proposed by Greece), extensive deliberations and successful persuasion led to Weisskopf’s election by Council on 8 December 1960. His term was envisaged to run from 1 August 1961 until 31 July 1963, but this was later extended until 31 December 1965. It is no exaggeration that in that period, under Weisskopf’s guidance, the future of CERN was shaped for many years to come.
CERN was fortunate to be led by a personality such as Weisskopf at this time. The difficult situation for the laboratory, whose harmonious development had been interrupted at a critical point in its evolution, needed a director-general with special abilities. Every fast-developing scientific organization must cope with the effects that its very size has on its aims. Scientists with little inclination towards administrative matters must submit to administrative and bureaucratic rules, especially in an international organization.
The selection of collaborators and the future style of work is determined at the stage of most rapid initial growth, because the natural inertia of a structure made up of human beings makes it extremely difficult later on to rectify earlier mistakes. At the end of 1960 the number of CERN staff and visiting scientists was 1166; this rose to 2530 at the time of Weisskopf’s departure in 1965.
Therefore, at this time in the history of CERN even more than at others, the director-general had to be a physicist who set the direction of the laboratory towards an absolute priority of science. To achieve this he had to rely on a high reputation in his field, together with an ability to deal with the administrative needs of a rapidly growing organization. CERN was placed in the delicate position of having to restore European research parity with that of the US, profiting as much as possible from the experience gained already in the US, while retaining the European character of the new organization.
Born in Vienna in 1908, Weisskopf followed a truly cosmopolitan scientific career as a theoretical nuclear physicist, working with the most important founding fathers of modern quantum theory, and contributing important results himself. He was familiar not only with Germany (his collaboration with Heisenberg), Switzerland (with Pauli) and the Nordic countries (with Niels Bohr at Copenhagen) from extended stays in these countries, but also with Russia (with Landau at Kharkov), and eventually accepted a position at Rochester, US, in 1937.
His qualities as a leader of a technological project in which theoretical physics played only an auxiliary role were exploited in the Manhattan Project (Los Alamos) towards the end of the Second World War. The European background of many of his collaborators there was an excellent preparation for the task of leading a European laboratory. Even when pursuing the same scientific goal, the individual style of scientists varies greatly, especially if they are of different nationalities.
After the war, as professor at the Massachusetts Institute of Technology (MIT), Weisskopf resumed contacts with Europe, which was slowly recovering from the dark years. In addition to his outstanding qualifications as a theoretical physicist and as a leader of scientific enterprises, Weisskopf possessed a special quality that physics in Europe is lacking to a large degree. Possibly because of the general structure of secondary education in Europe, mathematics plays an extremely important role in theoretical physics. Hence theoretical physics frequently becomes almost a mathematical discipline, with the physical ideas being submerged by an overemphasized mathematical formalism. Among experimentalists this can cause uncertainty or even refusal as far as the judgement of theoretical ideas is concerned.
In the US only a handful of gifted physicists knew how to bridge this gap. Weisskopf was a master of this. Before coming to CERN, he had already taught a generation of nuclear physicists how to pick out the essential physical ideas which are always transparent and simple (once they have been understood), but which may be hidden under many layers of mathematical formalism. The true masters of mathematical physics always knew how to isolate the physical content of complicated mathematical arguments, but unfortunately the majority of theoreticians in Europe are to this day sometimes over-fascinated by the mathematical aspects of the physical description of nature.
The understanding of physical phenomena often does not even require the use of precise formulae. Students at MIT had invented the notion of the “Weisskopfian”, which naturally takes care of numerical factors such as ±1, i, 2π, etc. Also in the book Theoretical Nuclear Physics by John M Blatt and Weisskopf, which remains a standard textbook to this day, the emphasis on simple, physically transparent arguments by Weisskopf and the more precise, but more formal, presentation of topics by his co-author are clearly discernible.
From MIT to CERN
To facilitate his transition from MIT to CERN, and to make optimal use of his period as director-general of CERN, Weisskopf became a part-time member of the CERN directorate in September 1960, dividing his time equally between MIT and CERN. Unfortunately, in February 1961 he was involved in a traffic accident, and needed complicated hip surgery and a long stay in hospital. At the start of his term as director-general, and to a lesser extent during a large part of his stay in Geneva, Weisskopf was hampered in his movements. I vividly remember his tall figure walking with crutches through the corridors, obviously in pain, but he never lost his friendly disposition.
The first progress report to CERN Council in December 1961 clearly reflects the situation of CERN at the beginning of the Weisskopf era. Two years after the first beam at the PS, breakdowns and construction work on beams had prevented completely satisfactory use of this machine, whereas the smaller synchrocyclotron was working very well. Research director Gilberto Bernardini aptly remarked that European researchers with a nuclear physics background had had little difficulty orienting their work towards the synchrocyclotron. The PS, on the other hand, was a novelty for physicists, so certain mistakes had been made, particularly with insufficient time for preparation of experiments.
Nevertheless, 1961 was the first year with a vigorous research programme at CERN. Not surprisingly, organizational problems and difficulties in the management of relations with universities in the member states became acute. It was recognized that at least track-chamber experiments required collaboration with institutes outside CERN for the scanning, measuring and evaluation of data. For electronic experiments such a need was not yet seen.
The construction of the 2 m bubble chamber was continuing well, but experimental work was still done on the basis of data from the tiny 30 cm chamber and with the 81 cm Saclay chamber. The heavy liquid chamber had looked in vain for fast neutrinos in the neutrino beam. Simon van der Meer’s neutrino horn, intended to improve this situation, had just finished its design stage.
Addressing Council for the first time on the problem of the long-term future of CERN, the new director-general already strongly emphasized two directions of development which, as subsequent history has shown, were decisive for the laboratory’s future success. One project, based upon design work by Kjell Johnsen and collaborators, foresaw the construction of storage rings; the other was aimed at a much larger 300 GeV accelerator.
The financial implications of such proposals and the necessity to formalize budget preparations more than a year in advance led to the creation of a working group headed by the Dutch delegate, Jan Bannier. From this group emerged the remarkable “Bannier procedure”, under which firm and provisional estimates of budget figures for the coming years are fixed annually. It was decided that the cost variation index should not be provided automatically, and that Council should make a decision on this index each year.
First research successes
The discovery that the neutrinos associated with electrons and with muons are different was made in 1962, not at CERN, but at Brookhaven. In retrospect it was clear that CERN’s attempt was bound to fail for technical reasons. However, the disappointment did not overshadow some remarkable successes in the first full year of CERN under Weisskopf’s leadership. The shrinking of the diffraction peak in elastic proton collisions was first seen at CERN – in agreement with the new ideas of Regge pole theory, which had also originated in Europe. The cascade anti-hyperon was found simultaneously with Brookhaven, but the beta decay of the π meson and the anomalous magnetic moment of the muon were “pure” CERN discoveries. For the first time, the development of a novel type of scanning device for bubble-chamber pictures (the Hough-Powell device), which had started at CERN, was taken up by US institutions. However, Weisskopf had to complain to Council about the “equipment gap” at the PS, caused by the lack of increase in real value of the budgets in 1960 and 1961.
In some sense, the most important experimental result of 1963 was the determination of the positive relative parity between the Λ and the Σ hyperons, obtained at CERN in the evaluation of data from the 80 cm bubble chamber. This result was in disagreement with some much-publicized predictions from Heisenberg, and gave further support to the growing confidence in internal symmetries. Despite a long shutdown of the PS in order to install the fast ejection mechanism giving extracted beam energies up to 25 GeV, it now began its reliable and faithful operation, which to this day is the basis of all accelerator physics at CERN. Thanks to a neutrino beam 50 times more intense than that at Brookhaven, the first bubble-chamber pictures of neutrino events were made.
Parliament
In 1963 a new body of European physicists was created under the chairmanship of Edoardo Amaldi. Taking into account future plans outside Europe, this body strongly recommended the storage ring project, as well as the plans for a 300 GeV accelerator. CERN Council authorized a “supplementary programme” for 1964 to study the technical implications of these two projects. This Amaldi Committee was set up as a working group of CERN’s Scientific Policy Committee, and was the forerunner of the European Committee for Future Accelerators (ECFA), which was founded three years later, again under Amaldi’s chairmanship. ECFA has been the independent “parliament” of European particle physicists ever since.
Weisskopf’s clear vision of the importance of education resulted in his legendary theoretical seminars for experimentalists at CERN. I had the privilege of collaborating with him at that time on some aspects of the preparation of these seminars, and my view of theoretical physics has been decisively influenced by his insistence on stressing the physical basis of new theoretical methods.
From 1964, CERN’s synchrocyclotron started to concentrate on nuclear physics alone, whereas the PS was now the most intense and most reliable accelerator in the world. Another world premiere was the first radiofrequency separator, allowing K-meson beams of unprecedented energy. At CERN, also for the first time, small online computers were employed in electronic experiments. A flurry of fluctuating excitement was caused by the analysis of muon and muon-electron pairs in the neutrino events seen in the spark chamber. When it turned out that they could not have been produced by the intermediate W-boson (to be discovered at CERN exactly 20 years later at much larger energies), these events were more or less disregarded. Only 10 years later, after the charmed quark was found in the US, was it realized that these events were examples of charm decay – admittedly very difficult to understand on the basis of the knowledge in 1964. The unsuccessful hunt for free quarks also started in 1964, together with the acceptance of the concept of quarks as fundamental building blocks of matter.
Making decisions
Thanks to Weisskopf’s relentless prodding in 1964, CERN member states were convinced that the time was ripe for a decision on the future programme of CERN. Rather than rush into an easier but one-sided decision, Weisskopf was careful to emphasize the need for a comprehensive step involving three elements:
*further improvements of existing CERN facilities, comprising among other things two very large bubble chambers containing respectively 25 m³ of hydrogen and 10 m³ of heavy liquid;
*the construction of intersecting storage rings (ISR) on a new site offered by France adjacent to the existing laboratory;
*the construction of a 300 GeV proton accelerator in Europe.
Although a decision had to be postponed in 1964 – due to the difficult procedure to be set up for the site selection of the new 300 GeV laboratory – optimism prevailed that such a decision would be possible in 1965. After recommending the ISR supplementary programme in June 1965, the formal decision by Council was finally taken in December 1965.
The novel ISR project had no counterpart elsewhere in the world. Although experience had been gained at the CESAR test ring for stacking electrons and for high ultravacuum, this decision reflected the increasing self-confidence of European physics. Thus the foundation was laid for the dominating role of European collider physics, which eventually led to the antiproton-proton collider, the LEP electron-positron collider and the LHC proton collider. At the same time as the ISR project was authorized, a supplementary programme for the preparation of the 300 GeV project was also approved.
When Weisskopf’s mandate ended at the end of 1965, particle physics had passed through perhaps its most important stage of development. From being an appendix to nuclear physics and cosmic-ray experiments, it had become a field with genuine new methods and results. The many new particle states disentangled by CERN and other laboratories gradually found a place in a framework determined by a new substructure, the quarks. In addition, many new discoveries in weak interactions, and especially at the unique neutrino beam of CERN, showed close similarities between weak and electromagnetic interactions, and paved the way for their unified field theory.
Much of the enthusiasm that enabled CERN experimentalists to participate so successfully was due to Weisskopf. He made a point of regularly talking to the scientists, and more than once he visited experiments during the night. These frequent contacts on the experimental floor with physicists at all levels gave CERN a new atmosphere and created contacts between different groups – something which was lacking before. Weisskopf himself was aware of this. When asked on his departure from CERN what he thought his main contribution had been, he replied that the administration and committees would have functioned perfectly well without him, but that he thought he had given CERN “atmosphere”.
During the Weisskopf era, directions were set for the distant future. Almost 40 years later, the basis of the CERN programme is still determined by those decisions taken in 1965. How could Weisskopf have been so successful in his promotion of CERN in Europe, at a time when there was always at least one member state with special problems regarding the support of particle physics and CERN?
Politicians must trust valued experts. Weisskopf achieved so much for the laboratory because he was deeply trusted by the representatives of the member states. Although enthusiastic in the support of new ideas in scientific projects, he never lost his self-critical attitude, and was quick to try to understand opposing points of view in science and in scientific policy. The enthusiasm, honesty and modesty of Victor Weisskopf have proved to be a rich inheritance, and have determined the future of CERN.
For more than 40 years, CERN’s library has collaborated with institutes and universities worldwide to collect carefully documented results of scientific research. Initially, this prodigious output was all on paper, and the CERN library regularly received papers from scientists at these institutes and universities via mailing lists. Because of its visibility, CERN received far more of this material than most institutes, and a major attraction of a visit to CERN was to peruse the latest pre-prints on view in the library.
With the advent of electronic publishing, more and more documents became accessible online. To complete the picture, documents still received on paper were scanned to offer Web access. Today this practice is diminishing as grey literature (library-speak for pre-prints and other material not published by a publishing house) in science, particularly in physics, is more widely available in electronic form.
Saving time and money
Having distributed documents for some years both on paper and electronically, many institutes have now chosen to use only the electronic route. This offers undeniable advantages: cost savings; quick and easy distribution; full text availability at a distance; the possibility of enriching the catalogue; and cheap online access, for example. The virtual library has become a reality. Paper documents are increasingly rare, and authors generally prefer to submit their papers electronically. Most major research centres also offer Web access to their documents and have ceased to send out paper copies via mailing lists, encouraging other scientific libraries and the researchers themselves to consult their Web pages and databases.
Faced with this evolution, library acquisition policies must be reconsidered and adapted to the new standards of scientific information dissemination.
The problem in this new context is the multiple consultation of databases. To find a document, a researcher must consult many resources, which is a time-consuming and tedious task with often dubious results. To facilitate searching and to offer users a single search interface, the CERN library chose to import as many electronic documents as possible. In 1999 the information support team introduced its Uploader program, which allows automatic importation of bibliographic records extracted from several sources. This has led to three main advantages: papers can now be found directly from institutes’ sites; the number of documents received from different research centres has increased; and new databases have been explored.
From any database or Web page, Uploader formats the records and adapts them to the cataloguing format used at CERN – Machine Readable Cataloguing (MARC). The program also updates existing records, searching for duplicates before importation. Which databases to explore was a difficult choice. First, the websites of all institutes from which CERN still received paper documents were consulted to see if the institutes offered the same documents online. This showed that more or less all institutes offer their publications on the Web in some form.
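The article does not reproduce the Uploader code, but the workflow it describes – harvest records from an external source, map them to MARC fields, check for duplicates and then import – can be sketched roughly as follows. The field choices, record layout and names below are illustrative assumptions, not the actual CERN implementation.

```python
# Minimal sketch of an Uploader-style import step (hypothetical names and record
# layout; the CERN program's real MARC conversion is not described in the article).
# It maps a harvested record to a simplified MARC-like dictionary and skips
# duplicates already present in the local catalogue.

def to_marc(source_record):
    """Map a harvested record to a simplified MARC-style dictionary."""
    return {
        "100__a": source_record.get("author", ""),         # first author
        "245__a": source_record.get("title", ""),          # title
        "037__a": source_record.get("report_number", ""),  # report/pre-print number
        "856__u": source_record.get("url", ""),            # link to full text
    }

def import_records(source_records, catalogue):
    """Import new records, skipping ones whose report number is already catalogued."""
    imported = []
    for rec in source_records:
        marc = to_marc(rec)
        key = marc["037__a"] or marc["245__a"]
        if key and key not in catalogue:       # duplicate check before importation
            catalogue[key] = marc
            imported.append(marc)
    return imported

# Example: two harvested records, one of which is already in the catalogue.
catalogue = {"CERN-TH-2000-1": {"245__a": "An existing record"}}
harvested = [
    {"title": "A new thesis", "author": "A. N. Author",
     "report_number": "XYZ-2002-7", "url": "http://example.org/thesis.pdf"},
    {"title": "An existing record", "report_number": "CERN-TH-2000-1"},
]
print(len(import_records(harvested, catalogue)))  # -> 1 record imported
```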
This study also revealed that CERN received, via mailing lists, only a third of the documents available on the Web. There are two possible explanations for this: perhaps for economic reasons research centres make a selection of which documents to send out; and mailing lists are not always kept up to date. The need for automatic importation of these documents from websites became obvious, but there were technical problems to overcome.
Sources can be divided into two types: Web pages and online databases, which are handled differently. Medium-sized research centres and information sources that do not offer online databases generally offer Web pages presenting the work of their researchers (usually theses). Searching can be primitive if no real search engine is implemented. The number of documents is also often limited. This means that manual submission of the full text of the documents is the most efficient way of acquiring the documents. The constant evolution of Web pages also argues against automatic importation. Since alerting services for such sites are rare, the CERN library set up its own alert system for some 80 information sources at 30 institutes. This tells the librarians when the available information changes, allowing them to acquire new documents as they become available.
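The text does not say how this alert system is implemented. One common approach is to store a hash of each watched page and compare it with the value from the previous run; the sketch below illustrates that idea with placeholder URLs and file names, and is not the library's actual system.

```python
# Minimal sketch of a page-change alert based on hashing page content and
# comparing it with the previous run (an assumed approach; the CERN library's
# own alert system is not described in the article). URLs are placeholders.
import hashlib
import json
import urllib.error
import urllib.request

WATCHED_PAGES = [
    "https://example.org/institute-a/theses.html",
    "https://example.org/institute-b/preprints.html",
]
STATE_FILE = "page_hashes.json"

def page_hash(url):
    """Return a SHA-256 digest of the page content, or None if it is unreachable."""
    try:
        with urllib.request.urlopen(url, timeout=30) as response:
            return hashlib.sha256(response.read()).hexdigest()
    except urllib.error.URLError:
        return None

def check_for_changes():
    """Compare each watched page against the digest stored on the previous run."""
    try:
        with open(STATE_FILE) as f:
            previous = json.load(f)
    except FileNotFoundError:
        previous = {}
    changed = []
    for url in WATCHED_PAGES:
        digest = page_hash(url)
        if digest is None:
            continue                           # unreachable: try again next run
        if previous.get(url) != digest:        # content differs from last run
            changed.append(url)
        previous[url] = digest
    with open(STATE_FILE, "w") as f:
        json.dump(previous, f)
    return changed                             # pages a librarian should revisit

if __name__ == "__main__":
    for url in check_for_changes():
        print("Changed:", url)
```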
Online databases often allow multicriteria searching. In contrast to Web pages, however, it is usually impossible to put an alert on the search results. This means that for online databases that do not offer an alert system, a different approach is needed. The method adopted by the CERN library is a monthly or annual search.
The Uploader program helps CERN’s librarians to manage an effective document supply service, but the huge diversity of online information sources means that there is no shortage of work for the librarians. Document structure can vary from page to page, or even within the same page. In the majority of cases the pages are therefore presented as free text with no common structure. With virtually no constraints imposed by databases, no common import protocol is possible, and material must be input manually. Inconsistencies can arise when Web pages are not handled rigorously, causing confusion in bibliographic cataloguing – most frequently for authors’ names. Some databases allow external submission of documents and bibliographies, which results in many irregularities and loss of homogeneity in the presentation of the documents. Information can be presented in multiple forms. Pre-print numbers, for example, can appear as IUAP-00-xxx (number not yet attributed), CERN-TH-2K-1 (instead of CERN-TH-2000-1) or MPS15600 (instead of MPS-2000-156). Vital pieces of information, such as collaboration lists, are sometimes missing. All of these problems require traditional librarianship skills. CERN’s library aims to offer a coherent and homogeneous database, with validated and improved metadata. Knowledge databases recognize retrievable work and provide links to relevant articles on the Web, while a computer program appends and corrects bibliographic data, keeping manual checking to a minimum.
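The report-number variants quoted above lend themselves to simple rule-based cleanup. The sketch below handles only the two garbled forms cited in the text; it is an illustration of the kind of normalization involved, not the library's actual rules.

```python
# Illustrative normalization of the two garbled report-number forms quoted in the
# text (CERN-TH-2K-1 -> CERN-TH-2000-1, MPS15600 -> MPS-2000-156). These two rules
# are examples only; a real cataloguing workflow would need many more.
import re

def normalize_report_number(raw):
    # "2K" used as shorthand for the year 2000, e.g. CERN-TH-2K-1
    m = re.fullmatch(r"(?P<prefix>[A-Z-]+)-2K-(?P<seq>\d+)", raw)
    if m:
        return f"{m.group('prefix')}-2000-{m.group('seq')}"
    # sequence number and two-digit year run together, e.g. MPS15600 -> MPS-2000-156
    m = re.fullmatch(r"(?P<prefix>[A-Z]+)(?P<seq>\d+)(?P<yy>\d{2})", raw)
    if m:
        return f"{m.group('prefix')}-20{m.group('yy')}-{m.group('seq')}"
    return raw  # leave anything unrecognized untouched for manual checking

print(normalize_report_number("CERN-TH-2K-1"))  # CERN-TH-2000-1
print(normalize_report_number("MPS15600"))      # MPS-2000-156
```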
There is no doubt that electronic uploading saves a considerable amount of time compared with manual submission. It has also greatly increased the number of documents made accessible and available at CERN. However, source databases must be carefully selected. The richer the database, the more time-consuming the procedure becomes. In addition, the volatility of Web pages requires close follow-up. Automatic importation has taken over from manual submission, but specialist monitoring remains essential.
The electronic approach was initially investigated at CERN on a test basis, to ascertain technical feasibility and to judge what the advantages would be. Since then, its use has spread and the laboratory has reached agreements with Cornell, Fermilab and several other information sources. Today, more than 90% of the material entering the CERN library database is imported or created electronically. Of this, only 8% comes from CERN.
The Large Hadron Collider (LHC) is without a doubt the most technologically challenging project that CERN has ever embarked upon. It is also the most costly, and it was approved under the strictest financial conditions that CERN has ever faced. This should have been cause for the laboratory to reflect on its way of working, but reflection did not come until September last year, when the results of a comprehensive cost-to-completion review showed that CERN would have to find an additional SwFr 850 million for the LHC and its experiments.
CERN is built on a tradition of excellence, in terms of both its personnel and its facilities. In the world of particle physics, the laboratory has a well deserved reputation for building the finest machines. Our first big accelerator, the PS, was completed in 1959 and is still going strong. And had the SPS not been built to CERN’s exacting standards, the Nobel prize-winning antiproton project might never have got off the ground. With the LHC being inaccessibly encased in its cryostat, high standards are needed more than ever. CERN, however, must also become more cost-aware.
Moving forward
At 18% of the material cost, the LHC overrun does not seem excessive for a project of this complexity, and is comparable to the percentage overrun incurred in the construction of the Large Electron Positron (LEP) collider. But the bill for the LHC is three times that for LEP. The lesson we have learned is that contingency in big projects must now be measured in absolute and not percentage terms. Our mistake was that we failed to realize that the scale of the LHC would require new monitoring and control systems at all levels of the laboratory.
Such systems are now being introduced with advice from an External Review Committee. CERN will introduce earned-value management techniques to allow the financial health of the laboratory to be easily monitored at any time, and we will move to full personnel-plus-materials accounting, which will introduce greater transparency. These measures are essential for completing the LHC within the boundaries set by last year’s cost-to-completion review, and they will position CERN well for the longer term.
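Earned-value management reduces, at each reporting date, to a comparison of three figures per work package: the planned value (PV), the earned value (EV) and the actual cost (AC). The sketch below uses invented numbers purely to illustrate the standard indicators that make "financial health at any time" a concrete quantity; it is a generic illustration, not CERN's accounting system.

```python
# Generic earned-value indicators (standard definitions; the figures below are
# invented for illustration and are not LHC numbers).

def earned_value_indicators(pv, ev, ac):
    """Return the usual cost and schedule performance indices and variances."""
    return {
        "cost_performance_index": ev / ac,      # CPI < 1 means over budget
        "schedule_performance_index": ev / pv,  # SPI < 1 means behind schedule
        "cost_variance": ev - ac,
        "schedule_variance": ev - pv,
    }

# Hypothetical status: 40 units of work planned, 35 achieved, 42 spent so far.
print(earned_value_indicators(pv=40.0, ev=35.0, ac=42.0))
```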
CERN’s mission is to provide the facilities that its user community wants. In the past, that has meant a diverse range of particle beams serving a wide range of relatively small experiments. With the LHC, our user base has consolidated to give a smaller number of much larger experiments, and we must adapt our facilities accordingly. That means a narrower programme, focused on the LHC. A large part of the required resources can be found by reallocating budget and personnel to the LHC. Further reallocations will come from internal restructuring, postponing the start-up of the LHC until 2007, extending the pay-back period until 2010 and cutting back on accelerator R&D until the LHC is running.
I am convinced that these moves will allow CERN to maintain its tradition of excellence. We continue to host a lively and diverse low-energy programme. The LHC will be the world’s foremost facility for high-energy physics, and by maintaining a minimum R&D base, we are providing a platform for the long term.
However, we are not yet out of the woods. An important part of the resources we plan to reallocate to the LHC has been identified but not yet secured. It will take a coherent effort across the laboratory to ensure that human resources released by the reduction of non-LHC activities are effectively deployed to the LHC. However, we are heading in the right direction, and I have every faith in the ability of CERN’s staff and users to meet the challenge.
The lessons CERN has learned are lessons for us all. The need to measure contingency in absolute terms requires management tools, risk analysis and strategy tuned to the size of the projects, much as we choose our physics instruments in relation to the precision we are aiming to achieve. Time will tell how these considerations can be applied to future projects. For now, however, we have learned our lesson and CERN is set to emerge leaner and fitter to face the future.
by John Dirk Walecka, Cambridge Monographs on Particle Physics, Nuclear Physics and Cosmology, Cambridge University Press, ISBN 0521780438, £60 (€98).
The author is well placed to write a monograph on this classic subject which, like no other, bridges the gap between nuclear and particle physics through common concepts and techniques. He can look back on a long and distinguished career in this field as professor of physics at Stanford University, as scientific director of CEBAF (now Jefferson Lab), and now as professor of physics at the College of William and Mary.
The book is based largely on a series of lectures on the subject given at CEBAF. Given the author’s track record, it is only natural that the book should focus on electron scattering in the few-gigaelectronvolts energy domain. At the same time, it exploits the power of the theoretical concepts developed originally for low-energy scattering, to address an audience much wider than students and researchers at laboratories such as MIT Bates, JLab and the Mainz microtron. This is achieved through a clear structure and pedagogical distinction between the theoretical framework and practical applications.
The book is organized into five parts, two of which contain the core material. Part 2 – “General analysis” – is one of the most comprehensive reviews of the theory and phenomenology of electron-nucleus and electron-nucleon scattering that can be found in the literature today. This chapter is recommended reading not only for nuclear physicists, but also for every graduate student working on electron, muon and neutrino scattering, to acquire a detailed understanding of the roots and the development of the formalism applied to present-day high-energy experiments. It includes discussions of polarized deep inelastic scattering and of parity violation in electron scattering.
Part 4 – “Selected examples” – is targeted specifically at nuclear physicists. The applications of scattering theory focus on detailed discussions of classic nuclear-structure problems and experiments; three sections on the quark model, QCD, and the Standard Model embed the subject in the wider theoretical context. Recent deep inelastic electron and muon scattering experiments are not covered in a systematic way (however, they have been discussed in many other excellent reviews).
The three remaining parts of the book are more succinct. Part 1 is an easy-to-read introduction, and part 3 discusses quantum electrodynamics and provides an introduction to radiative corrections, which unfortunately is too concise to be of much practical use. Finally, part 5 gives an overview of future directions, which again focuses on the CEBAF/JLab experimental programme. A useful feature is an extensive set of appendices, providing handy reference material.
The author has accomplished a successful blend of textbook and monograph. Written by a nuclear physicist for nuclear physicists, it is a must for students and seasoned researchers alike engaged in electron-nucleus scattering. This book will also be eminently useful and rewarding for the deep-inelastic-scattering community to read, to learn about the origin of their field and its intimate relationship with one of the most important subject matters in nuclear physics.