CERN’s director-general Robert Aymar presented a seven-point scientific strategy for the organization at the 128th session of the CERN Council on 18 June. Completion of the Large Hadron Collider (LHC) project, with start-up on schedule for 2007, heads the list. This is followed by the consolidation of existing infrastructure at CERN to guarantee reliable operation of the LHC. The third priority is an examination of a possible future non-LHC experimental programme.
Fourth on the list is a role for CERN in the growing coordination of research in Europe. Aymar cited as examples the Coordination of Accelerator Research in Europe (CARE) project, which could contribute to an LHC upgrade by around 2012, and the EUROTEV project through which CERN will participate in generic R&D issues related to a possible future linear collider. Both of these projects are partly financed by the European Union.
The fifth priority is the construction, starting in 2006, of a linear accelerator injector at CERN to provide more intense beams for the LHC. The sixth is an intensified R&D effort towards the Compact Linear Collider (CLIC), a novel accelerator technology under development at CERN that could open the way to much higher energies than are available today. Aymar is appealing to laboratories around the world to join the project, and has so far received 18 expressions of interest.
The seventh and final point in the new strategic plan is to prepare a comprehensive review of CERN’s long-term activity, to be available by 2010 when results from the LHC will have given a first description of the landscape of particle physics for years to come.
The current status of the LHC project was the subject of a report at the same session by Lyn Evans, the LHC project leader. The programme for installation of the LHC is currently being reviewed, following a delay in installing the distribution line for the cryogenic liquids that will cool the machine to 1.9 K. These difficulties have now been resolved, and the contractor has delivered a new schedule that foresees completion of the distribution line by February 2006, with two octants fully installed by the end of 2004. This compresses the overall schedule, which now requires the installation of two octants at a time to make up for the delay. However, Evans strongly reaffirmed the intention to start up the LHC in 2007, with first collisions in the summer. He also drew attention to the global collaboration that is making the LHC a reality. Most of the components from non-member states are now complete.
The bubble-chamber programme at CERN began in an atmosphere of enthusiasm soon after the foundation of the organization. The then recently invented bubble chamber was marvellously appropriate for exploiting the new CERN accelerator, the Proton Synchrotron (PS), which could reach energies where new phenomena were expected. The emergence of the electronic computer around the same time also provided a means of dealing with the large numbers of bubble-chamber pictures. New instruments would be built to measure the thousands of metres of film, and so began a period of intense activity at CERN.
The bubble chamber, which was invented in 1952 by Donald Glaser in Michigan, played an important role in experimental physics at particle accelerators. Glaser showed that the trajectories of energetic particles could be made visible by photographing the bubbles that form within a few milliseconds after particles have traversed a suitably superheated liquid. For three decades bubble chambers were to dominate particle physics, especially research on strange particles, until the mid-1980s when developments in electronics and new wire chamber detectors, together with the start of a new era of collider physics, brought an end to the bubble-chamber programmes.
The early bubble chambers were very small, but over the years they increased in size, with the largest containing 20 m3 of liquid. More than 100 bubble chambers were built throughout the world, and more than 100 million stereo pictures were taken (“30 years of bubble chamber physics” 2003). More than half of these pictures were taken at CERN, beginning with the 30 cm hydrogen bubble chamber and followed by the 81 cm Saclay chamber, the 2 m CERN chamber and finally the Big European Bubble Chamber (BEBC).
In the 1960s bubble chambers became the main tool at CERN for the study of resonances and strange particle physics, keeping hundreds of physicists busy for a decade. The 30 cm hydrogen chamber came into operation at CERN in 1960, followed four years later by the 2 m hydrogen chamber. From the early 1960s CERN also pioneered the use of bubble chambers for the study of neutrino interactions, and it was the great interest in this field that led to the conception and construction of giant bubble chambers such as Gargamelle and BEBC.
In 1966 CERN, France and Germany launched the BEBC project – a giant cryogenic bubble chamber surrounded by a 3.5 T superconducting solenoid magnet that operated at CERN in the West Area neutrino beam line of the Super Proton Synchrotron (SPS) until 1984. Gargamelle, a very large heavy-liquid (freon) chamber constructed at the Ecole Polytechnique in Paris, came to CERN in 1970. It was 2 m in diameter, 4 m long and filled with 10 tonnes of liquid at 20 atmospheres. With a conventional magnet producing a field of almost 2 T, Gargamelle was the tool that in 1973 allowed the discovery of neutral currents.
The contacts between the American laboratories and CERN were very close. The physicists at the time were inspired by the example of the Lawrence Berkeley Laboratory, where bubble-chamber experiments using beams from the Bevatron were available before CERN’s PS started operation. In particular there was close collaboration on the technical side between CERN and the Brookhaven Laboratory. It was during these early years of CERN that the first international collaborations on experiments began, thanks to the distribution of bubble-chamber film and joint analysis efforts. Among the first of the groups measuring film from CERN to start a collaboration were those from Bologna and Pisa, who already had experience analysing film from the 30 cm propane chamber at Brookhaven (Eisler et al. 1957). A group from Warsaw with experience in nuclear emulsions, along with other groups who had worked previously with cloud chambers, also soon joined the collaborations with CERN to analyse bubble-chamber film.
The bubble-chamber exposures in beams at the PS were based on physics proposals, often hotly debated in the experimental committees at CERN. The results obtained by the many collaborations using film from the laboratory contributed to establishing bubble-chamber physics at CERN on the world stage. For example, many of the major verifications of SU(3) symmetry – apart from the discovery of the Ω⁻ particle – came from bubble-chamber experiments at CERN during the 1960s and 1970s. The success of SU(3) not only illustrated the importance of group theory in contemporary high-energy physics, but also became a vital ingredient of the modern Standard Model of particle physics.
One of the most important highlights of the bubble-chamber era – and an outstanding success story for CERN – was the discovery of weak neutral currents in Gargamelle, the giant heavy-liquid bubble chamber. The chamber was first exposed to the neutrino beam at the PS, where the decisive detection was made. Parallel with the operation of the chamber and the evaluation of the photographs from the PS exposure, detailed background calculations were made on neutron-induced events, which could simulate neutral currents. It turned out that this background was only 15% of the signal and so neutral currents were established. Later, Gargamelle was transferred to the neutrino beam at the SPS and equipped with an external muon identifier.
Beyond physics
The results of the bubble-chamber programme at CERN extended well beyond the intrinsic interest of the physics. Bubble-chamber studies played a major role in the reconstruction of physics in post-war Europe and had a great impact on its further development. Bubble-chamber pictures could be easily transported, and efforts were made to persuade industry to construct measuring instruments that became commercially available. As had happened for nuclear emulsions a few years earlier, many new groups formed in research centres and universities, where young people were trained. These collaborations produced much of the significant physics. At CERN the use of digital computers and data-handling techniques for experiments began with bubble chambers, and their use in other fields of high-energy experimentation followed later.
Bubble-chamber physics became a training ground for physicists, engineers, technicians, and for specialists in computers and data handling. A great number of new techniques were developed to improve particle identification both inside and outside the track-sensitive volume of the bubble chambers. There was the question, for instance, of how best to convert photons resulting from π0 decays inside a hydrogen bubble chamber. This led at CERN to the operation of track-sensitive hydrogen targets, surrounded by neon-hydrogen mixtures, to improve the photon-conversion efficiency.
Over the years there were many other technical developments (Mulholland and Harigel 2003). Proportional wire chambers surrounding the bubble chambers allowed the identification of muon tracks escaping from the inside through the tanks and magnets of the bubble chamber.
Special fish-eye optics, stereo photography and holography were developed to optimize the recording of the particle tracks. Finally, rapid-cycling bubble chambers and hybrid systems, which combined bubble chambers with spectrometers and counters, were constructed. Interaction triggers, in particular for signals of charm events, were used to fire the flash so as to take photographs only of interesting physics events in order to improve on the slow data-taking rate of bubble chambers. CERN’s competence in engineering and cryogenics, which was developed in the Track Chamber Division through the building and exploiting of bubble chambers, later benefited the construction of the detectors for the Large Electron Positron collider and the experiments for the Large Hadron Collider (LHC).
A whole generation of physicists wrote their PhD theses based on bubble-chamber data. Perhaps because of the large numbers of young people who were working in the field of bubble chambers and film analysis, many are to be found today as leaders of experimental teams or laboratories, and indeed in many other diverse areas.
Lastly, and arguably most importantly, bubble-chamber physics initiated on a large scale the symbiosis between CERN and its community of users. Laboratories that at first limited their activities to measuring pictures and analysing the results soon diversified their activities and links with CERN expanded. This became one of the ingredients of CERN’s success. Bubble chambers had initiated the international collaborations for performing experiments that are now extending to the worldwide collaborative efforts for the LHC.
The bubble-chamber era has now ended and the remains of the major CERN bubble chambers are today exhibits in science museums – BEBC and Gargamelle are in the garden outside the Microcosm exhibition at CERN and the 2 m chamber was donated to the Deutsches Museum in Munich. However, bubble chambers live on in other ways, not only through the many beautiful postcards and books showing bubble-chamber tracks, but also through many of the original ideas and the “bubble chamber philosophy” that continues to play an important role in physics today.
Acknowledgments
The author would like to acknowledge some valuable sources of material. Much of this article is based on a memorandum written in 1987 by the late Yves Goldschmidt-Clermont to the CERN history committee (Goldschmidt-Clermont 1987). The proceedings of the “Bubbles 40” conference, held at CERN in 1993 to commemorate the 40th anniversary of the invention of the bubble chamber, document the evolution and impact of bubble chambers on particle physics and physics discoveries and outline their technological, sociological and pedagogical legacies (“Bubbles 40” 1994). Jack Steinberger from CERN recounted his time with bubble chambers in memories of his early life (Steinberger 1997), and the impact of Charles Peyrou on the bubble-chamber programme at CERN is recalled in a tribute published after his death in 2003.
Ettore Majorana was the Vincent van Gogh of theoretical physics, endowed with phenomenal talent but tortured by his own personality; both were geniuses whose unconventional epic work was first recognized by only a few contemporaries, but whose fame ultimately came only after a premature death, by suicide for van Gogh, and possibly so for Majorana too.
Enrico Fermi once said: “Few are the geniuses like Galileo and Newton. Well Ettore was one of them.” Appointed professor at the University of Naples in 1937, Majorana mysteriously disappeared in March 1938 after a brief trip to his native Sicily. After having cabled “I shall return tomorrow”, he was not aboard the ferry from Palermo when it docked at Naples. Despite extensive searches, no trace of him was ever found. The only clue was an enigmatic remark “The sea has refused me (il mare mi ha rifiutato)” in the same cabled message from Sicily, suggesting that his plan to jump overboard unseen on the outward trip had failed and so he had opted for another attempt on the return journey. It was a major loss for Italian physics, compounded later that year when Fermi emigrated to the US.
After having first studied engineering, Majorana graduated in physics with Fermi in 1928 and went on to become a key member of Fermi’s newly established and subsequently famous Rome group of the early 1930s (which included, among others, Edoardo Amaldi, Ugo Fano, Bruno Ferretti, Bruno Pontecorvo, Giulio Racah, Emilio Segrè and Gian Carlo Wick).
Majorana was chronically diffident, and this shyness extended to his own publications. He formally published just nine papers in his lifetime, including his 1932 relativistic theory of particles with arbitrary spin. His final paper in 1937 was called “A Symmetrical Theory of the Electron and the Positron” and introduced the revolutionary concept of what became known as a “Majorana particle” – a neutral spin 1/2 particle that is its own antiparticle – now of vital importance for neutrino physics. However, Majorana’s archived papers in the Domus Galileana in Pisa show that he had already formulated these ideas in 1933, soon after the positron had been discovered.
This book looks instead at Majorana’s first steps in physics research, carefully documented by him in five notebooks (Volumetti) from 1927-1932, about one notebook per year, and extending from his formal coursework to original research covering topics ranging from the effect of a magnetic field on melting point to solutions of the Fermi-Thomas equation. These papers are translated into English but retain Majorana’s original format and conventions.
As Majorana’s contributions to physics have increased in value, several other collections of his work have appeared. There is also Recami’s excellent biography Il Caso Majorana (Mondadori), but the volume now published by Kluwer is the first rendition of any of Majorana’s work into English. This book is the outcome of some dedicated and painstaking work by the editors in translating a wealth of difficult material and reproducing Majorana’s original presentation. It was commendably supported by the Italian Embassy in the US and by the Italian government. After this effort, the highly motivated editors are looking towards a new volume of Majorana’s subsequent research notes.
The “fly” in question is the nucleus and the “cathedral” is the atom, and this is the account of “how a small group of Cambridge scientists won the race to split the atom”. The story begins in 1909, after Ernest Rutherford’s student, Ernest Marsden, found that when alpha particles are scattered by a gold foil the Rutherford formula is not exactly satisfied. There is therefore evidence that the nucleus is not just a point, but a “fly”. After moving from Manchester to Cambridge, Rutherford and his collaborators wanted to know more about what is inside the “fly”.
In the Cavendish Laboratory at Cambridge in 1927 there were two lines of approach to the study of the nucleus: the traditional one using naturally produced particles such as alphas, gamma rays, etc, and another that was trying to accelerate particles artificially in the laboratory.
James Chadwick was following the first approach, and this led him to the discovery of the neutron in 1932 and to the Nobel prize in 1935. In 1930 two German physicists, Walther Bothe and Herbert Becker, had reported that by bombarding beryllium nuclei with alpha particles from a polonium source they had registered the emission of a powerful neutral radiation. Later, the Joliot-Curies reproduced the phenomenon and proved that the mysterious radiation could knock protons out of a block of paraffin – something that gamma rays of available energies could not have done. When their “Note aux comptes rendus de l’Académie des Sciences” arrived in Rome in January 1932, according to what Gian Carlo Wick, who was present, told me, Ettore Majorana exclaimed: “Stronzi (idiots), they have not understood that it is the neutron.” Soon after, Chadwick proved by careful experimentation that it really was the neutron, with a mass close to that of the proton. Another important experiment belonging to the same category, but not mentioned in the book, is the photodisintegration of the deuteron into a proton and a neutron by Chadwick and the then young Maurice Goldhaber (who is now 93 years old). They showed that the neutron was slightly heavier than the proton.
The other approach, trying to accelerate particles, marked the beginning of a new era that CERN continues, through its past achievements and its future projects. This is why the detailed account given in this book, including the successes and failures and the personal lives of the protagonists, seems so interesting.
The main protagonists were a young Irishman, Ernest Walton, a Briton named John Cockcroft, their friend Thomas Allibone from the Metropolitan-Vickers laboratory, and naturally Rutherford himself, who said in his address as president of the Royal Society in November 1927: “It would be of great scientific value if it were possible in laboratory experiments to have a supply of electrons and atoms of matter in general, of which the individual energy of motion is greater even than that of the alpha particle. This would open up an extraordinary new field of investigation that could not fail to give us information of great value, not only in the constitution and stability of atomic nuclei but also in many other directions.”
Cockcroft and Walton worked very hard within the limits allowed by the rules of the Cavendish Laboratory, which closed at 6 p.m. and during holidays. But the friendly competition between Europe and the US was already fierce, a little like we see today! In the US Lawrence and Livingston were working on their cyclotron, Van de Graaff had the machine that bears his name, and Lauritsen had his X-ray machine. There were also competitors in Europe, such as Greinacher, who independently of Cockcroft and Walton had the idea of voltage multiplication.
At this point it is necessary to mention a theoretician of Russian origin, George Gamow, who, as the author explains very well, played an important role in predicting that the attempts of Cockcroft and Walton would be successful. Gamow was probably the first to realize that quantum mechanics applied not only to the electrons running around the nucleus but also to the constituents of the nucleus. Before the discovery of the neutron he was extremely unhappy at having electrons inside the nucleus because their wavelength was much larger than the size of the nucleus. He produced a beautiful explanation of the alpha decay of nuclei by the tunnelling of alpha particles through the Coulomb barrier of the nucleus, a typical quantum mechanical effect. The alpha particles don’t need to have an energy as high as the classical barrier but can “borrow” energy for a short time to cross it. This was very important for the Cambridge machine builders because it meant that protons can penetrate inside the nucleus without having the full energy to cross the barrier. Even Rutherford, who had a certain aversion to theory, liked this.
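In modern textbook notation – not the book’s own presentation – the probability for a proton of velocity v to penetrate the Coulomb barrier of a nucleus of charge Z2e is governed, well below the barrier, by the standard Gamow factor (Gaussian units):

$$P \;\sim\; \exp(-2\pi\eta), \qquad \eta = \frac{Z_1 Z_2 e^2}{\hbar v}.$$

For protons on lithium (Z1 Z2 = 3) this factor falls steeply as the accelerating voltage is reduced, but it never vanishes, which is why a measurable reaction rate could survive even at energies far below the top of the barrier – the point that mattered for the Cambridge machine builders.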
The Cockcroft-Walton accelerator finally started working at the beginning of 1932, but the protons they directed at beryllium and lithium targets did not seem to produce any clear effect. They were looking for gamma rays and saw practically none. This was a serious disappointment and they feared that Lawrence, with a higher energy, would win the race. Rutherford was beginning to get irritated. So they tried using a scintillation detector of zinc sulphide that had been used in the past to detect alpha particles and then they saw a beautiful signal, which was immediately interpreted by Rutherford to be a pair of alpha particles. This was at 800,000 volts, clearly below the classical Coulomb barrier. Then in complete agreement with Gamow’s theory, they lowered the voltage to 150,000 volts and still saw the effect. In this case Rutherford broke the rule of closing the laboratory at 6 p.m. The time of the “night shift” was approaching.
The reaction observed was p + 7Li → α + α, with a kinetic energy release of about 8.6 MeV per alpha particle, some 17 MeV in total. It was a tremendous success and as the subtitle of the book says, they “won the race to split the atom”. The press jumped on that, but Cockcroft and Walton disliked the statement that this could be “a new source of energy”. In fact the press was right. The discovery of the fission of uranium was not too far away and this kind of proton-induced fission, except for the fact that it uses light elements, is not fundamentally different from what is proposed now by Carlo Rubbia as a new source of energy. Later, long after Rutherford’s unfortunate death in 1937, which followed delays over a hernia operation, Cockcroft and Walton received the Nobel prize in 1951.
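As a cross-check on those numbers – using standard tabulated atomic masses rather than any figures quoted in the book – the energy balance of the reaction works out as

$$Q = \left[m(^{1}\mathrm{H}) + m(^{7}\mathrm{Li}) - 2\,m(^{4}\mathrm{He})\right]c^{2} \approx (1.00783 + 7.01600 - 2\times 4.00260)\,\mathrm{u} \times 931.5\ \mathrm{MeV/u} \approx 17.3\ \mathrm{MeV},$$

shared almost equally between the two alpha particles, about 8.6 MeV each. Comparing this release with the mass deficit was one of the earliest experimental checks of E = mc².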
There are also many other people who are rightly quoted in the book, such as Kapitza, who after spending several years in Cambridge was forced by Stalin to stay in the USSR; Blackett, who started cosmic-ray research with Occhialini; and the theoreticians – Dirac of course, but also Mott, Massey, Hartree and so on.
To end this review I would like to complete the postscript of the book, in particular regarding the links of Cockcroft with CERN. A biography of Cockcroft by Ronald Clark says (on page 101) that he “loaned someone from Harwell to build one of the accelerators of CERN”. This accelerator was the Proton Synchrotron (PS), and the “someone” was John Adams. Cockcroft directed radar research in Malvern during the Second World War and one of the people he hired was an engineer called John Adams (once, John complained to me that journalists thought he was originally a “mechanic”). Another was Mervyn Hine (who died very recently, see his obituary in “Faces and Places”). When the war was over Cockcroft retained these two people at the Atomic Energy Research Establishment at Harwell, where they worked on accelerator research; they then moved to CERN with the fantastic success that we all know. Until 1992 the pre-injector of the PS was a Cockcroft-Walton accelerator, and Cockcroft was a member of the CERN Scientific Policy Committee from 1956 until 1961. So there is a link between Rutherford, Cockcroft and Adams for which we must have a great deal of gratitude.
Canada’s national laboratory for particle and nuclear physics, TRIUMF, has proposed a new five-year plan to take it through to the end of the decade. The plan is ambitious, but realistic, and builds on the laboratory’s past and present accomplishments. It aims to deliver first-rate science in a timely and efficient manner and will ensure that TRIUMF remains competitive at the highest international level.
TRIUMF is funded by the Canadian government on a five-year cycle, with the current one running from 2000 to 2005. Over the past two years scientists and staff at TRIUMF and its associated universities have been discussing the plan to be forwarded to the government for funding in 2005-2010. The plan was peer-reviewed by two international committees in the latter part of 2003 and presented to the National Research Council of Canada in February 2004. Council recommendations will be transmitted to the higher levels of government for funding decisions later this year.
The underlying theme is to provide Canadian scientists with access to world-class subatomic facilities at TRIUMF, and to provide scientific and engineering support to enable Canadian particle-physics groups to lead or significantly contribute to various experiments worldwide. For facilities at TRIUMF, which are freely open to the global community, the plan includes:
• completion of the ISAC-II radioactive beam post-accelerator, which will extend the maximum ion mass from 30 to 150 and the beam energy from 1.5 to 6.5 MeV/u;
• a new proton line and targetry for the development of new radioactive beams;
• major upgrades to the muon beam lines for materials science and chemistry research;
• greater throughput of radioisotopes for research in the life sciences.
To support external experiments, the plan includes the development of a data hub at TRIUMF for data analysis for the ATLAS experiment at CERN’s Large Hadron Collider; contributions to the T2K long-baseline neutrino experiment in Japan and to the KOPIO (K0 → π0νν̄) rare kaon-decay experiment at Brookhaven; and research and development for the Next Linear Collider. The plan also identifies the importance of transferring the technical knowledge developed at TRIUMF to the commercial sector, and that of educational outreach to the general public and students.
When CERN was founded in 1954 the member states realized that they were making a long-term financial commitment to pay for a very large accelerator and for the operations that would go with it. A basic document, “CERN Gen 5”, had suggested that a large capital cost would be spread over the building period, followed by a much lower annual operating budget. The decisions by CERN Council to build the Synchrocyclotron and the Proton Synchrotron (PS) – machines with large, but not clearly defined, capital costs – were backed by early annual budget decisions to support continuing construction. Then, towards the end of the PS construction, Council fixed a three-year ceiling for the budgets for 1959-1961, which was intended to allow the PS to start operating, but with the hope still in some people’s minds that later budgets would be lower.
In 1961 it became clear that this ceiling was going to be breached by a significant amount, and that there was no hope for a future reduction. Council therefore set up a committee under the Dutch delegate Jan Bannier, with senior representatives from the major member states, to review the needs of CERN and make recommendations for future financial policy. CERN was represented by Sam Dakin, the director for administration, and myself, recently appointed as directorate member for applied physics.
Discussions inside CERN made it clear that more staff would be needed in all parts of the organization to build up the PS programme to “full exploitation”. Based on empirical data for “costs per man”, the total annual costs would rise in the following years, at a rate of 13% per annum. The shock this figure generated in the committee was somewhat damped when, following my request, the members produced forecasts for their own national science expenditures. Their figures, plotted logarithmically, showed large sums with steady exponential growth at rates of 20-25%. In comparison, CERN’s 13% line at the bottom of the page looked modest and reasonable.
Another important input came from the UK administration delegate, who proposed a rolling four-year budget procedure following internal practice in the UK Treasury. In this procedure, Council would vote in December on the details of the budget for the coming year, within a total that had been fixed a year previously; it would make a “firm estimate” for the budget total for the next year and “provisional estimates” for budget totals for the following two years, with all figures being given at constant prices. In accepting this farsighted proposal, the Bannier committee also recommended that the budget should increase by 13% for two years and by not less than 10% for two years thereafter. Bannier also added a warning that the organization should consider the possibility of a large new project at a later date. Despite its experience with CERN Gen 5 and the three-year ceiling, in 1962 Council approved the committee’s recommendations, showing how much CERN was blessed by the courage of its founding fathers.
It was then clear that CERN needed to establish effective internal procedures to provide well justified and reliable proposals for the programme and budget for four to five years ahead. These would have to be agreeable to the physics community, who were the users of the CERN equipment and services, and acceptable to a majority of the member states, who paid the bill.
The director-general at the time was Viki Weisskopf, who had been appointed in mid-1961 just as the budget crisis was becoming apparent. At CERN the director-general is ultimately responsible to Council for all the work and finances of the laboratory. He therefore needs to have access to information and analysis on all the work in forms suitable for his policy decisions inside the laboratory, and for discussion with Council and its committees. Subject to pressure for resources from all sides, he also needs an independent source of information with high enough status to ensure good collaboration with all parties involved, and with access to their work. As a director with no division to manage and having worked with the Bannier committee I could fulfil such a role, despite my lack of qualifications, by thinking up a planning system to meet the needs of the Bannier budget procedure.
At that moment I was on my own and had to rely on the help of others to collect data. I was, however, accustomed to working alone, having been John Adams’ technical aide in the design of the PS machine. I could fit in well with the director-general’s policy of delegating work, by putting as much as possible into the hands of divisional planners, provided they followed agreed procedures. From 1963 on I had the good fortune to have Gabriel Minder, an engineer from ETH Zurich, working with me. Among other qualities he spoke eight languages, and the two of us did most of the thinking and detailed design work for planning procedures over the next few years. Minder eventually published this work as a CERN report (Minder 1970) and as a PhD thesis for ETH (Minder 1969).
The Functional Programme Presentation
I realized that I should minimize the disturbance to existing administrative systems – particularly the plan of accounts and the annual budget procedure under Georges Tièche, the head of the Finance Division – and not upset the detailed planning work inside the operating divisions. Thus the natural starting point was the existing plan of accounts. This had three main headings in each of the 12 divisions: personnel, operation and capital outlays. These headings were subdivided into some 2000 codes for different groups and sections inside divisions, and also for some detail on the kind of material or service concerned. This was very appropriate for financial control, but left personnel costs unallocated and gave little indication of the nature and type of work being done. By contrast, the top management needed information on how many people did or would do just what, for what purpose, how successfully and at what cost. This information would need to span several years both into the past and the future in an appropriate degree of detail for the questions in hand.
I decided that we needed to prepare figures not in terms of the divisions and groups working on parts of the programme, but in a functional form that could describe the state of identifiable activities of different types, covering perhaps several groups or sections, even across divisions, and that would be independent of current structures. Subactivities could be defined inside a division and could be combined to make divisional activities, and also CERN-wide activities where this would be coherent and useful.
A divisional (sub)activity would be an identifiable part of a division’s work, specified by a description with the numbers of people and man-years involved each year, the annual cost and some measure of the resulting output. It would be aimed at programme planning and decision making for CERN, not at local financial management.
To be able to prepare such a Functional Programme Presentation, the FPP, early on I brought in the divisional assistants, who were already accustomed to preparing the detailed annual budget and accounts and who understood the workings of their divisions. With them we could establish agreed sets of divisional activities and subactivities, with their costs and manpower at that time. This required two judgements on the part of the divisional assistants: to allocate percentages of the manpower data of a division between its different subactivities and to identify which official accounts code applied to each subactivity. In 80% of the cases the accounts codes matched well enough with the subactivities, but for the remainder judgements were made on percentage splittings of amounts in account codes. These two sets of percentage keys would be revised once or twice a year as the work advanced.
At that point the central administration computer could be programmed to use these keys to convert standard divisional budget data into functional activity costs and manpower figures, without asking for extra work in the divisional offices.
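To make the mechanics concrete, here is a minimal modern sketch, in Python, of the kind of key-based conversion described above. All account codes, activity names and amounts are invented for illustration; the real programme ran on CERN’s central administration computer of the 1960s, not in this form.

```python
# Minimal, hypothetical sketch of the FPP percentage-key allocation.
from collections import defaultdict

# Percentage keys: fraction of each account code attributed to each subactivity.
# "TC-1234" matches one subactivity exactly; "TC-5678" needs a judged split.
account_to_subactivity = {
    "TC-1234": {"bubble-chamber operation": 1.0},
    "TC-5678": {"BEBC project": 0.6, "low-temperature development": 0.4},
}

# Annual expenditure per account code (kCHF), as booked in the plan of accounts.
expenditure = {"TC-1234": 850.0, "TC-5678": 1200.0}

def functional_costs(expenditure, keys):
    """Aggregate account-code expenditure into functional subactivity costs."""
    totals = defaultdict(float)
    for code, amount in expenditure.items():
        for subactivity, fraction in keys[code].items():
            totals[subactivity] += amount * fraction
    return dict(totals)

print(functional_costs(expenditure, account_to_subactivity))
# -> {'bubble-chamber operation': 850.0, 'BEBC project': 720.0,
#     'low-temperature development': 480.0}
```

The same two-step keying – first splitting a division’s manpower between subactivities, then mapping or splitting its account codes – is what allowed the official accounts to be turned into the functional picture without any extra bookkeeping in the divisional offices.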
The measures of output in the subactivity descriptions could not be given simply as numbers, as staff and money could. They inevitably implied written work descriptions and statements of aims and of output, often with a subjective component. Nevertheless, by inspecting data for past years, simple numerical work indices could quite often be found, such as data on hours of operation and failure rates for equipment, installed kilowatts of power supplies, and square metres of floor space built or cleaned, which could be agreed by all parties to be relevant for evaluating aims and results, at least in the short term.
As an example, the Track Chamber Division (TC) programme in 1969 covered the research and development work described by 12 activities: bubble-chamber operations and development, particle-beam operations and development, picture-evaluation operation and development, low-temperature operations and development, the Big European Bubble Chamber (BEBC) project, additional laboratory staff, fellows and visitors, and divisional direction.
Although the activities were quite specific to the work in hand, they fitted into a few general classes, which formed the top level of the CERN FPP. They were R for research and operations, E for equipment and development, I and Z for improvements and major long-term projects, S for general services, and B for buildings and site equipment. Most of the TC programme fell into either the R or E classes, and BEBC into the I class. S and B covered the general administrative and technical services for the whole laboratory and the corresponding infrastructure, with similar subactivity structures at divisional and CERN levels.
An important point to note here is the clear separation between operations and development in all parts of a programme. These imply two different types of work, staff, expenditure and timetable, which should be kept clear in everybody’s minds and in the forecasting.
The whole picture
The overall result of these steps of coding and combination was that about 2000 accounting codes could be summarized into some 150 understandable divisional activities and then into 30 CERN activities. These gave a picture of the whole CERN programme, which was suitable for discussions of general policy before the finer details of any part of a programme were examined.
I should stress that this structure was built bottom-up by the divisional staff, who knew what they were doing and saying and only had very broad guidelines from me as the planning officer. This, I believe, helped to make the whole system well accepted throughout CERN.
In 1965 I sent Minder to Washington, DC, to the Bureau of the Budget in the Executive Office of the President and to NASA, to compare our FPP with the US Administration’s new Planning-Programming-Budgeting System (PPBS), which the Department of Defense had set up in 1961 and which many other federal and local agencies had adopted by 1965. However, the PPBS had not proven usable by US research laboratories, due to certain features related to the nature of basic research. We were pleasantly surprised to find that those problems had been solved within the FPP. Minder also collected data at OECD in Paris for our long-term models, and we found that their figures could be inserted into our projections for expected users, e.g. the numbers of European graduate and post-graduate physicists.
By 1966 the FPP was beginning to come together, presenting the state of the laboratory over an eight-year period: the past four years, the current year and the three years ahead, where total budgets were known with varying degrees of certainty. At the CERN level it offered in particular a long-term view of two topics of importance for both the management and Council. They were the size and distribution of personnel and the balance between operation and investment.
To be an effective laboratory 10-15 years later, CERN had to maintain high investment in technical development and new equipment and infrastructures, in parallel with satisfying growing demands for resources from the short-term research and operational activities that came from the large and growing community of users. To make this investment visible I stressed the importance to divisional staff of clearly separating R-type work and costs, which were needed to run and maintain facilities, and E-type work aimed at improving facilities or extending their scale and future performance.
The FPP also clearly brought out staff numbers in different areas of activity, and made it clear whether staff levels to operate new equipment were being properly planned. Staff numbers have always been a sensitive item for funding bodies, as they are a direct method of controlling expenditure, and staff can represent a long-term commitment that is more difficult to undo than cancellation of orders for materials.
What did the FPP give to CERN?
The FPP, with its planning cycle, gave CERN a tool that allowed people at all levels to work together each year to build up a rolling four-year forward plan that took account of all parties’ needs and that could be accepted by the member states with very little modification. It could accommodate the addition of major improvement projects and the building and operation of the Intersecting Storage Rings as a new CERN programme. For instance, the Swiss federal administrators of education and research, who wondered why they should support several CERN programmes, agreed to do so once these were discussed in the FPP format.
The FPP was used from 1963 through to 1975, when the 300 GeV Super Proton Synchrotron was integrated into the CERN basic programme. At that point the new management changed the philosophy to planning in terms of projects rather than activities.
I am not aware of how well this new scheme worked in practice, as by then I was out of the management structure and working largely on my own. I can, however, see dangers in such a change. For example, accelerator operations are an activity requiring continuity over many years, without the end date and total cost that are suggested by the term “project”. There is also the risk of not properly separating operations and investment. The top management could be faced with plans containing a large number of small-project proposals, which should really be decided at (inter)divisional level within the broader limits of agreed activities. I insisted that I did not want to have details of such divisional work formally reported to me, to avoid the temptation of micro-management by central staff. This policy, I believe, made the FPP well accepted by the divisions and helpful to them in clarifying their planning.
The intrinsic uncertainties of basic research have led to the idea that it cannot be planned, and this is certainly true at the level of its aims, which must depend on what nature offers us and not on what we would like it to tell us. However, the necessary continuity in exploring a fruitful line of enquiry for a certain period means that the effort and the cost can usefully be planned over that period. In some fields this time can be quite long, a decade or more, particularly where large equipment is involved. In elementary particle physics today this useful life is around 20 to 30 years, from the conception of a large facility to the time at which it may usefully be replaced or altered in a major way, at a cost comparable to that of something completely new.
A four-year planning period, in which much of the work and cost has to be foreseen with some certainty, fits theoretically very well into the conduct of this type of research. The FPP is an existence proof that such a planning system can actually be implemented without disturbing the life of the laboratory, and without calling for a large central administration. This helps to avoid the directorate being tempted to become involved in micro-management, which is better left to lower levels in the organization. This is not, alas, always true in practice.
Mervyn Hine 1920-2004
Mervyn Hine, pioneer of the construction of CERN’s Proton Synchrotron, and of computing and networking at CERN, passed away on 26 April following a serious accident in his home. He had recently completed this article for the CERN Courier in the series “50 years of CERN”, which we publish here as a tribute to an important figure and personality in the history of CERN. An obituary will appear in the next issue.
When CERN’s Kurt Hübner and Günther Plass travelled to Rome in 1986 to join Sergio Tazzari of the Frascati Laboratory in search of a venue for the first European Particle Accelerator Conference, they set in motion the machinery that was to give the European accelerator community its own conference, 20 years after the birth of the American Particle Accelerator Conference, PAC. Two decades later, with science funding tight and justifiably under close scrutiny, it is interesting to assess the value, and also the spin-off, of this event.
In July EPAC’04 will welcome around 800 delegates from more than 30 countries to the ninth conference in the series. Sixty-five oral presentations are scheduled and more than 1000 posters will be displayed during the lively sessions. Two European accelerator prizes, first introduced in 1994, will also be awarded: one for a recent significant, original contribution from a scientist in the early part of his or her career, and one for outstanding achievement in the accelerator field. This will precede a regular conference highlight, the “entertainment” session, with a talk on cosmic accelerators. The industrial exhibition and its associated session on technology transfer and industrial contacts complete the picture, demonstrating the vital communication between scientists and representatives of industry. This is a very different conference from the first one in 1988.
EPAC’88, held at the Hotel Parco dei Principi in Rome, was a victim of its own success. When the estimated 400 delegates expanded to 700 there was “controlled chaos” as closed-circuit TV had to relay the oral presentations, authors shared poster boards, a lack of air-conditioning caused delegates to flee the industrial exhibition and a lack of space meant the plenary sessions were relocated to the Aula Magna of the “La Sapienza” University of Rome. Alas, it was too late to relocate the conference dinner. Those who recall the fountains dancing in time to the strains of a string quartet in the gardens of the Villa Tuscolana will also remember the mouth-watering buffet, which was woefully insufficient and had vanished before the guests finished arriving.
However, the learning process had begun, and the venue for EPAC’90 was a purpose-built conference centre in Nice. Only one detail escaped the vigilant local organizing committee: the unique banquet venue able to cater for 800 people was outdoors, on a beautiful, unsheltered, Mediterranean beach. A week of perfect weather was marred only by the cloud that burst that particular evening. Who recalls the drenched delegates arriving following a 200 metre sprint in a tropical downpour?
During this period, Maurice Jacob, chairman of the European Physical Society (EPS), convinced EPAC’s organizers to form an EPS Interdivisional Group on Accelerators (IGA), and the successive EPS-IGA elected boards have since formed the EPAC organizing committees. A biennial, one-third turnover of the 18 members ensures continuity, while encouraging new members to introduce new ideas. To promote communication between the regional conferences, the organizing committees welcome representatives of US and Asian PACs, and the chairmen meet informally each year. The EPS-IGA has undertaken a number of initiatives such as the Student Grant Programme which, with the sponsorship of European laboratories and industry, enables young scientists to attend EPAC; around 60 will attend EPAC’04 under this scheme.
Continuity, coordination and communication characterize EPAC organization. Participation has increased steadily, with almost half the participants coming from non-European countries. Improved management techniques have streamlined the workload and contained registration fees. This was also a result of publishing the proceedings in CD-ROM and Web format, rather than expensive paper-hungry hard-copy volumes.
An unexpected spin-off of regional collaboration has been the creation of the Joint Accelerator Conferences Website (JACoW). The suggestion by Ilan Ben-Zvi of the Brookhaven National Laboratory in the mid-1990s to create a website for the publication of regional accelerator conference proceedings has developed into a flourishing international collaboration. It now extends to a whole range of conference series on accelerators – CYCLOTRONS, DIPAC, ICALEPCS, LINAC and RUPAC. The editors of all eight series, led by CERN’s John Poole, get hands-on experience in electronic publication techniques during each PAC and EPAC. Furthermore, the yearly team meetings have led to the development of a Scientific Programme Management System. This is an Oracle-based application capable of handling conference contributions from abstract submission through to proceedings production. Twenty-three sets have been published since 1996, including scanned PAC proceedings dating back to 1967. The EPAC and LINAC series plan to follow suit and scan their pre-electronic era proceedings too.
EPAC has evolved into an established, respected forum for the state of the art in accelerator technology. Delegates meet at unique venues, where the varied cultural heritage constitutes real added value. Strengthened ties with other regional and specialized conferences have enhanced international collaboration in the accelerator field, to the undoubted benefit of the community worldwide.
by A Goldhaber et al. (eds), World Scientific. Hardback ISBN 9812385037, £43 ($58); paperback ISBN 981238530, £25 ($34).
In 1999 a symposium was held at the State University of New York at Stony Brook to mark the retirement of C N Yang as Einstein Professor and director of the Institute for Theoretical Physics, and to celebrate his many achievements. This book contains a selection of the papers presented at the symposium, including contributions from such luminaries as Freeman Dyson, Martinus Veltman, Gerard ‘t Hooft and Maurice Goldhaber.
by Fritz Herlach and Noboru Miura (eds), World Scientific. Hardback ISBN 9810249640 (vol 1) and 9810249659 (vol 2), £41 ($55) each.
These are the first two volumes of a three-volume set intended to provide a comprehensive review of experiments in very strong magnetic fields, which can be generated only with special magnets. Volume 1 is devoted to magnet technology and experimental techniques, while volumes 2 and 3 contain reviews of the different areas of research where strong magnetic fields are an essential tool. Volume 3 is scheduled to appear in autumn 2004.
by W D McComb, Oxford University Press. Hardback ISBN 0198506945, £39.95 ($74.50).
Occupying a gap between standard undergraduate and more advanced texts on quantum field theory, this book covers a range of renormalization methods, including mean-field theories and high-temperature and low-density expansions. It proceeds by easy steps to the epsilon expansion, ending up with the first-order corrections to critical exponents beyond mean-field theory. Macroscopic systems are also included, with particular emphasis on fluid turbulence. Requiring only the basic physics and mathematics known to most scientists and engineers, the material should be accessible to readers other than theoretical physicists.