
The beginning of long-term planning in CERN

When CERN was founded in 1954 the member states realized that they were making a long-term financial commitment to pay for a very large accelerator and for the operations that would go with it. A basic document, “CERN Gen 5”, had suggested that a large capital cost would be spread over the building period, followed by a much lower annual operating budget. The decisions by CERN Council to build the Synchrocyclotron and the Proton Synchrotron (PS), which had large but not clearly defined capital costs, were backed by early annual budget decisions to support continuing construction. Then, towards the end of the PS construction, Council fixed a three-year ceiling for the budgets for 1959-1961, which was intended to allow the PS to start operating, but with the hope still in some people’s minds that later budgets would be lower.

In 1961 it became clear that this ceiling was going to be breached by a significant amount, and that there was no hope for a future reduction. Council therefore set up a committee under the Dutch delegate Jan Bannier, with senior representatives from the major member states, to review the needs of CERN and make recommendations for future financial policy. CERN was represented by Sam Dakin, the director for administration, and myself, recently appointed as directorate member for applied physics.

Discussions inside CERN made it clear that more staff would be needed in all parts of the organization to build up the PS programme to “full exploitation”. Based on empirical data for “costs per man”, the total annual costs would rise in the following years, at a rate of 13% per annum. The shock this figure generated in the committee was somewhat damped when, following my request, the members produced forecasts for their own national science expenditures. Their figures, plotted logarithmically, showed large sums with steady exponential growth at rates of 20-25%. In comparison, CERN’s 13% line at the bottom of the page looked modest and reasonable.
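The arithmetic behind that comparison can be sketched in a few lines. This is an illustrative calculation only: the base figure of 100 is a hypothetical index, and 22% stands in for the mid-range of the 20-25% national growth rates reported to the committee.

```python
# Compound growth comparison: CERN's proposed 13% annual budget growth
# versus national science expenditures growing at 20-25% per annum.
# Base amounts are hypothetical index values, not real budget figures.

def project(base, rate, years):
    """Project a budget forward at a constant annual growth rate."""
    return [base * (1 + rate) ** y for y in range(years + 1)]

cern = project(100.0, 0.13, 10)      # CERN's 13% line
national = project(100.0, 0.22, 10)  # mid-range 22% national growth

# Plotted logarithmically, both series are straight lines; the gap
# between them widens steadily, so the 13% line looks modest.
print(f"CERN after 10 years: {cern[-1]:.0f}")        # 339
print(f"National after 10 years: {national[-1]:.0f}")  # 730
print(f"Ratio: {national[-1] / cern[-1]:.2f}")         # 2.15
```

On a logarithmic plot a constant percentage growth rate appears as a straight line with slope proportional to the rate, which is why the committee members' 20-25% lines dwarfed CERN's.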

Another important input came from the UK administration delegate, who proposed a rolling four-year budget procedure following internal practice in the UK Treasury. In this procedure, Council would vote in December on the details of the budget for the coming year, within a total that had been fixed a year previously; it would make a “firm estimate” for the budget total for the next year and “provisional estimates” for budget totals for the following two years, with all figures being given at constant prices. In accepting this farsighted proposal, the Bannier committee also recommended that the budget should increase by 13% for two years and by not less than 10% for two years thereafter. Bannier also added a warning that the organization should consider the possibility of a large new project at a later date. Despite its experience with CERN Gen 5 and the three-year ceiling, in 1962 Council approved the committee’s recommendations, showing how much CERN was blessed by the courage of its founding fathers.
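The rolling procedure described above can be sketched as a simple yearly update. The structure is the point here; the totals and the year labels are invented for illustration, and all figures are understood to be at constant prices.

```python
# A minimal sketch of the rolling four-year budget procedure, assuming
# constant prices. Figures are hypothetical. Each December the Council
# votes the detailed budget for the coming year (within the total fixed
# a year earlier), the "firm estimate" becomes next year's voted total,
# the first provisional total firms up, and a new provisional year is
# added at the horizon.

def roll_forward(plan, new_provisional_total):
    """Advance the rolling plan by one year.

    plan = {"voted": total, "firm": total, "provisional": [t1, t2]}
    """
    return {
        "voted": plan["firm"],            # firm estimate becomes the voted total
        "firm": plan["provisional"][0],   # first provisional firms up
        "provisional": [plan["provisional"][1], new_provisional_total],
    }

plan_1962 = {"voted": 100, "firm": 113, "provisional": [128, 141]}
plan_1963 = roll_forward(plan_1962, 155)
print(plan_1963)
# {'voted': 113, 'firm': 128, 'provisional': [141, 155]}
```

Each year's vote thus only confirms details within a total already fixed, while the horizon always stays four years out.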

It was then clear that CERN needed to establish effective internal procedures to provide well justified and reliable proposals for the programme and budget for four to five years ahead. These would have to be agreeable to the physics community, who were the users of the CERN equipment and services, and acceptable to a majority of the member states, who paid the bill.

The director-general at the time was Viki Weisskopf, who had been appointed in mid-1961 just as the budget crisis was becoming apparent. At CERN the director-general is ultimately responsible to Council for all the work and finances of the laboratory. He therefore needs to have access to information and analysis on all the work in forms suitable for his policy decisions inside the laboratory, and for discussion with Council and its committees. Subject to pressure for resources from all sides, he also needs an independent source of information with high enough status to ensure good collaboration with all parties involved, and with access to their work. As a director with no division to manage, and having worked with the Bannier committee, I could fulfil such a role, despite my lack of qualifications, by thinking up a planning system to meet the needs of the Bannier budget procedure.

At that moment I was on my own and had to rely on the help of others to collect data. I was, however, accustomed to working alone, having been John Adams’ technical aide in the design of the PS machine. I could fit in well with the director-general’s policy of delegating work, by putting as much as possible into the hands of divisional planners, provided they followed agreed procedures. Fortunately, from 1963 on I had the good fortune to have Gabriel Minder, an engineer from ETH, Zurich, to work with me. Among other qualities he spoke eight languages, and the two of us did most of the thinking and detailed design work for planning procedures over the next few years. Minder eventually published this work as a CERN report (Minder 1970) and as a PhD thesis for ETH (Minder 1969).

The Functional Programme Presentation

I realized that I should minimize the disturbance to existing administrative systems – particularly the plan of accounts and the annual budget procedure under Georges Tièche, the head of the Finance Division – and not upset the detailed planning work inside the operating divisions. Thus the natural starting point was the existing plan of accounts. This had three main headings in each of the 12 divisions: personnel, operation and capital outlays. These headings were subdivided into some 2000 codes for different groups and sections inside divisions, and also for some detail on the kind of material or service concerned. This was very appropriate for financial control, but left personnel costs unallocated and gave little indication of the nature and type of work being done. By contrast, the top management needed information on how many people did or would do just what, for what purpose, how successfully and at what cost. This information would need to span several years both into the past and the future in an appropriate degree of detail for the questions in hand.

I decided that we needed to prepare figures not in terms of the divisions and groups working on parts of the programme, but in a functional form that could describe the state of identifiable activities of different types, covering perhaps several groups or sections, even across divisions, and that would be independent of current structures. Subactivities could be defined inside a division and could be combined to make divisional activities, and also CERN-wide activities where this would be coherent and useful.

A divisional (sub)activity would be an identifiable part of a division’s work, specified by a description with the numbers of people and man-years involved each year, the annual cost and some measure of the resulting output. It would be aimed at programme planning and decision making for CERN, not at local financial management.

To be able to prepare such a Functional Programme Presentation, the FPP, I brought in early on the divisional assistants, who were already accustomed to preparing the detailed annual budget and accounts and who understood the workings of their divisions. With them we could establish agreed sets of divisional activities and subactivities, with their costs and manpower at that time. This required two judgements on the part of the divisional assistants: to allocate percentages of the manpower data of a division between its different subactivities and to identify which official accounts code applied to each subactivity. In 80% of the cases the accounts codes matched well enough with the subactivities, but for the remainder judgements were made on percentage splittings of amounts in account codes. These two sets of percentage keys would be revised once or twice a year as the work advanced.

At that point the central administration computer could be programmed to use these keys to convert standard divisional budget data into functional activity costs and manpower figures, without asking for extra work in the divisional offices.
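The mechanical part of that conversion is straightforward to sketch. The account codes, subactivity names and amounts below are invented for illustration; the real system, of course, ran on the central administration computer against the full plan of accounts.

```python
# A sketch of the percentage keys applied mechanically to standard
# budget data. Account codes, subactivity names and figures are
# hypothetical. Most codes map 100% to one subactivity; the remaining
# ~20% are split by the divisional assistants' percentage judgements.

account_keys = {
    "TC-1001": {"bubble-chamber operations": 1.0},
    "TC-1002": {"particle-beam operations": 0.75, "beam development": 0.25},
}

# Standard divisional budget data by account code (arbitrary units).
accounts = {"TC-1001": 820.0, "TC-1002": 450.0}

def allocate(accounts, keys):
    """Convert account-code amounts into functional subactivity costs."""
    costs = {}
    for code, amount in accounts.items():
        for subactivity, fraction in keys[code].items():
            costs[subactivity] = costs.get(subactivity, 0.0) + amount * fraction
    return costs

print(allocate(accounts, account_keys))
# {'bubble-chamber operations': 820.0, 'particle-beam operations': 337.5,
#  'beam development': 112.5}
```

The same keys, applied in the other direction, let subactivity figures be rolled up into divisional and then CERN-wide activities, which is how 2000 account codes could be compressed into a programme picture of manageable size.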

The measures of output in the subactivity descriptions could not be given simply as numbers, as staff and money could. They inevitably implied written work descriptions and statements of aims and of output, often with a subjective component. Nevertheless, by inspecting data for past years, simple numerical work indices could quite often be found, such as data on hours of operation and failure rates for equipment, installed kilowatts of power supplies, and square metres of floor space built or cleaned, which could be agreed by all parties to be relevant for evaluating aims and results, at least in the short term.

As an example, the Track Chamber Division (TC) programme in 1969 covered the research and development work described by 12 activities: bubble-chamber operations and development, particle-beam operations and development, picture-evaluation operations and development, low-temperature operations and development, the Big European Bubble Chamber (BEBC) project, additional laboratory staff, fellows and visitors, and divisional direction.

Although the activities were quite specific to the work in hand, they fitted into a few general classes, which formed the top level of the CERN FPP. They were R for research and operations, E for equipment and development, I and Z for improvements and major long-term projects, S for general services, and B for buildings and site equipment. Most of the TC programme fell into either the R or E classes, and BEBC into the I class. S and B covered the general administrative and technical services for the whole laboratory and the corresponding infrastructure, with similar subactivity structures at divisional and CERN levels.

An important point to note here is the clear separation between operations and development in all parts of a programme. These imply two different types of work, staff, expenditure and timetable, which should be kept clear in everybody’s minds and in the forecasting.

The whole picture

The overall result of these steps of coding and combination was that about 2000 accounting codes could be summarized into some 150 understandable divisional activities and then into 30 CERN activities. These gave a picture of the whole CERN programme, which was suitable for discussions of general policy before the finer details of any part of a programme were examined.

I should stress that this structure was built bottom-up by the divisional staff, who knew what they were doing and saying and only had very broad guidelines from me as the planning officer. This, I believe, helped to make the whole system well accepted throughout CERN.

In 1965 I sent Minder to Washington, DC, to the President’s Bureau of the Budget and to NASA, to compare our FPP with the US Administration’s new Planning-Programming-Budgeting System (PPBS), which the Department of Defense had set up in 1961 and many other federal and local agencies had adopted by 1965. However, the PPBS had not proven usable by US research laboratories, due to certain features related to the nature of basic research. We were pleasantly surprised to find those problems had been solved within the FPP. Minder also collected data at OECD in Paris for our long-term models, and we found that their figures could be inserted into our projections for expected users, e.g. the numbers of European graduate and post-graduate physicists.

By 1966 the FPP was beginning to come together, presenting the state of the laboratory over an eight-year period: the past four years, the current year and the three years ahead, where total budgets were known with varying degrees of certainty. At the CERN level it offered in particular a long-term view of two topics of importance for both the management and Council. They were the size and distribution of personnel and the balance between operation and investment.

To be an effective laboratory 10-15 years later, CERN had to maintain high investment in technical development and new equipment and infrastructures, in parallel with satisfying growing demands for resources from the short-term research and operational activities that came from the large and growing community of users. To make this investment visible I stressed the importance to divisional staff of clearly separating R-type work and costs, which were needed to run and maintain facilities, and E-type work aimed at improving facilities or extending their scale and future performance.

The FPP also clearly brought out staff numbers in different areas of activity, and made it clear if staff levels to operate new equipment were being properly planned. Staff numbers have always been a sensitive item for funding bodies, as they are a direct method of controlling expenditure, and staff can represent a long-term commitment that is more difficult to undo than cancellation of orders for materials.

What did the FPP give to CERN?

The FPP, with its planning cycle, gave CERN a tool that allowed people at all levels to work together each year to build up a rolling four-year forward plan that took account of all parties’ needs and that could be accepted by the member states with very little modification. It could accommodate the addition of major improvement projects and the building and operation of the Intersecting Storage Rings as a new CERN programme. For instance, the Swiss federal administrators of education and research, who wondered why they should support several CERN programmes, agreed to do so by discussing these in the FPP format.

The FPP was used from 1963 through to 1975, when the 300 GeV Super Proton Synchrotron was integrated into the CERN basic programme. At that point the new management changed the philosophy to planning in terms of projects rather than activities.

I am not aware of how well this new scheme worked in practice, as by then I was out of the management structure and working largely on my own. I can, however, see dangers in such a change. For example, accelerator operations are an activity requiring continuity over many years, without the end date and total cost that are suggested by the term “project”. There is also the risk of not properly separating operations and investment. The top management could be faced with plans containing a large number of small-project proposals, which should really be decided at (inter)divisional level within the broader limits of agreed activities. I insisted that I did not want to have details of such divisional work formally reported to me, to avoid the temptation of micro-management by central staff. This policy, I believe, made the FPP well accepted by the divisions and helpful to them in clarifying their planning.

The intrinsic uncertainties of basic research have led to the idea that it cannot be planned, and this is certainly true at the level of its aims, which must depend on what nature offers us and not on what we would like it to tell us. However, the necessary continuity in exploring a fruitful line of enquiry for a certain period means that the effort and the cost can usefully be planned over that period. In some fields this time can be quite long, a decade or more, particularly where large equipment is involved. In elementary particle physics today this useful life is around 20 to 30 years, from the conception of a large facility to the time at which it may usefully be replaced or altered in a major way, with the cost of something completely new.

A four-year planning period, in which much of the work and cost has to be foreseen with some certainty, fits theoretically very well into the conduct of this type of research. The FPP is an existence proof that such a planning system can actually be implemented without disturbing the life of the laboratory, and without calling for a large central administration. This helps to avoid the directorate being tempted to become involved in micro-management, which is better left to lower levels in the organization. This is not, alas, always true in practice.

Mervyn Hine 1920-2004

Mervyn Hine, pioneer of the construction of CERN’s Proton Synchrotron, and of computing and networking at CERN, passed away on 26 April following a serious accident in his home. He had recently completed this article for the CERN Courier in the series “50 years of CERN”, which we publish here as a tribute to an important figure and personality in the history of CERN. An obituary will appear in the next issue.

More than just a conference

When CERN’s Kurt Hübner and Günther Plass travelled to Rome in 1986 to join Sergio Tazzari of the Frascati Laboratory in search of a venue for the first European Particle Accelerator Conference, they set in motion the machinery that was to give the European accelerator community its own conference, 20 years after the birth of the American Particle Accelerator Conference, PAC. Two decades later, with science funding tight and justifiably under close scrutiny, it is interesting to assess the value, and also the spin-off, of this event.

In July EPAC’04 will welcome around 800 delegates from more than 30 countries to the ninth conference in the series. Sixty-five oral presentations are scheduled and more than 1000 posters will be displayed during the lively sessions. Two European accelerator prizes, first introduced in 1994, will also be awarded, one for a recent significant, original contribution to the accelerator field from a scientist in the early part of his or her career, and one for outstanding achievement in the accelerator field. This will precede a regular conference highlight, the “entertainment” session, with a talk on cosmic accelerators. The industrial exhibition and its associated session on technology transfer and industrial contacts complete the picture, demonstrating the vital communication between scientists and representatives of industry. This is a very different conference from the first one in 1988.

EPAC’88, held at the Hotel Parco dei Principi in Rome, was a victim of its own success. When the estimated 400 delegates expanded to 700 there was “controlled chaos” as closed-circuit TV had to relay the oral presentations, authors shared poster boards, a lack of air-conditioning caused delegates to flee the industrial exhibition and a lack of space meant the plenary sessions were relocated to the Aula Magna of the “La Sapienza” University of Rome. Alas, it was too late to relocate the conference dinner. Those who recall the fountains dancing in time to the strains of a string quartet in the gardens of the Villa Tuscolana will also remember the mouth-watering buffet, which was woefully insufficient and had vanished before the guests finished arriving.

However, the learning process had begun, and the venue for EPAC’90 was a purpose-built conference centre in Nice. Only one detail escaped the vigilant local organizing committee: the unique banquet venue able to cater for 800 people was outdoors, on a beautiful, unsheltered, Mediterranean beach. A week of perfect weather was marred only by the cloud that burst that particular evening. Who recalls the drenched delegates arriving following a 200 metre sprint in a tropical downpour?

During this period, Maurice Jacob, chairman of the European Physical Society (EPS), convinced EPAC’s organizers to form an EPS Interdivisional Group on Accelerators (IGA), and the successive EPS-IGA elected boards have since formed the EPAC organizing committees. A biennial, one-third turnover of the 18 members ensures continuity, while encouraging new members to introduce new ideas. To promote communication between the regional conferences, the organizing committees welcome representatives of US and Asian PACs, and the chairmen meet informally each year. The EPS-IGA has undertaken a number of initiatives such as the Student Grant Programme which, with the sponsorship of European laboratories and industry, enables young scientists to attend EPAC; around 60 will attend EPAC’04 under this scheme.

Continuity, coordination and communication characterize EPAC organization. Participation has increased steadily, with almost half the participants coming from non-European countries. Improved management techniques have streamlined the workload and contained registration fees. This was also a result of publishing the proceedings in CD-ROM and Web format, rather than expensive paper-hungry hard-copy volumes.

An unexpected spin-off of regional collaboration has been the creation of the Joint Accelerator Conferences Website (JACoW). The suggestion by Ilan Ben-Zvi of the Brookhaven National Laboratory in the mid-1990s to create a website for the publication of regional accelerator conference proceedings has developed into a flourishing international collaboration. It now extends to a whole range of conference series on accelerators – CYCLOTRONS, DIPAC, ICALEPCS, LINAC and RUPAC. The editors of all eight series, led by CERN’s John Poole, get hands-on experience in electronic publication techniques during each PAC and EPAC. Furthermore, the yearly team meetings have led to the development of a Scientific Programme Management System. This is an Oracle-based application capable of handling conference contributions from abstract submission through to proceedings production. Twenty-three sets have been published since 1996, including scanned PAC proceedings dating back to 1967. The EPAC and LINAC series plan to follow suit and scan their pre-electronic era proceedings too.

EPAC has evolved into an established, respected forum for the state of the art in accelerator technology. Delegates meet at unique venues, where the varied cultural heritage constitutes real added value. Strengthened ties with other regional and specialized conferences have enhanced international collaboration in the accelerator field, to the undoubted benefit of the community worldwide.

Symmetry and Modern Physics: Yang Retirement Symposium

by A Goldhaber et al. (eds), World Scientific. Hardback ISBN 9812385037, £43 ($58); paperback ISBN 981238530, £25 ($34).

In 1999 a symposium was held at the State University of New York at Stony Brook to mark the retirement of C N Yang as Einstein Professor and director of the Institute for Theoretical Physics, and to celebrate his many achievements. This book contains a selection of the papers presented at the symposium, including contributions from such luminaries as Freeman Dyson, Martinus Veltman, Gerard ‘t Hooft and Maurice Goldhaber.

High Magnetic Fields: Science and Technology, Volumes 1 and 2

by Fritz Herlach and Noboru Miura (eds), World Scientific. Hardback ISBN 9810249640 (vol 1) and 9810249659 (vol 2), £41 ($55) each.

These are the first two volumes of a three-volume set intended to provide a comprehensive review of experiments in very strong magnetic fields, which can be generated only with special magnets. Volume 1 is devoted to magnet technology and experimental techniques, while volumes 2 and 3 contain reviews of the different areas of research where strong magnetic fields are an essential tool. Volume 3 is scheduled to appear in autumn 2004.

Renormalization Methods: A Guide For Beginners

by W D McComb, Oxford University Press. Hardback ISBN 0198506945, £39.95 ($74.50).

Occupying a gap between standard undergraduate and more advanced texts on quantum field theory, this book covers a range of renormalization methods, including mean-field theories and high-temperature and low-density expansions. It proceeds by easy steps to the epsilon expansion, ending up with the first-order corrections to critical exponents beyond mean-field theory. Macroscopic systems are also included, with particular emphasis on fluid turbulence. Requiring only the basic physics and mathematics known to most scientists and engineers, the material should be accessible to readers other than theoretical physicists.

Gauge Theories in Particle Physics, Volume 2

by Ian Aitchison and Anthony Hey, Institute of Physics Publishing. Paperback ISBN 0750309504, £29.99 ($45).

Subtitled “QCD and the Electroweak Theory”, this is the second volume of the third edition of a highly successful textbook, which has now been substantially enlarged and updated. It builds on the foundations laid in volume 1, which led up to quantum electrodynamics, and deals with the other two gauge theories of the Standard Model: quantum chromodynamics (QCD) and electroweak theory. It includes new chapters on QCD, as well as extensions to the discussion of weak interaction phenomenology.

The Cold Wars – A History of Superconductivity

by Jean Matricon and Georges Waysand, Rutgers University Press. Hardback ISBN 0813532949, $65; paperback ISBN 0813532957, $26.

After carefully investigating the behaviour of matter under new conditions, physicists then try to explain what they find. So it happened with cryogenics. It is much easier to light fires than to invent refrigerators, so the physics of high temperatures was initially much more familiar. However, the laws governing the behaviour of hot gases when extrapolated backwards suggested that something strange should happen if matter could be cooled to -273 °C, “absolute zero” on the new Kelvin temperature scale. Fourteen billion years after the Big Bang, the natural universe is screened from absolute zero by the all-permeating cosmic background radiation at 2.7 K, its faint echo, and only recently have laboratory experiments descended the last few rungs of the temperature ladder. But such a natural barrier was long unsuspected, and in the second half of the 19th century one gas after another was liquefied triumphantly in the quest to approach absolute zero. However, helium remained stubbornly gaseous until Kamerlingh Onnes established a purpose-built laboratory in Leiden.

After setting this scene, The Cold Wars (what “wars”?) charts the progress of cryogenic physics after the liquefaction of helium at 4.2 K in 1908 opened up a new frontier. Painstakingly probing the behaviour of materials at these temperatures, Onnes discovered the phenomenon of superconductivity – the virtual disappearance of electrical resistance. The origins of this phenomenon, and its interplay with magnetic fields, long remained a mystery. Meanwhile, physicists noticed that liquid helium itself behaved bizarrely below about 2.2 K – becoming a superfluid with almost no viscosity. With the emergence of quantum ideas in the 1920s, attention focused on the possible link between superfluidity and Bose-Einstein condensation – when particles sink into the lowest possible quantum energy state, creating new types of matter. Thirty years later, John Bardeen, Leon Cooper and Robert Schrieffer suggested that pairs of electrons could account for the mystery of superconductivity.

The Cold Wars enthusiastically traces the history of cryogenic physics and superconductivity, with its triumphs and disappointments, and is a good introduction to an intriguing subject. However, it does not venture into the elegant modern quantum theory of phase transitions, which satisfyingly relates to a wider range of phenomena. Superfluid helium is still some way from absolute zero, and only in the past decade have physicists been able to achieve total Bose-Einstein condensation and demonstrate what happens when all particles accumulate into a single energy state, but this too is beyond a strictly superconducting horizon.

A major area for applications of superconductivity is in the powerful magnets that guide charged-particle beams in modern accelerators, but the book only covers this in passing and does not mention the world’s largest superconducting project – the 27 km LHC ring using superfluid helium that is now being constructed at CERN. (The only reference to particle-physics developments is an achievement of high magnetic fields at Fermilab “in 1963” – which was before plans for that US laboratory had even been drawn up.)

The Cold Wars is the English translation, with French government support, of La guerre du froid (Editions Seuil). The book concludes with the emergence of the new cuprate “high-temperature” superconductors. The search for superconductivity at still higher temperatures and the explanation of how this happens remains a glamorous research focus, and a final chapter updates these developments beyond what could have been described in the original 1994 edition.

La physique du XXe siècle

By Michel Paty, EDP Sciences, Collection Sciences et histoires. Paperback ISBN 286883518X, €34.

The title Michel Paty has chosen for his book, La physique du XXe siècle (Physics of the 20th century), is an ambitious one. Summarizing the main advances in physics over the past 100 years in 276 pages, as well as demonstrating their impact on other fields of science, seems like an impossible task. Indeed, the author himself, a physicist and science historian, questions whether it is possible to single out the 20th century’s most important and most characteristic developments in science in general and physics in particular. He takes up the challenge all the same, painting a general panorama of physics in the 20th century.

He begins by reviewing the main concepts of physics, describing the historical background to them, and the men and women associated with them. These include relativity, quantum physics, atoms and states of matter, the nucleus, elementary particles, fundamental fields, dynamic systems and phase transitions. He then turns to fields closely related to physics, namely geophysics, astrophysics, cosmology and, more generally, the search for the origins of the universe. At the end, he examines the subject of physics and the associated methods, and comes back to the emergence of Big Science in the 20th century. Finally, in his conclusion, he describes the lessons to be learnt from the past and looks to the future with confidence.

For the student or curious novice, Paty’s book can quickly become a reference manual, whose use will vary according to individual requirements. It provides the reader with a general introduction to the main fields of physics research and helps him or her along with historical references. The photographs (essentially portraits), boxes, diagrams and tables, which are simple and well chosen, offer an alternative means of getting to grips with the subject. Finally, a detailed bibliography invites the reader to further exploration. Paty has thus essentially met the challenge he set himself, as his book opens up the door to those who wish to enter the universe of physics.

Neutrino Physics

by Kai Zuber, Institute of Physics. Hardback ISBN 0750307501, £80.00 ($125.00).

This excellent introduction to neutrino physics describes, in 14 chapters and more than 400 pages, the past, present and future experiments and essential developments in one of the most exciting fields of fundamental physics today. Ranging from “Important historical experiments” to “Neutrinos in cosmology”, it is fitting that this comprehensive overview of neutrino physics was published shortly after the Nobel Prize for Physics was awarded to two neutrino physicists, Raymond Davis and Masatoshi Koshiba, for their pioneering contributions to astrophysics, in particular for the detection of cosmic neutrinos.

Neutrinos – first postulated in the 1930s and detected in 1956 by Clyde Cowan and Fred Reines – are one of the most fundamental particles in the universe, but they are also one of the least understood. The author, Kai Zuber from Oxford University, begins with some personally selected historical milestones and theoretical background. He then proceeds to give the fundamental properties of the neutrino, address the question of neutrino mass and look at the place of the neutrino within and beyond the Standard Model. Zuber continues with a discussion of the role of neutrinos in modern astroparticle physics and ends with neutrinos in cosmology and the problem of dark matter, thus covering the full range of neutrino physics. It is remarkable that Zuber describes, over many chapters, not only neutrino experiments, detectors and spectrometers in operation, but also those that are at present under construction or planned, such as the KATRIN experiment and the neutrino factory.

The book ends with a summary and personal outlook, a comprehensive list of references and a detailed index. All of this helps the reader to enjoy a fascinatingly written overview of this exciting field of physics, where “you always have to expect the unexpected”. The only weak point is that some of the figures are of poor quality, making it difficult to see what is shown.

Neutrino Physics is a textbook at a level that is suitable for graduate students in physics and astrophysics. It can be highly recommended to anyone interested in this field, and to any advanced student who wants to learn more about this research topic and who needs to understand neutrino physics.

New protocol accords CERN wider international status

CERN’s member states have adopted a new protocol on the privileges and immunities of the organization. This brings CERN into line with other European intergovernmental organizations, such as the European Space Agency and the European Southern Observatory, which already enjoy international status in all their member states.

The protocol, which is also open for signature to non-member states that have agreements with CERN, will simplify the movement of personnel and materials between countries involved in projects with CERN. The privileges and immunities granted to CERN are similar to those granted to other international organizations. The protocol will also facilitate any future enlargement of the organization.

CERN already benefits from international status in its two host states, France and Switzerland. Switzerland accorded this status in 1955, as did France after the CERN site was extended across the Franco-Swiss border in 1965. With the new protocol, all member states that sign will recognize CERN’s international status.

When it comes into force, the protocol will have important effects for the organization’s activities in other countries, particularly those involving contractors or collaborators in other research institutes. For example, by establishing specific privileges and immunities, it will make easier the movement of personnel between countries involved in projects in which CERN is a partner. It will also exempt CERN’s purchasing activities from tax (in particular VAT) and customs duties, and thus simplify the transfer of equipment and materials between the various countries that can be involved in a single contract – with the effect of reducing the costs often incurred through successive taxations as goods move between countries.

The protocol also has an important symbolic value for the future of CERN, as it is open for signature not only to CERN’s member states, but also to other states that collaborate with CERN, either as associate member states or through co-operation agreements. “Although this seems symbolic today,” explains CERN’s director-general Robert Aymar, “I believe that in the future, with the increasing globalization of particle physics, this will become a valuable tool in helping CERN to remain a powerful force in science.”

Nine member states signed the protocol in a ceremony at CERN on 18 March, bringing to 11 (with France and Switzerland) the number of member states that have now agreed to grant full international status to CERN. The other nine have set in motion procedures that will allow them to sign in the near future. The protocol will come into force once it has been signed and ratified by 12 member states other than the host states.
