by Ian Aitchison and Anthony Hey, Institute of Physics Publishing. Paperback ISBN 0750309504, £29.99 ($45).
Subtitled “QCD and the Electroweak Theory”, this is the second volume of the third edition of a highly successful textbook, which has now been substantially enlarged and updated. It builds on the foundations laid in volume 1, which led up to quantum electrodynamics, and deals with the other two gauge theories of the Standard Model: quantum chromodynamics (QCD) and electroweak theory. It includes new chapters on QCD, as well as extensions to the discussion of weak interaction phenomenology.
by Jean Matricon and Georges Waysand, Rutgers University Press. Hardback ISBN 0813532949, $65; paperback ISBN 0813532957, $26.
After carefully investigating the behaviour of matter under new conditions, physicists then try to explain what they find. So it happened with cryogenics. It is much easier to light fires than to invent refrigerators, so the physics of high temperatures was initially much more familiar. However, the laws governing the behaviour of hot gases, when extrapolated downwards, suggested that something strange should happen if matter could be cooled to −273 °C, “absolute zero” on the new Kelvin temperature scale. Fourteen billion years after the Big Bang, the natural universe is screened from absolute zero by the all-permeating cosmic background radiation at 2.7 K, the faint echo of that primordial explosion, and only recently have laboratory experiments descended the last few rungs of the temperature ladder. But such a natural barrier was long unsuspected, and in the second half of the 19th century one gas after another was triumphantly liquefied in the quest to approach absolute zero. However, helium remained stubbornly gaseous until Kamerlingh Onnes established a purpose-built laboratory in Leiden.
After setting this scene, The Cold Wars (what “wars”?) charts the progress of cryogenic physics after the liquefaction of helium at 4.2 K in 1908 opened up a new frontier. Painstakingly probing the behaviour of materials at these temperatures, Onnes discovered the phenomenon of superconductivity – the virtual disappearance of electrical resistance. The origins of this phenomenon, and its interplay with magnetic fields, long remained a mystery. Meanwhile, physicists noticed that liquid helium itself behaved bizarrely below about 2.2 K – becoming a superfluid with almost no viscosity. With the emergence of quantum ideas in the 1920s, attention focused on the possible link between superfluidity and Bose-Einstein condensation – when particles sink into the lowest possible quantum energy state, creating new types of matter. Thirty years later, John Bardeen, Leon Cooper and Robert Schrieffer suggested that pairs of electrons could account for the mystery of superconductivity.
The Cold Wars enthusiastically traces the history of cryogenic physics and superconductivity, with its triumphs and disappointments, and is a good introduction to an intriguing subject. However, it does not venture into the elegant modern quantum theory of phase transitions, which satisfyingly relates to a wider range of phenomena. Superfluid helium is still some way from absolute zero, and only in the past decade have physicists been able to achieve total Bose-Einstein condensation and demonstrate what happens when all particles accumulate into a single energy state, but this too is beyond a strictly superconducting horizon.
A major area for applications of superconductivity is in the powerful magnets that guide charged-particle beams in modern accelerators, but the book only covers this in passing and does not mention the world’s largest superconducting project – the 27 km LHC ring using superfluid helium that is now being constructed at CERN. (The only reference to particle-physics developments is an achievement of high magnetic fields at Fermilab “in 1963” – which was before plans for that US laboratory had even been drawn up.)
The Cold Wars is the English translation, with French government support, of La guerre du froid (Editions Seuil). The book concludes with the emergence of the new cuprate “high-temperature” superconductors. The search for superconductivity at still higher temperatures and the explanation of how this happens remains a glamorous research focus, and a final chapter updates these developments beyond what could have been described in the original 1994 edition.
by Michel Paty, EDP Sciences, Collection Sciences et histoires. Paperback ISBN 286883518X, €34.
The title Michel Paty has chosen for his book, La physique du XXe siècle (Physics of the 20th century), is an ambitious one. Summarizing the main advances in physics over the past 100 years in 276 pages, as well as demonstrating their impact on other fields of science, seems like an impossible task. Indeed, the author himself, a physicist and science historian, questions whether it is possible to single out the 20th century’s most important and most characteristic developments in science in general and physics in particular. He takes up the challenge all the same, painting a general panorama of physics in the 20th century.
He begins by reviewing the main concepts of physics, describing the historical background to them, and the men and women associated with them. These include relativity, quantum physics, atoms and states of matter, the nucleus, elementary particles, fundamental fields, dynamic systems and phase transitions. He then turns to fields closely related to physics, namely geophysics, astrophysics, cosmology and, more generally, the search for the origins of the universe. At the end, he examines the subject of physics and the associated methods, and comes back to the emergence of Big Science in the 20th century. Finally, in his conclusion, he describes the lessons to be learnt from the past and looks to the future with confidence.
For the student or curious novice, Paty’s book can quickly become a reference manual, whose use will vary according to individual requirements. It provides the reader with a general introduction to the main fields of physics research and helps him or her along with historical references. The photographs (essentially portraits), boxes, diagrams and tables, which are simple and well chosen, offer an alternative means of getting to grips with the subject. Finally, a detailed bibliography invites the reader to further exploration. Paty has thus essentially met the challenge he set himself, as his book opens the door to those who wish to enter the universe of physics.
This excellent introduction to neutrino physics describes, in 14 chapters and more than 400 pages, the past, present and future experiments and essential developments in one of the most exciting fields of fundamental physics today. Ranging from “Important historical experiments” to “Neutrinos in cosmology”, this comprehensive overview of neutrino physics was fittingly published shortly after the Nobel Prize for Physics was awarded to two neutrino physicists, Raymond Davis and Masatoshi Koshiba, for their pioneering contributions to astrophysics, in particular for the detection of cosmic neutrinos.
Neutrinos – first postulated in the 1930s and detected in 1956 by Clyde Cowan and Fred Reines – are among the most fundamental particles in the universe, but they are also among the least understood. The author, Kai Zuber from Oxford University, begins with some personally selected historical milestones and theoretical background. He then describes the fundamental properties of the neutrino, addresses the question of neutrino mass, and looks at the place of the neutrino within and beyond the Standard Model. Zuber continues with a discussion of the role of neutrinos in modern astroparticle physics and ends with neutrinos in cosmology and the problem of dark matter, thus covering the full range of neutrino physics. It is remarkable that Zuber describes, over many chapters, not only neutrino experiments, detectors and spectrometers in operation, but also those that are at present under construction or planned, such as the KATRIN experiment and the neutrino factory.
The book ends with a summary and personal outlook, a comprehensive list of references and a detailed index. All of this helps the reader to enjoy a fascinatingly written overview of this exciting field of physics, where “you always have to expect the unexpected”. The only weak point is that some of the figures are of poor quality, making it difficult to see what is shown.
Neutrino Physics is a textbook at a level that is suitable for graduate students in physics and astrophysics. It can be highly recommended to anyone interested in this field, and to any advanced student who wants to learn more about this research topic and who needs to understand neutrino physics.
CERN’s member states have adopted a new protocol on the privileges and immunities of the organization. This brings CERN into line with other European intergovernmental organizations, such as the European Space Agency and the European Southern Observatory, which already enjoy international status in all their member states.
The protocol, which is also open for signature to non-member states that have agreements with CERN, will simplify the movement of personnel and materials between countries involved in projects with CERN. The privileges and immunities granted to CERN are similar to those granted to other international organizations. The protocol will also facilitate any future enlargement of the organization.
CERN already benefits from international status in its two host states, France and Switzerland. Switzerland accorded this status in 1955, as did France after the CERN site was extended across the Franco-Swiss border in 1965. With the new protocol, all member states that sign will recognize CERN’s international status.
When it comes into force, the protocol will have important effects for the organization’s activities in other countries, particularly those involving contractors or collaborators in other research institutes. For example, by establishing specific privileges and immunities, it will ease the movement of personnel between countries involved in projects in which CERN is a partner. It will also exempt CERN’s purchasing activities from tax (in particular VAT) and customs duties, and thus simplify the transfer of equipment and materials between the various countries that can be involved in a single contract – with the effect of reducing the costs often incurred through successive taxation as goods move between countries.
The protocol also has an important symbolic value for the future of CERN, as it is open for signature not only to CERN’s member states, but also to other states that collaborate with CERN, either as associate member states or through co-operation agreements. “Although this seems symbolic today,” explains CERN’s director-general Robert Aymar, “I believe that in the future, with the increasing globalization of particle physics, this will become a valuable tool in helping CERN to remain a powerful force in science.”
Nine member states signed the protocol in a ceremony at CERN on 18 March, bringing to 11 (with France and Switzerland) the number of member states that have now agreed to grant full international status to CERN. The other nine have set in motion procedures that will allow them to sign in the near future. The protocol will come into force once it has been signed and ratified by 12 member states in addition to the host states.
Large laboratories obtain scientific data in vast quantities and usually use this material for rapid research driven by competition. The majority of important results are collected in as short a time as possible. When new data appear, older data lose their importance and are abandoned or placed at the disposal of smaller laboratories that can make use of them.
This has been the case in the past with data obtained at laboratories such as CERN, Fermilab and JINR, which came in such quantities that they could not be exhaustively analysed by the researchers there. The data were therefore given to various universities and other smaller laboratories, which over a long period of time have analysed the events in question and sometimes made valid discoveries.
More recently, data from the CDF and D0 experiments at Fermilab have become available via the web. A more leisurely analysis phase is also happening with data from the experiments at LEP, whose activity is winding down. This gives researchers at smaller scientific institutions the opportunity to follow up the work and make new findings. For example, institutes in the “Post L3” collaboration are currently analysing some LEP data in their own time and have no obligation to provide results by a specific deadline.
The pictures taken in the late 1960s with the CERN 2 m hydrogen bubble chamber show the potential importance of this approach. Its films ended up in various universities, either for further analysis or for didactic purposes, because bubble-chamber pictures are useful for students. Consequently, during the 1970s, the University of Bucharest and JINR in Dubna obtained 125,000 pictures courtesy of CERN. The pictures were found to contain a number of interesting items that had earlier been overlooked, because the principal analysis had viewed them with different criteria in mind.
In one particular example, V M Karnauhov, V I Moroz, C Coca and A Mihul were able to report a resonance in π−p interactions at 16 GeV, with a mass of 3520 ± 3 MeV/c² and a width of 7 +20/−7 MeV, at a significance of eight standard deviations (Karnauhov et al. 1992). At the time this seemed very strange, and most physicists were not particularly interested, as the resonance corresponded to a five-quark particle (uudcc̄), which did not then fit into any theoretical framework.
During the past year, however, evidence for several exotic resonances has been reported. A real “gold rush” for similar phenomena – the “pentaquarks” – has begun, even though there are few, if any, irrefutable theoretical explanations; indeed, their masses cannot yet be calculated for lack of a theoretical basis. The reported states include the Θ* (with a mass of 1540 MeV/c² and a width of 17 MeV) and the Ξ(1862) baryon with S = −2, which have still to be established with high accuracy. They appear to be states of five quarks (pentaquarks), i.e. four quarks and one antiquark, yielding a system without colour, as is necessary for the state to be observable.
The 2 m bubble-chamber data suggested long ago that at least one more exotic baryonic state had been found, with a mass of 3520 ± 3 MeV/c², a width of 7 +20/−7 MeV and S = 0: a pentaquark baryon with zero strangeness. The essential difference between the Θ* and Ξ(1862) and what was found long ago is that the old resonance contained a cc̄ pair, while the new ones contain s or s̄ quarks, giving a substantial difference in the final mass. Other teams have also reported possible sightings of pentaquarks in data from the 2 m chamber, and now the H1 experiment at DESY has evidence for a uuddc̄ state with a mass of 3100 MeV/c².
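To make the quark bookkeeping explicit – the assignments below follow the interpretations that are standard in the pentaquark literature, and the label X(3520) for the old bubble-chamber resonance is introduced here purely for illustration – the candidate states mentioned above would read:

$$\Theta^{+}(1540) = uudd\bar{s}\;(S=+1), \qquad \Xi^{--}(1862) = ddss\bar{u}\;(S=-2),$$
$$X(3520) = uudc\bar{c}\;(S=0), \qquad \Theta_{c}(3100) = uudd\bar{c}\;(S=0).$$

In each case four quarks combine with one antiquark, and in SU(3) colour the product $3 \otimes 3 \otimes 3 \otimes 3 \otimes \bar{3}$ contains singlets, which is why such a five-quark system can be colourless and therefore observable.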
So what can we learn from this experience? The distribution of data to smaller institutions, which perhaps have more time to follow different or unfashionable lines of analysis, must continue. Besides the benefits that this activity can bring to the institutes themselves, the long-term process also brings fresh minds to the analysis, as younger physicists, who may have new approaches, replace older ones.
The Grid should also be able to overcome some of the difficulties of the past. It aims to provide a global computing facility, which will allow the smaller laboratories to participate in the primary research. However, the Grid is being developed to provide enormous computing power; it will not be able to provide the thinking time that is necessary for the best job to be done. This can only be provided by researchers performing long-term analysis, generally in the smaller laboratories.
Ministers meeting at the end of January for the Committee for Scientific and Technological Policy of the OECD (Organisation for Economic Cooperation and Development) have acknowledged the importance of ensuring access to large-scale research infrastructures in high-energy physics and of the long-term vitality of the field. The ministers also noted the worldwide consensus of the scientific community in choosing an electron-positron linear collider as the next accelerator-based facility to complement and expand on discoveries that are likely to emerge from the Large Hadron Collider (LHC) at CERN. They agreed that the planning and implementation of such a large, multi-year project should be carried out on a global basis, and should involve consultations among not only scientists but also representatives of science funding agencies from interested countries.
At their previous meeting in 1999, the ministers had endorsed the creation of the OECD Global Science Forum, which provided a useful venue for consultations among senior science policy officials and programme managers, and was a valuable mechanism for bringing together government officials with representatives of scientific communities. Now, at the January 2004 meeting, the ministers were in a position to devote their attention to the forum’s work concerning high-energy physics. In particular the ministers endorsed the statement prepared by the forum’s Consultative Group on High-Energy Physics and noted several important points that were articulated in the group’s report. These included the need to have large, next-generation facilities funded, designed, built and operated as global-scale collaborations; the need to educate, attract and train young people in the fields of high-energy physics, astrophysics and cosmology; and the need for a strong international R&D collaboration and studies of the various issues required to realize the next major accelerator facility on the consultative group’s roadmap – a next-generation electron-positron collider with a significant period of concurrent running with the LHC.
CERN and the collaboration behind the Alpha Magnetic Spectrometer (AMS) experiment have signed a new memorandum of understanding (MOU) for the execution of the experiment, which will take place not at CERN, or elsewhere on Earth, but in space. The new MOU foresees the establishment at CERN of the experiment’s Payload Operations and Control Centre, and the Science Operations Centre. CERN will also provide areas for the assembly and testing of the AMS detector, as well as offices for users and secretarial support.
AMS is a major international collaboration that is led by Sam Ting of MIT. The principal goal of the AMS experiment, which will be located on board the International Space Station, is to look for antiparticles in the primary cosmic radiation of outer space. Other objectives of the experiment include searching for dark matter and carefully analysing details of the cosmic-ray spectrum. The detector will be equipped with a powerful superconducting magnet and sophisticated detectors for precision tracking, particle identification and photon detection.
AMS has been a “recognized experiment” at CERN since 1997. The new MOU, which is a significant upgrade of the previous agreement, has a duration of five years and can be renewed.
The UK Particle Physics and Astronomy Research Council (PPARC) has approved a £21 million (~€31 million) programme of accelerator R&D for future facilities in particle physics, including a linear collider and a possible neutrino factory. This will develop the UK’s academic base in these important areas. PPARC’s investment, in partnership with the Council for the Central Laboratory of the Research Councils (CCLRC), will fund a research programme and create two new university research centres. The aim is to build on existing academic expertise and develop a strong research base in accelerator R&D, in order to enhance the UK’s position in experimental particle physics.
The two centres that are being created are the Cockcroft Institute: National Centre for Accelerator Science and the Oxford/Royal Holloway Centre. The Cockcroft Institute is being established with £7.03 million (~€10.50 million) from PPARC, in partnership with the Northwest Development Agency, and the universities of Liverpool, Lancaster and Manchester. The second centre, which will receive £2 million (~€3 million) from PPARC, is a partnership with the University of Oxford and Royal Holloway, University of London. The centres will work closely with CCLRC’s Accelerator Science and Technology Centre to create a leading capability in accelerator science in the UK.
An electron-positron linear collider has been accepted by the international particle-physics community as the next large facility that is needed, and construction could start as early as 2009. UK scientists are focusing on developing the beam delivery system, which will take the accelerated particles to the collision point.
The neutrino factory is a proposed international experiment to study neutrinos, and will rely on a beam of muons to create the neutrinos. To achieve this, a new mechanism has been proposed for cooling the muons, and the Muon Ionisation Cooling Experiment (MICE) is designed to test this principle. A collaboration of more than 150 physicists and engineers from Europe, the US and Japan would like to build and test a section of a realistic cooling channel on a beamline, which could be constructed on the ISIS accelerator at CCLRC’s Rutherford Appleton Laboratory. The funding for MICE is at present only provisional, and depends on the project passing through some further review procedures.
The decade between 1967 and 1976 witnessed an impressive sequence of experimental and theoretical discoveries that changed the vision we had of the world – from the prediction of electroweak unification in the lepton sector (1967-1968) and the observation of scaling in deep-inelastic electron scattering (1969), to asymptotic freedom and quantum chromodynamics (1973) and the discoveries of the J/ψ (1974) and naked charm (1976). By 1976 the Standard Model of particle physics was in place, ready to confront experiments, and it was clear that a new accelerator was required to explore the electroweak unification sector. This is where the weak gauge bosons, W and Z, were expected, with approximate masses of 65 and 80 GeV/c², respectively. The arguments for the future LEP machine were already strong.
I remember being asked by John Adams (then executive director-general of CERN) to convene the Large Electron Positron Collider (LEP) study group in April 1976, and to edit the report. In practice this meant learning from theorists John Ellis and Mary K Gaillard all the beautiful new physics that was waiting for us, putting together some documents on the feasibility of the machine (which were available following Burt Richter’s seminal paper), and wrapping it all up as quickly as possible together with some bread-and-butter experimental comments. It took only seven months to get it all done to the satisfaction of Adams, who wanted to push the LEP project in the wake of the success of the Super Proton Synchrotron (SPS), which was about to start operation.
The proton-antiproton choice
The situation in 1976 sets the context in which the proton-antiproton decision was made. The pressure to discover the W and Z was so strong that the long design, development and construction time of the LEP project left most of us, even the most patient, dissatisfied. A quick (but hopefully not dirty) look at the new bosons would have been highly welcome. But when proton-proton colliders such as the Superconducting Intersecting Storage Rings (SCISR) were proposed in this spirit, they were “killed in the egg” by the management at CERN, with the argument that they would delay – or even worse, endanger – the LEP project. This was accepted as a serious argument even by the proponents of such colliders.
The same argument did not apply to the proton-antiproton collider as it did not require the construction of a new collider ring and could be proposed as an experiment. One might object that this sounds like a bad joke, because it implied the construction of an antiproton source, and that turned out later to include a collector/accumulator accelerator complex (AC/AA).
However, it remains true that the existence of the SPS, which was soon shown to perform extremely well, was obviously an essential element of the success of the proton-antiproton project, for which John Adams has to be credited. It is also true that he found it hard to swallow that his newborn baby should be tinkered with at such a young age and turned into a collider that had only a small chance of working. This was indeed the feeling of the vast majority of machine experts at the time, and much of Carlo Rubbia’s merit is that he pushed his ideas for the proton-antiproton collider with an untiring determination in such an adverse climate. Indeed, he pushed not only with determination but also with a clear vision of what his proposals would lead to, and with a deep understanding of the machine-physics issues at stake.
A threat from Fermilab
Another argument also made it possible for the proton-antiproton project to break the LEP taboo. If CERN did not buy Carlo’s idea, it was most likely that he would sell it to Fermilab. This threat was clear and had a great deal of weight when the decision was made at CERN. Despite the fact that the Fermilab machine was not performing well enough at the time to be used as a proton-antiproton collider, the threat very effectively accelerated the well known sequence of events that followed the publication in 1976 of the paper by Carlo Rubbia, Peter McIntyre and David Cline. In 1977, after the proposal had been made to CERN and Fermilab to produce the W and Z with existing machines, a feasibility study was undertaken by Franco Bonaudi, Simon Van der Meer and Bernard Pope that led to the Antiproton Accumulator (AA) design. At the same time a detector study was initiated under Carlo that led to the UA1 design, and the Initial Cooling Experiment (ICE) was proposed to the SPS Committee. The success of ICE was demonstrated in June 1978 and the approval for the UA1 detector followed immediately. Only six months later UA2 was also approved.
I strongly believe that if it had not been for Carlo, there would have been no proton-antiproton collider physics in the world for a long time, maybe forever. Whether the weak bosons would have been discovered at LEP, at the Stanford Linear Collider (SLC), or at some other collider is another matter, but it would have taken another six years at least. One might argue that six years is not really that long, but the top quark would not have been discovered either (other than indirectly from radiative corrections at LEP), nor would we have learned from the vast and rich amount of strong and electroweak physics data that have been collected at the SPS and Tevatron colliders – not to mention the low-energy LEAR physics, antihydrogen, glueballs, CP violation, antiprotonic helium atoms, etc.
The influence of the CERN ISR
I would like to say a word here about the CERN ISR and the seminal role that they played in the success of the proton-antiproton project. The ISR were the world’s first hadron collider. This was the machine on which the young generation of machine physicists who designed, built and operated the antiproton source and the proton-antiproton collider (and later on, maybe to a lesser extent, LEP) gained their experience and their expertise. It worked superbly, exceeding its design goals in both energy and luminosity. It is the machine on which Van der Meer’s ideas on stochastic cooling were tried for the first time, where they were studied and understood. It is also the machine with which a generation of physicists learned how to design experiments at hadron colliders.
When the first ISR experiments were being designed the strong interaction was still a complete mystery; when the machine was finally shut down QCD was in place. I do not mean to say that it is ISR physics that taught us about QCD, but it contributed to the development of several of its ideas. ISR physics helped us greatly in drawing a clear picture of hadron collisions, without which we would not have been able to design so effectively the UA experiments at CERN, and CDF and D0 at Fermilab. We in UA2 were particularly indebted to the ISR, where many of us had previously worked, an experience that proved an essential asset in designing a good detector.
I would also like to recall the extraordinary concentration of outstanding talents that the proton-antiproton project succeeded in attracting. One reason was of course that between the SPS and LEP projects – one completed and the other as yet unborn – its timing was in some sense ideal. But the other reason, possibly more important, was the challenging nature of the project, which attracted extremely bright engineers and physicists, both machine physicists and particle physicists.
The challenge of designing, constructing and assembling the antiproton source and detectors, and of getting them to work in such a short time, was enormous; as was that of digging and equipping the large experimental halls required for housing the new detectors that had to be alternately rolled in and out between collider and fixed target periods; and that of transforming the SPS into a collider. The amount of ingenuity that went into all these achievements was truly outstanding.
My best memory of those times may indeed be the good fortune I had to work with so many talents, and, in the case of UA2, to enjoy collaborating with such bright colleagues, senior physicists, postdocs, students or physicists of the same generation as mine.
The UA1/UA2 competition
The competition between UA1 and UA2 was real and lively, but relatively unimportant; it was more a kind of game, and we had a lot of fun playing it. There was no doubt that Carlo was the king of the proton-antiproton kingdom and was recognized as such by all of us. Undoubtedly, he would have had to take the blame if the proton-antiproton project had been a failure, but as it turned out to be a success he deserved to take the fame.
Personally, I had been working in Carlo’s group for six years or so, mostly on K physics. I had joined him as a postdoc in the mid-1960s, coming from nuclear physics, and I had learned from him the basis of experimental particle physics. I had always been impressed by his brightness, by the readiness of his mind and by his far-reaching vision; and I respected him, as I do today, as someone of a clearly outstanding stature. To respect him as the king did not mean to belong to his court, however, and we in UA2 were particularly keen on finding occasions when we could proclaim that “the king was naked”. Such occasions were very rare – the king was usually dressed splendidly – so they were all the more enjoyable.
The design of the UA2 detector was a success and its construction and running-in went extremely smoothly. We were rightly proud of it. For only one-third the cost of UA1 – a condition of our approval was that UA2’s cost should be significantly lower – we managed to build a detector that was ready on time, that saw the W and Z as soon as the collider luminosity made it possible (and at the same time as UA1 did), that measured the W and Z masses more accurately than UA1, and that was better than UA1 at detecting and measuring hadron jets. It was easier to design UA2 than UA1 because UA2 did not have to be a multi-purpose detector and could afford simply to ignore some of the physics, in particular to be blind to muons. While the main asset of the UA1 detector was its central detector, that of UA2 was its calorimetry.
One difficulty in the design process had been judging how well the machine would perform, how long it would take to get going, and how noisy and hostile an experimental environment had to be expected. Sam Ting’s detector (which was ultimately not approved) could have run in almost any background conditions, but could only see muons; the UA1 central detector required very clean conditions; UA2 was somewhere in between.
Expectations exceeded
The collider turned out to be an exceedingly clean machine and we had grossly underestimated how fast its luminosity would increase. In particular we had left an open wedge in our calorimeter, instrumented with a magnetic spectrometer, to do quietly (so we thought) some exploratory measurements while the machine was being tuned and run in. The wedge did not stay open very long, for the performance of the machine progressed at high speed, and we were able to tackle the first high-luminosity run with full calorimetric coverage.
I do not wish to repeat here the oft-told stories about the first seminars and the first publications reporting the UA1 and UA2 discoveries of the weak bosons, but I wish to comment on how we perceived these events. As I have already said, we were all expecting to see the weak bosons, we had no competition to fear from other laboratories and there was no question of UA2 “scooping” UA1 in the sense of stealing a Nobel prize or whatever. There was no doubt in our minds that Carlo and of course Simon deserved the whole credit for the success. The real outstanding achievement was the production of the weak bosons, not their detection. Without Carlo and Simon there would have been no proton-antiproton collider, but without UA1 and UA2 there would have been other experiments that would undoubtedly have done as good a job. The success of UA2 was largely due to the quality of the many physicists who had worked together very efficiently, with an excellent team spirit, and it was impossible to single out a few of them as deserving a larger part of the credit.
Of course there was competition; we enjoyed being faster or more clever than UA1 whenever we could afford to be, as when we were the first to report, at the 1982 Paris Conference, the observation of very clear hadron jets – a breakthrough in the history of strong-interaction physics. But this was not the dish, it was just the spices. The dish was serious business. It was reporting to the physics community what we had been finding. It was writing papers that would stay forever as important documents in the history of science.
In retrospect I am proud that we resisted the pressure that was exerted on us to publish faster than we felt we should. It would have been stupid and childish to give in, and would not have shown much respect for science. In fact this pressure almost made us overreact and, in the case of the Z, it caused a delay of nearly two months between the UA1 and UA2 publications, because we preferred to wait for the imminent new run and collect more statistics before publishing. Virtually no one in UA2 thought that we should have behaved differently – we all felt quite strongly about it. In particular, the wiser and more experienced members of the collaboration (I mean the generation before mine) gave this line their full support.
It is obvious today that there would have been no point in making a fuss about an event detected in 1982 that was most likely a Z, but in which one of the decay electrons was not identified because it hit a coil of our forward spectrometer magnets. We were wise to wait for more statistics before publishing the Z results. The issue at stake was not to bet on the truth, but to behave as if we were the only experiment.
Scientists of my generation are very fortunate to have witnessed such amazing progress in our understanding of nature, in phase with our own scientific life. It is remarkable that this has not only been the case in particle physics but also, and maybe to an even greater extent, in astronomy and life sciences. While many questions remain unanswered in each of these three fields, none can be put aside any longer as being a mystery inaccessible to science. Our vision of the world has changed dramatically. Having had an opportunity to contribute to this progress, however modest our contribution may have been, is very good fortune. May science be smiling on the next generation as kindly as it did on us, with the new physics that the LHC will soon reveal.
Apart from the Nobel lectures of Rubbia and Van der Meer, the interested reader may consult a list of relevant references in John Krige, History of CERN, volume III, chapter 6 (Elsevier, Amsterdam, 1996).
•This article is based on a talk given at the symposium held at CERN in September 2003, “1973: neutral currents, 1983: W± and Z⁰ bosons. The anniversary of CERN’s discoveries and a look into the future.” The full proceedings will be published as volume 34, issue 1, of The European Physical Journal C. Hardback ISBN 3540207503.