The Italian government has approved the long-term funding of the SuperB project. Mariastella Gelmini, the Italian minister for university education and research, announced on 19 April that the Interministerial Committee for Economic Programming had approved the National Research Plan 2011–2013. This sets out the future direction of 14 flagship projects, including SuperB.
The SuperB project is based on the principle that smaller particle accelerators, operating at low energy, can still deliver excellent scientific results complementary to the high-energy frontier. The project centres on an asymmetric electron–positron collider with a peak luminosity of 10³⁶ cm⁻² s⁻¹. Such a high luminosity will allow the indirect exploration of new effects in the physics of heavy quarks and flavours at energy scales up to 10–100 TeV, through studies of large samples of B, D and τ decays at only 10 GeV in the centre of mass. At full power, SuperB should be able to produce 1000 pairs of B mesons and the same number of τ pairs, as well as several thousand D mesons, every second. The design is based on ideas developed in Italy and tested by the accelerator division of the INFN National Laboratories in Frascati using the DAΦNE machine.
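These rates follow directly from the quoted luminosity, because the event rate for a process is simply the luminosity multiplied by its cross-section. A back-of-envelope check in Python, using approximate textbook cross-sections for e+e− collisions at the Υ(4S) resonance (the specific values below are illustrative assumptions, not figures from this article):

```python
# Back-of-envelope rates at the quoted peak luminosity of 1e36 cm^-2 s^-1.
# Cross-sections are approximate textbook values at the Upsilon(4S),
# used here purely for illustration.
NB_TO_CM2 = 1e-33          # 1 nanobarn in cm^2
luminosity = 1e36          # cm^-2 s^-1

cross_sections_nb = {
    "B-Bbar pairs (Upsilon(4S))": 1.1,
    "tau+ tau- pairs": 0.92,
    "c-cbar events": 1.3,   # each yields roughly two charmed hadrons
}

for process, sigma_nb in cross_sections_nb.items():
    rate = luminosity * sigma_nb * NB_TO_CM2   # rate = L * sigma, per second
    print(f"{process}: ~{rate:,.0f} per second")
```

With these inputs the script returns roughly 1100 B-meson pairs and 900 τ pairs per second, and over a thousand charm events (hence several thousand D mesons), consistent with the numbers quoted above.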
Sponsored by the National Institute of Nuclear Physics (INFN), SuperB is to be built in Italy with international involvement. Many countries have expressed an interest in the project, and physicists from Canada, France, Germany, Israel, Norway, Poland, Russia, Spain, the UK and the US are taking part in the design effort.
The Istituto Italiano di Tecnologia is co-operating with INFN on the project, which should help in the development of innovative techniques with an important impact on technology and other research areas. It will be possible to use the accelerator as a high-brilliance light source, for example. The machine will be equipped with several photon channels, allowing the extension of the scientific programme to the physics of matter and biotechnology.
Simon van der Meer was born in 1925 in The Hague, the third child of Pieter van der Meer and Jetske Groeneveld. His father was a school teacher and his mother came from a teacher’s family. Good education was highly prized in the van der Meer family and the parents made a big effort to provide this to Simon and his three sisters. Having attended the gymnasium (science section) in The Hague, he passed his final examination in 1943 – during German occupation in wartime. He stayed at the gymnasium for another two years because the Dutch universities were closed, attending classes in the humanities section. During this period – inspired by his excellent physics teacher – he became interested in electronics and filled his parents’ house with electronic gadgets.
In 1945 Simon began studying technical physics at Delft University, where he specialized in feedback circuits and measurement techniques. In a way, this foreshadowed his main invention, stochastic cooling, which is a combination of measurement (of the position of the particles) and feedback. The “amateur approach” – to use his own words – that he practised during his stay at Delft University later crystallized in an ability to see complicated things in a simple and clear manner. In 1952 he joined the highly reputed Philips research laboratory in Eindhoven, where he became involved in development work on high-voltage equipment and electronics for electron microscopes. Then, in 1956, he decided to move to the recently founded CERN laboratory.
As one of his first tasks at CERN, Simon became involved in the design of the pole-face windings and multipole correction lenses for the 26 GeV Proton Synchrotron (PS), which is still in operation today as the heart of CERN’s accelerator complex. Supervised by and in collaboration with John Adams and Colin Ramm, he developed – in parallel to his technical work on power supplies for these big magnets – a growing interest in particle physics. He worked for a year on a separated antiproton beam, an activity that triggered the idea of the magnetic horn – a pulsed focusing device in which charged particles traverse a thin metal wall carrying a pulsed high current. Such a device is often referred to as a “current sheet lens”. The original application of the magnetic horn was for neutrino physics. Of the secondary particles emerging from a target hit by a high-energy proton beam, the horn selectively focused the pions. When the pions then decayed into muons and neutrinos, an equally focused and intense neutrino beam was obtained. The magnetic horn found many applications all around the world, for both neutrino physics and the production of antiprotons.
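The “current sheet lens” action can be summarized with a standard textbook estimate (not a formula from the original article). A current I flowing along the horn’s inner conductor produces a toroidal field between the conductors, and a particle of charge e crossing a length L(r) of that field region at radius r receives a transverse momentum kick:

```latex
B(r) = \frac{\mu_0 I}{2\pi r},
\qquad
\Delta p_\perp = e \int B \,\mathrm{d}l \approx \frac{e\,\mu_0 I\, L(r)}{2\pi r}.
```

Shaping the inner conductor so that L(r) grows roughly as r² makes the kick proportional to r – exactly the behaviour of a lens, focusing for one sign of charge and defocusing for the other.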
In 1965 Simon joined the group led by Francis Farley working on a g-2 experiment for the precision measurement of the magnetic moment of the muon. There, he took part in the design of a small storage ring (the g-2 ring) and participated in all phases of the experiment. As he stated later, this period was an invaluable experience not only for his scientific life but also through sharing the vibrant atmosphere at CERN at the time – which was full of excitement – and the lifestyle of experimental high-energy physics. It was also about this time, in 1966, that Simon met his future wife, Catharina Koopman, during a skiing excursion in the Swiss mountains. In what Simon later described as “one of the best decisions of my life”, they married shortly afterwards and had two children, Ester (born 1968) and Mathijs (born 1970).
In 1967, Simon again became responsible for magnet power supplies, this time for the Intersecting Storage Rings (ISR) and a little later also for the 400 GeV Super Proton Synchrotron (SPS). During his activities at the ISR he developed the now famous “van der Meer scan”, a method to measure and optimize the luminosity of colliding beams. The ISR was a collider with a huge intensity, more than 50 A direct current of protons per beam, and it was in 1968 – probably during one of the long nights devoted to machine development – that a new and brilliant idea to increase luminosity was conceived: the concept of stochastic cooling.
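In its modern usage, a van der Meer scan sweeps one beam across the other while recording the collision rate; for Gaussian beams the rate traces a Gaussian whose area and peak give the effective overlap width, and hence the absolute luminosity. A toy numerical illustration, with invented, roughly LHC-like parameters:

```python
import numpy as np

# Toy van der Meer scan: two equal Gaussian bunches, all numbers invented.
f_rev = 11_245.0            # revolution frequency in Hz (LHC-like value)
N1 = N2 = 1.2e11            # particles per bunch
sigma = 20e-4               # transverse beam size at the crossing point, cm

# Sweep one beam across the other; the rate follows the overlap integral,
# which for two Gaussians of width sigma is exp(-d^2 / (4 sigma^2)).
d = np.linspace(-5, 5, 201) * sigma
rate = np.exp(-d**2 / (4 * sigma**2))

# van der Meer's trick: effective width = area / (peak * sqrt(2*pi)).
step = d[1] - d[0]
Sigma = (rate.sum() * step) / (rate.max() * np.sqrt(2 * np.pi))

# Absolute luminosity per colliding bunch pair (equal widths in x and y).
L = f_rev * N1 * N2 / (2 * np.pi * Sigma**2)
print(f"Sigma/sigma = {Sigma / sigma:.3f} (expect sqrt(2) = 1.414)")
print(f"luminosity ~ {L:.2e} cm^-2 s^-1")
```

The recovered width comes out as √2 times the single-beam size, as expected for two equal Gaussians, and no absolute beam-size measurement is needed – the scan itself calibrates the overlap.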
A Nobel concept
“The cooling of a single particle circulating in a ring is particularly simple” (van der Meer 1984), provided that it can be discerned amid all of the electronic noise from the pick-up and the preamplifiers. “All” that is needed is to measure the amount of betatron oscillation at a suitable location in the ring and correct it later with a kicker at a phase advance of an odd multiple of 90° (figure 1). But the devil (closely related to Maxwell’s demon) is in the detail. Normally, it is not possible to measure the position of just one particle because there are so many particles in the ring that a single one is impossible to resolve. So, groups of particles – often referred to as beam “slices” or “samples” – must be considered instead.
For such a beam slice, it is indeed possible to measure the average position with sufficient precision during its passage through a pick-up and to correct for this when the same slice goes through a kicker. However, the particles in such a slice are not fixed in their relative position. Because there is always a spread around the central momentum, some particles are faster and others are slower. This leads to an exchange of particles between adjacent beam slices. This “mixing” is vital for stochastic cooling – without it, the cooling action would be over in a few turns. Stochastic cooling does eventually act on individual particles. With the combination of many thousands of observations (many thousands of turns), a sufficiently large bandwidth of the cooling system’s low-noise (sometimes cryogenic) electronics and powerful kickers, it works.
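The mechanism just described lends itself to a toy simulation: each turn, measure the mean displacement of each sample of particles, apply a corrective kick equal to that mean, and let mixing reshuffle which particles share a sample. A deliberately oversimplified sketch (ignoring real beam dynamics, amplifier noise and bandwidth limits; all numbers are invented):

```python
import numpy as np

rng = np.random.default_rng(1)

N = 10_000                      # particles circulating in the ring
x = rng.normal(0.0, 1.0, N)     # betatron amplitudes (arbitrary units)
sample_size = 100               # particles seen together in one beam "slice"
turns = 500

for turn in range(turns):
    # Crude stand-in for momentum-spread mixing: without this reshuffle,
    # every fixed slice has zero mean after the first turn and cooling stalls.
    rng.shuffle(x)
    for start in range(0, N, sample_size):
        sl = slice(start, start + sample_size)
        x[sl] -= x[sl].mean()   # pick-up measures the slice mean, kicker corrects

print(f"rms spread after {turns} turns: {x.std():.3f}  (started at 1.000)")
```

The spread shrinks by roughly a factor 1/n of its variance per turn for samples of n particles, which is why a large system bandwidth (many small samples) is so valuable – and commenting out the shuffle shows directly why mixing is vital.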
At the time, there were discussions about a possible clash with Liouville’s theorem, which states that a continuum of charged particles guided by electromagnetic fields behaves like an incompressible liquid. In reality, particle beams consist of a mixture of occupied and non-occupied phase space – much like foam in a glass of beer. Stochastic cooling is not trying to compress this “liquid” but rather it separates occupied and non-occupied phase space, in a way similar to foam that is settling. Once these theoretical questions were clarified there were still many open issues, such as the influence of noise and the required bandwidth. With a mild push from friends and colleagues, Simon finally published the first internal note on stochastic cooling in 1972 (van der Meer 1972).
Over the following years, the newly born bird quickly learnt to fly. A first proof-of-principle experiment was carried out in the ISR with a quickly installed stochastic cooling system. Careful emittance measurements over a long period showed the hoped-for effect (Hübner et al. 1975). Together with the proposal to stack antiprotons for physics in the ISR (Strolin et al. 1976) and for the SPS-collider (Rubbia et al. 1977), this led to the construction of the Initial Cooling Experiment (ICE) in 1977. ICE was a storage ring built from components of the g-2 experiment. It was constructed expressly for a full-scale demonstration of stochastic cooling of beam size and momentum spread (electron-cooling was tried later on). In addition, Simon produced evidence that “stochastic stacking” (stacking in momentum space with the aid of stochastic cooling) works well as a vital tool for the production of large stacks of antiprotons (van der Meer 1978).
Once the validity of the method had been demonstrated, Simon’s idea rode on the crest of a wave of large projects that took life at CERN. There was the proposal by David Cline, Peter McIntyre, Fred Mills and Carlo Rubbia to convert the SPS into a proton–antiproton collider. The aim was to provide experimental evidence for the W and Z particles, which would emerge in head-on collisions between sufficiently dense proton and antiproton bunches. Construction of the Antiproton Accumulator (AA) was authorized and started in 1978 under the joint leadership of Simon van der Meer and Roy Billinge. The world’s first antiproton accumulator started up on 3 July 1980, with the first beam circulating the very same evening, and by 22 August 1981 a stack of about 10¹¹ particles had been achieved (Chohan 2004). The UA1 and UA2 experiments had already reported the first collisions between high-energy proton and antiproton bunches in the SPS, operating as a collider, on 9 July 1981.
The real highlight arrived in 1982 with the first signs of the W boson, announced on 19 January 1983, to be followed by the discovery of the Z, announced in May. This was swiftly followed by the award of the Nobel Prize in Physics in 1984 to Simon and Carlo Rubbia for “their decisive contributions to the large project which led to the discovery of the field particles W and Z, communicators of the weak force.”
Simon participated actively in both the commissioning and the operation of the AA and later the Antiproton Accumulator Complex (AAC) – the AA supplemented by a second ring, the Antiproton Collector (AC). He contributed not only to stochastic cooling but to all aspects, for example writing numerous, highly appreciated application programs for the operation of the machines.
He was certainly aware of his superior intellect but he took it as a natural gift, and if someone else did good work he valued that just as much. When there was a need, he also did “low-level” work. Those who worked with him remember many occasions when someone had a good suggestion on how to improve a controls program; Simon would say, “Yes, that would be better indeed”, and next morning it was in operation. He was often in a thoughtful mode, contemplating new ideas and concepts. Usually he did not pass them on to colleagues for comments until he was really convinced himself that they would work. Once he was sure that a certain concept was good and that it was the right way to go, he could be insistent on getting it going. He rarely made comments in meetings, but when he did say something it carried considerable weight. He was already highly respected long before he became famous in 1984.
Cooling around the world
In the following years, Simon was extremely active in the conversion and the operation of the AA, together with the additional large-acceptance collector ring, the AC. These two rings, with a total of 16 stochastic cooling systems, began antiproton production in 1987 as the AAC – and remained CERN’s workhorse for antiproton production until 1996. Later, the AA was removed and the AC converted into the Antiproton Decelerator (AD), which has run since 2000 with just three stochastic-cooling systems. These remaining systems operate at 3.5 GeV/c and 2 GeV/c during the deceleration process and are followed by electron cooling at lower momentum.
Stochastic cooling was also used in CERN’s Low Energy Antiproton Ring (LEAR) in combination with electron cooling until the mid-1990s. In a nutshell, stochastic cooling is most suited to rendering hot beams warm and electron cooling makes warm beams cold. Thus the two techniques are, in a way, complementary. As a spin-off from his work on stochastic cooling, Simon proposed a new (noise assisted) slow-extraction method called “stochastic extraction”. This was first used at LEAR, where it eventually made possible spills of up to 24-hour duration. Prior to that, low-ripple spills could last at best a few seconds.
Simon would see the worldwide success of his great inventions not only before his retirement in 1991, but also afterwards. Stochastic cooling systems became operational at Fermilab around 1980 and later, in the early 1990s, at GSI Darmstadt and Forschungszentrum Jülich (FZJ), as well as at other cooling rings all over the world. The Fermilab antiproton source for the Tevatron started operation in 1985. It is in several respects similar to the CERN AA/AC, which it has since surpassed in performance, leading to important discoveries, including that of the top quark.
For many years, routine application of stochastic cooling was limited to coasting beams, and stochastic cooling of bunched beams in large machines remained a dream for more than a decade. However, now that delicate problems related to the saturation of front-end amplifiers and subsequent intermodulation have been mastered, bunched-beam stochastic cooling is in routine operation at Fermilab and at the Relativistic Heavy Ion Collider at Brookhaven. Related beam-cooling methods, such as optical stochastic cooling, are also being proposed or are under development.
The magnetic horn, meanwhile, has found numerous applications in different accelerators. The van der Meer scan is a vital tool used for LHC operation and stochastic extraction is used in various machines, for example in COSY at FZJ (since 1996).
After his retirement, Simon kept in close contact with a small group of his former colleagues and friends and there were more or less regular “Tuesday lunch meetings”.
“Unlike many of his Nobel colleagues, who almost invariably are propelled to great achievements by their self-confidence, van der Meer remained a modest and quiet person preferring, now that he had retired, to leave the lecture tours to other more extrovert personalities and instead look after his garden and occasionally see a few friends. Never has anyone been changed less by success”, wrote Andy Sessler and Ted Wilson in their book Engines of Discovery (Sessler and Wilson 2007). At CERN today, Simon’s contributions continue to play a significant role in many projects, from the LHC and the CERN Neutrinos to Gran Sasso facility to the antimatter programme at the AD – where results last year were honoured with the distinction of “breakthrough of the year” by Physics World magazine.
We all learnt with great sadness that Simon passed away on 4 March 2011. He will stay alive in our memories for ever.
Particle accelerators are vital state-of-the-art instruments for both fundamental and applied research in areas such as particle physics, nuclear physics and the generation of intense synchrotron radiation and neutron beams. They are also used for many other purposes, in particular medical and industrial applications. Overall, the “market” for accelerators is large and increasing steadily year on year. Moreover, R&D in accelerator science and technology, as well as its applications, often leads to innovations with strong socio-economic impacts.
New accelerator-based projects generally require the development of advanced concepts and innovative components with continuously improving performance. This necessitates three levels of R&D: exploratory (validity of principles, conceptual feasibility); targeted (technical demonstration); and industrialization (transfer to industry and optimization). Because these developments require increasingly sophisticated and more expensive prototypes and test facilities, many of those involved in the field felt the need to establish a new initiative aimed at providing a more structured framework for accelerator R&D in Europe with the support of the European Commission (EC). This has led to the Test Infrastructure and Accelerator Research Area (TIARA) project. Co-funded by the European Union Seventh Framework Programme (FP7), the three-year preparatory-phase project started on 1 January 2011, with its first meeting being held at CERN on 23–24 February.
The approval of the TIARA project and its structure continues a strategic direction that began a decade ago with the report in 2001 to the European Committee for Future Accelerators from the Working Group on the future of accelerator-based particle physics in Europe, followed by the creation of the European Steering Group on Accelerator R&D (ESGARD) in 2002. This was reinforced within the European Strategy for particle physics in 2006. The main objective is to optimize and enhance the outcome of the accelerator research and technical developments in Europe. This strategy has been developed and implemented with the incentive of the Framework Programmes FP6 and FP7, thanks to projects such as CARE, EUROTeV, EURISOL, EuroLEAP, SLHC-PP, ILC-HiGrade, EUROnu and EuCARD. Together, these programmes represent a total investment of around €190 million for the period covered by FP6 and FP7 (2004 to 2012), with about €60 million coming from the EC.
The overall aim of TIARA is to facilitate and optimize European R&D efforts in accelerator science and technology in a sustainable way. This endeavour involves a large number of partners across Europe, including universities as well as national and international organizations managing large research centres. Specifically, the main objective is to create a single distributed European accelerator R&D facility by integrating national and international accelerator R&D infrastructures. This will include the implementation of organizational structures to enable the integration of existing individual infrastructures, their efficient operation and upgrades, as well as the construction of new ones whenever needed.
Project organization
The means and structures required to bring about the objectives of TIARA will be developed through the TIARA Preparatory Phase project, at a total cost of €9.1 million, with an EC contribution of €3.9 million. The duration is 3 years – from January 2011 to December 2013 – and it will involve an estimated total of 677 person-months. The project is co-ordinated by the French Alternative Energies and Atomic Energy Commission (CEA), with Roy Aleksan as project co-ordinator, François Kircher as deputy co-ordinator, and Céline Tanguy as project-assistant co-ordinator. Its management bodies are the Governing Council and the Steering Committee. The Governing Council represents the project partners and has elected Leonid Rivkin, of the Paul Scherrer Institute, as its chair. The Steering Committee will ensure the execution of the overall project’s activities, with all work-package co-ordinators as members.
The project is divided into nine work packages (WP). The first five of these are dedicated to organizational issues, while the other four deal with technical aspects.
WP1 focuses on the consortium’s management. Its main task is to ensure the correct achievement of the project goals and it also includes communications, dissemination and outreach. The project office, composed of the co-ordinator and the management team, forms the core of this work package, which is led by Aleksan, the project co-ordinator.
The main objective of WP2, also led by Aleksan, is to develop the future governance structure of TIARA. This includes the definition of the consortium’s organization, the constitution of the statutes and the required means and methods for its management, as well as the related administrative, legal and financial aspects.
WP3 is devoted to the integration and optimization of the European R&D infrastructures. Based on a survey of those that already exist, its objective is to determine present and future needs and to propose ways for developing, sharing and accessing these infrastructures among different users. This work package will also investigate how to strengthen the collaboration with industry and define a technology roadmap for the development of future accelerator components in industry. It is led by Anders Unnervik of CERN.
The main objective of WP4 is to develop a common methodology and procedure for initiating, costing and implementing collaborative R&D projects in a sustainable way. Using these procedures, WP4 will aim to propose a coherent and comprehensive joint R&D programme in accelerator science and technology, which will be carried out by a broad community using the distributed TIARA infrastructures.
The development of structures and mechanisms that allow efficient education and training of human resources and encourage their exchange among the partner facilities is the goal of WP5. The main tasks are to survey the human and training resources and the market for accelerator scientists, as well as to establish a plan of action for promoting accelerator science. This work package is led by Phil Burrows of the John Adams Institute in the UK.
WP6 – SLS Vertical Emittance Tuning (SVET) – is the first of the technical work packages. Its purpose is to convert the Swiss Light Source (SLS) into an R&D infrastructure for reaching and measuring ultrasmall emittances, as will be required for damping rings at a future electron–positron linear collider. This will be done mainly by improving the monitors that are used to measure beam characteristics (position, profile, emittance), and by minimizing the magnetic field errors, misalignments and betatron coupling. This work package is led by Yannis Papaphilippou of CERN.
The principal objective of WP7 – Ionization Cooling Test Facility (ICTF) – is to deliver detailed design reports of the RF power infrastructure upgrades that the ICTF at the UK’s Rutherford Appleton Laboratory requires for it to become the world’s laboratory for R&D in ionization cooling. The design reports will include several upgrades necessary to make the first demonstration of ionization cooling. Ken Long of Imperial College, London, leads this work package.
The goal of WP8 – High Gradient Acceleration (HGA) – is to establish a new R&D infrastructure by upgrading the energy of SPARC, the advanced photo-injector test-facility linac at Frascati. The upgrade will use C-band terawatt high-gradient accelerating structures to reach 250 MeV at the end of the structure. It will be crucial for the next generation of free-electron laser projects, as well as for the SuperB collider project. The work package is led by Marica Biagini of the Frascati National Laboratories.
WP9 – Test Infrastructure for High Energy Power Accelerator Components (TIHPAC) – is centred on the design of two test benches aimed at the future European isotope-separation on-line facility, EURISOL. These will be an irradiation test facility for developing high-power targets and a cryostat for testing various kinds of fully equipped low-beta superconducting cavities. These infrastructures would also be essential for other projects such as the European Spallation Source and accelerator-driven systems such as MYRRHA. The work package is led by Sébastien Bousson of CNRS/IN2P3/Orsay.
• For more information about the TIARA project, see the website at www.eu-tiara.eu.
By Lincoln Wolfenstein and João P Silva Taylor & Francis; CRC Press 2011
Paperback: £30 $49.95
E-book: $49.95
Writing a book is no easy task. It surely requires a considerable investment of time and effort (it is difficult enough to write short book reviews). This is especially true of books about complex scientific topics, written by people who are certainly not professional writers. I doubt that the authors of the books reviewed in the CERN Courier have taken courses on how to write bestsellers. Given such hard work, authors must have good reasons to embark on the daunting challenge of writing a book.
When I started reading Exploring Fundamental Particles, I immediately wondered what could have been the reasons that triggered Lincoln Wolfenstein and João Silva to write such a book. After all, there are already many “textbooks” about particle physics, both in generic terms and in specific topics. For instance, the puzzling topic of CP violation is described in much detail in the book CP Violation (OUP 1999), by Gustavo Branco, Luís Lavoura and João Silva (the same João Silva, despite the fact that João and Silva are probably the two most common Portuguese names). There are also many books about particle physics that address the “general public”, such as the fascinating Zeptospace Odyssey (OUP 2009), by Gian Giudice, which is a nice option for summer reading, despite the somewhat weird title (the start-of-section quotations are particularly enjoyable).
Exploring Fundamental Particles follows an intermediate path. It addresses a broad spectrum of physics topics all of the way from Newton (!) and basic quantum mechanics to the searches for the Higgs boson at the LHC – building the Standard Model along the way. And yet, despite its wide scope, the book focuses with particularly high resolution on a few specific issues, such as CP violation and neutrino physics, which are not exactly the easiest things to explain to a wide audience. The authors must have faced difficult moments during the writing and editing phases, trying hard to keep the text readable for non-experts, while giving the book a “professional touch”.
This somewhat schizophrenic style can be illustrated by the fact that, while the book is submerged in Feynman diagrams, some of them quite hard to digest (“Penguins” and other beasts), it has no equations at all (not even the ubiquitous E = mc²) – maybe for fear of losing the reader – until we reach the end of the book (the fifth appendix, after more than 250 pages, where we finally do see E = mc²). The reading is not easy (definitely not a “summertime book”), so, for an audience of university students and young researchers, adding a few equations would have improved the clarity of the exposition.
I also found it disturbing to see the intriguing discussions of puzzling subjects interrupted by trivial explanations on how to pronounce “Delta rho”, “psi prime” etc. These parenthetical moments distract the readers who are trying to remain concentrated on the important narrative and are useless to the other readers. (If you do not know how to pronounce a few common Greek letters, you are not likely to survive a guided tour through the CKM matrix.)
I hope the authors (and editor) will soon revise the book and publish a second edition. In the meantime, I will surely read again a few sections of this edition; for certain things, it is really quite a useful book.
By Ken Takayama and Richard Briggs (eds.) Springer
Hardback: €126.55 £108 $169
Of the nearly 30,000 particle accelerators now operating worldwide, few types are as unfamiliar to most physicists and engineers as induction accelerators. This class of machine is likewise poorly represented in technical monographs. Induction Accelerators, a volume of 12 essays by well-known experts, forms a structured exposition of the basic principles and functions of the major technical systems of induction accelerators. The editors have arranged the essays in the logical progression of chapters in a textbook. Nonetheless, each has been written to be useful as a stand-alone text.
Apart from the two chapters about induction synchrotrons, the book is very much the product of the “Livermore/Berkeley school” of technology of induction linear accelerators (linacs) started by Nicholas Christofilos and led for many years by Richard Briggs as the Beam Research Program at the Lawrence Livermore National Laboratory. The chapters by Briggs and his colleagues John Barnard, Louis Reginato and Glen Westenskow are masterful expositions marked by the clarity of analysis and physics motivation that have been the hallmarks of the Livermore/Berkeley school. A prime example is the presentation of the principles of induction accelerators that, despite its brevity, forms an indispensable introduction by the master in the field to a discussion (together with Reginato) of the many trade-offs in designing induction cells.
One application of induction technology made important by affordable, solid-state power electronics and high-quality, amorphous magnetic materials is the induction-based modulator. This application grew from early investigations of magnetic switching by Daniel Birx and his collaborators; it is well described by Edward G Cook and Eiki Hotta in the context of a more general discussion of high-power switches and power-compression techniques.
Invented as low-impedance, multistage accelerators of high-current electron beams, induction machines have always had the central challenge of controlling beam instabilities and other maladies that can spoil the quality of the beam. Such issues have been the focus of the major scientific contribution of George Caporaso and Yu-Jiuan Chen, who – in the most mathematical chapter of the book – discuss beam dynamics, the control of beam break-up instability and the suppression of emittance growth resulting from the combination of misalignment and chromatic effects in the beam transport.
In ion induction linacs proposed for use as inertial-fusion energy drivers, an additional class of instabilities is possible, namely, unstable longitudinal space–charge waves. These instabilities are analysed in a chapter by Barnard and Kazuhiko Horioka titled “Ion Induction Linacs”. It is followed by a description of the applications of ion linacs, especially to heavy-ion-driven inertial fusion and high-energy density research. These chapters contain the most extensive bibliographies of the book.
The use of induction devices in a synchrotron configuration was studied at Livermore and at Pulsed Sciences Inc in the late 1980s. However, it was not until the proof-of-concept experiment by Takayama and his colleagues at KEK, who separated the functions of acceleration and longitudinal focusing, that applications of induction accelerators to producing long bunches (super-bunches) in relativistic-ion accelerators became a possibility for an eventual very large hadron collider. These devices and their potential applications are described in the final chapters of the book.
Both physicists and engineers will find the papers in Induction Accelerators well written, with ample – though not exhaustive – bibliographies. While the volume is not a textbook, it could profitably be used as associated reading in a course about accelerator science and technology. Induction Accelerators fills a void in the formal literature on accelerators. It is a tribute to Nicholas Christofilos and Daniel Birx, the two brilliant technical physicists to whom this volume is dedicated. I recommend it highly.
People around the world were deeply saddened to learn of the devastation caused by the major earthquake and the related tsunami on Friday 11 March in northern Japan. The 8.9-magnitude earthquake had its epicentre some 130 km off the eastern coast, and gave rise to unprecedented damage that extended far and wide.
The KEK high-energy physics laboratory and the Japan Proton Accelerator Research Complex (J-PARC) are the two particle-accelerator facilities closest to the epicentre. In both cases there were fortunately no reported injuries, nor was there any resulting radiation hazard. J-PARC lies on the eastern coast at Tokai and was the more heavily affected of the two facilities. The site was designed to withstand a tsunami of up to 10 m and on this occasion suffered little effect from the wave. Although surrounding roads and some buildings were severely damaged, the accelerators at the facility appear to be in relatively good shape. KEK, at Tsukuba some 50 km north-east of Tokyo, suffered significant disruption to services and some damage to buildings and facilities.
The thoughts of the particle-physics community are with friends and colleagues at partner institutes in Japan, as well as those at laboratories and institutes elsewhere who have family and friends in Japan.
After three degrees and two years of research at the forefront of the electrical technology of the day, Ernest Rutherford left New Zealand in 1895 on an Exhibition of 1851 Science Scholarship, which he could have taken anywhere in the world. He chose the Cavendish Laboratory at the University of Cambridge because its director, J J Thomson, had written one of the books about advanced electricity that Rutherford had used as a guide in his research. This put the right man in the right place at the right time.
Initially, Rutherford continued his work on the high-frequency magnetization of iron, developing his detector of fast-current pulses to measure the dielectric properties of materials at high frequencies and briefly holding the world record for the distance over which electric “wireless” waves were detected. “JJ” appreciated Rutherford’s experimental and analytical skills, so he invited Rutherford to participate in his own research into the nature of electrical conduction in gases at low pressures.
Within five months of Rutherford’s arrival at the Cavendish Laboratory, the age of new physics had commenced. Wilhelm Röntgen’s discovery of X-rays was swiftly followed by Henri Becquerel’s announcement of radioactivity in January 1896. Rutherford capitalized on the new forms of ionizing radiation in his attempts to learn what it was that was conducting electricity in an ionized gas. He soon switched to trying to understand radioactivity itself, determining through his research that two types of rays were emitted, which he called “alpha” and “beta” rays.
Thomson, meanwhile, continued mainly to study the ionization of gases, and less than two years after Rutherford’s arrival he had carried out a definitive experiment demonstrating that cathode rays were objects a thousand times less massive than the lightest atom. The electronic age and the age of subatomic particles had begun, though mostly unheralded. Rutherford was a close observer of all of this and became an immediate convert to – and champion of – subatomic objects. Beta rays were quickly shown to be high-energy cathode rays, i.e. high-speed electrons.
For Rutherford, however, there was no future at Cambridge. After only three years there he – as a non-Cambridge graduate – was not yet eligible to apply for a six-year fellowship, so in 1898 he took the Macdonald Chair of Physics at McGill University in Canada. (Cambridge changed its rules the following year.) From then on, the world centre of radioactivity and particle research was wherever Rutherford was based.
At McGill, he showed that radioactivity was the spontaneous transmutation of certain atoms. For this he received the 1908 Nobel Prize in Chemistry. He also demonstrated that alpha particles were most likely helium atoms minus two electrons, and he dated the age of the Earth using radioactive techniques. In studying the nature of alpha particles and by being the first to deflect them in magnetic and electric fields in beautifully conceived experiments, Rutherford observed that a narrow beam of alphas in a vacuum became fuzzy either when air was introduced into the beam or when it was passed through a thin window of mica.
Return to England
With blossoming international scientific fame, Rutherford was regularly offered posts in America and elsewhere. He accepted none because McGill had superb laboratories and support for research, but he was wise enough to let the McGill authorities know of each approach; they increased his salary each time. However, Rutherford also wished to be nearer the centre of science, which was England, where he would have access to excellent research students and closer contact with notable scientists. His desire was noted. Arthur Schuster, being from a wealthy family, said he would step down from his chair at Manchester University provided that it was offered to Rutherford, and in 1907 Rutherford moved to Manchester.
At Manchester University, Rutherford first needed a method of recording individual alpha particles. He was an expert in ionized gases and had been told by John Townsend, an old friend from Cambridge, that one alpha particle ionized tens of thousands of atoms in a gas. So, with the assistant he had inherited, Hans Geiger, he developed the Rutherford–Geiger tube.
Many labs at the time were studying the scattering of beta particles from atoms. People at the Cavendish Laboratory claimed that the large scattering angles were the result of many consecutive, small-angle scatterings inside Thomson’s “plum pudding” model of the atom – the electrons being the fruit scattered throughout the solid sphere of positive electrification. Rutherford did not believe that the scattering was multiple, so once again he had to quantify science to undo the mistaken interpretations of others.
Geiger was given the task of measuring the relative numbers of alpha particles scattered as a function of angle over the few degrees that Rutherford had measured photographically at McGill. However, photography could not register single particles. Nor was the Rutherford–Geiger detector suitable for “quickly” measuring particles scattered over small angles; it was not sensitive to the direction of entry of the alpha particle, and all that they observed was the “kick” of a spot of light from a galvanometer. Yet one of the reasons for developing the Rutherford–Geiger tube had been to determine whether or not the spinthariscope invented by William Crookes did, indeed, register one flash of light for every alpha particle that struck a fluorescing screen.
So, Geiger allowed monochromatic alpha particles in a vacuum tube to pass through a metal foil and onto a fluorescing plate that formed the end of the tube. A low-power microscope, looking at about a square millimetre of the plate, allowed the alphas to be counted. It was tiring work, waiting half an hour for the eye to dark adapt, then staring at the screen unblinking for a minute before resting the eye. It is said that Rutherford often cursed and left the counting to the younger Geiger.
Another of Geiger’s duties was to train students in radioactivity techniques and it was Rutherford’s policy to involve undergraduates in simple research. So, when Geiger reported to Rutherford that a young Mancunian undergraduate was ready to undertake an investigation, Rutherford set Ernest Marsden the task of seeing if he could observe alpha particles reflected from metal surfaces. This seemed unlikely, but, on the other hand, beta rays did reflect.
Marsden used the same counting system as Geiger, but had the alpha source on the same side of the metal as the fluorescing screen, with a lead shield to prevent alphas from going directly to the screen (figure 1). When he reported that he did see about 1 in 10,000 alphas scattered at large angles, Rutherford was astonished. As he later famously recalled: “It was as if a 15-inch naval shell had been fired at a piece of tissue paper and it bounced back.”
Geiger and Marsden published their measurements in the May 1909 issue of the Proceedings of the Royal Society, but the study lay fallow for more than a year while Geiger continued obtaining more accurate results for his small-angle scattering from different materials and various thicknesses of foils. It is said that one day Rutherford went into Geiger’s room to announce that he knew what the atom looked like. In January 1911 Rutherford was able to write to Arthur Eve in Canada: “Among other things, I have been interesting myself in devising a new atom to explain some of the scattering results. It looks promising and we are now comparing the theory with experiments.”
The nuclear atom
On 7 March 1911 Rutherford spoke at the Manchester Literary and Philosophical Society. Two other speakers followed him: one spoke on “Can the parts of a heavy body be supported by elastic reactions only?”, the other showed a cast of the “Gibraltar Skull”. A reporter from The Manchester Guardian was present and in the edition of 9 March (p3) succinctly paraphrased Rutherford: “It involved a penetration of the atomic structure, and might be expected to throw some light thereon.” Rutherford had asked Geiger to test experimentally his theory that the alpha scattering through large angles varied as cosec⁴(φ/2). He concluded that the central charge for gold was about 100 units, that for different materials the number was proportional to NA² (where N was the number of atoms per unit volume and A the atomic weight), and that large-angle scattering (hyperbolic paths) was independent of whether the central charge is positive or negative. The reporter concluded: “…we were on the threshold of an enquiry which might lead to a more definite knowledge of atomic structure.”
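In modern notation, the angular law that Geiger was asked to verify is the Rutherford scattering cross-section, written here (in Gaussian units) as a standard textbook restatement rather than in the form of the 1911 paper:

```latex
\frac{\mathrm{d}\sigma}{\mathrm{d}\Omega}
  = \left( \frac{z Z e^{2}}{4E} \right)^{2} \frac{1}{\sin^{4}(\phi/2)}
  \;\propto\; \operatorname{cosec}^{4}(\phi/2),
```

where z and Z are the charge numbers of the alpha particle and the target nucleus, and E is the alpha particle’s kinetic energy.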
Rutherford’s talk was published in the Proceedings of the Manchester Literary and Philosophical Society (Rutherford 1911a) and more fully in the Philosophical Magazine for May (Rutherford 1911b). In the latter, he acknowledged Hantaro Nagaoka’s mathematical consideration of a “Saturnian” disc model of the atom (Nagaoka 1904), stating that essentially it made no difference to the scattering if the atom was a disc rather than a sphere.
The nuclear atom created no great stir among scientists and the public at the time. Three nights after his announcement, Rutherford addressed the Society of Industrial Chemists on “Radium”. The nuclear atom was not mentioned by Sir William Ramsay in his opening address to that year’s meeting of the British Association, although his reported claims of various discoveries caused Schuster – who had stepped down to attract Rutherford to Manchester – to write a letter to The Manchester Guardian stating which of those were discovered by Rutherford.
Rutherford’s busy life continued as normal: accepting a Corresponding Membership of the Munich Academy of Sciences; giving talks on all manner of subjects but the nuclear atom; refuting several claims of cold fusion that came from Ramsay’s laboratory; motoring in the car recently purchased with the money that had accompanied his Nobel prize; and being involved with many organizations, including being a vice-president of both the Manchester Society for Women’s Suffrage and the Manchester Branch of the Men’s League for Women’s Suffrage. (At Canterbury College in New Zealand, his landlady and future mother-in-law was one of the stalwarts who in 1893 had obtained the vote for women in New Zealand.)
Rutherford’s Nobel Prize in Chemistry of 1908 was too recent for physicists to nominate him again for a prize. It was to be 1922 before he was next nominated, unsuccessfully. There have been 27 Nobel prizes awarded for the discovery of, or theories linking, subatomic particles, but there was never one for the nuclear atom. However, there was a related one. At the end of 1911 Rutherford was the guest of honour at the Cavendish Annual Dinner, at which he was, not surprisingly, in fine form. The chairman, in introducing him, stated that Rutherford had another distinction: of all of the young physicists who had worked at the Cavendish, none could match him in swearing at apparatus.
Rutherford’s jovial laugh boomed round the room. A young Dane, visiting the Cavendish for a year to continue his work on electrons in metals, took an immense liking to the hearty New Zealander and resolved to move to Manchester to work with him. And so it was that Niels Bohr received the 1922 Nobel Prize in Physics for “his services in the investigation of the structure of atoms and of the radiation emanating from them”. He had placed the electrons in stable orbits around Rutherford’s nuclear atom.
When I first sat down with How to Teach Quantum Physics to Your Dog I was expecting a little light reading, something to pick up on Sunday after lunch. After all, if a dog could understand it, surely someone who has a PhD in physics wouldn’t find it too challenging? I was wrong.
Initially Chad Orzel’s analogies with squirrels and dog wavefunctions are both amusing and enlightening, but as the book moves on they don’t make his subject any clearer. By the time he has reached decoherence, it is hard to see how anyone without a good grounding in physics would cope. But it is worth persevering.
Orzel’s style – especially his references to dog treats, bunnies and squirrels – gets irritating at times, but despite this I found myself enjoying the book.
To quote Orzel, “quantum mechanics is often subtle and difficult to understand”. His book reminds us why that is, and overall he succeeds in making it a little clearer.
The LHC is an amazing engineering achievement supported by a long programme of developments. CERN has been encouraging the development of technologies required to complete the project since the late 1960s (for example, the GESSS collaboration between the Saclay, Karlsruhe and Rutherford Laboratories). The quality of this work has been recognized internationally and it has contributed to spin-off activities, especially in the development of superconductors and in magnetic-field computation. With the completion of the LHC, and recognizing CERN’s desire to maintain the competences required to design accelerators, it is the right time to publish a book on the computer methods developed to design the LHC magnets.
In this book, Stephan Russenschuck provides an extremely useful and comprehensive description of magnetic-field computation for particle-accelerator magnets. It gives practical information and describes simple methods of analysis; in addition, it includes the abstract mathematics necessary to understand the finite element methods that were developed specifically for the design of the magnets for the LHC’s main ring. The final chapter examines optimization methods, particularly those implemented in the ROXIE software.
The successful design of the LHC magnets required highly accurate field-computation methods that were capable of modelling effects such as conductor and cable magnetization, which are uniquely important to accelerators. Even the LHC’s superconducting magnets can quench, when a small resistive volume diffuses rapidly through the coil structure, driven forward by the heat that it generates. The book’s chapters describe methods for modelling these effects and demonstrate the accuracy of the results by comparison with measurements. The appendices include practical information about the cryogenic material properties required for quench analysis.
This is a well presented book that makes excellent use of computer graphics to show results and explain phenomena. The graphics showing interstrand coupling currents in conductors and cables are particularly clear and help to make this chapter easy to understand.
Russenschuck has written a valuable addition to the library of those involved in the design of accelerator magnets.
“Of all the things that make the universe, the commonest and weirdest are neutrinos.” Thus starts Frank Close’s latest book, Neutrino, a fascinating look into one of the most compelling and surprising scientific advances of the past century.
With its very basic title, a reader might imagine that this book, written by a leading particle theorist, would be an accurate but dry discourse on the eponymous particle. They would be surprised to find a moving book centred on the lives and work of three individuals: Ray Davis, John Bahcall and Bruno Pontecorvo. Neutrino manages to capture not only their impressive scientific contributions but something of their personalities and the times, through an excellent choice of quotes and stories from friends and colleagues. Consequently it is a book that is brief, scientifically accurate and full of drama.
The neutrino’s origins in the early 20th-century studies of radiation, as well as stellar astrophysics and neutrino oscillations, are all carefully and clearly explained. This book fills in many of the gaps left by more cursory treatments, in particular the road from Wolfgang Pauli’s proposal of the neutrino to the development of the theory of beta decay by Enrico Fermi. But the pedagogic scope is wisely limited and the author does not shy away from leaving the scientific explanations to a footnote if they are incidental to the main storyline.
Neutrino also manages to capture the full spectrum of ideas, events and relationships that play a part in particle physics. The path between brilliant theoretical insight and triumphant experimental verification can be long and precarious. The prosaic (and often deciding) factors – the casual encounter with a colleague that sparks a new idea, incorrect theoretical assumptions identified and corrected, incremental advances in technology, site selection, the vagaries of funding decisions, politics, the role of industrial partners, and just plain luck – are accurately and entertainingly discussed.
That this book succeeds on a number of levels is a credit to the author’s deep knowledge of the physics and his meticulous research, as well as to a concise and imaginative writing style. The only notable omission is the LSND and MiniBooNE experiments – hardly surprising, since the experimental situation there is far from resolved. If the signatures of antineutrino appearance from these experiments stand up to further investigation, neutrinos will have proved to be even weirder than we thought and will provide the author with rich material for a second edition.