
Hildred Blewett: a life with particle accelerators

One of the most generous schemes to support women returning to physics – and possibly the most valuable to result from a personal bequest – is the M Hildred Blewett Fellowship of the American Physical Society (APS). When Hildred died in 2004, she left nearly all that she had to the APS to set up the scholarship, which funds a couple of women a year in the US or Canada to the tune of up to $45,000. So far, nine recipients have benefited from the bequest, including two in nuclear and particle physics – not far removed from Hildred’s own field of work in accelerator physics. Indeed, she played an important role in the design of accelerators on both sides of the Atlantic, as well as in the organization of their exploitation.

Hildred Hunt was born in Ontario on 28 May 1911. Her father, an engineer who became a minister, supported her interests in mathematics and physics, although the family did not have much money and Hildred had to take time out from college – a factor that appears to have influenced the future bequest. Nevertheless, by 1935 she had graduated from the University of Toronto with a BA in physics and maths. Stints of research followed, first at the University of Rochester, New York, and then at Cambridge’s Cavendish Laboratory – still under Ernest Rutherford – together with her husband John Blewett, who had also studied in Toronto. After returning to the US, in 1938 Hildred joined Cornell University as a graduate student, with Hans Bethe as her thesis supervisor. Writing in APS News more than 60 years later, physicist Rosalind Mendell recalled Hildred saying that, as John was working on magnetrons at General Electric (GE), “she had gone back for her doctorate because she loved physics and could no longer endure life as a ‘useless’ company wife” (Mendell 2005). Rosalind had arrived at Cornell in 1940, when she was just short of 20 years old, joining 50 men plus Hildred – “the cheerful, confident and breezy Canadian blonde”. Hildred took the younger woman under her wing, a characteristic that was seen later with other junior colleagues and was also reflected in her final bequest.

The entry of the US into the Second World War changed everything and by the summer of 1942 Bethe was working with Robert Oppenheimer in California on some of the first designs for an atomic bomb. In November Hildred joined GE’s engineering department; her thesis work was left behind, never to be completed. While at GE she developed a method of controlling smoke pollution from factory chimneys. After the war, however, a bright future opened up for scientific research in the US and in 1947 both Blewetts were hired by the newly established Brookhaven National Laboratory to work on particle accelerators. Hildred’s forte was in theoretical aspects, while John had already worked with betatrons at GE.

The Blewetts were part of the team that worked on the design and construction of a new accelerator that would reach an energy of 3 GeV, an order of magnitude higher than any previous machine and in the range of cosmic-ray energies – hence the name “Cosmotron”. The machine came into operation in 1952 and Hildred edited a special issue of Review of Scientific Instruments, which contained articles on many key aspects, some of which she also co-authored (Blewett 1953a).

Birth of the PS

That same year saw the emergence of the alternating gradient or “strong-focusing” technique, which opened the way to much higher energies and gave birth to the Alternating Gradient Synchrotron (AGS) at Brookhaven. The idea was also conveyed to a group of physicists from several European countries who visited Brookhaven in the summer of 1952 to learn about the Cosmotron and how they might build a similar but somewhat larger machine for the nascent organization that would become CERN. Following the visit, and a busy period of study, the decision was indeed taken to build a strong-focusing machine of 25–30 GeV, the future Proton Synchrotron (PS). The group invited the two Blewetts and Ernest Courant – one of the inventors of the principle of strong focusing – to Europe to help plan the new laboratory.

By the end of March 1953, the provisional Council had agreed to build the strong-focusing machine, but as CERN did not yet officially exist, the work was split among groups in several European institutions. On six months’ leave from Brookhaven, the Blewetts went to Odd Dahl’s institute in Bergen, where they contributed to the initial design of the PS. The arrangement turned out to be more complex than initially thought, and they pushed to have everything moved to Geneva, once the site had been selected and ratified by the cantonal referendum in June 1953. The advance guard of the PS group, including the Blewetts, arrived there at the beginning of October. At the end of the month Geneva hosted a conference on the theory and design of an alternating-gradient proton synchrotron; Hildred edited the proceedings (Blewett 1953b).

Both Blewetts were full members of the PS group, engaged in all aspects, from theoretical research to cost estimates, and their collaboration continued, even after they returned to the US. By January 1954, the decision had been taken to build the 33 GeV AGS at Brookhaven, so the collaboration between the US and Europe was important to both. Hildred commented later that there were even times when “in many ways Brookhaven got more from the co-operation than CERN did” (Krige 1987). She returned to Geneva to attend accelerator conferences in 1956 and 1958, and visited CERN for three months in 1959, when the PS was near completion. Well-known photographs record her presence in the PS control room on the magical evening of 24 November when the “transition” took place; her written recollections still bring the day vividly to life (CERN Courier November 2009 p19).

Back at Brookhaven, Hildred made major contributions to the design of the AGS; in particular, she “presided over the design of the magnets” (Blewett 1980). Courant also recalls that she devised an elaborate programme to make detailed field measurements of each of the 240 magnets, which enabled the team to assign the positions of the magnets in the ring so as to minimize the effects of deviations from the design fields.
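
Her sorting strategy can be illustrated with a toy optimization: given a list of measured field errors, look for an ordering around the ring that suppresses the low-order azimuthal harmonics of the error pattern, since harmonics near the betatron tune drive the largest distortions. The sketch below is a schematic illustration of the general idea only – the numbers and the crude random-swap search are assumptions for the example, not the actual AGS procedure:

```python
import cmath
import math
import random

random.seed(7)
N_MAG, N_HARM = 240, 9          # magnets in the ring; low harmonics to suppress
errors = [random.gauss(0.0, 1.0) for _ in range(N_MAG)]  # measured field errors

# Fourier phase factors for azimuthal harmonics 1..N_HARM around the ring
PHASES = [[cmath.exp(-2j * math.pi * k * i / N_MAG) for i in range(N_MAG)]
          for k in range(1, N_HARM + 1)]

def low_harmonic_power(order):
    """Sum of |Fourier amplitude|^2 of the error pattern over the low harmonics."""
    return sum(abs(sum(e * p for e, p in zip(order, row))) ** 2 for row in PHASES)

order, best = errors[:], low_harmonic_power(errors)
for _ in range(3000):           # crude random-swap search for a better ordering
    i, j = random.randrange(N_MAG), random.randrange(N_MAG)
    order[i], order[j] = order[j], order[i]
    cost = low_harmonic_power(order)
    if cost < best:
        best = cost
    else:
        order[i], order[j] = order[j], order[i]   # revert a swap that did not help

print(f"low-harmonic power: as delivered {low_harmonic_power(errors):.0f}, "
      f"after sorting {best:.0f}")
```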

The AGS began operation in 1960, a few months after the PS at CERN. Alan Krisch, then a graduate student at Cornell, worked on a large-angle proton–proton scattering experiment, which was one of the first to be approved. Hildred “sort of adopted” him and he remembers her as a “formidable woman from whom he learnt much”. She was the one, for example, who suggested that the Cornell group acquire a trailer to provide a cleaner environment where they could collect their data near their AGS experiment. “It was a great idea,” he says, “and soon everyone had trailers.”

The Blewetts split up around that time, as professional divergences increased. These included, Krisch recalls, a disagreement about whether the AGS should add a high-intensity linac or colliding beams. After the divorce, Lee Teng, a colleague and friend, invited Hildred to the Argonne National Laboratory, where he had become director of the Particle Accelerator Division. “I remembered that at Brookhaven she got along very well with and was respected by all of the AGS users,” he says, so he suggested that Hildred become the liaison with the users of Argonne’s Zero Gradient Synchrotron (ZGS). She took on the work with characteristic dedication, bringing all of her experience from Brookhaven to bear on the needs of the users. One of these was Krisch, who at 25 was a newly appointed assistant professor at the University of Michigan and spokesperson for one of the first experiments on the ZGS. Under Hildred, the experimental areas worked well – “probably the best of any place I’ve worked at”, he says. During her time at Argonne, papers by Hildred show that she continued to work on magnet design, as well as on costings for experimental facilities.

By 1967, on leave from Argonne, she was already involved with the 300 GeV project at CERN, for example as co-ordinator of utilization studies across the member states to look into the exploitation of the machine that would become the Super Proton Synchrotron (ECFA 1967). She joined the CERN staff in 1969 and collaborated on the Intersecting Storage Rings (ISR), which started up in 1971. That same year she was heavily involved in the organization of the 8th International Conference on High-energy Accelerators in Geneva, almost two decades after the conference (also in Geneva) that had foreshadowed the PS. She ran the finances of the ISR Division, keeping a careful eye on how resources were spent, as well as being secretary of the ISR Committee (ISRC), serving the new community of users at CERN. Again, the users included Krisch, this time as the first US spokesperson on a CERN experiment, together with a trailer flown over from Argonne; and again Hildred’s expertise proved invaluable, advising on how to run the cabling and other practicalities. By the time she retired she had been secretary for 60 meetings of the ISRC and left behind, in the words of her successor, a perfect organization.

She retired in August 1976, but remained at CERN until July 1977 as a scientific associate. During this final year, reports were published on the concept for a 100 GeV electron–positron machine and on studies of 400 GeV superconducting proton storage rings – the future Large Electron–Positron collider and Large Hadron Collider, respectively – both of which involved Hildred (Bennet et al. 1977 and Blechschmidt et al. 1977). She also organized the 1st International School of Particle Accelerators “Ettore Majorana” in Erice, which laid the foundations for the CERN Accelerator School.

The recollections of some of the people who knew Hildred not only paint a picture of a strong woman who cared a great deal for others, but also give some insight into her interests beyond physics. Mendell remembers that they walked together on the Physics Department hikes at Cornell and Courant recalls that she was “an avid folk dancer”, organizing weekly classes in which he and his wife participated enthusiastically. Krisch recalls that during his third encounter with Hildred at CERN, she invited him to Geneva’s English Theatre Club to see her star as the Bulgarian heroine in George Bernard Shaw’s Arms and the Man.

After a few years in Oxford, which suited her interests in music, amateur dramatics and fine arts, Hildred returned to Canada to be closer to her brother and his family. She died in Vancouver in June 2004, at the age of 93. Her career was characterized by her concern that others too should be able to make the most of their time in the field she clearly enjoyed – from the young people she mentored to the user communities she served in several major laboratories and to the beneficiaries of her generous bequest.

CERN and the EPS: a joint endeavour


The European Physical Society (EPS) was founded at CERN in 1968. Today it represents more than 100,000 physicists through its 41 national member societies and it provides a scientific forum for more than 3000 individual members from all fields of physics.

Around 50 universities, research institutes, laboratories and enterprises that are active in physics research are also present as EPS associate members. CERN was the first to join and has supported the EPS since the very beginning. Many leading personalities from CERN have been EPS presidents: Gilberto Bernardini, the founder and first president of the EPS, who at the time was CERN’s research director; Antonino Zichichi; Maurice Jacob; and Herwig Schopper.

The EPS is a non-profit association whose purpose is to promote physics in Europe and across the world. In 1968, when European integration was still rather vague, the establishment of the EPS was, to quote Bernardini’s inaugural address in the CERN Council Chamber, “a demonstration of the determination of scientists to make their positive contribution to the strength of European cultural unity.”

Today the EPS continues to play an important role in fostering the scientific excellence of European physicists through high-profile activities, in enhancing communication among physicists in Europe and across the world, and in bringing major issues in physics – and science in general – to the attention of the public and policymakers.

So how is the EPS organized? EPS members decide the priorities of the society, allocate resources for its activities and hold positions of responsibility. The scientific activities of the EPS are organized into divisions and groups, which are governed by boards. Such activities include renowned topical conferences, seminars and workshops.

The divisions and groups also develop outreach activities, for students and for the general public, and support measures to help physicists from less-favoured regions of Europe and from scientifically emerging countries worldwide to participate in EPS initiatives.

A number of prestigious prizes are awarded by the EPS divisions and groups in recognition of outstanding achievements in all fields of physics. These often anticipate the Nobel awards.

The EPS has 11 divisions covering specific fields of physics research, among them: Atomic, Molecular and Optical Physics; Environmental Physics; High Energy and Particle Physics; Nuclear Physics; Physics in Life Sciences; Plasma Physics; Quantum Electronics and Optics; Solar Physics; and Statistical and Nonlinear Physics.

In addition, there are seven groups that address questions of common interest to all physicists, such as Accelerators, Energy and Technology, as well as the History of Physics and Physics for Development. Finally, a number of committees deal with societal questions: European Integration, Gender Equality in Physics, Mobility, Physics and Society, and Young Minds.

Like all learned societies, the EPS publishes a letters journal (Europhysics Letters), a scientific bulletin (Europhysics News) and, more recently, an electronic newsletter (e-EPS). These are produced in partnership with a number of member societies and their respective publishing houses.

As a consequence of its expansion and evolution over the past 40 years, the EPS has undergone several reviews to assess and define its twofold role of learned society and federation of national societies, so that it can act as an authoritative scientific opinion-maker.

In 2010 the society sketched out its new strategic plan and identified new guidelines. The EPS needs to gain more visibility, to strengthen and highlight the activities of its divisions and groups and to generate a greater spirit of belonging and cohesion among its members. It also needs to bring added value and provide a louder common voice to its member societies and associate member institutions. It should increase its potential for co-operation and solidarity with less-favoured countries.

The preservation of the quality of European publications, in particular EPS journals and those related to or recognized by the EPS, and their integration into the context of global publishing is another main objective. Finally, establishing and strengthening links with other scientific societies worldwide – physical, astronomical and chemical – is among the new EPS priorities.

With this in mind, further intensifying the good relations and privileged interactions between CERN and the EPS would be highly desirable. Both institutions are on the same wavelength: they share the same vision and support excellence in joint fundamental and applied research. They are concerned with technology transfer and industry’s involvement in physics research and they care deeply about education, outreach, knowledge dissemination and public awareness.

As CERN’s director-general Rolf Heuer repeatedly emphasizes, “We must bring science closer to society.” A tighter collaboration between the unique European research laboratory that is CERN and the EPS could serve this common goal; moreover CERN could help considerably to boost the future of the EPS.

The Quantum Story

By Jim Baggott

Oxford University Press

Hardback: £16.99 $29.99


The Quantum Story provides a detailed “biography” of quantum physics, now 111 years old, from its birth with Planck’s quantum of action all the way up to superstrings, loop quantum gravity and the start of the LHC – a machine that is expected to put physics back on the right track, with experimental measurements forcing some “figments of the theoretical mind” to confront reality.

The first chapters are simply delicious and ideally suited for summer reading on a sunny late afternoon with a fresh drink close by. I was pleased to revisit most of the stories and characters I met as a teenager when reading books by or about Einstein, Bohr, Pauli, Heisenberg, de Broglie, Schrödinger, Dirac and many other universal heroes. Baggott explains the basics and wonders of quantum physics in a surprisingly clear way, despite its intrinsically “unsettling” and “wholly disconcerting” nature. A multitude of advances and a fair share of dead ends are exposed with excitement and suspense, almost as in a detective story, and the pace of the action is such that I was often reminded of Dan Brown’s novels. You begin to wonder if some of the main characters ever slept – during the 1927 Solvay conference, for example, Einstein would attack at each breakfast with a new gedankenexperiment, which Bohr would have countered by dinnertime at Brussels’ Hotel Britannique.

We all know about Einstein’s “year of miracles” when, perhaps inspired by not having a respectable position to lose in the academic world, he revolutionized physics with an incredible succession of amazing papers. It is less well known that he also wrote several “unpublished papers”, some of which influenced new and important ideas, such as Born’s probabilistic view of Schrödinger’s wavefunctions, submitted for publication in June 1926. This “hastily written” paper was followed one month later by a second giving a “more considered” perspective, complemented by a note added to the proofs of the first, mentioning that the probabilities are proportional to the square of the wavefunction.

Somehow, it had not crossed my mind that even in those days many physicists were in a hurry to get their ideas into print. The “publish or perish” motto has long applied. Pauli submitted a paper deriving Balmer’s formula from matrix quantum mechanics just five days before Dirac did the same; maybe Dirac’s delay was caused by his proverbial perfectionism with clear language. Baggott mentions other notes added by authors in the proofs of their papers, as when Heisenberg writes: “Bohr has brought to my attention that I have overlooked essential points in the course of several discussions in this paper [on uncertainties].” Ouch… this must have hurt. It continues: “I owe great thanks to Professor Bohr for sharing with me at an early stage the results of these more recent investigations of his.” The Copenhagen interpretation did not have an easy birth.

The topic of quantum reality strikes back later in the book, in chapters 30 to 35, where the reader needs a higher level of concentration to follow detailed developments regarding hidden variables, Bell’s and Leggett’s inequalities, entanglement and the surprisingly accurate experimental work recently carried out in this area. In chapters 18 to 29, the reader learns the crucial steps in the development of quantum field theories: quantum electrodynamics, quantum chromodynamics, quark asymptotic freedom and infrared confinement, the J/ψ revolution, the discovery of the intermediate vector bosons, and so on. This must be the nicest introduction to the Standard Model that I have read so far.

Given the style (and target audience) of the book, the almost complete absence of mathematics is quite understandable and I should say that the author succeeds remarkably well in explaining many leading-edge physics topics without the help of equations. It is true that “modern theoretical physics is filled with dense, impenetrable, complex mathematical structures”, which often obscure the deep meaning of what is being done. Nevertheless, with the confidence gained after reading the 410 pages of main text plus several end-of-book notes, I dare express the wish to see this book reprinted in a “special illustrated edition” (following the nice examples of Bill Bryson’s A Short History of Nearly Everything and Stephen Hawking’s A Brief History of Time), with more diagrams, pictures and equations.

In summary, this is a truly exceptional book, which I highly recommend. It will be enjoyable reading for many professional physicists as well as for bright high-school students waiting for something to trigger a decision to follow a career in physics.

Present at the Creation: The Story of CERN and the Large Hadron Collider

By Amir D Aczel

Crown

Hardback: £15.73 $25.99


Mathematician and science writer Amir D Aczel is well known for his factually convincing and captivating story of Fermat’s Last Theorem. His recent book on CERN follows a similar recipe for writing a gripping story: impressions from several visits to the laboratory – notably witnessing the LHC restart from the CERN Control Centre on 5 March 2010, and from the CMS Control Centre earlier in the day – as well as interviews with experts and leading physicists, including 13 Nobel laureates.

The story develops in 14 chapters that are illustrated with colour and black-and-white photographs, line drawings and tables. An afterword, notes and a bibliography complete the picture, together with three more “technical” appendices: how an LHC detector works; particles, forces and the Standard Model; and the key physics principles used in the book. Aczel covers the LHC and its potential and risks, the four big detectors, symmetries of nature and Yang–Mills theory, the Standard Model, the Higgs particle, string theory, dark matter, dark energy and the fate of the universe. The result is a splendid effort to inform a wider public of CERN’s achievements, set in an appropriate context.

As would be expected, Aczel is at his best when explaining mathematical theories such as that of Yang and Mills. Given the breadth of the material covered, it is not surprising that there are some lacunae and even errors. What struck me as an accelerator physicist was the erroneous explanation for the PS Booster synchrotron in the accelerator chain that feeds the LHC, which he attributes to the limited increase of particle velocity in a given synchrotron. In fact, the need for the Booster arose from the luminosity requirements of the Intersecting Storage Rings (and successive storage rings) – that is, higher beam intensity and (phase-space) density or, in other words, limited transverse and longitudinal beam emittances. It would have been helpful if Aczel had been able to interview the late Nobel laureate Simon van der Meer.
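
The scaling behind this point can be spelled out: for head-on collisions the luminosity goes as L = N₁N₂f/(4πσxσy) with beam sizes σ = √(εβ), so for a given optics it is the bunch intensities and the emittances ε – not the velocity gain in any single synchrotron – that set the luminosity. A minimal numerical sketch, with all values assumed purely for illustration:

```python
import math

def luminosity(n1, n2, f_rev, eps_x, eps_y, beta_x, beta_y):
    """Head-on collider luminosity L = N1*N2*f / (4*pi*sigma_x*sigma_y),
    with beam sizes sigma = sqrt(emittance * beta) at the crossing point.
    Emittances and beta functions here are in cm, giving L in cm^-2 s^-1."""
    sigma_x = math.sqrt(eps_x * beta_x)
    sigma_y = math.sqrt(eps_y * beta_y)
    return n1 * n2 * f_rev / (4 * math.pi * sigma_x * sigma_y)

# Illustrative numbers only: halving both emittances doubles the luminosity
base = luminosity(1e11, 1e11, 1e6, 1.0e-6, 1.0e-6, 100.0, 100.0)
dense = luminosity(1e11, 1e11, 1e6, 0.5e-6, 0.5e-6, 100.0, 100.0)
print(f"L: {base:.2e} -> {dense:.2e} cm^-2 s^-1")
```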

Altogether, however, it is a book that can be highly recommended to anybody who wants to know “everything” about CERN and who likes a narrative style. I would personally be interested to know how much a complete newcomer understood after a first reading.

Crashes, Crises, and Calamities: How We Can Use Science to Read the Early-Warning Signs

By Len Fisher

Basic Books

Hardback: £13.99 $23.99


When I received the book, I was eager to start reading it, particularly because of its subtitle: How We Can Use Science to Read the Early-Warning Signs. How can we? In fact, after reading the book, the conclusion is that we cannot.

Although this realization caused some disappointment, I can confirm that, even without the million-dollar answer, the book is an interesting read. Len Fisher is an experienced writer, capable of explaining difficult concepts in simplified – but never simplistic – language. The book talks about equilibrium states, physical and mathematical models, negative feedback and so on. When you study such topics in textbooks, you can quickly become bored because everything seems so obvious. However, the mathematics that formalizes all of this is far from obvious; and the million-dollar question has no answer precisely because of this.

Fisher’s writing is engaging because it moves the hard concepts into everyday life, giving them a framework that makes the reader forget about the complex physics and mathematics behind them. Thus, the equilibrium states that remain theoretical in textbooks are here explained in real, contextual situations, so that the reader learns about the evolution of biological species, the main factors that determine the solidity of a newly constructed bridge (but it could be your house) and even the dynamics that lead two people to become a couple.

I found this enjoyable reading and the disappointment of the missing conclusion was partly compensated for by the genuine attention that the author pays to the reader’s entertainment. I recommend the book to a non-scientific readership, which, I believe, will greatly profit from Fisher’s explanation of how and why things work, or, conversely, why they don’t work and can break down.

Italy approves long-term funding for the SuperB project

The Italian government has approved the long-term funding of the SuperB project. Mariastella Gelmini, the Italian minister for university education and research, announced on 19 April that the Interministerial Committee for Economic Programming had approved the National Research Plan 2011–2013. This sets out the future direction of 14 flagship projects, including SuperB.

The SuperB project is based on the principle that smaller particle accelerators, operating at low energy, can still give excellent scientific results complementary to the high-energy frontier. The project centres on an asymmetric electron–positron collider with a peak luminosity of 10³⁶ cm⁻² s⁻¹. Such a high luminosity will allow the indirect exploration of new effects in the physics of heavy quarks and flavours at energy scales up to 10–100 TeV, through studies of large samples of B, D and τ decays at only 10 GeV in the centre of mass. At full power, SuperB should be able to produce 1000 pairs of B mesons and the same number of τ pairs, as well as several thousand D mesons, every second. The design is based on ideas developed in Italy and tested by the accelerator division of the National Laboratories of INFN in Frascati using the DAΦNE machine.
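
Those rates follow directly from multiplying the luminosity by the production cross-sections. A minimal cross-check, in which the cross-section values near the Υ(4S) are round figures assumed for the illustration rather than numbers from the project documents:

```python
# Rough event-rate cross-check: rate = luminosity x cross-section.
# Assumed, indicative cross-sections near the Upsilon(4S) resonance:
#   e+e- -> B Bbar ~ 1.1 nb;  e+e- -> tau+ tau- ~ 0.9 nb
LUMINOSITY = 1.0e36            # cm^-2 s^-1, the SuperB design peak
NB_TO_CM2 = 1.0e-33            # one nanobarn in cm^2

def rate_hz(sigma_nb):
    """Events per second for a cross-section given in nanobarns."""
    return sigma_nb * NB_TO_CM2 * LUMINOSITY

print(f"B Bbar pairs/s: {rate_hz(1.1):.0f}")   # ~1100, matching '1000 pairs'
print(f"tau pairs/s:    {rate_hz(0.9):.0f}")   # ~900
```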

Sponsored by the National Institute of Nuclear Physics (INFN), SuperB is to be built in Italy with international involvement. Many countries have expressed an interest in the project and physicists from Canada, France, Germany, Israel, Norway, Poland, Russia, Spain, the UK and the US are taking part in the design effort.

The Istituto Italiano di Tecnologia is co-operating with INFN on the project, which should help in the development of innovative techniques with an important impact on technology and other research areas. It will be possible to use the accelerator as a high-brilliance light source, for example. The machine will be equipped with several photon channels, allowing the extension of the scientific programme to the physics of matter and biotechnology.

Simon van der Meer: a quiet giant of engineering and physics


Simon van der Meer was born in 1925 in The Hague, the third child of Pieter van der Meer and Jetske Groeneveld. His father was a school teacher and his mother came from a teacher’s family. Good education was highly prized in the van der Meer family and the parents made a big effort to provide it for Simon and his three sisters. Having attended the gymnasium (science section) in The Hague, he passed his final examination in 1943, during the wartime German occupation. He stayed at the gymnasium for another two years because the Dutch universities were closed, attending classes in the humanities section. During this period – inspired by his excellent physics teacher – he became interested in electronics and filled his parents’ house with electronic gadgets.

In 1945 Simon began studying technical physics at Delft University, where he specialized in feedback circuits and measurement techniques. In a way, this foreshadowed his main invention, stochastic cooling, which is a combination of measurement (of the position of the particles) and feedback. The “amateur approach” – to use his own words – that he practised during his stay at Delft University later crystallized into an ability to see complicated things in a simple and clear manner. In 1952 he joined the highly reputed Philips research laboratory in Eindhoven, where he became involved in development work on high-voltage equipment and electronics for electron microscopes. Then, in 1956, he decided to move to the recently founded CERN laboratory.

magnetic horn

As one of his first tasks at CERN, Simon became involved in the design of the pole-face windings and multipole correction lenses for the 26 GeV Proton Synchrotron (PS), which is still in operation today as the heart of CERN’s accelerator complex. Supervised by and in collaboration with John Adams and Colin Ramm, he developed – in parallel with his technical work on power supplies for these big magnets – a growing interest in particle physics. He worked for a year on a separated antiproton beam, an activity that triggered the idea of the magnetic horn – a pulsed focusing device for charged particles, which traverse a thin metal wall in which a pulsed high current flows. Such a device is often referred to as a “current sheet lens”. The original application of the magnetic horn was for neutrino physics. Of the secondary particles emerging from a target hit by a high-energy proton beam, the horn selectively focused the pions. When the pions then decayed into muons and neutrinos, an equally focused and intense neutrino beam was obtained. The magnetic horn found many applications all around the world, for both neutrino physics and the production of antiprotons.
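
The horn’s focusing strength follows from elementary magnetostatics: between the inner and outer conductors a pulsed current I generates an azimuthal field B = μ0I/(2πr), which bends each traversing particle towards (or away from) the axis. A minimal sketch – the numbers are assumptions for illustration, not the parameters of any actual horn:

```python
import math

MU0 = 4.0e-7 * math.pi   # vacuum permeability, T m / A

def horn_kick_mrad(current_a, radius_m, length_m, p_gev):
    """Approximate bend angle (mrad) for a singly charged particle crossing
    length_m of the horn's azimuthal field B = mu0*I/(2*pi*r) at radius_m,
    using the thin-lens rule theta ~ 0.2998 * B[T] * L[m] / p[GeV/c]."""
    b_field = MU0 * current_a / (2.0 * math.pi * radius_m)   # tesla
    return 0.2998 * b_field * length_m / p_gev * 1e3

# Illustrative values only: 150 kA pulse, pion at r = 5 cm, 1 m of field, 5 GeV/c
print(f"kick: {horn_kick_mrad(150e3, 0.05, 1.0, 5.0):.0f} mrad")
```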

In 1965 Simon joined the group led by Francis Farley working on a g-2 experiment for the precision measurement of the magnetic moment of the muon. There, he took part in the design of a small storage ring (the g-2 ring) and participated in all phases of the experiment. As he stated later, this period was an invaluable experience, not only scientifically but also for the vibrant, excitement-filled atmosphere at CERN at the time and the lifestyle of experimental high-energy physics. It was also about this time, in 1966, that Simon met his future wife, Catharina Koopman, during a skiing excursion in the Swiss mountains. In what Simon later described as “one of the best decisions of my life”, they married shortly afterwards and had two children, Ester (born 1968) and Mathijs (born 1970).

Simon at a farewell party

In 1967, Simon again became responsible for magnet power supplies, this time for the Intersecting Storage Rings (ISR) and a little later also for the 400 GeV Super Proton Synchrotron (SPS). During his activities at the ISR he developed the now famous “van der Meer scan”, a method to measure and optimize the luminosity of colliding beams. The ISR was a collider with a huge intensity, more than 50 A direct current of protons per beam, and it was in 1968 – probably during one of the long nights devoted to machine development – that a new and brilliant idea to increase luminosity was conceived: the concept of stochastic cooling.
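
The scan itself is worth a brief illustration: sweeping the beams across each other in small transverse steps and recording the interaction rate at each separation traces out a roughly Gaussian curve, whose effective widths Σx and Σy give the absolute luminosity as L = N₁N₂f/(2πΣxΣy). A minimal one-plane sketch with synthetic data – every number below is illustrative, not an ISR or LHC parameter:

```python
import math

# Synthetic one-plane scan: the rate falls off as a Gaussian of the separation.
SIGMA_TRUE = 0.12e-3    # m, true overlap width
R0 = 5.0e4              # Hz, head-on interaction rate
seps = [i * 0.05e-3 for i in range(-8, 9)]    # separation steps, m
rates = [R0 * math.exp(-d * d / (2 * SIGMA_TRUE ** 2)) for d in seps]

# Overlap width Sigma = rms of the rate-weighted separation profile
area = sum(rates)
mean = sum(d * r for d, r in zip(seps, rates)) / area
sigma = math.sqrt(sum((d - mean) ** 2 * r for d, r in zip(seps, rates)) / area)

# Absolute luminosity, assuming equal overlap widths in both planes
N1 = N2 = 1.15e11       # particles per bunch
F_REV = 11245.0         # revolution frequency, Hz
sigma_cm = sigma * 100.0
lumi = N1 * N2 * F_REV / (2 * math.pi * sigma_cm ** 2)
print(f"Sigma = {sigma * 1e6:.0f} um -> L = {lumi:.2e} cm^-2 s^-1 per bunch pair")
```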

A Nobel concept

“The cooling of a single particle circulating in a ring is particularly simple” (van der Meer 1984), provided that it can be seen above all of the electronic noise from the pick-up and the preamplifiers. “All” that is needed is to measure the amount of betatron oscillation at a suitable location in the ring and correct it later with a kicker at a phase advance of an odd multiple of 90° (figure 1). But the devil (closely related to Maxwell’s demon) is in the detail. Normally, it is not possible to measure the position of just one particle because there are so many particles in the ring that a single one is impossible to resolve. So, groups of particles – often referred to as beam “slices” or “samples” – must be considered instead.

The concept of transverse stochastic cooling

For such a beam slice, it is indeed possible to measure the average position with sufficient precision during its passage through a pick-up and to correct for this when the same slice goes through a kicker. However, the particles in such a slice are not fixed in their relative position. Because there is always a spread around the central momentum, some particles are faster and others are slower. This leads to an exchange of particles between adjacent beam slices. This “mixing” is vital for stochastic cooling – without it, the cooling action would be over in a few turns. Stochastic cooling does eventually act on individual particles. With the combination of many thousands of observations (many thousands of turns), a sufficiently large bandwidth of the cooling system’s low-noise (sometimes cryogenic) electronics and powerful kickers, it works.
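
A toy numerical model makes the mechanism visible: each turn, measure the average offset of every slice, kick it back with some gain, and let mixing reshuffle the slices between corrections; the rms beam size then shrinks steadily. The sketch below is schematic only – the tiny slices and the perfect, noiseless pick-up are assumptions made so that the effect appears within a few hundred turns:

```python
import random
import statistics

random.seed(1)
N, SLICE, GAIN, TURNS = 2000, 20, 0.5, 200
x = [random.gauss(0.0, 1.0) for _ in range(N)]   # betatron offsets, arbitrary units

for turn in range(1, TURNS + 1):
    random.shuffle(x)    # 'mixing': momentum spread reshuffles slice membership
    for i in range(0, N, SLICE):
        avg = statistics.mean(x[i:i + SLICE])    # pick-up measures the slice average
        for j in range(i, i + SLICE):
            x[j] -= GAIN * avg                   # kicker corrects it downstream
    if turn % 50 == 0:
        print(f"turn {turn:3d}: rms = {statistics.pstdev(x):.3f}")
```

Pushing GAIN to 2 or beyond makes the correction overshoot until the loop heats rather than cools the beam – a reminder that stochastic cooling is, at heart, a feedback system. In a real machine each sample contains vastly more particles and the electronics add noise, which is why cooling takes far longer than a few hundred turns.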

At the time, there were discussions about a possible clash with Liouville’s theorem, which states that a continuum of charged particles guided by electromagnetic fields behaves like an incompressible liquid. In reality, particle beams consist of a mixture of occupied and non-occupied phase space – much like foam in a glass of beer. Stochastic cooling is not trying to compress this “liquid” but rather it separates occupied and non-occupied phase space, in a way similar to foam that is settling. Once these theoretical questions were clarified there were still many open issues, such as the influence of noise and the required bandwidth. With a mild push from friends and colleagues, Simon finally published the first internal note on stochastic cooling in 1972 (van der Meer 1972).


Over the following years, the newly born bird quickly learnt to fly. A first proof-of-principle experiment was carried out in the ISR with a quickly installed stochastic cooling system. Careful emittance measurements over a long period showed the hoped-for effect (Hübner et al. 1975). Together with the proposal to stack antiprotons for physics in the ISR (Strolin et al. 1976) and for the SPS collider (Rubbia et al. 1977), this led to the construction of the Initial Cooling Experiment (ICE) in 1977. ICE was a storage ring built from components of the g-2 experiment. It was constructed expressly for a full-scale demonstration of stochastic cooling of beam size and momentum spread (electron cooling was tried there later). In addition, Simon produced evidence that “stochastic stacking” (stacking in momentum space with the aid of stochastic cooling) works well as a vital tool for the production of large stacks of antiprotons (van der Meer 1978).

Simon at the controls

Once the validity of the method had been demonstrated, Simon’s idea rode on the crest of a wave of large projects that took life at CERN. There was the proposal by David Cline, Peter McIntyre, Fred Mills and Carlo Rubbia to convert the SPS into a proton–antiproton collider. The aim was to provide experimental evidence for the W and Z particles, which would emerge in head-on collisions between sufficiently dense proton and antiproton bunches. Construction of the Antiproton Accumulator (AA) was authorized and started in 1978 under the joint leadership of Simon van der Meer and Roy Billinge. The world’s first antiproton accumulator started up on 3 July 1980, with the first beam circulating the very same evening, and by 22 August 1981 a stack of about 10¹¹ particles had been achieved (Chohan 2004). The UA1 and UA2 experiments had already recorded the first collisions between high-energy proton and antiproton bunches in the SPS, operating as a collider, on 9 July 1981.

The real highlight arrived in 1982 with the first signs of the W boson, announced on 19 January 1983, to be followed by the discovery of the Z, announced in May. This was swiftly followed by the award of the Nobel Prize in Physics in 1984 to Simon and Carlo Rubbia for “their decisive contributions to the large project which led to the discovery of the field particles W and Z, communicators of the weak force.”

Simon participated actively in both the commissioning and the operation of the AA and later the Antiproton Accumulator Complex (AAC) – the AA supplemented by a second ring, the Antiproton Collector (AC). He contributed not only to stochastic cooling but to all aspects, for example writing numerous, highly appreciated application programs for the operation of the machines.

Antiproton Accumulator in 1980.

He was certainly aware of his superior intellect but he took it as a natural gift, and if someone else did good work, he valued that just as much. When there was a need, he also did “low level” work. Those who worked with him remember many occasions when someone had a good suggestion on how to improve a controls program; Simon would say, “Yes, that would be better indeed”, and the next morning it was in operation. He was often in a thoughtful mode, contemplating new ideas and concepts. Usually he did not pass them on to colleagues for comments until he was really convinced himself that they would work. Once he was sure that a certain concept was good and that it was the right way to go, he could be insistent on getting it going. He rarely made comments in meetings, but when he did say something it carried considerable weight. He was already highly respected long before he became famous in 1984.

Cooling around the world

In the following years, Simon was extremely active in the conversion and the operation of the AA, together with the additional large-acceptance collector ring, the AC. These two rings, with a total of 16 stochastic cooling systems, began antiproton production in 1987 as the AAC – and remained CERN’s workhorse for antiproton production until 1996. Later, the AA was removed and the AC converted into the Antiproton Decelerator (AD), which has run since 2000 with just three stochastic-cooling systems. These three remaining systems operate at 3.5 GeV/c and 2 GeV/c during the deceleration process and are followed by electron cooling at lower momenta.

Stochastic cooling was also used in CERN’s Low Energy Antiproton Ring (LEAR) in combination with electron cooling until the mid-1990s. In a nutshell, stochastic cooling is most suited to rendering hot beams warm and electron cooling makes warm beams cold. Thus the two techniques are, in a way, complementary. As a spin-off from his work on stochastic cooling, Simon proposed a new (noise assisted) slow-extraction method called “stochastic extraction”. This was first used at LEAR, where it eventually made possible spills of up to 24-hour duration. Prior to that, low-ripple spills could last at best a few seconds.

Simon would see the worldwide success of his great inventions not only before his retirement in 1991, but also afterwards. Stochastic cooling systems became operational at Fermilab around 1980 and later, in the early 1990s, at GSI Darmstadt and Forschungszentrum Jülich (FZJ), as well as at other cooling rings all over the world. The Fermilab antiproton source for the Tevatron started operation in 1985. It is in several aspects similar to the CERN AA/AC, which it has since surpassed in performance, leading to important discoveries, including that of the top quark.

For many years, routine application of stochastic cooling was limited to coasting beams, and stochastic cooling of bunched beams in large machines remained a dream for more than a decade. However, once delicate problems related to the saturation of front-end amplifiers and the resulting intermodulation had been mastered, bunched-beam stochastic cooling came into routine operation at Fermilab and at the Relativistic Heavy Ion Collider at Brookhaven. Related beam-cooling methods, such as optical stochastic cooling, are also being proposed or are under development.

The magnetic horn, meanwhile, has found numerous applications in different accelerators. The van der Meer scan is a vital tool used for LHC operation and stochastic extraction is used in various machines, for example in COSY at FZJ (since 1996).

After his retirement, Simon kept in close contact with a small group of his former colleagues and friends and there were more or less regular “Tuesday lunch meetings”.

“Unlike many of his Nobel colleagues, who almost invariably are propelled to great achievements by their self confidence, van der Meer remained a modest and quiet person preferring, now that he had retired, to leave the lecture tours to other more extrovert personalities and instead look after his garden and occasionally see a few friends. Never has anyone been changed less by success”, wrote Andy Sessler and Ted Wilson in their book Engines of Discovery (Sessler and Wilson 2007). At CERN today, Simon’s contributions continue to play a significant role in many projects, from the LHC and the CERN Neutrinos to Gran Sasso facility to the antimatter programme at the AD – where results last year were honoured with the distinction of “breakthrough of the year” by Physics World magazine.

We all learnt with great sadness that Simon passed away on 4 March 2011. He will stay alive in our memories for ever.

TIARA aims to enhance accelerator R&D in Europe

First meeting of TIARA

Particle accelerators are vital state-of-the-art instruments for both fundamental and applied research in areas such as particle physics, nuclear physics and the generation of intense synchrotron radiation and neutron beams. They are also used for many other purposes, in particular medical and industrial applications. Overall, the “market” for accelerators is large and increasing steadily year on year. Moreover, R&D in accelerator science and technology, as well as its applications, often leads to innovations with strong socio-economic impact.

New accelerator-based projects generally require the development of advanced concepts and innovative components with continuously improving performance. This necessitates three levels of R&D: exploratory (validity of principles, conceptual feasibility); targeted (technical demonstration); and industrialization (transfer to industry and optimization). Because these developments require increasingly sophisticated and more expensive prototypes and test facilities, many of those involved in the field felt the need to establish a new initiative aimed at providing a more structured framework for accelerator R&D in Europe with the support of the European Commission (EC). This has led to the Test Infrastructure and Accelerator Research Area (TIARA) project. Co-funded by the European Union Seventh Framework Programme (FP7), the three-year preparatory-phase project started on 1 January 2011, with its first meeting being held at CERN on 23–24 February.


The approval of the TIARA project and its structure continues a strategic direction that began a decade ago with the 2001 report to the European Committee for Future Accelerators from the Working Group on the future of accelerator-based particle physics in Europe, followed by the creation of the European Steering Group on Accelerator R&D (ESGARD) in 2002. This was reinforced within the European Strategy for Particle Physics in 2006. The main objective is to optimize and enhance the outcome of accelerator research and technical developments in Europe. This strategy has been developed and implemented with the incentive provided by the Framework Programmes FP6 and FP7, thanks to projects such as CARE, EUROTeV, EURISOL, EuroLEAP, SLHC-PP, ILC-HiGrade, EUROnu and EuCARD. Together, these programmes represent a total investment of around €190 million for the period covered by FP6 and FP7 (2004 to 2012), with about €60 million coming from the EC.

The overall aim of TIARA is to facilitate and optimize European R&D efforts in accelerator science and technology in a sustainable way. This endeavour involves a large number of partners across Europe, including universities as well as national and international organizations managing large research centres. Specifically, the main objective is to create a single distributed European accelerator R&D facility by integrating national and international accelerator R&D infrastructures. This will include the implementation of organizational structures to enable the integration of existing individual infrastructures, their efficient operation and upgrades, as well as the construction of new ones whenever needed.

Project organization

The means and structures required to bring about the objectives of TIARA will be developed through the TIARA Preparatory Phase project, at a total cost of €9.1 million, with an EC contribution of €3.9 million. The duration is three years – from January 2011 to December 2013 – and it will involve an estimated total of 677 person-months. The project is co-ordinated by the French Alternative Energies and Atomic Energy Commission (CEA), with Roy Aleksan as project co-ordinator, François Kircher as deputy co-ordinator, and Céline Tanguy as project-assistant co-ordinator. Its management bodies are the Governing Council and the Steering Committee. The Governing Council represents the project partners and has elected Leonid Rivkin, of the Paul Scherrer Institute, as its chair. The Steering Committee will ensure the execution of the project’s overall activities, with all work-package co-ordinators as members.

The project is divided into nine work packages (WP). The first five of these are dedicated to organizational issues, while the other four deal with technical aspects.

WP1 focuses on the management of the consortium. Its main task is to ensure that the project goals are achieved, and it also covers communications, dissemination and outreach. The project office, composed of the co-ordinator and the management team, forms the core of this work package, which is led by Aleksan, the project co-ordinator.

The main objective of WP2, also led by Aleksan, is to develop the future governance structure of TIARA. This includes the definition of the consortium’s organization, the constitution of the statutes and the required means and methods for its management, as well as the related administrative, legal and financial aspects.

WP3 is devoted to the integration and optimization of the European R&D infrastructures. Based on a survey of those that already exist, its objective is to determine present and future needs and to propose ways for developing, sharing and accessing these infrastructures among different users. This work package will also investigate how to strengthen the collaboration with industry and define a technology roadmap for the development of future accelerator components in industry. It is led by Anders Unnervik of CERN.

The main objective of WP4 is to develop a common methodology and procedure for initiating, costing and implementing collaborative R&D projects in a sustainable way. Using these procedures, WP4 will aim to propose a coherent and comprehensive joint R&D programme in accelerator science and technology, which will be carried out by a broad community using the distributed TIARA infrastructures.

The development of structures and mechanisms that allow efficient education and training of human resources and encourage their exchange among the partner facilities is the goal of WP5. The main tasks are to survey the human and training resources and the market for accelerator scientists, as well as to establish a plan of action for promoting accelerator science. This work package is led by Phil Burrows of the John Adams Institute in the UK.

WP6 – SLS Vertical Emittance Tuning (SVET) – is the first of the technical work packages. Its purpose is to convert the Swiss Light Source (SLS) into an R&D infrastructure for reaching and measuring ultrasmall emittances, as will be required for damping rings at a future electron–positron linear collider. This will be done mainly by improving the monitors that are used to measure beam characteristics (position, profile, emittance), and by minimizing the magnetic field errors, misalignments and betatron coupling. This work package is led by Yannis Papaphilippou of CERN.

The principal objective of WP7 – Ionization Cooling Test Facility (ICTF) – is to deliver detailed design reports of the RF power infrastructure upgrades that the ICTF at the UK’s Rutherford Appleton Laboratory requires for it to become the world’s laboratory for R&D in ionization cooling. The design reports will include several upgrades necessary to make the first demonstration of ionization cooling. Ken Long of Imperial College, London, leads this work package.

The goal of WP8 – High Gradient Acceleration (HGA) – is to establish a new R&D infrastructure by upgrading the energy of SPARC, the advanced photo-injector test-facility linac at Frascati. The upgrade will use C-band terawatt high-gradient accelerating structures to reach 250 MeV at the end of the structure. It will be crucial for the next generation of free-electron laser projects, as well as for the SuperB collider project. The work package is led by Marica Biagini of the Frascati National Laboratories.


WP9 – Test Infrastructure for High Energy Power Accelerator Components (TIHPAC) – is centred on the design of two test benches aimed at the future European isotope-separation on-line facility, EURISOL. These will be an irradiation test facility for developing high-power targets and a cryostat for testing various kinds of fully equipped low-beta superconducting cavities. These infrastructures would also be essential for other projects such as the European Spallation Source and accelerator-driven systems such as MYRRHA. The work package is led by Sébastien Bousson of CNRS/IN2P3/Orsay.

• For more information about the TIARA project, see the website at www.eu-tiara.eu.

Exploring Fundamental Particles

By Lincoln Wolfenstein and João P Silva
Taylor & Francis; CRC Press 2011
Paperback: £30 $49.95
E-book: $49.95


Writing a book is no easy task. It surely requires a considerable investment of time and effort (it is difficult enough to write short book reviews). This is especially true of books about complex scientific topics, written by people who are certainly not professional writers. I doubt that the authors of the books reviewed in the CERN Courier have taken courses on how to write bestsellers. Given such hard work, authors must have good reasons to embark on the daunting challenge of writing a book.

When I started reading Exploring Fundamental Particles, I immediately wondered what reasons could have triggered Lincoln Wolfenstein and João Silva to write such a book. After all, there are already many “textbooks” about particle physics, both general surveys and treatments of specific topics. For instance, the puzzling topic of CP violation is described in much detail in the book CP Violation (OUP 1999), by Gustavo Branco, Luís Lavoura and João Silva (the same João Silva, despite the fact that João and Silva are probably the two most common Portuguese names). There are also many books about particle physics that address the “general public”, such as the fascinating Zeptospace Odyssey (OUP 2009), by Gian Giudice, which is a nice option for summer reading, despite the somewhat weird title (the start-of-section quotations are particularly enjoyable).

Exploring Fundamental Particles follows an intermediate path. It addresses a broad spectrum of physics topics all of the way from Newton (!) and basic quantum mechanics to the searches for the Higgs boson at the LHC – building the Standard Model along the way. And yet, despite its wide scope, the book focuses with particularly high resolution on a few specific issues, such as CP violation and neutrino physics, which are not exactly the easiest things to explain to a wide audience. The authors must have faced difficult moments during the writing and editing phases, trying hard to keep the text readable for non-experts, while giving the book a “professional touch”.

This somewhat schizophrenic style is illustrated by the fact that, while the book is submerged in Feynman diagrams – some of them quite hard to digest (“penguins” and other beasts) – it has no equations at all, not even the ubiquitous E=mc², maybe for fear of losing the reader, until we reach the fifth appendix, after more than 250 pages, where we finally do see E=mc². The reading is not easy (definitely not a “summertime book”) so, for an audience of university students and young researchers, adding a few equations would have improved the clarity of the exposition.

I also found it disturbing to see the intriguing discussions of puzzling subjects interrupted by trivial explanations of how to pronounce “Delta rho”, “psi prime” and so on. These parenthetical moments distract readers who are trying to stay focused on the main narrative and are useless to everyone else. (If you do not know how to pronounce a few common Greek letters, you are not likely to survive a guided tour through the CKM matrix.)

I hope the authors (and editor) will soon revise the book and publish a second edition. In the meantime, I will surely read again a few sections of this edition; for certain things, it is really quite a useful book.

Induction Accelerators

By Ken Takayama and Richard Briggs (eds.)
Springer
Hardback: €126.55 £108 $169


Of the nearly 30,000 particle accelerators now operating worldwide, few types are as unfamiliar to most physicists and engineers as induction accelerators. This class of machine is likewise poorly represented in technical monographs. Induction Accelerators, a volume of 12 essays by well known experts, forms a structured exposition of the basic principles and functions of major technical systems of induction accelerators. The editors have arranged the essays in the logical progression of chapters in a textbook. Nonetheless, each has been written to be useful as a stand-alone text.

Apart from the two chapters about induction synchrotrons, the book is very much the product of the “Livermore/Berkeley school” of induction linear accelerator (linac) technology, started by Nicholas Christofilos and led for many years by Richard Briggs as the Beam Research Program at the Lawrence Livermore National Laboratory. The chapters by Briggs and his colleagues John Barnard, Louis Reginato and Glen Westenskow are masterful expositions marked by the clarity of analysis and physics motivation that have been the hallmarks of the Livermore/Berkeley school. A prime example is the presentation of the principles of induction accelerators, which, despite its brevity, forms an indispensable introduction by the master in the field to a discussion (together with Reginato) of the many trade-offs in designing induction cells.

One application of induction technology made important by affordable, solid-state power electronics and high-quality, amorphous magnetic materials is the induction-based modulator. This application grew from early investigations of magnetic switching by Daniel Birx and his collaborators; it is well described by Edward G Cook and Eiki Hotta in the context of a more general discussion of high-power switches and power-compression techniques.

Invented as low-impedance, multistage accelerators of high-current electron beams, induction machines have always had the central challenge of controlling beam instabilities and other maladies that can spoil the quality of the beam. Such issues have been the focus of the major scientific contribution of George Caporaso and Yu-Jiuan Chen, who – in the most mathematical chapter of the book – discuss beam dynamics, the control of beam break-up instability and the suppression of emittance growth resulting from the combination of misalignment and chromatic effects in the beam transport.

In ion induction linacs proposed for use as inertial-fusion energy drivers, an additional class of instabilities is possible, namely, unstable longitudinal space–charge waves. These instabilities are analysed in a chapter by Barnard and Kazuhiko Horioka titled “Ion Induction Linacs”. It is followed by a description of the applications of ion linacs, especially to heavy-ion-driven inertial fusion and high-energy density research. These chapters contain the most extensive bibliographies of the book.

The use of induction devices in a synchrotron configuration was studied at Livermore and at Pulsed Sciences Inc in the late 1980s. However, it was not until the proof-of-concept experiment by Takayama and his colleagues at KEK, who separated the functions of acceleration and longitudinal focusing, that applications of induction accelerators to producing long bunches (super-bunches) in relativistic-ion accelerators became a possibility for an eventual very large hadron collider. These devices and their potential applications are described in the final chapters of the book.

Both physicists and engineers will find the papers in Induction Accelerators well written, with ample – though not exhaustive – bibliographies. While the volume is not a textbook, it could profitably be used as associated reading in a course on accelerator science and technology. Induction Accelerators fills a void in the formal literature on accelerators. It is a tribute to Nicholas Christofilos and Daniel Birx, two brilliant technical physicists to whom this volume is dedicated. I recommend it highly.
