
Beautiful times in Amsterdam


The 13th International Conference on B-Physics at Hadron Machines (Beauty 2011) was held at the Felix Meritis building in the historic centre of Amsterdam on 4–8 April. Hosted by Nikhef, the National Institute for Subatomic Physics of the Netherlands, the meeting attracted about 100 participants, including experts from Europe, America and Asia. There were 60 invited talks.

The main topic was the physics of Bq mesons, which consist of a b (“beauty”) quark and an anti-q quark, where q can be an up, down, strange or charm quark. These particles offer interesting probes for precision tests of the Standard Model. In this context, asymmetries between the decay rates of B and B̄ mesons, which violate the charge–parity (CP) invariance of the weak interactions, play a key role. Such observables, as well as various strongly suppressed rare decays of B mesons, are sensitive to “new physics”, thanks to the possible impact of contributions from new particles to virtual quantum loops.

The search for these indirect footprints of physics beyond the Standard Model through high-precision measurements is complemented by the search for direct signals of new particles at high-energy colliders. Here, physicists aim to produce new particles (such as supersymmetric squarks or new gauge bosons) and to study their decays in general-purpose detectors – ATLAS and CMS, in the case of the LHC at CERN. The exploration of heavy flavours, and the B-meson system in particular, is the target of the LHCb experiment, which is exploiting the many B mesons that are produced in the proton–proton collisions at the LHC.

Studies of CP violation

The Beauty conferences traditionally have a strong focus on studies of B mesons at hadron machines. In the previous decade, this field was the domain of the CDF and DØ experiments at the Tevatron, the proton–antiproton collider at Fermilab. The electron–positron B factories at SLAC and KEK, with the BaBar and Belle detectors respectively, were the first to establish CP violation in the B-meson system, while the Tevatron experiments have extended the measurements into the Bs-meson sector, which is still poorly explored. These studies have shown that the Cabibbo-Kobayashi-Maskawa matrix is the dominant source of flavour and CP violation, in accordance with the Standard Model.

However, there is evidence that this model is not complete and recent studies of Bs decays by the CDF and DØ collaborations give a hint of new sources of CP violation in a quantum-mechanical phenomenon, Bs–B̄s mixing – although the uncertainties are still too large to draw definite conclusions. In specific scenarios for physics beyond the Standard Model (such as supersymmetry and models with extra Z′ bosons), it is actually possible to accommodate new-physics effects of this kind.

The first physics results from the LHC experiments were the main highlight of Beauty 2011. It was impressive to see the wealth and high quality of the data presented. The LHCb collaboration’s presentation of the first analysis of the CP-violating observables of the Bs → J/ψφ decay was particularly exciting. Although the experimental errors are still large, it is intriguing that the data seem to favour a picture similar to the results from CDF and DØ, mentioned above. Fortunately, the LHCb experiment should be able to reduce the uncertainties significantly within a year, with the prospect of revealing new phenomena in Bs–B̄s mixing.

Quantum loops

Another exciting decay in which to search for new physics is the rare decay Bs → μ+μ−, which originates from quantum-loop effects in the Standard Model. New particles running in the loops or even contributing at the tree level may significantly enhance the decay rate. So far, this decay has been the domain of the CDF and DØ experiments; they have put upper bounds on the branching ratio that are still about one order of magnitude above the Standard Model prediction. Now LHCb has entered the arena, presenting a first upper bound that is similar to the results from the Tevatron. The constraints from LHCb, and soon those from ATLAS and CMS, will quickly become stronger and it will be interesting to see whether eventually a signal for Bs → μ+μ− will emerge that is significantly different from the predictions of the Standard Model.


In addition to these key channels that are facilitating the search for new physics in B decays in the early phase of the LHC, the conference covered a range of other topics. Results on heavy-flavour production with the first LHC data collected by the ATLAS, CMS, LHCb and ALICE experiments were presented. Another interesting topic was charm physics, with results from the BES III experiment, from CDF and from the first analyses at LHCb. A summary was given of B-factory results on the measurement of CP violation and the unitarity-triangle parameters, and the status of lepton-flavour violation and of models of physics beyond the Standard Model was also presented. Moreover, the potential of upcoming B-physics experiments – SuperB, SuperKEKB and the LHCb upgrade – was discussed.

The many experimental presentations were complemented by theoretical review talks. Theory also figured in the conference summaries, in which Andrzej Buras of the Technische Universität München developed a vision for theory for 2011 and beyond, while the outgoing LHCb spokesperson, Andrei Golutvin, highlighted the experimental results. The discussions about physics also continued in an informal way during a tour on historic boats through the canals of Amsterdam, with people enjoying the spectacular weather and a visit to the Hermitage museum where the conference dinner was held.

Beauty 2011 showed that these are exciting times for B physics, with plenty still happening at the Tevatron and the first physics results from the LHC. It will be interesting to see whether the data collected by LHCb and the general-purpose detectors in 2011 will already reveal new physics in the B-meson sector. Flavour physics is moving towards new frontiers and is a fascinating part of the LHC adventure. Correlations between various flavour-physics observables and the interplay with the direct searches for new particles will play a key role in obtaining insights into the physics lying beyond the Standard Model.

For further information and the slides of the presentations, visit the conference webpage www.beauty2011.nikhef.nl.

Simon van der Meer: a quiet giant of engineering and physics

Simon van der Meer

Simon van der Meer was born in 1925 in The Hague, the third child of Pieter van der Meer and Jetske Groeneveld. His father was a school teacher and his mother came from a teacher’s family. Good education was highly prized in the van der Meer family and the parents made a big effort to provide this to Simon and his three sisters. Having attended the gymnasium (science section) in The Hague, he passed his final examination in 1943 – during the German occupation in wartime. He stayed at the gymnasium for another two years because the Dutch universities were closed, attending classes in the humanities section. During this period – inspired by his excellent physics teacher – he became interested in electronics and filled his parents’ house with electronic gadgets.

In 1945 Simon began studying technical physics at Delft University, where he specialized in feedback circuits and measurement techniques. In a way, this foreshadowed his main invention, stochastic cooling, which is a combination of measurement (of the position of the particles) and feedback. The “amateur approach” – to use his own words – that he practised during his stay at Delft University later crystallized in an ability to see complicated things in a simple and clear manner. In 1952 he joined the highly reputed Philips research laboratory in Eindhoven, where he became involved in development work on high-voltage equipment and electronics for electron microscopes. Then, in 1956, he decided to move to the recently founded CERN laboratory.

magnetic horn

As one of his first tasks at CERN, Simon became involved in the design of the pole-face windings and multipole correction lenses for the 26 GeV Proton Synchrotron (PS), which is still in operation today, as the heart of CERN’s accelerator complex. Supervised by and in collaboration with John Adams and Colin Ramm, he developed – in parallel to his technical work on power supplies for these big magnets – a growing interest in particle physics. He worked for a year on a separated antiproton beam, an activity that triggered the idea of the magnetic horn – a pulsed focusing device for charged particles, which traverse a thin metal wall in which a pulsed high current flows. Such a device is often referred to as a “current sheet lens”. The original application of the magnetic horn was for neutrino physics. Of the secondary particles emerging from a target hit by a high-energy proton beam, the horn selectively focused the pions. When the pions then decayed into muons and neutrinos, an equally focused and intense neutrino beam was obtained. The magnetic horn found many applications all around the world, for both neutrino physics and the production of antiprotons.
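To give a rough idea of how such a current-sheet lens works (the expression below is a standard textbook one, added here as an illustration rather than taken from the article): between the inner and outer conductors of the horn the pulsed current I produces a purely azimuthal magnetic field

\[
B_\phi(r) \;=\; \frac{\mu_0 I}{2\pi r},
\]

where r is the distance from the beam axis. A charged particle crossing this region receives a transverse kick towards or away from the axis depending on its charge sign, which is what lets the horn focus secondary pions of one sign while defocusing the other.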

In 1965 Simon joined the group led by Francis Farley working on a g-2 experiment for the precision measurement of the magnetic moment of the muon. There, he took part in the design of a small storage ring (the g-2 ring) and participated in all phases of the experiment. As he stated later, this period was an invaluable experience not only for his scientific life but also through sharing the vibrant atmosphere at CERN at the time – which was full of excitement – and the lifestyle of experimental high-energy physics. It was also about this time, in 1966, that Simon met his future wife, Catharina Koopman, during a skiing excursion in the Swiss mountains. In what Simon later described as “one of the best decisions of my life”, they married shortly afterwards and had two children, Ester (born 1968) and Mathijs (born 1970).

Simon at a farewell party

In 1967, Simon again became responsible for magnet power supplies, this time for the Intersecting Storage Rings (ISR) and a little later also for the 400 GeV Super Proton Synchrotron (SPS). During his activities at the ISR he developed the now famous “van der Meer scan”, a method to measure and optimize the luminosity of colliding beams. The ISR was a collider with a huge intensity – more than 50 A of direct proton current per beam – and it was in 1968, probably during one of the long nights devoted to machine development, that a new and brilliant idea to increase luminosity was conceived: the concept of stochastic cooling.
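As a sketch of the idea behind such a scan (the formulae below are the standard modern ones, quoted as an illustration rather than taken from the article): for bunched Gaussian beams the luminosity can be written as

\[
\mathcal{L} \;=\; \frac{f\, n_b\, N_1 N_2}{2\pi\, \Sigma_x \Sigma_y},
\qquad
\Sigma_{x,y} \;=\; \frac{1}{\sqrt{2\pi}}\,\frac{\int R(\delta_{x,y})\,\mathrm{d}\delta_{x,y}}{R(0)},
\]

where f is the revolution frequency, n_b the number of colliding bunch pairs, N_1 and N_2 the bunch populations, and R(δ) the interaction rate recorded as one beam is swept across the other by a separation δ in the horizontal or vertical plane. The effective overlap widths Σx and Σy – and hence the absolute luminosity – follow directly from the measured rate curves.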

A Nobel concept

“The cooling of a single particle circulating in a ring is particularly simple” (van der Meer 1984), provided that it can be seen above all of the electronic noise from the pick-up and the preamplifiers. “All” that is needed is to measure the amount of betatron oscillation at a suitable location in the ring and correct it later with a kicker at a phase advance of an odd multiple of 90° (figure 1). But the devil (closely related to Maxwell’s demon) is in the detail. Normally, it is not possible to measure the position of just one particle because there are so many particles in the ring that a single one is impossible to resolve. So, groups of particles – often referred to as beam “slices” or “samples” – must be considered instead.

The concept of transverse stochastic cooling

For such a beam slice, it is indeed possible to measure the average position with sufficient precision during its passage through a pick-up and to correct for this when the same slice goes through a kicker. However, the particles in such a slice are not fixed in their relative position. Because there is always a spread around the central momentum, some particles are faster and others are slower. This leads to an exchange of particles between adjacent beam slices. This “mixing” is vital for stochastic cooling – without it, the cooling action would be over in a few turns. Stochastic cooling does eventually act on individual particles. With the combination of many thousands of observations (many thousands of turns), a sufficiently large bandwidth of the cooling system’s low-noise (sometimes cryogenic) electronics and powerful kickers, it works.
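The essence of the scheme can be captured in a few lines of code. The toy model below is purely illustrative – the sample size, gain and the assumption of perfect mixing every turn are choices made for this sketch, not parameters of any real cooling system – but it shows how repeatedly kicking each beam slice back by its measured average offset gradually shrinks the spread of the individual particles.

import numpy as np

# Toy model of transverse stochastic cooling (illustrative sketch only).
rng = np.random.default_rng(1)

n_particles = 20000   # particles circulating in the ring
sample_size = 100     # particles the pick-up "sees" together in one slice
gain = 1.0            # fraction of the measured slice offset removed per turn
n_turns = 500

# Betatron offsets of each particle as observed at the pick-up
x = rng.normal(0.0, 1.0, n_particles)
print(f"rms offset before cooling: {x.std():.3f}")

for _ in range(n_turns):
    # "Mixing": the momentum spread reshuffles which particles share a slice
    order = rng.permutation(n_particles)
    slices = order.reshape(-1, sample_size)
    # Measure the average offset of each slice at the pick-up ...
    means = x[slices].mean(axis=1, keepdims=True)
    # ... and kick that slice back towards zero at the downstream kicker
    x[slices] -= gain * means

print(f"rms offset after {n_turns} turns: {x.std():.3f}")

With the gain set close to one, the mean-square offset in this idealized model shrinks by roughly a factor (1 − 1/Ns) per turn, where Ns is the sample size – which is why real systems strive for the largest possible bandwidth, so that the pick-up resolves the smallest possible samples.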

At the time, there were discussions about a possible clash with Liouville’s theorem, which states that a continuum of charged particles guided by electromagnetic fields behaves like an incompressible liquid. In reality, particle beams consist of a mixture of occupied and non-occupied phase space – much like foam in a glass of beer. Stochastic cooling is not trying to compress this “liquid” but rather it separates occupied and non-occupied phase space, in a way similar to foam that is settling. Once these theoretical questions were clarified there were still many open issues, such as the influence of noise and the required bandwidth. With a mild push from friends and colleagues, Simon finally published the first internal note on stochastic cooling in 1972 (van der Meer 1972).

ICE was a storage ring built from components of the g-2 experiment

Over the following years, the newly born bird quickly learnt to fly. A first proof-of-principle experiment was carried out in the ISR with a quickly installed stochastic cooling system. Careful emittance measurements over a long period showed the hoped-for effect (Hübner et al. 1975). Together with the proposals to stack antiprotons for physics in the ISR (Strolin et al. 1976) and for the SPS collider (Rubbia et al. 1977), this led to the construction of the Initial Cooling Experiment (ICE) in 1977. ICE was a storage ring built from components of the g-2 experiment. It was constructed expressly for a full-scale demonstration of stochastic cooling of beam size and momentum spread (electron cooling was tried later on). In addition, Simon produced evidence that “stochastic stacking” (stacking in momentum space with the aid of stochastic cooling) works well as a vital tool for the production of large stacks of antiprotons (van der Meer 1978).

Simon at the controls

Once the validity of the method had been demonstrated, Simon’s idea rode on the crest of a wave of large projects that took life at CERN. There was the proposal by David Cline, Peter McIntyre, Fred Mills and Carlo Rubbia to convert the SPS into a proton–antiproton collider. The aim was to provide experimental evidence for the W and Z particles, which would emerge in head-on collisions between sufficiently dense proton and antiproton bunches. Construction of the Antiproton Accumulator (AA) was authorized and started in 1978 under the joint leadership of Simon van der Meer and Roy Billinge. The world’s first antiproton accumulator started up on 3 July 1980, with the first beam circulating the very same evening, and by 22 August 1981 a stack of about 10¹¹ particles had been achieved (Chohan 2004). The UA1 and UA2 experiments at the SPS had already reported the first collisions between high-energy proton and antiproton bunches in the SPS, operating as a collider, on 9 July 1981.

The real highlight arrived in 1982 with the first signs of the W boson, announced on 19 January 1983, to be followed by the discovery of the Z, announced in May. This was swiftly followed by the award of the Nobel Prize in Physics in 1984 to Simon and Carlo Rubbia for “their decisive contributions to the large project which led to the discovery of the field particles W and Z, communicators of the weak force.”

Simon participated actively in both the commissioning and the operation of the AA and later the Antiproton Accumulator Complex (AAC) – the AA supplemented by a second ring, the Antiproton Collector (AC). He contributed not only to stochastic cooling but to all aspects, for example writing numerous, highly appreciated application programs for the operation of the machines.

Antiproton Accumulator in 1980.

He was certainly aware of his superior intellect but he took it as a natural gift, and if someone else did good work, he valued that just as much. When there was a need, he also did “low level” work. Those who worked with him remember many occasions when someone had a good suggestion on how to improve a controls program; Simon would say, “Yes, that would be better indeed”, and the next morning it was in operation. He was often in a thoughtful mode, contemplating new ideas and concepts. Usually he did not pass them on to colleagues for comments until he was really convinced himself that they would work. Once he was sure that a certain concept was good and that it was the right way to go, he could be insistent on getting it going. He rarely made comments in meetings, but when he did say something it carried great weight. He was already highly respected long before he became famous in 1984.

Cooling around the world

In the following years, Simon was extremely active in the conversion and the operation of the AA, together with the additional large-acceptance collector ring, the AC. These two rings, with a total of 16 stochastic cooling systems, began antiproton production in 1987 as the AAC – and remained CERN’s workhorse for antiproton production until 1996. Later, the AA was removed and the AC converted into the Antiproton Decelerator (AD), which has run since 2000 with just three stochastic-cooling systems. These remaining systems operate at 3.5 GeV/c and 2 GeV/c during the deceleration process and are followed by electron cooling at lower momentum.

Stochastic cooling was also used in CERN’s Low Energy Antiproton Ring (LEAR), in combination with electron cooling, until the mid-1990s. In a nutshell, stochastic cooling is most suited to rendering hot beams warm, while electron cooling makes warm beams cold. Thus the two techniques are, in a way, complementary. As a spin-off from his work on stochastic cooling, Simon proposed a new (noise-assisted) slow-extraction method called “stochastic extraction”. This was first used at LEAR, where it eventually made possible spills of up to 24 hours in duration. Prior to that, low-ripple spills could last at best a few seconds.

Simon would see the worldwide success of his great inventions not only before his retirement in 1991, but also afterwards. Stochastic cooling systems became operational at Fermilab around 1980 and later, in the early 1990s, at GSI Darmstadt and Forschungszentrum Jülich (FZJ), as well as at other cooling rings all over the world. The Fermilab antiproton source for the Tevatron started operation in 1985. It is in several aspects similar to the CERN AA/AC, which it has since surpassed in performance, leading to important discoveries, including that of the top quark.

For many years, routine application of stochastic cooling was limited to coasting beams, and stochastic cooling of bunched beams in large machines remained a dream for more than a decade. However, once delicate problems related to the saturation of front-end amplifiers and the resulting intermodulation had been mastered, bunched-beam stochastic cooling came into routine operation at Fermilab and at the Relativistic Heavy Ion Collider at Brookhaven. Related beam-cooling methods, such as optical stochastic cooling, have also been proposed or are under development.

The magnetic horn, meanwhile, has found numerous applications in different accelerators. The van der Meer scan is a vital tool used for LHC operation and stochastic extraction is used in various machines, for example in COSY at FZJ (since 1996).

After his retirement, Simon kept in close contact with a small group of his former colleagues and friends and there were more or less regular “Tuesday lunch meetings”.

“Unlike many of his Nobel colleagues, who almost invariably are propelled to great achievements by their self confidence, van der Meer remained a modest and quiet person preferring, now that he had retired, to leave the lecture tours to other more extrovert personalities and instead look after his garden and occasionally see a few friends. Never has anyone been changed less by success”, wrote Andy Sessler and Ted Wilson in their book Engines of Discovery (Sessler and Wilson, 2007). At CERN today, Simon’s contributions continue to play a significant role in many projects, from the LHC and the CERN Neutrinos to Gran Sasso facility to the antimatter programme at the AD – where results last year were honoured with the distinction of “breakthrough of the year” by Physics World magazine.

We all learnt with great sadness that Simon passed away on 4 March 2011. He will stay alive in our memories for ever.

TIARA aims to enhance accelerator R&D in Europe

First meeting of TIARA

Particle accelerators are vital state-of-the-art instruments for both fundamental and applied research in areas such as particle physics, nuclear physics and the generation of intense synchrotron radiation and neutron beams. They are also used for many other purposes, in particular medical and industrial applications. Overall, the “market” for accelerators is large and is increasing steadily year on year. Moreover, R&D in accelerator science and technology, as well as its applications, often leads to innovations with a strong socio-economic impact.

New accelerator-based projects generally require the development of advanced concepts and innovative components with continuously improving performance. This necessitates three levels of R&D: exploratory (validity of principles, conceptual feasibility); targeted (technical demonstration); and industrialization (transfer to industry and optimization). Because these developments require increasingly sophisticated and more expensive prototypes and test facilities, many of those involved in the field felt the need to establish a new initiative aimed at providing a more structured framework for accelerator R&D in Europe with the support of the European Commission (EC). This has led to the Test Infrastructure and Accelerator Research Area (TIARA) project. Co-funded by the European Union Seventh Framework Programme (FP7), the three-year preparatory-phase project started on 1 January 2011, with its first meeting being held at CERN on 23–24 February.


The approval of the TIARA project and its structure continues a strategic direction that began a decade ago with the 2001 report to the European Committee for Future Accelerators from the Working Group on the future of accelerator-based particle physics in Europe, followed by the creation of the European Steering Group on Accelerator R&D (ESGARD) in 2002. This was reinforced within the European Strategy for Particle Physics in 2006. The main objective is to optimize and enhance the outcome of accelerator research and technical development in Europe. This strategy has been developed and implemented with the support of the Framework Programmes FP6 and FP7, thanks to projects such as CARE, EUROTeV, EURISOL, EuroLEAP, SLHC-PP, ILC-HiGrade, EUROnu and EuCARD. Together, these programmes represent a total investment of around €190 million for the period covered by FP6 and FP7 (2004 to 2012), with about €60 million coming from the EC.

The overall aim of TIARA is to facilitate and optimize European R&D efforts in accelerator science and technology in a sustainable way. This endeavour involves a large number of partners across Europe, including universities as well as national and international organizations managing large research centres. Specifically, the main objective is to create a single distributed European accelerator R&D facility by integrating national and international accelerator R&D infrastructures. This will include the implementation of organizational structures to enable the integration of existing individual infrastructures, their efficient operation and upgrades, as well as the construction of new ones whenever needed.

Project organization

The means and structures required to bring about the objectives of TIARA will be developed through the TIARA Preparatory Phase project, at a total cost of €9.1 million, with an EC contribution of €3.9 million. The duration is three years – from January 2011 to December 2013 – and it will involve an estimated total of 677 person-months. The project is co-ordinated by the French Alternative Energies and Atomic Energy Commission (CEA), with Roy Aleksan as project co-ordinator, François Kircher as deputy co-ordinator, and Céline Tanguy as assistant project co-ordinator. Its management bodies are the Governing Council and the Steering Committee. The Governing Council represents the project partners and has elected Leonid Rivkin, of the Paul Scherrer Institute, as its chair. The Steering Committee will ensure the execution of the overall project’s activities, with all work-package co-ordinators as members.

The project is divided into nine work packages (WP). The first five of these are dedicated to organizational issues, while the other four deal with technical aspects.

WP1 focuses on the consortium’s management. Its main task is to ensure the correct achievement of the project goals and it also includes communications, dissemination and outreach. The project office, composed of the co-ordinator and the management team, forms the core of this work package, which is led by Aleksan, the project co-ordinator.

The main objective of WP2, also led by Aleksan, is to develop the future governance structure of TIARA. This includes the definition of the consortium’s organization, the constitution of the statutes and the required means and methods for its management, as well as the related administrative, legal and financial aspects.

WP3 is devoted to the integration and optimization of the European R&D infrastructures. Based on a survey of those that already exist, its objective is to determine present and future needs and to propose ways for developing, sharing and accessing these infrastructures among different users. This work package will also investigate how to strengthen the collaboration with industry and define a technology roadmap for the development of future accelerator components in industry. It is led by Anders Unnervik of CERN.

The main objective of WP4 is to develop a common methodology and procedure for initiating, costing and implementing collaborative R&D projects in a sustainable way. Using these procedures, WP4 will aim to propose a coherent and comprehensive joint R&D programme in accelerator science and technology, which will be carried out by a broad community using the distributed TIARA infrastructures.

The development of structures and mechanisms that allow efficient education and training of human resources and encourage their exchange among the partner facilities is the goal of WP5. The main tasks are to survey the human and training resources and the market for accelerator scientists, as well as to establish a plan of action for promoting accelerator science. This work package is led by Phil Burrows of the John Adams Institute in the UK.

WP6 – SLS Vertical Emittance Tuning (SVET) – is the first of the technical work packages. Its purpose is to convert the Swiss Light Source (SLS) into an R&D infrastructure for reaching and measuring ultrasmall emittances, as will be required for damping rings at a future electron–positron linear collider. This will be done mainly by improving the monitors that are used to measure beam characteristics (position, profile, emittance), and by minimizing the magnetic field errors, misalignments and betatron coupling. This work package is led by Yannis Papaphilippou of CERN.

The principal objective of WP7 – Ionization Cooling Test Facility (ICTF) – is to deliver detailed design reports of the RF power infrastructure upgrades that the ICTF at the UK’s Rutherford Appleton Laboratory requires for it to become the world’s laboratory for R&D in ionization cooling. The design reports will include several upgrades necessary to make the first demonstration of ionization cooling. Ken Long of Imperial College, London, leads this work package.

The goal of WP8 – High Gradient Acceleration (HGA) – is to establish a new R&D infrastructure by upgrading the energy of SPARC, the advanced photo-injector test-facility linac at Frascati. The upgrade will use C-band terawatt high-gradient accelerating structures to reach 250 MeV at the end of the structure. It will be crucial for the next generation of free-electron laser projects, as well as for the SuperB collider project. The work package is led by Marica Biagini of the Frascati National Laboratories.


WP9 – Test Infrastructure for High Energy Power Accelerator Components (TIHPAC) – is centred on the design of two test benches aimed at the future European isotope-separation on-line facility, EURISOL. These will be an irradiation test facility for developing high-power targets and a cryostat for testing various kinds of fully equipped low-beta superconducting cavities. These infrastructures would also be essential for other projects such as the European Spallation Source and accelerator-driven systems such as MYRRHA. The work package is led by Sébastien Bousson of CNRS/IN2P3/Orsay.

• For more information about the TIARA project, see the website at www.eu-tiara.eu.

Exploring Fundamental Particles

By Lincoln Wolfenstein and João P Silva
Taylor & Francis; CRC Press 2011
Paperback: £30 $49.95
E-book: $49.95


Writing a book is no easy task. It surely requires a considerable investment of time and effort (it is difficult enough to write short book reviews). This is especially true of books about complex scientific topics, written by people who are certainly not professional writers. I doubt that the authors of the books reviewed in the CERN Courier have taken courses on how to write bestsellers. Given that it is such hard work, the authors must have good reasons to embark on the daunting challenge of writing a book.

When I started reading Exploring Fundamental Particles, I immediately wondered what could have been the reasons that triggered Lincoln Wolfenstein and João Silva to write such a book. After all, there are already many “textbooks” about particle physics, both general ones and ones devoted to specific topics. For instance, the puzzling topic of CP violation is described in much detail in the book CP Violation (OUP 1999), by Gustavo Branco, Luís Lavoura and João Silva (the same João Silva, despite the fact that João and Silva are probably the two most common Portuguese names). There are also many books about particle physics that address the “general public”, such as the fascinating Zeptospace Odyssey (OUP 2009), by Gian Giudice, which is a nice option for summer reading, despite the somewhat weird title (the start-of-section quotations are particularly enjoyable).

Exploring Fundamental Particles follows an intermediate path. It addresses a broad spectrum of physics topics all of the way from Newton (!) and basic quantum mechanics to the searches for the Higgs boson at the LHC – building the Standard Model along the way. And yet, despite its wide scope, the book focuses with particularly high resolution on a few specific issues, such as CP violation and neutrino physics, which are not exactly the easiest things to explain to a wide audience. The authors must have faced difficult moments during the writing and editing phases, trying hard to keep the text readable for non-experts, while giving the book a “professional touch”.

This somewhat schizophrenic style can be illustrated by the fact that, while the book is submerged in Feynman diagrams – some of them quite hard to digest (“Penguins” and other beasts) – it has no equations at all (not even the ubiquitous E=mc²), maybe for fear of losing the reader, until we reach the end of the book (the fifth appendix, after more than 250 pages, where we do see E=mc²). The reading is not easy (definitely not a “summertime book”) so, for an audience of university students and young researchers, adding a few equations would have improved the clarity of the exposition.

I also found it disturbing to see the intriguing discussions of puzzling subjects interrupted by trivial explanations on how to pronounce “Delta rho”, “psi prime” etc. These parenthetical moments distract the readers who are trying to remain concentrated on the important narrative and are useless to the other readers. (If you do not know how to pronounce a few common Greek letters, you are not likely to survive a guided tour through the CKM matrix.)

I hope the authors (and editor) will soon revise the book and publish a second edition. In the meantime, I will surely read again a few sections of this edition; for certain things, it is really quite a useful book.

Induction Accelerators

By Ken Takayama and Richard Briggs (eds.)
Springer
Hardback: €126.55 £108 $169


Of the nearly 30,000 particle accelerators now operating worldwide, few types are as unfamiliar to most physicists and engineers as induction accelerators. This class of machine is likewise poorly represented in technical monographs. Induction Accelerators, a volume of 12 essays by well-known experts, forms a structured exposition of the basic principles and the functions of the major technical systems of induction accelerators. The editors have arranged the essays in the logical progression of chapters in a textbook. Nonetheless, each has been written to be useful as a stand-alone text.

Apart from the two chapters about induction synchrotrons, the book is very much the product of the “Livermore/Berkeley school” of technology of induction linear accelerators (linacs) started by Nicholas Christofilos and led for many years by Richard Briggs as the Beam Research Program at the Lawrence Livermore National Laboratory. The chapters by Briggs and his colleagues John Barnard, Louis Reginato and Glen Westenskow are masterful expositions marked by the clarity of analysis and physics motivation that have been the hallmarks of the Livermore/Berkeley school. A prime example is the presentation of the principles of induction accelerators that, despite its brevity, forms an indispensable introduction by the master in the field to a discussion (together with Reginato) of the many trade-offs in designing induction cells.

One application of induction technology made important by affordable, solid-state power electronics and high-quality, amorphous magnetic materials is the induction-based modulator. This application grew from early investigations of magnetic switching by Daniel Birx and his collaborators; it is well described by Edward G Cook and Eiki Hotta in the context of a more general discussion of high-power switches and power-compression techniques.

Invented as low-impedance, multistage accelerators of high-current electron beams, induction machines have always had the central challenge of controlling beam instabilities and other maladies that can spoil the quality of the beam. Such issues have been the focus of the major scientific contribution of George Caporaso and Yu-Jiuan Chen, who – in the most mathematical chapter of the book – discuss beam dynamics, the control of beam break-up instability and the suppression of emittance growth resulting from the combination of misalignment and chromatic effects in the beam transport.

In ion induction linacs proposed for use as inertial-fusion energy drivers, an additional class of instabilities is possible, namely, unstable longitudinal space–charge waves. These instabilities are analysed in a chapter by Barnard and Kazuhiko Horioka titled “Ion Induction Linacs”. It is followed by a description of the applications of ion linacs, especially to heavy-ion-driven inertial fusion and high-energy density research. These chapters contain the most extensive bibliographies of the book.

The use of induction devices in a synchrotron configuration was studied at Livermore and at Pulsed Sciences Inc in the late 1980s. However, it was not until the proof-of-concept experiment by Takayama and his colleagues at KEK, who separated the functions of acceleration and longitudinal focusing, that applications of induction accelerators to producing long bunches (super-bunches) in relativistic-ion accelerators became a possibility for an eventual very large hadron collider. These devices and their potential applications are described in the final chapters of the book.

Both physicists and engineers will find the papers in Induction Accelerators well written, with ample – though not exhaustive – bibliographies. While the volume is not a textbook, it could profitably be used as associated reading in a course about accelerator science and technology. Induction Accelerators fills a void in the formal literature on accelerators. It is a tribute to Nicholas Christofilos and Daniel Birx, the two brilliant technical physicists to whom this volume is dedicated. I recommend it highly.

A world first for EMMA


At the end of March, an electron beam was steered round the ring of a new type of particle accelerator and successfully accelerated to 18 MeV for the first time. EMMA (Electron Model for Many Applications) is a proof-of-principle prototype built at the UK Science and Technology Facilities Council’s Daresbury Laboratory to test the concept of the non-scaling fixed-field alternating gradient accelerator (FFAG). The technique should allow the construction of a new generation of more powerful, yet more compact and economical accelerators.

The successful acceleration – a “world first” – not only confirms that the design of the most technically demanding aspects of EMMA is sound but also demonstrates the feasibility of the technology used. The next steps will be to move towards full acceleration, from 10 to 20 MeV, and to commence the detailed characterization of the accelerator.

The basic concept underlying EMMA is that of the FFAG, in which a ring of fixed-field magnets simultaneously steers and focuses the electron beam round the machine. The focusing is as strong as in an alternating-gradient synchrotron but the beam spirals outwards while it is accelerated, as in a cyclotron. However, with sufficiently strong magnetic focusing the displacement of the beam as it accelerates and spirals can be kept much smaller than in other types of accelerator. This makes the FFAG concept attractive for a range of applications, from treating cancer to powering safer nuclear reactors that produce less hazardous waste.

The design of EMMA’s magnet ring presented several challenges. The focusing magnets have a standard quadrupole geometry but they are used to steer the beam by offsetting it horizontally. The magnets are short, so “end effects” become important, and pairs of magnets are closely spaced around the ring, so the interaction between magnets is non-trivial.

• EMMA is a major part of the British Accelerator Science and Radiation Oncology Consortium CONFORM project and is funded by the Research Councils UK (RCUK) Basic Technology programme.

Earthquake in Japan

People around the world were deeply saddened to learn of the devastation caused by the major earthquake and the related tsunami on Friday 11 March in northern Japan. The 8.9-magnitude earthquake had its epicentre some 130 km off the eastern coast, and gave rise to unprecedented damage that extended far and wide.

The KEK high-energy physics laboratory and the Japan Proton Accelerator Research Complex (J-PARC) are the two particle-accelerator facilities closest to the epicentre. In both cases there were fortunately no reported injuries, nor was there any resulting radiation hazard. J-PARC lies on the eastern coast at Tokai and was the more heavily affected of the two facilities. It was designed to withstand a tsunami of up to 10 m and, on this occasion, the tsunami had little effect. Although surrounding roads and some buildings were severely damaged, the accelerators at the facility appear to be in relatively good shape. KEK, at Tsukuba some 50 km north-east of Tokyo, suffered significant disruption to services and some damage to buildings and facilities.

The thoughts of the particle-physics community are with friends and colleagues at partner institutes in Japan, as well as those at laboratories and institutes elsewhere who have family and friends in Japan.

The latest information about KEK and J-PARC is available on the websites: http://j-parc.jp/index-e.html.

ALICE collaboration measures the size of the fireball in heavy-ion collisions


The ALICE collaboration has measured the size of the pion-emitting system in central lead–ion collisions at the LHC at a centre-of-mass energy of 2.76 TeV per nucleon pair. The radii of the pion source were deduced from the shape of the Bose-Einstein peak in the two-pion correlation functions.

In hadron and ion collisions, Bose-Einstein quantum statistics leads to enhanced production of bosons that are close together in phase space, and thus to an excess of pairs at low relative momentum. The width of the excess region is inversely proportional to the system size at decoupling, i.e. at the point when the majority of the particles stop interacting.
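In practice the correlation function is commonly fitted with a Gaussian parametrisation (given here as an illustration; the precise fit form used by ALICE is not spelled out in this article), for example in one dimension

\[
C_2(q) \;\approx\; 1 + \lambda\, e^{-R^2 q^2},
\]

where q is the relative momentum of the pion pair, λ the correlation strength and R the source radius, so a larger source shows up as a narrower Bose–Einstein peak.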

An important finding at the Relativistic Heavy Ion Collider (RHIC) at Brookhaven was that the QCD matter created there behaved like a fluid, with strong collective motions that are well described by hydrodynamic equations. The collective flow makes the size of the system appear smaller with increasing momentum of the pair. This behaviour is also clearly visible for the radii measured at the LHC in the ALICE experiment. Figure 1 shows the results for measurements of the radius of the pion source in three dimensions: along the beam axis, Rlong; along the transverse momentum (kT) of the pair, Rout; and in a direction perpendicular to these two, Rside.

The similarity between the values for Rout and Rside indicates a short duration for the emission, hence an “explosive” emission. The time when the emission reaches its maximum – measured with respect to the first encounter – can be derived from the dependence of the longitudinal radius on the transverse momentum, Rlong(kT). ALICE has found this to be 10–11 fm/c, which is significantly longer than it is at RHIC. Moreover, the product of the three radii at low pair-momentum – the best estimate of the homogeneity volume of the system at decoupling – is twice as large as at RHIC (figure 2).
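One commonly used estimate relating the longitudinal radius to the emission time (again quoted as an illustration rather than from the article) is

\[
R_{\mathrm{long}}^2(k_T) \;\approx\; \frac{\tau_f^2\, T_f}{m_T},
\qquad m_T = \sqrt{m_\pi^2 + k_T^2},
\]

where T_f is the temperature and τ_f the time at decoupling, so the fall-off of R_long with the pair’s transverse momentum determines τ_f.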


These results, taken together with those obtained from the study of the multiplicity and the azimuthal anisotropy, indicate that the fireball formed in nuclear collisions at the LHC is hotter, lives longer and expands to a larger size than at lower energies. Further analyses, in particular including the full dependence of these observables on centrality, will provide more insights into the properties of the system – such as initial velocities, the equation of state and the fluid viscosity – and strongly constrain the theoretical modelling of heavy-ion collisions.

TOTEM construction complete


The winter technical stop saw the final steps of the installation of the TOTEM experiment at the LHC. After 8 years of development, the two arms of the inelastic telescope T1 were successfully installed inside the CMS endcap at about 10.5 m on either side of the interaction point. This detector joins the previously installed telescope T2 (at 13.5 m), as well as detectors in two sets of Roman Pots at 147 m and 220 m. Additional detectors at 147 m were also installed in the shutdown.

TOTEM is designed to make precise measurements of the total proton–proton cross-section and to perform detailed studies of elastic and diffractive proton–proton scattering. It requires dedicated runs of the LHC at low luminosities to allow the movable Roman Pots to bring detectors as close to the beam as possible.

CMS experiment makes use of the tau


Measurements with leptons are an important tool for physics studies at the LHC. While electrons and muons – being the easiest to detect and identify – are used for many analyses, studies that include τ leptons are important for searches and for electroweak measurements in particular. It is a sign that experimental analyses are reaching maturity when physics results on τ leptons become available, as they are now doing with CMS.

The lifetime of the τ is of the order of 10⁻¹³ s, so it decays shortly after production, complicating its identification and use in physics analyses. It decays either leptonically, into an electron or a muon plus two neutrinos, or hadronically, to one or three charged particles together with neutral hadrons and a neutrino. The hadronic decays of the τ thus contain collimated low-multiplicity jets, a feature that is used experimentally to select τ decays while reducing background from QCD jets.

CMS recently published two physics papers studying decays into τ leptons. The first presents a study of the decay of Z bosons into τ pairs, using both leptonic and hadronic decays of the τ (CMS collaboration 2011a). The τ leptons are identified via isolated groups of particles, found through the CMS particle-flow event reconstruction, that are compatible with the possible τ decays. Figure 1 shows the visible invariant mass of the two τ candidates for a τ pair in which one decays leptonically to a muon and the other decays hadronically. Because of the escaping neutrinos in the τ decays, the reconstructed Z-boson mass is not at its known value, but the result of the measurement agrees well with the expectation from the Monte Carlo simulation.


This yields a cross-section for Z → ττ, in proton–proton collisions at 7 TeV, of 1.00 ± 0.05 (stat.) ± 0.08 (syst.) ± 0.04 (lumi.) nb. This agrees well with similar cross-sections measured in the electron and muon decay modes of the Z – as is expected from the lepton universality in Z decays that was established in precision measurements by experiments in the 1990s at the Large Electron Positron collider.

More interestingly, the τ can be used to search for new particles, for the Higgs boson in particular. Higgs particles in the minimal supersymmetric extension of the Standard Model (MSSM) are expected to show a large decay rate to τ pairs, especially for large values of the parameter tanβ, which is the ratio of the vacuum expectation values of the two Higgs doublets.

CMS has carried out such an analysis with the full data sample of 2010 and found no excess of τ-pair production above the expected background (CMS collaboration 2011b). The resulting excluded region in the plane of tanβ and the mass of the pseudoscalar Higgs boson in the MSSM, for the benchmark scenario known as m_h^max, is shown in figure 2.

The surprise is that the search already goes well beyond the reach of the searches at the Tevatron, in part thanks to the high efficiency and high quality of the detection and reconstruction of the τ leptons in CMS. Clearly, the τ has now become an important tool for the collaborations in exploring the new energy region at the LHC.
