Principles of Radiation Interaction in Matter and Detection (4th edition)

By C Leroy and P G Rancoita
World Scientific
Also available at the CERN bookshop

Based on a series of lectures given to undergraduate and graduate students over several years, this book provides a comprehensive and clear presentation of the physics principles that underlie radiation detection.

To detect particles and radiation, the effects of their interaction with matter, when passing through it, have to be studied. The development of increasingly sophisticated and precise detectors has made possible many important discoveries and measurements in particle and nuclear physics.

The book, which has reached its 4th edition thanks to its good reception by readers, is organised into two main parts. The first is dedicated to an extensive treatment of the theories of particle interaction, of the physics and properties of semiconductors, as well as of the displacement damage caused in semiconductors by traversing radiation.

The second part focuses on the techniques used to reveal different kinds of particles, and the corresponding detectors. Detailed examples are presented to illustrate the operation of the various types of detectors. Radiation environments in which these mechanisms of interaction are expected to take place are also described. The last chapter is dedicated to the application of particle detection to medical physics for imaging. Two appendices and a very rich bibliography complete the volume.

This latest edition of the book has been fully revised, and many sections have been extended to give as complete a treatment as possible of this developing field of study and research. Among other things, this edition provides a treatment of Coulomb scattering on screened nuclear potentials resulting from electrons, protons, light ions and heavy ions, which allows the corresponding non-ionising energy-loss (NIEL) doses deposited in any material to be derived.

Physics and Mathematical Tools: Methods and Examples

By A Alastuey, M Clusel, M Magro and P Pujol
World Scientific

This volume presents a set of useful mathematical methods and tools that can be used by physicists and engineers for a wide range of applications. It comprises four chapters, each structured in three parts: first, the general characteristics of the methods are described, then a few examples of applications in different fields are given, and finally a number of exercises are proposed and their solutions sketched.

The topics of the chapters are: analytical properties of susceptibilities in linear response theory, static and dynamical Green functions, and the saddle-point method to estimate integrals. The examples and exercises included range from classical mechanics and electromagnetism to quantum mechanics, quantum field theory and statistical physics. In this way, the general mechanisms of each method are seen from different points of view and therefore made clearer.

The authors have chosen to avoid derivations that are too technical, but without sacrificing rigour or omitting the mathematics behind the method applied in each instance. Moreover, three appendices at the end of the book provide a short overview of some important tools, so that the volume can be considered self-contained, at least to a certain extent.

Intended primarily for undergraduate and graduate physics students, the book could also be useful reading for teachers, researchers and engineers.

Unifying Physics of Accelerators, Lasers and Plasmas

By Andrei Seryi
CRC Press

Particle accelerators have led to remarkable discoveries and enabled scientists to develop and test the Standard Model of particle physics. On a different scale, accelerators have many applications in technology, materials science, biology, medicine (including cancer therapy), fusion research, and industry. These machines are used to accelerate electrons, positrons or ions to energies in the range of tens of MeV to tens of GeV. Electron beams are employed in generating intense X-rays in either synchrotrons or free-electron lasers, such as the Linac Coherent Light Source at Stanford or the XFEL in Hamburg, for a range of applications.

Particle accelerators developed over the last century are now approaching the energy frontier. Today, at the terascale, the machines needed are extremely large and costly. The size of a conventional accelerator is determined by the technology used and final energy required. In conventional accelerators, radiofrequency microwave cavities support the electric fields responsible for accelerating charged particles. Plasma-based particle accelerators, driven by either lasers or particle beams, are showing great promise as future replacements, primarily due to the extremely large accelerating electric fields they can support, leading to the possibility of compact structures. These fields are supported by the collective motion of plasma electrons, forming a space-charge disturbance moving at a speed slightly below the speed of light in a vacuum. This method is commonly known as plasma wakefield particle acceleration.

Plasma-based accelerators are the brainchild of the late John Dawson and colleagues at the University of California, Los Angeles, and the topic is being investigated worldwide with a great deal of success. In the 1980s, John David Lawson asked: “Will they be a serious competitor and displace the conventional ‘dinosaur’ variety?” This is still a valid question: plasma accelerators are already producing bright X-ray sources through betatron radiation at the lower energy scale, and there are plans to create electron beams that are good enough to drive free-electron lasers and future colliders. The topic and application of these plasma accelerators have seen rapid progress worldwide in the last few years, with the result that research is no longer limited to plasma physicists, but now involves accelerator and radiation experts in developing the subject.

The book fills a void in the understanding of accelerator physics, radiation physics and plasma accelerators. It is intended to unify the three areas and does an excellent job. It also introduces the reader to the theory of inventive problem solving (TRIZ), proposed by Genrikh Altshuller in the mid 20th century to aid in the development of successful patents. It is argued that plasma accelerators fit the prescription of TRIZ; however, it could also be argued that knowledge, imagination, creativity and time were all that was needed. The concept of TRIZ is outlined, and it is shown how it can be adapted for scientific and engineering problems.

The book is well organised. First, the fundamental concepts of particle motion in EM fields, common to accelerators and plasmas, are presented. Then, in chapter 3, the basics of synchrotron radiation are introduced. They are discussed again in chapter 7, with a potted history of synchrotrons together with Thomson and Compton scattering. It would make sense to have the history of synchrotrons in the earlier chapter.

The main topic of the book, namely the synergy between accelerators, lasers and plasma, is covered in chapter 4, where a comparison between particle-beam bunch compression and laser-pulse compression is made. Lasers have the additional advantage that they can be amplified in a non-linear medium using chirped-pulse amplification (CPA). This method, together with optical parametric amplification, can push laser pulses to even higher intensities.

The basics of plasma accelerators are covered in chapter 6, where simple models of these accelerators are described, including laser- and beam-driven wakefield accelerators. However, only the lepton wakefield drivers, not the proton one used for the AWAKE project at CERN, are discussed. This chapter also describes general laser plasma processes, such as laser ionisation, with an update on the progress in developing laser peak intensity. The application of plasma accelerators as a driver of free-electron lasers is covered in chapter 8, describing the principles in simple terms, with handy formulae that can be easily used. Proton and ion acceleration are covered in chapter 9, where the reader is introduced to Bragg scattering, the DNA response to radiation and proton-therapy devices, ending with a description of different plasma-acceleration schemes for protons and ions. The basic principles of the laser acceleration of protons and ions by sheaths, radiation pressure and shock waves are briefly covered. The penultimate chapter discusses beam and pulse manipulation, bringing together a fairly comprehensive but brief introduction to some of the issues regarding beam quality: beam stability, cooling and phase transfer, among others. Finally, chapter 11 looks at inventions and innovations in science, describing how using TRIZ could help. There is also a discussion on bridging the gap between initial scientific ideas and experimental verification to commercial applications, the so-called “Valley of Death”, something that is not discussed in textbooks but is now more relevant than ever.

This book is, to my knowledge, the first to bridge the three disciplines of accelerators, lasers and plasmas. It fills a gap in the market and helps in developing a better understanding of the concepts used in the quest to build compact accelerators. It is an inspiring read that is suitable for both undergraduate and graduate students, as well as researchers in the field of plasma accelerators. The book concentrates on the principles, rather than being heavy on the mathematics, and I like the fact that the pages have wide margins to take notes.

Melting Hadrons, Boiling Quarks: From Hagedorn Temperature to Ultra-Relativistic Heavy-Ion Collisions at CERN. With a Tribute to Rolf Hagedorn

By Johann Rafelski (ed.)
Springer
Also available at the CERN bookshop

The statistical bootstrap model (SBM), the exponential rise of the hadron spectrum, and the existence of a limiting temperature as the ultimate indicator for the end of ordinary hadron physics, will always be associated with the name of Rolf Hagedorn. He showed that hadron physics contains its own limit, and we know today that this limit signals quark deconfinement and the start of a new regime of strong-interaction physics.

This book is edited by Johann Rafelski, who was a long-time collaborator with Hagedorn and took part in many of the early conceptual developments of the SBM. It may perhaps be best characterised by pointing out what it is not. It is not a collection of review articles on the physics of the SBM and related topics, which could be given to newcomers as an introduction to the field. It is not a collection of reprints to summarise the well-known work of Hagedorn on the SBM, and it is also not a review of the history of this theory. Actually, in this thoughtfully composed volume, aspects of all of the above can be found. However, it goes beyond all of them.

Including a collection of earlier articles on Hagedorn’s work, as well as new invited articles by a number of authors, and original work by Hagedorn himself, along with comments and reprinted material by Rafelski, the book clearly gains its value through the unexpected. It provides an English translation of an early overview article by Hagedorn written in German, as well as unpublished material that may even be new to well-informed practitioners in the field. For example, it presents the transcript of the draft minutes of the 1982 CERN Scientific Policy Committee (SPC) meeting, at which Maurice Jacob, then head of the CERN Theory Division, reported on the 1982 Bielefeld workshop on the planned experimental exploration of ultra-relativistic heavy-ion collisions, setting the scene for the forthcoming experimental programme at CERN’s SPS.

The book is split into three parts.

Part I, “Reminiscences: Rolf Hagedorn and Relativistic Heavy Ion Research”, contains a collection of 15 invited articles from colleagues of Hagedorn who witnessed the initial stages of his work, leading to the formulation of the SBM theory in the early 1960s, and his decisive contribution in articulating the need for an experimental research programme in the early 1980s: Johann Rafelski, Torleif Ericson, Maurice Jacob, Luigi Sertorio, István Montvay and Tamás Biro, Krzysztof Redlich and Helmut Satz, Gabriele Veneziano, Igor Dremin, Ludwik Turko, Marek Gaździcki and Mark Gorenstein, Grażyna Odyniec, Hans Gutbrod, Berndt Müller, and Emanuele Quercigh. These contributions draw a lively picture of Hagedorn, both as a scientist and as a man, with a wide range of interests spanning high-energy physics to music. They also illustrate the impact of Hagedorn’s work on other areas of physics.

Part II, “The Hagedorn Temperature”, contains a collection of original work by Hagedorn. In this section, the scientist’s seminal publication that appeared in 1964 in Nuovo Cimento is deliberately not included; however, publications that emphasise the hurdles that had to be overcome to get to the SBM, and the interpretation Hagedorn offered on his own work in later years, are presented. This is undoubtedly of great interest to those familiar with the physicist’s work but also curious about its creation and growth.

Part III, “Melting Hadrons, Boiling Quarks: Heavy Ion Path to Quark–Gluon Plasma”, puts the work of Hagedorn into the context of the discussion of a possible relativistic heavy-ion programme at CERN that took place in the early 1980s. It starts with his thoughts about a possible programme of this kind, presented at the workshop on future relativistic heavy-ion experiments held at the Gesellschaft für Schwerionenforschung (GSI). It also includes the draft minutes of the 1982 CERN SPC meeting, and some of the early work on strangeness production as an indicator of quark–gluon plasma formation, put forward over many years by Rafelski.

The book is undoubtedly an ideal companion to all those who wish to recall the birth of one of the main areas of today’s concepts in high-energy physics, and it is definitely a well-deserved credit to one of the great pioneers in their development.

Data to physics

At the beginning of May, the LHC declared the start of a new physics season for its experiments. The “Stable Beams” message visible on the LHC Page 1 screen (see image above) is the “go ahead” for all the experiments to start taking data for physics.

Since 25 March, when the LHC was switched back on after its winter break, the accelerator complex and experiments have been fine-tuned using low-intensity beams and pilot proton collisions, and now the LHC and the experiments are taking an abundance of data.

The short circuit that occurred at the end of April, caused by a small beech marten that had found its way onto a large, open-air electrical transformer above ground, resulted in a delay of only a few days in the LHC running schedule. The relevant part of the LHC stopped immediately and safely after the short circuit, and the entire machine remained in standby mode for a few days.

Now, the four largest LHC experiment collaborations, ALICE, ATLAS, CMS and LHCb, have started to collect and analyse the 2016 data (see images above). Last year, operators increased the number of proton bunches to 2244 per beam, spaced at intervals of 25 ns. This enabled the ATLAS and CMS collaborations to study data from about 400 million million proton–proton collisions. In 2016, operators will further increase the number of particles circulating in the machine and the squeezing of the beams in the collision regions. The LHC will generate up to one billion collisions per second in the experiments.
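The arithmetic behind the quoted collision rate can be sketched as follows. Note that the revolution frequency and the average number of proton–proton interactions per bunch crossing (the "pile-up") used here are typical Run 2 figures assumed for illustration, not numbers taken from the article:

```python
# Rough arithmetic linking bunch count to collision rate.
# Assumptions (not from the article): LHC revolution frequency
# ~11245 Hz and an average pile-up of ~40 interactions per crossing.
f_rev = 11245          # LHC revolution frequency in Hz
n_bunches = 2244       # proton bunches per beam (2015 figure above)
pileup = 40            # assumed average pp interactions per crossing

crossing_rate = f_rev * n_bunches        # bunch crossings per second
collision_rate = crossing_rate * pileup  # pp collisions per second

print(f"crossing rate:  {crossing_rate / 1e6:.1f} MHz")
print(f"collision rate: {collision_rate / 1e9:.2f} billion per second")
```

With these assumptions the crossing rate comes out at about 25 MHz, and the collision rate at roughly one billion per second, consistent with the figure quoted above.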

The physics run with protons will last six months. The machine will then be set up for a four-week run colliding protons with lead ions.

Short but intense…

After a short, but intense, “intermezzo” as editor of the CERN Courier, I’m stepping down as I head off to new challenges. I would like to thank the CERN Courier Advisory Board and the many contributors who are the backbone of the magazine. I would also like to thank Lisa Gibson, the production editor at IOP Publishing. Most importantly, I would like to say that the magazine would not be what it is without its faithful readership, whose feedback I have appreciated greatly during these months. I’m sure that they will continue to provide their support to the new editor, Matthew Chalmers. Antonella Del Rosso, CERN.

CMS benefits from higher boosts for improved search potential in Run 2

With the increase in the centre-of-mass energy provided by the Run 2 LHC collisions, the production cross-sections of many new-physics processes are predicted to rise dramatically compared with Run 1, in contrast to those of the background processes. However, this increase in the cross-section is not the only way to enhance search sensitivities in Run 2. The higher energy leads to particle production that is more highly boosted. The large boosts result in the collimation of the decay products of the boosted object, which therefore overlap in the detector. For example, a Z boson that decays to a quark and an antiquark will normally produce two jets if it has a low boost. The same decay of a highly boosted Z boson will, in contrast, produce a single massive jet, because the decay products of the quark and antiquark will merge. Using jet-substructure observables, such as the jet mass or the so-called N-subjettiness, the search sensitivity for boosted objects such as top (t) quarks or W, Z and Higgs bosons can be enhanced.
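The collimation described above can be estimated with the standard rule of thumb that the angular separation of the two decay products scales as roughly twice the parent mass over its transverse momentum. A minimal sketch (the pT values are illustrative, not from the article):

```python
def decay_opening_angle(mass_gev, pt_gev):
    """Approximate angular separation (Delta R) of the two decay
    products of a two-body decay: Delta R ~ 2 m / pT, valid when
    the parent's transverse momentum is well above its mass."""
    return 2.0 * mass_gev / pt_gev

M_Z = 91.2  # Z boson mass in GeV

# Low boost: products are well separated -> two resolved jets.
low = decay_opening_angle(M_Z, 200.0)
# High boost: products closer than a typical jet radius (~0.4),
# so they merge into a single massive jet.
high = decay_opening_angle(M_Z, 1000.0)

print(f"Delta R at pT = 200 GeV:  {low:.2f}")
print(f"Delta R at pT = 1000 GeV: {high:.2f}")
```

At 1 TeV the separation falls below the radius of a typical jet, which is why the decay is reconstructed as one massive jet and jet-substructure observables become necessary.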

CMS has retuned and optimised these techniques for Run 2 analyses, implementing the latest ideas and algorithms from the realm of QCD and jet-substructure phenomenology. It has been a collaboration-wide effort to commission these tools for analysis use, relying on experts in jet reconstruction and bottom-quark tagging, and on data-analysis techniques from many groups in CMS. These new algorithms significantly improve the identification efficiency of boosted objects compared with Run 1.

Several Run 2 CMS studies probing the boosted regime have already appeared, using the 2015 data set. While searches for boosted entities are pursued by many CMS analysis groups, the Beyond Two Generations (B2G) group focuses specifically on final states composed of one or more boosted objects. Signal processes of interest in the B2G group include W′ → tb and diboson (VV/VH/HH) resonances, where W′ represents a new heavy W boson, “V” a W or Z boson, and H a Higgs boson. Other B2G studies focus on searches for pair- or singly produced vector-like quarks T and B through the decays T → Wb and B → tW. The search range for these novel particles generally lies between 700 GeV and 4 TeV, yielding many boosted objects when these particles decay.

Another study in the B2G group is the search for a more massive version (Z′) of the elementary Z boson, decaying to a top-quark pair (Z′ → tt). This search is performed in the semileptonic decay channel, for which the final state consists of a boosted top-quark candidate, a lepton, missing transverse momentum, and a tagged bottom-quark jet. Here, the boosted topology affects the reconstruction not only of the top-quark candidate but also of the lepton, whose isolation can be spoiled by the nearby bottom-quark jet. Again, special identification criteria are implemented to maintain a high signal acceptance. This analysis excludes Z′ masses up to 3.4 (4.0) TeV for signal widths equal to 10% (30%) of the Z′ mass, already eclipsing Run 1 limits. A complementary analysis, in the all-hadronic topology, is now under way – an event display showing two boosted top-quark candidates is shown in the figure. The three-subjet topology seen for each boosted top-quark candidate is as expected for such decays.

With these new boosted-object reconstruction techniques now implemented and commissioned for Run 2, CMS eagerly awaits possible discoveries with the 2016 LHC data set.

Theatre of Dreams: LHCb looks to the future

Eighty physicists gathered in Manchester on 6 and 7 April to discuss the future of the LHCb experiment. The LHCb collaboration is currently constructing a significant upgrade to its detector, which will be installed in 2019/2020. The Manchester workshop, entitled Theatre of Dreams: Beyond the LHCb Phase-I Upgrade, explored the longer-term future of the experiment in the second half of the coming decade, and thereafter.

In the mid-2020s, the LHC and the ATLAS and CMS experiments will be upgraded for high-luminosity LHC operation. These activities will necessitate a long shutdown of at least 2.5 years. The Manchester meeting discussed enhancements to the LHCb experiment, dubbed a “Phase-Ib upgrade”, which could be installed at this time. Although relatively modest, these improvements could bring significant physics benefits to the experiment. Possibilities discussed included an addition to the particle-identification system using an innovative Cherenkov light-based time-of-flight system; placing detector chambers along the sides of the LHCb dipole to extend the physics reach by reconstructing lower-momentum particles; and replacing the inner region of the electromagnetic calorimeter with new technology, therefore extending the experiment’s measurement programme with photons, neutral pions and electrons.

Around 2030, the upgraded LHCb experiment that is currently under construction will reach the end of its foreseen physics programme. At this time, a Phase-II upgrade of the experiment may therefore be envisaged. During the meeting, the experimental-physics programme, the heavy-flavour-physics theory perspectives, and the anticipated reach of Belle II and the other LHC experiments were considered. The goal would be to collect an integrated luminosity of at least 300 fb–1, with an instantaneous luminosity a factor 10 above the first upgrade. Promising high-luminosity scenarios for LHCb from the LHC machine perspective were shown that would potentially allow this goal to be reached, and first thoughts were presented on how the experiment might be modified to operate in this new environment.

Many interesting ideas were exchanged at the workshop and these will be followed up in the forthcoming months to identify the requirements and R&D programmes needed to bring these concepts to reality.

The meeting was sponsored by the Science and Technology Facilities Council, the Institute of Physics, the Institute for Particle Physics Phenomenology and the University of Manchester.

ATLAS explores the dark side of matter

Astrophysics and cosmology have established that about 80% of the mass in the universe consists of dark matter. Dark matter and normal matter interact gravitationally, and they may also interact weakly, raising the possibility that collisions at the LHC may produce pairs of dark-matter particles.

With low interaction strength, dark-matter particles would escape the LHC detectors unseen, accompanied by Standard Model particles. These accompanying signatures, such as single jets, photons, or W, Z or Higgs bosons, could either be produced in the interaction with the dark matter or radiated from the colliding partons. One result would be “mono-X” signals, so named because the Standard Model particle, X, would appear alone, without other visible particles balancing its momentum in the transverse plane of the detector.
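The momentum imbalance that defines a mono-X signature can be sketched very simply: the missing transverse momentum is minus the vector sum of the transverse momenta of all visible objects. The event used below is illustrative, not ATLAS data:

```python
import math

def missing_pt(visible):
    """Missing transverse momentum: the magnitude of minus the
    vector sum of the transverse momenta (px, py) of all visible
    objects in the event."""
    px = -sum(p[0] for p in visible)
    py = -sum(p[1] for p in visible)
    return math.hypot(px, py)

# A mono-jet-like event (illustrative numbers): one hard jet and
# nothing else visible.  The large missing pT back-to-back with the
# jet is what a recoiling, undetected dark-matter pair would produce.
print(missing_pt([(350.0, 0.0)]))

# A balanced dijet event, by contrast, has negligible missing pT.
print(missing_pt([(350.0, 0.0), (-350.0, 0.0)]))
```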

During Run 1 of the LHC, ATLAS developed a broad programme of searches for mono-X signals. Now, new results from the ATLAS collaboration in the mono-jet and mono-photon channels are the first of these searches in the proton–proton collision data collected in 2015 after increasing the LHC collision energy to 13 TeV. With only 3.2 fb–1 of collisions, six times fewer than studied in Run 1, these first Run 2 results already achieve comparable sensitivity to beyond-the-Standard-Model phenomena. In each search, the data with large missing transverse momentum are compared with data-driven estimates of Standard Model backgrounds. As an example, the background to the mono-jet search is known to 4–12%, an estimate nearly as precise as that obtained in the final Run 1 analysis. ATLAS has also released preliminary Run 2 results in the mono-Z, mono-W and mono-H channels.

If dark-matter production is observed, ATLAS has the potential to characterise the interaction itself. To produce dark matter in LHC collisions, the interaction must involve the constituent partons within the proton. If the interaction is mediated by s-channel exchange of a new boson, a decay back to the Standard Model partons could also occur.

The ATLAS collaboration has also released new results from the dijet search channel, where new phenomena could modify the smooth dijet invariant-mass distribution. With 3.6 fb–1 of data, the search already surpasses the sensitivity of Run 1 dijet searches for many kinds of signals. The dijet results are interpreted in a simplified model of dark-matter production, in which the dark boson has axial-vector couplings to quarks and to Dirac dark matter.

The results of the mono-photon, mono-jet and dijet searches are shown in figure 1, assuming a version of the axial-vector dark boson whose couplings to dark matter are four times stronger than those to Standard Model quarks. In this scenario, ATLAS dijet results exclude the existence of mediating particles with masses from about 600 GeV to 2 TeV. The mono-jet and mono-photon channels exclude the parameter space at lower mediator and dark-matter masses. For even larger ratios of the dark-matter-to-quark coupling values, dijet constraints quickly weaken, and mono-X searches play a more powerful role.

On the verge of new data-taking in 2016, with the LHC expected to deliver an order of magnitude more luminosity, mono-X and direct mediator searches at ATLAS are set to probe this and other models with unprecedented sensitivity.

ALICE probes small-system dynamics with charm production at the LHC

The first p–Pb data-taking campaign at the LHC was undertaken as a test of the initial state of heavy-ion collisions (CERN Courier March 2014 p17 and CERN Courier October 2013 p17). Surprisingly, it revealed an enhancement of (identified) particle pairs with small relative azimuthal angle (CERN Courier January/February 2013 p9 and CERN Courier March 2013 p6), similar to that observed in Pb–Pb collisions, where such results are associated with collective effects such as elliptic flow. A deeper insight into the dynamics of p–Pb collisions is expected to come from measurements classifying events according to the collision centrality.

Hadrons carrying heavy flavour (charm or beauty quarks) are produced in initial hard scatterings, and their production rates in the absence of nuclear effects can be calculated using perturbative QCD. They are therefore well-calibrated probes that provide information on the nuclear effects at play in the initial and final state of the collision, such as the modification of the parton-distribution functions in nuclei or the energy loss from rescattering among the produced particles, as well as on the dynamics of the heavy-ion collision.

The ALICE collaboration studied the centrality dependence of prompt D-meson production in p–Pb collisions by comparing the yields in various centrality classes with those of binary-scaled pp collisions at the same centre-of-mass energy, via the nuclear modification factor QpPb, evaluated as the ratio of these quantities. QpPb is equal to unity in the absence of nuclear effects. The D-meson QpPb in collisions classified in percentiles of the energy of slow neutrons detected by the ZNA calorimeter at very large rapidity in the Pb-going direction (figure 1, above) is consistent with unity within uncertainties, i.e. with binary-collision scaling of the yield in pp collisions, independent of the geometry of the collision. There is no evidence for a modification of the spectrum shape for pT ≥ 3 GeV/c in the initial or final state of p–Pb collisions.
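The nuclear modification factor described above is just a ratio, and can be sketched in a few lines. All numbers below are illustrative placeholders, not ALICE measurements:

```python
def q_ppb(yield_ppb, yield_pp, n_coll):
    """Nuclear modification factor: the p-Pb yield in a centrality
    class divided by the pp yield scaled by the average number of
    binary nucleon-nucleon collisions <Ncoll> for that class.
    Q_pPb = 1 means production simply scales with the number of
    binary collisions, i.e. no nuclear effects."""
    return yield_ppb / (n_coll * yield_pp)

# Illustrative: a centrality class with <Ncoll> = 12 in which the
# measured p-Pb yield is 12x the pp yield gives Q_pPb = 1, the
# binary-scaling expectation.
print(q_ppb(yield_ppb=12.0, yield_pp=1.0, n_coll=12))

# A suppressed yield (e.g. from final-state energy loss) would
# instead give Q_pPb below unity.
print(q_ppb(yield_ppb=6.0, yield_pp=1.0, n_coll=12))
```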

The D-meson yields in p–Pb collisions were also studied as a function of the relative charged-particle multiplicity at mid-rapidity and at large rapidity (Pb-going direction), by evaluating the yields in multiplicity intervals with respect to the multiplicity-integrated ones, Y/〈Y〉. Whereas the QpPb measurement examined particle production in samples of 20% of the analysed events, this observable explores events from low to extremely high multiplicities, the latter corresponding to only 5% (1%) of the analysed events in p–Pb (pp) collisions. These measurements are sensitive to the contribution of multiple-parton interactions in pp and p–Pb collisions. The D-meson yield (figure 1, right) increases with a faster-than-linear trend as a function of the charged-particle multiplicity at mid-rapidity. This behaviour is similar to that of the measurements in pp collisions at 7 TeV. By contrast, the increase of the D-meson yields as a function of charged-particle multiplicity in the Pb-going direction is consistent with linear growth. EPOS3 calculations describe the p–Pb results within uncertainties. The results at high multiplicity are better reproduced by the calculation that includes a viscous hydrodynamical evolution of the collision.

Charmed-meson measurements in p–Pb collisions have revealed intriguing features. The ALICE collaboration is looking forward to the higher-statistics p–Pb data sample to be collected by the end of 2016, which will allow for higher-precision measurements, bring information on the initial state of heavy-ion collisions and provide further constraints to small-system dynamics.
