
Who Cares About Particle Physics? Making Sense of the Higgs Boson, the Large Hadron Collider and CERN

By Pauline Gagnon
Oxford

Also available at the CERN bookshop

One of my struggles when I teach at my university, or when I talk to friends about science and technology, is finding inspiring analogies. Without vivid images and metaphors it is extremely hard, or even impossible, to explain the intricacies of particle physics to a non-expert audience. Even for physicists, it is sometimes hard to interpret equations without such aids. Pauline Gagnon has mastered the art of explaining particle physics to the general public, as she shows in this book, which is richly illustrated without sacrificing rigour. She was a senior research scientist at CERN, working with the ATLAS collaboration, until her retirement this year, although she remains very active in outreach. Undoubtedly, she knows about particle physics and – more importantly – about its daily practice.

The book is organised into four related areas: particle physics (chapters 1 to 6 and chapter 10), technology spin-offs from particle physics (chapter 7), management in big science (chapter 8) and social issues in the laboratory (chapter 9, on diversity). While the first part was expected, I was positively surprised by the other three. Technology spin-offs are extremely important for society, which in the end is what pays for research. Particle physics is not oriented towards economic productivity but is driven by a mixture of creativity, perseverance and rigour towards the discovery of how the universe works. On their way to acquiring knowledge, scientists create new tools that can improve our living standards. This book provides a short summary of the technological impact of particle physics on our everyday lives and of CERN's efforts to increase the rate of technology spin-offs through knowledge transfer and workforce training.

Big-science management, especially in the context of a cultural melting pot like CERN, could be very chaotic if it were driven by conventional corporate procedures. The author is clear about this highly non-trivial point, spelling out the benefits of the collaborative model used at CERN in terms of productivity and the realisation of ambitious aims. This organisational model – which she calls the “picnic” model, since each participating institute freely agrees to contribute something – is worth spreading in our modern and interconnected commercial environment, particularly because there are striking similarities with big science when it comes to products and services that are rich in technology and know-how.

As CERN visitors learn, cultural diversity permeates the Organization, and by extension particle physics. Just by taking a seat in any of the CERN restaurants, they can understand that particle physics is a collective and international effort. But they can also easily verify that there is an overwhelming gender imbalance in favour of men. The author, as a woman, addresses the topic of the gender gap in physics and specifically at CERN. She explains why diversity issues, in their overall complexity (not restricted to gender), are very important: our world desperately needs real examples of peaceful and fruitful co-operation between different people with common goals, without gender or cultural barriers.

As for the main part of the book, which focuses on contemporary particle physics, chapters 1, 2, 3 and 6 are undoubtedly very well written, in the overall spirit of explaining things simply yet with full scientific thoroughness. But I was really impressed by chapter 4, on the experimental discovery of the Higgs boson, and chapter 5, on dark matter, mainly because of the first-hand knowledge they reveal. When you read Gagnon’s words you can feel the emotions of the protagonists during that tipping point in modern particle physics. Chapter 5 is an excursion into the dark universe, with wonderful explanations (such as the imaginative comparison between the Bullet Cluster and an American football match). The science in this chapter is up to date and combines particle physics and observational cosmology without apparent effort.

I recommend this book to the general public interested in particle physics, but also to particle physicists who want to take a refreshing and general look at the field, even if only to find images with which to explain physics to family and friends. Because, in the end, everybody cares about particle physics – if you can spark their interest.

General Relativity: A First Examination

By Marvin Blecher
World Scientific


This book provides a concise treatment of general relativity (GR), ideal for a semester course for undergraduate students or first-year graduate students in physics or engineering. After retiring from a career as an experimentalist in nuclear and particle physics, the author decided to teach an introductory course in GR at Virginia Tech, US. Many books are available on this topic, but they normally go into great detail and include a lot of material that cannot be covered in the short time of a semester. This new text by Blecher aims to fill this gap in the literature and provide just the essential concepts of GR.

The author starts with a review of special relativity and of the basic mathematical tools, and then moves on to explain how gravity affects time. This is discussed first for weak gravity via the conservation of energy, using a Newtonian formulation with relativistic mass. Later in the book (chapter 5), it is rigorously treated in a fully general-relativistic framework. The Schwarzschild metric is also obtained.

In the following sections, GR is discussed in the context of the solar system (chapter 6) and of black holes (chapter 7). In the latter, an appealing example based on the movie Interstellar (Christopher Nolan) is used to discuss why a large gravitational time dilation is possible near a spinning – but not a static – black hole.

Chapter 8 focuses on gravitational waves. The first direct detection of these waves, produced by two black holes that merged into a single one, was announced in February this year, when the book was already going to print; nevertheless, the author added a discussion of the discovery to the text. The theory of gravitational radiation from a binary neutron-star system, applied to the binary pulsar discovered by R Hulse and J H Taylor, is also treated – and for elliptical orbits, rather than the circular ones generally assumed for simplicity in textbooks.

Finally, a chapter is dedicated to cosmology, in which the results of numerical integrations, using the experimental data available for all the energy densities, are discussed.

Electron Lenses for Super-Colliders

By Vladimir D Shiltsev
Springer

Also available at the CERN bookshop

In this book, written in an energetic style, Vladimir Shiltsev presents a novel device for accelerators and storage rings. These machines employ magnets to bend and focus particle trajectories, and magnets always create forces that increase monotonically with the particle displacement in the magnet. But a particle in a beam also experiences forces from the beam itself and from the other beam in a collider – forces that do not increase monotonically with amplitude. Therefore, magnets are not well suited to correct for beam-generated forces. However, another beam may do the job, and this is most easily realised with a low-energy electron beam stabilised in a solenoidal magnetic field – thus an electron lens is created. The lens offers options for generating amplitude-dependent forces that cannot be realised with magnets, and such forces can also be made time-dependent. The electron lens is in effect a nonlinear lens with a rather flexible profile that can either be static or change with every passing bunch.
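To make the contrast with magnetic elements concrete, the short sketch below evaluates the radial shape of the transverse kick produced by a round Gaussian electron beam, which is proportional to (1 − exp(−r²/2σ²))/r. This is a generic illustration in arbitrary units – the absolute scale, set by the electron current, the beam energies and the lens length, is deliberately omitted, and all numbers are assumptions rather than parameters taken from the book.

```python
import numpy as np

def gaussian_lens_kick(r, sigma=1.0):
    """Radial shape of the transverse kick from a round Gaussian
    electron beam, in arbitrary units (absolute scale omitted)."""
    r = np.asarray(r, dtype=float)
    safe_r = np.where(r > 0, r, 1.0)          # avoid division by zero at r = 0
    kick = (1.0 - np.exp(-safe_r**2 / (2.0 * sigma**2))) / safe_r
    return np.where(r > 0, kick, 0.0)         # the kick vanishes on axis

# Radius in units of the electron-beam sigma
r = np.linspace(0.0, 6.0, 61)
kick = gaussian_lens_kick(r)

# Unlike a magnet, the force is linear only near the axis, peaks at about
# 1.6 sigma and then falls off roughly as 1/r at large amplitudes.
print(f"kick is maximal at r = {r[np.argmax(kick)]:.1f} sigma")
for ri in (0.5, 1.6, 4.0):
    print(f"r = {ri:.1f} sigma  ->  kick = {float(gaussian_lens_kick(ri)):.3f} (arb. units)")
```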

D Gabor already proposed the use of electron-generated space-charge forces in 1947 (Nature 160 89–90), and E Tsyganov suggested the use of electron lenses for the SSC (SSCL-Preprint-519 1993). But it was Shiltsev who was the driving force behind the first implementation of electron lenses in a high-energy machine. Two such lenses were installed in the Tevatron in 2001 and 2004, where they routinely removed beam not captured by the radiofrequency (RF) system, and were used for numerous studies of long-range and head-on beam–beam compensation and collimation. In 2014, two electron lenses were also installed in the Relativistic Heavy Ion Collider (RHIC) for head-on beam–beam compensation, and their use for the LHC collimation system is under consideration.

Shiltsev’s experience and comprehensive knowledge of the topic make him perhaps the best possible author for an introductory text. The book is divided into five chapters: an introduction, the major pieces of technology, application for beam–beam compensation, halo collimation, and other applications. It draws heavily on published material, and therefore does not have the feel of a textbook. While a consistent notation for symbols is used throughout the book, the figures are taken from other publications, and the units are mostly but not entirely in the International System (SI).

At the heart of the book are descriptions of the major technical components of a working electron lens, and the two main applications to date: beam–beam compensation and halo collimation. Long-range and head-on beam–beam compensation as well as collimation applications are described exhaustively. It is somewhat regrettable that the latest results from RHIC were published too late to be included in the volume (e.g. W Fischer et al. 2015 Phys. Rev. Lett. 115 264801; P Thieberger et al. 2016 Phys. Rev. Accel. Beams 19 041002). The book names the hollow electron lens a collimator, but it is probably better to describe it as a diffusion enhancer (as suggested on p138), because its strength is at least an order of magnitude smaller than that of a solid-state collimator, and a hollow lens will not replace either a primary or a secondary collimator jaw.

The last chapter ventures into more speculative territory, with applications that are not all in colliders. Most prominently, space-charge compensation is discussed, largely in terms of tune spread rather than resonance driving terms; the latter are mentioned only in the context of multiple electron lenses (up to 24 for a simulated Fermilab Booster example). For this and the other applications mentioned, it is clear that much work remains before they could become reality.

Overall, the book is an excellent entry point for anyone who would like to become familiar with the concepts, technology and possible application of electron lenses. It is also a useful reference for many formulas, allowing for fast estimates, and for the published work on this topic – up to the date of publication.

Nanoscale Silicon Devices

By Shunri Oda and David K Ferry (eds)
CRC Press

The CRC Handbook of Chemistry and Physics was first published in 1913 and is a well-known text, at least to older physicists from the time before computers and instant, web-based information. To find relevant data, one had to be familiar with the classification of subjects and tables in the handbook’s 2500 or so pages, but virtually everything was covered. Over the years, CRC Press – while continuing to publish this handbook, now for more than 100 years – has grown into a large publisher that produces hundreds of titles every year in engineering, physics and other fields.

Its recent publication, Nanoscale Silicon Devices, describes a variety of investigations that are under way to develop improved and smaller electronic structures for computing, signal processing in general, or memory. Now that transistors approach the dimension of a few nanometres, less than 100 atoms in a row, methods to account for quantum effects have to be applied, as shown in the first chapter. The second chapter discusses the need to change the shape of transistors as they become smaller. The controlling gate has to extend as much as possible around the conduction channel material and, eventually, silicon may be replaced in the channel by a different semiconductor material.

Another effect due to the small size, as explained in chapter 3, is the increase of variability between devices of identical design. Single-electron devices and the use of electron spin are discussed in several of the following chapters. A major issue today, as highlighted in the book, is the reduction of power consumption for circuits with a large number of transistors, where the leakage current in the OFF state becomes dominant. In chapter 7, tunnel FET devices are discussed as a way to solve this problem. In chapter 6, a different approach is shown, using nanoelectromechanical ON/OFF switches integrated in the circuit.

This book is not a typical textbook, but rather a collection of 11 articles written by 20 scientists, including the editors Oda and Ferry. Each article centres on the research of its author(s) in a specific area of semiconductor-device development. One of the consequences of this structure is the abundance of internal references. Reading the book does not quite provide a firm idea about the future of electronics, but it could convince readers that much more will be possible beyond the current state of the art. One also has to keep in mind that the chip industry tends to keep useful findings under wraps and has little incentive to publish its research before products are on the shelves.

The book is a good buy if you want to get a feel for the work going on at the interface between pico- and nanoelectronics. For the use of electronics in scientific research, it is essential to understand how devices are constructed and what researchers might be able to gain from them, especially when working in unusual environments such as a vacuum, space, the human body or a particle collider.

Ukraine becomes associate Member State of CERN

On 5 October, Ukraine became an associate Member State of CERN, following official notification to CERN that Ukraine’s parliament had ratified an agreement signed with CERN in October 2013. “Our hard and consistent work over the past two decades has been crowned today by a remarkable event – granting Ukraine the status of CERN associate member,” says Yurii Klymenko, Ukraine’s ambassador to the United Nations in Geneva. “It is an extremely important step on the way of Ukraine’s European integration.”

Ukraine has been a long-time contributor to the ALICE, CMS and LHCb experiments at the LHC and to R&D in accelerator technology. Ukraine also operates a Tier-2 computing centre in the Worldwide LHC Computing Grid.

Ukraine and CERN first signed a co-operation agreement in 1993, followed by a joint declaration in 2011, but Ukraine’s relationship with CERN dates back much further through the Joint Institute for Nuclear Research (JINR) in Dubna, Russia, of which Ukraine is a member. CERN–JINR co-operation in the field of high-energy accelerators started in the early 1960s, and ever since, the two institutions have formed a bridge between East and West that has made important contributions to the development of global, peaceful scientific co-operation.

Associate membership will open a new era of co-operation that will strengthen the long-term partnership between CERN and the Ukrainian scientific community. It will allow Ukraine to participate in the governance of CERN, in addition to allowing Ukrainian scientists to become CERN staff and to participate in CERN’s training and career-development programmes. Finally, it will allow Ukrainian industry to bid for CERN contracts, thus opening up opportunities for industrial collaboration in areas of advanced technology.

“It is a great pleasure to warmly welcome Ukraine into the CERN family,” says CERN Director-General Fabiola Gianotti.

CLOUD experiment sharpens climate predictions


Future global climate projections have been put on more solid empirical ground, thanks to new measurements of the production rates of atmospheric aerosol particles by CERN’s Cosmics Leaving OUtdoor Droplets (CLOUD) experiment.

According to the Intergovernmental Panel on Climate Change, the Earth’s mean temperature is predicted to rise by between 1.5 and 4.5 °C for a doubling of carbon dioxide in the atmosphere, which is expected by around 2050. One of the main reasons for this large uncertainty, which makes it difficult for society to know how best to act against climate change, is a poor understanding of aerosol particles in the atmosphere and their effects on clouds.

To date, all global climate models use relatively simple parameterisations for aerosol production that are not based on experimental data, in contrast to the highly detailed modelling of atmospheric chemistry and greenhouse gases. Although the models agree with current observations, predictions start to diverge when the models are wound forward to project the future climate.

Now, data collected by CLOUD have been used to build a model of aerosol production based solely on laboratory measurements. The new CLOUD study establishes the main processes responsible for new particle formation throughout the troposphere, which is the source of around half of all cloud seed particles. It could therefore reduce the variation in projected global temperatures as calculated by complex global-circulation models.

“This marks a big step forward in the reliability and realism of how models describe aerosols and clouds,” says CLOUD spokesperson Jasper Kirkby. “It’s addressing the largest source of uncertainty in current climate models and building it on a firm experimental foundation of the fundamental processes.”

Aerosol particles form when certain trace vapours in the atmosphere cluster together, and grow via condensation to a sufficient size that they can seed cloud droplets. Higher concentrations of aerosol particles make clouds more reflective and long-lived, thereby cooling the climate, and it is thought that the increased concentration of aerosols caused by air pollution since the start of the industrial period has offset a large part of the warming caused by greenhouse-gas emissions. Until now, however, the poor understanding of how aerosols form has hampered efforts to estimate the total forcing of climate from human activities.

Thanks to CLOUD’s unique controlled environment, scientists can now understand precisely how new particles form in the atmosphere and grow to seed cloud droplets. In the latest work, published in Science, researchers built a global model of aerosol formation using extensive laboratory-measured nucleation rates involving sulphuric acid, ammonia, ions and organic compounds. Although sulphuric acid has long been known to be important for nucleation, the results show for the first time that observed concentrations of particles throughout the atmosphere can be explained only if additional molecules – organic compounds or ammonia – participate in nucleation. The results also show that ionisation of the atmosphere by cosmic rays accounts for nearly one-third of all particles formed, although small changes in cosmic rays over the solar cycle do not affect aerosols enough to influence today’s polluted climate significantly.

Early this year, CLOUD reported in Nature the discovery that aerosol particles can form in the atmosphere purely from organic vapours produced naturally by the biosphere (CERN Courier July/August 2016 p11). In a separate modelling paper published recently in PNAS, CLOUD shows that such pure biogenic nucleation was the dominant source of particles in the pristine pre-industrial atmosphere. By raising the baseline aerosol state, this process significantly reduces the estimated aerosol radiative forcing from anthropogenic activities and, in turn, reduces modelled climate sensitivities.

“This is a huge step for atmospheric science,” says lead author Ken Carslaw of the University of Leeds, UK. “It’s vital that we build climate models on experimental measurements and sound understanding, otherwise we cannot rely on them to predict the future. Eventually, when these processes get implemented in climate models, we will have much more confidence in aerosol effects on climate. Already, results from CLOUD suggest that estimates of high climate sensitivity may have to be revised downwards.”

n_TOF deepens search for missing cosmic lithium

An experiment at CERN’s neutron time-of-flight (n_TOF) facility has filled in a missing piece of the cosmological-lithium puzzle, according to a report published in Physical Review Letters. Along with a few other light elements such as hydrogen and helium, much of the lithium in the universe is thought to have been produced in the very early universe during a process called Big-Bang nucleosynthesis (BBN). For hydrogen and helium, BBN theory is in excellent agreement with observations. But the amount of lithium (⁷Li) observed is about three times smaller than predicted – a discrepancy known as the cosmological-lithium problem.

The n_TOF collaboration has now made a precise measurement of one of the key processes involved – ⁷Be(n,α)⁴He – in an attempt to solve the mystery. The production and destruction of the unstable ⁷Be isotope regulates the abundance of cosmological lithium, but estimates of the probability of ⁷Be destruction via this channel have relied on a single measurement, made at thermal energies at the Ispra reactor in Italy in 1963. A possible explanation for the higher theoretical value could therefore be an underestimation of the destruction of primordial ⁷Be, in particular in reactions with neutrons.

Now, n_TOF has measured the cross-section of the ⁷Be(n,α)⁴He reaction over a wide range of neutron energies with a high level of accuracy. This was possible thanks to the extremely high luminosity of the neutron beam in the recently constructed experimental area (EAR2) at the n_TOF facility.

The results indicate that, at energies relevant for BBN, the probability for this reaction is 10 times smaller than that used in theoretical calculations. The destruction rate of ⁷Be is therefore even smaller than previously supposed, ruling out this channel as the source of the missing lithium and deepening the mystery of the cosmological-lithium problem.

ATLAS spots light-by-light scattering

The γγ → γγ process proceeds at lowest order via virtual one-loop box diagrams involving fermions, leading to a severe suppression of the cross-section and thus making it very challenging to observe experimentally. To date, light-by-light scattering via an electron–positron loop has been tested precisely, but indirectly, in measurements of the anomalous magnetic moments of the electron and muon. Closely related observations are Delbrück scattering and photon splitting, both of which involve the scattering of a photon from the nuclear Coulomb field, and the fusion of photons into pseudoscalar mesons observed at electron–positron colliders. The direct observation of light-by-light scattering has, however, remained elusive.

It has recently been proposed that light-by-light scattering can be studied using photons produced in relativistic heavy-ion collisions at large impact parameters. Since the electric-field strength of relativistic ions scales with the square of their charge, such collisions lead to huge electromagnetic field strengths relative to proton–proton collisions. The phenomenon manifests itself as beams of nearly real photons, allowing the process γγ → γγ to occur directly, while the nuclei themselves generally stay intact. Light-by-light scattering is thus distinguished by the observation of two low-energy photons, back-to-back in azimuth, with no additional activity measured in the detector. Possible backgrounds can arise from misidentified electrons from the QED process γγ → e⁺e⁻, as well as from the central exclusive production of two photons from the fusion of two gluons (gg → γγ).

The ATLAS experiment has conducted a search for light-by-light scattering in 480 μb⁻¹ of lead–lead data recorded at a nucleon–nucleon centre-of-mass energy of 5.02 TeV during the 2015 heavy-ion run. While almost four billion strongly interacting events were provided by the LHC, only 13 diphoton candidates were observed. From the expectation of 7.3 signal events and 2.6 background events, a significance of 4.4σ was obtained for observing one of the most fundamental predictions of QED. With the additional integrated luminosity expected in upcoming runs, further study of the γγ → γγ process will allow tests of extensions of the Standard Model in which new particles can participate via the loop diagrams, providing an additional window into new physics at the LHC.
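As a rough cross-check of how such a significance arises from these numbers, the sketch below applies the standard asymptotic formula for a simple Poisson counting experiment, Z = √(2[n ln(n/b) − (n − b)]). This ignores the systematic uncertainties and the full statistical treatment used by ATLAS, so it only approximately reproduces the quoted 4.4σ; the formula and its use here are an illustrative assumption, not the collaboration's procedure.

```python
import math

def counting_significance(n_obs: float, b: float) -> float:
    """Asymptotic significance of observing n_obs events over an
    expected background b, with no systematic uncertainties."""
    if n_obs <= b:
        return 0.0
    return math.sqrt(2.0 * (n_obs * math.log(n_obs / b) - (n_obs - b)))

# Event counts quoted in the text
n_observed = 13        # diphoton candidates observed
background = 2.6       # expected background events
signal = 7.3           # expected signal events

print(f"observed significance ~ {counting_significance(n_observed, background):.1f} sigma")
# 'Expected' significance, evaluated on the Asimov-like count n = s + b
print(f"expected significance ~ {counting_significance(signal + background, background):.1f} sigma")
```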

Studies of electroweak-boson production by CMS

When events containing electroweak bosons do arise, however, the non-Abelian SU(2) nature of these bosons – which are generally denoted V – allows them to interact directly with each other. Of particular interest are the direct interactions of three electroweak gauge bosons, whose rate depends on the corresponding triple-gauge-boson-coupling (TGC) strength. Measurements of the rates of single V and double VV (diboson) production, and of the strength of TGC interactions, represent fundamental tests of the electroweak sector of the Standard Model (SM).

The inclusive production rates of single W or Z bosons at the LHC have been calculated in the SM to an accuracy of about 3%, while the ratio of the W-to-Z-boson production rates is predicted to even greater precision because certain uncertainties cancel. The CMS collaboration has recently measured the W and Z boson inclusive production rates and finds their ratio to be 10.46±0.17, in agreement with the SM prediction at the per cent level. CMS has also measured the ZZ, WZ and WW diboson production rates, finding agreement with the SM predictions within a precision of about 14, 12 and 9%, respectively. These results are based on leptonic-decay modes, specifically decays of a W boson to an electron or muon and the associated neutrino, and of a Z boson to an electron–positron pair or to a muon–antimuon pair.

Leptonic decays provide an unambiguous experimental signature for a W or Z boson but suffer in statistical precision because of relatively small branching fractions. A complementary strategy is to use hadronic decay modes, namely decays of a W or Z boson to a quark–antiquark pair, which benefit from much larger branching fractions but are experimentally more challenging. Each quark or antiquark appears as a collimated stream of particles, or jet, in the detector. Thus the experimental signature for hadronic decays is the presence of two jets. Discriminating between the hadronic decay of a W boson, with a mass of 80.4 GeV, and that of a Z boson (91.2 GeV) is difficult on an event-by-event basis due to the finite jet-energy resolution. Nonetheless, the separation can be performed on a statistical basis for highly energetic jets (see figure).
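The statistical separation described above can be illustrated with a small toy study. The sketch below assumes purely Gaussian jet-mass responses with an 8 GeV resolution – illustrative numbers only, not CMS performance figures – and shows that while individual jets are frequently misassigned, a template fit to the full jet-mass distribution recovers the W and Z yields of the sample.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy model: true boson masses and an assumed 8 GeV jet-mass resolution
M_W, M_Z, SIGMA = 80.4, 91.2, 8.0
N_W, N_Z = 6000, 4000                     # true yields in the toy sample

masses = np.concatenate([
    rng.normal(M_W, SIGMA, N_W),          # reconstructed W -> qq' jet masses
    rng.normal(M_Z, SIGMA, N_Z),          # reconstructed Z -> qq jet masses
])

# Event-by-event assignment to the closer boson mass: many Z jets leak into W
assigned_to_W = np.abs(masses - M_W) < np.abs(masses - M_Z)
print(f"fraction of true Z jets assigned to W: {assigned_to_W[N_W:].mean():.2f}")

# Statistical separation: least-squares fit of two Gaussian templates
edges = np.linspace(40.0, 130.0, 46)
centres = 0.5 * (edges[1:] + edges[:-1])
bin_width = edges[1] - edges[0]
data, _ = np.histogram(masses, edges)

def template(mean):
    """Expected bin contents per unit yield for a Gaussian jet-mass peak."""
    return bin_width * np.exp(-0.5 * ((centres - mean) / SIGMA) ** 2) \
           / (SIGMA * np.sqrt(2.0 * np.pi))

T = np.column_stack([template(M_W), template(M_Z)])
fitted, *_ = np.linalg.lstsq(T, data, rcond=None)
print(f"fitted yields: N_W ~ {fitted[0]:.0f}, N_Z ~ {fitted[1]:.0f} (true: {N_W}, {N_Z})")
```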

CMS has selected WV diboson events in which a W boson decays leptonically and a highly energetic V boson decays hadronically. Because of the high V boson energy, the two jets from the V boson decay are partially merged and the WV system can have a very large mass. As a result, the analysis probes a regime where physics beyond the SM might be present. Searches are performed as a function of the mass of the WV system and are used to set limits on anomalous TGC interactions. Results obtained so far have established the viability of the techniques, but much greater sensitivity to the presence of anomalous TGC interactions is expected with the larger data samples that will be analysed in the future.

LHCb searches for strong CP violation

CP violation, which relates to an asymmetry between matter and antimatter, is a well-established feature of the weak interaction that mediates decays of strange, charm and beauty particles. It arises in the Standard Model from a single complex phase in the Cabibbo–Kobayashi–Maskawa matrix that relates the mass and flavour eigenstates of the quarks. However, the strength of the effect is well below what is needed to explain the dominance of matter over antimatter in the present universe. The LHCb collaboration has now looked for evidence of CP violation in the strong interaction, which binds quarks and gluons within hadrons.

In principle, the theory of the strong interactions, quantum chromodynamics (QCD), allows for a CP-violating component, but measurements of the electric dipole moment of the neutron have shown that any effect in QCD must be very small indeed. This apparent absence of CP violation in QCD is known as “the strong CP problem”.

One way to look for evidence of CP violation in strong interactions is to search for η and η′(958) meson decays to pairs of charged pions, η(′) → π⁺π⁻, both of which would violate CP symmetry. The LHCb collaboration has recently used its copious production of charm mesons to perform such a search, establishing a new method to isolate potential samples of η and η′ decays into two pions. The D⁺ and Ds⁺ mesons (and their charge conjugates) have well-measured decay modes to ηπ⁺ and η′π⁺, as well as to π⁺π⁺π⁻. Therefore, any η or η′ decays to π⁺π⁻ would potentially show up as narrow peaks in the π⁺π⁻ mass spectra from D⁺ and Ds⁺ decays to π⁺π⁺π⁻.

The LHCb team used a sample of about 25 million each of D⁺ and Ds⁺ meson decays to π⁺π⁺π⁻, collected during Run 1 and the first year of Run 2 of the LHC (figure 1). The analysis used a boosted decision tree to suppress backgrounds, with fits to the π⁺π⁻ mass spectra from the D⁺ and Ds⁺ decays used to set limits on the amount of η and η′ that could be present. No evidence for the CP-violating decays was found and upper limits were set on the branching fractions, at 90% confidence level, of less than 1.6 × 10⁻⁵ for η → π⁺π⁻ and 1.8 × 10⁻⁵ for η′(958) → π⁺π⁻. The result for the η meson is comparable with the current world best, while that for the η′ is a factor of three below the previous best, further constraining the possibility of a new CP-violating mechanism in strong interactions.
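As a generic illustration of this kind of peak search (not the LHCb analysis itself, which uses a boosted decision tree, a full fit model and limits expressed as branching fractions), the sketch below fits a toy π⁺π⁻ mass spectrum containing only smooth background with a Gaussian at the η mass, and turns the fitted yield into a simple 90% confidence-level upper limit. The resolution, sample size and limit recipe are all assumed for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(7)

M_ETA, RESOLUTION = 547.9, 5.0            # MeV; resolution is an assumed value
edges = np.linspace(450.0, 650.0, 101)
centres = 0.5 * (edges[1:] + edges[:-1])
bin_width = edges[1] - edges[0]

# Toy spectrum: smooth combinatorial background only, no eta signal injected
data, _ = np.histogram(rng.uniform(450.0, 650.0, 50000), edges)

def model(m, n_sig, b0, b1):
    """Gaussian eta peak of yield n_sig on top of a linear background."""
    peak = n_sig * bin_width * np.exp(-0.5 * ((m - M_ETA) / RESOLUTION) ** 2) \
           / (RESOLUTION * np.sqrt(2.0 * np.pi))
    return peak + b0 + b1 * (m - M_ETA)

popt, pcov = curve_fit(model, centres, data,
                       p0=[0.0, data.mean(), 0.0],
                       sigma=np.sqrt(np.maximum(data, 1)), absolute_sigma=True)
n_sig, n_err = popt[0], np.sqrt(pcov[0, 0])

# Simple one-sided Gaussian 90% CL upper limit on the fitted yield
upper_limit = max(n_sig, 0.0) + 1.28 * n_err
print(f"fitted eta yield: {n_sig:.0f} +/- {n_err:.0f}  ->  < {upper_limit:.0f} at 90% CL")
```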
