The many lives of supergravity

The early 1970s was a pivotal period in the history of particle physics. Following the discovery of asymptotic freedom and the Brout–Englert–Higgs mechanism a few years earlier, it was the time when the Standard Model (SM) of electroweak and strong interactions came into being. After decades of empirical verification, the theory received a final spectacular confirmation with the discovery of the Higgs boson at CERN in 2012, and its formulation has also been recognised by Nobel prizes awarded to theoretical physicists in 1979, 1999, 2004 and 2013.

It was clear from the start, however, that the SM, a spontaneously broken gauge theory, had two major shortcomings. First, it is not a truly unified theory because the gluons of the strong (colour) force and the photons of electromagnetism do not emerge from a common symmetry. Second, it leaves aside gravity, the other fundamental force of nature, which is based on the gauge principle of general co-ordinate transformations and is described by general relativity (GR).

In the early 1970s, grand unified theories (GUTs), based on larger gauge symmetries that include the SM’s “SU(3) × SU(2) × U(1)” structure, did unify colour and charge – thereby uniting the strong and electroweak interactions. However, they relied on a huge new energy scale (~10^16 GeV), just a few orders of magnitude below the Planck scale of gravity (~10^19 GeV) and far above the electroweak Fermi scale (~10^2 GeV), and on new particles carrying both colour and electroweak charges. As a result, GUTs made the stunning prediction that the proton might decay at detectable rates, which was eventually excluded by underground experiments, and their two widely separated cut-off scales introduced a “hierarchy problem” that called for some kind of stabilisation mechanism.
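The hierarchy at stake here is simple order-of-magnitude arithmetic; a minimal sketch, using only the rough scales quoted above:

```python
# Rough order-of-magnitude energy scales quoted in the text (GeV).
M_GUT = 1e16      # grand-unification scale
M_PLANCK = 1e19   # Planck scale of gravity
M_FERMI = 1e2     # electroweak (Fermi) scale

# The GUT scale sits only a few orders of magnitude below the Planck
# scale, but an enormous fourteen orders above the Fermi scale - the
# gap that the "hierarchy problem" asks us to stabilise.
print(f"GUT-to-Planck gap:      {M_PLANCK / M_GUT:.0e}")   # ~1e3
print(f"Fermi-to-GUT hierarchy: {M_GUT / M_FERMI:.0e}")    # ~1e14
```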

A possible solution came from a parallel but unrelated development. In 1973, Julius Wess and Bruno Zumino unveiled a new symmetry of 4D quantum field theory: supersymmetry, which interchanges bosons and fermions and, as would be better appreciated later, can also conspire to stabilise scale hierarchies. Supersymmetry was inspired by “dual resonance models”, an early version of string theory pioneered by Gabriele Veneziano and extended by André Neveu, Pierre Ramond and John Schwarz. Earlier work done in France by Jean-Loup Gervais and Benji Sakita, and in the Soviet Union by Yuri Golfand and Evgeny Likhtman, and by Dmitry Volkov and Vladimir Akulov, had anticipated some of supersymmetry’s salient features.

An exact supersymmetry would require the existence of superpartners in the SM, but it would also imply mass degeneracies between the known particles and their superpartners. This option has been ruled out over the years by several experiments at CERN, Fermilab and elsewhere, and therefore supersymmetry can be at best broken, with superpartner masses that seem to lie beyond the TeV energy region currently explored at the LHC. Moreover, a spontaneous breaking of supersymmetry would imply the existence of additional massless (“Goldstone”) fermions.

Supergravity, the supersymmetric extension of GR, came to the rescue in this respect. It predicted the existence of a new particle of spin 3/2 called the gravitino that would receive a mass in the broken phase. In this fashion, one or more gravitinos could be potentially very heavy, while the additional massless fermions would be “eaten” – much as it occurs for part of the Higgs doublet in the SM.

Seeking unification

Supergravity, especially when formulated in higher dimensions, was the first concrete realisation of Einstein’s dream of a unified field theory (see diagram opposite). Although the unification of gravity with other forces was the central theme for Einstein during the last part of his life, the beautiful equations of GR were for him a source of frustration. For 30 years he was disturbed by what he considered a deep flaw: one side of the equations contained the curvature of space–time, which he regarded as “marble”, while the other contained the matter energy, which he compared to “wood”. In retrospect, Einstein wanted to turn “wood” into “marble”, but after special and general relativity he failed in this third great endeavour.

GR has, however, proved to be an inestimable source of deep insights for unification. A close scrutiny of general co-ordinate transformations led Theodor Kaluza and Oskar Klein (KK), in the 1920s and 1930s, to link electromagnetism and its Maxwell potentials to internal circle rotations, what we now call a U(1) gauge symmetry. In retrospect, more general rotations could also have led to the Yang–Mills theory, which is a pillar of the SM. According to KK, Maxwell’s theory could be a mere byproduct of gravity, provided the universe contains one microscopic extra dimension beyond time and the three observable spatial ones. In this 5D picture, the photon arises from a portion of the metric tensor – the “marble” in GR – with one “leg” along space–time and the other along the extra dimensions.
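The KK construction sketched above can be summarised by the standard 5D metric ansatz (a schematic form: x denotes the four space–time coordinates, y the circle coordinate, A_μ the emergent Maxwell potential and φ a scalar “radion”; precise factors depend on conventions):

```latex
ds^2 \;=\; g_{\mu\nu}(x)\,dx^\mu dx^\nu \;+\; \phi^2(x)\,\bigl(dy + A_\mu(x)\,dx^\mu\bigr)^2
```

The mixed metric components with one “leg” along the circle transform as a U(1) gauge field under the shift y → y + λ(x), which is how Maxwell’s theory emerges from pure 5D gravity.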

Supergravity follows in this tradition: the gravitino is the gauge field of supersymmetry, just like the photon is the gauge field of internal circle rotations. If one or more local supersymmetries (whose number will be denoted by N) accompany general co-ordinate transformations, they grant the consistency of gravitino interactions. In a subclass of “pure” supergravity models, supersymmetry also allows one to connect “marble” and “wood” and therefore goes well beyond the KK mechanism, which does not link Bose and Fermi fields. Curiously, while GR can be formulated in any number of dimensions, seven additional spatial dimensions, at most, are allowed in supergravity due to intricacies of the Fermi–Bose matching.

Last year marked the 40th anniversary of the discovery of supergravity. At its heart lie some of the most beautiful ideas in theoretical physics, and therefore over the years this theory has managed to display different facets or has lived different parallel lives.

Construction begins

The first instance of supergravity, containing a single gravitino (N = 1), was built in the spring of 1976 by Daniel Freedman, Peter van Nieuwenhuizen and one of us (SF). Shortly afterwards, the result was recovered by Stanley Deser and Bruno Zumino, in a simpler and elegant way that extended the first-order (“Palatini”) formalism of GR. Further simplifications emerged once the significance of local supersymmetry was better appreciated. Meanwhile, the “spinning string” – the descendant of dual resonance models that we have already met – was connected to space–time supersymmetry via the so-called Gliozzi–Scherk–Olive (GSO) projection, which reflects a subtle interplay between spin-statistics and strings in space–time. The low-energy spectrum of the resulting models pointed to previously unknown 10D versions of supergravity, which would include the counterparts of several gravitinos, and also to a 4D Yang–Mills theory that is invariant under four distinct supersymmetries (N = 4). A first extended (N = 2) version of 4D supergravity involving two gravitinos came to light shortly after.

When SF visited Caltech in the autumn of 1976, he became aware that Murray Gell-Mann had already worked out many consequences of supersymmetry. In particular, Gell-Mann had realised that the largest “pure” 4D supergravity theory, in which all forces would be connected to the conventional graviton, would include eight gravitinos. Moreover, this N = 8 theory could also allow an SO(8) gauge symmetry, the rotation group in eight dimensions (see table opposite). Although SO(8) would not suffice to accommodate the SU(3) × SU(2) × U(1) symmetry group of the SM, the full interplay between supergravity and supersymmetric matter soon found a proper setting in string theory, as we shall see.

The following years, 1977 and 1978, were most productive and drew many people into the field. Important developments followed readily, including the discovery of reformulations where N = 1 4D supersymmetry is manifest. This technical step was vital to simplify more general constructions involving matter, since only this minimal form of supersymmetry is directly compatible with the chiral (parity-violating) interactions of the SM. Indeed, by the early 1980s, theorists managed to construct complete couplings of supergravity to matter for N = 1 and even for N = 2.

The maximal, pure N = 8 4D supergravity was also derived, via a circle KK reduction, in 1978 by Eugene Cremmer and Bernard Julia. This followed their remarkable construction, with Joel Scherk, of the unique 11D form of supergravity, which displayed a particularly simple structure where a single gravitino accounts for eight 4D ones. In contrast, the N = 8 model is a theory of unprecedented complication. It was built after an inspired guess about the interactions of its 70 scalar fields (see table) and a judicious use of generalised dualities, which extend the manifest symmetry of the Maxwell equations under the interchange of electric and magnetic fields. The N = 8 supergravity with SO(8) gauge symmetry foreseen by Gell-Mann was then constructed by Bernard de Wit and Hermann Nicolai. It revealed a negative vacuum energy, and thus an anti-de Sitter (AdS) vacuum, and was later connected to 11D supergravity via a sphere KK reduction. Regarding the ultraviolet behaviour of supergravity theories, which was vigorously investigated soon after the original discovery, no divergences were found, at one loop, in the “pure” models, and many more unexpected cancellations of divergences have since come to light. The case of N = 8 supergravity is still unsettled, and some authors still expect this maximal theory to be finite to all orders.

The string revolution

Following the discovery of supergravity, the GSO projection opened the way to connect “spinning strings” – which came to be known collectively as string theory – to supersymmetry. Although the link between strings and gravity had been foreseen by Scherk and Schwarz, and independently by Tamiaki Yoneya, it was only a decade later, in 1984, that widespread activity in this direction began. This followed Schwarz and Michael Green’s unexpected discovery that gauge and gravitational anomalies cancel in all versions of 10D supersymmetric string theory. Anomalies – quantum violations of classical symmetries – are very troublesome when they concern gauge interactions, and their cancellation is a fundamental consistency condition that is automatically granted in the SM by its known particle content.

Anomaly cancellation left just five possible versions of string theory in 10 dimensions: two “heterotic” theories of closed strings, where the SU(3) × SU(2) × U(1) symmetry of the SM is extended to the larger groups SO(32) or E8 × E8; an SO(32) “type-I” theory involving both open and closed strings, akin to segments and circles, respectively; and two other very different and naively less interesting theories called IIA and IIB. At low energies, supergravity emerges from all of these theories in its different 10D realisations, opening up unprecedented avenues for linking 10D strings to the interactions of particle physics. Moreover, the extended nature of strings made all of these enticing scenarios free of the ultraviolet problems of gravity.

Following this 1984 “first superstring revolution”, one might well say that supergravity officially started a second life as a low-energy manifestation of string theory. Anomaly cancellation had somehow connected Einstein’s “marble” and “wood” in a miraculous way dictated by quantum consistency, and definite KK scenarios soon emerged that could recover from string theory both the SM gauge group and its chiral, parity-violating interactions. Remarkably, this construction relied on a specific class of 6D internal manifolds called Calabi–Yau spaces that had been widely studied in mathematics, thereby merging 4D supergravity with algebraic geometry. Calabi–Yau spaces led naturally, in four dimensions, to a GUT gauge group E6, which was known to connect to the SM with right-handed neutrinos, also providing realisations of the see-saw mechanism.

A third life

The early 1990s were marked by many investigations of black-hole-like solutions in supergravity, which soon unveiled new aspects of string theory. Just like the Maxwell field is related to point particles, some of the fields in 10D supergravity are related to extended objects, generically dubbed “p-branes” (p = 0 for particles, p = 1 for strings, p = 2 for membranes, and so on). String theory, being based at low energies on supergravity, therefore could not be merely a theory of strings. Rather, as had been strongly advocated over the years by Michael Duff and Paul Townsend, we face a far more complicated soup of strings and more general p-branes. A novel ingredient was a special class of p-branes, the D-branes, whose role was clarified by Joseph Polchinski, but the electric-magnetic dualities of the low-energy supergravity remained the key tool to analyse the system. The end result, in the mid 1990s, was the awesome, if still somewhat vague, unified picture called M-theory, which was largely due to Edward Witten and marked the “second superstring revolution”. Twenty years after its inception, supergravity thus started a third parallel life, as a deep probe into the mysteries of string theory.

The late 1990s witnessed the emergence of a new duality. The AdS/CFT correspondence, pioneered by Juan Maldacena, is a profound equivalence between supergravity and strings in AdS and conformal field theory (CFT) on its boundary, which connects theories living in different dimensions. This “third superstring revolution” brought to the forefront the AdS versions of supergravity, which thus started a new life as a unique tool to probe quantum field theory in unusual regimes. The last two decades have witnessed many applications of AdS/CFT outside of its original realm. These have touched upon fluid dynamics, quark–gluon plasma, and more recently condensed-matter physics, providing a number of useful insights on strongly coupled matter systems. Perhaps more unexpectedly, AdS/CFT duality has stimulated work related to scattering amplitudes, which may also shed light on the old issue of the ultraviolet behaviour of supergravity. The reverse programme of gaining information about gravity from gauge dynamics has proved harder, and it is difficult to foresee where the next insights will come from. Above all, there is a pressing need to highlight the geometrical principles and the deep symmetries underlying string theory, which have proved elusive over the years.

The interplay between particle physics and cosmology is a natural arena to explore consequences of supergravity. Recent experiments probing the cosmic microwave background, and in particular the results of the Planck mission, lend support to inflationary models of the early universe. An elusive particle, the inflaton, could have driven this primordial acceleration, and although our current grasp of string theory does not allow a detailed analysis of the problem, supergravity can provide fundamental clues on this and the subsequent particle-physics epochs.

Supersymmetry was inevitably broken in a de Sitter-like inflationary phase, where superpartners of the inflaton tend to experience instabilities. The novel ingredient that appears to get around these problems is non-linear supersymmetry, whose foundations lie in the prescient 1973 work of Volkov and Akulov. Non-linear supersymmetry arises when superpartners are exceedingly massive, and seems to play an intriguing role in string theory. The current lack of signals for supersymmetry at the LHC makes one wonder whether it might also hold a prominent place in an eventual picture of particle physics. This resonates with the idea of “split supersymmetry”, which allows for large mass splittings among superpartners and can be accommodated in supergravity at the price of reconsidering hierarchy issues.

In conclusion, attaining a deeper theoretical understanding of broken supersymmetry in supergravity appears crucial today. In breaking supersymmetry, one is confronted with important conceptual challenges: the resulting vacua are deeply affected by quantum fluctuations, and this reverberates on old conundrums related to dark energy and the cosmological constant. There are even signs that this type of investigation could shed light on the backbone of string theory, and supergravity may also have something to say about dark matter, which might be accounted for by gravitinos or other light superpartners. We are confident that supergravity will lead us farther once more.

Linking waves to particles

Black holes are arguably humankind’s most intriguing intellectual construction. Featuring a curvature singularity where space–time “ends” and tidal forces are infinite, black-hole interiors cannot be properly understood without a quantum theory of gravity. They are defined by an event horizon – a surface beyond which nothing escapes to the outside – and an exterior region called a photosphere, which is able to trap light rays. These uncommon properties explain why black holes were basically ignored for half a century, considered little more than a bizarre mathematical solution of Einstein’s equations but one without counterpart in nature.

LIGO’s discovery of gravitational waves provides the strongest evidence to date for the existence of black holes, but these tiny distortions of space–time have much more to tell us. Gravitational waves offer a unique way to test the basic tenets of general relativity, some of which have been taken for granted without observations. Are black holes the simplest possible macroscopic objects? Do event horizons and black holes really exist, or is their formation halted by some as-yet unknown mechanism? In addition, gravitational waves can tell us if gravitons are massless and if extra-light degrees of freedom fill the universe, as predicted in the 1970s by Peccei and Quinn in an attempt to explain the smallness of the neutron electric-dipole moment, and more recently by string theory. Ultralight fields affect the evolution of black holes and their gravitational-wave emission in a dramatic way that should be testable with upcoming gravitational-wave observatories.

The existence of black holes

The standard criterion with which to identify a black hole is straightforward: if an object is dark, massive and compact, it’s a black hole. But are there other objects which could satisfy the same criteria? Ordinary stars are bright, while neutron stars have at most three solar masses and therefore neither is able to explain observations of very massive dark objects. In recent years, however, unknown physics and quantum effects in particular have been invoked that change the structure of the horizon, replacing it by a hard surface. In this scenario, the exterior region – including the photosphere – would remain unchanged, but black holes would be replaced by very compact, dark stars. These stars could be made of normal matter under extraordinary quantum conditions or of exotic matter such as new scalar particles that may form “boson stars”.

Unfortunately, the formation of objects invoking poorly understood quantum effects is difficult to study. The collapse of scalar fields, on the other hand, can theoretically allow boson stars to form, and these may become more compact and massive through mergers. Interestingly, there is mounting evidence that compact objects without horizons but with a photosphere are unstable, ruling out entire classes of alternatives that have been put forward.

Gravitational waves might soon provide a definite answer to such questions. Although current gravitational-wave detections are not proof for the existence of black holes, they are a strong indicator that photospheres exist. Whereas observations of electromagnetic processes in the vicinities of black holes only probe the region outside of the photosphere, gravitational waves are sensitive to the entire space–time and are our best probe of strong-field regions.

A typical gravitational-wave signal generated by a small star falling head-on into a massive black hole looks like that in figure 1. As the star crosses the photosphere, a burst of radiation is emitted and a sequence of pulses dubbed “quasinormal ringing” follows, determined by the characteristic modes of the black hole. But if the star falls into a quantum-corrected or exotic compact object with no horizon, part of the burst generated during the crossing of the photosphere reflects back at the object surface. The resulting signal in a detector would thus initially look the same, but be followed by lower amplitude “echoes” trapped between the photosphere and the surface of the object (figure 1, lower panel). These echoes, although tricky to dig out in noisy data, would be a smoking gun for new physics. With increasing sensitivity in detectors such as LIGO and Virgo, observations will be pushing back the object’s surface closer to the horizon, perhaps even to the point where we can detect the echo of quantum effects.
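The delay between successive echoes is set by the round-trip light travel time between the photosphere and the would-be surface. A rough sketch, assuming the logarithmic scaling Δt ~ (4GM/c³)|ln ε| often quoted in such estimates for a surface at r = 2GM/c² (1 + ε):

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8          # speed of light, m s^-1
M_SUN = 1.989e30     # solar mass, kg

def echo_delay(mass_solar, epsilon):
    """Rough round-trip time between the photosphere and a surface
    sitting a fractional distance epsilon outside the would-be horizon,
    using the logarithmic scaling dt ~ (4GM/c^3) * |ln(epsilon)|."""
    t_g = G * mass_solar * M_SUN / C**3   # gravitational timescale GM/c^3
    return 4.0 * t_g * abs(math.log(epsilon))

# Illustrative numbers: a 30-solar-mass remnant with Planck-scale
# corrections (epsilon ~ 1e-40) gives a delay of tens of milliseconds.
print(f"echo delay ~ {echo_delay(30, 1e-40) * 1e3:.0f} ms")
```

The weak (logarithmic) dependence on ε is the encouraging point: even corrections confined to Planckian distances from the horizon produce echoes delayed by a detectable fraction of a second, not by Planck times.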

Dark questions

Understanding strong-field gravity with gravitational waves can also test the nature of dark matter. Although dark matter may interact very feebly with Standard Model particles, according to Einstein’s equivalence principle it must fall just like any other particle. If dark matter is composed of ultralight fields, as recent studies argue, then black holes may serve as excellent dark-matter detectors. You might ask how a monstrous, supermassive black hole could ever be sensitive to ultralight fields. The answer lies in superradiant resonances. When black holes rotate, as most do, they display an interesting effect discovered in the 1970s called superradiance: if one shines a low-frequency lamp on a rotating black hole, the scattered beam is brighter. This happens at the expense of the hole’s rotational energy, causing the spin of the black hole to decrease.

Not only electromagnetic waves, but also gravitational waves and any other bosonic field can be amplified by a rotating black hole. In addition, if the field is massive, low-energy fluctuations are trapped near the horizon and are forced to interact repeatedly with the black hole, producing an instability. This instability extracts rotational energy and transfers it to the field, which grows exponentially in amplitude and forms a rotating cloud around the black hole. For a one-million solar-mass black hole and a scalar field with a mass of 10^–16 eV, the timescale for this to take place is less than two minutes. Therefore, the very existence of ultralight fields is constrained by the observation of spinning black holes. With this technique, one can place unprecedented bounds on the mass of axion-like particles, another popular candidate for dark matter. For example, we know from current astrophysical observations that the mass of dark photons must be smaller than 10^–20 eV, which is 100 times better than accelerator bounds. The technique relies only on measurements of the mass and spin of black holes, which will be known with unprecedented precision with future gravitational-wave observations.
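Why these particular mass scales? The instability is strongest when the black hole and the field form a “gravitational atom” whose dimensionless coupling α = GMμ/(ħc³) – the analogue of the fine-structure constant – is of order one. A minimal sketch for the example quoted above:

```python
# Dimensionless gravitational coupling alpha = G * M * mu / (hbar * c^3)
# for a black hole of mass M and a boson of mass mu. Superradiant growth
# is most efficient when alpha is of order one.
G = 6.674e-11          # m^3 kg^-1 s^-2
C = 2.998e8            # m s^-1
HBAR = 1.055e-34       # J s
EV = 1.602e-19         # J per eV
M_SUN = 1.989e30       # kg

def coupling(mass_solar, field_mass_ev):
    return G * (mass_solar * M_SUN) * (field_mass_ev * EV) / (HBAR * C**3)

# The example in the text: a one-million-solar-mass black hole
# and a scalar field of mass 1e-16 eV sit right in the sweet spot.
alpha = coupling(1e6, 1e-16)
print(f"alpha ~ {alpha:.2f}")   # of order one
```

Because α scales as the product of the two masses, each black-hole mass probes a narrow band of boson masses – which is why a population of black holes spanning many decades in mass makes such a broad dark-matter detector.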

Superradiance, together with current electromagnetic observations of spinning black holes, can also be used to constrain the mass of the graviton, since any massive boson would trigger superradiant instabilities. Spin measurements of the supermassive black hole in the galaxy Fairall 9 require the mass of the graviton to be lighter than 5 × 10^–23 eV – an impressive number, even more stringent than the bound recently placed by LIGO.

Gravitational lighthouses

Furthermore, numerical simulations suggest that the superradiant instability mechanism eventually causes a slowly evolving and non-symmetric cloud to form around the black hole, emitting periodic gravitational waves like a gravitational “lighthouse”. This would not only mean that black holes are not as simple as we thought, but lead to a definite prediction: some black holes should be emitting nearly monochromatic gravitational waves whose frequency is dictated only by the field’s mass. This raises terrific opportunities for gravitational-wave science: not only can gravitational waves provide the first direct evidence of ultralight fields and of possible new effects near the horizon, but they also carry detailed information about the black-hole mass and spin. If light fields exist, the observation of a few hundred black holes should show “gaps” in the mass-spin plane corresponding to regions where spinning black holes are too unstable to exist.

This is a surprising application of gravitational science, which can be used to investigate the existence of new particles such as those possibly contributing to the dark matter. The idea of using observations of supermassive black holes to provide new insights not accessible in laboratory experiments would certainly be exciting. Perhaps these new frontiers in gravitational-wave astrophysics, in addition to probing the most extreme objects, will also give us a clearer understanding of the microscopic universe.

Unity through global science

CERN’s Large Hadron Collider (LHC) and its discovery of the Higgs boson in 2012 have launched a new era of research in particle physics. The LHC and its upgrades will chart the course of the field for many years to come, and CERN is therefore in a unique position to help shape the long-term future of particle physics. In view of this, CERN is exploring two different and challenging projects: the Compact Linear Collider (CLIC) and a Future Circular Collider (FCC).

These developments are taking place at a time when facilities for high-energy physics, as for other branches of science, are becoming larger and more complex as well as requiring more resources. Funding for the field is not increasing in many countries and the timescale for projects is becoming longer, resulting in fewer facilities being realised. Particle physics must adapt to this evolving reality by fostering greater co-ordination and collaboration on a global scale. This goes hand in hand with CERN’s tradition of networking with worldwide partners.

In 2010, CERN Council approved a radical shift in CERN’s membership policy that opened full membership to non-European states, irrespective of their geographical location. At the same time, Council introduced the status of associate membership to facilitate the accession of new members, including countries outside of Europe that might not command sufficient resources to sustain full membership (CERN Courier December 2014 p58).

Geographical enlargement is part of the effort to secure the future of the laboratory, and the process has been gradual and measured. Israel became CERN’s 21st Member State in 2014 while Romania joined as the 22nd Member State in 2016. Cyprus and Serbia are presently associate members in the pre-stage to membership, while Pakistan, Turkey and Ukraine are associate members. Late last year, agreements with Slovenia for associate membership in the pre-stage to membership and with India for associate membership were signed (see “Slovenia to become associate Member State in pre-stage to membership” and “India to become associate Member State” in this issue). Brazil, Croatia, Lithuania and Russia have also applied for associate membership.

CERN builds on a long tradition of global engagement. The Organization has formal relations with non-member states (NMS) via bilateral International Co-operation Agreements (ICAs), currently in force with 47 countries. NMS users now account for almost 40% of CERN’s roughly 12,700 users – the majority of them researchers from the US and Russia working on the LHC. The overall NMS participation in the non-LHC research programme is currently about 20%. Financial resources for research programmes, notably maintenance and operation costs for the LHC experiments, are shared between the Member States, the associate members and the NMS. In addition, there is increasing interest in collaboration on accelerator R&D and related technologies, focusing on the LHC’s luminosity upgrades and also on the FCC and CLIC studies. The number of states involved in such activities is already growing beyond the restricted circle of NMS that contributed to the LHC accelerator construction. The increasingly global interest in CERN also translates into a rising demand for CERN’s education and training programmes – falling within CERN’s mission of helping build capacity in countries that are developing their particle-physics communities.

The geographical enlargement policy of 2010 offers important opportunities for the future of the Organization. Now, CERN has developed it into a strategy, presented to Council in March 2016, to ensure that geographical enlargement consolidates the institutional base and thus reinforces the long-term scientific aspirations of CERN. Enlargement is not an aim in and of itself. Rather, the focus is on strengthening relations with countries that can bring scientific and technological expertise to CERN and can, in turn, benefit from closer engagement.

It is essential that membership and associate membership are beneficial to particle physics in individual countries, and that governments continue to invest in the growth of national communities. At the same time, enlargement should not hinder the operational efficiency of the laboratory. CERN’s engagement with prospective members and associate members is clearly oriented towards these objectives, mindful that investigating the unification of the fundamental forces of nature requires uniting scientific efforts on a global scale.

Raman Spectroscopy: An Intensity Approach

By Wu Guozhen
World Scientific

In this book the author offers an overview of Raman spectroscopy techniques – including Raman optical activity (ROA) and surface-enhanced Raman scattering spectroscopy (SERS) – covering their applications and their theoretical foundations.

The Raman effect is an inelastic two-photon process in which an incident photon is absorbed by an atom or molecule (the scatterer), which immediately emits a photon of energy and frequency different from those of the incident one. This energy difference, which arises because the incident photon vibrationally excites the molecule, is called the Raman shift. Raman shifts provide information on molecular motion, and thus on molecular structure and bond strength. As a consequence, the effect is used for material analysis in Raman spectroscopy.
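In practice the shift is quoted in wavenumbers (cm⁻¹), the difference between the reciprocal wavelengths of the incident and scattered light. A minimal sketch, with an illustrative (hypothetical) laser line and scattered wavelength:

```python
def raman_shift_cm1(lambda_in_nm, lambda_out_nm):
    """Raman shift in wavenumbers (cm^-1): the difference between the
    incident and scattered photon energies, each expressed as 1/lambda.
    Wavelengths are given in nanometres; 1e7 converts nm^-1 to cm^-1."""
    return 1e7 / lambda_in_nm - 1e7 / lambda_out_nm

# Hypothetical example: a 532 nm laser line scattered at 570.6 nm
# (Stokes scattering, i.e. the emitted photon has lower energy).
print(f"shift ~ {raman_shift_cm1(532.0, 570.6):.0f} cm^-1")
```

A positive shift (Stokes) means the molecule gained vibrational energy; a negative one (anti-Stokes) means it gave energy up.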

More important than the energy difference are the Raman intensity of the scattered light, which offers insights into the dynamics of the photon-perturbed molecule, and the electronic polarisability of the molecule, which is a measure of how easily the electrons can be affected by the light.

After introducing the Raman effect and the normal mode analysis, the author discusses the bond polarisabilities, the intensity analysis and the Raman virtual states. A group of chapters then cover the extension of the bond polarisability algorithm to the ROA intensity analysis and many findings on ROA mechanism resulting from the work of the author and his collaborators. The last chapter introduces a unified classical theory for ROA and vibrational circular dichroism (another spectroscopic technique).

Relativistic Density Functional for Nuclear Structure

By Jie Meng (ed.)
World Scientific

This book, the 10th volume of the International Review of Nuclear Physics series, provides an overview of the current status of relativistic density functional theories and their applications. Written by leading scientists in the field, it is intended both for students and for researchers interested in many-body theory or nuclear physics.

Density functional theory was introduced in the 1970s and has since been developed in an attempt to find a unified, self-consistent description of both the single-particle motion in a nucleus and the collective motions of the nucleus, based on the theory of the strong interaction. Applied largely to heavy and super-heavy nuclei, this description maps the complex quantum-mechanical many-body problem of their structure onto an effective one-body problem, which is comparatively easy to solve.

After explaining the theoretical basics of relativistic (or covariant) density functional theory, the authors discuss different models and the application of the theory to various cases, including the structure of neutron stars. In the last chapter, three variants of the relativistic model are compared with the non-relativistic density functional approach. Possible directions for future developments of energy density functional theory are also outlined.

Readers interested in further details and specific research work can rely on the very rich bibliography that accompanies each chapter.

Who Cares About Particle Physics? Making Sense of the Higgs Boson, the Large Hadron Collider and CERN

By Pauline Gagnon
Oxford

Also available at the CERN bookshop

One of my struggles when I teach at my university, or when I talk to friends about science and technology, is finding inspiring analogies. Without vivid images and metaphors it is extremely hard, even impossible, to explain the intricacies of particle physics to a non-expert audience. Even for physicists, it is sometimes hard to interpret equations without such aids. Pauline Gagnon has mastered the art of explaining particle physics to the general public, as she shows in this book, full of illustrations yet never lacking in rigour. She was a senior research scientist at CERN, working with the ATLAS collaboration, until her retirement this year (although she remains very active in outreach). Undoubtedly, she knows about particle physics and – more importantly – about its daily practice.

The book is organised into four related areas: particle physics (chapters 1 to 6 and chapter 10), technology spin-offs from particle physics (chapter 7), management in big science (chapter 8) and social issues in the laboratory (chapter 9, on diversity). While the first part was expected, I was positively surprised by the other three. Technology spin-offs are extremely important for society, which in the end is what pays for research. Particle physics is not oriented towards economic productivity but driven by a mixture of creativity, perseverance and rigour towards the discovery of how the universe works. On their way to acquiring knowledge, scientists create new tools that can improve our living standards. This book provides a short summary of the impact of particle-physics technology on our everyday life and of CERN's efforts to increase the rate of technology spin-offs through knowledge transfer and workforce training.

Big-science management, especially in the context of a cultural melting pot like CERN, could be very chaotic if it were driven by conventional corporate procedures. The author is clear about this highly non-trivial point: the collaborative model used at CERN pays off both in productivity and in the realisation of ambitious aims. This organisational model – which she calls the "picnic" model, since each participating institute freely agrees to contribute something – is worth spreading in our modern, interconnected commercial environment, particularly because products and services rich in technology and know-how bear striking similarities to big science.

As CERN visitors learn, cultural diversity permeates the Organization, and by extension particle physics. Just by taking a seat in any of the CERN restaurants, they can understand that particle physics is a collective and international effort. But they can also easily verify that there is an overwhelming gender imbalance in favour of men. The author, as a woman, addresses the topic of the gender gap in physics and specifically at CERN. She explains why diversity issues, in their overall complexity (not restricted to gender), are very important: our world desperately needs real examples of peaceful and fruitful co-operation between different people with common goals, without gender or cultural barriers.

As for the main part of the book, which focuses on contemporary particle physics, chapters 1, 2, 3 and 6 are undoubtedly very well written, in the overall spirit of explaining things simply but with full scientific thoroughness. I was particularly impressed by chapter 4, on the experimental discovery of the Higgs boson, and chapter 5, on dark matter, mainly because of the first-hand knowledge they reveal. When you read Gagnon's words you can feel the emotions of the protagonists during that tipping point in modern particle physics. Chapter 5 is an excursion into the dark universe, with wonderful explanations (such as the imaginative comparison between the Bullet Cluster and an American football match). The science in this chapter is up to date and combines particle physics and observational cosmology without apparent effort.

I recommend this book to the general public interested in particle physics, but also to particle physicists who want to take a refreshing, broad look at their field, even if only to find images with which to explain physics to family and friends. Because, in the end, everybody cares about particle physics – once you manage to spark their interest.

General Relativity: A First Examination

By Marvin Blecher
World Scientific


This book provides a concise treatment of general relativity (GR), ideal for a semester course for undergraduate students or first-year graduate students in physics or engineering. After retiring from a career as an experimentalist in nuclear and particle physics, the author decided to teach an introductory course in GR at Virginia Tech, US. Many books are available on this topic, but they normally go into great detail and include a lot of material that cannot be covered in the short time of a semester. This new text by Blecher aims to fill that gap in the literature and provide just the essential concepts of GR.

The author starts with a review of special relativity and of the basic mathematical tools, then moves on to explain how gravity affects time. This is discussed first for weak gravity, via conservation of energy in a Newtonian formulation with relativistic mass; later in the book (chapter 5), it is treated rigorously in a fully general-relativistic framework. The Schwarzschild metric is also obtained.
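As a back-of-envelope illustration of the weak-gravity result (this sketch is not from the book; the constants are standard values), a clock at radius r from a mass M runs slow, relative to a distant clock, by the fractional amount GM/(rc²), the first-order expansion of the Schwarzschild factor √(1 − 2GM/(rc²)):

```python
# Back-of-envelope sketch: fractional gravitational time dilation in the
# weak-field limit, dtau/dt ~ 1 - GM/(r c^2).  Standard constant values.

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_earth = 5.972e24   # mass of the Earth, kg
R_earth = 6.371e6    # mean radius of the Earth, m

def fractional_slowdown(M: float, r: float) -> float:
    """First-order fractional rate difference of a clock at radius r from mass M."""
    return G * M / (r * c * c)

# At the Earth's surface the effect is below one part per billion,
# yet satellite navigation systems must correct for it.
print(fractional_slowdown(M_earth, R_earth))
```

The numerical answer, about 7 × 10⁻¹⁰, shows why the effect is invisible in everyday life but measurable with atomic clocks.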

In the following sections, GR is discussed in the context of the solar system (chapter 6) and of black holes (chapter 7). In the latter, an appealing example based on the movie Interstellar (Christopher Nolan) is used to discuss why a large gravitational time dilation is possible near a spinning – but not a static – black hole.

Chapter 8 focuses on gravitational waves. The first direct detection of these waves, produced by two black holes merging into one, was announced in February this year, when the book was already going to print; nevertheless, the author added a discussion of the discovery to the text. The theory of gravitational radiation from a binary neutron-star system, applied to the binary pulsar discovered by R Hulse and J H Taylor, is also treated – and for elliptical orbits, rather than the circular ones generally assumed for simplicity in textbooks.

Finally, a chapter is dedicated to cosmology, in which the results of numerical integrations, using the experimental data available for all the energy densities, are discussed.

Electron Lenses for Super-Colliders

By Vladimir D Shiltsev
Springer

Also available at the CERN bookshop

In this book, written in an energetic style, Vladimir Shiltsev presents a novel device for accelerators and storage rings. These machines employ magnets to bend and focus particle trajectories, and magnets always create forces that increase monotonically with the particle's displacement inside them. But a particle in a beam also experiences forces from the beam itself, and from the other beam in a collider – forces that do not increase monotonically with amplitude. Magnets are therefore ill suited to correcting beam-generated forces. Another beam, however, can do the job, and this is most easily realised with a low-energy electron beam stabilised in a solenoidal magnetic field – an electron lens. The lens offers options for generating amplitude-dependent forces that cannot be realised with magnets, and such forces can also be made time-dependent. The electron lens is in effect a nonlinear lens with a rather flexible profile that can either be static or change with every passing bunch.
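To see why the beam-generated forces are non-monotonic, one can sketch the standard textbook result for a round Gaussian beam (this illustration is mine, not code from the book): the radial beam–beam force on a test particle is proportional to (1 − exp(−r²/2σ²))/r, which grows roughly linearly at small amplitude, peaks near r ≈ 1.6σ, and then falls off – exactly the kind of profile a conventional magnet cannot produce but an electron lens can be shaped to compensate.

```python
import math

# Illustrative sketch (standard round-Gaussian-beam formula, not from the book):
# radial beam-beam force profile, in arbitrary units, for rms beam size sigma.
# It rises ~linearly at small r, peaks near r ~ 1.6 sigma, then decreases -
# a non-monotonic, amplitude-dependent force.

def beam_beam_force(r: float, sigma: float = 1.0) -> float:
    """Radial force ~ (1 - exp(-r^2 / 2 sigma^2)) / r; zero on axis by symmetry."""
    if r == 0.0:
        return 0.0
    return (1.0 - math.exp(-r * r / (2.0 * sigma * sigma))) / r

# Sample the profile at small, peak and large amplitudes (units of sigma):
for r in (0.5, 1.6, 4.0):
    print(r, beam_beam_force(r))
```

An electron lens with a suitably tailored transverse current density produces a kick of the same functional form but opposite sign, which is the basis of head-on beam–beam compensation.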

D Gabor proposed the use of electron-generated space-charge forces as early as 1947 (Nature 160 89–90), and E Tsyganov suggested the use of electron lenses for the SSC (SSCL-Preprint-519, 1993). But it was Shiltsev who drove the first implementation of electron lenses in a high-energy machine. Two such lenses were installed in the Tevatron in 2001 and 2004, where they routinely removed beam not captured by the radiofrequency (RF) system and were used for numerous studies of long-range and head-on beam–beam compensation and of collimation. In 2014, two electron lenses were also installed in the Relativistic Heavy Ion Collider (RHIC) for head-on beam–beam compensation, and their use for the LHC collimation system is under consideration.

Shiltsev’s experience and comprehensive knowledge of the topic make him perhaps the best possible author for an introductory text. The book is divided into five chapters: an introduction, the major pieces of technology, application for beam–beam compensation, halo collimation, and other applications. It draws heavily on published material, and therefore does not have the feel of a textbook. While a consistent notation for symbols is used throughout the book, the figures are taken from other publications, and the units are mostly but not entirely in the International System (SI).

At the heart of the book are descriptions of the major technical components of a working electron lens, and the two main applications to date: beam–beam compensation and halo collimation. Long-range and head-on beam–beam compensation as well as collimation applications are described exhaustively. It is somewhat regrettable that the latest results from RHIC were published too late to be included in the volume (e.g. W Fischer et al. 2015 Phys. Rev. Lett. 115 264801; P Thieberger et al. 2016 Phys. Rev. Accel. Beams 19 041002). The book names the hollow electron lens a collimator, but it is probably better to describe it as a diffusion enhancer (as suggested on p138) because its strength is at least an order of magnitude smaller than a solid-state collimator, and a hollow lens will not replace either a primary or a secondary collimator jaw.

The last chapter ventures into more speculative territory, with applications that are not all in colliders. Most prominently, space-charge compensation is discussed, largely in terms of tune spread but not resonance driving terms. The latter is only mentioned in the context of multiple electron lenses (up to 24 for a simulated Fermilab Booster example). For this and the other applications mentioned, it is clear that much work remains before these could become reality.

Overall, the book is an excellent entry point for anyone who would like to become familiar with the concepts, technology and possible application of electron lenses. It is also a useful reference for many formulas, allowing for fast estimates, and for the published work on this topic – up to the date of publication.

Nanoscale Silicon Devices

By Shunri Oda and David K Ferry (eds)
CRC Press

The CRC Handbook of Chemistry and Physics was first published in 1913 and is a well-known text, at least to older physicists from the time before computers and instant, web-based information. To find relevant data, one had to be familiar with the classification of subjects and tables in the handbook’s 2500 or so pages, but virtually everything was covered. Over the years, the CRC Press – while continuing to publish this handbook, for more than 100 years now – has grown into a large publisher that produces hundreds of titles every year in engineering, physics and other fields.

Its recent publication, Nanoscale Silicon Devices, describes a variety of investigations that are under way to develop improved and smaller electronic structures for computing, signal processing in general, or memory. Now that transistors approach the dimension of a few nanometres, less than 100 atoms in a row, methods to account for quantum effects have to be applied, as shown in the first chapter. The second chapter discusses the need to change the shape of transistors as they become smaller. The controlling gate has to extend as much as possible around the conduction channel material and, eventually, silicon may be replaced in the channel by a different semiconductor material.

Another effect due to the small size, as explained in chapter 3, is the increase of variability between devices of identical design. Single-electron devices and the use of electron spin are discussed in several of the following chapters. A major issue today, as highlighted in the book, is the reduction of power for circuits with a large number of transistors, where the leakage current in the OFF state becomes preponderant. In chapter 7, tunnel FET devices are discussed as a way to solve this problem. In chapter 6, a different approach is shown, using nanoelectromechanical ON/OFF switches integrated in the circuit.

This book is not a typical textbook, but rather a collection of 11 articles written by 20 scientists, including the editors Oda and Ferry. Each article centres on the research of its author(s) in a specific area of semiconductor-device development; one consequence of this structure is the abundance of internal cross-references. Reading the book does not quite provide a firm idea of the future of electronics, but it could convince readers that much more will be possible beyond the current state of the art. One should also keep in mind that the chip industry tends to keep useful findings under wraps and has little incentive to publish its research before products are on the shelves.

The book is a good buy if you want a feel for the work going on at the interface between pico- and nanoelectronics. For the use of electronics in scientific research, it is essential to understand how devices are constructed and what researchers might gain from them, especially when working in unusual environments such as a vacuum, space, the human body or a particle collider.

Ukraine becomes associate Member State of CERN

On 5 October, Ukraine became an associate Member State of CERN, following official notification to CERN that Ukraine's parliament had ratified an agreement signed with CERN in October 2013. "Our hard and consistent work over the past two decades has been crowned today by a remarkable event – granting Ukraine the status of CERN associate member," says Yurii Klymenko, Ukraine's ambassador to the United Nations in Geneva. "It is an extremely important step on the way of Ukraine's European integration."

Ukraine has been a long-time contributor to the ALICE, CMS and LHCb experiments at the LHC and to R&D in accelerator technology. Ukraine also operates a Tier-2 computing centre in the Worldwide LHC Computing Grid.

Ukraine and CERN first signed a co-operation agreement in 1993, followed by a joint declaration in 2011, but Ukraine's relationship with CERN dates back much further through the Joint Institute for Nuclear Research (JINR) in Dubna, Russia, of which Ukraine is a member. CERN–JINR co-operation in the field of high-energy accelerators started in the early 1960s, and ever since, the two institutions have formed a bridge between East and West that has made important contributions to the development of global, peaceful scientific co-operation.

Associate membership will open a new era of co-operation that will strengthen the long-term partnership between CERN and the Ukrainian scientific community. It will allow Ukraine to participate in the governance of CERN, in addition to allowing Ukrainian scientists to become CERN staff and to participate in CERN’s training and career-development programmes. Finally, it will allow Ukrainian industry to bid for CERN contracts, thus opening up opportunities for industrial collaboration in areas of advanced technology.

“It is a great pleasure to warmly welcome Ukraine into the CERN family,” says CERN Director-General Fabiola Gianotti.
