NA62: CERN’s kaon factory

Summary

CERN has a long tradition in kaon physics, a tradition carried on today by the NA62 experiment. The commissioning phase gave way in 2015 to the data-taking phase, which is expected to continue until 2018. NA62 is designed to study the K+ → π+νν decay with precision, but it is also suited to examining other topics, notably lepton universality and radiative decays. The quality of the detector, the possibility of using both charged and neutral secondary beams, and the foreseen availability of the SPS extracted beams for the duration of LHC operation make NA62 a genuine kaon factory.

CERN’s long tradition in kaon physics started in the 1960s with experiments at the Proton Synchrotron conducted by, among others, Jack Steinberger and Carlo Rubbia. It continued with NA31, CPLEAR, NA48 and its follow-ups. Next in line and currently active is NA62 – the high-intensity facility designed to study rare kaon decays, in particular those where the mother particle decays into a pion and two neutrinos. The nominal performance of the detector in terms of data quality and quantity is so good that the experiment can undeniably play the role of a kaon factory.

Using its unique set-up, NA62 will address with sufficient statistics and precision a basic question: does the Standard Model also work in the most suppressed corner of flavour-changing neutral currents (FCNCs)? According to theory, these processes are suppressed by the unitarity of the quark-mixing Cabibbo–Kobayashi–Maskawa matrix and by the Glashow–Iliopoulos–Maiani mechanism. What makes the kaons special is that some of these FCNCs are not affected by large hadronic matrix-element uncertainties because they can be normalised to a semi-leptonic mode described by the same form factor, which therefore drops out in the ratio. The poster child of these reactions is the K → πνν. By measuring the decay rate, it will be possible to determine a combination of Cabibbo–Kobayashi–Maskawa matrix elements independently of B decays. Discrepancies compared with expectations might be a signature of new physics.

Testing the Standard Model predictions is not easy, because the decay under study is expected to occur with a probability of less than one part in 10 billion. The first experimental challenge is therefore to collect a sufficient number of kaon decays. To do so, the intense secondary beam from the Super Proton Synchrotron (SPS), called K12, was completely rebuilt in 2012. Today, NA62 exploits this beam, which has an instantaneous rate approaching 1 GHz. Although only about 6% of the beam particles are kaons, every single particle delivered by the SPS has to be identified before it enters the experiment’s decay region. At the heart of the tracking system is the gigatracker (GTK), which measures the impact position of each incoming particle and its arrival time. This information is used to associate the incoming particle with the event observed downstream, and to reconstruct its kinematics. To do so with the required sensitivity, a time resolution of 200 picoseconds in the gigatracker is needed.
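A back-of-envelope sketch makes the timing requirement concrete. The numbers below are illustrative assumptions, not official NA62 performance figures: a 750 MHz instantaneous rate stands in for "approaching 1 GHz", and a ±3σ matching window is assumed.

```python
# Back-of-envelope sketch (assumed inputs, not official NA62 figures):
# why sub-nanosecond GTK timing matters at a beam rate approaching 1 GHz.
beam_rate_hz = 750e6     # assumed stand-in for "approaching 1 GHz"
kaon_fraction = 0.06     # ~6% of beam particles are kaons (from the text)
sigma_t = 200e-12        # GTK time resolution, 200 ps (from the text)

kaon_rate_hz = beam_rate_hz * kaon_fraction   # ~45 MHz of kaons
window = 6 * sigma_t                          # a +/-3 sigma matching window, 1.2 ns
extra_tracks = beam_rate_hz * window          # mean other beam particles in the window

print(f"kaon rate: {kaon_rate_hz / 1e6:.0f} MHz")
print(f"mean extra beam particles per matching window: {extra_tracks:.2f}")
```

Even with 200 ps resolution, of order one other beam particle falls inside the matching window, which is why the timing information has to be combined with the kinematic reconstruction to pick out the right GTK track.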

The GTK consists of a matrix of 200 columns by 90 rows of hybrid silicon pixels. To affect the trajectory of the particles as little as possible, the sensors are 200 μm thick and the pixel chip is 100 μm thick. The GTK is placed in a vacuum and operated at a temperature of –20 °C to reduce radiation-induced performance degradation. The NA62 collaboration has developed innovative ways to ensure effective cooling, using light materials to minimise their effect on particle trajectory.

In addition to measuring the direction and the momentum of each particle, the identity of the particle needs to be determined before it enters the decay tank. This is done using a differential Cherenkov counter (CEDAR) equipped with state-of-the-art optics and electronics to cope with the large particle rate.

Final-pion identification

There is a continuous struggle between particle physicists, who want to keep the amount of material in the tracking detectors to a minimum, and engineers, who need to ensure safety and prevent the explosion of pressurised devices operated inside the vacuum tank, such as the NA62 straw tracker made of more than 7000 thin tubes. In addition, the beam specialists would even prefer to have no detector at all. Any amount of material in the beam leads to scattering of particles into the detectors placed downstream, leading to potential backgrounds and unwanted additional counting rates. In NA62, the accepted signal is a single pion π+ and nothing else, so every trick in the book of experimental particle physics is used to determine the identity of the final pion, including a ring imaging Cherenkov (RICH) counter for pion/muon separation up to about 40 GeV/c.

Perhaps the most striking feature of NA62 is the complex of electromagnetic calorimeters deployed along and downstream of the vacuum tank: 12 stations of lead-glass rings (using crystals refurbished from the OPAL barrel at LEP), of which 11 operate inside the vacuum tank; a liquid-krypton calorimeter, a legacy of NA48 upgraded with new electronics; and smaller detectors complementing the acceptance. These calorimeters form the NA62 army deployed to suppress the background originating from K+ → π+π0 decays when both photons from the π0 decay are lost: only one π0 in 10⁷ remains undetected. As you have probably realised by now, NA62 is not a small experiment; a picture of the detector is shown in figure 1.

Even with a 65 m-long fiducial region, only 10% of the kaons decay usefully, so only six in 1000 of the incoming particles measured by the GTK actually end up being used to study kaon decays in NA62 – a big upfront price to pay. On the positive side, the initial and final states are fully under control, because the particles cross no material apart from the trackers, and the kinematics of the decays can be reconstructed with great precision. To demonstrate the quality of the NA62 data, figure 3 shows events selected with a single track for incoming particles tagged as kaons, and figure 4 shows the particle-identification capability.
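The 10% figure can be cross-checked with a short decay-length calculation. The nominal 75 GeV/c beam momentum and the PDG charged-kaon lifetime are assumptions not stated in the text above; everything else follows from relativistic kinematics.

```python
import math

# Hedged cross-check: fraction of beam kaons decaying in the 65 m
# fiducial region. The 75 GeV/c nominal beam momentum and the PDG
# K+ lifetime are assumed inputs.
p = 75.0          # assumed nominal kaon momentum, GeV/c
m_K = 0.493677    # charged-kaon mass, GeV/c^2 (PDG)
c_tau = 3.712     # K+ mean decay length at rest, metres (PDG)
L_fid = 65.0      # fiducial-region length from the text, metres

gamma_beta = p / m_K                   # relativistic boost, ~152
decay_length = gamma_beta * c_tau      # lab-frame mean decay length, ~560 m
frac = 1.0 - math.exp(-L_fid / decay_length)

print(f"fraction decaying in the fiducial region: {frac:.1%}")  # about 10.9%
```

Multiplying this ~10% by the ~6% kaon fraction in the beam reproduces the quoted six useful decays per 1000 incoming particles.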

In addition to suppressing the π0 background, NA62 has to suppress the background from muons. Most of the singles rate in the large detectors is due to these particles, coming either from the frequent pion and kaon decays (π+ → μ+ν and K+ → μ+ν) or from the dump of the primary proton beam. In addition to the RICH already mentioned, NA62 is equipped with hadron calorimeters and a fast muon detector at the end of the hall to deal with the muons. A powerful and innovative trigger-and-data-acquisition system is a crucial ingredient for the success of NA62, together with the commitment and dedication of each collaborator (see figure 2).

NA62 was commissioned in 2014 and 2015, and it is now in the middle of a first long phase of data-taking, which should last until the accelerator’s Long Shutdown 2 in 2018. The data collected so far indicate a detector performance in line with expectations, and preliminary results based on these data were shown at the Rencontres de Moriond Electroweak conference in La Thuile, Italy, in March. A big effort was invested to build this new experiment, and the collaboration is eager to exploit its physics potential to the full.

Because NA62 was designed to address the K+ → π+νν decay with precision, several other physics opportunities can be studied with the same detector. They range from the study of lepton universality to radiative decays. The improved apparatus with respect to NA48 should also allow measurements of ππ scattering and semi-leptonic decays to be improved, and possible low-mass long-lived particles to be searched for.

The quality of the detector, the possibility to use both charged and neutral secondary beams, and the foreseen availability of the SPS extracted beams for the duration of exploitation of the LHC make NA62 a bona-fide kaon factory.

Particle flow in CMS

In hadron-collider experiments, jets are traditionally reconstructed by clustering photon and hadron energy deposits in the calorimeters. As the information from the inner tracking system is completely ignored in the reconstruction of jet momentum, the performance of such calorimeter-based reconstruction algorithms is seriously limited. In particular, the energy deposits of all jet particles are clustered together, and the jet energy resolution is driven by the calorimeter resolution for hadrons – typically 100%/√E in CMS – and by the non-linear calorimeter response. Also, because the trajectories of low-energy charged hadrons are bent away from the jet axis in the 3.8 T field of the CMS magnet, their energy deposits in the calorimeters are often not clustered into the jets. Finally, low-energy hadrons may even be invisible if their energies lie below the calorimeter detection thresholds.

In contrast, in lepton-collider experiments, particles are identified individually through their characteristic interaction pattern in all detector layers, which allows the reconstruction of their properties (energy, direction, origin) in an optimal manner, even in highly boosted jets at the TeV scale. This approach was first introduced at LEP with great success, before being adopted as the baseline for the design of future detectors for the ILC, CLIC and the FCC-ee. The same ambitious approach has been adopted by the CMS experiment, for the first time at a hadron collider. For example, the presence of a charged hadron is signalled by a track connected to calorimeter energy deposits. The direction of the particle is indicated by the track before any deviation in the field, and its energy is calculated as a weighted average of the track momentum and the associated calorimeter energy. These particles, which typically carry about 65% of the energy of a jet, are therefore reconstructed with the best possible energy resolution. Calorimeter energy deposits not connected to a track are either identified as a photon or as a neutral hadron. Photons, which represent typically 25% of the jet energy, are reconstructed with the excellent energy resolution of the CMS electromagnetic calorimeter. Consequently, only 10% of the jet energy – the average fraction carried by neutral hadrons – needs to be reconstructed solely using the hadron calorimeter, with its 100%/√E resolution. In addition to these types of particles, the algorithm identifies and reconstructs leptons with improved efficiency and purity, especially in the busy jet environment.
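The resolution arithmetic behind these percentages can be sketched in a toy model. Only the 100%/√E hadron-calorimeter term and the 65/25/10% energy sharing come from the text; the ~1% tracker and ~3%/√E ECAL resolutions are assumed round numbers, and confusion between overlapping deposits and constant terms are ignored.

```python
import math

# Toy resolution budget: why measuring only the neutral-hadron fraction
# with the hadron calorimeter improves the jet energy resolution.
# Assumed inputs: 1% tracker resolution, 3%/sqrt(E) ECAL resolution.
E_jet = 100.0   # example jet energy, GeV

# Calorimeter-only: the whole jet carries hadron-calorimeter resolution
sigma_calo = 1.00 * math.sqrt(E_jet)        # 100%/sqrt(E) -> 10 GeV

# Particle flow: split the jet by the average energy fractions
E_ch, E_ph, E_nh = 0.65 * E_jet, 0.25 * E_jet, 0.10 * E_jet
sigma_track = 0.01 * E_ch                   # ~1% tracker resolution (assumed)
sigma_ecal = 0.03 * math.sqrt(E_ph)         # ~3%/sqrt(E) ECAL (assumed)
sigma_hcal = 1.00 * math.sqrt(E_nh)         # 100%/sqrt(E) on 10% of the jet

sigma_pf = math.sqrt(sigma_track**2 + sigma_ecal**2 + sigma_hcal**2)
print(f"calorimeter-only: {sigma_calo / E_jet:.1%}, particle flow: {sigma_pf / E_jet:.1%}")
```

For a 100 GeV jet the toy model gives roughly 10% versus 3%, a threefold gain dominated by removing the charged-hadron energy from the hadron calorimeter's resolution budget.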

Key ingredients for the success of particle flow are excellent tracking efficiency and purity, the ability to resolve the calorimeter energy deposits of neighbouring particles, and unambiguous matching of charged-particle tracks to calorimeter deposits. The CMS detector, while not designed for this purpose, turned out to be well suited to particle flow. Charged-particle tracks are reconstructed with an efficiency greater than 90% and a rate of false track reconstruction at the per-cent level, down to a transverse momentum of 500 MeV. Excellent separation of charged-hadron and photon energy deposits is provided by the granular electromagnetic calorimeter and the large magnetic-field strength. Finally, the two calorimeters are placed inside the magnet coil, which minimises the probability for a charged particle to shower before reaching the calorimeters, and therefore facilitates the matching between tracks and calorimeter deposits.

After particle flow, the list of reconstructed particles resembles that provided by an event generator. It can be used directly to reconstruct jets and the missing transverse momentum, to identify hadronic tau decays, and to quantify lepton isolation. Figure 1 illustrates, for a given event, the accuracy of the particle reconstruction by comparing the jets of reconstructed particles to the jets of generated particles. Figure 2 further demonstrates the dramatic improvement in jet-energy resolution with respect to the calorimeter-based measurement. In addition, particle flow improves the jet angular resolution by a factor of three and reduces the systematic uncertainty in the jet-energy scale by a factor of two. The influence of particle flow is, however, far from restricted to jets: it brings, for example, similar improvements to missing-transverse-momentum reconstruction, and reduces the tau-identification background rate by a factor of three. This new approach to reconstruction has also paved the way for particle-level pile-up mitigation methods, such as the identification and masking of charged hadrons from pile-up before clustering jets or estimating lepton isolation, and the use of machine learning to estimate the contribution of pile-up to the missing transverse momentum.

The algorithm, optimised before the start of LHC Run I in 2009, remains essentially unchanged for Run II, because the reduced bunch spacing of 25 ns could be accommodated by a simple reduction of the time windows for the detector hits. The future CMS upgrades have been planned towards optimal conditions for particle-flow (and therefore physics) performance. In the first phase of the upgrade programme, a new pixel layer will reduce the rate of false charged-particle tracks, while the read-out of multiple layers with low-noise photodetectors in the hadron calorimeter will improve the neutral-hadron measurement that limits the jet-energy resolution. The second phase includes extended tracking, allowing for full particle-flow reconstruction in the forward region, and a new high-granularity endcap calorimeter with extended particle-flow capabilities. The future is therefore bright for the CMS particle-flow reconstruction concept.

• CMS Collaboration, “Particle flow and global event description in CMS”, in preparation.

AugerPrime looks to the highest energies

Since the start of its operations in 2004, the Auger Observatory has illuminated many of the open questions in cosmic-ray science. For example, it confirmed with high precision the suppression of the primary cosmic-ray energy spectrum for energies exceeding 5 × 10¹⁹ eV, as predicted by Kenneth Greisen, Georgiy Zatsepin and Vadim Kuzmin (the “GZK effect”). The collaboration has searched for possible extragalactic point sources of the highest-energy cosmic-ray particles ever observed, as well as for large-scale anisotropy of arrival directions in the sky (CERN Courier December 2007 p5). It has also published unexpected results about the specific particle types that reach the Earth from remote galaxies, referred to as the “mass composition” of the primary particles. The observatory has set the world’s most stringent upper limits on the flux of neutrinos and photons with EeV energies (1 EeV = 10¹⁸ eV). Furthermore, it contributes to our understanding of hadronic showers and interactions at centre-of-mass energies well above those accessible at the LHC, such as in its measurement of the proton–proton inelastic cross-section at √s = 57 TeV (CERN Courier September 2012 p6).

The current Auger Observatory

The Auger Observatory learns about high-energy cosmic rays from the extensive air showers they create in the atmosphere (CERN Courier July/August 2006 p12). These showers consist of billions of subatomic particles that rain down on the Earth’s surface, spread over a footprint of tens of square kilometres. Each air shower carries information about the primary cosmic-ray particle’s arrival direction, energy and particle type. An array of 1600 water-Cherenkov surface detectors, placed on a 1500 m grid covering 3000 km2, samples some of these particles, while fluorescence detectors around the observatory’s perimeter observe the faint ultraviolet light the shower creates by exciting the air molecules it passes through. The surface detectors operate 24 hours a day, and are joined by fluorescence-detector measurements on clear moonless nights. The duty cycle for the fluorescence detectors is about 10% that of the surface detectors. An additional 60 surface detectors in a region with a reduced 750 m spacing, known as the infill array, focus on detecting lower-energy air showers whose footprint is smaller than that of showers at the highest energies. Each surface-detector station (see image above) is self-powered by a solar panel, which charges batteries in a box attached to the tank (at left in the image), enabling the detectors to operate day and night. An array of 153 radio antennas, named AERA and spread over a 17 km2 area, complements the surface detectors and fluorescence detectors. The antennas are sensitive to coherent radiation emitted in the frequency range 30–80 MHz by air-shower electrons and positrons deflected in the Earth’s magnetic field.
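The quoted array figures are self-consistent, as a quick geometry check shows. It assumes the stations sit on a triangular grid, the usual layout for such arrays, which is not stated explicitly above.

```python
import math

# Quick geometry cross-check (assuming a triangular grid of stations):
# 1600 stations with 1500 m spacing should cover roughly 3000 km^2.
spacing_km = 1.5
cell_area = (math.sqrt(3) / 2) * spacing_km**2   # area per station on a triangular grid
total_area = 1600 * cell_area

print(f"covered area: {total_area:.0f} km^2")    # ~3100 km^2
```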

The motivation for AugerPrime and its detector upgrades

The primary motivation for the AugerPrime detector upgrades is to understand how the suppressed energy spectrum and the mass composition of the primary cosmic-ray particles at the highest energies are related. Different primary particles, such as γ-rays, neutrinos, protons or heavier nuclei, create air showers with different average characteristics. To date, the observatory has deduced the average primary-particle mass at a given energy from measurements provided by the fluorescence detectors. These detectors are sensitive to the number of air-shower particles versus depth in the atmosphere through the varying intensity of the ultraviolet light emitted along the path of the shower. The atmospheric depth of the shower’s maximum number of particles, a quantity known as Xmax, is deeper in the atmosphere for proton-induced air showers relative to showers induced by heavier nuclei, such as iron, at a given primary energy. Owing to the 10% duty cycle of the fluorescence detectors, the mass-composition measurements using the Xmax technique do not currently extend into the energy region E > 5 × 10¹⁹ eV where the flux suppression is observed. AugerPrime will capitalise on another feature of air showers induced by different primary-mass particles, namely, the different abundances of muons, photons and electrons at the Earth’s surface. The main goal of AugerPrime is to measure the relative numbers of these shower particles to obtain a more precise handle on the primary cosmic-ray composition with increased statistics at the highest energies. This knowledge should reveal whether the flux suppression at the highest energies is a result of a GZK-like propagation effect or of astrophysical sources reaching a limit in their ability to accelerate the highest-energy primary particles.
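The proton–iron Xmax separation can be illustrated with a toy superposition model: a nucleus of mass number A at energy E behaves like A independent nucleons of energy E/A, so heavier primaries reach shower maximum higher in the atmosphere. The 800 g/cm² proton reference depth and the 60 g/cm²-per-decade elongation rate below are assumed round numbers, not Auger measurements.

```python
import math

# Toy superposition-model sketch (assumed round-number parameters):
# why Xmax separates proton- from iron-induced showers.
D = 60.0   # assumed elongation rate, g/cm^2 per decade of energy

def xmax(E_eV, A, xmax_p_ref=800.0, E_ref=1e19):
    """Approximate depth of shower maximum in g/cm^2 for mass number A,
    referenced to an assumed 800 g/cm^2 for protons at 1e19 eV."""
    return xmax_p_ref + D * math.log10(E_eV / (A * E_ref))

E = 5e19  # an energy in the flux-suppression region
print(f"proton: {xmax(E, 1):.0f} g/cm^2, iron: {xmax(E, 56):.0f} g/cm^2")
```

The model predicts an iron shower maximum roughly 100 g/cm² shallower than a proton one at the same energy, which is the separation the fluorescence technique exploits; AugerPrime's muon versus electron/photon counting provides an independent, full-duty-cycle handle on the same composition information.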

The key to differentiating the ground-level air-shower particles lies in improving the detection capabilities of the surface array. AugerPrime will cover each of the 1660 water-Cherenkov surface detectors with planes of plastic-scintillator detectors measuring 4 m2. Surface-detector stations with scintillators above the Cherenkov detectors will allow the Auger team to determine the electron/photon versus muon abundances of air showers more precisely compared with using the Cherenkov detectors alone. The scintillator planes will be housed in light-tight, weatherproof enclosures, attached to the existing water tanks with a sturdy support frame, as shown above. The scintillator light will be read out with wavelength-shifting fibres inserted into straight extruded holes in the scintillator planes, which are bundled and attached to photomultiplier tubes. Also above, an image shows how the green wavelength-shifting fibres emerge from the scintillator planes and are grouped into bundles. Because the surface detectors operate 24 hours a day, the AugerPrime upgrade will yield mass-composition information for the full data set collected in the future.

The AugerPrime project also includes other detector improvements. The dynamic range of the Cherenkov detectors will be extended with the addition of a fourth photomultiplier tube. Its gain will be adjusted so that particle densities can be accurately measured close to the core of the highest-energy air showers. New electronics with faster sampling of the photomultiplier-tube signals will better identify the narrow peaks created by muons. New GPS receivers at each surface-detector station will provide better timing accuracy and calibration. A subproject of AugerPrime called AMIGA will consist of scintillator planes buried 1.3 m under the 60 surface detectors of the infill array. The AMIGA detectors are directly sensitive to the muon content of air showers, because the electromagnetic components are largely absorbed by the overburden.

The AugerPrime Symposium

In November 2015, the Auger Collaboration combined its biannual collaboration meeting in Malargüe, Argentina, with a meeting of its International Finance Board and dignitaries from many of its collaborating countries, opening the new phase of the experiment with an AugerPrime Symposium. The Finance Board endorsed the development and construction of the AugerPrime detector upgrades, and a renewed international agreement was signed in a formal ceremony for continued operation of the experiment for an additional 10 years. The observatory’s spokesperson, Karl-Heinz Kampert from the University of Wuppertal, said: “The symposium marks a turning point for the observatory and we look forward to the exciting science that AugerPrime will enable us to pursue.”

While continuing to collect extensive air-shower data with its current detector configuration and publishing new results, the Auger Collaboration is focused on finalising the design for the upgraded AugerPrime detectors and making the transition to the construction phase at the many collaborating institutions worldwide. Subsequent installation of the new detector components on the Pampa Amarilla is no small task, with the 1660 surface detectors spread across such a large area. Each station must be accessed with all-terrain vehicles moving carefully on rough desert roads. But the collaboration is up to the challenge, and AugerPrime is foreseen to be completed in 2018 with essentially no interruption to current data-taking operations.

• For more information, see auger.org/augerprime.

A global lab with a global mission

Our world has been transformed almost beyond recognition since CERN was founded in 1954. Particle physics has evolved to become a field that is increasingly planned and co-ordinated around the world. Collaboration across regions is growing. New players are emerging.

CERN is now a global lab, with a European core. This was recognised by CERN member states with the adoption, in 2010, of the geographical enlargement policy that opens up for greater participation from countries outside of Europe. Since then, we have welcomed Israel as a new member state. Romania and Serbia are entering the final stages of accession to membership, and Cyprus has just joined as an associate member in the pre-stage to membership. Since 2015, Pakistan and Turkey have been part of the wider CERN family as associate members, and several more states are applicants for associate membership.

Yet, the changes go much further than our scientific field and the inclusion of new members in our particle-physics family. Global governance is more complex than ever, with overlapping challenges and a greater number of interlocutors. Public opinion is being formed in new ways, driven by technological advances and political change. Global economic changes, with emerging countries gaining influence and clout, shape policy priorities in new ways – also in the scientific field. Support for fundamental science must be constantly nurtured, and partnerships are more necessary than ever.

It is a highly complex and fast-moving global policy space. CERN – and indeed all large labs and research infrastructures – needs to react to and act within this evolving context. The challenge for all of us is to advance in a globally co-ordinated manner, so as to be able to carry out as many exciting and complementary projects as possible, while ensuring long-term support for fundamental science as the competition for resources becomes ever fiercer on all levels.

Global impact

It is against this background that the Director-General of CERN has now, for the first time, established an International Relations (IR) sector. The sector brings together entities within the Organization that are working on different aspects of our international engagement, and it provides a unique opportunity for CERN to strengthen the global dimension of its work.

The IR sector has three overarching objectives. First, to help strengthen CERN’s position as a global centre of excellence in science and research through sustained support from all stakeholders. Second, to contribute to shaping a global policy agenda that supports fundamental research and includes science perspectives more generally. And third, to connect CERN with people across the world, inspiring scientific curiosity and understanding.

The immediate priorities for the sector include reinforcing dialogue with our member states, setting future directions for geographical enlargement, and strengthening CERN’s voice in global policy debates.

Let me share a couple of the initiatives that are under way.

We have already expanded the interaction with member states with the establishment of thematic forums that enable better dialogue, and new forums will be created in the coming months. We have also begun reflecting on how to focus geographical enlargement in a way that fully supports and reinforces our long-term scientific aspirations. It is critical that enlargement is not seen as an end in itself; it is intended to underpin CERN’s scientific objectives through a broader and more diverse support base to strengthen our core scientific work.

Fundamental science

Direct engagement with people across the world is a key aspect of our work. With a newly integrated Education, Communications and Outreach group, we will be able to reach out in a more co-ordinated manner – to stimulate interest in and support for fundamental science, among teachers, students, global science policy makers and the many others around the globe who follow our work. For those of us who work with fundamental science every day, the value and impact seem obvious. But it isn’t always that obvious beyond our own corridors. We need to get better at demonstrating how scientific advances impact on the lives of people across the world, every single day, often in surprising but deeply profound ways.

While the IR sector as an institutional construct is new, we are building on a proud, long-standing tradition of inclusive international collaboration in pursuit of a common goal: expanding our collective knowledge. Exploring the frontiers of knowledge has always thrived on ideas, input and initiatives from across the world.

It is truly a privilege to be part of the collective effort that is the CERN IR sector, to take that work forward.

The Composite Nambu–Goldstone Higgs

By Giuliano Panico and Andrea Wulzer
Springer

978-3-319-22617-0

This book provides a description of the composite Higgs scenario as a possible extension of the Standard Model (SM). The SM is by now the established theory of electroweak and strong interactions, but it is not the fundamental theory of nature. It is just an effective theory, an approximation of a more fundamental theory, able to describe nature under specific conditions.

There are a number of open theoretical issues, such as: the existence of gravity, for which no complete high-energy description is available; neutrino masses and oscillations; and the hierarchy problem associated with the Higgs boson mass (why does the Higgs boson have so small a mass? Or, in other words, why is it so much lighter than the Planck mass?).

Among the possible solutions to the hierarchy problem, the scenario of a composite Higgs boson is a quite simple idea that offers a plausible description of the experimental data. In this picture, the Higgs must be a (pseudo-) Nambu–Goldstone boson, as explained in the text.

The aim of this volume is to describe the composite Higgs scenario, to assess its likelihood of being a model actually realised in nature – to the best of present-day theoretical and experimental understanding – and to identify possible experimental manifestations of this scenario (which would influence future research directions). The tools employed for the formulation of the theory and for the study of its implications are also discussed.

Thanks to the pedagogical nature of the text, this book could be useful for graduate students and non-specialist researchers in particle, nuclear and gravitational physics.

Chern–Simons (Super) Gravity – 100 Years of General Relativity (vol. 2)

By Mokhtar Hassaine and Jorge Zanelli
World Scientific


Written on the basis of a set of lecture notes, this book provides a concise introduction to Chern–Simons (super) gravity theories accessible to graduate students and researchers in physics and mathematics.

Chern–Simons (CS) theories are gauge-invariant models that could include gravity in a consistent way. As a consequence, they are very interesting to study because they can open up the way to a common description of the four fundamental interactions of nature.

As is well known, three such interactions are described by the Standard Model as Yang–Mills (YM) theories, which are based on the principle of gauge invariance (requiring a correlation between particles at different locations in space–time). The particular form of these YM interactions makes them consistent with quantum mechanics.

On the other hand, gravitation – the fourth fundamental force – is described by general relativity (GR), which is also based on a gauge principle, but cannot be quantised following the same steps that work in the YM case.

Gauge principles suggest that a viable path is the introduction of a peculiar, yet generic, modification of GR, consisting of the addition of a CS term to the action.

Besides being mathematically elegant, CS theories have a set of properties that make them intriguing and promising: they are gauge-invariant, scale-invariant and background-independent; they have no dimensionful coupling constants; and all constants in the Lagrangian are fixed rational coefficients that cannot be adjusted without destroying the gauge invariance.

Wisdom of the Martians of Science: In Their Own Words with Commentaries

By Balazs Hargittai and Istvan Hargittai
World Scientific


The “Martians” of science that the title refers to are five Jewish-Hungarian scientists who distinguished themselves with significant discoveries in fundamental science that helped shape the modern world. These great scientists are John von Neumann, a pioneer of the modern computer; Theodore von Kármán, known as the scientist behind the US Air Force; Leo Szilard, initiator of the development of nuclear weapons; Nobel laureate Eugene P Wigner, who was the world’s first nuclear engineer; and Edward Teller, colloquially known as “the father of the hydrogen bomb”.

Born to upper-middle-class Jewish families and raised in the sophisticated atmosphere of liberal Budapest, they were forced by anti-Semitism to leave their homeland, emigrating first to Germany and ultimately to the US, which became their new home country, to the point that they devoted themselves to its defence.

The book is a follow-up to a previous title, The Martians of Science, which drew the profiles of these five scientists and presented their contributions to their fields of research. The aim of this second volume is to show the wisdom of the Martians by presenting their thoughts and ideas in their own words and putting them into context. Through direct quotes from the five protagonists and commentaries from other people who knew them, the authors offer an insight into the thinking of these great minds, which they find instructive and entertaining. They are witty, provocative, intriguing and, as the authors say, never boring.

Excitons and Cooper Pairs: Two Composite Bosons in Many-Body Physics

By Monique Combescot and Shiue-Yuan Shiau
Oxford University Press


This book deals with two major but different fields of condensed-matter physics, semiconductors and superconductors, starting from the observation that the key particles of these materials, excitons and Cooper pairs respectively, are actually composite bosons. The authors are not interested in describing the physics of these materials, but in better understanding how composite bosons made of two fermions interact and, more specifically, in identifying the characteristics of their fermionic components that control many-body effects at a microscopic level.

The many-body physics of elementary fermions and bosons has largely been studied using Green’s functions, with Feynman diagrams as a visual aid. But these tools are not easily applicable to the many-body physics of composite bosons made of two fermions. Consequently, a new formalism has been developed, and a new type of graphic representation, the “Shiva diagrams” (so named because their multi-arm structure is reminiscent of the Hindu god Shiva), adopted.

After two sections dedicated to the mathematical and physical foundations of Wannier and Frenkel excitons and of Cooper pairs, the book continues with a discussion of composite particles made of excitons. In the fourth and last part, the authors look at some aspects of the condensation of composite bosons, which they call “bosonic condensation”, and which differs from the Bose–Einstein condensation of free elementary bosons. Other important issues are discussed, such as the application of the Pauli exclusion principle to the fermionic components of bosonic particles.

Although suitable for advanced undergraduate and graduate students in physics without a specific background in the field, this text will also appeal to researchers in condensed-matter physics who wish to gain insight into the many-body physics of these two composite bosons.

Effective Field Theories

By Alexey A Petrov and Andrew E Blechman
World Scientific


The importance of effective field theory (EFT) techniques cannot be over-emphasised. In fact, all theories are, in some sense, effective. A book that discusses these techniques, groups the different cases in which EFTs are needed, and provides numerous examples, is therefore most welcome.

After illustrating the ubiquitousness of EFTs with a discussion of Newtonian gravity, superconductivity, and the Euler–Heisenberg theory of photon–photon scattering below the electron mass, the book splits into different directions to examine qualitatively diverse situations where EFTs are used. Fermi theory, chiral perturbation theory, heavy-quark effective theory, non-relativistic quantum electrodynamics (and chromodynamics), and even the EFT for physics beyond the Standard Model, are all discussed in a common language that allows the reader to find analogies and appreciate the different physics of these fundamentally different systems.

Soft collinear effective theory (SCET) and non-relativistic general relativity provide a different context in which EFTs are useful as a computational tool. The text exploits the intuition developed in the previous examples to identify the relevant expansion parameters and to organise hierarchically the different contributions to the scattering amplitudes.

Admittedly, the book focuses on high-energy physics topics, neglecting many applications in soft and condensed matter.

The volume is very well written and flows smoothly, and it includes a rich introduction to the main topics necessary to understand and use EFTs, such as symmetries, renormalisation-group methods and anomalies. As an advanced quantum field theory (QFT) book, it relies on the reader’s prior knowledge and concentrates on the relevant issues; the introduction is written in a practical way, supplying the EFT jargon and highlighting the differences between renormalisable and non-renormalisable theories.

The tone of the book makes it suitable not only for practitioners in the field, but also for students looking for a broad perspective on different QFT topics – the common EFT language providing the thread – and for teachers searching for analogies and similarities between advanced and classical topics.

Introduction to Soft-Collinear Effective Theory

By Thomas Becher, Alessandro Broggio and Andrea Ferroglia
Springer
Also available at the CERN bookshop

41fPk0c2IfL._SX331_BO1,204,203,200_

The volume provides an essential and pedagogical introduction to soft-collinear effective theory (SCET), one of the low-energy effective field theories (EFTs) of the Standard Model developed in the last two decades. EFTs are used when the problem being tackled requires the low-energy contributions to be separated from the high-energy part before it can be solved.

SCET has already been applied to a large variety of processes, from B-meson decays to jet production at the LHC. As a consequence, the need was felt for a self-contained text that could make this subject easily accessible to students, as well as to researchers who are not experts in the subject. Nevertheless, a background in quantum field theories and perturbative QCD is a prerequisite for the book.

The basics of the construction of effective theory are presented in detail. The expansion of Feynman diagrams describing the production of energetic particles is described, followed by the construction of an effective Lagrangian, which produces the different terms that contribute to the expanded diagrams. The case of a scalar theory is considered first, then the construction is extended to the more complex case of QCD.

To show the method at work, the authors have included some example applications from collider physics, the field in which SCET has been applied most in recent years. In particular, the soft-gluon resummation for the inclusive Drell–Yan cross-section in proton–proton collisions is discussed, and the SCET formalism is used to perform transverse-momentum resummation. In addition, the application of SCET methods to processes with highly energetic particles in many directions is analysed, and the structure of infrared singularities in n-point gauge-theory amplitudes is derived.
