This book provides a comprehensive overview of the physics of the strong interaction, which is necessary to analyse and understand the results of current experiments at particle accelerators. In particular, the authors aim to show how the framework of perturbation theory is applied, in the context of the strong interaction, to the prediction and correct interpretation of signals and backgrounds at the Large Hadron Collider (LHC).
The book consists of three parts. In the first, after a brief introduction to the LHC and the present hot topics in particle physics, a general picture of high-energy interactions involving hadrons in the initial state is developed. The relevant terminology and techniques are reviewed and worked out using standard examples.
The second part is dedicated to a more detailed discussion of various aspects of the perturbative treatment of the strong interaction in hadronic reactions. Finally, in the last part, experimental findings are confronted with theoretical predictions.
Primarily addressed to graduate students and young researchers, this book can also serve as a helpful reference for advanced scientists. Indeed, it provides the right level of knowledge for theorists to understand data in more depth and for experimentalists to recognise the advantages and disadvantages of different theoretical descriptions.
The reader is assumed to be familiar with concepts of particle physics such as the calculation of Feynman diagrams at tree level and the evaluation of cross sections through analytical phase-space integration. However, a short review of these topics is given in the appendices.
In this book, popular-science writer Paul Nahin presents a collection of everyday situations in which the application of simple physical principles and a bit of mathematics can help us understand how things work. His aim is to bring these scientific disciplines closer to the layperson and, at the same time, reveal the wonder lying behind many aspects of reality that are often taken for granted.
The problems presented and explained are very diverse, ranging from how to extract more energy from renewable sources and how best to catch a baseball, to how to measure gravity in one’s garage and why the sky is dark at night. These topics are treated in an informal and entertaining way, but without forgoing the maths. In fact, as the author himself highlights, he is interested in keeping the discussions simple, but not so simple that they are simply wrong. The whole point of the book is actually to show how physics and some calculus can explain many of the things that we commonly encounter.
Engaging and humorous, this text will appeal to non-experts with some background in maths and physics. It is suited to students at any level beyond the last years of high school, as well as to practising scientists who might discover alternative, clever ways to solve (and explain) everyday physics problems.
The observation of the night sky is as old as humankind itself. Cosmology, however, has only achieved the status of “science” in the past century or so. In this book, Gott accompanies the reader through the birth of this new science and our growing understanding of the universe as a whole, starting from the observation by Hubble and others in the 1920s that distant galaxies are receding from us. This was one of the most important discoveries in the history of science because it shifted the position of humans farther away from the centre of the cosmos and showed that the universe is not eternal, but had a beginning. The philosophical implications were hard to digest, even for Einstein, who invented the cosmological constant such that his equations of general relativity could have a static solution.
Following the first observations of distant galaxies, astronomers began to draw a comprehensive map of the observable universe. They played the same role as the explorers travelling around our planet, except that they could only sit where they were and receive light from distant objects, like faded photographs of a lost past.
After an introduction to the early days of cosmology, the book becomes more personal, and the reader feels drawn into the excitement of actually doing research. Gott’s account of cosmology is given through the lens of his own research, making the book slightly biased towards the physics of the large-scale structure of the universe, but also more focused and definitely captivating for the reader.
The overarching theme of the book is the quest to understand the shape of the “cosmic web”, which is the distribution of galaxies and voids in a universe that is homogeneous only on very large scales. Tiny fluctuations in the matter density, ultimately quantum in origin, grow via gravity to weave the web.
In graduate school, under the supervision of Jim Gunn, Gott wrote his most cited paper, proposing a mathematical model of the gravitational collapse of small density fluctuations. Here, the readers are given a flavour of the way real research is carried out. The author describes in detail the physics involved in the topic, as well as how the article was born and completed and how it took on a life of its own to become a classic.
The author’s investigation of the large-scale structure intertwines with his passion for topology. He was fascinated by polyhedrons with an infinite number of faces, which were the subject of an award-winning project that he developed in high school and of his first scientific article published in a mathematics journal.
At the time, when astronomical surveys were covering only a small portion of the sky, it was unclear how the cosmic structures assembled. American cosmologists thought that galaxies gathered in isolated clusters floating in a low-density universe, like meatballs in a soup. On the other hand, Soviet scientists maintained that the universe was made up of a connected structure of walls and filaments, where voids appear like holes in a Swiss cheese.
Does the 3D map of the universe resemble a meatball stew or a Swiss cheese? Neither, Gott says. With his collaborators, he proposed that the cosmic web is topologically like a sponge, where voids and galaxy clusters form two interlocking regions, much like the infinite polyhedrons Gott studied in his youth.
The reader is given clear and mathematically precise descriptions of the methods used to demonstrate the idea, which was later confirmed by deeper and larger astronomical observations (in 3D), and by the analysis of the cosmic microwave background (in 2D). By that time, we had the theory of cosmological inflation to explain a few of the puzzles regarding the origin of the universe. Remarkably, inflation predicts tiny quantum fluctuations in the fabric of space–time, giving rise to a symmetry between higher and lower density perturbations, leading to the observed sponge-like topology.
Therefore, by the end of the 20th century, the pieces of our understanding of the universe were falling into place and, in 1998, the discovery that the universe is accelerating allowed us to start thinking about the ultimate fate of the cosmos. This is the subject of the last chapter, an interesting mix of sound predictions (for the next trillion years) and speculative ideas (in a future so far away that it is hard to think about), ending the book with a question – rather than an exclamation – mark.
This is not only a good popular science book that achieves a balance between mathematical precision and a layperson’s intuition. It is also a text about the day-to-day life of a researcher, describing details of how science is actually done, the excitement of discovery and the disappointment of following a wrong path. It is a book for readers curious about cosmology, for researchers in other fields, and for young scientists, who will be inspired by an elder colleague to pursue the fascinating exploration of nature.
This book is an excellent source for those interested in learning the basic features of the Standard Model (SM) of particle physics – also known as the Glashow–Weinberg–Salam (GWS) model – without many technical details. It is a remarkably accessible book that can be used for self-study by advanced undergraduates and beginning graduate students. All the basic building blocks are provided in a self-contained manner, so that the reader can acquire a good knowledge of quantum mechanics and electromagnetism before reaching the boundaries of the SM, which is the theory that best describes our knowledge of the fundamental interactions.
The topics that the book deals with include special relativity, basic quantum field theory and the action principle, continuous symmetries and Noether’s theorem, as well as basic group theory – in particular, the groups needed in the SM: U(1), SU(2) and SU(3). It also covers the relativistic treatment of fermions through the Dirac equation, the quantisation of the electromagnetic field and a first look at the theory of gauge transformations in a familiar context. This is followed by a reasonable account of quantum electrodynamics (QED), the most accurately tested theory so far. The quantisation rules are reviewed with clarity and a number of useful and classic computations are presented to familiarise the reader with the technical details associated with the computation of decay rates, scattering amplitudes, phase-space volumes and propagators. The book also provides an elementary description of how to construct and compute Feynman rules and diagrams, which are later applied to electron–electron scattering and electron–positron annihilation, and to how the latter relates to Compton or electron–photon scattering. This lays out the basic computational tools to be used later in the sections about electroweak and strong interactions.
At this point, before starting a description of the SM per se, the author briefly describes the historical Fermi model and then presents the main actors. The reader is introduced to the lepton doublets (including the electron, the muon, the tau and their neutrinos), the weak charged and neutral currents, and the vector bosons that carry the weak force (the Ws and the Z). This is followed by an analysis of electroweak unification and the introduction of the weak angle, indicating how the electromagnetic interaction sits inside the weak isospin and hypercharge. Then, the author deals with the quark doublets and the symmetry-breaking pattern, using the Brout–Englert–Higgs mechanism, which gives mass to the vector bosons and permits the accommodation of masses for the quarks and leptons. We also learn about the Cabibbo–Kobayashi–Maskawa mixing matrix, neutrino oscillations, charge and parity (CP) violation, the solar neutrino problem, and so on. To conclude, the author presents the SU(3) gauge theory of the strong interactions and provides a description of some theories that go beyond the SM, as well as a short list of important open problems. All this is covered in just over 250 pages: a remarkable achievement. In addition, the book includes many interesting and useful computations.
This work is a very welcome addition to the modern literature in particle physics and I certainly recommend it, in particular for self-study. I hope, though, that in the second edition the correct Weinberg is portrayed on p184… an extremely hilarious blunder.
Dimensional analysis is a mathematical technique that allows one to deduce the relationship between different physical quantities from the dimensions of the variables involved in the system under study. It provides a method to simplify – when possible – the resolution of complex physical problems.
This short book provides an introduction to dimensional analysis, covering its history, methods and formalisation, and shows its application to a number of physics and engineering problems. As the author explains, the foundation principle of dimensional analysis is essentially a more precise version of the well known rule against “adding apples and oranges”; nevertheless, the successful application of this technique requires physical intuition and some experience. Most of the time it does not lead to the solution of the problem, but it can provide important hints about the direction to take, constraints on the relationship between physical variables and constants, or a confirmation of the correctness of calculations.
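The bookkeeping behind the method can be made concrete with the classic pendulum example (a standard illustration, not taken from the book): requiring the product length^a · g^b · mass^c to have the dimensions of time turns each base dimension (M, L, T) into a linear constraint on the unknown exponents, giving a small linear system. The Python sketch below, using NumPy, solves it.

```python
import numpy as np

# Dimensional analysis as linear algebra: find exponents (a, b, c) such that
#     length^a * g^b * mass^c
# has the dimensions of time, i.e. the period of a simple pendulum.
# Rows are the base dimensions (M, L, T); columns are length, g, mass.
A = np.array([
    [0.0,  0.0, 1.0],   # mass exponents of length, g, mass
    [1.0,  1.0, 0.0],   # length exponents
    [0.0, -2.0, 0.0],   # time exponents (g has dimensions L T^-2)
])
target = np.array([0.0, 0.0, 1.0])  # dimensions of time: M^0 L^0 T^1

a, b, c = np.linalg.solve(A, target)
# a = 0.5, b = -0.5, c = 0  =>  period ∝ sqrt(length / g), independent of mass
```

The solution a = 1/2, b = −1/2, c = 0 recovers the familiar T ∝ √(length/g) and shows, without solving any equation of motion, that the period cannot depend on the mass of the bob – exactly the kind of constraint, short of a full solution, that the author describes.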
After a chapter covering the basics of the method and some historical notions about it, the book offers application examples of dimensional analysis in several areas: mechanics, hydrodynamics, thermal physics, electrodynamics and quantum physics. Through the solution of these real problems, the author shows the possibilities and limitations of this technique. In the final chapter, dimensional analysis is used to take a few steps in the direction of uncovering the dimensional structure of the universe.
Aimed primarily at physics and engineering students in their first university courses, the book can also be useful to more experienced students and professionals. Being concise and providing problems with solutions at the end of each chapter, it is ideal for self-study.
This textbook aims to provide a concise introduction to string theory for undergraduate and graduate students.
String theory was first proposed in the 1960s and has become one of the main candidates for a possible quantum theory of gravity. While going through alternating phases of highs and lows, it has influenced numerous areas of physics and mathematics, and many theoretical developments have sprung from it.
It was the intention of the author to include in the book just the fundamental concepts and tools of string theory, rather than to be exhaustive. As Schomerus states, there are already various textbooks available that cover this field in detail, from its roots to its most modern developments, but these can be unfocused and overwhelming for students approaching the topic for the first time.
The volume is composed of a brief historical introduction and two parts, each including various chapters. The first part is dedicated to the dynamics of strings moving in a flat Minkowski space. While these string theories do not describe nature, their study is helpful to understand many basic concepts and constructions, and to explore the relation between string theory and field theory on a two-dimensional “world”.
The second part deals with string theories for four-dimensional physics, which can be relevant to the description of our universe. In particular, the motion of superstrings on backgrounds in which some of the dimensions are curled up is studied (this phenomenon is called compactification). This part, in turn, comprises three sections, each devoted to a different subtopic.
First, the author discusses conformal field theory, also dealing with the SU(2) Wess–Zumino–Novikov–Witten model. Then, he moves on to treat Calabi–Yau spaces and the associated string compactification. Finally, he focuses on string dualities, giving special emphasis to the AdS/CFT correspondence and its application to gauge theory.
This book, the 27th volume in the “Advanced Series on Directions in High Energy Physics”, presents a robust and accessible summary of 60 years of technological development at CERN. Over this period, the foundations of today’s understanding of matter, its fundamental constituents and the forces that govern its behaviour were laid and, piece by piece, the Standard Model of particle physics was established. All this was possible thanks to spectacular advances in the field of particle accelerators and detectors, which are the focus of this volume. Each of the 12 chapters is built using contributions from the physicists and engineers who played key roles in this great scientific endeavour.
After a brief historical introduction, the story starts with the Synchrocyclotron (SC), CERN’s first accelerator, which allowed – among other things – innovative experiments on pion decay and a measurement of the anomalous magnetic dipole moment of the muon. While the SC was a development of techniques employed elsewhere, the Proton Synchrotron (PS), the second accelerator constructed at CERN and now the cornerstone of the laboratory’s accelerator complex, was built using the new and “disruptive” strong-focusing technique. Fast extraction from the PS combined with the van der Meer focusing horn were key to the success of a number of experiments with bubble chambers and, in particular, to the discovery of the weak neutral current using the large heavy-liquid bubble chamber Gargamelle.
The book goes on to present the technological developments that led to the discovery of the Higgs boson by the ATLAS and CMS collaborations at the LHC, and the study of heavy-quark physics as a means to understand the dynamics of flavour and the search for phenomena not described by the SM. The taut framework that the SM provides is evident in the concise reviews of the experimental programme of LEP: the exquisitely precise measurements of the properties of the W and Z bosons, as well as of the quarks and the leptons – made by the ALEPH, DELPHI, OPAL and L3 experiments – were used to demonstrate the internal consistency of the SM and to correctly predict the mass of the Higgs boson. An intriguing insight into the breadth of expertise required to deliver this programme is given by the discussion of the construction of the LEP/LHC tunnel, where the alignment requirements were such that the geodesy needed to account for local variations in the gravitational potential and measurements were verified by observations of the stars.
The rich scientific programmes of the LHC, and of LEP before it, have their roots in the systematic development of accelerator and detector techniques. The accelerator complex at CERN has grown out of the SC.
The book concisely presents the painstaking work required to deliver the PS, the Intersecting Storage Rings (ISR) and the Super Proton Synchrotron (SPS). Experimentation at these facilities established the quark-parton model and quantum chromodynamics (QCD), demonstrated the existence of charged and neutral weak currents, and pointed out weaknesses in our understanding of the structure of the nucleon and the nucleus. The building of the SPS was expedited by the decision to use single-function magnets that enabled a staged approach to its construction. The description of the technological innovations that were required to realise the SPS includes the need for a distributed, user-friendly control-and-monitoring system. A novel solution was adopted that exploited an early implementation of a local-area network and for which a new, interpretative programming language was developed.
The book also describes the introduction of the new isotope separation on-line (ISOL) technique, which allows highly unstable nuclei to be studied, and its evolution into research on nuclear matter in extreme conditions at ISOLDE and its upgrades. The study of heavy-ion collisions in fixed-target experiments at the SPS, and now in the ALICE experiment at the LHC, has its roots in the early nuclear-physics programme as well. The SC, and later the PS, were ideal tools to create the intense low-energy beams used to test fundamental symmetries, to search for rare decays of hadrons and leptons, and to measure the parameters of the SM.
Reading this chronicle of CERN’s outstanding record, I was struck by its extraordinary pedigree of innovation in accelerator and detector technology. Among the many examples of groundbreaking innovation discussed in the book is the construction of the ISR which, by colliding beams head on, opened the path to today’s energy frontier. The ISR programme created the conditions for pioneering developments such as the multi-wire proportional chamber and the transition radiation detector, as well as large-acceptance magnetic spectrometers for colliding-beam experiments. Many of the technologies that underpin the success of the proton–antiproton (SppS) collider, LEP and the LHC were innovations pioneered at the ISR. For example, the discovery of the W and Z bosons at the SppS relied on the demonstration of stochastic cooling and antiproton accumulation. The development of these techniques allowed CERN to establish its antiproton programme, which encompassed the search for new phenomena at the energy frontier, as well as the study of discrete symmetries using neutral kaons at CPLEAR and the detailed study of the properties of antimatter.
The volume includes contributions on the development of the computing, data-handling and networking systems necessary to maximise the scientific output of the accelerator and detector facilities. From the digitisation and handling of bubble- and spark-chamber images in the SC era, to the distributed processing possible on the worldwide LHC computing grid, the CERN community has always developed imaginative solutions to its data-processing needs.
The book concludes with thoughtful chapters that describe the impact on society of the technological innovations driven by the CERN programme, the science and art of managing large, technologically challenging and internationally collaborative projects, and a discussion of the R&D programme required to secure the next 60 years of discovery.
The contributions from leading scientists of the day collected in this relatively slim book document CERN’s 60-year voyage of innovation and discovery, the repercussions of which vindicate the vision of those who drove the foundation of the laboratory – European in constitution, but global in impact. The spirit of inclusive collaboration, which was a key element of the original vision for the laboratory, together with the aim of technical innovation and scientific excellence, are reflected in each of the articles in this unique volume.
By Stephen Peggs and Todd Satogata
Cambridge University Press
This concise book provides an overview of accelerator physics, a field that has grown rapidly since its inception and is progressing in many directions. Particle accelerators are becoming more and more sophisticated and rely on diverse technologies, depending on their application.
With a pedagogical approach, the book presents both the physics of particle acceleration, collision and beam dynamics, and the engineering aspects and technologies that lie behind the effective construction and operation of these complex machines. After a few introductory theoretical chapters, the authors delve into the different components and types of accelerators: RF cavities, magnets, linear accelerators, etc. Throughout, they also show the connections between accelerator technology and the parallel development of computational capability.
This text is aimed at university students at graduate or late undergraduate level, as well as accelerator users and operators. An introduction to the field, rather than an exhaustive treatment of accelerator physics, the book is conceived to be self-contained (to a certain extent) and to provide a strong starting point for more advanced studies on the topic. The volume is completed by a selection of exercises at the end of each chapter and an appendix with important formulae for accelerator design.
Since the analysis of data from physics experiments is mainly based on statistics, all experimental physicists have to study this discipline at some point in their career. It is common, however, for students not to learn it in a specific advanced university course but in bits and pieces during their studies and subsequent career.
This textbook aims to present all of the basic statistics tools required for data analysis, not only in particle physics but also astronomy and any other area of the physical sciences. It is targeted towards graduate students and young scientists and, since it is not intended as a text for mathematicians or statisticians, detailed proofs of many of the theorems and results presented are left out.
After a philosophical introduction on the scientific method, the text is presented in three parts. In the first, the foundational concepts and methods of probability and statistics are provided, considering both the frequentist and Bayesian interpretations. The second part deals with the basic and most commonly used advanced techniques for measuring particle-production cross-sections, correlation functions and particle identification. Much attention is also given to the notions of statistical and systematic errors, as well as the methods used to unfold or correct data for the instrumental effects associated with measurements. Finally, in the third section, introductory techniques in Monte Carlo simulations are discussed, focusing on their application to experimental data interpretation.
In February 2016 the LIGO and Virgo collaborations announced the first detection of gravitational waves from the collision of two black holes. It was a splendid result for a quest that started about five decades ago with the design and construction of small prototypes of laser interferometers. Since this first discovery, at least five other binary black-hole mergers have been found and gravitational waves from two colliding neutron stars have also been detected. Gravitational-wave science is now booming, literally, and will continue to do so for a long time. The upcoming observational progress in this field will impact the development of astrophysics, cosmology and, perhaps, particle physics.
Govert Schilling is an award-winning science journalist with a special interest in astronomy and space science. In this book, he guides the reader through the development of gravitational-wave astronomy, from its very origin deep in the early days of general relativity up to the first LIGO discovery. He does so, not only by delving into the key moments of this wonderful piece of history, but also by explaining the main physical and engineering ideas that made it possible.
Moreover, Schilling does a very good job discussing the scientific context in which these events and ideas arose. Far from being a mere collection of events, the book offers the reader a journey that goes beyond its title, exploring and connecting topics such as the cosmic microwave background and its polarisation, radio astronomy and pulsars, supernovae, primordial inflation, gamma-ray bursts and even dark energy. In addition, the last few chapters of the book discuss the science that may come next, when new interferometers will join LIGO and Virgo in this adventure, observing the sky from Earth (e.g. KAGRA) and space (LISA).
The book clearly targets a non-specialist readership and will surely be enjoyed by people without prior knowledge of astrophysics, gravitational waves or cosmology. However, this does not mean that readers better versed in these topics will find the book uninspiring. Schilling addresses the reader in a direct, entertaining, almost colloquial manner, managing to explain complex concepts in a few paragraphs while keeping the science sound. Besides, the book gives an interesting (and sometimes surprising) glimpse into the lives, aspirations and mutual interactions of the scientific pioneers in the field of gravitational waves.
If an objection had to be found, it would be that in the first chapter the author belittles general relativity by introducing it as “the theory behind [the movie] Interstellar”. If this scares you, read on and fear nothing. As always happens, science outshines fiction, and the rest of the book proves why this is so.