Established in 2007 to fund frontier-research projects, the European Research Council (ERC) has quickly become a fundamental instrument of science policy at the European level, as well as a quality standard for academic research. This book traces the history of the creation and development of the ERC, drawing on the first-hand knowledge of the author, who was scientific adviser to the president of the ERC for four years. It covers the period from the early 2000s – when a group of strong-minded scientists pushed the idea of allocating (more) money to research projects selected for the quality of the proposals, judged by independent, competent and impartial reviewers – to 2013, when the first ERC programme cycle concluded.
The author is particularly interested in the politics behind those events and shows how the ERC became a reality once the European Commission decided to support it, taking a much more strategic, planned and technical approach. He also describes the way that the ERC was implemented and the creation of its scientific council, discusses the “hybrid” nature of the ERC – being somewhere between a programme and an institution – and the consequent frictions in its early days, as well as the process of establishing a procedure for selecting applications for funding.
While telling the story of the ERC from a critical perspective and examining its challenges and achievements, the book also offers a view of the relationship between science and policy in the 21st century.
The Many Faces of Maxwell, Dirac and Einstein Equations
By Waldyr A Rodrigues Jr and Edmundo Capelas de Oliveira
Springer
In theoretical physics, hardly anything is better known than the Einstein, Maxwell and Dirac equations. The Dirac and Maxwell equations (as well as the analogous Yang–Mills equations) form the basis of the modern description of matter via the electrodynamic, weak and strong interactions, while Einstein’s equations of special and general relativity are the foundations of the theory of gravity. Taken together, these three equations cover scales from the subatomic to the large-scale universe, and are the pillars on which the standard models of cosmology and particle physics are built. Although they constitute core information for theoretical physicists, they are rarely, if ever, presented together.
This book aims to remedy the situation by providing a full description of the Dirac, Maxwell and Einstein equations. The authors go further, however, by presenting the equations in several different forms. Their aim is twofold. On one hand, different expressions of these famous formulae may help readers to view a given equation from new and possibly more fruitful perspectives (when the Maxwell equations are written in the form of the Navier–Stokes equations, for instance, they allow a hydrodynamic interpretation of the electrodynamic field). On the other hand, casting different equations in similar forms may shed light on the quest for unification – as happens, for example, when the authors rewrite Maxwell’s equations in Dirac-like form and use this to launch a digression on supersymmetry.
Another feature of the book concerns concepts in differential geometry that are widely used in mathematics but about which there is little knowledge in theoretical physics. An example is the torsion of space–time: general differential manifolds are naturally equipped with a torsion in addition to the well-known curvature, and torsion also enters into the description of Lie algebras, yet the torsional completion of Einstein gravity, for instance, has been investigated very little. In the book, the authors take care of this issue by presenting the most general differential geometry of space–time with curvature and torsion. They then use this to understand conservation laws, more specifically to better grasp the conditions under which these conservation laws may or may not fail. Trivially, a genuine conservation law expresses the fact that a certain quantity is constant over time, but in differential geometry there is no clear and unambiguous way to define an absolute time.
As an additional important point, the book contains a thorough discussion about the role of active transformations for physical fields (to be distinguished from passive transformations, which are simply a change in co-ordinates). Active transformations are fundamental, both to define the transformation properties of specific fields and also to investigate their properties from a purely kinematic point of view without involving field equations. A section is also devoted to exotic or new physical fields, such as the recently introduced “ELKO” field.
Aside from purely mathematical treatments, the book contains useful comments about fundamental principles (such as the equivalence principle) and physical effects (such as the Sagnac effect). The authors also pay attention to clarifying certain erroneous concepts that are widespread in physics, such as assigning a nonzero rest mass to the photon.
In summary, the book is well suited for anyone who has an interest in the differential geometry of twisted–curved space–time manifolds, and who is willing to work on generalisations of gravity, electrodynamics and spinor field theories (including supersymmetry and exotic physics) from a mathematical perspective. Perhaps the only feature that might discourage a potential reader, which the authors themselves acknowledge in the introduction, is the considerable amount of sophisticated formalism and mathematical notation. But this is the price one has to pay for such a vast and comprehensive discussion about the most fundamental tools in theoretical physics.
Lying midway between the history and the philosophy of science, this book illuminates a fascinating period in European history during which mathematics clashed with common thought and religion. Set in the late 16th and early 17th centuries, it describes how the concept of infinitesimals – a quantity that is explicitly nonzero and yet smaller than any measurable quantity – took a central role in the debate between ancient medieval ideas and the new ideas arising from the Renaissance. The former were represented by immutable divine order and the principle of authority, the latter by social change and experimentation.
The idea of indivisible quantities and their use in geometry and arithmetic, which had already been developed by ancient Greek mathematicians, underwent its own renaissance 500 years ago, at the same time as Martin Luther launched the Reformation. The consequences for mathematics and physics were enormous, giving rise to unprecedented scientific progress that continued for the following decades and centuries. But even more striking is that the new way of thinking built around the concept of infinitesimals crossed the borders of science and strongly influenced society, up to the point that mathematics became the main focus of the struggle between the old and new orders.
This book is divided into two parts, each devoted to a particular geographical area and period in which this battle took place. The first part leads the reader to late 16th century Italy, where the flourishing and creative ideas of the Renaissance had given birth to a prolific number of mathematicians and scientists. Here, the prominent figure of Galileo Galilei – together with Evangelista Torricelli, Bonaventura Cavalieri and others – was at the forefront of the new mathematical approach involving the concept of infinitesimals. This established the basis of inductive reasoning, which makes broad generalisations from specific observations, and led to a new science founded on experience. On the opposite side, the religious congregation of the Jesuits used these same mathematical developments in its fight against heresy and the Reformation. To them, the traditional mathematical approach was a solid basis for the absolute truth represented by the Catholic faith and the authority of the Pope. The fierce opposition of the Jesuit mathematicians led to the condemnation of Galileo and the “infinitesimalists”, with irreparable consequences for the ancient tradition of Italian mathematicians.
The second part of the book moves the reader to 17th century England, just after the English Civil War in the years of Cromwell’s republic and the Restoration. In that context, the new ideas represented by infinitesimals were not only condemned by the Anglican Church but also opposed by political powers. Here, the leading figure of Thomas Hobbes took the stage in the fight against the indivisibles and the inductive method. For him, traditional Euclidean geometry – which, contrary to induction, used deduction to achieve any result from a few basic statements – was the highest expression of an ordered philosophical system and a model for a perfect state. Hobbes was also concerned about the threat to the principle of authority that emanated from traditional mathematical thought. In his struggle against infinitesimals, he was confronted by the members of the newly founded Royal Society, eager for scientific progress. Among them was John Wallis, who considered mathematical knowledge as a “bottom-up” inductive system in which calculus played the role of experiments in physics. Solving many of the toughest mathematical problems of his times by infinitesimal procedures, Wallis defeated traditional geometry – and Thomas Hobbes with it. The triumph of Wallis made way for scientific progress and the advance of thought that opened the door to the Enlightenment.
This book is excellently written and its mathematical concepts are clearly explained, making it fully accessible to a general audience. With his fascinating narrative, the author intrigues the reader, depicting the historical background and, in particular, recounting the plots of the Holy See, the Jesuits’ fight for power, the Reformation, the absolutist power of the kings, and the early steps of Europeans towards democracy and freedom of thought. The book includes extensive notes at the end, a useful index of concepts, a timeline and a “dramatis personae” section, which is divided between “infinitesimalists” and “non-infinitesimalists”. Finally, the images and portraits included in the book enhance the enjoyment for the reader.
Gaseous photomultipliers are gas-filled devices capable of detecting single photons (in the visible and UV spectrum) with a high position resolution. They are used in various research settings, in particular high-energy physics, and are among several types of contemporary single-photon detectors. This book provides a detailed comparison of photosensitive detectors based on different technologies, highlighting the advantages and disadvantages of each for diverse applications.
After describing the main principles underlying the conversion of photons to photoelectrons and the electron avalanche multiplication effect, the characteristics (and requirements) of position-sensitive gaseous photomultipliers are discussed. A long section of the book is then dedicated to describing and analysing the development of these detectors, which evolved from photomultipliers filled with photosensitive vapours to devices using liquid and then solid photocathodes. UV-sensitive photodetectors based on caesium iodide and caesium telluride, which are mainly used as Cherenkov-ring imaging detectors and are currently employed in the ALICE and COMPASS experiments at CERN, are presented in a dedicated chapter. The latest generation of gaseous photomultipliers, sensitive up to the visible region, are also discussed, as are alternative position-sensitive detectors.
The authors then focus on the Cherenkov light effect, its discovery and the way it has been used to identify particles. The introduction of ring imaging Cherenkov (RICH) detectors was a breakthrough and led to the application of these devices in various experiments, including the Cosmic AntiParticle Ring Imaging Cherenkov Experiment (CAPRICE) and the former CERN experiment Charge Parity violation at Low Energy Antiproton Ring (CP LEAR).
The latest generation of RICH detectors and applications of gaseous photomultipliers beyond RICH detectors are also discussed, completing the overview of the subject.
By S Tackmann, K Kampmann and H Skovby (eds)
Forlaget Historika/Gad Publishers
This book, which includes a contribution by CERN Director-General Fabiola Gianotti, presents 17 radical and game-changing ideas to help reach the 2030 Global Goals for Sustainable Development identified by the United Nations General Assembly.
Renowned and influential leaders propose innovative solutions for 17 “big bets” that the human race must face in the coming years. These experts in the environment, finance, food security, education and other relevant disciplines share their vision of the future and suggest new paths towards sustainability.
In the book, Gianotti replies to this call and shares her ideas about the importance of basic science and research in science, technology, engineering and maths (STEM) to underpin innovation, sustainable development and the improvement of global living conditions. After giving examples of breakthrough innovations in technology and medicine that came about from the pursuit of knowledge for its own sake, Gianotti contends that we need science and scientifically aware citizens to be able to tackle pressing issues, including drastic reduction of poverty and hunger, and the provision of clean and affordable energy. Finally, she proposes a plan to secure STEM education and funding for basic scientific research.
Published as part of the broader Big Bet Initiative to engage stakeholders around new and innovative ideas for global development, this book provides fresh points of view and credible solutions. It would appeal to readers who are interested in innovation and sustainability, as well as in the role of science in such a framework.
This book aims to deliver a concise, practical and intuitive introduction to probability and statistics for undergraduate and graduate students of physics and other natural sciences. The author attempts to provide a textbook in which mathematical complexity is reduced to a minimum, yet without sacrificing precision and clarity. To increase the appeal of the book for students, classic dice-throwing and coin-tossing examples are replaced or accompanied by real physics problems, all of which come with full solutions.
In the first part (chapters 1–6), the basics of probability and distributions are discussed. A second block of chapters is dedicated to statistics, specifically the determination of distribution parameters based on samples. More advanced topics follow, including Markov processes, the Monte Carlo method, stochastic population modelling, entropy and information.
The author also chooses to cover some subjects that, according to him, are disappearing from modern statistics courses. These include extreme-value distributions, the maximum-likelihood method and linear regressions using singular-value decomposition. A set of appendices concludes the volume.
An introduction to the novel and developing field of quantum information, this book aims to provide undergraduate and beginning graduate students with all of the basic concepts needed to understand more advanced books and current research publications in the field. No background in quantum physics is required because its essential principles are provided in the first part of the book.
After an introduction to the methods and notation of quantum mechanics, the authors explain a typical two-state system and how it is used to describe quantum information. The broader theoretical framework is also set out, starting with the rules of quantum mechanics and the language of algebra.
The book proceeds by showing how quantum properties are exploited to develop algorithms that prove more efficient in solving specific problems than their classical counterparts. Quantum computation, information content in qubits, cryptographic applications of quantum-information processing and quantum-error correction are some of the key topics covered in this book.
In addition to the many examples developed in the text, exercises are provided at the end of each chapter. References to more advanced material are also included.
This book collates information from a wide range of literature to provide students with a unified guide to contemporary developments in atomic physics. In just 400 pages it largely succeeds in achieving this aim.
The author is a professor of physics at the Indian Institute of Science in Bangalore. His research focuses on laser cooling and trapping of atoms, quantum optics, optical tweezers, quantum computation in ion traps, and tests of time-reversal symmetry using laser-cooled atoms. He received a PhD from the Massachusetts Institute of Technology under the supervision of David Pritchard, a leader in modern atomic physics and a mentor of two researchers – Eric Cornell and Wolfgang Ketterle – who went on to become Nobel laureates.
The book addresses the basis of atomic physics and state-of-the-art topics. It explains material clearly, although the arrangement of information is quite different to classical atomic-physics textbooks. This is clearly motivated by the importance of certain topics in modern quantum-optics theory and experiments. The physics content is often accompanied by the history behind concepts and by explanations of why things are named the way they are. Historical notes and personal anecdotes give the book a very appealing flair.
Chapter one covers different measurement systems and their merits, followed by universal units and fundamental constants, with a detailed explanation of which constants are truly fundamental. The next chapter is devoted to preliminary materials, starting with the harmonic oscillator and moving to concepts – namely coherent and squeezed states – that are important in quantum optics but not explicitly covered in some other books in the field. The chapter ends with a section on radiation, even including a description of the Casimir effect.
Chapter three is called Atoms. Alongside classical content such as energy levels of one-electron atoms, interactions with magnetic and electric fields, and atoms in oscillating fields, this chapter explains dressed atoms and also, unfortunately only briefly, includes a description of the permanent atomic electric dipole moment (EDM).
The following chapter is devoted to nuclear effects, the isotope shift and hyperfine structure. At this point it would have been nice to see some mention of the flourishing field of laser spectroscopy of radioactive nuclei, which exploits these two effects to investigate the ground-state properties of nuclei far from the valley of stability.
Chapter five is about resonance, a topic often scattered across other books on atomic physics. Here, interestingly, nuclear magnetic resonance (NMR) plays a central role, and the chapter connects this topic very naturally to atomic physics. The chapter closes with a description of the density-matrix formalism. After this comes a chapter devoted to interactions, including the electric dipole approximation, selection rules, transition rates and spontaneous emission. The last section is concerned with differences in saturation intensities for broadband and monochromatic radiation.
Multiphoton interactions are the topic of chapter seven, which is clearly motivated by their importance in modern quantum-optics laboratories. Two-photon absorption and de-excitation, Raman processes and the dressed-atom description are all explained. Another crucial concept in modern quantum optics is coherence. It is therefore given a full chapter, covering coherence in a single atom and in ensembles of atoms, as well as coherent control in multilevel atoms. Spin echo appears as well, showing again how close the topics presented in the book are to NMR.
Chapter nine is devoted to lineshapes, which is clearly a subject relevant for modern atomic spectroscopists. Spectroscopy is the next chapter, which starts with alkali atoms – used extensively in laser cooling and Bose–Einstein condensates. The rest of the material is aimed at experimentalists. Uniquely for such a book, it includes a description of the key experimental tools, followed by Doppler-free techniques and nonlinear magneto-optic rotation.
The last chapter covers cooling and trapping, with many of the relevant concepts already presented in the preceding chapters. The content includes different cooling approaches, principles of atom and ion traps, the cryptically named but ubiquitous Zeeman slower, and the even more intriguing optical tweezers.
Each chapter ends with a problems section, in which the problems are often relevant to a real quantum-optics lab, for example concerning quantum defects, RF-induced magnetic transitions, Raman scattering cross-sections, quantum beats or the Voigt line profile. The problems are worked out in detail, allowing readers to follow how to arrive at the solution.
The appendices cover the standards and the frequency comb, one of the ingenious devices to come from the laboratory of Nobel laureate Theodor Hänsch, which can now be found in an ever-growing number of laser-spectroscopy and quantum-optics labs. Two other appendices are very different: they have a philosophical flair and deal with the nature of the photon and with Einstein as nature’s detective.
The theoretical basis presented leads to state-of-the-art experiments, especially those related to ion and atom cooling and to Bose–Einstein condensates. The selection of topics is thus clearly tailored for experimentalists working in a quantum-optics lab. One small criticism is that it would be good to read more about the EDM experiments and laser spectroscopy of radioactive ions, which are currently two very active fields. Readers interested in different classic subjects, like atomic collisions, should turn to other books such as Bransden and Joachain’s Physics of Atoms and Molecules.
The level of the book makes it suitable for undergraduates as well as new graduate students. It can also serve as a quick reference for researchers, especially on topics of general interest: metrology, what a photon is, how a frequency comb works, and how to achieve a Bose–Einstein condensate. Overall, the book is a very good guide to the topics relevant in modern atomic physics, and its style makes it quite unique and personal.
Following their previous book on many-body theory, the authors have written a new volume focused on exactly solvable models, to add to the literature in this field. Several theoretical models are presented for selected systems in condensed states of matter – including solid, liquid and disordered states – and for systems of few or many bodies.
The book starts with an introduction to low-order density matrices, then discusses exactly or nearly exactly solvable models for several few-particle systems. The material is arranged according to the statistics of these particle assemblies, going from small clusters of fermions to small clusters of bosons – with specific reference to Efimov trimers in nuclear and condensed-matter assemblies – to anyon statistics.
The second group of chapters is dedicated to models for selected many-body systems in condensed matter, where particular attention is given to superconductivity and superfluidity, and to isolated impurities in a solid. Pair-potential and many-body force models for liquids are also discussed, as well as disorder and its implications for transport in solids.
The authors then deal with more general topics, in particular statistical field theory (discussing some specific models and critical exponents) and relativistic field theory. Open problems in quantum gravity are also briefly reviewed in the concluding chapter, and several appendices are included at the end of the book.
As the author himself states, the primary aim of this book is to explain why so many scientists choose to work on a theory that has no direct experimental support and is unlikely to gain any anytime soon.
String theory, the origins of which date back to 1968, has developed into a major component of theoretical particle physics. It is most famous as a theory of quantum gravity and as a candidate unified theory of fundamental interactions at the smallest scales – so small that, unfortunately, we cannot directly test it with experiments.
Although string theory is built on a very solid mathematical basis and allows rigorous calculations, the author uses almost no equations. Rather than a textbook, this is a book on the history, science and philosophy lying behind a fascinating and speculative theory.
In the first part, the theory of quantum-mechanical relativistic strings is placed within the broader context of theoretical particle physics, and ultimately science in general. It is then discussed why there is still a need for ideas and paradigms that go beyond what we already know, and why string theory is a candidate for being a global theory that includes all others. Following this, the author describes the motivation driving this field and how this has evolved during the past 50 years. In particular, he dedicates various chapters to the connections of string theory with quantum field theory, mathematics, cosmology, particle physics and quantum gravity.
The last part of the book discusses the social aspects of science: the diverse ways of approaching the topic as well as various personal driving forces. A chapter is also dedicated to the most significant criticisms of string theory, to which the author provides a reply.
The book is intended to appeal to laypersons interested in fundamental physics as well as to physics students, so the author chooses to avoid mathematical formulations of the theory. However, the risk is that the book is then not sufficiently clear and explanatory to be an easy read for non-experts, nor technical and detailed enough to appeal to students.