Making Sense of Quantum Mechanics
By Jean Bricmont
In this book, Jean Bricmont aims to challenge Richard Feynman’s famous statement that “nobody understands quantum mechanics” and discusses some of the issues that have surrounded this field of theoretical physics since its inception.
Bricmont starts by strongly criticising the “establishment” view of quantum mechanics (QM), known as the Copenhagen interpretation, which attributes a key role to the observer in a quantum measurement. The quantum-mechanical wavefunction predicts the possible outcomes of a quantum measurement, but not which one of them actually occurs. The author opposes the idea that a conscious human mind is an essential part of the process that determines which outcome is obtained. This interpretation was proposed by some of the early thinkers on the subject, although I believe Bricmont is wrong to associate it with Niels Bohr, who related measurement to irreversible changes in the measuring apparatus rather than in the mind of the human observer.
The second chapter deals with the nature of the quantum state, illustrated with discussions of the Stern–Gerlach experiment to measure spin and the Mach–Zehnder interferometer to emphasise the importance of interference. During the last 20 years or so, much work has been done on “decoherence”. This has shown that the interaction of the quantum system with its environment, which may include the measuring apparatus, prevents any detectable interference between the states associated with different possible measurement outcomes. Bricmont correctly emphasises that this still does not result in a particular outcome being realised.
The author’s central argument is presented in chapter five, where he discusses the de Broglie–Bohm hidden-variable theory. At its simplest, it proposes that there are two components to the quantum-mechanical state: the wavefunction and an actual point particle that always has a definite position, although this is hidden from observation until its position is measured. This model claims to resolve many of the conceptual problems thrown up by orthodox QM: in particular, the outcome of a measurement is determined by the position of the particle being measured, while the other possibilities implied by the wavefunction can be ignored because they are associated with “empty waves”. Bricmont shows how all the results of standard QM – particularly the statistical probabilities of different measurement outcomes – are faithfully reproduced by the de Broglie–Bohm theory.
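The dynamics just described can be stated compactly. In the standard formulation of the de Broglie–Bohm theory (quoted here for context, not from the book itself), the wavefunction ψ evolves under the usual Schrödinger equation while the particle position **Q** follows the guidance equation

```latex
\frac{\mathrm{d}\mathbf{Q}}{\mathrm{d}t}
  = \frac{\hbar}{m}\,
    \operatorname{Im}\!\left(\frac{\nabla\psi}{\psi}\right)\Bigg|_{\mathbf{Q}(t)}
```

If the initial particle positions are distributed according to |ψ|², the guidance equation keeps them so at all later times (“equivariance”), which is the mechanism by which the theory reproduces the Born-rule statistics of standard QM.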
This is probably the clearest account of this theory that I have come across. So why is the de Broglie–Bohm theory not generally accepted as the correct way to understand quantum physics? One reason follows from the work of John Bell, who showed that no hidden-variable theory can reproduce the quantum predictions (now thoroughly verified by experiment) for systems consisting of two or more particles in an entangled state unless the theory includes non-locality – i.e. faster-than-light communication between the component particles and/or their associated wavefunctions. As this is clearly inconsistent with special relativity, many thinkers (including Bell himself) have looked elsewhere for a realistic interpretation of quantum phenomena. Not so Jean Bricmont: along with other contemporary supporters of the de Broglie–Bohm theory, he embraces non-locality and looks to use the idea to enhance our understanding of the reality he believes underlies quantum physics. In fact he devotes a whole chapter to this topic and claims that non-locality is an essential feature of quantum physics and not just of models based on hidden variables.
Other problems with the de Broglie–Bohm theory are discussed and resolved – to the author’s satisfaction at least. These include how the de Broglie–Bohm model can be consistent with the Heisenberg uncertainty principle when it appears to assume that the particle always has a definite position and momentum; he points out that the statistical results of a large number of measurements always agree with conventional predictions, and these include the uncertainty principle.
Alternative ways to interpret QM are presented, but the author does not find in them the same advantages as in the de Broglie–Bohm theory. In particular, he discusses the many-worlds interpretation, which assumes that the only reality is the wavefunction and that, rather than collapsing at a measurement, this produces branches that correspond to all measurement outcomes. One of the consequences of decoherence is that there can be no interference between the branches corresponding to different outcomes, and this means that no branch can be aware of any other. It follows that, even if a human observer is involved, each branch can contain a copy of him or her who is unaware of the others’ presence. From this point of view, all the possible measurement outcomes co-exist – hence the term “many worlds”. Apart from its ontological extravagance, the main difficulty with the many-worlds theory is that it is very hard to see how the separate outcomes can have different probabilities when they all occur simultaneously. Many-worlds supporters have proposed solutions to this problem, which do not satisfy Bricmont (nor, indeed, myself), and he emphasises that this is not a problem for the de Broglie–Bohm theory.
A chapter is also dedicated to a brief discussion of philosophy, concentrating on the concept of realism and how it contrasts with idealism. Unsurprisingly, it concludes that realists want a theory describing what happens at the micro scale that accounts for predictions made at the macro scale – and that the de Broglie–Bohm theory provides just such a theory.
The book concludes with an interesting account of the history of QM, including the famous Bohr–Einstein debate, the struggle of de Broglie and Bohm for recognition, and the influence of the politics of the time.
This is a clearly written and interesting book. It has been very well researched, containing more than 500 references, and I would thoroughly recommend it to anyone who has an undergraduate knowledge of physics and mathematics and an interest in foundational questions. Whether it actually lives up to its title is for each reader to judge.
• Alastair Rae, University of Birmingham, UK.
Bose–Einstein Condensation and Superfluidity
By L Pitaevskii and S Stringari
Oxford University Press
This book deals with the fascinating topics of Bose–Einstein condensation (BEC) and superfluidity. The main emphasis is on providing the formalism to describe these phases of matter as observed in the laboratory. This goes well beyond the idealised systems in which BEC was originally predicted, and is essential for interpreting the experimental observations.
BEC was predicted in 1925 by Einstein, based on the ideas of Satyendra Nath Bose. It corresponds to a new phase of matter in which bosons accumulate in the lowest energy level and develop coherent quantum properties on a macroscopic scale. These properties may correspond to phenomena that seem impossible from an everyday perspective. In particular, BEC lies behind the theory of superfluids, which are fluids that flow without dissipating energy and rotate without generating vorticity – except for quantised vortices, which are a kind of topological defect.
Experimentally, the first BEC in dilute gases was observed in the laboratory in 1995, an achievement recognised by the 2001 Nobel Prize in Physics. Since then, there has been an explosion of interest and new results in the field. It is thus timely that two of its leading experts have updated and extended their volume on BEC to summarise the theoretical aspects of this phase of matter. The authors also describe in detail how superfluid phenomena can occur for Fermi gases in the presence of interactions.
The book is relatively heavy in formalism, which is justified by the wide range of phenomena covered in a relatively concise volume. It starts with some basics about correlation functions, condensation and statistical mechanics. Next, it delves into the simplest systems for which BEC can occur: weakly coupled dilute gases of bosonic particles. The authors describe different approaches to the BEC phase, including the works of Landau and Bogoliubov. They also introduce the Gross–Pitaevskii equation and show its importance in the description of superfluids. Superfluidity is explained in great detail, in particular the occurrence of quantised vortices.
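For readers unfamiliar with it, the Gross–Pitaevskii equation mentioned above takes the standard mean-field form for a condensate wavefunction ψ in an external trapping potential V:

```latex
i\hbar\,\frac{\partial \psi}{\partial t}
  = \left(-\frac{\hbar^{2}}{2m}\nabla^{2} + V(\mathbf{r}) + g\,|\psi|^{2}\right)\psi,
\qquad
g = \frac{4\pi\hbar^{2} a}{m}
```

where a is the s-wave scattering length characterising the interatomic interactions. The nonlinear term g|ψ|² is what distinguishes it from the ordinary Schrödinger equation and underlies phenomena such as quantised vortices.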
The second part describes how to adapt the theoretical formalism introduced in the first part to realistic traps where BEC is observed. This is very important to connect theoretical descriptions to laboratory research, for instance to predict in which experimental configurations a BEC will appear and how to characterise it.
Part three deals with BEC in fermionic systems, which is possible if the fermions interact and pair up into bosonic structures. These fermionic phases exhibit superfluid properties and have been created in the laboratory, and the authors consider fermionic condensates in realistic traps. The final part is devoted to new phenomena appearing in mixed bosonic–fermionic systems.
The book is a good resource for the theoretical description of BEC beyond the idealised configurations described in many texts. The concise style and large amount of notation require constant effort from the reader, but this seems inevitable given the many surprising phenomena appearing in BECs. The book, perhaps combined with others, will provide the reader with a clear overview of the topic and the latest theoretical developments in the field. The text is enhanced by the many figures and plots presenting experimental data.
• Diego Blas Temino, CERN.
Thorium Energy for the World
By J-P Revol, M Bourquin, Y Kadi, E Lillestol, J-C de Mestral and K Samec (eds)
This book contains the proceedings of the Thorium Energy Conference (ThEC13), held in October 2013 at CERN, which brought together some of the world’s leading experts on thorium technologies. According to the contributors, nuclear energy based on a thorium fuel cycle is safer and cleaner than that generated from uranium. In addition, long-lived waste from existing power plants could be retrieved and integrated into the thorium fuel cycle to be transformed into a stable material while generating electricity.
The technology required to implement this type of fuel cycle is already being developed; nevertheless, much effort and time are still needed.
The ThEC13 conference saw the participation of high-level speakers from 30 countries, such as the Nobel prize laureates Carlo Rubbia and Jack Steinberger, the then CERN Director-General Rolf Heuer, and Hans Blix, former director-general of the International Atomic Energy Agency (IAEA), to name a few.
Collecting the contributions of the speakers, this book offers a detailed technical review of thorium-energy technologies from basic R&D to industrial developments, and is thus a tool for informed debates on the future of energy production and, in particular, on the advantages and disadvantages of different nuclear technologies.
A First Course in Mathematical Physics
By Colm T Whelan
The aim of this book is to provide undergraduate students taking classes in the physical sciences with the fundamental mathematical tools they need to proceed with their studies.
In the first part the author introduces core mathematics, starting from basic concepts such as functions of one variable and complex numbers, and moving to more advanced topics including vector spaces, fields and operators, and functions of a complex variable.
The second part shows some of the many applications of these mathematical tools to physics. When introducing complex physical laws and theories, including Maxwell’s equations, special relativity and quantum theory, the author tries to present the material in an easily intelligible way. The author also emphasises the direct connection between the conceptual basis of these physics topics and the mathematical tools provided in the first part of the text.
Two appendices of formulas conclude the book. A large number of problems are included but the solutions are only made available on a password-protected website for lecturers.