
Gravity: Where Do We Stand?

By R Peron, M Colpi, V Gorini and U Moschella (eds)
Springer


This book, a collection of expert contributions, provides an overview of the current knowledge in gravitational physics, including theoretical and experimental aspects.

After a pedagogical introduction to gravitational theories, several chapters explore gravitational phenomena in the realm of so-called weak-field conditions: the Earth (specifically, the laboratory environment) and the solar system.

The second part of the book is devoted to gravity in an astrophysical context, which is an important test-bed for general relativity. A chapter is dedicated to gravitational waves, the recent discovery of which is an impressive experimental result in this field. The importance of studying radio pulsars is also highlighted.

A section on research frontiers in gravitational physics follows. This explores the many open issues, especially related to astrophysical and cosmological problems, and the way that possible solutions impact the quest for a quantum theory of gravity and a unified theory of the forces.

The book’s origins lie in the 2009 edition of a school organised by the Italian Society of Relativity and Gravitation. As such, it is aimed at graduate students, but could also appeal to researchers working in the field.

Physics Matters

By Vasant Natarajan
World Scientific


This book is a collection of essays on various physics topics, which the author aims to present in a manner accessible to non-experts and, specifically, to non-physics science and arts students at the undergraduate level. The author is motivated by the conviction that understanding the fundamental concepts of other subjects facilitates out-of-the-box thinking, which can lead to original contributions in one’s chosen field.

The selection of topics is very personal: some basic-physics concepts, such as standards for units and oscillation theory, are placed next to discussions about general relativity and the famous twin paradox. The author uses an informal style and has particular interest in dispelling some myths about science.

The final chapters cover topics from his own area of research, atomic and optical physics, focusing on the Nobel Prizes awarded over the last two decades to scientists in these fields.

Even though the use of equations is kept to a minimum, some mathematics and physics background is required of the reader.

Raw Data: A Novel on Life in Science

By Pernille Rørth
Springer


Raw Data is a scientific novel that explores the moral dilemmas surrounding the accidental discovery of a case of scientific misconduct within a top US biomedical institute.

The choice of subject is interesting and unusual. Scientific misconduct is not an unprecedented topic for scientific novels, but the focus is usually on spectacular frauds that clearly violate the ethos of the scientific community. This story depicts a more nuanced situation. Readers may even find themselves understanding, if not condoning, the conscious decision of one of the co-protagonists to cheat.

This character chooses to “cut a corner” out of fear of being scooped, to satisfy an unreasonably picky reviewer who had requested an additional control experiment that she deems irrelevant. The stakes for her career are huge because she is competing with other groups on the same research line, and publishing second would cost her a great deal academically. When a co-worker accidentally finds hints of her fabrication and immediately alerts the laboratory’s principal investigator, both find themselves in a bitter no-win situation. “Doing the right thing” has a significant cost, but any other option potentially entails far worse consequences for their careers and their reputations.

Along the way, the author vividly illustrates how people in research think, feel, work and live. Work–life balance in science, especially for young female researchers, is a secondary theme of the book. Overall, the portrait of academia is not a flattering one, but it is certainly faithful. As someone who works in high-energy physics, I learnt about day-to-day practices in the biomedical sector and how they differ from those in my own field. Although the author focuses on her own corner of the scientific environment, some descriptions of “postdoc life” are quite general.

This relatively short novel is followed by a long Q&A section with the author, a former biomedical researcher who left the field after considerable career achievements. There she makes explicit her opinions on several of these topics, including the “publish or perish” attitude, work–life balance, scientific integrity, and what she perceives as systemic dangers for the academic research world.

Although the author clearly made an effort to simplify the science to the minimum needed to understand the plot (and as a reader with no understanding of microbiology I found her effort successful), I am not sure that a reader with no previous interest in science would be hooked by the story. The book is well written, but the plot has a slow pace and, while Springer deserves credit for publishing it, the text contains many typographical errors.

Overall, I recommend the book to other scientists, regardless of their specialisation, and to the scientifically educated public who may appreciate this insider view of contemporary research life.

Trick or Truth? The Mysterious Connection Between Physics and Mathematics

By Anthony Aguirre, Brendan Foster, Zeeya Merali (eds)
Springer


One of the most intriguing works in the philosophy of science is Wigner’s 1960 paper titled “The Unreasonable Effectiveness of Mathematics in the Natural Sciences”. Indeed the fact that so many natural laws can be formulated in this language, not to mention that some of these represent the most precise knowledge we have about our world, is a stunning mystery.

A related question is whether mathematics, which has largely developed overlapping or in parallel with physics, is constructed by the human mind or “discovered”. This question is worth asking again today, when modern theories of fundamental physics and contemporary mathematics have reached levels of abstraction that are unimaginable from the perspective of just 100 years ago.

This book is a collection of essays discussing the connection between physics and mathematics. They are written by the winners of the 2015 Foundational Questions Institute contest, which invited contributors – from professional researchers to members of the public – to propose an essay on the topic.

Since the question belongs primarily to the philosophy of science rather than to science itself, it is not a surprise that the essays present conflicting viewpoints and sometimes reach opposite conclusions.

A significant point of view is that the claimed effectiveness of mathematics is actually not that surprising. This is because we process information and generate knowledge about our world in an inadvertently biased way, namely as a result of the evolution of our mind in a specific physical world. For example, concepts of elementary geometry (such as straight lines, parabolas, etc) and the mechanics of classical physics are deeply imprinted in the human brain as evolutionary bias. In a fuzzy, chaotic world, such naive mathematical notions might not have developed, as they wouldn’t represent a good approximation to that world. In fact, in a drastically unstructured world it would have been less likely that life had evolved in the first place, so it may not seem such a surprise that we find ourselves in a world largely governed by relatively simple geometrical structures.

What remains miraculous, on the other hand, is the effectiveness of mathematics in the microscopic realm of quantum mechanics: it is not obvious how the mathematical notions on which it is based could be explained in terms of evolutionary bias. Actually, much of the progress of fundamental physics during the last 100 years or so crucially depended on abandoning the intuition of everyday common sense, in favour of abstract mathematical principles.

Another aspect is selection bias, in that failures of the mathematical description of certain phenomena tend simply to be ignored. A prime example is human consciousness – undoubtedly a real-world phenomenon – for which it is not at all clear whether its structure can ever be mapped to mathematical concepts in a meaningful way. A quite common reductionist point of view typical of particle physicists is that, since the brain is essentially chemistry (thus physics), a mathematical underpinning is automatic. But it may be that the way such complex phenomena emerge completely obfuscates the connection to the underlying, mathematically clean microscopic physics, rendering the latter useless for any practical purpose in this regard.

This raises the issue of the structure of knowledge per se, and some essays in this book argue that it may not necessarily be hierarchical but rather scale invariant with some, or many, distinguished nodes. One may think of these as local attractors to which “arrows of deeper explanation” point. It may be that only locally near such attractors does knowledge appear hierarchical, so that, for example, our mathematical description of fundamental physics is meaningful only near one particular such node. There might be other local attractors that are decoupled from our mathematical modelling, with no obvious chains of explanation linking them.

On a different tack, a radically different and extreme point of view is taken by adherents of Tegmark’s mathematical universe hypothesis, which several of the authors address directly. This posits that there is actually no difference between mathematics and the physical world, so that the effectiveness of mathematics in describing our physical world becomes a tautology.

Surveying all the thoughts in this collection of essays would be beyond the scope of this review. Suffice it to say that the book should be of great interest to anybody pondering the meaning of physical theories, although it appears more useful for scientists rather than for the general public. It is not an easy read, but the reader is rewarded with a great deal of food for thought.

A First Course in Mathematical Physics

By Colm T Whelan
Wiley-VCH


The aim of this book is to provide undergraduate students in the physical sciences with the fundamental mathematical tools they need to proceed with their studies.

In the first part the author introduces core mathematics, starting from basic concepts such as functions of one variable and complex numbers, and moving to more advanced topics including vector spaces, fields and operators, and functions of a complex variable.

The second part presents some of the many applications of these mathematical tools to physics. When introducing complex physical laws and theories, including Maxwell’s equations, special relativity and quantum theory, the author tries to present the material in an easily intelligible way, emphasising the direct connection between the conceptual basis of these topics and the mathematical tools provided in the first part of the text.

Two appendices of formulas conclude the book. A large number of problems are included, but the solutions are made available only on a password-protected website for lecturers.

Thorium Energy for the World

By J-P Revol, M Bourquin, Y Kadi, E Lillestol, J-C de Mestral and K Samec (eds)
Springer


This book contains the proceedings of the Thorium Energy Conference (ThEC13), held in October 2013 at CERN, which brought together some of the world’s leading experts on thorium technologies. According to them, nuclear energy based on a thorium fuel cycle is safer and cleaner than that based on uranium. In addition, long-lived waste from existing power plants could be retrieved and integrated into the thorium fuel cycle, where it would be transformed into stable material while generating electricity.

The technology required to implement this type of fuel cycle is already being developed; nevertheless, much effort and time are still needed.

The ThEC13 conference attracted high-level speakers from 30 countries, including Nobel laureates Carlo Rubbia and Jack Steinberger, the then CERN Director-General Rolf Heuer, and Hans Blix, former director-general of the International Atomic Energy Agency (IAEA).

Collecting the contributions of the speakers, this book offers a detailed technical review of thorium-energy technologies from basic R&D to industrial developments, and is thus a tool for informed debates on the future of energy production and, in particular, on the advantages and disadvantages of different nuclear technologies.

Bose–Einstein Condensation and Superfluidity

By L Pitaevskii and S Stringari
Oxford University Press


This book deals with the fascinating topics of Bose–Einstein condensation (BEC) and superfluidity. The main emphasis is on providing the formalism to describe these phases of matter as they are observed in the laboratory, which goes well beyond the idealised systems for which BEC was originally predicted and is essential for interpreting the experimental observations.

BEC was predicted in 1925 by Einstein, based on the ideas of Satyendra Nath Bose. It corresponds to a new phase of matter in which bosons accumulate in the lowest energy level and develop coherent quantum properties on a macroscopic scale. These properties can give rise to phenomena that seem impossible from an everyday perspective. In particular, BEC lies behind the theory of superfluids: fluids that flow without dissipating energy and rotate without generating vorticity, apart from quantised vortices, which are a sort of topological defect.
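To put a number on when condensation sets in (a standard textbook result, not taken from the book under review): for a uniform ideal Bose gas of particle mass $m$ and number density $n$, macroscopic occupation of the ground state begins below the critical temperature

```latex
T_c \;=\; \frac{2\pi\hbar^2}{m k_B}\left(\frac{n}{\zeta(3/2)}\right)^{2/3},
\qquad \zeta(3/2) \approx 2.612 .
```

For the dilute atomic gases in which BEC was eventually realised, this temperature lies in the nanokelvin to microkelvin range.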

Experimentally, BEC was first observed in dilute gases in 1995, an achievement recognised by the 2001 Nobel Prize in Physics. Since then, there has been an explosion of interest and new results in the field. It is thus timely that two of its leading experts have updated and extended their volume on BEC to summarise the theoretical aspects of this phase of matter. The authors also describe in detail how superfluid phenomena can occur for Fermi gases in the presence of interactions.

The book is relatively heavy in formalism, which is justified by the wide range of phenomena covered in a relatively concise volume. It starts with some basics about correlation functions, condensation and statistical mechanics. Next, it delves into the simplest systems for which BEC can occur: weakly coupled dilute gases of bosonic particles. The authors describe different approaches to the BEC phase, including the works of Landau and Bogoliubov. They also introduce the Gross–Pitaevskii equation and show its importance in the description of superfluids. Superfluidity is explained in great detail, in particular the occurrence of quantised vortices.
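For reference, the Gross–Pitaevskii equation mentioned above takes its standard form (quoted here from the general literature rather than from the book itself):

```latex
i\hbar\,\frac{\partial \psi(\mathbf{r},t)}{\partial t}
  = \left[-\frac{\hbar^2}{2m}\nabla^2 + V_{\mathrm{ext}}(\mathbf{r})
  + g\,|\psi(\mathbf{r},t)|^2\right]\psi(\mathbf{r},t),
\qquad g = \frac{4\pi\hbar^2 a}{m},
```

where $\psi$ is the condensate wavefunction, $V_{\mathrm{ext}}$ the trapping potential and $a$ the s-wave scattering length; the nonlinear $|\psi|^2$ term encodes the weak interactions between the bosons.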

The second part describes how to adapt the theoretical formalism introduced in the first part to realistic traps where BEC is observed. This is very important to connect theoretical descriptions to laboratory research, for instance to predict in which experimental configurations a BEC will appear and how to characterise it.

Part three deals with BEC in fermionic systems, which is possible if the fermions interact and pair-up into bosonic structures. These fermionic phases exhibit superfluid properties and have been created in the laboratory, and the authors consider fermionic condensates in realistic traps. The final part is devoted to new phenomena appearing in mixed bosonic–fermionic systems.

The book is a good resource for the theoretical description of BEC beyond the idealised configurations that are described in many texts. The concise style and large amount of notation require constant effort from the reader, but seem inevitable given the many surprising phenomena appearing in BECs. The book, perhaps combined with others, will provide the reader with a clear overview of the topic and of the latest theoretical developments in the field. The text is enhanced by the many figures and plots presenting experimental data.

Making Sense of Quantum Mechanics

By Jean Bricmont
Springer


In this book, Jean Bricmont aims to challenge Richard Feynman’s famous statement that “nobody understands quantum mechanics” and discusses some of the issues that have surrounded this field of theoretical physics since its inception.

Bricmont starts by strongly criticising the “establishment” view of quantum mechanics (QM), known as the Copenhagen interpretation, which attributes a key role to the observer in a quantum measurement. The quantum-mechanical wavefunction, indeed, predicts the possible outcomes of a quantum measurement, but not which one of these actually occurs. The author opposes the idea that a conscious human mind is an essential part of the process of determining what outcome is obtained. This interpretation was proposed by some of the early thinkers on the subject, although I believe Bricmont is wrong to associate it with Niels Bohr, who related the measurement to irreversible changes in the measuring apparatus, rather than in the mind of the human observer.

The second chapter deals with the nature of the quantum state, illustrated with discussions of the Stern–Gerlach experiment to measure spin and the Mach–Zehnder interferometer to emphasise the importance of interference. During the last 20 years or so, much work has been done on “decoherence”. This has shown that the interaction of the quantum system with its environment, which may include the measuring apparatus, prevents any detectable interference between the states associated with different possible measurement outcomes. Bricmont correctly emphasises that this still does not result in a particular outcome being realised.

The author’s central argument is presented in chapter five, where he discusses the de Broglie–Bohm hidden-variable theory. At its simplest, it proposes that there are two components to the quantum-mechanical state: the wavefunction and an actual point particle that always has a definite position, although this is hidden from observation until its position is measured. This model claims to resolve many of the conceptual problems thrown up by orthodox QM: in particular, the outcome of a measurement is determined by the position of the particle being measured, while the other possibilities implied by the wavefunction can be ignored because they are associated with “empty waves”. Bricmont shows how all the results of standard QM – particularly the statistical probabilities of different measurement outcomes – are faithfully reproduced by the de Broglie–Bohm theory.
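In its standard form (again quoted from the general literature rather than from Bricmont’s book), writing the wavefunction as $\psi = R\,e^{iS/\hbar}$, the de Broglie–Bohm guidance equation fixes the velocity of the particle at position $\mathbf{Q}$:

```latex
\frac{d\mathbf{Q}}{dt}
  = \frac{\nabla S(\mathbf{Q},t)}{m}
  = \frac{\hbar}{m}\,\operatorname{Im}\left.\frac{\nabla\psi}{\psi}\right|_{\mathbf{r}=\mathbf{Q}} ,
```

so the particle follows a deterministic trajectory, while $\psi$ itself evolves according to the ordinary Schrödinger equation.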

This is probably the clearest account of this theory that I have come across. So why is the de Broglie–Bohm theory not generally accepted as the correct way to understand quantum physics? One reason follows from the work of John Bell, who showed that no hidden-variable theory can reproduce the quantum predictions (now thoroughly verified by experiment) for systems consisting of two or more particles in an entangled state unless the theory includes non-locality – i.e. a faster-than-light communication between the component particles and/or their associated wavefunctions. As this is clearly inconsistent with special relativity, many thinkers (including Bell himself) have looked elsewhere for a realistic interpretation of quantum phenomena. Not so Jean Bricmont: along with other contemporary supporters of the de Broglie–Bohm theory, he embraces non-locality and looks to use the idea to enhance our understanding of the reality he believes underlies quantum physics. In fact he devotes a whole chapter to this topic and claims that non-locality is an essential feature of quantum physics and not just of models based on hidden variables.

Other problems with the de Broglie–Bohm theory are discussed and resolved – to the author’s satisfaction at least. These include the question of how the de Broglie–Bohm model can be consistent with the Heisenberg uncertainty principle when it appears to assume that the particle always has a definite position and momentum. Bricmont points out that the statistical results of a large number of measurements always agree with the conventional predictions, and these include the uncertainty principle.

Alternative ways to interpret QM are presented, but the author does not find in them the same advantages as in the de Broglie–Bohm theory. In particular, he discusses the many-worlds interpretation, which assumes that the only reality is the wavefunction and that, rather than collapsing at a measurement, this produces branches that correspond to all measurement outcomes. One of the consequences of decoherence is that there can be no interference between the branches corresponding to the different outcomes, and this means that no branch can be aware of any other. It follows that, even if a human observer is involved, each branch can contain a copy of him or her who is unaware of the others’ presence. From this point of view, all the possible measurement outcomes co-exist – hence the term “many worlds”. Apart from its ontological extravagance, the main difficulty with the many-worlds theory is that it is very hard to see how the separate outcomes can have different probabilities when they all occur simultaneously. Many-worlds supporters have proposed solutions to this problem, but these do not satisfy Bricmont (nor, indeed, me), who emphasises that this is not a problem for the de Broglie–Bohm theory.

A chapter is also dedicated to a brief discussion of philosophy, concentrating on the concept of realism and how it contrasts with idealism. Unsurprisingly, it concludes that realists want a theory describing what happens at the micro scale that accounts for predictions made at the macro scale – and that the de Broglie–Bohm theory provides just that.

The book concludes with an interesting account of the history of QM, including the famous Bohr–Einstein debate, the struggle of de Broglie and Bohm for recognition, and the influence of the politics of the time.

This is a clearly written and interesting book. It has been very well researched, containing more than 500 references, and I would thoroughly recommend it to anyone who has an undergraduate knowledge of physics and mathematics and an interest in foundational questions. Whether it actually lives up to its title is for each reader to judge.

The founding of Fermilab

In 1960, two high-energy physics laboratories were competing for scientific discoveries. The first was Brookhaven National Laboratory on Long Island in New York, US, with its 33 GeV Alternating Gradient Synchrotron (AGS). The second was CERN in Switzerland, with its 28 GeV Proton Synchrotron (PS). That year, the US Atomic Energy Commission (AEC) received several proposals to boost the country’s research programme by constructing new accelerators with energies between 100 and 1000 GeV. A joint panel of president Kennedy’s Presidential Science Advisory Committee and the AEC’s General Advisory Committee was formed to consider the submissions, chaired by Harvard physicist and Manhattan Project veteran Norman Ramsey. By May 1963, the panel had decided to have the Lawrence Radiation Laboratory in Berkeley, California, design a several-hundred-GeV accelerator. The result was a 200 GeV synchrotron costing approximately $340 million.


When Cornell physicist Robert Rathbun Wilson, a student of Lawrence’s who had also worked on the Manhattan Project, saw Berkeley’s plans, he considered them too conservative, unimaginative and too expensive. Wilson, a modest yet proud man, thought he could design a better accelerator for less money, and let his thoughts be known. By September 1965, he had proposed an alternative, innovative and less costly (approximately $250 million) design for the 200 GeV accelerator to the AEC. The Joint Committee on Atomic Energy, the congressional body responsible for AEC projects and budgets, approved his plan.

During this period, coinciding with the Vietnam war, the US Congress hoped to contain costs. Yet physicists hoped to make breakthrough discoveries, and thought it important to appeal to national interests. The discovery of the Ω⁻ particle at Brookhaven in 1964 led high-energy physicists to conclude that “an accelerator ‘in the range of 200–1000 BeV’ would ‘certainly be crucial’ in exploring the ‘detailed dynamics of this strong SU(3) symmetrical interaction’.” At the same time, physicists were expressing frustration with the geographical distribution of US high-energy physics facilities. East and West Coast laboratories like Lawrence Berkeley Laboratory and Brookhaven did not offer sufficient opportunity for the nation’s experimental physicists to pursue their research. Managed by regional boards, the programmes at these two labs were directed by and accessible to physicists from nearby universities. Without substantial federal support, other major research universities struggled to compete with these regional laboratories.


Against this backdrop arose a major movement to accommodate physicists in the centre of the country and offer more equal access. Columbia University experimental physicist Leon Lederman championed “the truly national laboratory” that would allow any qualifying proposal to be conducted at a national, rather than a regional, facility. In 1965, a consortium of major US research universities, Universities Research Association (URA), Inc., was established to manage and operate the 200 GeV accelerator laboratory for the AEC (and its successor agencies the Energy Research and Development Administration (ERDA) and the Department of Energy (DOE)) and address the need for a more national laboratory. Ramsey was president of URA for most of the period 1966 to 1981.

Following a nationwide competition organised by the National Academy of Sciences, in December 1966 a 6800 acre site in Weston, Illinois, around 50 km west of Chicago, was selected. Another suburban Chicago site, north of Weston in affluent South Barrington, had withdrawn when local residents “feared that the influx of physicists would ‘disturb the moral fibre of their community’”. Robert Wilson was selected to direct the new 200 GeV accelerator, named the National Accelerator Laboratory (NAL). Wilson asked Edwin Goldwasser, an experimental physicist from the University of Illinois, Urbana-Champaign, and member of Ramsey’s panel, to be his deputy director and the pair set up temporary offices in Oak Brook, Illinois, on 15 June 1967. They began to recruit physicists from around the country to staff the new facility and design the 200 GeV accelerator, also attracting personnel from Chicago and its suburbs. President Lyndon Johnson signed the bill authorising funding for the National Accelerator Laboratory on 21 November 1967.

Chicago calling


It wasn’t easy to recruit scientific staff to the new laboratory in open cornfields and farmland with few cultural amenities. That picture lies in stark contrast to today, with the lab encircled by suburban sprawl encouraged by highway construction and development of a high-tech corridor with neighbours including Bell Labs/AT&T and Amoco. Wilson encouraged people to join him in his challenge, promising higher energy and more experimental capability than originally planned. He and his wife, Jane, imbued the new laboratory with enthusiasm and hospitality, just as they had experienced in the isolated setting of wartime-era Los Alamos while Wilson carried out his work on the Manhattan Project.

Wilson and Goldwasser worked on the social conscience of the laboratory and in March 1968, a time of racial unrest in the US, they released a policy statement on human rights. They intended to: “seek the achievement of its scientific goals within a framework of equal employment opportunity and of a deep dedication to the fundamental tenets of human rights and dignity…The formation of the Laboratory shall be a positive force…toward open housing…[and] make a real contribution toward providing employment opportunities for minority groups…Special opportunity must be provided to the educationally deprived…to exploit their inherent potential to contribute to and to benefit from the development of our Laboratory. Prejudice has no place in the pursuit of knowledge…It is essential that the Laboratory provide an environment in which both its staff and its visitors can live and work with pride and dignity. In any conflict between technical expediency and human rights we shall stand firmly on the side of human rights. This stand is taken because of, rather than in spite of, a dedication to science.” Wilson and Goldwasser brought inner-city youth out to the suburbs for employment, training them for many technical jobs. Congress supported this effort and was pleased to recognise it during the civil-rights movement of the late 1960s. Its affirmative spirit endures today.


When asked by a congressional committee authorising funding for NAL in April 1969 about the value of the research to be conducted at NAL, and if it would contribute to national defence, Wilson famously answered: “It has only to do with the respect with which we regard one another, the dignity of men, our love of culture…It has to do with, are we good painters, good sculptors, great poets? I mean all the things we really venerate and honour in our country and are patriotic about. It has nothing to do directly with defending our country except to help make it worth defending.”

A harmonious whole

Wilson, who had promised to complete his project on time and under budget, conceived of the new laboratory as a beautiful, harmonious whole. He felt that science, technology and art are intimately connected, and brought a graphic artist, Angela Gonzales, with him from Cornell to give the laboratory site and its publications a distinctive aesthetic. He had his engineers work with a Berkeley colleague, William Brobeck, and an architectural-engineering group, DUSAF, to prepare designs and cost estimates for early submissions to the AEC, in time for the congressional committees that controlled NAL’s budget. Wilson appreciated frugality and minimal design, but also tried to leave room for improvements and innovation. He thought design should be ongoing, with changes implemented as soon as their worth was demonstrated, before designs became conservative.


There were many decisions to be made in creating the laboratory Wilson envisioned. Many had to be modified, but this was part of his approach: “I came to understand that a poor decision was usually better than no decision at all, for if a necessary decision was not made, then the whole effort would just wallow – and, after all, a bad decision could be corrected later on,” he wrote in 1987. One example was the magnets in the Main Ring, as the 200 GeV synchrotron was first known, which had to be redesigned, as did the plans for the layout of the experimental areas. Even the design of the distinctive Central Laboratory building, constructed after the accelerator achieved its design energy and renamed Robert Rathbun Wilson Hall in 1980, required adjustments to its initial concept. Wilson said that “a building does not have to be ugly to be inexpensive”, and he orchestrated a competition among his selected architects to create the final design of this visually striking structure. To save money he set up competitions between contractors, so that the fastest to finish a satisfactory project were rewarded with more jobs. Consequently, the Main Ring was completed on time, by 30 March 1972, and under its $250 million budget. NAL was dedicated and renamed Fermilab on 11 May 1974.

International attraction

Experimentalists from Europe and Asia flocked to propose research at the new frontier facility in the US, forging larger collaborations with American colleagues. Its forefront position and philosophy attracted the top physicists of the world, with Russian physicists making news by working on the first approved experiment at Fermilab at the height of the Cold War. Congress was pleased, and the scientists were overjoyed to have more experimental areas than originally planned and higher energy: within two years the magnets were improved to attain 400 GeV and then 500 GeV. The higher energy in a fixed-target accelerator complex allowed more innovative experiments, in particular enabling the discovery of the bottom quark in 1977.

Fermilab’s early intellectual environment was influenced by theoretical physicists Robert Serber, Sam Treiman, J D Jackson and Ben Lee, who later brought Chris Quigg and Bill Bardeen, who in turn invited many distinguished visitors to add to the creative milieu of the laboratory. Already on Wilson’s mind was a colliding-beams accelerator he called an “energy doubler”, which would employ superconductivity, and he had established working groups to study the idea. But Wilson encountered budget conflicts with the AEC’s successor, the new Department of Energy, which led to his resignation in 1978. He joined the faculties of the University of Chicago and Columbia University briefly before returning to Cornell in 1982.

Fermilab’s future was destined to move forward with Wilson’s ideas of superconducting-magnet technology, and a new director was sought. Lederman, who was spokesperson of the Fermilab study that discovered the bottom quark, accepted the position in late 1978 and immediately set out to win support for Wilson’s energy doubler. An accomplished scientific spokesman, Lederman achieved the necessary funding by 1979 and promoted the energy-enhancing idea of introducing an antiproton source to the accelerator complex to enable proton–antiproton collisions. Experts from Brookhaven and CERN, as well as the former USSR, shared ideas with Fermilab physicists to bring superconducting-magnet technology to fruition at Fermilab. Under the leadership of Helen Edwards, Richard Lundy, Rich Orr and Alvin Tollestrup, the Main Ring evolved into the energy doubler/saver in 1983 with a new ring of superconducting magnets installed below the early Main Ring magnets. This led to a trailblazing era during which Fermilab’s accelerator complex, now called the Tevatron, would lead the world in high-energy physics experiments. By 1985 the Tevatron had achieved 800 GeV in fixed-target experiments and 1.6 TeV in colliding-beam experiments, and by the time of its closure in 2011 it had reached 1.96 TeV in the centre of mass – just shy of its original goal of 2 TeV.

Theory also thrived at Fermilab in this period. Lederman had brought James Bjorken to Fermilab’s theoretical physics group in 1980 and a theoretical astrophysics group founded by Rocky Kolb and Michael Turner was added to Fermilab’s research division in 1983 to address research at the intersection of particle physics and cosmology. Lederman also expanded the laboratory’s mission to include science education, offering programmes to local high-school students and teachers, and in 1980 opened the first children’s centre for employees of any DOE facility. He founded the Illinois Mathematics and Science Academy in 1985 and the Chicago Teachers Academy for Mathematics and Science in 1990, and the Lederman Science Education Center on the Fermilab site is named after him. Lederman also reached out to many regions including Latin America and partnered with businesses to support the lab’s research and encourage technology transfer. The latter included Wilson’s early Fermilab initiative of neutron therapy for certain cancers, which later would see Fermilab build the 70–250 MeV proton synchrotron for the Loma Linda Medical Center in California.

Scientifically, the target in this period was the top quark. Fermilab and CERN had planned for a decade to detect the elusive top, with Fermilab deploying two large international experimental teams at the Tevatron – CDF (founded by Tollestrup) and DZero (founded by Paul Grannis) – from 1976 to 1995. In 1988 Lederman shared the Nobel prize for the discovery of the muon neutrino at Brookhaven 25 years previously, and in 1989 he stepped down as Fermilab director and joined the faculty of the University of Chicago and later the Illinois Institute of Technology.

Lederman was succeeded by John Peoples, a machine builder and Fermilab experimentalist since 1970, and leader of the Fermilab antiproton source from 1981 to 1985. Peoples had his hands full not only with Fermilab and its research programme but also with the Superconducting Super Collider (SSC) laboratory in Texas. In 1993 the SSC was cancelled and Peoples was asked by the DOE to close down the project and its many contracts. The only person to direct two national laboratories at the same time, Peoples successfully managed both tasks and returned to Fermilab to see the discovery of the top quark in 1995. He also saw through the luminosity-enhancing upgrade to the Tevatron, the Main Injector, completed in 1999. Peoples stepped down as laboratory director that summer and became director of the Sloan Digital Sky Survey (SDSS) – Fermilab’s first astrophysics experiment. He later directed the Dark Energy Survey and in 2010 he retired, continuing to serve as director emeritus of the laboratory.

Intense future

In 1999, experimentalist and former Fermilab user Michael Witherell of the University of California at Santa Barbara became Fermilab’s fourth director. Ongoing fixed-target and colliding-beam experiments continued under Witherell, as did the SDSS and the Pierre Auger cosmic-ray experiments, and the neutrino programme with the Main Injector. Mirroring the spirit of US–European competition of the 1960s, this period saw CERN begin construction of the Large Hadron Collider (LHC) to search for the Higgs boson at a lower energy than the cancelled SSC. Accordingly, the luminosity of the Tevatron became a priority, as did discussions about a possible future international linear collider. After launching the Neutrinos at the Main Injector (NuMI) research programme, including sending the underground particle beam off-site to the MINOS detector in Minnesota, Witherell returned to Santa Barbara in 2005, and in 2016 he became director of the Lawrence Berkeley Laboratory.

Physicist Piermaria Oddone from Lawrence Berkeley Laboratory became Fermilab’s fifth director in 2005. He pursued a renewal of the laboratory’s accelerator complex to exploit the intensity frontier and explore new physics, with a plan called “Project X”, part of the “Proton Improvement Plan”. Yet the last decade has been a challenging time for Fermilab, with budget cuts, reductions in staff and a redefinition of its mission. The CDF and DZero collaborations continued their search for the Higgs boson, narrowing the region where it could exist, but the more energetic LHC always had the upper hand. In the aftermath of the global economic crisis of 2008, and with the LHC taking over the energy frontier, Oddone oversaw the shutdown of the Tevatron in 2011. A Remote Operations Center in Wilson Hall and a special US Observer agreement allowed Fermilab physicists to co-operate with CERN on LHC research and participate in the CMS experiment. The Higgs boson was duly discovered at CERN in 2012 and Oddone retired the following year.

Under its sixth director, Nigel Lockyer, a former Fermilab user and former director of the TRIUMF laboratory in Vancouver, Fermilab now looks ahead to shine once more through continued exploration of the intensity frontier and of the properties of neutrinos. In the next few years, Fermilab’s Long-Baseline Neutrino Facility (LBNF) will send neutrinos to the underground DUNE experiment 1300 km away in South Dakota, prototype detectors for which are currently being built at CERN. Meanwhile, Fermilab’s Short-Baseline Neutrino programme has just taken delivery of the 760 tonne cryostat for its ICARUS experiment after its recent refurbishment at CERN, while a major experiment called Muon g-2 is about to record its first data. This suite of experiments, in co-operation with CERN and other international labs, puts Fermilab at the leading edge of the intensity frontier and continues Wilson’s dreams of exploration and discovery.

Revisiting the b revolution

Scientists summoned from all parts of Fermilab had gathered in the auditorium on the afternoon of 30 June 1977. Murmurs of speculation ran through the crowd about the reason for the hastily scheduled colloquium. In fact, word of a discovery had begun to leak out (long before the age of blogs), but no one had yet made an official announcement. Then, Steve Herb, a postdoc from Columbia University, stepped to the microphone and ended the speculation: Herb announced that scientists at Fermilab Experiment 288 had discovered the upsilon particle. A new generation of quarks was born. The upsilon had made its first and famous appearance at the Proton Center at Fermilab. The particle, a b quark and an anti-b quark bound together, meant that the collaboration had made Fermilab’s first major discovery. Leon Lederman, spokesman for the original experiment, described the upsilon discovery as “one of the most expected surprises in particle physics”.

The story had begun in 1970, when the Standard Model of particle interactions was a much thinner version of its later form. Four leptons had been discovered, while only three quarks had been observed – up, down and strange. The charm quark had been predicted, but was yet to be discovered, and the top and bottom quarks were not much more than a jotting on a theorist’s bedside table.

In June of that year, Lederman and a group of scientists proposed an experiment at Fermilab (then the National Accelerator Laboratory) to measure lepton production in a series of experimental phases that began with the study of single leptons emitted in proton collisions. This experiment, E70, laid the groundwork for what would become the collaboration that discovered the upsilon.

The original E70 detector design included a two-arm spectrometer (for the detection of lepton pairs, or di-leptons), but the group first experimented with a single arm (searching for single leptons that could come, for example, from the decay of the W, which was still to be discovered). E70 began running in March of 1973, pursuing direct lepton production. Fermilab director Robert Wilson asked for an update from the experiment, so the collaborators extended their ambitions, planned for the addition of the second spectrometer arm and submitted a new proposal, number 288, in February 1974 – a single-page, six-point paper in which the group promised to get results, “publish these and become famous”. This two-arm experiment would be called E288.

The charm dimension

Meanwhile, experiments at Brookhaven National Laboratory and at the Stanford Linear Accelerator Center were searching for the charm quark. These two experiments led to what is known as the “November Revolution” in physics. In November of 1974, both groups announced they had found a new particle, which was later proven to be a bound state of the charm quark: the J/psi particle.

Some semblance of symmetry had returned to the Standard Model with the discovery of charm. But in 1975, an experiment at SLAC revealed the existence of a new lepton, called tau. This brought a third generation of matter to the Standard Model, and was a solid indication that there were more third-generation particles to be found.

The Fermilab experiment E288 continued the work of E70, so much of the hardware was already in place, awaiting upgrades. By the summer of 1975, the collaborators had completed construction of the detector. Lederman invited a group from the State University of New York at Stony Brook to join the project, which began taking data in the autumn of 1975.

One of the many legends in the saga of the b quark concerns a false peak in E288’s data. While taking data, the collaboration observed several events at a mass between 5.8 and 6.2 GeV, suggesting the existence of a new particle, and the name upsilon was suggested for it. Unfortunately, the signal at that particular mass turned out to be a mere fluctuation, and the eagerly anticipated upsilon became known as the “Oops-Leon”.

What happened next is perhaps best described in a 1977 issue of The Village Crier (FermiNews’s predecessor): “After what Dr R R Wilson jocularly refers to as ‘horsing around,’ the group tightened its goals in the spring of 1977.” The tightening of goals came with a more specific proposal for E288 and a revamping of the detector. The collaborators, honed by their experiences with the Fermilab beam, used the detectors and electronics from E70 and the early days of E288, and added two steel magnets and two wire-chamber detectors borrowed from the Brookhaven J/psi experiment.

The simultaneous detection of two muons from upsilon decay was the particle’s expected signature. To improve the experiment’s muon-detection capability, the collaborators called for the addition to their detector of 12 cubic feet – around 600 kg – of beryllium, a light element that would absorb particles such as protons and pions but have little effect on the sought-after muons. When the collaborators had trouble finding enough of the scarce and expensive material, an almost forgotten supply of beryllium in a warehouse at Oak Ridge National Laboratory came to the rescue. By April 1977, construction was complete.

Six weeks to fame

The experiment began taking data on 15 May 1977, and saw quick results. After one week of taking data, a “bump” appeared at 9.5 GeV. John Yoh, sure but not overconfident, put a bottle of champagne labelled “9.5” in the Proton Center’s refrigerator.

But champagne corks did not fly right away. On 21 May, fire broke out in a device that measures current in a magnet, and spread to the wiring. The electrical fire released chlorine gas, which reacted with the water used to douse the flames to form acid. The acid began to eat away at the electronics, threatening the future of E288. At 2.00 a.m. Lederman was on the phone searching for a salvage expert. He found one: a Dutchman who lived in Spain and worked for a German company. The expert agreed to come, but needed 10 days to get a US visa. Lederman called the US embassy, asking for an exception. Not possible, said the embassy official. Just as the situation began to look hopeless, Lederman mentioned that he was a Columbia University professor. The official turned out to be a Columbia graduate, class of 1956. The salvage expert was at Fermilab two days later. The collaborators used his “secret formulas” to treat some 900 electronic circuit boards, and E288 was back online by 27 May.

By 15 June, the collaborators had collected enough data to establish the existence of the bump at 9.5 GeV – evidence for a new particle, the upsilon. On 30 June, Steve Herb gave the official announcement of the discovery at a seminar at Fermilab, and on 1 July the collaborators submitted a paper to Physical Review Letters. It was published without review on 1 August.

Since the discovery of the upsilon, physicists have found several levels of upsilon states. Not only was the upsilon the first major discovery for Fermilab, it was also the first indication of a third generation of quarks. A bottom quark meant there ought to be a top quark. Sure enough, Fermilab found the top quark in 1995.

Bumps on the particle-physics road

The story of “bumps” in particle physics dates back to an experiment at the Chicago Cyclotron in 1952, when Herbert Anderson, Enrico Fermi and colleagues found that the πp cross-section rose rapidly at pion energies of 80–150 MeV, with the effect about three times larger in π⁺p than in π⁻p. This observation had all the hallmarks of the resonance phenomena well known in nuclear physics, and could be explained by a state with spin 3/2 and isospin 3/2. With higher energies available at the Carnegie synchro-cyclotron, in 1954 Julius Ashkin and colleagues were able to report that the πp cross-section fell above about 180 MeV, revealing a characteristic resonance peak. Through the uncertainty principle, the peak’s width of some 100 MeV corresponds to a lifetime of around 10⁻²³ s. Further studies confirmed the resonance, later called the Δ, with a mass of 1232 MeV in four charge states: Δ⁺⁺, Δ⁺, Δ⁰ and Δ⁻.
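The width-to-lifetime conversion quoted above follows from the uncertainty relation τ ≈ ħ/Γ. A quick check (a sketch using the standard value ħ ≈ 6.582 × 10⁻²⁵ GeV·s; the function name is ours):

```python
# Estimate a resonance lifetime from its width via tau ~ hbar / Gamma.
HBAR_GEV_S = 6.582e-25  # reduced Planck constant in GeV*s

def lifetime_from_width(width_gev):
    """Return the approximate lifetime (in seconds) of a resonance
    of the given width (in GeV)."""
    return HBAR_GEV_S / width_gev

# Delta resonance: width ~100 MeV = 0.1 GeV
tau_delta = lifetime_from_width(0.1)
print(f"Delta lifetime ~ {tau_delta:.1e} s")  # ~6.6e-24 s, i.e. of order 1e-23 s
```

The same one-liner applied to the Z boson's 2.4952 GeV width gives a lifetime of about 2.6 × 10⁻²⁵ s.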

The Δ remained an isolated case until 1960, when a team led by Luis Alvarez began studies of K⁻p interactions using the 15 inch hydrogen bubble chamber at the Berkeley Bevatron. Graduate students Stan Wojcicki and Bill Graziano studied plots of the invariant mass of pairs of produced particles, and found bumps corresponding to three resonances now known as the Σ(1385), the Λ(1405) and the K*(892). These discoveries opened a golden age for bubble chambers, and set in motion the industry of “bump hunting” and the field of hadron spectroscopy. Four years later, the Δ, together with Σ and Ξ resonances, figured in the famous decuplet of spin-3/2 particles in Murray Gell-Mann’s quark model. These resonances and others could now be understood as excited states of the constituents – quarks – of more familiar, longer-lived particles.

By the early 1970s, the number of broad resonances had grown into the hundreds. Then came the shock of the “November Revolution” of 1974. Teams at Brookhaven and SLAC discovered a new, much narrower resonance in experiments studying, respectively, pBe → e⁺e⁻X and e⁺e⁻ annihilation. This was the famous J/psi, which after the dust had settled was recognised as the bound state of a predicted fourth quark, charm, and its antiquark. The discovery of the upsilon, again as a narrow resonance formed from a bottom quark and antiquark, followed three years later (see main article). By the end of the decade, bumps in appropriate channels were revealing a new spectroscopy of charm and bottom particles at energies around 4 GeV and 10 GeV, respectively.

This left the predicted top quark and, in the absence of any clear idea of its mass, over the following years searches at increasingly high energies looked for a bump that could indicate its quark–antiquark bound state. The effort moved from e⁺e⁻ colliders to the higher energies of proton–antiproton machines, and it was experimental groups at Fermilab’s Tevatron that eventually claimed the first observation of top quarks in 1995 – not in a resonance, but through their individual decays.

However, important bumps did appear in proton–antiproton collisions, this time at CERN’s SPS collider, in the experiments that discovered the W and Z bosons in 1983. The bumps allowed the first precise measurements of the bosons’ masses. The Z later became famous as an e⁺e⁻ resonance, in particular at CERN’s LEP collider. The most precisely measured resonance yet, the Z has a mass of 91.1876 ± 0.0021 GeV and a width of 2.4952 ± 0.0023 GeV.
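The mass and width quoted for the Z are the two parameters of a resonance curve: schematically, the cross-section near the pole follows a relativistic Breit–Wigner line shape. A minimal sketch of that bare line shape (normalised to 1 at the peak, and ignoring the QED radiative corrections that are essential in real LEP fits):

```python
# Relativistic Breit-Wigner line shape for a resonance of mass M and width G.
def breit_wigner(s, M, G):
    """Relative cross-section at squared centre-of-mass energy s
    (all quantities in GeV-based units), normalised to 1 at s = M**2."""
    return (M * G) ** 2 / ((s - M ** 2) ** 2 + (M * G) ** 2)

M_Z, G_Z = 91.1876, 2.4952  # PDG-style values quoted in the text

peak = breit_wigner(M_Z ** 2, M_Z, G_Z)              # 1.0 at the pole
half = breit_wigner((M_Z + G_Z / 2) ** 2, M_Z, G_Z)  # ~0.5 half a width above the pole
```

Scanning the beam energy across the pole and fitting this curve to the measured event rates is, in essence, how LEP pinned down the mass and width to the precision quoted above.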

A more recent bump is probably more famous still – the Higgs boson, observed in 2012. In data from the ATLAS and CMS experiments, small bumps around 125 GeV in the four-lepton and two-photon mass spectra revealed the long-sought scalar boson (CERN Courier September 2012 p43 and p49).

Today, bump-hunting continues at machines spanning a huge range in energy, from the BEPC-II e⁺e⁻ collider in China, with a beam energy of 1–2.3 GeV, to CERN’s LHC, operating at 6.5 TeV per beam. Recently, LHC experiments spotted a modest excess of events at a mass of 750 GeV; although the researchers cautioned that it was not statistically significant, it still prompted hundreds of publications on the arXiv preprint server. Alas, on this occasion as on others over the decades, the bump faded away once larger data sets were recorded.

Nevertheless, with the continuing searches for new high-mass particles, now as messengers for physics beyond the Standard Model, and searches at lower energies providing many intriguing bumps, who knows where the next exciting “bump” might appear?

• Christine Sutton, formerly CERN.

 
