Thorium Energy for the World

By J-P Revol, M Bourquin, Y Kadi, E Lillestol, J-C de Mestral and K Samec (eds)
Springer

ISBN 978-3-319-26542-1

This book contains the proceedings of the Thorium Energy Conference (ThEC13), held in October 2013 at CERN, which brought together some of the world’s leading experts on thorium technologies. According to these experts, nuclear energy based on a thorium fuel cycle is safer and cleaner than that generated from uranium. In addition, long-lived waste from existing power plants could be retrieved and integrated into the thorium fuel cycle, where it would be transformed into stable material while generating electricity.

The technology required to implement this type of fuel cycle is already being developed; nevertheless, much effort and time are still needed.

The ThEC13 conference brought together high-level speakers from 30 countries, including the Nobel laureates Carlo Rubbia and Jack Steinberger, the then CERN Director-General Rolf Heuer, and Hans Blix, former director-general of the International Atomic Energy Agency (IAEA).

Collecting the contributions of the speakers, this book offers a detailed technical review of thorium-energy technologies from basic R&D to industrial developments, and is thus a tool for informed debates on the future of energy production and, in particular, on the advantages and disadvantages of different nuclear technologies.

Bose–Einstein Condensation and Superfluidity

By L Pitaevskii and S Stringari
Oxford University Press

This book deals with the fascinating topics of Bose–Einstein condensation (BEC) and superfluidity. The main emphasis is on providing the formalism needed to describe these phases of matter as they are observed in the laboratory. This goes well beyond the idealised systems for which BEC was originally predicted, and is essential for interpreting the experimental observations.

BEC was predicted in 1925 by Einstein, based on the ideas of Satyendra Nath Bose. It corresponds to a new phase of matter in which bosons accumulate in the lowest energy level and develop coherent quantum properties on a macroscopic scale. These properties can give rise to phenomena that seem impossible from an everyday perspective. In particular, BEC lies behind the theory of superfluids: fluids that flow without dissipating energy and rotate without generating vorticity – except for quantised vortices, which are a kind of topological defect.

The first BEC in dilute gases was observed in the laboratory in 1995, an achievement recognised by the 2001 Nobel Prize in Physics. Since then, there has been an explosion of interest and new results in the field. It is thus timely that two of its leading experts have updated and extended their volume on BEC to summarise the theoretical aspects of this phase of matter. The authors also describe in detail how superfluid phenomena can occur in Fermi gases in the presence of interactions.

The book is relatively heavy in formalism, which is justified by the wide range of phenomena covered in a relatively concise volume. It starts with some basics about correlation functions, condensation and statistical mechanics. Next, it delves into the simplest systems for which BEC can occur: weakly coupled dilute gases of bosonic particles. The authors describe different approaches to the BEC phase, including the works of Landau and Bogoliubov. They also introduce the Gross–Pitaevskii equation and show its importance in the description of superfluids. Superfluidity is explained in great detail, in particular the occurrence of quantised vortices.
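
As a point of reference (this is the standard textbook form, not a quotation from the book), the Gross–Pitaevskii equation for the condensate wavefunction ψ(r, t) of a dilute Bose gas with s-wave scattering length a reads
\[ i\hbar\,\frac{\partial \psi}{\partial t} = \left(-\frac{\hbar^2}{2m}\nabla^2 + V_{\mathrm{ext}}(\mathbf{r}) + g|\psi|^2\right)\psi, \qquad g = \frac{4\pi\hbar^2 a}{m}, \]
a nonlinear Schrödinger equation in which the |ψ|² term encodes the mean-field interaction responsible for much of the superfluid behaviour discussed in the book.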

The second part describes how to adapt the theoretical formalism introduced in the first part to realistic traps where BEC is observed. This is very important to connect theoretical descriptions to laboratory research, for instance to predict in which experimental configurations a BEC will appear and how to characterise it.

Part three deals with BEC in fermionic systems, which is possible if the fermions interact and pair up into bosonic structures. These fermionic phases exhibit superfluid properties and have been created in the laboratory, and the authors consider fermionic condensates in realistic traps. The final part is devoted to new phenomena appearing in mixed bosonic–fermionic systems.

The book is a good resource for the theoretical description of BEC beyond the idealised configurations that are described in many texts. The concise style and large amount of notation require constant effort from the reader, but this seems inevitable given the many surprising phenomena appearing in BECs. The book, perhaps combined with others, will provide the reader with a clear overview of the topic and the latest theoretical developments in the field. The text is enhanced by the many figures and plots presenting experimental data.

Making Sense of Quantum Mechanics

By Jean Bricmont
Springer

In this book, Jean Bricmont aims to challenge Richard Feynman’s famous statement that “nobody understands quantum mechanics” and discusses some of the issues that have surrounded this field of theoretical physics since its inception.

Bricmont starts by strongly criticising the “establishment” view of quantum mechanics (QM), known as the Copenhagen interpretation, which attributes a key role to the observer in a quantum measurement. The quantum-mechanical wavefunction, indeed, predicts the possible outcomes of a quantum measurement, but not which one of these actually occurs. The author opposes the idea that a conscious human mind is an essential part of the process of determining what outcome is obtained. This interpretation was proposed by some of the early thinkers on the subject, although I believe Bricmont is wrong to associate it with Niels Bohr, who related the measurement to irreversible changes in the measuring apparatus, rather than to the mind of the human observer.

The second chapter deals with the nature of the quantum state, illustrated with discussions of the Stern–Gerlach experiment to measure spin and the Mach–Zehnder interferometer to emphasise the importance of interference. During the last 20 years or so, much work has been done on “decoherence”. This has shown that the interaction of the quantum system with its environment, which may include the measuring apparatus, prevents any detectable interference between the states associated with different possible measurement outcomes. Bricmont correctly emphasises that this still does not result in a particular outcome being realised.

The author’s central argument is presented in chapter five, where he discusses the de Broglie–Bohm hidden-variable theory. At its simplest, it proposes that there are two components to the quantum-mechanical state: the wavefunction and an actual point particle that always has a definite position, although this is hidden from observation until its position is measured. This model claims to resolve many of the conceptual problems thrown up by orthodox QM: in particular, the outcome of a measurement is determined by the position of the particle being measured, while the other possibilities implied by the wavefunction can be ignored because they are associated with “empty waves”. Bricmont shows how all the results of standard QM – particularly the statistical probabilities of different measurement outcomes – are faithfully reproduced by the de Broglie–Bohm theory.
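
For readers who want the formula behind this picture, the (non-relativistic, spinless) guidance equation of the de Broglie–Bohm theory – standard in the literature, though not written out in the review itself – reads
\[ \frac{d\mathbf{X}_k}{dt} = \frac{\hbar}{m_k}\,\mathrm{Im}\,\frac{\nabla_k \Psi}{\Psi}\bigg|_{(\mathbf{X}_1,\dots,\mathbf{X}_N)}, \]
so the velocity of each particle is set by the wavefunction evaluated at the actual positions of all the particles – which is also where the non-locality discussed below enters.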

This is probably the clearest account of this theory that I have come across. So why is the de Broglie–Bohm theory not generally accepted as the correct way to understand quantum physics? One reason follows from the work of John Bell, who showed that no hidden-variable theory can reproduce the quantum predictions (now thoroughly verified by experiment) for systems consisting of two or more particles in an entangled state unless the theory includes non-locality – i.e. a faster-than-light communication between the component particles and/or their associated wavefunctions. As this is clearly inconsistent with special relativity, many thinkers (including Bell himself) have looked elsewhere for a realistic interpretation of quantum phenomena. Not so Jean Bricmont: along with other contemporary supporters of the de Broglie–Bohm theory, he embraces non-locality and looks to use the idea to enhance our understanding of the reality he believes underlies quantum physics. In fact, he devotes a whole chapter to this topic and claims that non-locality is an essential feature of quantum physics and not just of models based on hidden variables.

Other problems with the de Broglie–Bohm theory are discussed and resolved – to the author’s satisfaction at least. These include how the de Broglie–Bohm model can be consistent with the Heisenberg uncertainty principle when it appears to assume that the particle always has a definite position and momentum; he points out that the statistical results of a large number of measurements always agree with conventional predictions, and these include the uncertainty principle.

Alternative ways to interpret QM are presented, but the author does not find in them the same advantages as in the de Broglie–Bohm theory. In particular, he discusses the many-worlds interpretation, which assumes that the only reality is the wavefunction and that, rather than collapsing at a measurement, this produces branches that correspond to all measurement outcomes. One of the consequences of decoherence is that there can be no interference between the branches corresponding to different measurement outcomes, and this means that no branch can be aware of any other. It follows that, even if a human observer is involved, each branch can contain a copy of him or her who is unaware of the others’ presence. From this point of view, all the possible measurement outcomes co-exist – hence the term “many worlds”. Apart from its ontological extravagance, the main difficulty with the many-worlds theory is that it is very hard to see how the separate outcomes can have different probabilities when they all occur simultaneously. Many-worlds supporters have proposed solutions to this problem, which do not satisfy Bricmont (and, indeed, myself), who emphasises that this is not a problem for the de Broglie–Bohm theory.

A chapter is also dedicated to a brief discussion of philosophy, concentrating on the concept of realism and how it contrasts with idealism. Unsurprisingly, it concludes that realists want a theory describing what happens at the micro scale that accounts for predictions made at the macro scale – and that de Broglie–Bohm provides just such a theory.

The book concludes with an interesting account of the history of QM, including the famous Bohr–Einstein debate, the struggle of de Broglie and Bohm for recognition, and the influence of the politics of the time.

This is a clearly written and interesting book. It has been very well researched, containing more than 500 references, and I would thoroughly recommend it to anyone who has an undergraduate knowledge of physics and mathematics and an interest in foundational questions. Whether it actually lives up to its title is for each reader to judge.

The founding of Fermilab

In 1960, two high-energy physics laboratories were competing for scientific discoveries. The first was Brookhaven National Laboratory on Long Island in New York, US, with its 33 GeV Alternating Gradient Synchrotron (AGS). The second was CERN in Switzerland, with its 28 GeV Proton Synchrotron (PS). That year, the US Atomic Energy Commission (AEC) received several proposals to boost the country’s research programme, focusing on the construction of new accelerators with energies between 100 and 1000 GeV. A joint panel of President Kennedy’s Science Advisory Committee and the AEC’s General Advisory Committee was formed to consider the submissions, chaired by Harvard physicist and Manhattan Project veteran Norman Ramsey. By May 1963, the panel had decided to have Ernest Lawrence’s Radiation Laboratory in Berkeley, California, design a several-hundred-GeV accelerator. The result was a 200 GeV synchrotron costing approximately $340 million.

When Cornell physicist Robert Rathbun Wilson, a student of Lawrence’s who had also worked on the Manhattan Project, saw Berkeley’s plans, he considered them too conservative, unimaginative and too expensive. Wilson, a modest yet proud man, thought he could design a better accelerator for less money, and he let his thoughts be known. By September 1965, Wilson had proposed to the AEC an alternative, innovative and less costly (approximately $250 million) design for the 200 GeV accelerator. The Joint Committee on Atomic Energy, the congressional body responsible for AEC projects and budgets, approved his plan.

During this period, coinciding with the Vietnam war, the US Congress hoped to contain costs. Yet physicists hoped to make breakthrough discoveries, and thought it important to appeal to national interests. The discovery of the Ω⁻ particle at Brookhaven in 1964 led high-energy physicists to conclude that “an accelerator ‘in the range of 200–1000 BeV’ would ‘certainly be crucial’ in exploring the ‘detailed dynamics of this strong SU(3) symmetrical interaction’.” Simultaneously, physicists were expressing frustration with the geographic situation of US high-energy physics facilities. East and West Coast laboratories like Lawrence Berkeley Laboratory and Brookhaven did not offer sufficient opportunity for the nation’s experimental physicists to pursue their research. Managed by regional boards, the programmes at these two labs were directed by and accessible to physicists from nearby universities. Without substantial federal support, other major research universities struggled to compete with these regional laboratories.

Against this backdrop arose a major movement to accommodate physicists in the centre of the country and offer more equal access. Columbia University experimental physicist Leon Lederman championed “the truly national laboratory” that would allow any qualifying proposal to be conducted at a national, rather than a regional, facility. In 1965, a consortium of major US research universities, Universities Research Association (URA), Inc., was established to manage and operate the 200 GeV accelerator laboratory for the AEC (and its successor agencies the Energy Research and Development Administration (ERDA) and the Department of Energy (DOE)) and address the need for a more national laboratory. Ramsey was president of URA for most of the period 1966 to 1981.

Following a nationwide competition organised by the National Academy of Sciences, in December 1966 a 6800-acre site in Weston, Illinois, around 50 km west of Chicago, was selected. Another suburban Chicago site, north of Weston in affluent South Barrington, had been withdrawn after local residents “feared that the influx of physicists would ‘disturb the moral fibre of their community’”. Robert Wilson was selected to direct the new 200 GeV accelerator laboratory, named the National Accelerator Laboratory (NAL). Wilson asked Edwin Goldwasser, an experimental physicist from the University of Illinois, Urbana-Champaign, and a member of Ramsey’s panel, to be his deputy director, and the pair set up temporary offices in Oak Brook, Illinois, on 15 June 1967. They began to recruit physicists from around the country to staff the new facility and design the 200 GeV accelerator, also attracting personnel from Chicago and its suburbs. President Lyndon Johnson signed the bill authorising funding for the National Accelerator Laboratory on 21 November 1967.

Chicago calling

It wasn’t easy to recruit scientific staff to the new laboratory in open cornfields and farmland with few cultural amenities. That picture lies in stark contrast to today, with the lab encircled by suburban sprawl encouraged by highway construction and development of a high-tech corridor with neighbours including Bell Labs/AT&T and Amoco. Wilson encouraged people to join him in his challenge, promising higher energy and more experimental capability than originally planned. He and his wife, Jane, imbued the new laboratory with enthusiasm and hospitality, just as they had experienced in the isolated setting of wartime-era Los Alamos while Wilson carried out his work on the Manhattan Project.

Wilson and Goldwasser worked on the social conscience of the laboratory and in March 1968, a time of racial unrest in the US, they released a policy statement on human rights. They intended to: “seek the achievement of its scientific goals within a framework of equal employment opportunity and of a deep dedication to the fundamental tenets of human rights and dignity…The formation of the Laboratory shall be a positive force…toward open housing…[and] make a real contribution toward providing employment opportunities for minority groups…Special opportunity must be provided to the educationally deprived…to exploit their inherent potential to contribute to and to benefit from the development of our Laboratory. Prejudice has no place in the pursuit of knowledge…It is essential that the Laboratory provide an environment in which both its staff and its visitors can live and work with pride and dignity. In any conflict between technical expediency and human rights we shall stand firmly on the side of human rights. This stand is taken because of, rather than in spite of, a dedication to science.” Wilson and Goldwasser brought inner-city youth out to the suburbs for employment, training them for many technical jobs. Congress supported this effort and was pleased to recognise it during the civil-rights movement of the late 1960s. Its affirmative spirit endures today.

When asked by a congressional committee authorising funding for NAL in April 1969 about the value of the research to be conducted at NAL, and if it would contribute to national defence, Wilson famously answered: “It has only to do with the respect with which we regard one another, the dignity of men, our love of culture…It has to do with, are we good painters, good sculptors, great poets? I mean all the things we really venerate and honour in our country and are patriotic about. It has nothing to do directly with defending our country except to help make it worth defending.”

A harmonious whole

Wilson, who had promised to complete his project on time and under budget, conceived of the new laboratory as a beautiful, harmonious whole. He felt that science, technology and art are importantly connected, and brought a graphic artist, Angela Gonzales, with him from Cornell to give the laboratory site and its publications a distinctive aesthetic. He had his engineers work with a Berkeley colleague, William Brobeck, and an architectural-engineering group, DUSAF, to produce designs and cost estimates for early submission to the AEC, in time for the AEC’s own submissions to the congressional committees that controlled NAL’s budget. Wilson appreciated frugality and minimal design, but also tried to leave room for improvements and innovation. He thought design should be ongoing, with changes implemented as soon as they proved themselves, before the design became frozen and conservative.

There were many decisions to be made in creating the laboratory Wilson envisioned. Many had to be modified later, but this was part of his approach: “I came to understand that a poor decision was usually better than no decision at all, for if a necessary decision was not made, then the whole effort would just wallow – and, after all, a bad decision could be corrected later on,” he wrote in 1987. One example was the magnets of the Main Ring – the original name of the 200 GeV synchrotron – which had to be redesigned, as did the plans for the layout of the experimental areas. Even the design of the distinctive Central Laboratory building, constructed after the accelerator achieved its design energy and renamed Robert Rathbun Wilson Hall in 1980, had to be adjusted from its initial concepts. Wilson said that “a building does not have to be ugly to be inexpensive”, and he orchestrated a competition among his selected architects to create the final design of this visually striking structure. To save money he set up competitions between contractors, so that the fastest to finish a satisfactory job were rewarded with more work. Consequently, the Main Ring was completed on time, by 30 March 1972, and under the $250 million budget. NAL was dedicated and renamed Fermilab on 11 May 1974.

International attraction

Experimentalists from Europe and Asia flocked to propose research at the new frontier facility in the US, forging larger collaborations with American colleagues. Its forefront position and philosophy attracted the top physicists of the world, with Russian physicists making news by working on the first approved experiment at Fermilab at the height of the Cold War. Congress was pleased and the scientists were overjoyed: there were more experimental areas than originally planned and higher energy, as the magnets were improved to attain 400 GeV and then 500 GeV within two years. The higher energy of the fixed-target accelerator complex allowed more innovative experiments, in particular enabling the discovery of the bottom quark in 1977.

Fermilab’s early intellectual environment was influenced by theoretical physicists Robert Serber, Sam Treiman, J D Jackson and Ben Lee, who later brought in Chris Quigg and Bill Bardeen, who in turn invited many distinguished visitors to add to the creative milieu of the laboratory. Already on Wilson’s mind was a colliding-beams accelerator he called an “energy doubler”, which would employ superconductivity, and he had established working groups to study the idea. But Wilson encountered budget conflicts with the AEC’s successor, the new Department of Energy, which led to his resignation in 1978. He joined the faculties of the University of Chicago and Columbia University briefly before returning to Cornell in 1982.

Fermilab’s future was destined to move forward with Wilson’s ideas of superconducting-magnet technology, and a new director was sought. Lederman, who was spokesperson of the Fermilab experiment that discovered the bottom quark, accepted the position in late 1978 and immediately set out to win support for Wilson’s energy doubler. An accomplished scientific spokesman, Lederman secured the necessary funding by 1979 and promoted the energy-enhancing idea of adding an antiproton source to the accelerator complex to enable proton–antiproton collisions. Experts from Brookhaven and CERN, as well as the former USSR, shared ideas with Fermilab physicists to bring superconducting-magnet technology to fruition at Fermilab. Under the leadership of Helen Edwards, Richard Lundy, Rich Orr and Alvin Tollestrup, the Main Ring evolved into the energy doubler/saver in 1983, with a new ring of superconducting magnets installed below the original Main Ring magnets. This led to a trailblazing era during which Fermilab’s accelerator complex, now called the Tevatron, would lead the world in high-energy physics experiments. By 1985 the Tevatron had achieved 800 GeV in fixed-target experiments and 1.6 TeV in colliding-beam experiments, and by the time of its closure in 2011 it had reached 1.96 TeV in the centre of mass – just shy of its original goal of 2 TeV.

Theory also thrived at Fermilab in this period. Lederman had brought James Bjorken to Fermilab’s theoretical physics group in 1980 and a theoretical astrophysics group founded by Rocky Kolb and Michael Turner was added to Fermilab’s research division in 1983 to address research at the intersection of particle physics and cosmology. Lederman also expanded the laboratory’s mission to include science education, offering programmes to local high-school students and teachers, and in 1980 opened the first children’s centre for employees of any DOE facility. He founded the Illinois Mathematics and Science Academy in 1985 and the Chicago Teachers Academy for Mathematics and Science in 1990, and the Lederman Science Education Center on the Fermilab site is named after him. Lederman also reached out to many regions including Latin America and partnered with businesses to support the lab’s research and encourage technology transfer. The latter included Wilson’s early Fermilab initiative of neutron therapy for certain cancers, which later would see Fermilab build the 70–250 MeV proton synchrotron for the Loma Linda Medical Center in California.

Scientifically, the target in this period was the top quark. Fermilab and CERN had planned for a decade to detect the elusive top, with Fermilab deploying two large international experimental teams at the Tevatron – CDF (founded by Tollestrup) and DZero (founded by Paul Grannis) – from 1976 to 1995. In 1988 Lederman shared the Nobel prize for the discovery of the muon neutrino at Brookhaven 25 years previously, and in 1989 he stepped down as Fermilab director and joined the faculty of the University of Chicago and later the Illinois Institute of Technology.

Lederman was succeeded by John Peoples, a machine builder and Fermilab experimentalist since 1970, and leader of the Fermilab antiproton source from 1981 to 1985. Peoples had his hands full not only with Fermilab and its research programme but also with the Superconducting Super Collider (SSC) laboratory in Texas. In 1993 the SSC was cancelled and Peoples was asked by the DOE to close down the project and its many contracts. The only person to direct two national laboratories at the same time, Peoples successfully managed both tasks and returned to Fermilab to see the discovery of the top quark in 1995. He also launched the Main Injector, the luminosity-enhancing upgrade to the Tevatron, which was completed in 1999. Peoples stepped down as laboratory director that summer and became director of the Sloan Digital Sky Survey (SDSS) – Fermilab’s first astrophysics experiment. He later directed the Dark Energy Survey and in 2010 he retired, continuing to serve as director emeritus of the laboratory.

Intense future

In 1999, experimentalist and former Fermilab user Michael Witherell of the University of California at Santa Barbara became Fermilab’s fourth director. Ongoing fixed-target and colliding-beam experiments continued under Witherell, as did the SDSS and Pierre Auger cosmic-ray experiments, and the neutrino programme with the Main Injector. Mirroring the spirit of US–European competition of the 1960s, this period saw CERN begin construction of the Large Hadron Collider (LHC) to search for the Higgs boson at a lower energy than the cancelled SSC. Accordingly, the luminosity of the Tevatron became a priority, as did discussions about a possible future international linear collider. After launching the Neutrinos at the Main Injector (NuMI) research programme, including sending the underground particle beam off-site to the MINOS detector in Minnesota, Witherell returned to Santa Barbara in 2005, and in 2016 he became director of the Lawrence Berkeley Laboratory.

Physicist Piermaria Oddone from Lawrence Berkeley Laboratory became Fermilab’s fifth director in 2005. He pursued the renewal of the Tevatron in order to exploit the intensity frontier and explore new physics with a plan called “Project X”, part of the “Proton Improvement Plan”. Yet the last decade has been a challenging time for Fermilab, with budget cuts, reductions in staff and a redefinition of its mission. The CDF and DZero collaborations continued their search for the Higgs boson, narrowing the region where it could exist, but the more energetic LHC always had the upper hand. In the aftermath of the global economic crisis of 2008, as the LHC approached switch-on, Oddone oversaw the shutdown of the Tevatron in 2011. A Remote Operations Center in Wilson Hall and a special US Observer agreement allowed Fermilab physicists to co-operate with CERN on LHC research and participate in the CMS experiment. The Higgs boson was duly discovered at CERN in 2012 and Oddone retired the following year.

Under its sixth director, Nigel Lockyer – a former Fermilab user and former director of the TRIUMF laboratory in Vancouver – Fermilab now looks to shine once more through continued exploration of the intensity frontier and the properties of neutrinos. In the next few years, Fermilab’s Long-Baseline Neutrino Facility (LBNF) will send neutrinos to the underground DUNE experiment 1300 km away in South Dakota, prototype detectors for which are currently being built at CERN. Meanwhile, Fermilab’s Short-Baseline Neutrino programme has just taken delivery of the 760 tonne cryostat for its ICARUS experiment, following refurbishment work at CERN, while a major experiment called Muon g-2 is about to take its first data. This suite of experiments, carried out in co-operation with CERN and other international labs, puts Fermilab at the leading edge of the intensity frontier and continues Wilson’s dreams of exploration and discovery.

Revisiting the b revolution

Scientists summoned from all parts of Fermilab had gathered in the auditorium on the afternoon of 30 June 1977. Murmurs of speculation ran through the crowd about the reason for the hastily scheduled colloquium. In fact, word of a discovery had begun to leak out [long before the age of blogs], but no one had yet made an official announcement. Then, Steve Herb, a postdoc from Columbia University, stepped to the microphone and ended the speculation: Herb announced that scientists at Fermilab Experiment 288 had discovered the upsilon particle. A new generation of quarks was born. The upsilon had made its first and famous appearance at the Proton Center at Fermilab. The particle, a b quark and an anti-b quark bound together, meant that the collaboration had made Fermilab’s first major discovery. Leon Lederman, spokesman for the original experiment, described the upsilon discovery as “one of the most expected surprises in particle physics”.

The story had begun in 1970, when the Standard Model of particle interactions was a much thinner version of its later form. Four leptons had been discovered, while only three quarks had been observed – up, down and strange. The charm quark had been predicted, but was yet to be discovered, and the top and bottom quarks were not much more than a jotting on a theorist’s bedside table.

In June of that year, Lederman and a group of scientists proposed an experiment at Fermilab (then the National Accelerator Laboratory) to measure lepton production in a series of experimental phases that began with the study of single leptons emitted in proton collisions. This experiment, E70, laid the groundwork for what would become the collaboration that discovered the upsilon.

The original E70 detector design included a two-arm spectrometer [for the detection of lepton pairs, or di-leptons], but the group first experimented with a single arm [searching for single leptons that could come, for example, from the decay of the W, which was still to be discovered]. E70 began running in March of 1973, pursuing direct lepton production. Fermilab director Robert Wilson asked for an update from the experiment, so the collaborators extended their ambitions, planned for the addition of the second spectrometer arm and submitted a new proposal, number 288, in February 1974 – a single-page, six-point paper in which the group promised to get results, “publish these and become famous”. This two-arm experiment would be called E288.

The charm dimension

Meanwhile, experiments at Brookhaven National Laboratory and at the Stanford Linear Accelerator Center were searching for the charm quark. These two experiments led to what is known as the “November Revolution” in physics. In November of 1974, both groups announced they had found a new particle, which was later proven to be a bound state of the charm quark: the J/psi particle.

Some semblance of symmetry had returned to the Standard Model with the discovery of charm. But in 1975, an experiment at SLAC revealed the existence of a new lepton, called tau. This brought a third generation of matter to the Standard Model, and was a solid indication that there were more third-generation particles to be found.

The Fermilab experiment E288 continued the work of E70, so much of the hardware was already in place, waiting for upgrades. By the summer of 1975, the collaborators had completed construction of the detector. Lederman invited a group from the State University of New York at Stony Brook to join the project, which began taking data in the autumn of 1975.

One of the many legends in the saga of the b quark describes a false peak in E288’s data. In the process of taking data, several events at an energy between 5.8 and 6.2 GeV were observed, suggesting the existence of a new particle, for which the name upsilon was proposed. Unfortunately, the signals at that particular energy turned out to be mere fluctuations, and the eagerly anticipated upsilon became known as the “oops-Leon”.

What happened next is perhaps best described in a 1977 issue of The Village Crier (FermiNews’s predecessor): “After what Dr R R Wilson jocularly refers to as ‘horsing around,’ the group tightened its goals in the spring of 1977.” The tightening of goals came with a more specific proposal for E288 and a revamping of the detector. The collaborators, honed by their experiences with the Fermilab beam, used the detectors and electronics from E70 and the early days of E288, and added two steel magnets and two wire-chamber detectors borrowed from the Brookhaven J/psi experiment.

The simultaneous detection of two muons from upsilon decay characterised the particle’s expected signature. To improve the experiment’s muon-detection capability, collaborators called for the addition to their detector of 12 cubic feet – about two metric tonnes – of beryllium, a light element that would act as an absorber for particles such as protons and pions, but would have little effect on the sought-for muons. When the collaborators had problems finding enough of the scarce and expensive material, an almost forgotten supply of beryllium in a warehouse at Oak Ridge National Laboratory came to the rescue. By April 1977, construction was complete.

Six weeks to fame

The experiment began taking data on 15 May 1977, and saw quick results. After one week of taking data, a “bump” appeared at 9.5 GeV. John Yoh, sure but not overconfident, put a bottle of champagne labelled “9.5” in the Proton Center’s refrigerator.

But champagne corks did not fly right away. On 21 May, a fire broke out in a device that measures current in a magnet and spread to the wiring. The electrical fire created chlorine gas which, when water was used to douse the flames, formed acid. The acid began to eat away at the electronics, threatening the future of E288. At 2.00 a.m. Lederman was on the phone searching for a salvage expert. He found one: a Dutchman who lived in Spain and worked for a German company. The expert agreed to come, but needed 10 days to get a US visa. Lederman called the US embassy, asking for an exception. Not possible, said the embassy official. Just as it began to look hopeless, Lederman mentioned that he was a Columbia University professor. The official turned out to be a Columbia graduate, class of 1956. The salvage expert was at Fermilab two days later. Collaborators used the expert’s “secret formulas” to treat some 900 electronic circuit boards, and E288 was back online by 27 May.

By 15 June, the collaborators had collected enough data to prove the existence of the bump at 9.5 GeV – evidence for a new particle, the upsilon. On 30 June, Steve Herb gave the official announcement of the discovery at the seminar at Fermilab, and on 1 July the collaborators submitted a paper to Physical Review Letters. It was published without review on 1 August.

Since the discovery of the upsilon, physicists have found several levels of upsilon states. Not only was the upsilon the first major discovery for Fermilab, it was also the first indication of a third generation of quarks. A bottom quark meant there ought to be a top quark. Sure enough, Fermilab found the top quark in 1995.

Bumps on the particle-physics road

The story of “bumps” in particle physics dates back to an experiment at the Chicago Cyclotron in 1952, when Herbert Anderson, Enrico Fermi and colleagues found that the πp cross-section rose rapidly at pion energies of 80–150 MeV, with the effect about three times larger in π⁺p than in π⁻p. This observation had all the hallmarks of the resonance phenomena that were well known in nuclear physics, and could be explained by a state with spin 3/2 and isospin 3/2. With higher energies available at the Carnegie synchrocyclotron, in 1954 Julius Ashkin and colleagues were able to report that the πp cross-section fell above about 180 MeV, revealing a characteristic resonance peak. Through the uncertainty principle, the peak’s width of some 100 MeV is consistent with a lifetime of around 10⁻²³ s. Further studies confirmed the resonance, later called the Δ, with a mass of 1232 MeV in four charge states: Δ⁺⁺, Δ⁺, Δ⁰ and Δ⁻.
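
As a rough check of that estimate (the numbers are the standard conversion, not taken from the article), the width Γ and lifetime τ of a resonance are related by
\[ \tau \simeq \frac{\hbar}{\Gamma} \approx \frac{6.6\times10^{-22}\ \mathrm{MeV\,s}}{100\ \mathrm{MeV}} \approx 7\times10^{-24}\ \mathrm{s}, \]
of order 10⁻²³ s – far too short for the Δ to leave a visible track, so it can only be inferred as a bump in a cross-section or invariant-mass distribution.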

The Δ remained an isolated case until 1960, when a team led by Luis Alvarez began studies of K⁻p interactions using the 15 inch hydrogen bubble chamber at the Berkeley Bevatron. Graduate students Stan Wojcicki and Bill Graziano studied plots of the invariant mass of pairs of the particles produced, and found bumps corresponding to three resonances now known as the Σ(1385), the Λ(1405) and the K*(892). These discoveries opened a golden age for bubble chambers, and set in motion the industry of “bump hunting” and the field of hadron spectroscopy. Four years later, the Δ, together with Σ and Ξ resonances, figured in the famous decuplet of spin-3/2 particles in Murray Gell-Mann’s quark model. These resonances and others could now be understood as excited states of the constituents – quarks – of more familiar, longer-lived particles.
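
As a reminder of what such a plot shows (the formula is standard, not quoted from the article), the invariant mass of a pair of detected particles with energies E₁, E₂ and momenta p₁, p₂ is
\[ M_{12} = \sqrt{(E_1+E_2)^2 - |\mathbf{p}_1+\mathbf{p}_2|^2} \]
in units where c = 1; if the pair comes from the decay of a short-lived resonance, the distribution of M₁₂ over many events shows a bump at the resonance mass, whatever the production mechanism.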

By the early 1970s, the number of broad resonances had grown into the hundreds. Then came the shock of the “November Revolution” of 1974. Teams at Brookhaven and SLAC discovered a new, much narrower resonance in experiments studying, respectively, p Be → e⁺e⁻ X and e⁺e⁻ annihilation. This was the famous J/psi, which after the dust had settled was recognised as the bound state of a predicted fourth quark, charm, and its antiquark. The discovery of the upsilon, again as a narrow resonance formed from a bottom quark and antiquark, followed three years later (see main article). By the end of the decade, bumps in appropriate channels were revealing a new spectroscopy of charm and bottom particles at energies around 4 GeV and 10 GeV, respectively.

This left the predicted top quark, and in the absence of any clear idea of its mass, over the following years searches at increasingly high energies looked for a bump that could indicate its quark–antiquark bound state. The effort moved from e⁺e⁻ colliders to the higher energies of proton–antiproton machines, and it was experimental groups at Fermilab’s Tevatron that eventually claimed the first observation of top quarks in 1995, not in a resonance, but through their individual decays.

However, important bumps did appear in proton–antiproton collisions, this time at CERN’s SPS, in the experiments that discovered the W and Z bosons in 1983. The bumps allowed the first precise measurements of the masses of the bosons. The Z later became famous as an e⁺e⁻ resonance, in particular at CERN’s LEP collider. The most precisely measured resonance yet, the Z has a mass of 91.1876±0.0021 GeV and a width of 2.4952±0.0023 GeV.

However, a more recent bump is probably still more famous – the Higgs boson as observed in 2012. In data from the ATLAS and CMS experiments, small bumps around 125 GeV in the mass spectrum in the four-lepton and two-photon channels, respectively, revealed the long-sought scalar boson (CERN Courier September 2012 p43 and p49).

Today, bump-hunting continues at machines spanning a huge range in energy, from the BEPC-II e⁺e⁻ collider in China, with a beam energy of 1–2.3 GeV, to CERN’s LHC, operating at 6.5 TeV per beam. Only recently, LHC experiments spotted a modest excess of events at a mass of 750 GeV; although the researchers cautioned that it was not statistically significant, it still prompted hundreds of publications on the arXiv preprint server. Alas, on this occasion as on others over the decades, the bump faded away once larger data sets were recorded.

Nevertheless, with the continuing searches for new high-mass particles, now as messengers for physics beyond the Standard Model, and searches at lower energies providing many intriguing bumps, who knows where the next exciting “bump” might appear?

• Christine Sutton, formerly CERN.


Theory of Quantum Transport at Nanoscale: An Introduction

By Dmitry A Ryndyk
Springer

This book provides an introduction to the theory of quantum transport at the nanoscale – a rapidly developing field that studies charge, spin and heat transport in nanostructures and nanostructured materials. The theoretical models and methods collected in the volume are widely used in nano-, molecular- and bio-electronics, as well as in spin-dependent electronics (spintronics).

The book begins by introducing the basic concepts of quantum transport, including the Landauer–Büttiker method; the matrix Green function formalism for coherent transport; tunnelling (transfer) Hamiltonian and master equation methods for tunnelling; Coulomb blockade; and vibrons and polarons.
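
To give a flavour of the first of these methods (the formula is standard textbook material rather than a quotation from the book), the Landauer–Büttiker approach expresses the linear-response conductance of a phase-coherent conductor in terms of the transmission probabilities Tₙ of its conduction channels:
\[ G = \frac{2e^2}{h}\sum_{n} T_n , \]
with the factor of two counting spin; electron–electron and electron–vibron interactions, treated later in the book with Green function and master-equation methods, go beyond this non-interacting picture.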

In the second part of the book, the author gives a general introduction to the non-equilibrium Green function theory, describing first the approach based on the equation-of-motion technique, and then a more sophisticated one based on the Dyson–Keldysh diagrammatic technique. The book focuses in particular on the theoretical methods able to describe the non-equilibrium (at finite voltage) electron transport through interacting nanosystems, specifically the correlation effects due to electron–electron and electron–vibron interactions.

The book will be useful both for master’s and PhD students and for researchers and professionals already working in the field of quantum transport theory and nanoscience.

Thermodynamics and Equations of State for Matter: From Ideal Gas to Quark–Gluon Plasma

By Vladimir Fortov
World Scientific

This monograph presents a comparative analysis of different thermodynamic models of the equation of state (EOS). The author aims to present in a unified way both the theoretical methods and experimental material relating to the field.

Particular attention is given to the description of extreme states reached at high pressure and temperature. As a substance advances along the scale of pressure and temperature, its composition, structure and properties undergo radical changes, from the ideal state of non-interacting neutral particles described by classical Boltzmann statistics to the exotic forms of baryon and quark–gluon matter.

Studying the EOS of matter under extreme conditions is important for the study of astrophysical objects at different stages of their evolution, as well as in plasma, condensed-matter and nuclear physics. It is also of great interest for the physics of high energy densities that are either already attained or can be reached in the near future under controlled terrestrial conditions.

Ultra-extreme astrophysical and nuclear-physical applications are also analysed. Here, the thermodynamics of matter is affected substantially by relativity, high-power gravitational and magnetic fields, thermal radiation, the transformation of nuclear particles, nucleon neutronisation, and quark deconfinement.

The book is intended for a wide range of specialists who study the EOS of matter and high-energy-density physics, as well as for senior students and postgraduates.

Big Data: Storage, Sharing, and Security

By Fei Hu (ed.)
CRC Press

Nowadays, enormous quantities of data in a variety of forms are generated rapidly in fields ranging from social networks to online shopping portals to physics laboratories. The field of “big data” involves all the tools and techniques that can store and analyse such data, whose volume, variety and speed of production are not manageable using traditional methods. As such, this new field requires us to face new challenges. These challenges and their possible solutions are the subject of this book of 17 chapters, which is clearly divided into two sections: data management and security.

Each chapter, written by different authors, describes the state of the art for a specific issue that the reader may face when implementing a big-data solution. Far from being a manual to follow step by step, the book treats each topic theoretically and describes its practical uses. Every subject is very well referenced, pointing to many publications for readers to explore in more depth.

Given the diversity of topics addressed, it is difficult to give a detailed opinion on each of them, but some deserve particular mention. One is the comparison between different communication protocols, presented in depth and accompanied by many graphs that help the reader to understand the behaviour of these protocols under different circumstances. However, the black-and-white print makes it difficult to differentiate between the lines in these graphs. Another topic that is nicely introduced is the SP (simplicity and power) system, which makes use of innovative solutions to aspects such as the variety of data when dealing with huge amounts. Even though the majority of the topics in the book are clearly linked to big data, some of them are related to broader computing topics such as deep-web crawling or malware detection in Android environments.

Security in big-data environments is widely covered in the second section of the book, spanning cryptography, accountability and cloud computing. As the authors point out, privacy and security are key: solutions are proposed to implement a reliable, safe and private platform. When managing such amounts of data, privacy needs to be treated carefully, since sensitive information could be extracted. The topic is addressed in several chapters from different points of view, from outsourced data to accountability and integrity. Special attention is also given to cloud environments, since they are not as controlled as those “in house”: data may need to be securely transmitted, stored and analysed to avoid access by unauthorised parties. Proposed approaches to security include encryption, authorisation and authentication methods.

The book is a good introduction to many of the aspects that readers might face or want to improve in their big-data environment.

Challenges and Goals for Accelerators in the XXI Century

By Oliver Brüning and Stephen Myers (eds)
World Scientific

Also available at the CERN bookshop

This mighty 840-page book covers an impressive range of subjects divided into no fewer than 45 chapters. Owing to the expertise and international reputations of the authors of the individual chapters, few if any other books in this field have managed to summarise such a broad topic with such authority. The authors – a veritable “who’s who” of the accelerator world – are too numerous to list here, but the full list can be viewed at worldscientific.com/worldscibooks/10.1142/8635#t=toc.

The book opens with two chapters devoted to a captivating historical review of the Standard Model and a general introduction to accelerators, and closes with two special sections. The first of these is devoted to novel accelerator ideas: plasma accelerators, energy-recovery linacs, fixed-field alternating-gradient accelerators, and muon colliders. The last section describes European synchrotrons used for tumour therapy with carbon ions and covers, in particular, the Heidelberg Ion Therapy Centre designed by GSI and the CERN Proton Ion Medical Machine Study. The last chapter describes the transformation of the CERN LEIR synchrotron into an ion facility for radiobiological studies.

Concerning the main body of the book, 17 chapters look back over the past 100 years, beginning with a concise history of the three first lepton colliders: AdA in Frascati, VEP-1 in Novosibirsk and the Princeton–Stanford electron–electron collider. A leap in time then takes the reader to CERN’s Large Electron–Positron collider (LEP), which is followed by a description of the Stanford Linear Collider. Unfortunately, this latter chapter is too short to do full justice to such an innovative approach to electron–positron collisions.

The next section is devoted to beginnings, starting from the time of the Brookhaven Cosmotron and Berkeley Bevatron. The origin of alternating-gradient synchrotrons is well covered through a description of the Brookhaven AGS and the CERN Proton Synchrotron. The first two hadron colliders at CERN – the Intersecting Storage Rings (ISR) and the Super Proton Synchrotron (SPS) proton–antiproton collider – are then discussed. The ISR’s breakthroughs were numerous, including the discovery of Schottky scans, the demonstration of stochastic cooling and absolute luminosity measurements by van der Meer scans. Even more remarkable was the harvest of the SPS proton–antiproton collider, culminating with the Nobel prize awarded to Carlo Rubbia and Simon van der Meer. The necessary Antiproton Accumulator and Collector are discussed in a separate chapter, which ends with an amusing recollection: “December 1982 saw the collider arriving at an integrated luminosity of 28 inverse nanobarns and Rubbia offering a ‘champagne-only’ party with 28 champagne bottles!” Antiproton production methods are covered in detail, including a description of the manoeuvres needed to manipulate antiproton bunches and of the production of cold antihydrogen atoms. This subject is continued in a later chapter dedicated to CERN’s new ELENA antiproton facility.

The Fermilab proton–antiproton collider started later than the SPS, but eventually led to the discovery of the top quark by the CDF and D0 collaborations. The Fermilab antiproton recycler and main ring are described, followed by a chapter dedicated to the Tevatron, which was the first superconducting collider. The first author remarks that, over the years, some 10¹⁶ antiprotons were accumulated at Fermilab, corresponding to about 17 nanograms and more than 90% of the world’s total man-made quantity of nuclear antimatter. This section of the book concludes with a description of the lepton–proton collider HERA at DESY, the GSI heavy-ion facility, and the rare-isotope facility REX at ISOLDE. Space is also given to the accelerator that was never built, the US Superconducting Super Collider (SSC), of which “the hopeful birth and painful death” is recounted.

The following 25 chapters are devoted to accelerators for the 21st century, with the section on “Accelerators for high-energy physics” centred on the Large Hadron Collider (LHC). In the main article, magisterially written, it is recalled that the 27 km length of the LEP tunnel was chosen having already in mind the installation of a proton–proton collider, and the first LHC workshop was organised as early as 1984. The following chapters are dedicated to ion–ion collisions at the LHC and to the upgrades of the main ring and the injector. The high-energy version of the LHC and the design of a future 100 km-circumference collider (with both electron–positron and proton–proton collision modes) are also covered, as well as the proposed TeV electron–proton collider LHeC. The overall picture is unique, complete and well balanced.

Other chapters discuss frontier accelerators: super B-factories, the BNL Relativistic Heavy Ion Collider (RHIC) and its electron–ion extension, linear electron–positron colliders, electron–positron circular colliders for Higgs studies and the European Spallation Source. Special accelerators for nuclear physics, such as the High Intensity and Energy ISOLDE at CERN and the FAIR project at GSI, are also discussed. Unfortunately, the book does not deal with synchrotron light sources, free-electron lasers and high-power proton drivers. However, the latter are discussed in connection with neutrino beams by covering the CERN Neutrinos to Gran Sasso project and neutrino factories.

The book is aimed at engineers and physicists who are already familiar with particle accelerators and may appreciate the technical choices and stories behind existing and future facilities. Many of its chapters could also be formative for young people thinking of joining one of the described projects. I am convinced that these readers will receive the book very positively.

Synchrotron Radiation: Basics, Methods and Applications

By S Mobilio, F Boscherini and C Meneghini (eds)
Springer

Observed for the first time in 1947 – and long considered a problem for particle physics, because it causes particle beams to lose energy – synchrotron radiation is today a fundamental tool for characterising nanostructures and advanced materials. Thanks to its brilliance, spectral range, time structure and coherence, it is extensively applied in many scientific fields, spanning materials science, chemistry, nanotechnology, earth and environmental sciences, biology, medical applications, and even archaeology and cultural heritage.

The book collects the lecture notes of the 12th edition of the School on Synchrotron Radiation, held in Trieste, Italy, in 2013 and organised by the Italian Synchrotron Radiation Society in collaboration with Elettra-Sincrotrone Trieste. The book is organised into four parts. The first describes the emission of synchrotron and free-electron laser sources, as well as the basic aspects of beamline instrumentation. In the second part, the fundamental interactions between electromagnetic radiation and matter are illustrated. The third part discusses the most important experimental methods, including different types of spectroscopy, diffraction and scattering, microscopy and imaging techniques. An overview of the numerous applications of these techniques to various research fields is then given in the fourth section. Here, a chapter is also dedicated to the new generation of synchrotron-radiation sources, based on free-electron lasers, which are opening the way to new applications and more precise measurements.

This comprehensive book is aimed at both PhD students and more experienced researchers, since it not only provides an introduction to the field but also discusses relevant topics of interest in depth.
