
Neutrino pioneer

When I meet Jack in his office in Building 2, he has just returned from a “splendid” birthday celebration – a classical-music concert “with a lady conductor”, he is quick to add. It had been organised by members of his town of birth, Bad Kissingen in southern Germany, and was held at the local gymnasium that bears his name. Steinberger’s memories of the town are those of a 13-year-old child in pre-war Germany amid Nazi election propaganda. “Hitler was psychopathic when it came to Jews,” he says. “In making me leave, however, he did me a great favour because I had a wonderful education in America.”

Talking to this extraordinary man and physicist – who is too modest to dwell on the 1962 discovery of the muon neutrino that won him, Leon Lederman and Melvin Schwartz the 1988 Nobel Prize in Physics – is like taking a trip back through the history of particle physics. With the help of a scholarship from the University of Chicago, Steinberger completed a first degree in chemistry in 1942. He owes his first contact with physics to Ed Purcell and Julian Schwinger, with whom he worked at the MIT Radiation Laboratory, where he had been assigned a military role in 1941 – the year that Japan attacked the US at Pearl Harbor.

“We were making bombsights for bombers, something that could be mounted on airplanes and could see the ground with radar and so you could find military targets,” he explains. “The bombsight we succeeded in developing had a very limited accuracy and you couldn’t see a military target, but you could see cities.” With a heavy heart, Steinberger adds that the radar system was used in the infamous Dresden bombing. “That was my contribution during the war,” he states flatly.

The Fermi years

When the war ended, Steinberger went back to Chicago with the intention of completing a thesis in theoretical physics. Then he met Enrico Fermi. “Fermi was the biggest luck I had in my life!” he exclaims, with a spark in his striking blue eyes. “He asked me to look into a problem raised by an experiment by Rossi and Sands on stopping cosmic-ray muons, and suggested that I do an experiment instead of waiting for a theoretical topic to surface,” recalls Steinberger. At the time, most experiments required just a handful of Geiger counters and a detector about 20 cm long, he says. “The experiment I wanted to do required 80 of those and was 50 cm long, so it was not trivial to build it.”

It was the time before computers, when vacuum tubes were the height of technology, and Fermi had identified the resources required in the physics department of the University of Chicago. Once the experiment was up and running, however, Fermi suggested it would produce results more quickly if it were located on top of a mountain, where there would be more mesons from cosmic rays. “He found a young driver – I didn’t know how to drive, it was the beginning of cars – who took me to the only mountain in the US with a road to the top,” says Steinberger. “It was almost as high as Mont Blanc, and I could do the experiment faster by being on top of that thing.”

The experiment showed that the energy spectrum of the electron in certain meson decays is continuous. It suggested that the muon undergoes a three-body decay, probably into an electron and two neutrinos, and helped to lay the experimental foundation for the concept of a universal weak interaction. What followed is history, leading to the discovery of the muon neutrino (see “DUMAND and the origins of large neutrino detectors”). “It is likely that we had no prejudice on the question of whether the neutrino in muon decay is the same as the one in beta decay,” he says.

Apart from the discovery of the muon neutrino, Steinberger’s pioneering work in physics spans 40 years of the history of electroweak theory and experiment. At each turn of a decade, Steinberger was the first user of the latest device available to experimentalists, starting with McMillan’s electron synchrotron when it had just been completed in 1949, and then Columbia’s 380 MeV cyclotron in 1950. In 1954, he published the first bubble-chamber paper with Leitner, Samios and Schwartz, making a substantial contribution to the technique itself and achieving important results on the properties of the new unstable (strange) particles.

Lasting legacy

What brought Steinberger to CERN in 1968 was the availability of Charpak’s wire chamber, which he realised was a much more powerful way to study K0 decays – to which he says he had “become addicted”. Then he conceived and led the ALEPH experiment at the Large Electron–Positron (LEP) collider. The results of this and the other LEP experiments, he says, “dominated CERN physics, perhaps the world’s, for a dozen or more years, with crucial precise measurements that confirmed the Standard Model of the unified electroweak and strong interactions”.

These days, Jack still comes to CERN with the same curiosity for the field that he always had. He says he is “trying to learn astrophysics, in spite of my mental deficiencies”, and thinks that the most interesting question today is dark matter. “You have a Standard Model which does not predict everything and it does not predict dark matter, but you can conceive of mechanisms for making dark matter in the Standard Model,” he says. “You don’t know if you really understand it, but you can imagine it. And I am not the only one who doesn’t know.”

Futures intertwined

CERN and Fermilab have a rich history of scientific accomplishment. Fermilab, which is currently the only US laboratory fully devoted to particle physics, tends to favour fermions: the top and bottom quarks were discovered here, as was the tau neutrino. CERN seems to prefer bosons: the W, Z and Higgs bosons were all discovered at the European lab. Both labs also have ambitious plans for the future that build on a history of close collaboration. A recent example is the successful test of a novel high-field superconducting quadrupole magnet made from Nb3Sn as part of the R&D programme for the High-Luminosity Large Hadron Collider (HL-LHC). The highly successful team behind this technology (the Fermilab-led LHC Accelerator Research Programme, which includes Berkeley and Brookhaven national labs) is also committed to developing 16 T magnets for a high-energy LHC and a possible larger circular collider.

Our laboratories and their global communities are now moving even closer together. At a ceremony held at the White House in Washington, DC in May 2015, representatives from the US Department of Energy (DOE), the US National Science Foundation and CERN signed a co-operation agreement for continued joint research in particle physics and computing, both at CERN and in the US. This was followed by a ceremony at CERN in December, at which the US ambassador to the United Nations and the former CERN Director-General signed five formal agreements that will serve as the framework for future US–CERN collaboration. The new agreements enable US scientists to continue their vital contribution to the LHC and its upgrade programme, while for the first time enabling CERN participation in experiments hosted in the US.

The US physics community and DOE are committed to the success of CERN. Physicists migrated from the US to CERN en masse following the 1993 cancellation of the Superconducting Super Collider, and in 2008 a lack of clarity about the future of US particle physics contributed to budget cuts; together, these events brought our field to a low point. These painful periods taught us that a unified scientific community and strong partnerships are vital to success.

Fortunately, the tides have now turned, in particular thanks to two important planning reports. The first was the 2013 European Strategy Report, which for the first time recommended that CERN support physics programmes, particularly regarding neutrinos, outside its own laboratory. The following year, this bold proposal led the US Particle Physics Project Prioritisation Panel to strongly recommend both a continued partnership with CERN on the LHC and the pursuit of an ambitious long-baseline neutrino programme hosted by Fermilab, for which international participation and contributions are vital.

CERN’s support and European leadership are critical to the success of the ambitious Long-Baseline Neutrino Facility (LBNF) and Deep Underground Neutrino Experiment (DUNE) being hosted by Fermilab. In partnership with the Italian Institute for Nuclear Physics, CERN is also upgrading the ICARUS detector for our short-baseline neutrino programme. Thanks largely to this partnership with CERN, the US particle-physics community is now enjoying a sense of optimism and increasing budgets.

Fermilab and CERN have always worked together at some level, but the high-level agreements between CERN and the DOE will reach decades into the future. CERN recognises the extensive technical capability of Fermilab and the US community, which are currently helping to upgrade CMS and ATLAS as well as developing accelerator magnets for the HL-LHC, while the US recognises CERN’s leadership in high-energy collider physics, and more than 1000 US physicists call CERN their scientific home.

Yet, not everyone agrees that our laboratories should be intertwined. Some in the US think too much money is sent abroad and believe that the funds could be used for particle physics at “home”, or for other purposes entirely. On the other side of the Atlantic, some might wonder why they should work outside of CERN or, worse, outside of Europe. These views are short-sighted: the best science is achieved through collaborative global partnerships. For this reason, CERN and Fermilab will be intertwined for a long time to come.

The Unknown as an Engine for Science: An Essay on the Definite and the Indefinite

By H J Pirner
Springer


This essay is the result of interdisciplinary research pursued by the author, a theoretical physicist, on the concept of the indefinite and its expression in different fields of human knowledge. Examples are taken from the natural sciences, mathematics, economics, neurophysiology, history, ecology and philosophy.

Physics and mathematics often deal with the indefinite, but they try to reduce it, aiming at theories that can explain everything or at least allow reliable predictions. Indefiniteness is closely connected to uncertainty, which is a component of many analyses of complex processes, so the concept of the indefinite can also be found in economics and risk assessment.

The author explains how uncertainty is present in the humanities. For example, historians might have to work on just a few indeterminate sources and connect the dots to reconstruct a story. Uncertainty is also inherent to our memory – we tend to forget, and lose and confuse details. Psychologists understand that forgetting permits new ideas to form, while strong memories would prevent them from emerging.

The book shows how uncertainty and indefiniteness define the border of our understanding and, at the same time, are engines for research and for continuous attempts to push back that limit.

The first part focuses on information and how it helps to reduce indefiniteness. New elements must be combined with existing knowledge in order to be integrated into the knowledge system, so that the new information yields the greatest benefit. The author tries to quantify the value of information on the basis of its ability to reduce uncertainty.
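One standard way to make this quantitative – offered here as an illustration rather than as the author’s own formalism – is Shannon’s information gain, in which the value of an observation is the reduction in entropy it produces:

\[ \Delta I = H_{\text{prior}} - H_{\text{posterior}}, \qquad H = -\sum_i p_i \log_2 p_i . \]

On this reading, a measurement that halves the number of equally likely alternatives is worth exactly one bit.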

The second part of the book presents a number of methods that can be used to handle indefiniteness, which come from fuzzy logic, decision theory, hermeneutics, and semiotics. An interdisciplinary approach is promoted because it enables bridges to be built between the different fields among which our knowledge is dispersed.

Goethe’s ‘Exposure of Newton’s Theory’: A Polemic On Newton’s Theory of Light and Colour

By M Duck and M Petry (translators), with an introduction by M Duck
Imperial College Press


Johann Wolfgang von Goethe is undoubtedly famous for his literary work; it is less widely known that he was also fond of science and wrote a polemical text against Newton’s theory of light and colours, which he did not accept. He tried to reproduce the experiment that Newton used to demonstrate that light is heterogeneous but, according to what Goethe himself wrote, he could not obtain the same results.

The book provides an English translation of Goethe’s polemic, complemented by an introduction in which possible explanations for Goethe’s resistance to Newton’s theory are offered. Many suppositions have been advanced: perhaps a psychological refusal prevented him from reasoning clearly, or perhaps he was simply unable to understand Newton’s experiments and reproduce them well.

In the introduction to this volume, the editor suggests that the reason for Goethe’s stubborn attitude, which led him to preserve his belief that light is immutable and that colours result from the interaction of light and darkness, is theological. Goethe believed in the spiritual nature of light, and could not conceive of it as being anything other than simple, immutable and unknowable.

This book, addressed to historians of science, philosophers and scientists, will allow the reader to discover Goethe’s polemic against Newton and to obtain new insights into the multifaceted personality of the German poet.

Superconductivity: A New Approach Based on the Bethe–Salpeter Equation in the Mean-Field Approximation

By G P Malik
World Scientific


This specialist book on superconductivity proposes an approach, based on the Bethe–Salpeter equation, that allows the characteristics of superconductors (SCs) considered unconventional to be described.

The basic theory of superconductivity, elaborated in 1957 by Bardeen, Cooper and Schrieffer (BCS), and which earned its “fathers” a Nobel prize, proves inadequate for describing the behaviour of high-temperature superconductors (HTSCs) – materials that have a critical temperature higher than 30 K. In this monographic work, the author shows how a generalisation of the BCS equations enables the superconducting features of non-elemental SCs to be addressed in the manner that elemental SCs are dealt with in the original theory. This generalisation is achieved by adopting the “language” of the Bethe–Salpeter equation.
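For context – a textbook result rather than one specific to this book – weak-coupling BCS theory predicts a critical temperature

\[ k_B T_c \simeq 1.13\,\hbar\omega_D\, e^{-1/N(0)V}, \]

where \(\omega_D\) is the Debye frequency, \(N(0)\) the electronic density of states at the Fermi level and \(V\) the phonon-mediated pairing interaction. With typical Debye temperatures of a few hundred kelvin and weak coupling, the exponential factor keeps \(T_c\) below a few tens of kelvin, which is why materials superconducting above 30 K lie beyond the original theory.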

It was the author’s intention to give an essential treatment of the topic, without including material that is not strictly necessary, and to keep it reasonably simple and accessible. Nevertheless, quantum field theory (QFT) and its finite-temperature version (FTFT) are used to derive some equations in the text, so a basic knowledge of these is needed to follow the presentation.

Principles of Radiation Interaction in Matter and Detection (4th edition)

By C Leroy and P G Rancoita
World Scientific
Also available at the CERN bookshop


Based on a series of lectures given to undergraduate and graduate students over several years, this book provides a comprehensive and clear presentation of the physics principles that underlie radiation detection.

Detecting particles and radiation requires studying the effects of their interaction with the matter they traverse. The development of increasingly sophisticated and precise detectors has made possible many important discoveries and measurements in particle and nuclear physics.

The book, which has reached its fourth edition thanks to its good reception by readers, is organised into two main parts. The first is dedicated to an extensive treatment of the theories of particle interaction, the physics and properties of semiconductors, and the displacement damage caused in semiconductors by traversing radiation.

The second part focuses on the techniques used to reveal different kinds of particles, and on the corresponding detectors. Detailed examples are presented to illustrate the operation of the various types of detector. The radiation environments in which these interaction mechanisms are expected to take place are also described. The last chapter is dedicated to the application of particle detection to imaging in medical physics. Two appendices and a very rich bibliography complete the volume.

This latest edition of the book has been fully revised, and many sections have been extended to give as complete a treatment as possible of this developing field of study and research. Among other things, this edition provides a treatment of the Coulomb scattering of electrons, protons, light ions and heavy ions on screened nuclear potentials, which allows the corresponding non-ionising energy-loss (NIEL) doses deposited in any material to be derived.

Physics and Mathematical Tools: Methods and Examples

By A Alastuey, M Clusel, M Magro and P Pujol
World Scientific


This volume presents a set of useful mathematical methods and tools that can be used by physicists and engineers for a wide range of applications. It comprises four chapters, each structured in three parts: first, the general characteristics of the methods are described, then a few examples of applications in different fields are given, and finally a number of exercises are proposed and their solutions sketched.

The topics of the chapters are: analytical properties of susceptibilities in linear response theory, static and dynamical Green functions, and the saddle-point method to estimate integrals. The examples and exercises included range from classical mechanics and electromagnetism to quantum mechanics, quantum field theory and statistical physics. In this way, the general mechanisms of each method are seen from different points of view and therefore made clearer.
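To give a flavour of the last of these topics – a standard result quoted for illustration, not an excerpt from the book – the saddle-point (Laplace) method approximates an integral by expanding around the minimum \(x_0\) of \(f\):

\[ \int e^{-N f(x)}\,\mathrm{d}x \;\approx\; e^{-N f(x_0)} \sqrt{\frac{2\pi}{N f''(x_0)}}, \qquad N \gg 1 . \]

Applied to \(N! = \int_0^\infty t^N e^{-t}\,\mathrm{d}t\), it yields Stirling’s formula, \(N! \approx \sqrt{2\pi N}\,(N/e)^N\).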

The authors have chosen to avoid derivations that are too technical, but without sacrificing rigour or omitting the mathematics behind the method applied in each instance. Moreover, three appendices at the end of the book provide a short overview of some important tools, so that the volume can be considered self-contained, at least to a certain extent.

Intended primarily for undergraduate and graduate physics students, the book could also be useful reading for teachers, researchers and engineers.

Unifying Physics of Accelerators, Lasers and Plasmas

By Andrei Seryi
CRC Press


Particle accelerators have led to remarkable discoveries and enabled scientists to develop and test the Standard Model of particle physics. On a different scale, accelerators have many applications in technology, materials science, biology, medicine (including cancer therapy), fusion research, and industry. These machines are used to accelerate electrons, positrons or ions to energies ranging from tens of MeV to tens of GeV. Electron beams are employed to generate intense X-rays in either synchrotrons or free-electron lasers, such as the Linac Coherent Light Source at Stanford or the XFEL in Hamburg, for a range of applications.

Particle accelerators developed over the last century are now approaching the energy frontier. Today, at the terascale, the machines needed are extremely large and costly. The size of a conventional accelerator is determined by the technology used and final energy required. In conventional accelerators, radiofrequency microwave cavities support the electric fields responsible for accelerating charged particles. Plasma-based particle accelerators, driven by either lasers or particle beams, are showing great promise as future replacements, primarily due to the extremely large accelerating electric fields they can support, leading to the possibility of compact structures. These fields are supported by the collective motion of plasma electrons, forming a space-charge disturbance moving at a speed slightly below the speed of light in a vacuum. This method is commonly known as plasma wakefield particle acceleration.
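The scale of these fields can be seen from a back-of-the-envelope estimate (not taken from the book): the maximum field a cold plasma wave can sustain before it breaks is

\[ E_0 = \frac{m_e c\,\omega_p}{e} \;\approx\; 96\,\sqrt{n_e\,[\mathrm{cm^{-3}}]}\ \mathrm{V/m}, \]

where \(\omega_p\) is the plasma frequency. A plasma of density \(n_e = 10^{18}\ \mathrm{cm^{-3}}\) therefore supports fields of order 100 GV/m, roughly a thousand times the 10–100 MV/m achievable in conventional RF cavities.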

Plasma-based accelerators are the brainchild of the late John Dawson and colleagues at the University of California, Los Angeles, and the topic is being investigated worldwide with a great deal of success. In the 1980s, John David Lawson asked: “Will they be a serious competitor and displace the conventional ‘dinosaur’ variety?” This is still a valid question: plasma accelerators are already producing bright X-ray sources through betatron radiation at the lower energy scale, and there are plans to create electron beams good enough to drive free-electron lasers and future colliders. The field has seen rapid progress worldwide in the last few years, with the result that research is no longer limited to plasma physicists: accelerator and radiation experts are now involved in developing the subject.

The book fills a void in the understanding of accelerator physics, radiation physics and plasma accelerators. It is intended to unify the three areas, and does an excellent job. It also introduces the reader to the theory of inventive problem solving (TRIZ), proposed by Genrikh Altshuller in the mid-20th century to aid the development of successful patents. It is argued that plasma accelerators fit the prescription of TRIZ; however, it could also be argued that knowledge, imagination, creativity and time were all that was needed. The concept of TRIZ is outlined, and it is shown how it can be adopted for scientific and engineering problems.

The book is well organised. First, the fundamental concepts of particle motion in EM fields, common to accelerators and plasmas, are presented. Then, in chapter 3, the basics of synchrotron radiation are introduced. They are discussed again in chapter 7, with a potted history of synchrotrons together with Thomson and Compton scattering. It would make sense to have the history of synchrotrons in the earlier chapter.

The main topic of the book, namely the synergy between accelerators, lasers and plasmas, is covered in chapter 4, where a comparison between particle-beam bunch compression and laser-pulse compression is made. Lasers have the additional advantage that they can be amplified in a non-linear medium using chirped-pulse amplification (CPA). This method, together with optical parametric amplification, can push laser pulses to even higher intensities.

The basics of plasma accelerators are covered in chapter 6, where simple models of these accelerators are described, including laser- and beam-driven wakefield accelerators. However, only lepton wakefield drivers are discussed, not the proton driver used for the AWAKE project at CERN. This chapter also describes general laser–plasma processes, such as laser ionisation, with an update on progress in laser peak intensity. The application of plasma accelerators as drivers of free-electron lasers is covered in chapter 8, which describes the principles in simple terms, with handy formulae that can be easily used. Proton and ion acceleration are covered in chapter 9, where the reader is introduced to the Bragg peak, the DNA response to radiation and proton-therapy devices, ending with a description of different plasma-acceleration schemes for protons and ions. The basic principles of the laser acceleration of protons and ions by sheaths, radiation pressure and shock waves are briefly covered.

The penultimate chapter discusses beam and pulse manipulation, bringing together a fairly comprehensive but brief introduction to some of the issues regarding beam quality: beam stability, cooling and phase transfer, among others. Finally, chapter 11 looks at inventions and innovations in science, describing how using TRIZ could help. There is also a discussion on bridging the gap between initial scientific ideas, experimental verification and commercial application – the so-called “Valley of Death” – something that is not discussed in textbooks but is now more relevant than ever.

This book is, to my knowledge, the first to bridge the three disciplines of accelerators, lasers and plasmas. It fills a gap in the market and helps in developing a better understanding of the concepts used in the quest to build compact accelerators. It is an inspiring read that is suitable for both undergraduate and graduate students, as well as researchers in the field of plasma accelerators. The book concentrates on the principles, rather than being heavy on the mathematics, and I like the fact that the pages have wide margins to take notes.

Melting Hadrons, Boiling Quarks: From Hagedorn Temperature to Ultra-Relativistic Heavy-Ion Collisions at CERN. With a Tribute to Rolf Hagedorn

By Johann Rafelski (ed.)
Springer
Also available at the CERN bookshop


The statistical bootstrap model (SBM), the exponential rise of the hadron spectrum, and the existence of a limiting temperature as the ultimate indicator for the end of ordinary hadron physics, will always be associated with the name of Rolf Hagedorn. He showed that hadron physics contains its own limit, and we know today that this limit signals quark deconfinement and the start of a new regime of strong-interaction physics.
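In modern notation – a standard statement included here for orientation, not a quotation from the book – the SBM predicts a hadron mass spectrum that grows exponentially,

\[ \rho(m) \;\sim\; m^{-a}\, e^{m/T_H}, \]

so that the partition function of a hadron gas diverges above the Hagedorn temperature \(T_H\) (of order 150–160 MeV): adding energy creates ever more hadron species rather than raising the temperature further.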

This book is edited by Johann Rafelski, who was a long-time collaborator of Hagedorn and took part in many of the early conceptual developments of the SBM. It is perhaps best characterised by pointing out what it is not. It is not a collection of review articles on the physics of the SBM and related topics that could be given to newcomers as an introduction to the field. It is not a collection of reprints summarising the well-known work of Hagedorn on the SBM, and it is also not a history of this theory. Actually, aspects of all of the above can be found in this thoughtfully composed volume. However, it goes beyond all of them.

The book includes a collection of earlier articles on Hagedorn’s work, new invited articles by a number of authors, original work by Hagedorn himself, and comments and reprinted material from Rafelski; it gains much of its value through the unexpected. It provides an English translation of an early overview article by Hagedorn written in German, as well as unpublished material that may be new even to well-informed practitioners in the field. For example, it presents the transcript of the draft minutes of the 1982 meeting of the CERN Scientific Policy Committee (SPC), at which Maurice Jacob, then head of the CERN Theory Division, reported on the 1982 Bielefeld workshop on the planned experimental exploration of ultra-relativistic heavy-ion collisions, setting the scene for the forthcoming experimental programme at CERN’s SPS.

The book is split into three parts.

Part I, “Reminiscences: Rolf Hagedorn and Relativistic Heavy Ion Research”, contains a collection of 15 invited articles from colleagues of Hagedorn who witnessed the initial stages of his work, which led to the formulation of the SBM in the early 1960s, and his decisive contribution to making the case for an experimental research programme in the early 1980s: Johann Rafelski, Torleif Ericson, Maurice Jacob, Luigi Sertorio, István Montvay and Tamás Biro, Krzysztof Redlich and Helmut Satz, Gabriele Veneziano, Igor Dremin, Ludwik Turko, Marek Gaździcki and Mark Gorenstein, Grażyna Odyniec, Hans Gutbrod, Berndt Müller, and Emanuele Quercigh. These contributions draw a lively picture of Hagedorn, both as a scientist and as a man, with a wide range of interests spanning high-energy physics to music. They also illustrate the impact of Hagedorn’s work on other areas of physics.

Part II, “The Hagedorn Temperature”, contains a collection of original work by Hagedorn. His seminal 1964 publication in Nuovo Cimento is deliberately not included; instead, the section presents publications that emphasise the hurdles that had to be overcome on the way to the SBM, together with the interpretations Hagedorn offered of his own work in later years. This is of great interest to those familiar with the physicist’s work who are also curious about its creation and growth.

Part III, “Melting Hadrons, Boiling Quarks: Heavy Ion Path to Quark–Gluon Plasma”, puts the work of Hagedorn into the context of the discussion of a possible relativistic heavy-ion programme at CERN that took place in the early 1980s. It starts with his thoughts about a possible programme of this kind, presented at the workshop on future relativistic heavy-ion experiments held at the Gesellschaft für Schwerionenforschung (GSI). It also includes the draft minutes of the 1982 CERN SPC meeting, and some early works on strangeness production as an indicator of quark–gluon-plasma formation, as put forward and pursued over many years by Rafelski.

The book is undoubtedly an ideal companion to all those who wish to recall the birth of one of the main areas of today’s concepts in high-energy physics, and it is definitely a well-deserved credit to one of the great pioneers in their development.

Data to physics

At the beginning of May, the LHC declared the start of a new physics season for its experiments. The “Stable Beams” message displayed on the LHC Page 1 screen is the “go ahead” for all the experiments to start taking data for physics.

Since 25 March, when the LHC was switched back on after its winter break, the accelerator complex and experiments have been fine-tuned using low-intensity beams and pilot proton collisions, and now the LHC and the experiments are taking an abundance of data.

The short circuit that occurred at the end of April, caused by a small beech marten that had found its way onto a large, open-air electrical transformer, resulted in a delay of only a few days in the LHC running schedule. The affected part of the LHC stopped immediately and safely after the short circuit, and the entire machine remained in standby mode for a few days.

Now, the four largest LHC experiment collaborations – ALICE, ATLAS, CMS and LHCb – have started to collect and analyse the 2016 data. Last year, operators increased the number of proton bunches to 2244 per beam, spaced at intervals of 25 ns. This enabled the ATLAS and CMS collaborations to study data from about 400 million million proton–proton collisions. In 2016, operators will increase the number of particles circulating in the machine and the squeezing of the beams in the collision regions. The LHC will generate up to one billion collisions per second in the experiments.
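The quoted rate can be checked with simple arithmetic, using well-known machine parameters rather than figures from the article: a proton circulates the 26.7 km ring in about 89 μs, a revolution frequency of roughly 11.2 kHz, so

\[ 2244 \times 1.12\times10^{4}\ \mathrm{Hz} \;\approx\; 2.5\times10^{7}\ \text{bunch crossings per second}, \]

and a few tens of proton–proton interactions per crossing then give of order \(10^9\) collisions per second.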

The physics run with protons will last six months. The machine will then be set up for a four-week run colliding protons with lead ions.
