
The Black Book of Quantum Chromodynamics: A Primer for the LHC Era

By J Campbell, J Huston and F Krauss
Oxford University Press

Also available at the CERN bookshop

This book provides a comprehensive overview of the physics of the strong interaction, which is necessary to analyse and understand the results of current experiments at particle accelerators. In particular, the authors aim to show how to apply the framework of perturbation theory to the strong interaction, both for the prediction and for the correct interpretation of signals and backgrounds at the Large Hadron Collider (LHC).

The book consists of three parts. In the first, after a brief introduction to the LHC and the present hot topics in particle physics, a general picture of high-energy interactions involving hadrons in the initial state is developed. The relevant terminology and techniques are reviewed and worked out using standard examples.

The second part is dedicated to a more detailed discussion of various aspects of the perturbative treatment of the strong interaction in hadronic reactions. Finally, in the last section, experimental findings are confronted with theoretical predictions.

Primarily addressed to graduate students and young researchers, this book can also serve as a helpful reference for advanced scientists. In fact, it provides the right level of knowledge for theorists to understand data in more depth, and for experimentalists to recognise the advantages and disadvantages of different theoretical descriptions.

The reader is assumed to be familiar with concepts of particle physics such as the calculation of Feynman diagrams at tree level and the evaluation of cross sections through phase-space integration of analytical expressions. However, a short review of these topics is given in the appendices.

In Praise of Simple Physics: The Science and Mathematics behind Everyday Questions

By Paul J Nahin
Princeton

In this book, popular-science writer Paul Nahin presents a collection of everyday situations in which the application of simple physical principles and a bit of mathematics can help us understand how things work. His aim is to bring these scientific disciplines closer to the layperson and, at the same time, to reveal the wonder behind many aspects of reality that are often taken for granted.

The problems presented and explained are very diverse, ranging from how to extract more energy from renewable sources and how best to catch a baseball, to how to measure gravity in one’s garage and why the sky is dark at night. These topics are treated in an informal and entertaining way, but without forgoing the maths. In fact, as the author himself highlights, he is interested in keeping the discussions simple, but not so simple that they are simply wrong. The whole point of the book is to show how physics and some calculus can explain many of the things we commonly encounter.

Engaging and humorous, this text will appeal to non-experts with some background in maths and physics. It is suited to students at any level beyond the last years of high school, as well as to practising scientists, who might discover alternative, clever ways to solve (and explain) everyday physics problems.

Calorimetry: Energy Measurement in Particle Physics (2nd edition)

By Richard Wigmans
Oxford Science Publications

When the first edition of this book appeared in 2000, it established itself as “the bible of calorimetry” – not only because of its exhaustive approach to this subtle area of detection, but also because its author enjoyed worldwide recognition within the field. Wigmans earned that recognition through his ground-breaking work on the quantitative understanding of so-called compensating calorimeters (i.e. how to equalise the response of such detectors to electromagnetic and hadronic interactions) and through the leading role he played in designing and operating large detectors that are still considered to be state of the art.

As with the real Bible, which underwent several revisions, this book has been revised in depth and published in a second edition. The author has updated it to take into account the last 16 years of progress in the field and to strengthen it as a reference for both students and practitioners.

At first glance, one immediately notices that considerable work has been put into improving the quality of the graphics and figures – introducing colours where appropriate – and this new edition is also available as an e-book. But there is much more to this updated version.

Chapters two to six, in which the fundamentals of calorimetry are discussed, follow the same thorough structure as the first edition, but they include new insights and use more recent data for illustration, mostly coming from the LHC experiments. Chapters one (Seventy Years of Calorimetry), seven (Performance of Calorimeter Systems) and 11 (Contributions of Calorimetry to the Advancement of Science) have also been brought up to date. Chapters eight, nine and (to a large extent) 10 are brand new and, in my opinion, represent the real added value of this new edition. In particular, chapter eight (New Calorimeter Techniques) discusses the two most relevant innovations introduced in the field during the past decade: dual-readout calorimetry (DRC) and particle-flow analysis (PFA).

The concept of DRC was developed to circumvent the limitations of compensating hadron calorimeters. Their performance depends crucially on the detection of the abundant contribution of neutrons produced in the hadronic shower development, which in turn requires the use of heavy absorbers and a small sampling fraction – with a consequent loss of resolution for electromagnetic showers – as well as a relatively large signal-integration time and volume. In DRCs, signals from scintillation and Cherenkov processes provide complementary information about the shower development and allow the electromagnetic fraction of hadron showers to be measured event by event, thus eliminating the effects of its fluctuations on calorimeter performance. This concept is discussed in depth and predictions are compared with R&D results on prototypes, providing a convincing experimental demonstration of this novel technique. Although no full-scale calorimeter of this type has been built so far, the results obtained with real detectors, combined with Monte Carlo simulations, have outlined the breakthrough power of this idea, which has the potential to rival the performance of the best compensating calorimeters, with much better energy resolution for electromagnetic showers. It is very stimulating food for thought for whoever is poised to design next-generation calorimeters.
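For orientation, the core of the dual-readout method can be summarised in the standard notation of the literature (a compact sketch of the technique, not a quotation from the book). Because Cherenkov light is produced almost exclusively by the electromagnetic component of a shower, the scintillation signal S and the Cherenkov signal C depend differently on the electromagnetic fraction f_em:

$$S = E\left[f_{\mathrm{em}} + (h/e)_S\,(1 - f_{\mathrm{em}})\right], \qquad C = E\left[f_{\mathrm{em}} + (h/e)_C\,(1 - f_{\mathrm{em}})\right],$$

where E is the shower energy and (h/e)_S and (h/e)_C are the hadronic-to-electromagnetic response ratios of the two channels. Solving the two equations event by event yields both f_em and an energy estimate that is insensitive to fluctuations of the electromagnetic fraction:

$$E = \frac{S - \chi C}{1 - \chi}, \qquad \chi = \frac{1 - (h/e)_S}{1 - (h/e)_C}.$$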

The other important topic discussed in chapter eight, PFA, is a completely different method that is being used to improve calorimeter performance for jets. It is based on the combined use of a precision tracker and a high-granularity calorimeter, which measure the momentum of the charged jet particles and the energy of the neutral ones, respectively. High granularity is mandatory to avoid double counting of the charged particles already measured by the tracker. The topic is treated in great detail, with abundant examples of the application of this technique in real experiments, and its pros and cons are discussed in view of future large-scale detector systems.

As an example, the idea that one can relax the requirements on the calorimeters – since they measure on average only about one third of the energy in a jet, while the remaining two thirds is very well measured by the tracker – is strongly questioned, because the jet-energy resolution would be dominated by the fluctuations in the fraction of the total jet energy that is carried by the charged fragments.
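A minimal toy Monte Carlo makes the size of these fluctuations concrete (this sketch is my own illustration; the jet energy, fragment multiplicity and charged probability are assumed numbers, not values from the book):

```python
# Toy illustration of jet-to-jet fluctuations in the charged-energy
# fraction, the quantity that limits the particle-flow approach.
# All numbers below are assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(1)

E_JET = 100.0    # assumed jet energy in GeV
N_JETS = 100_000
N_FRAG = 15      # assumed number of fragments per jet

# Crude fragmentation model: fragment energies drawn from an exponential
# spectrum and rescaled to the jet energy; each fragment is charged with
# probability ~2/3 (isospin counting for pions), neutral otherwise.
raw = rng.exponential(1.0, size=(N_JETS, N_FRAG))
energies = E_JET * raw / raw.sum(axis=1, keepdims=True)
is_charged = rng.random((N_JETS, N_FRAG)) < 2.0 / 3.0

f_charged = np.where(is_charged, energies, 0.0).sum(axis=1) / E_JET

lo, hi = np.percentile(1.0 - f_charged, [5, 95])
print(f"mean charged fraction: {f_charged.mean():.2f}, rms: {f_charged.std():.2f}")
print(f"90% of jets leave between {lo * E_JET:.0f} and {hi * E_JET:.0f} GeV "
      "of neutral energy for the calorimeter")
```

Even in this crude model, the neutral energy the calorimeter must measure varies by a factor of several from jet to jet, which is why the calorimeter specifications cannot simply be relaxed to match the average.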

Chapter nine (Analysis and Interpretation of Test Beam Data) is a brand-new addition that I find extremely illuminating and will be valuable for more than just newcomers to the field. By going through it, I have retraced the path of some of my mistakes when dealing with calorimeters, which are complex and subtly deceptive detectors, often exhibiting counterintuitive properties.

Finally, chapter 10 (Calorimeters for Measuring Natural Phenomena) is a tribute to the realisation and successful use of calorimetric systems for the study of natural phenomena (neutrinos, cosmic rays) in Antarctica, the Mediterranean Sea and the Argentinian pampa, inside a variety of mountains and deep mines, and in space.

In summary, this second edition of Calorimetry fully meets the ambitious goals of its author: it is a well-written and pleasant book, a reference manual for both beginners and experts, and a source of inspiration for future developments in the field.

The Cosmic Web

By John Richard Gott
Princeton University Press

The observation of the night sky is as old as humankind itself. Cosmology, however, has only achieved the status of “science” in the past century or so. In this book, Gott accompanies the reader through the birth of this new science and our growing understanding of the universe as a whole, starting from the observation by Hubble and others in the 1920s that distant galaxies are receding from us. This was one of the most important discoveries in the history of science, because it shifted the position of humans farther away from the centre of the cosmos and showed that the universe is not eternal, but had a beginning. The philosophical implications were hard to digest, even for Einstein, who invented the cosmological constant so that his equations of general relativity could have a static solution.

Following the first observations of distant galaxies, astronomers began to draw a comprehensive map of the observable universe. They played the same role as the explorers travelling around our planet, except that they could only sit where they were and receive light from distant objects, like the faded photographs of a lost past.

After an introduction to the early days of cosmology, the book becomes more personal, and the reader feels drawn into the excitement of actually doing research. Gott’s account of cosmology is given through the lens of his own research, making the book slightly biased towards the physics of the large-scale structure of the universe, but also more focused and definitely captivating for the reader.

The overarching theme of the book is the quest to understand the shape of the “cosmic web”, which is the distribution of galaxies and voids in a universe that is homogeneous only on very large scales. Tiny fluctuations in the matter density, ultimately quantum in origin, grow via gravity to weave the web.

In graduate school, under the supervision of Jim Gunn, Gott wrote his most cited paper, proposing a mathematical model of the gravitational collapse of small density fluctuations. Here, the readers are given a flavour of the way real research is carried out. The author describes in detail the physics involved in the topic, as well as how the article was born and completed and how it took on a life of its own to become a classic.

The author’s investigation of the large-scale structure intertwines with his passion for topology. He was fascinated by polyhedrons with an infinite number of faces, which were the subject of an award-winning project that he developed in high school and of his first scientific article published in a mathematics journal.

At the time, when astronomical surveys covered only a small portion of the sky, it was unclear how cosmic structures were assembled. American cosmologists thought that galaxies gathered in isolated clusters floating in a low-density universe, like meatballs in a soup. Soviet scientists, on the other hand, maintained that the universe was made up of a connected structure of walls and filaments, where voids appear like the holes in a Swiss cheese.

Does the 3D map of the universe resemble a meatball stew or a Swiss cheese? Neither, Gott says. With his collaborators, he proposed that the cosmic web is topologically like a sponge, where voids and galaxy clusters form two interlocking regions, much like the infinite polyhedrons Gott studied in his youth.

The reader is given clear and mathematically precise descriptions of the methods used to demonstrate the idea, which was later confirmed by deeper and larger astronomical observations (in 3D), and by the analysis of the cosmic microwave background (in 2D). By that time, we had the theory of cosmological inflation to explain a few of the puzzles regarding the origin of the universe. Remarkably, inflation predicts tiny quantum fluctuations in the fabric of space–time, giving rise to a symmetry between higher and lower density perturbations, leading to the observed sponge-like topology.

Therefore, by the end of the 20th century, the pieces of our understanding of the universe were falling into place and, in 1998, the discovery that the universe is accelerating allowed us to start thinking about the ultimate fate of the cosmos. This is the subject of the last chapter, an interesting mix of sound predictions (for the next trillion years) and speculative ideas (in a future so far away that it is hard to think about), ending the book with a question – rather than an exclamation – mark.

This is not only a good popular science book that achieves a balance between mathematical precision and a layperson’s intuition. It is also a text about the day-to-day life of a researcher, describing details of how science is actually done, the excitement of discovery and the disappointment of following a wrong path. It is a book for readers curious about cosmology, for researchers in other fields, and for young scientists, who will be inspired by an elder one to pursue the fascinating exploration of nature.

FCC presents at tunnel congress

The World Tunnel Congress (WTC) brings together leading tunnel and underground-space experts from all around the world. This year, the congress was held in Dubai from 21 to 26 April and was attended by nearly 2000 professionals, with case studies illuminating the latest trends and innovations and discussions about the role of tunnels in supporting future sustainable cities. CERN’s Future Circular Collider (FCC) study – which is exploring the possibility of a 100 km-circumference collider (see CERN thinks bigger) – would require one of the world’s largest ever underground projects, generating great interest from WTC delegates.

The extensive underground tunnel works required for FCC were presented by John Osborne from CERN and by Werner Dallapiazza from ILF Consulting, who have been tasked with performing a cost and schedule study for the civil-engineering aspects of the FCC study.

The FCC could provide a facility able to host machines in several different collider modes, as well as four very large experimental caverns and service caverns at depths of up to approximately 300 m below the surface. The key challenges for civil engineering come from the difficult geology under Lake Geneva, the river Arve crossing and the area where the river Rhone exits the Geneva basin. In addition, solutions for the 9.2 million cubic metres of excavated rock and other environmental issues need to be studied further.

Antimatter research leaps ahead

The 13th Low Energy Antiproton Physics (LEAP) conference was held from 12–16 March at the Sorbonne University International Conference Center in Paris. A large part of the conference focused on experiments at the CERN Antiproton Decelerator (AD), in particular the outstanding results recently obtained by ALPHA and BASE.

One of the main goals of this field is to explain the lack of antimatter observed in the present universe, which demands that physicists look for any difference between matter and antimatter beyond their opposite quantum numbers. Specifically, experiments at the AD make ultra-precise measurements to test charge-parity-time (CPT) invariance and, soon, via the free fall of antihydrogen atoms, the gravitational equivalence principle, looking for any differences between matter and antimatter that would point to new physics.

The March meeting began with talks about antimatter in space. AMS-02 results, based on a sample of 3.49 × 10⁵ antiprotons detected during the past four years onboard the International Space Station, showed that antiprotons, protons and positrons have the same rigidity spectrum in the energy range 60–500 GeV. This is not expected in the case of pure secondary production and could be a hint of dark-matter interactions (CERN Courier December 2016 p31). The development of facilities at the AD, including the new ELENA facility, and at the Facility for Antiproton and Ion Research (FAIR), was also described. FAIR, under construction in Darmstadt, Germany, will increase the antiproton flux by at least a factor of 10 compared to ELENA and allow new physics studies focusing, for example, on the interactions between antimatter and radioactive beams (CERN Courier July/August 2017 p41).

Talks covering experimental results and the theory of antiproton interactions with matter, and the study of the physics of antihydrogen, were complemented with discussions on other types of antimatter systems, such as purely leptonic positronium and muonium. Measurements of these systems offer tests of CPT in a different sector, but their short-lived nature could make experiments here even more challenging than those on antihydrogen.

Stefan Ulmer and Christian Smorra from the AD’s BASE experiment described how they managed to keep antiprotons in a magnetic trap for more than 400 days under an astonishingly low pressure of 5 × 10⁻¹⁹ mbar. No gauge can measure such a value; it is inferred only from the lifetime of the antiprotons and the probability of annihilation with residual gas in the trap. The feat allowed the team to set the best direct limit so far on the lifetime of the antiproton: 21.7 years (indirect observations from astrophysics indicate an antiproton lifetime in the megayear range). The BASE measurement of the proton-to-antiproton charge-to-mass ratio (CERN Courier September 2015 p7) is consistent with CPT invariance and, with a precision of 0.69 × 10⁻¹², it is the most stringent test of CPT with baryons. The BASE comparison of the magnetic moments of the proton and the antiproton at the level of 2 × 10⁻¹⁰ is another impressive achievement, and is also consistent with CPT (CERN Courier March 2017 p7).

Three new results from ALPHA, which has now achieved stable operation in the manipulation of antihydrogen atoms that has allowed spectroscopy to be performed on 15,000 antiatoms, were also presented. Tim Friesen presented the hyperfine spectrum and Takamasa Momose presented the spectroscopy of the 1S–2P transition. Chris Rasmussen presented the 1S–2S lineshape, which gives a resonant frequency consistent with that of hydrogen at a precision of 2 × 10⁻¹², or an energy level of 2 × 10⁻²⁰ GeV, already exceeding the precision on the mass difference between neutral kaons and antikaons. ALPHA’s rapid progress suggests hydrogen-like precision in antihydrogen is achievable, opening unprecedented tests of CPT symmetry (CERN Courier March 2018 p30).

The next edition of the LEAP conference will take place at Berkeley in the US in August 2020. Given the recent pace of research in this relatively new field of fundamental exploration, we can look forward to a wealth of new results between now and then.

The history and future of the PHYSTAT series

Most particle-physics conferences emphasise the results of physics analyses. The PHYSTAT series is different: speakers are told not to bother about the actual results, but are reminded that the main topics of interest are the statistical techniques used, the resulting uncertainty on measurements, and how systematics are incorporated. What makes good statistical practice so important is that particle-physics experiments are expensive in human effort, time and money. It is thus very worthwhile to use reliable statistical techniques to extract the maximum information from data (but no more).

Origins

Late in 1999, I had the idea of a meeting devoted solely to statistical issues, and in particular to confidence intervals and upper limits for parameters of interest. With the help of CERN’s statistics guru Fred James, a meeting was organised at CERN in January 2000 and attracted 180 participants. It was quickly followed by a similar one at Fermilab in the US, and further meetings took place at Durham (2002), SLAC (2003) and Oxford (2005). These workshops dealt with general statistical issues in particle physics, such as: multivariate methods for separating signal from background; comparisons between Bayesian and frequentist approaches; blind analyses; treatment of systematics; p-values or likelihood ratios for hypothesis testing; goodness-of-fit techniques; the “look elsewhere” effect; and how to combine results from different analyses.

Subsequent meetings were devoted to topics in specific areas within high-energy physics. Thus, in 2007 and 2011, CERN hosted two more meetings focusing on issues relevant for data analysis at the Large Hadron Collider (LHC), and particularly on searches for new physics. At the 2011 meeting, a whole day was devoted to unfolding, that is, correcting observed data for detector smearing effects. More recently, two PHYSTAT-ν workshops took place at the Institute for Physics and Mathematics of the Universe in Japan (2016) and at Fermilab (2017). They concentrated on issues that arise in analysing data from neutrino experiments, which are now reaching exciting levels of precision. In between these events, there were two smaller workshops at the Banff International Research Station in Canada, which featured the “Banff Challenges” – in which participants were asked to decide which of many simulated data sets contained a possible signal of new physics.
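As a concrete flavour of the unfolding problem that occupied a full day of the 2011 meeting, the following toy sketch (entirely my own illustration, with assumed numbers) shows why naive correction for detector smearing fails and regularised methods are needed:

```python
# Toy unfolding example: detector smearing is modelled as a response
# matrix R, and naive inversion of R amplifies statistical noise.
# The spectrum, binning and smearing width are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)
n_bins = 20

# Assumed "true" spectrum: a falling exponential.
true = 10_000 * np.exp(-np.linspace(0.0, 3.0, n_bins))

# Response matrix: Gaussian bin-to-bin migration with a width of ~1 bin.
idx = np.arange(n_bins)
R = np.exp(-0.5 * ((idx[:, None] - idx[None, :]) / 1.0) ** 2)
R /= R.sum(axis=0, keepdims=True)  # each true bin migrates somewhere

# Observed data: smeared truth with Poisson fluctuations.
observed = rng.poisson(R @ true).astype(float)

# Naive unfolding by direct matrix inversion: the result oscillates
# wildly, because inversion amplifies the statistical fluctuations.
naive = np.linalg.solve(R, observed)
rms = np.sqrt(np.mean((naive - true) ** 2))
print(f"rms deviation from truth after naive inversion: {rms:.0f} events/bin")
```

Regularised approaches (Tikhonov-type or iterative methods) trade this runaway variance for a small, controlled bias, which is precisely the kind of trade-off debated at PHYSTAT.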

The PHYSTAT workshops have largely avoided having parallel sessions so that participants have the opportunity to hear all of the talks. From the very first meetings, the atmosphere has been enhanced by the presence of statisticians; more than 50 have participated in the various meetings over the years. Most of the workshops start with a statistics lecture at an introductory level to help people with less experience in this field understand the subsequent talks and discussions. The final pair of summary talks are then traditionally given by a statistician and a particle physicist.

A key role

PHYSTAT has played a role in the evolution of the way particle physicists employ statistical methods in their research, and has also had a real influence on specific topics. For instance, at the SLAC meeting in 2003, Jerry Friedman (a SLAC statistician who was previously a particle physicist) spoke about boosted decision trees for separating signal from background; such algorithms are now very commonly used for event selection in particle physics. Another example is unfolding, which was discussed at the 2011 meeting at CERN; the Lausanne statistician Victor Panaretos spoke about theoretical aspects, and subsequently his then student Mikael Kuusela became part of the CMS experiment and has provided much valuable input to analyses involving unfolding. PHYSTAT is also one of the factors that has helped raise the level of respectability with which statistics is regarded by particle physicists. Thus, graduate summer schools (such as those organised by CERN) now have lecture courses on statistics, some conferences include plenary talks on statistical issues, and books on particle-physics methodology have chapters devoted to statistics. With the growth in the size and complexity of data in this field, a thorough grounding in statistics is going to become even more important.
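To give a flavour of the boosted-decision-tree approach Friedman described, here is a minimal, self-contained sketch using scikit-learn on toy data (the feature variables, sample sizes and hyperparameters are all illustrative assumptions, not a recipe from any PHYSTAT talk):

```python
# Minimal sketch of signal/background separation with boosted decision
# trees; toy Gaussian data stands in for real event variables.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 20_000

# Toy "signal" and "background": overlapping Gaussian blobs in a
# three-variable feature space (standing in for masses, angles, momenta).
signal = rng.normal(loc=[1.0, 0.5, 0.0], scale=1.0, size=(n, 3))
background = rng.normal(loc=[0.0, 0.0, 0.0], scale=1.2, size=(n, 3))

X = np.vstack([signal, background])
y = np.concatenate([np.ones(n), np.zeros(n)])  # 1 = signal, 0 = background
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

bdt = GradientBoostingClassifier(n_estimators=200, max_depth=3,
                                 learning_rate=0.1)
bdt.fit(X_train, y_train)

# Each event receives a continuous score; cutting on it defines the
# event selection, and the ROC area quantifies the separation power.
scores = bdt.predict_proba(X_test)[:, 1]
print(f"ROC AUC on the test sample: {roc_auc_score(y_test, scores):.3f}")
```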

Recently, Olaf Behnke of DESY in Hamburg has taken over the organisation and planning of the PHYSTAT programme, and there are already ideas for a monthly lecture series, a further PHYSTAT-ν workshop at CERN in January 2019, a PHYSTAT-LHC meeting in autumn 2019, and possibly a meeting devoted to statistical issues in dark-matter experiments. In all probability, the future of PHYSTAT is bright.

Accelerator aficionados meet in Vancouver

The 9th International Particle Accelerator Conference (IPAC18) was held in Vancouver, Canada, from 29 April to 4 May. Hosted by TRIUMF and jointly sponsored by the IEEE Nuclear and Plasma Sciences Society and the APS Division of Physics of Beams, the event attracted more than 1210 delegates from 31 countries, plus 80 industry-exhibit staff. The scientific programme included 63 invited talks and 62 contributed oral presentations, organised into eight main categories. While it is impossible to summarise the full programme in a short article, below are some highlights from IPAC18 that demonstrate the breadth and vibrancy of the accelerator field at this time.

A foray into the future of accelerators by Stephen Brooks of Brookhaven National Laboratory was a walk on the wild side. The idea of a single-particle collider was presented as a possibility to achieve diffraction-limited TeV beams to bridge the potential “energy desert” between current technology and the next energy regime of interest. Relevant technological and theoretical challenges were discussed, including multiple ideas for overcoming emittance growth from synchrotron radiation, focusing beams (via gravitational lensing!) and obtaining nucleus-level alignment, as was how to reduce the cost of future accelerators.

The rise of X-ray free-electron lasers in the past decade, opening new scientific avenues in areas of broad societal relevance, was a strong theme of the conference. In the session devoted to photon sources and electron accelerators, Michael Spata described the Jefferson Laboratory’s 12 GeV upgrade of CEBAF, which began full-power operation in April after overcoming numerous challenges (including the installation and operation of a new 4 kW helium liquefier, and field-emission limitations in the cryomodules). James Rosenzweig (UCLA) described progress towards an all-optical “fifth-generation” light source. Here, a TW laser pulse would be split into two, with half being used to accelerate high-quality electron bunches as they co-propagate in a tapered undulator, and the other half striking the accelerated electron beam head on so that the back-scattered photons are shifted to much shorter wavelengths. The scheme could lead to a compact, tunable multi-MeV gamma-ray source, and successful demonstrations have already taken place at the RUBICONICS test stand at UCLA.

Concerning novel particle sources and acceleration techniques, plasma-wakefield acceleration featured prominently. CERN’s Marlene Turner described progress at the AWAKE experiment, which aims to use a high-energy proton beam to generate a plasma wake that is then used to accelerate an electron beam. Last year, the AWAKE team demonstrated self-modulation of the proton beam and measured the formation of the plasma wakefield. The team has now installed the equipment to test the acceleration of an injected electron beam, a test that is expected to be completed in 2018. Felicie Albert of Lawrence Livermore National Laboratory also described the use of laser-wakefield technology to generate betatron X-rays, which could enable new measurements at X-ray free-electron lasers.

With IPAC18 coinciding with TRIUMF’s 50th anniversary (CERN Courier May 2018 p31), laboratory director Jonathan Bagger described the evolution of TRIUMF from its founding in 1968 by three local universities to the present-day set-up with 20 member universities, users drawn from 38 countries and an annual budget of CA$100 million. Also in the hadron-accelerator session was a talk by Sergei Nagaitsev of Fermilab about the path to the Long-Baseline Neutrino Facility, which actually comprises three parallel paths: one for the proton beams (PIP-II), one for the detector (DUNE), which will be located in the Homestake mine in South Dakota, 1300 km away, and one for the facilities at Fermilab and Homestake. The three projects will engage more than 175 institutions from around the world, with the aim of investigating leptonic CP violation and the mass hierarchy in the neutrino sector. The International Facility for Antiproton and Ion Research (FAIR) under construction in Germany (CERN Courier July/August 2007 p4) was another focus of this session, with Mei Bai of GSI Darmstadt summarising the significant upgrade of the heavy-ion synchrotron SIS18 that will drive the world’s most intense uranium beams for future FAIR operation.

In the session devoted to beam dynamics and electromagnetic fields, Valery Telnov (Budker Institute) introduced a cautionary note about bremsstrahlung at the interaction points of future electron–positron colliders (such as FCC-ee), which will limit beam lifetimes, whereas in present-generation colliders (such as SuperKEKB) lifetimes are dominated by synchrotron radiation in the arcs. Tessa Charles of the University of Melbourne, meanwhile, introduced the method of “caustics” to understand and optimise longitudinal beam-dynamics problems, such as how to minimise coherent-synchrotron-radiation effects in recirculation arcs.

The proton linac for the European Spallation Source (ESS), under construction in Sweden, was presented by Morten Jensen during the session on accelerator technology. He outlined the variety of radio-frequency (RF) power sources used in the ESS proton linac and the development of the first-ever MW-class inductive output tubes (IOTs) for the linac’s high-beta cavities, which have been tested at CERN and reached record-beating performance of 1.2 MW output for 8.3 kW of input power. Pending the development of a production series, the accelerator community may have a new RF workhorse.

As indicated, these are just a few of the many scientific highlights from IPAC18. Industry was also a major presence. In an industry panel discussion, speakers talked about successful models for technology transfer, while talks such as that from Will Kleeven (IBA) described the Rhodotron, a compact industrial CW electron accelerator producing intense beams with energies from around 1 to 10 MeV, which has key industrial applications including polymer cross-linking, sterilisation, food treatment and container security scanning.

IPAC is committed to welcoming young researchers, offering more than 100 student grants and heavily discounted fees for all students. Almost 1500 posters were presented by authors from 233 institutions over four days. The regional attendance distribution was 24% from Asia, 41% from Europe and 35% from the Americas, demonstrating the truly international nature of our field. The 10th IPAC will take place in Melbourne, Australia, on 19–24 May 2019.

Richard Taylor 1929–2018

Richard E Taylor died at the age of 88 on 22 February at his home on the Stanford campus in the US. Taylor was the co-recipient of the 1990 Nobel Prize in Physics, along with Henry Kendall and Jerome Friedman of MIT, for their discoveries of scaling in deep-inelastic electron–proton scattering. It was these results that led to the experimental demonstration of the existence of quarks.

Taylor was born in Medicine Hat, Alberta, Canada, to Clarence and Delia Taylor. He was interested in a career as a surgeon, but an early explosion while using a chemistry set as a child cost him parts of two fingers and the thumb on his left hand – and thus pushed him towards a career in science. He was an undergraduate at the University of Alberta, receiving a bachelor’s degree and then a master of science in 1952. At Alberta, he married Rita Bonneau in 1951.

Taylor then went to Stanford, working at the Stanford High Energy Physics Laboratory (HEPL). In 1958, he was invited by colleagues at the École Normale Supérieure in Orsay to work on experiments for their new accelerator at the Laboratoire de l’Accélérateur Linéaire. After three years, he returned to the US, spending a year at Lawrence Berkeley National Laboratory. He then returned to Stanford to complete his PhD under Robert Mozley in 1962. Wolfgang Panofsky invited him to join the core group building the Stanford Linear Accelerator Center (SLAC), roughly 1 km west of the main Stanford campus. Taylor was given responsibility for the “Beam Switchyard” at the end of the linear accelerator, which analysed and steered beams to experiments, and for the large “End Station A” and its electron spectrometers. Taylor organised a talented group at SLAC, including David Coward and Herbert (Hobie) DeStaebler, which carried out the design and construction of these major facilities. The three electron spectrometers, with momentum ranges centred around 1.6, 8 and 20 GeV/c, made the critical measurements that established SLAC at the forefront of particle physics.

Taylor led his group at SLAC into a collaboration with Caltech and MIT that foreshadowed the rise of powerful particle-physics collaborations, now at the scale of a few thousand physicists for the major LHC experiments. That collaboration proposed and carried out a series of experiments, beginning with the elastic scattering of electrons off protons at high momentum transfer in 1967. The measurements extended those made by Richard Hofstadter at HEPL, but produced no surprises.

The proposal for deep-inelastic scattering made no mention of point-like particles in the nucleon. The inelastic cross sections beyond the nucleon resonances were unexpectedly large and fell only slowly with increasing momentum transfer, especially when compared to elastic scattering. The data also displayed a simplifying feature called scaling – a prediction by Bjorken from current algebra – suggesting that deep-inelastic cross sections could be expressed as a function of a single kinematic variable. These results were extended by Taylor’s group and MIT into more kinematic regions and to studies of the neutron using a deuterium target.

At the “Rochester Conference” in Vienna in 1968, Panofsky summed up the first public results of the experiments with the comment: “Therefore, theoretical speculations are focused on the possibility that these data might give evidence on the behaviour of point-like, charged structures within the nucleon.” Following a visit to SLAC in August 1968, Richard Feynman introduced his “naïve parton theory”, in which electrons scattering from point-like free partons give both the observed weak momentum-transfer dependence and scaling. Subsequent experiments by Taylor and collaborators allowed the two nucleon structure functions to be separated, determining that the partons were spin-½ particles. Evaluations of sum rules derived by Bjorken and Kurt Gottfried were consistent with the charge assignments of the nascent quark model. Finally, the Gargamelle neutrino and antineutrino results at CERN confirmed the Gell-Mann–Zweig quark model, and these experiments collectively gave rise to the Standard Model of particle physics.
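In modern notation (a standard summary rather than the formulation of the original papers), scaling states that at large momentum transfer the structure functions depend not on the squared four-momentum transfer Q² and the energy transfer ν separately, but only on the dimensionless ratio

$$x = \frac{Q^2}{2M\nu}, \qquad F_2(x, Q^2) \longrightarrow F_2(x),$$

where M is the nucleon mass. The separation of the two structure functions mentioned above tested the Callan–Gross relation, which holds for spin-½ partons:

$$2x\,F_1(x) = F_2(x).$$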

Taylor’s connections to Paris, and later DESY and CERN, continued as a theme through his life. He was awarded a doctorate (Honoris Causa) by the Université de Paris-Sud. After becoming a member of the SLAC faculty in 1968, Taylor won a Guggenheim fellowship and spent a sabbatical year at CERN. He received an Alexander von Humboldt award and spent the 1981–1982 academic year at DESY. Taylor’s group at SLAC was a lively place, with many young European visitors who became staunch colleagues and friends.

In 1978, an experiment at SLAC led by Charles Prescott and Taylor demonstrated parity violation in polarised electron–deuterium scattering – a very challenging experiment that followed negative results from atomic-physics experiments. Parity violation was an essential component of the unification of the electromagnetic and weak interactions, another key pillar of the Standard Model, which led to the Nobel Prize for Sheldon Glashow, Abdus Salam and Steven Weinberg in 1979.

Taylor was also awarded the W K H Panofsky Prize, and was a fellow of the American Physical Society, the American Association for the Advancement of Science, the Royal Society and the Royal Society of Canada. He was also a member of the American Academy of Arts and Sciences and the Canadian Association of Physicists, a foreign associate of the National Academy of Sciences, and a Companion of the Order of Canada.

Taylor stayed rooted to his Canadian origins, often vacationing in Medicine Hat where he maintained a home and enjoyed fly fishing in the local streams. He always saw himself as an experimentalist, saying in a 2008 Nobel-prize interview: “My job was to measure things and to make sure that the measurements were right. It is the job of the theoretical community to understand why things are the way that I see them when I do experiments.”

Taylor was a large man and pretended to enjoy a reputation of being somewhat fierce. His friends and colleagues all knew him as a gentle soul, caring deeply for SLAC and always promoting the younger generations of scientists. He is survived by his wife Rita and son Ted.

Ferdinand Hahn 1959–2018

It was with great sadness that we learned that Ferdi Hahn passed away on 4 March. He was an enthusiastic and highly skilled colleague, and an openhearted friend.

Ferdi first came to CERN in 1987 as a technical student from the University of Wuppertal, when he joined the barrel-RICH project for the DELPHI experiment at LEP. As part of his diploma thesis, he participated in the photon-detector project SYBIL, a TPC-like drift chamber with single-photoelectron detection, which was a prototype of the DELPHI barrel-RICH system.

Through this work, Ferdi became thoroughly acquainted with all the hardware and software aspects of such a test programme, from the innumerable technical matters to the analysis of the data taken. From 1990, as a CERN fellow, he was heavily involved in the commissioning of the drift tubes of the RICH detector, a particularity of the DELPHI experiment, followed by the development of the temperature control of the barrel RICH. When a specific part of the detector was not delivered in time, Ferdi immediately drove 800 km to the company and back again to allow data taking to start on time in 1989. Later, Ferdi completed his PhD with a measurement of the differential cross-sections of charged kaons and protons using the DELPHI detector, taking advantage of its unique RICH system.

In 1995 Ferdi joined the CERN physics department as a member of the DELPHI gas group. As section leader in the support groups for CERN experiments and deputy group leader of the DELPHI detector unit, he perfected the operation of the many complex DELPHI gas systems. He also structured the LHC experiments’ gas working group, which led to a professional and efficient system for all LHC detectors.

After having led the detector technology group of the physics department between 2007 and 2008, Ferdi then took over the technical coordination of the NA62 experiment with considerable commitment and great competence in many experimental aspects. Through the preparation of the Technical Design Report and the coordination of the entire installation of the experiment, his exquisite ability to bring collaborators from all kinds of cultures together was clearly an asset for the success of the project.

Knowing that the NA62 experiment was operating smoothly, Ferdi happily agreed to support the physics department as deputy head in 2015. As part of the management, he was in charge of the coordination of the technical groups in the department, including the planning of personnel. With his pleasant manner, patience and exemplary communication skills, he solved numerous tricky problems.

Ferdi was treasured as a close colleague by many; it was a pleasure to work with him. His open character and smile made it easy to discuss subjects, even when they involved complicated issues. He was enthusiastic and full of energy, always ready to help. His friendly way of dealing with people was backed up by a deep competence in technical issues. He was one of a kind and will be sadly missed.
