Gaseous photomultipliers are gas-filled devices capable of detecting single photons (in the visible and UV spectrum) with high position resolution. They are used in various research settings, in particular high-energy physics, and are among several types of contemporary single-photon detectors. This book provides a detailed comparison between photosensitive detectors based on different technologies, highlighting their respective advantages and disadvantages for diverse applications.
After describing the main principles underlying the conversion of photons to photoelectrons and the electron avalanche multiplication effect, the characteristics (and requirements) of position-sensitive gaseous photomultipliers are discussed. A long section of the book is then dedicated to describing and analysing the development of these detectors, which evolved from photomultipliers filled with photosensitive vapours to devices using liquid and then solid photocathodes. UV-sensitive photodetectors based on caesium iodide and caesium telluride, which are mainly used as Cherenkov-ring imaging detectors and are currently employed in the ALICE and COMPASS experiments at CERN, are presented in a dedicated chapter. The latest generation of gaseous photomultipliers, sensitive up to the visible region, is also discussed, as are alternative position-sensitive detectors.
The authors then focus on the Cherenkov light effect, its discovery and the way it has been used to identify particles. The introduction of ring imaging Cherenkov (RICH) detectors was a breakthrough and led to the application of these devices in various experiments, including the Cosmic AntiParticle Ring Imaging Cherenkov Experiment (CAPRICE) and the former CERN experiment Charge Parity violation at Low Energy Antiproton Ring (CP LEAR).
The latest generation of RICH detectors and applications of gaseous photomultipliers beyond RICH detectors are also discussed, completing the overview of the subject.
By S Tackmann, K Kampmann and H Skovby (eds)
Forlaget Historika/Gad Publishers
This book, which includes a contribution by CERN Director-General Fabiola Gianotti, presents 17 radical and game-changing ideas to help reach the 2030 Global Goals for Sustainable Development identified by the United Nations General Assembly.
Renowned and influential leaders propose innovative solutions for 17 “big bets” that the human race must face in the coming years. These experts in the environment, finance, food security, education and other relevant disciplines share their vision of the future and suggest new paths towards sustainability.
In the book, Gianotti replies to this call and shares her ideas about the importance of basic science and research in science, technology, engineering and maths (STEM) to underpin innovation, sustainable development and the improvement of global living conditions. After giving examples of breakthrough innovations in technology and medicine that came about from the pursuit of knowledge for its own sake, Gianotti contends that we need science and scientifically aware citizens to be able to tackle pressing issues, including drastic reduction of poverty and hunger, and the provision of clean and affordable energy. Finally, she proposes a plan to secure STEM education and funding for basic scientific research.
Published as part of the broader Big Bet Initiative to engage stakeholders around new and innovative ideas for global development, this book provides fresh points of view and credible solutions. It would appeal to readers who are interested in innovation and sustainability, as well as in the role of science in such a framework.
This book aims to deliver a concise, practical and intuitive introduction to probability and statistics for undergraduate and graduate students of physics and other natural sciences. The author attempts to provide a textbook in which mathematical complexity is reduced to a minimum, yet without sacrificing precision and clarity. To increase the appeal of the book for students, classic dice-throwing and coin-tossing examples are replaced or accompanied by real physics problems, all of which come with full solutions.
In the first part (chapters 1–6), the basics of probability and distributions are discussed. A second block of chapters is dedicated to statistics, specifically the determination of distribution parameters based on samples. More advanced topics follow, including Markov processes, the Monte Carlo method, stochastic population modelling, entropy and information.
The author also chooses to cover some subjects that, according to him, are disappearing from modern statistics courses. These include extreme-value distributions, the maximum-likelihood method and linear regressions using singular-value decomposition. A set of appendices concludes the volume.
An introduction to the novel and developing field of quantum information, this book aims to provide undergraduate and beginning graduate students with all of the basic concepts needed to understand more advanced books and current research publications in the field. No background in quantum physics is required because its essential principles are provided in the first part of the book.
After an introduction to the methods and notation of quantum mechanics, the authors explain a typical two-state system and how it is used to describe quantum information. The broader theoretical framework is also set out, starting with the rules of quantum mechanics and the language of algebra.
The book proceeds by showing how quantum properties are exploited to develop algorithms that prove more efficient in solving specific problems than their classical counterparts. Quantum computation, information content in qubits, cryptographic applications of quantum-information processing and quantum-error correction are some of the key topics covered in this book.
In addition to the many examples developed in the text, exercises are provided at the end of each chapter. References to more advanced material are also included.
This book collates information from a wide range of literature to provide students with a unified guide to contemporary developments in atomic physics. In just 400 pages it largely succeeds in achieving this aim.
The author is a professor of physics at the Indian Institute of Science in Bangalore. His research focuses on laser cooling and trapping of atoms, quantum optics, optical tweezers, quantum computation in ion traps, and tests of time-reversal symmetry using laser-cooled atoms. He received a PhD from the Massachusetts Institute of Technology under the supervision of David Pritchard, a leader in modern atomic physics and a mentor of two researchers – Eric Cornell and Wolfgang Ketterle – who went on to become Nobel laureates.
The book addresses the basis of atomic physics and state-of-the-art topics. It explains material clearly, although the arrangement of information is quite different to classical atomic-physics textbooks. This is clearly motivated by the importance of certain topics in modern quantum-optics theory and experiments. The physics content is often accompanied by the history behind concepts and by explanations of why things are named the way they are. Historical notes and personal anecdotes give the book a very appealing flair.
Chapter one covers different measurement systems and their merits, followed by universal units and fundamental constants, with a detailed explanation of which constants are truly fundamental. The next chapter is devoted to preliminary materials, starting with the harmonic oscillator and moving to concepts – namely coherent and squeezed states – that are important in quantum optics but not explicitly covered in some other books in the field. The chapter ends with a section on radiation, even including a description of the Casimir effect.
Chapter three is called Atoms. Alongside classical content such as energy levels of one-electron atoms, interactions with magnetic and electric fields, and atoms in oscillating fields, this chapter explains dressed atoms and also, unfortunately only briefly, includes a description of the permanent atomic electric dipole moment (EDM).
The following chapter is devoted to nuclear effects, the isotope shift and hyperfine structure. At this point it would have been nice to see some mention of the flourishing field of laser spectroscopy of radioactive nuclei, which exploits the two effects mentioned above to investigate the ground-state properties of nuclei far from the valley of stability.
Chapter five is about resonance, a topic that is often scattered across other books on atomic physics. Here, interestingly, nuclear magnetic resonance (NMR) plays a central role, and the chapter connects this topic very naturally to atomic physics. The chapter closes with a description of the density-matrix formalism. After this comes a chapter devoted to interactions, including the electric dipole approximation, selection rules, transition rates and spontaneous emission. The last section is concerned with the different saturation intensities of broadband and monochromatic radiation.
Multiphoton interactions are the topic of chapter seven, which is clearly motivated by their importance in modern quantum-optics laboratories. Two-photon absorption and de-excitation, Raman processes and the dressed-atom description are all explained. Another crucial concept in modern quantum optics is coherence, so it receives a full chapter of its own, covering coherence in a single atom and in ensembles of atoms, as well as coherent control in multilevel atoms. Spin echo appears as well, showing again how close the topics presented in the book are to NMR.
Chapter nine is devoted to lineshapes, which is clearly a subject relevant for modern atomic spectroscopists. Spectroscopy is the next chapter, which starts with alkali atoms – used extensively in laser cooling and Bose–Einstein condensates. The rest of the material is aimed at experimentalists. Uniquely for such a book, it includes a description of the key experimental tools, followed by Doppler-free techniques and nonlinear magneto-optic rotation.
The last chapter covers cooling and trapping, drawing on the many relevant concepts already presented in the preceding chapters. The content includes different cooling approaches, the principles of atom and ion traps, the cryptically named but now commonplace Zeeman slower, and the even more intriguing optical tweezers.
Each chapter ends with a problems section, in which the problems are often relevant to a real quantum-optics lab, for example concerning quantum defects, RF-induced magnetic transitions, Raman scattering cross-sections, quantum beats or the Voigt line profile. The problems are worked out in detail, allowing readers to follow how to arrive at the solution.
The appendices cover the standards and the frequency comb, which is one of the ingenious devices to come from the laboratory of Nobel laureate Theodor Hänsch and which can now be found in an ever-growing number of laser-spectroscopy and quantum-optics labs. Two other appendices are very different: they have a philosophical flair and deal with the nature of the photon and with Einstein as nature’s detective.
The theoretical basis presented leads up to state-of-the-art experiments, especially those related to ion and atom cooling and to Bose–Einstein condensates. The selection of topics is thus clearly tailored for experimentalists working in a quantum-optics lab. One small criticism is that it would be good to read more about the EDM experiments and laser spectroscopy of radioactive ions, which are currently two very active fields. Readers interested in other classic subjects, such as atomic collisions, should turn to other books such as Bransden and Joachain’s Physics of Atoms and Molecules.
The level of the book makes it suitable for undergraduates, but also for new graduate students. It can also serve as a quick reference for researchers, especially on topics of general interest: metrology, what a photon is, how a frequency comb works and how to achieve a Bose–Einstein condensate. Overall, the book is a very good guide to the topics relevant in modern atomic physics, and its style makes it quite unique and personal.
The Baryon Antibaryon Symmetry Experiment (BASE) collaboration at CERN has made the most precise direct measurement of the magnetic moment of the antiproton, allowing a fundamental comparison between matter and antimatter.
The BASE measurement shows that the magnetic g-factors (which relate the magnetic moment of a particle to the nuclear magneton) of the proton and antiproton are identical within the experimental uncertainty of 0.8 parts per million: 2.7928465(23) for the antiproton, compared to 2.792847350(9) for the proton. The result improves the precision of the previous best measurement by the ATRAP collaboration in 2013, also at CERN, by a factor of six.
Comparisons of the magnetic moments of the proton and antiproton at this level of precision provide a powerful test of CPT invariance. Were even slight differences to be found, it would point to physics beyond the Standard Model. It could imply, for example, the existence of a new vector boson that couples only to antimatter, which could have a direct effect on the lifetime of baryons. Such effects more generally could also shed light on the mystery of the missing antimatter observed on cosmological scales.
BASE uses antiprotons from CERN’s Antiproton Decelerator (AD), which serves several other experiments making rapid progress in precision antimatter measurements (CERN Courier December 2016 p16). By trapping the particles in electromagnetic containers called Penning traps and cooling them to temperatures below 1 K, the BASE team can measure the cyclotron and Larmor frequencies of single trapped antiprotons. By measuring the ratio of these two frequencies the magnetic moment of the antiproton is obtained in units of the nuclear magneton.
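As a rough illustration of the comparison quoted above (a minimal sketch, not BASE’s analysis code), the frequency ratio directly yields the magnetic moment in nuclear magnetons, and the quoted proton and antiproton values can be checked for consistency within their combined uncertainty:

```python
# Minimal sketch (not BASE's analysis code): in a Penning trap the magnetic
# moment in nuclear magnetons follows directly from the measured frequency
# ratio, mu / mu_N = nu_Larmor / nu_cyclotron.

# Quoted values in units of the nuclear magneton, with their uncertainties:
mu_pbar, err_pbar = 2.7928465, 2.3e-6      # antiproton (BASE)
mu_p,    err_p    = 2.792847350, 9e-9      # proton

diff  = abs(mu_pbar - mu_p)
sigma = (err_pbar**2 + err_p**2) ** 0.5
print(f"relative difference: {diff / mu_p:.1e}")            # ~3e-7, below the 0.8 ppm precision
print(f"difference in units of sigma: {diff / sigma:.1f}")  # consistent with zero
```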
Similar techniques have been successfully applied in the past to electrons and positrons. However, antiprotons present a much bigger challenge because their magnetic moments are considerably weaker, requiring BASE to design Penning traps with about 2000 times higher sensitivity to magnetic moments. BASE now plans to measure the antiproton magnetic moment using a new double-Penning-trap technique, which should enable a precision at the level of a few parts per billion in the future.
Late in the evening of 12 January, a beam of electrons circulated for the first time in the SESAME light source in Jordan. Following the first single turn, the next steps will be to achieve multi-turns, store and then accelerate a beam. This is an important milestone towards producing intense beams of synchrotron light at the pioneering facility, which is the first light-source laboratory in the Middle East.
SESAME, which stands for Synchrotron-light for Experimental Science and Applications in the Middle East, will eventually operate several beamlines at different wavelengths for wide-ranging studies of the properties of matter. Experiments there will enable SESAME users to undertake research in fields ranging from medicine and biology, through materials science, physics and chemistry to healthcare, the environment, agriculture and archaeology.
CERN has a long-standing involvement with SESAME, notably through the European Commission-funded CESSAMag project, coordinated by CERN. This project provided the magnet system for SESAME’s 42 m-diameter main ring and brought CERN’s expertise in accelerator technology to the facility in addition to training, knowledge and technology transfer.
The January milestone follows a series of key events, beginning with the establishment of a Middle East Scientific Collaboration group in the mid-1990s. This was followed by the donation of the BESSY1 accelerator by the BESSY laboratory in Berlin. Refurbished and upgraded components of BESSY1 now serve as the injector for the completely new SESAME main ring, a competitive third-generation light source built by SESAME with support from its members, from Italy and, through CESSAMag, from the European Commission and CERN.
There is still a lot of work to be done before experiments can get underway. Beams have to be accelerated to SESAME’s operating energy of 2.5 GeV. Then the synchrotron light emitted as the beams circulate has to be channelled along SESAME’s two initial beamlines and optimised for the experiments that will take place there. This process is likely to take around six months, leading to first experiments in the summer of 2017.
Following a record year of proton–proton operations at the LHC in 2016, which was followed by a successful proton–lead run, on 5 December 2016 the machine entered a longer than usual winter shutdown. Since then, hundreds of people from CERN’s technical teams have been working to repair and upgrade equipment across the whole accelerator chain and also the LHC experiments themselves. The extended year-end technical stop (EYETS), which will be complete by the end of April, has enabled CERN and its users to perform important interventions including the upgrade of the CMS pixel detector.
As the EYETS officially got under way in early December, 10 days were dedicated to powering tests for the LHC magnets to investigate the feasibility of operating at its design energy of 7 TeV per beam. Although this is only 0.5 TeV higher than the energy of the LHC during Run 2, which began in 2015, the LHC’s 1232 dipole magnets must be trained at higher currents to allow the higher-energy beams to circulate. Powering tests were conducted in two of the LHC’s eight sectors, during which the current was gradually increased: in sector 4–5, for example, the current reached 11.535 kA, corresponding to a beam energy of 6.82 TeV. The considerable amount of data collected during the powering tests will now be analysed to define the best strategy for reaching the LHC’s full design energy.
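As a rough cross-check of the quoted figures (an illustrative estimate only, assuming the beam energy scales linearly with the dipole current and taking the approximate design current of about 11,850 A at 7 TeV as the reference point, a value not quoted in this report):

```python
# Back-of-the-envelope check: beam energy assumed proportional to dipole current.
# The ~11850 A reference is the approximate design current of the main dipoles
# at 7 TeV; real ramps include small non-linear corrections.
I_NOMINAL_A = 11850.0
E_NOMINAL_TEV = 7.0

def beam_energy_tev(current_a):
    return E_NOMINAL_TEV * current_a / I_NOMINAL_A

print(f"{beam_energy_tev(11535.0):.2f} TeV")  # about 6.81 TeV, close to the quoted 6.82 TeV
```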
The EYETS is now in full swing, with several activities taking place: maintenance of cryogenic, ventilation, vacuum, electrical and other systems; upgrades to the accelerators and injectors for the High-Luminosity LHC (HL-LHC) and LHC Injector Upgrade (LIU) projects; consolidation works; and other activities such as the replacement of two lifts that have been in use since the early days of LEP.
The entire LHC has been emptied of liquid helium, which normally keeps the superconducting dipoles at a temperature of 1.9 K, and the bulk of the machine is being held at a temperature of 20 K during the shutdown. This is to avoid wasting any of the precious gas due to unexpected electrical failures during EYETS activities, and also to allow important maintenance works to be carried out on the cryogenic system. Since it takes several weeks to refill, pump and “boil off” the cryogenics before the LHC can restart operations, the already busy EYETS schedule is extremely tight. Cryo-filling of the first sector is foreseen between the end of February and the beginning of March, with the final cool-down expected in early April.
Another major activity is the replacement of a dipole magnet in sector 1–2, which lies between ATLAS and ALICE. This meant that the sector had to be warmed up to ambient conditions, allowing several tests of its electrical quality and liquid-helium insulation at ambient temperature, which revealed no major issues. One of the major risks of warming up a sector is the deformation of the expansion bellows – the thin corrugated structures that compensate for the contraction and expansion of the quench recovery line for the helium distribution system as the machine is cooled and warmed – but X-ray scans performed on all 250 bellows in this sector show no such problems. In addition, the “ball test”, during which a ping-pong ball is fired along the LHC beam-pipe, has been carried out and no faults were found in the sector interconnects.
Regarding the injectors, the chain of accelerators that prepares and delivers protons to the LHC, the main EYETS activities concern the Proton Synchrotron Booster (PSB) and the Super Proton Synchrotron (SPS). Critical activities at the PSB include a major de-cabling and cabling campaign, which involves removing all obsolete cables identified during the previous technical stop to make way for the LIU project. Many works are also being carried out on the surface of the PSB to install all the required LIU components.
The SPS is also undergoing a de-cabling and cabling campaign. Other key activities here concern the installation of the cryogenic modules and related infrastructure for the HL-LHC’s superconducting crab cavities (see “On the trail of the HL-LHC magnets”), in addition to civil engineering works to prepare for the replacement of the SPS internal beam dump. The poor functioning of this dump last year limited the number of proton bunches that could be injected from the SPS to the LHC, and the new beam dump will be installed during Long Shutdown 2, beginning at the end of 2018.
Despite the extensive works taking place and many technical challenges faced, the EYETS schedule is on track with no major disruptions. Once complete, the LHC will be prepared for its 2017 run, for which commissioning will begin in May.
Fig. 1. Mass spectrum of beauty baryons decaying into a proton and three pions (red).
The LHCb experiment has uncovered tantalising evidence that baryons made of matter behave differently to those made of antimatter, violating fundamental charge-parity (CP) symmetry. Although CP-violating processes have been studied for more than 50 years, dating back to the Nobel-prize winning experiment of James Cronin and Val Fitch in 1964, CP violation has only been observed in mesons – that is, hadrons made of a quark and an antiquark. Until now, no significant effects had been seen in baryons, which are three-quark states, despite predictions from the Standard Model (SM) that CP violation also exists in the baryon sector.
Searching for new sources of CP violation, which is one of the main goals of LHCb, could help account for the overwhelming excess of matter over antimatter observed on cosmological scales. Since this excess is too large to be explained by CP violation as described in the SM, other sources must contribute.
The new LHCb result is based on an analysis of data collected during Run 1 of the LHC, from which the collaboration isolated a sample of Λb0 baryons (comprising a beauty quark, an up quark and a down quark) decaying into a proton plus three charged pions. The analysis also selected events in which the corresponding antibaryon decays into an antiproton and three pions. Both of these processes are extremely rare and had never previously been observed. The high production cross-section of beauty baryons at the LHC and the specialised capabilities of the LHCb detector allowed a pure sample of around 6000 such decays to be isolated (figure 1).
The LHCb data revealed significant non-zero asymmetries in certain bins (figure 2), and the general pattern of asymmetries across all bins was found to be inconsistent with what would be expected in the CP-conserving case, with a statistical significance of 3.3σ.
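For readers curious how binned asymmetries and an overall significance can be evaluated, the sketch below shows a generic counting-based approach; the bin yields are invented placeholders, not LHCb data, and the actual LHCb measurement relies on more sophisticated observables.

```python
# Illustrative sketch: binned asymmetries between particle and antiparticle decays,
# with a chi-square test of the "no asymmetry anywhere" hypothesis.
import numpy as np
from scipy import stats

n_particle     = np.array([420, 380, 300, 510, 290, 350])  # hypothetical yields per bin
n_antiparticle = np.array([400, 430, 250, 505, 340, 345])

a_cp = (n_particle - n_antiparticle) / (n_particle + n_antiparticle)
err  = np.sqrt((1 - a_cp**2) / (n_particle + n_antiparticle))  # counting uncertainty

chi2 = np.sum((a_cp / err) ** 2)            # compare each bin with zero asymmetry
p_value = stats.chi2.sf(chi2, df=len(a_cp))
significance = stats.norm.isf(p_value)       # one-sided Gaussian equivalent
print(f"chi2/ndf = {chi2:.1f}/{len(a_cp)}, p = {p_value:.2g}, {significance:.1f} sigma")
```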
The results, published in Nature Physics, will soon be updated with the larger data set collected so far in Run 2. If this signal of CP violation is reproduced and seen with greater significance in the larger sample, the result will be an important milestone in the study of CP violation.
The large mass of the top quark means that the top-quark sector has great potential for gaining a deeper understanding of the Standard Model (SM) and for revealing new physics beyond it. With the large statistics available at the LHC, very precise measurements of the top-quark properties are possible. Two recent analyses performed by ATLAS based on proton–proton collisions recorded at an energy of 8 TeV have allowed the collaboration to probe the angular distributions of the top quark and its decay products in unprecedented detail.
The first analysis concerns the polarisation of W bosons produced in the decays of top-quark–antiquark pairs, which is determined by measuring the angle between the decay products of the W and the b-quark from the top decay. Both leptonic and hadronic W decays were identified, and the fractions of longitudinal, left-handed and right-handed polarisation states were fitted from the angular distributions. The results from ATLAS are the most precise to date and are in good agreement with the SM predictions. This measurement is also used to probe the structure of the Wtb vertex, which could be modified by contributions from new-physics processes and thus allows new constraints to be placed on anomalous tensor and vector couplings.
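For reference, a parametrisation commonly used in the literature for such fits, with θ* the angle between the W decay product and the b-quark measured in the W rest frame, is

\[ \frac{1}{\Gamma}\frac{\mathrm{d}\Gamma}{\mathrm{d}\cos\theta^{*}} = \frac{3}{4}\left(1-\cos^{2}\theta^{*}\right)F_{0} + \frac{3}{8}\left(1-\cos\theta^{*}\right)^{2}F_{\mathrm{L}} + \frac{3}{8}\left(1+\cos\theta^{*}\right)^{2}F_{\mathrm{R}}, \]

where the helicity fractions satisfy F0 + FL + FR = 1 and, in the SM, the longitudinal fraction F0 is expected to be roughly 0.7, FL roughly 0.3 and FR close to zero.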
The goal of the second analysis was to fully characterise the spin-density matrix of top-quark–antiquark pair production. This required the measurement of 15 independent variables, 10 of which had never previously been measured. Specifically, ATLAS measured the polarisation of the top quark and the spin correlation between the top and anti-top along three different spin-quantisation axes: the helicity axis, the axis orthogonal to the production plane defined by the top-quark direction and the beam axis, and a third axis orthogonal to the former two. Using this scheme, the collaboration was able to measure new “cross-correlation” observables for the first time, based on the angular distributions of the leptons from the top-quark decays. The distributions were corrected back to generator level to allow the results to be interpreted in terms of new-physics models, and so far all results are in agreement with the SM expectations.
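For each pair of quantisation axes a and b, the polarisations and spin correlations can be extracted from the normalised double-differential distribution of the charged-lepton angles; a common parametrisation (a generic form, not necessarily the exact one used by ATLAS) is

\[ \frac{1}{\sigma}\frac{\mathrm{d}^{2}\sigma}{\mathrm{d}\cos\theta_{+}^{a}\,\mathrm{d}\cos\theta_{-}^{b}} = \frac{1}{4}\left(1 + B_{+}^{a}\cos\theta_{+}^{a} + B_{-}^{b}\cos\theta_{-}^{b} - C(a,b)\,\cos\theta_{+}^{a}\cos\theta_{-}^{b}\right), \]

where θ+a (θ−b) is the angle of the positively (negatively) charged lepton with respect to axis a (b), the B coefficients measure the top and anti-top polarisations, and C(a,b) the spin correlations.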
These studies of the angular distributions of top-quark decays will benefit from the larger data sample collected at 13 TeV, allowing stronger constraints to be placed on potential new-physics contributions or opening new opportunities to observe deviations from the SM.