Yong Ho Chin, a leading theoretical accelerator physicist at the High Energy Accelerator Research Organization (KEK) in Japan and chair of the beam dynamics panel of the International Committee for Future Accelerators (ICFA) since November 2016, unexpectedly passed away on 8 January.
In 1984, Yong Ho received his PhD in accelerator physics from the University of Tokyo for studies performed at KEK under the supervision of Masatoshi Koshiba, who won the Nobel Prize in Physics jointly with Raymond Davis Jr and Riccardo Giacconi in 2002. Yong Ho participated in the design and commissioning of the TRISTAN accelerator, and later in the designs of the KEKB and J-PARC accelerators, along with major contributions to JLC (the Japan Linear Collider) and ILC (the International Linear Collider). In the 1980s and 1990s he spent several years abroad, at DESY and CERN in Europe, and at LBL (now LBNL) in the US.
In his long and distinguished career, Yong Ho made numerous essential contributions in the fields of beam-coupling impedances, coherent beam instabilities, radio-frequency klystron development, and space–charge and beam–beam collective effects. He considered his “renormalisation theory for the beam–beam interaction”, developed during his last six months at DESY in the 1980s, to be his greatest achievement. However, in the accelerator community, Yong Ho Chin’s name is linked, in particular, to two computer codes he wrote and maintained, and which have been widely used over the past decades.
The first of these codes, developed by Yong Ho in the 1980s, is MOSES (MOde-coupling Single bunch instabilities in an Electron Storage ring), which computes the complex transverse coherent betatron tune shifts as a function of the beam current for a bunch interacting with a resonator impedance. The second well-known code, written by Yong Ho in the 1990s, is the ABCI (Azimuthal Beam Cavity Interaction) code for impedance and wakefield calculations. It is a time-domain solver of the electromagnetic fields generated when a bunched beam with an arbitrary charge distribution traverses an axisymmetric structure, on or off axis.
In the mid-1990s, Yong Ho’s work expanded to two-stream beam instabilities. He rightly foresaw that such instabilities could potentially limit the performance of KEKB and organised and co-organised several international workshops to address this issue early on. Subsequently, he was put in charge of the development and modelling of the X-band klystron for the JLC. He also greatly contributed to the development of the multi-beam klystron now in use for large superconducting linacs, and to the optimisation of the J-PARC accelerators.
Yong Ho returned to the field of collective effects more than 10 years ago and remained extremely active there. Over the past few years, together with two other renowned accelerator physicists, Alexander W Chao and Michael Blaskiewicz, he developed a two-particle model to study the effects of the space–charge force on transverse coherent beam instabilities. The purpose of this model was to capture the essence of the physics of this intricate subject in a simple picture and, at the same time, to provide a good starting point for newcomers joining the effort to solve this long-standing issue.
As illustrated by his role as chair of an ICFA panel, and by his co-organisation of a large number of international workshops and conferences (including PAC and LINAC), Yong Ho was devoted to serving the international physics community. He was a productive author, diligent referee and esteemed editor for several journals. In 2015 he was recognised with an Outstanding Referee Award by the American Physical Society, and just a few months ago, in the summer of 2018, Yong Ho was appointed associate editor of Physical Review Accelerators and Beams.
Yong Ho was a very good lecturer, teaching at different accelerator schools, including the CERN Accelerator School. He was also in charge of a collaboration programme in which young accelerator scientists were invited to spend a few weeks at KEK.
Yong Ho was a wonderful person and an outstanding scientist. We are very proud to have had the chance to work and collaborate with him. His passing is a great loss to the community and he will be sorely missed.
The Soviet Atomic Project: How the Soviet Union Obtained the Atomic Bomb by Lee G Pondrom World Scientific
“Leave them in peace. We can always shoot them later.” Thus spoke Soviet Union leader Josef Stalin, in response to a query by Soviet security and secret police chief Lavrentiy Beria about whether research in quantum mechanics and relativity (considered by Marxists to be incompatible with the principles of dialectical materialism) should be allowed. With these words, a generation of Soviet physical scientists were spared a disaster like the one perpetrated on Soviet agriculture by Trofim Lysenko’s politically correct, pseudoscientific theories of genetics. The reason behind this judgement was the successful development of nuclear weapons by Soviet physical scientists and the recognition by Stalin and Beria of the essential role that these “bourgeois” sciences played in that development.
Political intrigue, the arms race, early developments of nuclear science, espionage and more are all present in this gripping book, which provides a comprehensive account of the intensive programme the Soviets embarked on in 1945, immediately after Hiroshima, to catch up with the US in the area of nuclear weapons. A great deal is known about the Manhattan project, from the key scientists involved, to the many Los Alamos incidents – such as Fermi’s determination of the Alamogordo test-blast energy using scraps of paper and Feynman’s ability to crack his Los Alamos colleagues’ safes – that are intrinsic parts of the US nuclear/particle-physics community’s culture. By contrast, little is known, at least in the West, about the huge effort made by the war-ravaged Soviet Union in less than five years to reach strategic parity with the US.
Pondrom, a prominent experimental particle physicist with a life-long interest in Russia and its language, provides an intriguing narrative. It is based on a thorough study of available literature plus a number of original documents – many of which he translated himself – that gives a fascinating insight into this history-changing enterprise and into the personalities of the exceptional people behind it.
The success of the Soviet programme was primarily due to Igor Kurchatov, a gifted experimental physicist and outstanding scientific administrator, who was equally at ease with laboratory workers, prominent theoretical physicists and the highest leaders in government, including Beria and Stalin himself. Saddled with developing several huge and remotely located laboratories from scratch, he remained closely involved in many important nitty-gritty scientific and engineering problems. For example, Kurchatov participated hands-on and full-time in the difficult commissioning of Reactor A, the first full-scale reactor for plutonium-239 production at the sprawling Combine #817 laboratory, receiving, along the way, a radiation dose that was 100 times the safe limit that he had established for laboratory staff members.
Beria was the overall project controller and ultimate decision-maker. Although best known for his role as Stalin’s ruthless enforcer – Pondrom describes him as “supreme evil”, Sakharov as a “dangerous man” – he was also an extraordinary organiser and a practical manager. When asked in the 1970s, long after Beria’s demise, how best to develop a Soviet equivalent of Silicon Valley, Soviet Academy of Sciences president A P Alexandrov answered “Dig up Beria.” Beria promised project scientists improved living conditions and freedom from persecution if they performed well (and that they would “be sent far away” if they didn’t). His daily access to Stalin was critical for keeping the project on track. Most of the project’s manual construction work used slave labour from Beria’s gulag.
Both the US and Soviet projects were monumental in scope; Pondrom estimates the Manhattan project’s scale to be about 2% of the US economy. The Soviet project was similar in scale, but carried out in an economy one-tenth the size. The Soviets had some advantage from the information gathered by espionage (and the simple fact that they knew the Manhattan project succeeded). Also, German scientists interned in Russia for the project played important support roles, especially in the large-scale purification of reactor-grade natural uranium. In addition, there was a nearly unlimited supply of unpaid labourers, as well as German prisoners of war with scientific and engineering backgrounds whose participation in the project was rewarded by better living conditions.
The book is crisply written and well worth the read. The text includes a number of translated segments of official documents plus extracts from memoirs of some of the people involved. So, although Pondrom sprinkles his opinions throughout, there is sufficient material to permit readers to make their own judgements. He doesn’t shy away from explaining some of the complex technical issues, which he (usually) addresses clearly and concisely. The appendices expand on technical issues, some on an elementary level for non-physicists, and others, including isotope extraction techniques, nuclear reaction issues and encryption, in more detail, much of which was new to me.
On the other hand, the confusing assortment of laboratories, their locations, leaders and primary tasks begged for some kind of summary or graphics. The simple chart describing the Soviets’ complex espionage network in the US was useful for keeping track of the roles of the persons involved; a similar chart for the laboratories and their roles would have been equally valuable. The book would also have benefited from a final edit that might have eliminated some of the repetition and caught some obvious errors. But these are minor faults in an engaging, informative book.
Stephen L Olsen, University of Chinese Academy of Sciences.
Advances in Particle Therapy: A multidisciplinary approach by Manjit Dosanjh and Jacques Bernier (eds) CRC Press, Taylor and Francis Group
A new volume in the CRC Press series on Medical Physics and Biomedical Engineering, this interesting book on particle therapy is structured in 19 chapters, each written by one or more co-authors out of a team of 49 experts (including the two editors). Most are medical physicists, radiation oncologists and radiobiologists who are well renowned in the field.
The opening chapter provides a brief and useful summary of the evolution of modern radiation oncology, starting from the discovery of X rays up to the latest generation of proton and carbon-ion accelerators. The second and third chapters are devoted to the radiobiological aspects of particle therapy. After an introductory part where the concepts of relative biological effectiveness (RBE) and oxygen-enhancement ratio are defined, this section of the book goes on to review the most recent knowledge gained in the field, from DNA structure to the production of radiation-induced damage, to secondary cancer risk. The conclusion is that, as biological effects and clinical response are functions of a broad range of parameters, we are still far from a complete understanding of all radiobiological aspects underlying particle therapy, as well as from a universally accepted RBE model providing the optimum RBE value to be used for any given treatment.
Chapter 4 and, later, chapter 18 are dedicated to particle-therapy technologies. The first provides a simple explanation of the operating principles of particle accelerators and then goes into the details of beam delivery systems and dose conformation devices. Chapter 18 recalls the historical development of particle therapy in Europe, first with the European Light Ion Medical Accelerator (EULIMA) study and Proton-Ion Medical Machine Study (PIMMS), and then with the design and construction of the HIT, CNAO and MedAustron clinical facilities (CERN Courier January/February 2018 p25). It then provides an outlook on ongoing and expected future technological developments in accelerator design.
Chapter 5 discusses the general requirements for setting up a particle therapy centre, while the following chapter provides an extensive review of imaging techniques for both patient positioning and treatment verification. These are made necessary by the rapid spread of active beam delivery technologies (scanning) and robotic patient positioning systems, which have strongly improved dose delivery. Chapter 7 reviews therapeutic indications for particle therapy and explains the necessity to integrate it with all other treatment modalities so that oncologists can decide on the best combination of therapies for each individual patient. Chapter 8 reports on the history of the European Network of Light Ion Hadron Therapy (ENLIGHT) and its role in boosting collaborative efforts in particle therapy and in training specialists.
The central part of the book (chapters 9 to 15) reviews worldwide clinical results and indications for particle therapy from different angles, pointing out the inherent difficulties in comparing conventional radiation therapy and particle therapy. It analyses the two perspectives under which the dosimetric properties of particles can translate into clinical benefit: decreasing the dose to normal tissue to reduce complications, or escalating the dose to the tumour to improve tumour control without increasing the dose to normal tissue.
Chapter 16 discusses the economic aspects of particle therapy, such as cost-effectiveness and budget impact, while the following chapter describes the benefits of a “rapid learning health care” system. The last chapter discusses global challenges in radiation therapy, such as how to bring medical electron linac technology to low- and middle-income countries (CERN Courier March 2017 p31). I found this last chapter slightly confusing as I did not understand what is meant by “radiation rotary” and I could not fully grasp the mixing-up of different topics, such as particle therapy and nuclear-detonation terrorism. This part also seemed too US-focussed when discussing the various initiatives, and I was not in agreement with some of the statements (e.g. that particle therapy has undergone a cost reduction by an order of magnitude or more in the past 10 years).
Overall, this book provides a useful compendium of state-of-the-art particle therapy and each chapter is supported by an extensive bibliography, meeting the expectations of both experts and readers interested in gaining an overview of the field. The volume is well structured, enabling readers to read only selected chapters, in whatever order they prefer. Some knowledge of radiobiology, clinical oncology and accelerator technology is assumed. It is disappointing that clinical dosimetry and treatment planning are not addressed other than in a brief mention in chapter 5, but perhaps this is something to consider for a second edition.
Marco Silari, CERN.
Mad maths Theatre, CERN Globe
24 January 2019
Do you remember your high-school maths teachers? Were they strict? Funny? Extraordinary? Boring? The theatre comedy “Mad maths” presents the two most unusual teachers you can imagine. Armed with chalk and measuring tapes, Mademoiselle X and Mademoiselle Y aim to heal all those with maths phobia, and teach the audience more about their favourite subject.
On 24 January CERN’s fully booked Globe of Science and Innovation turned into a bizarre classroom. Marching along well-defined 90° angles, and meticulously measuring everything around them, the comedians Sophie Leclercq and Garance Legrou play with numbers and fight at the blackboard to make maths entertaining. The dialogues are juiced up with rap and music, spiced by friendly maths jargon, and seasoned with a hint of craziness. As the pair romp through trigonometry, philosophise about the number zero and invent new counting systems with dubious benefits, the rhythm grows exponentially. For example, did you know that some people’s mood goes up and down like a sine function? That you can make music with fractions? And that some bureaucratic steps are noncommutative?
This comedy show originated from an idea by Olivier Faliez and Kevin Lapin from the French theatre company Sous un autre angle. Having first studied maths at university and then attended theatre school, Faliez combined his two passions in 2003 to create an entertaining programme based on maths-driven jokes and unexpected plot twists.
Perfect for families with children, this French play has already been performed more than 500 times, especially at science festivals and schools. The topics are customised depending on the level of the students. Future showings are scheduled in Castanet (15 March), Les Mureaux (22 March) and in several schools in France and other countries. Teachers and event organisers who are interested in the show are advised to contact Sophie Leclercq.
At times foolish, at times witty, it is worth watching if and only if you want to unwind and rediscover maths from a different perspective.
Letizia Diamante, CERN.
The Life, Science and Times of Lev Vasilevich Shubnikov, Pioneer of Soviet Cryogenics by L J Reinders Springer
This book is a biography of Russian physicist Lev Vasilevich Shubnikov, whose work is scarcely known despite its importance and broad reach. It is also a portrayal of the political and ideological environment existing in the Soviet Union in the late 1930s under Stalin’s repressive regime.
While at Leiden University in the Netherlands, which at the time had the most advanced laboratory for low-temperature physics in the world, Shubnikov co-discovered the Shubnikov–de Haas effect: the first observation of quantum-mechanical oscillations of a physical quantity (in this case the resistance of bismuth) at low temperatures and high magnetic fields.
In 1930 Shubnikov went to Kharkov (as it is called in Russian) in the Ukraine, where he built up the first low-temperature laboratory in the Soviet Union. There he led an impressive scientific programme and, together with his team, he discovered what is now known as type-II superconductivity (or the Shubnikov phase) and nuclear paramagnetism. In addition, independently of and almost simultaneously with Meissner and Ochsenfeld, they observed the complete diamagnetism of superconductors (today known as the Meissner effect).
In 1937, aged just 36, Shubnikov was arrested by Stalin’s regime and executed “for no other reason than that he had shown evidence of independent thought”, as the author states.
Based on thorough document research and a collection of memories from people who knew Shubnikov, this book will appeal not only to those curious about this physicist, but also to readers interested in the history of Soviet science, especially the development of Soviet physics in the 1930s and the impact that Stalin’s regime had on it.
Virginia Greco, CERN.
The Workshop and the World: What Ten Thinkers Can Teach Us About Science and Authority by Robert P Crease W. W. Norton & Company
In this book, science historian Robert Crease discusses the concept of scientific authority, how it has changed over the centuries, and how society and politicians interact with scientists and the scientific process – which he refers to as the “workshop”.
Crease begins with an introduction about current anti-science rhetoric and science denial – the most evident manifestation of which is probably the claim that “global warming is a hoax perpetrated by scientists with hidden agendas”.
Four sections follow. In part one, the author introduces the first articulation of scientific authority through the stories of three renowned scientists and philosophers: Francis Bacon, Galileo Galilei and René Descartes. Here, some vulnerabilities of the authority of the scientific workshop emerge, but they are discussed further in the second section of the book through the stories of thinkers like Giambattista Vico, Mary Shelley and Auguste Comte.
Part three attempts to understand the deeply complicated relationship between the workshop and the world, described through the stories of Max Weber, Kemal Atatürk and his precursors, and Edmund Husserl. The final section is all about reinventing authority and is discussed through the work of Hannah Arendt, a thinker who barely escaped the Holocaust and who provided a deep analysis of authority as well as providing clues as to how to restore it.
With this brilliantly written essay, Crease aims to explore what practising science for the common good means and to understand what creates a social and political atmosphere in which science denial can flourish. Finally, Crease tries to suggest what can be done to ensure that science and scientists regain the trust of the people.
Ideas from supersymmetry have been used to address a longstanding challenge in optics – how to suppress unwanted spatial modes that limit the beam quality of high-power lasers. Mercedeh Khajavikhan at the University of Central Florida in the US and colleagues have created the first supersymmetric laser array, paving the way towards new schemes for scaling up the radiance of integrated semiconductor lasers.
Supersymmetry (SUSY) is a possible additional symmetry of space–time that would enable bosonic and fermionic degrees of freedom to be “rotated” between one another. Devised in the 1970s in the context of particle physics, it suggests the existence of a mirror-world of supersymmetric particles and promises a unified description of all fundamental interactions. “Even though the full ramification of SUSY in high-energy physics is still a matter of debate that awaits experimental validation, supersymmetric techniques have already found their way into low-energy physics, condensed matter, statistical mechanics, nonlinear dynamics and soliton theory as well as in stochastic processes and BCS-type theories, to mention a few,” write Khajavikhan and collaborators in Science.
The team applied the SUSY formalism first proposed by Ed Witten of the Institute for Advanced Study in Princeton to force a semiconductor laser array to operate exclusively in its fundamental transverse mode. In contrast to previous schemes developed to achieve this, such as common antenna-feedback methods, SUSY introduces a global and systematic method that applies to any type of integrated laser array, explains Khajavikhan. “Now that the proof of concept has been demonstrated, we are poised to develop high-power electrically pumped laser arrays based on a SUSY design. This can be applicable to various wavelengths, ranging from visible to mid-infrared lasers.”
To demonstrate the concept, the Florida-based team paired the unwanted modes of the main laser array (comprising five coupled ridge-waveguide cavities etched from quantum wells on an InP wafer) with a lossy superpartner (an array of four waveguides left unpumped). Optical strategies were used to build a superpartner index profile with propagation constants matching those of the four higher-order modes associated with the main array, and the performance of the SUSY laser was assessed using a custom-made optical setup. The results indicated that the existence of an unbroken SUSY phase (in conjunction with a judicious pumping of the laser array) can promote the in-phase fundamental mode and produce high-radiance emission.
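The underlying mathematics can be sketched in a few lines. The toy Python script below (an illustration under assumed parameters, not the authors' design code) applies a standard discrete-SUSY construction to a five-waveguide coupled-mode matrix with uniform coupling: factorising the matrix, shifted by the eigenvalue of the fundamental in-phase supermode, yields a four-waveguide superpartner whose modes coincide with the four higher-order modes of the main array, while the fundamental mode has no counterpart.

```python
# Minimal numerical sketch (not the authors' design code) of a discrete SUSY
# partner construction for a laser array. The five-waveguide coupled-mode
# matrix, the uniform coupling strength and the Cholesky-based factorisation
# are illustrative assumptions.
import numpy as np

n, kappa = 5, 1.0
# Tridiagonal coupled-mode matrix of the main array (nearest-neighbour coupling)
H1 = kappa * (np.eye(n, k=1) + np.eye(n, k=-1))

modes = np.linalg.eigvalsh(H1)        # eigenvalues in ascending order
E0 = modes[-1]                        # fundamental in-phase supermode (largest eigenvalue here)

# Factorise E0*I - H1 (positive semi-definite); a tiny shift keeps the
# Cholesky factorisation numerically well defined.
eps = 1e-9
L = np.linalg.cholesky((E0 + eps) * np.eye(n) - H1)

# Superpartner matrix: (E0+eps)*I - L^T L. Its last site decouples
# (residual couplings ~ sqrt(eps)), so the physical partner is the
# remaining (n-1) x (n-1) block: a four-waveguide array.
H2 = ((E0 + eps) * np.eye(n) - L.T @ L)[:-1, :-1]

print("main-array modes   :", np.round(modes, 4))
print("superpartner modes :", np.round(np.linalg.eigvalsh(H2), 4))
# The partner shares the four higher-order modes but lacks the fundamental
# one, which is why a lossy partner can damp those modes selectively.
```

In the experiment, the unpumped four-waveguide section plays the role of this lossy partner, draining the higher-order supermodes while leaving the in-phase fundamental mode free to lase.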
“This is a remarkable example of how a fundamental idea such as SUSY may have a practical application, here increasing the power of lasers,” says SUSY pioneer John Ellis of King’s College London. “The discovery of fundamental SUSY still eludes us, but SUSY engineering has now arrived.”
Newly published results from the MINOS+ experiment at Fermilab in the US cast fresh doubts on the existence of the sterile neutrino – a hypothetical fourth neutrino flavour that would constitute physics beyond the Standard Model. MINOS+ studies how muon neutrinos oscillate into other neutrino flavours as a function of distance travelled, using magnetised-iron detectors located 1 and 735 km downstream from a neutrino beam produced at Fermilab.
Neutrino oscillations, predicted more than 60 years ago, and finally confirmed in 1998, explain the observed transmutation of neutrinos from one flavour to another as they travel. Tantalising hints of new-physics effects in short-baseline accelerator-neutrino experiments have persisted since 1995, when the Liquid Scintillator Neutrino Detector (LSND) at Los Alamos National Laboratory reported an excess of 88 ± 23 electron-antineutrino events emerging from a muon–antineutrino beam. This suggested that muon antineutrinos were oscillating into electron antineutrinos along the way, but not in the way expected if there are only three neutrino flavours.
The plot thickened in 2007 when another Fermilab experiment, MiniBooNE, an 818 tonne mineral-oil Cherenkov detector located 541 m downstream from Fermilab’s Booster neutrino beamline, began to see a similar effect. The excess grew, and last November the MiniBooNE collaboration reported a 4.5σ deviation from the predicted event rate for the appearance of electron neutrinos in a muon neutrino beam. In the meantime, theoretical revisions in 2011 meant that measurements of neutrinos from nuclear reactors also show deviations suggestive of sterile-neutrino interference: the so-called “reactor anomaly”.
Tensions have been running high. The latest results from MINOS+, first reported in 2017 and recently accepted for publication in Physical Review Letters, fail to confirm the MiniBooNE signal. The MINOS+ results are also consistent with those from a comparable analysis of atmospheric neutrinos in 2016 by the IceCube detector at the South Pole. “LSND, MiniBooNE and the reactor data are fairly compatible when interpreted in terms of sterile neutrinos, but they are in stark conflict with the null results from MINOS+ and IceCube,” says theorist Joachim Kopp of CERN. “It might be possible to come up with a model that allows compatibility, but the simplest sterile neutrino models do not allow this.” In late February, the long-baseline T2K experiment in Japan joined the chorus of negative searches for the sterile neutrino, although excluding a different region of parameter space.
Whereas MiniBooNE and LSND sought to observe a second-order flavour transition (in which a muon neutrino morphs into a sterile and then electron neutrino), MINOS+ and IceCube are sensitive to a first-order muon-to-sterile transition that would reduce the expected flux of muon neutrinos. Such “disappearance” experiments are potentially more sensitive to sterile neutrinos, provided systematic errors are carefully modelled.
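The complementarity of the two approaches can be made concrete with the standard short-baseline “3+1” oscillation formulas. In the sketch below, the mass-splitting, mixing-matrix elements, baseline and energy are invented example values, not parameters taken from any of the experiments discussed.

```python
# Illustrative 3+1 short-baseline oscillation probabilities (two-flavour
# approximation). All numerical inputs are assumptions for this sketch.
import numpy as np

def osc_term(dm2_eV2, L_km, E_GeV):
    # sin^2(1.27 * dm^2 [eV^2] * L [km] / E [GeV])
    return np.sin(1.27 * dm2_eV2 * L_km / E_GeV) ** 2

dm2    = 1.0        # eV^2, in the region probed by the short-baseline anomalies
Ue4_sq = 0.02       # |U_e4|^2  (assumed)
Um4_sq = 0.02       # |U_mu4|^2 (assumed)
L, E   = 0.5, 0.8   # baseline in km and neutrino energy in GeV (assumed)

# Appearance (second-order transition, as in LSND/MiniBooNE-type searches)
P_app = 4 * Ue4_sq * Um4_sq * osc_term(dm2, L, E)

# Disappearance (first-order transition, as in MINOS+/IceCube-type searches)
P_dis = 4 * Um4_sq * (1 - Um4_sq) * osc_term(dm2, L, E)

print(f"P(numu -> nue)        ~ {P_app:.4f}")
print(f"P(numu disappearance) ~ {P_dis:.4f}")
# For the same mixings, the disappearance probability is much larger than the
# appearance probability: an appearance signal of a given size implies a
# minimum disappearance signal, which is the unitarity argument quoted below.
```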
“The MiniBooNE observations interpreted as a pure sterile neutrino oscillation signal are incompatible with the muon-neutrino disappearance data,” says MINOS+ spokesperson Jenny Thomas of University College London. “In the event that the most likely MiniBooNE signal were due to a sterile neutrino, the signal would be unmissable in the MINOS/MINOS+ neutral-current and charged-current data sets.” Taking into account simple unitarity arguments, adds Thomas, the latest MINOS+ analysis is incompatible with the MiniBooNE result at the 2σ level, and at more than 3σ below a “mass-splitting” of 1 eV² (see figure 1).
The sterile-neutrino hypothesis is also in tension with cosmological data, says theorist Silvia Pascoli of Durham University. “Sterile neutrinos with these masses and mixing angles would be copiously produced in the early universe and would make up a significant fraction of hot dark matter. This is somewhat at odds with cosmological observations.”
One possibility for the surplus electron–neutrino-like events in MiniBooNE is insufficient accuracy in the way neutrino–nucleus interactions in the detector are modelled – a challenge for neutrino-oscillation experiments generally. According to MiniBooNE collaborator Teppei Katori, one effect proposed to account for the MiniBooNE anomaly is neutral-current single-gamma production. “This rare process has many theoretical interests, both within and beyond the Standard Model, but the calculations are not yet tractable at low energies (around 1 GeV) as they are in the non-perturbative QCD region,” he says.
MINOS+ is now analysing its final dataset and working on a direct comparison with MiniBooNE based on electron-neutrino appearance, in addition to the present study of muon-neutrino disappearance. Clarification could also come from other short-baseline experiments at Fermilab, in particular MicroBooNE, which has been operating since 2015, and the two liquid-argon detectors ICARUS and SBND (CERN Courier June 2017 p25). The most exciting possibility is that new physics is at play. “One viable explanation requires a new neutral-current interaction mediated by a new GeV-scale vector boson and sterile neutrinos with masses in the hundreds of MeV,” explains Pascoli. “So far this has not been excluded. And it is theoretically consistent. We have to wait and see.”
On 18 February the CMS and MoEDAL collaborations at CERN signed an agreement that will see a 6 m-long section of the CMS beam pipe cut into pieces and fed into a SQUID in the name of fundamental research. The 4 cm diameter beryllium tube – which was in place from 2008 until its replacement by a new beampipe for LHC Run 2 in 2013 – is now under the proud ownership of MoEDAL spokesperson Jim Pinfold and colleagues, who will use it to search for the existence of magnetic monopoles.
Magnetic monopoles with multiple magnetic charge, if produced in high-energy particle collisions at the LHC, are so highly ionising that they could stop in the material surrounding the collision points and bind there with the beryllium nuclei of the beam pipe. To detect the trapped monopoles, Pinfold and coworkers will pass the beam-pipe material through superconducting loops and look for a non-decaying current using highly precise SQUID-based magnetometers.
Materials from the CDF and D0 detectors at the Tevatron and from the H1 detector at HERA were subjected to such searches during the 1990s, and the first pieces of beam pipe from the LHC experiments, taken from the CMS region, were tested in 2012. But these were from regions far from the collision point, whereas the new study will use material surrounding the CMS central-interaction region. “It’s the most directly exposed piece of material of the experiment that the monopoles encounter when produced and moving away from the collision point,” says Albert De Roeck of CMS and MoEDAL, who was involved in the previous LHC and HERA studies. “Although no signs of monopoles have shown up in data so far, this new study pushes the search for monopoles with magnetic charge well beyond the five Dirac charges currently achievable with the MoEDAL detector.”
MoEDAL technical coordinator Richard Soluk and a small team of technicians will first cut the beampipe into bite-sized pieces at a special facility constructed at the Centre for Particle Physics at the University of Alberta, Canada, where they have to be especially careful because beryllium is highly toxic. The resulting pieces, carefully enshrined in plastic, will then be shipped back to Europe to the SQUID Magnetometer Laboratory at ETH Zurich, where the freshly sliced beam pipe will undergo a short measurement campaign planned for early summer. “On the analysis front we have to estimate how many monopoles would have been trapped in the beam pipe during its deployment at CMS as a function of monopole mass, spin, magnetic charge, kinetic energy and production mechanism,” says Pinfold.
The latest search is complementary to general monopole searches that have already been carried out by the ATLAS and MoEDAL collaborations. Deployed at LHC Point 8, MoEDAL contains more than 100 m² of nuclear-track detectors that are sensitive only to new physics and has a dedicated trapping detector consisting of around one tonne of aluminium.
“Most modern theories such as GUTs and string theory require the existence of monopoles,” says Pinfold. “The monopole is the most important particle not yet found.”
The High-Luminosity LHC (HL-LHC), scheduled to operate from 2026, will increase the instantaneous luminosity of the LHC by at least a factor of five beyond its initial design luminosity. The analysis of a fraction of the data already delivered by the LHC – a mere 6% of what is expected by the end of HL-LHC in the late-2030s – led to the discovery of the Higgs boson and a diverse set of measurements and searches that have been documented in some 2000 physics papers published by the LHC experiments. “Although the HL-LHC is an approved and funded project, its physics programme evolves with scientific developments and also with the physics programmes planned at future colliders,” says Aleandro Nisati of ATLAS, who is a member of the steering group for a new report quantifying the HL-LHC physics potential.
The 1000+ page report, published in January, contains input from more than 1000 experts from the experimental and theory communities. It stems from an initial workshop at CERN held in late 2017 (CERN Courier January/February 2018 p44) and also addresses the physics opportunities at a proposed high-energy upgrade (HE-LHC). Working groups have carried out hundreds of projections for physics measurements within the extremely challenging HL-LHC collision environment, taking into account the expected evolution of the theoretical landscape in the years ahead. In addition to their experience with LHC data analysis, the report factors in the improvements expected from the newly upgraded detectors and the likelihood that new analysis techniques will be developed. “A key aspect of this report is the involvement of the whole LHC community, working closely together to ensure optimal scientific progress,” says theorist and steering-group member Michelangelo Mangano.
Physics streams
The physics programme has been distilled into five streams: Standard Model (SM), Higgs, beyond the SM, flavour and QCD matter at high density. The LHC results so far have confirmed the validity of the SM up to unprecedented energy scales and with great precision in the strong, electroweak and flavour sectors. Thanks to a 10-fold larger data set, the HL-LHC will probe the SM with even greater precision, give access to previously unseen rare processes, and will extend the experiments’ sensitivity to new physics in direct and indirect searches for processes with low-production cross sections and more elusive signatures. The precision of key measurements, such as the coupling of the Higgs boson to SM particles, is expected to reach the percent level, where effects of new physics could be seen. The experimental uncertainty on the top-quark mass will be reduced to a few hundred MeV, and vector-boson scattering – recently observed in LHC data – will be studied with an accuracy of a few percent using various diboson processes.
The 2012 discovery of the Higgs boson opens brand-new studies of its properties, the SM in general, and of possible physics beyond the SM. Outstanding opportunities have emerged for measurements of fundamental importance at the HL-LHC, such as the first direct constraints on the Higgs trilinear self-coupling and the natural width. The experience of LHC Run 2 has led to an improved understanding of the HL-LHC’s ability to probe Higgs pair production, a key measure of its self-interaction, with a projected combined ATLAS and CMS sensitivity of four standard deviations. In addition to significant improvements on the precision of Higgs-boson measurements (figure 1), the HL-LHC will improve searches for heavier Higgs bosons motivated by theories beyond the SM and will be able to probe very rare exotic decay modes thanks to the huge dataset expected.
The new report considers a large variety of new-physics models that can be probed at HL-LHC. In addition to searches for new heavy resonances and supersymmetry models, it includes results on dark matter and dark sectors, long-lived particles, leptoquarks, sterile neutrinos, axion-like particles, heavy scalars, vector-like quarks, and more. “Particular attention is placed on the potential opened by the LHC detector upgrades, the assessment of future systematic uncertainties, and new experimental techniques,” says steering-group member Andreas Meyer of CMS. “In addition to extending the present LHC mass and coupling reach by 20–50% for most new-physics scenarios, the HL-LHC will be able to potentially discover, or constrain, new physics that is not in reach of the current LHC dataset.”
Pushing for precision
The flavour-physics programme at the HL-LHC comprises many different probes – the weak decays of beauty, charm, strange and top quarks, as well as of the τ lepton and the Higgs boson – in which the experiments can search for signs of new physics. ATLAS and CMS will push the measurement precision of Higgs couplings and search for rare top decays, while the proposed second phase of the LHCb upgrade will greatly enhance the sensitivity with a range of beauty-, charm-, and strange-hadron probes. “It’s really exciting to see the full potential of the HL-LHC as a facility for precision flavour physics,” says steering-group member Mika Vesterinen of LHCb. “The projected experimental advances are also expected to be accompanied by improvements in theory, enhancing the current mass-reach on new physics by a factor as large as four.”
Finally, the report identifies four major scientific goals for future high-density QCD studies at the LHC, including detailed characterisation of the quark–gluon plasma and its underlying parton dynamics, the development of a unified picture of particle production, and QCD dynamics from small to large systems. To address these goals, high-luminosity lead–lead and proton–lead collision programmes are considered as priorities, while high-luminosity runs with intermediate-mass nuclei such as argon could extend the heavy-ion programme at the LHC into the HL-LHC phase.
High-energy considerations
One of the proposed options for a future collider at CERN is the HE-LHC, which would occupy the same tunnel but be built from advanced high-field dipole magnets that could support roughly double the LHC’s energy. Such a machine would be expected to deliver an integrated proton–proton luminosity of 15,000 fb⁻¹ at a centre-of-mass energy of 27 TeV, increasing the discovery mass-reach beyond anything possible at the HL-LHC. The HE-LHC would provide precision access to rare Higgs boson (H) production modes, with approximately a 2% uncertainty on the ttH coupling, as well as an unambiguous observation of the HH signal and a precision of about 20% on the trilinear coupling. An HE-LHC would enable a heavy new Z′ gauge boson discovered at the HL-LHC to be studied in detail, and in general double the discovery reach of the HL-LHC to beyond 10 TeV.
The HL/HE-LHC reports were submitted to the European Strategy for Particle Physics Update in December 2018, and are also intended to bring perspective to the physics potential of future projects beyond the LHC. “We now have a better sense of our potential to characterise the Higgs boson, hunt for new particles and make Standard Model measurements that restrict the opportunities for new physics to hide,” says Mangano. “This report has made it clear that these planned 3000 fb⁻¹ of data from HL-LHC, and much more in the case of a future HE-LHC, will play a central role in particle physics for decades to come.”
On 16 June 2018, a bright burst of light was observed by the Asteroid Terrestrial-impact Last Alert System (ATLAS) telescope in Hawaii, which automatically searches for optical transient events. The event, which received the automated catalogue name “AT2018cow”, immediately attracted a lot of attention and acquired a shorter name: “the Cow”. While transient objects are observed in the sky every day – caused, for example, by nearby asteroids or supernovae – two factors make the Cow intriguing. First, the very short time it took for the event to reach its extreme brightness and fade away again indicates that it is unlike anything observed before. Second, it took place relatively close to Earth, 200 million light years away in a star-forming arm of a galaxy in the Hercules constellation, making it possible to study the event in a wide range of wavelengths.
Soon after the ATLAS detection, the object was observed by more than 20 different telescopes around the world, revealing it to be 10–100 times brighter than a typical supernova. In addition to optical measurements, the object was observed for several days by space-based X- and gamma-ray telescopes such as NuSTAR, XMM-Newton, INTEGRAL and Swift, which also observed it in the UV energy range, as well as by radio telescopes on Earth. The IceCube observatory in Antarctica also identified two possible neutrinos coming from the Cow, although the detection is still compatible with a background fluctuation. The combination of all the data – demonstrating the power of multi-messenger astronomy – confirmed that this was not an ordinary supernova, but potentially something completely different.
Bright spark
While standard supernovae take several days to reach maximum brightness, the Cow did so in just 1.5 days, after which the brightness also started to decrease much faster than for a typical supernova. Another notable feature was the lack of heavy-element decays. Normally, elements such as 56Ni produced during the explosion are the main source of a supernova’s brightness, but the Cow only revealed signs of lighter elements such as hydrogen and helium. Furthering the event’s mystique is the variability of the X-ray emission several days after its discovery, which is a clear sign of an energy source at its centre. Half a year after its discovery, two opposing theories aim to explain these features.
The first theory states that an unlucky compact object was destroyed when coming too close to a black hole – a phenomenon called a tidal disruption event. The fast increase in brightness excludes normal stars. On the other hand, a smaller object such as a neutron star – a very dense star consisting of neutron matter – cannot explain the hydrogen and helium observed in the remnant, since it contains essentially no ordinary elements. The remaining possibility is a white dwarf, a dense star remaining after a normal star has ceased fusion but kept from gravitational collapse into a neutron star or black hole by the electron-degeneracy pressure in its core. The observed emission from the Cow could be explained if a white dwarf was torn apart by tidal forces in the vicinity of a massive black hole. One problem with this theory, however, is the event’s location, since black holes of the size required for such an event are normally not found in the spiral arms of galaxies.
The opposing theory is that the Cow was a special type of supernova in which either a black hole or a quickly rotating, highly magnetic neutron star – a magnetar – is produced. While the bright emission in the optical and UV bands is produced by the supernova-like event, the variable X-ray emission is produced by radiating gas falling into the compact object. Normally the debris of a supernova blocks most of the light from reaching us, but the progenitor of the Cow was likely a relatively low-mass star that produced little debris. A hint of its low mass was also found in the X-ray data. If so, this would be the first observation of the birth of a compact object, making these data very valuable for further theoretical development. Such magnetar sources could also be responsible for ultra-high-energy cosmic rays as well as high-energy neutrinos, two of which might have been observed already. The debate on the nature of the Cow continues, but the wealth of information gathered so far indicates the growing importance of multi-messenger astronomy.
Precision measurements of diboson processes at the LHC are powerful probes of the gauge structure of the Standard Model at the multi-TeV energy scale. Among the most interesting directions in the diboson physics programme is the study of gauge-boson polarisation. The existence of three polarisation states is predicted by the Standard Model. The transverse polarisation is composed of right- and left-handed states, with spin either parallel or antiparallel to the momentum vector of the boson. The third state, a longitudinally-polarised component, is generated when the W and Z bosons acquire mass through electroweak symmetry breaking, and is therefore under particular scrutiny.
New phenomena can alter the polarisation predicted by the Standard Model due to interference between new-physics amplitudes and diagrams with gauge-boson self-interactions. WZ production, with its clean experimental signature, offers a sensitive way to search for such anomalies by providing a direct probe of the WWZ gauge coupling, due to the s-channel “Z-strahlung” contribution, where the W radiates a Z.
Building on precision WZ measurements previously reported by the ATLAS and CMS collaborations, a recent ATLAS result constitutes the most precise WZ measurement at a centre-of-mass energy of 13 TeV, and provides the first measurement of the polarisation of pair-produced vector bosons in hadron collisions. Based on 36.1 fb⁻¹ of data collected in 2015 and 2016 by the ATLAS detector, and using leptonic decay modes of the gauge bosons to electrons or muons, ATLAS has achieved a precision of 4.5% for the WZ cross section measured in a fiducial phase space closely matching the detector acceptance. The kinematics of WZ events, including the underlying dynamics of accompanying hadronic jets, has been studied in detail by measuring the cross section as a function of several observables.
The polarisation states of the W and Z bosons can be probed through the distribution of the angle of the decay leptons relative to the direction of the parent boson (figure 1, left). A binned profile-likelihood fit of templates describing the three helicity states allowed ATLAS to extract the W and Z polarisations in the fiducial measurement region. Because of the incomplete knowledge of the neutrino momentum originating from the W-boson decay, it is more difficult to measure the helicity fractions of the W than of the Z. The fraction of longitudinally-polarised W bosons in WZ events is found to be 0.26 ± 0.06 (figure 1, right), while the longitudinal fraction of the Z boson is found to be 0.24 ± 0.04. The analysis leads to an observed significance of 4.2 standard deviations for the presence of longitudinally-polarised W bosons, and 6.5 standard deviations for longitudinally-polarised Z bosons.
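The principle of such a template fit can be illustrated with a short toy study (not the ATLAS analysis code). The Python sketch below builds three Standard-Model-like helicity shapes in cos θ*, generates a pseudo-dataset with assumed fractions and sample size, and recovers the fractions with a simple binned linear fit.

```python
# Toy binned template fit for helicity fractions; the "true" fractions, sample
# size and binning are invented for the example, and the assignment of the two
# transverse templates depends on the boson-charge convention.
import numpy as np

def templates(c):
    # Angular shapes in cos(theta*) of the decay lepton, each normalised to 1:
    # longitudinal, and the two transverse helicity states.
    f0 = 0.75  * (1 - c**2)
    fL = 0.375 * (1 - c)**2
    fR = 0.375 * (1 + c)**2
    return np.vstack([f0, fL, fR])

rng   = np.random.default_rng(1)
edges = np.linspace(-1, 1, 21)
mid   = 0.5 * (edges[:-1] + edges[1:])
width = np.diff(edges)

true_frac = np.array([0.26, 0.54, 0.20])            # assumed (f0, fL, fR)
expected  = 5000 * width * (true_frac @ templates(mid))
observed  = rng.poisson(expected)                   # toy "data" histogram

# Linear fit of the three template yields, then normalise to get fractions
design = (width * templates(mid)).T
yields, *_ = np.linalg.lstsq(design, observed, rcond=None)
print("fitted (f0, fL, fR):", np.round(yields / yields.sum(), 3))
```

The real measurement replaces this least-squares step with a profile-likelihood fit that also handles backgrounds, detector effects and systematic uncertainties.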
Improved precision
The measurements are dominated by statistical uncertainties, but future datasets will improve precision and allow the collaboration to probe new-physics effects in events where both the Z and the W are longitudinally polarised. The ultimate target is to measure the scattering of longitudinally polarised vector bosons: this would be a direct test of electroweak symmetry breaking.
The Standard Model (SM) allows neutral flavoured mesons such as the D0 to oscillate into their antiparticles. Having first observed this process in 2012, the LHCb collaboration has recently made some of the world’s most precise measurements of this behaviour, which is potentially sensitive to new physics. The oscillation of the D0 (cu̅) into its antiparticle, the D̅0 (c̅u), occurs through the exchange of massive virtual particles. These might include as-yet undiscovered particles, so the measurements are sensitive to non-Standard Model dynamics at large energy scales. By examining D0 and D̅0 mesons separately, it is also possible to search for the violation of charge–parity (CP) symmetry in the charm sector. Such effects are predicted to be very small. Therefore, given LHCb’s current level of experimental precision, any sign of CP violation would be a clear indication of physics beyond the Standard Model.
Due to quantum-mechanical mixing between the neutral charm meson’s mass and flavour eigenstates, the probabilities of observing either it or its antiparticle vary as a function of time. This mixing can be described by two parameters, x and y, which relate the properties of the mass eigenstates: x is the normalised difference in mass, and y is the normalised difference in width, or inverse lifetime. The mixing rate is very slow, making these parameters difficult to measure. Isolating the differences between the D0 and D̅0 mesons is an even greater challenge. For these two papers, LHCb was able to achieve small statistical uncertainties thanks to the large samples of charm mesons collected during Run 1, and minimised systematic uncertainties by measuring ratios of yields to cancel detector effects.
In the first paper, LHCb physicists studied the effective lifetime of the mesons. As a consequence of mixing, the effective decay width to CP-even final states, such as K+K– and π+π–, differs from the average width measured in decays such as D0 → K– π+. The parameter yCP, which in the limit of CP symmetry is equal to y, can be deduced from the ratio of decay rates to these two final states as a function of time. LHCb measured yCP with the same precision as all previous measurements combined, obtaining a value consistent with the world-average value of y.
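The logic of the measurement can be seen in a small toy calculation (with invented numbers, not LHCb data): a positive yCP means that CP-even final states have a slightly shorter effective lifetime, so the ratio of effective lifetimes gives yCP directly.

```python
# Toy illustration of extracting y_CP from effective lifetimes. The lifetime,
# the injected y_CP value and the sample sizes are assumptions for this sketch.
import numpy as np

rng  = np.random.default_rng(7)
tau  = 0.41     # ps, roughly the D0 lifetime
y_cp = 0.006    # injected mixing parameter (assumed)

# CP-even decays (e.g. K+K-) have effective lifetime tau/(1 + y_CP);
# the flavour-specific mode (K-pi+) keeps the average lifetime tau.
t_kpi = rng.exponential(tau,              size=5_000_000)
t_kk  = rng.exponential(tau / (1 + y_cp), size=5_000_000)

# y_CP = tau_eff(K-pi+) / tau_eff(K+K-) - 1, estimated here from sample means
y_est = t_kpi.mean() / t_kk.mean() - 1
print(f"extracted y_CP ~ {y_est:.4f} (injected {y_cp})")
```

In practice LHCb fits the time-dependent ratio of yields in the two modes rather than raw lifetimes, which is what cancels most detector-induced effects.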
In the second analysis, LHCb reconstructed D0 decays into the final state K0S π+π– to measure the parameter x, which had not previously been shown to differ from zero. In this mode, mixing manifests as small variations in the decay rate in different parts of phase space as a function of time. Measuring it requires good control over experimental effects as a function of both phase space and decay time. LHCb achieved this by measuring the ratios of the yields in complementary regions of phase space (mirrored in the Dalitz plane) as a function of time. The measured value of x is the world’s most precise, and in combination with previous measurements there is now evidence that it differs from zero.
As well as the mixing itself, both analyses are also sensitive to mixing-induced CP violation. While CP violation was not observed, the limits on its parameters were greatly improved (figure 1). This is a good example of how different decay modes give complementary information and, when taken together, can have a big impact. LHCb will continue to perform measurements with additional modes and the larger samples collected in Run 2.
The fundamental structure of nucleons is determined by the properties and dynamics of their constituent quarks and gluons, as described by QCD. The gluon’s self-interaction complicates this picture considerably. Non-linear recombination reactions, where two gluons fuse, are predicted to lead to a saturation of the gluon density. This largely unexplored phenomenon is expected to occur when the gluons in a hadron overlap transversally, and is enhanced for hadrons with high atomic numbers. Gluon saturation may be studied in proton-lead collisions at the LHC in the kinematic region where the gluon density is high and the gluons have sizable transverse dimensions.
Gluon saturation has been a focal point for the heavy-ion community for decades. Precision measurements at HERA, RHIC and previously at the LHC agree with the predictions made by saturation models; however, the measurements do not allow an unambiguous interpretation of whether gluon saturation occurs in nature. This is a strong motivation both for the LHC experiments and for the planned Electron Ion Collider (CERN Courier October 2018 p31).
The CMS collaboration recently submitted a paper on gluon saturation in proton-lead collisions to the Journal of High Energy Physics (JHEP). The collisions used for this analysis occurred in 2013 at a centre-of-mass energy of 5 TeV and were detected using the CMS experiment’s CASTOR calorimeter. This is a very forward calorimeter of CMS, where “forward” refers to regions of the detector that are close to the beam pipe. Therefore, unlike any other LHC experiment, CMS can measure jets at very forward angles (−6.6 < η < −5.2) and with transverse momenta (pT) as low as 3 GeV. This is the first time that a jet-energy spectrum measurement from the CASTOR calorimeter has been submitted to a journal.
Forward jets with a small pT can probe gluons in the high-density regime with sizable transverse dimensions, making CASTOR ideal for a study of gluon saturation. Colliding protons with lead ions further enhanced the sensitivity of the CASTOR jet spectra to saturation effects, enabling CASTOR to overcome the ambiguity associated with the interpretation of the previous measurements.
The jet-energy spectrum obtained using CASTOR was compared to two saturation models (figure 1, left). These were the “Katie KS” model and predictions from the AAMQS collaboration; the latter are based on the colour-glass-condensate model. In the Katie KS model, the strength of the non-linear gluon recombination reactions can be varied. The comparison showed that the linear and non-linear predictions of this model differ by an order of magnitude in the lowest energy bins of the spectrum, which correspond to low-pT jets, while they converge at the highest energies, confirming the high sensitivity of the measurement to gluon saturation. The AAMQS predictions underestimated the data progressively, by up to an order of magnitude, in the region most strongly affected by saturation. Overall, neither model described the spectrum correctly.
The spectrum was also compared to two cosmic-ray models (EPOS-LHC and QGSJetII-04) and to the HIJING event generator (figure 1, right). The former models underestimated the data by over two orders of magnitude, while HIJING, which incorporates an implementation of nuclear shadowing, agreed well with the data. Nuclear shadowing is an interference effect between the nucleons of a heavy ion. Like gluon saturation, it is expected to lead to a decrease in the probability for a proton-lead collision to occur; however, further data analysis is required for more definite conclusions on nuclear shadowing.
These results establish CASTOR jets as an experimental reality, and their sensitivity to saturation effects is an encouragement for further, more refined CASTOR jet studies.