The Tevatron legacy: a luminosity story

Résumé

A luminosity story

The results of the Tevatron programme went far beyond the physics goals originally envisaged, largely because the luminosity delivered exceeded the initial target by a factor of 100. Even at fixed energy, hadron colliders are in many respects an almost inexhaustible source of physics. The LHC has so far produced about 1% of its total expected luminosity, a situation similar to that of the Tevatron in 1995, at the time of the top-quark discovery. If things unfold as they did at the Tevatron, the LHC experiments still have many exceptional results in store!

Throughout history, the greatest instruments have yielded a treasure trove of scientific results and breakthroughs via long-term “exposures” to the landscape they were designed to study. Among many examples, there are telescopes and space probes (such as Hubble), land-based observatories (such as LIGO), and particle accelerators (such as the Tevatron and the LHC).

The long-lived nature of these explorations not only opens up the possibility for discovery of the rarest of phenomena with increases in the amount of the data collected, but also allows a narrower focus on specific regions of interest. In these sustained endeavours, the scientist’s ingenuity is unbounded, and through a combination of instrumental and data-analysis innovations, the programmes evolve well beyond their original scope and expected capabilities.

In 2015, the LHC increased its collision energy from 8 to 13 TeV, marking the start of what ought to be a long era of exploration of proton–proton collisions at close to the LHC’s design energy. In December 2015, both the CMS and ATLAS experiments disclosed intriguing results in their di-photon invariant-mass spectra, where an excess of events near 750 GeV suggests the possibility of a new and unexpected particle emerging from the data (CERN Courier January/February 2016 p8). With just a few inverse femtobarns of data recorded, the statistical significance of the observation is not sufficient to establish whether this is a coincidental background fluctuation or a great new discovery. One thing is certain: more data are needed.

It is worth reflecting on the experience of the Tevatron collider programme, where proton–antiproton collisions at ~2 TeV centre-of-mass energy were accumulated over a 25-year period, from 1986 to 2011. During this time, the Tevatron’s instantaneous luminosity increased from 10^29 cm–2 s–1 to above 4 × 10^32 cm–2 s–1, exceeding the original design luminosity by two orders of magnitude. Figure 1 shows the progression of the initial luminosity for each Tevatron store (in each interaction region) versus time, together with the periods of no data during extended upgrade shutdowns. The luminosity growth came from the construction of large new facilities, as well as from upgrades and better use of the existing equipment, with the steady growth of antiproton production as its cornerstone. The facilities supporting this growth included the Linac extension in the early 1990s, which doubled its energy to 400 MeV and increased the Booster intensity; the Main Injector, a 150 GeV rapid-cycling proton accelerator that greatly increased the proton beam power available for antiproton production; the Recycler, a ring of permanent magnets commissioned at the beginning of the 2000s, which added a third antiproton ring and helped raise the antiproton production rate; and a major upgrade of the stochastic cooling of the antiproton complex, together with the development and construction of electron cooling in the 2000s, which reduced the antiproton beam emittances. A large number of other accelerator improvements were also key to the Tevatron ultimately delivering more than 10 fb–1 of luminosity to each of the two general-purpose experiments, CDF and D0. All of them required deep insight into the underlying accelerator-physics problems, inventiveness and creativity.
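As a rough reading of the numbers above (a back-of-the-envelope check added here, not a calculation from the original article), the gain in peak luminosity over the life of the programme was

\[
\frac{4\times10^{32}\ \mathrm{cm^{-2}\,s^{-1}}}{10^{29}\ \mathrm{cm^{-2}\,s^{-1}}} \approx 4\times10^{3},
\]

i.e. more than three orders of magnitude from the first collisions to the final stores, while the statement that the final value exceeded the design by two orders of magnitude is consistent with an original design luminosity of order 10^30 cm–2 s–1.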

From the measurement of the charged-particle multiplicity in the first few proton–antiproton collisions to the tour-de-force that was the search for the Higgs boson with the full data set of ~10 fb–1, the CDF and D0 experiments harvested a cornucopia of scientific results. From 2005 to 2013, the combined publication rate held steady at roughly 80 papers per year, and the total number of papers published using Tevatron data has reached 1200, with more still to come.

The results from this bountiful programme include fundamental milestones such as the top-quark discovery and evidence for the Higgs boson, rare Standard Model processes (such as di-boson and single top-quark production), new composite particles (such as a new family of heavy b baryons) and very subtle quantum phenomena (such as Bs mixing). They also include many high-precision measurements (such as the mass of the W boson), the opening of new research areas (such as precision measurements of the top-quark mass and its properties), and searches for new physics in all of its forms. As shown in Figure 2, progress in each of these categories came steadily throughout the whole running period of the Tevatron, as more and more data were accumulated.

The observation of Bs mixing is an example where the ~0.1 fb–1 of data collected in the 1990s was simply not enough to yield a statistically significant measurement. With 10 times more data by 2006, the phenomenon was clearly established, with a statistical significance exceeding five standard deviations. As a result, many models of new physics that predicted an oscillation frequency away from the Standard Model expectation were excluded.

With about 2 fb–1 of data, enough events were accumulated to firmly establish a new family of heavy baryons containing a b quark, such as the Ξb and Σb baryons. Some of these discoveries rested on only ~10–20 signal events, so a large number of proton–antiproton collisions, in addition to the development of new analysis methods, was critical for establishing these new baryons. It took a bit longer to discover the Ωb baryon, which is heavier and has a smaller production cross-section, but with 4 fb–1, 16 events were observed with backgrounds small enough to firmly establish its existence. It was exciting to watch events accumulate in the corresponding mass peak with each additional inverse femtobarn of data collected.

It was not only discoveries that benefited from more data; high-precision measurements did too. The masses of elementary particles, such as that of the W boson, are among the most fundamental parameters in particle physics. With 1 fb–1 of data, samples containing hundreds of thousands of W bosons became available, resulting in an uncertainty of ~40 MeV, or 0.05%. With 4 fb–1 of data, the accuracy of the measurement reached ~20 MeV for each individual experiment. With more data, many of the systematic uncertainties were successfully reduced as well; ultimately, however, not all systematic uncertainties are better constrained by more data, and those that are not become the limiting factor in the measurement.
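The improvement quoted above follows the familiar statistical scaling (a sketch of the arithmetic; the 1/√N behaviour is assumed here rather than taken from the article). The statistical uncertainty of such a measurement shrinks roughly as the inverse square root of the sample size, so quadrupling the data set halves the error:

\[
\delta m_W \propto \frac{1}{\sqrt{N}} \quad\Longrightarrow\quad \delta m_W(4\ \mathrm{fb^{-1}}) \approx \frac{40\ \mathrm{MeV}}{\sqrt{4}} = 20\ \mathrm{MeV},
\]

while 40 MeV on a W mass of roughly 80 GeV indeed corresponds to the quoted 0.05%.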

Searches for physics beyond the Standard Model are always of the highest priority in the research programme of every energy-frontier collider, and the Tevatron was no exception. The number of publications in this area is the largest among all physics topics studied. Tighter and tighter limits have been set on many exotic theories and models, including supersymmetry. In many cases, limits on the masses of the new sought-after particles reached 1 TeV and above, about half of the Tevatron’s centre-of-mass energy.

The observation of the electroweak production of the top quark was among the many important tests of the Standard Model performed at the Tevatron. Although the cross-section for electroweak single top-quark production is only a factor of two lower than that for top-quark pair production at the Tevatron, the final state with a single decaying heavy particle was very difficult to detect in the presence of large backgrounds, such as W+jets production. The search for the single top quark was the first time that new multivariate analysis methods were used so effectively in the discovery of a new process, replacing standard “cut-based” analyses and substantially increasing the sensitivity of the search. Even with these new analysis methods, 1 fb–1 of data was needed to obtain the first evidence for this process, and more than 2 fb–1 to make a firm discovery – almost 50 times more data than was needed to discover the top quark via pair production in 1995.

The analysis methods developed in the single top-quark observation were, in turn, very useful later on in the search for the Standard Model Higgs boson. The cross-sections for Higgs production are rather low at the Tevatron, so only the most probable decay modes to a pair of b quarks or W bosons contributed significantly to the search sensitivity. With 5 fb–1 of data accumulated, each experiment began to be sensitive to detecting Higgs bosons with mass around 165 GeV, where the Higgs decays mainly to a pair of W bosons. It became evident at that time that the statistical accuracy that each experiment could achieve on its own would not be enough to reach a strong result in their Higgs searches, so the two experiments combined their results to effectively double the luminosity. In this way, by 2011, the Tevatron experiments were able to exclude the Higgs boson in nearly the complete mass range allowed by the Standard Model. In the summer of 2012, using their full data set, the Tevatron’s experiments obtained evidence for Higgs boson production and its decay to a pair of b quarks, as the LHC experiments discovered the Higgs boson in its decays to bosons.

Lessons learnt

Among the many lessons learnt from the 25-year-long Tevatron run is that important results appear steadily as the size of the data set increases. One reason is the vast range of studies that these general-purpose experiments perform: hundreds of analyses – of the top quark, of particles containing a b quark, of processes involving the Higgs boson – delivered exciting results at various luminosities, whenever enough data had been accumulated for the next important result. Upgrades to the detectors are critical to handle ever-higher luminosities; both CDF and D0 had major upgrades to their trackers, calorimeters and muon detectors, as well as to their trigger and data-acquisition systems. The development of new analysis methods is also important, enabling more information to be extracted from the data. Finally, continual improvements to the accelerator complex kept the luminosity doubling time at about a year or two until the end of the programme, providing significant increases in data over relatively short periods of time.

The impact of the Tevatron programme extended well beyond its originally planned physics goals, to a large extent because the delivered luminosity was a hundred times larger than originally foreseen. In many ways, even at a fixed energy, hadron colliders are a nearly inexhaustible source of physics. The LHC has so far gathered approximately 1% of its expected total luminosity – a situation similar to that of the Tevatron back in 1995, at the time of the top-quark discovery. Based on the Tevatron experience, many more exciting results from the LHC experiments are yet to come.

• For further details, see www-d0.fnal.gov/d0_publications/d0_pubs_list_bydate.html and www-cdf.fnal.gov/physics/physics.html.

Super-magnets at work

 

To deliver ten times the LHC’s original design luminosity, the HL-LHC will require the replacement of more than 40 large superconducting magnets, in addition to about 60 superconducting corrector magnets. A wealth of innovative magnet technologies will be exploited to ensure the final performance of the new machine. Two key features are of paramount importance for the whole project: the production of high magnetic fields, and the stability and reliability of the various components.

The backbones of the upgrade are the 24 new focussing quadrupoles (inner triplets) that will be installed at both ends of the ATLAS (Point 1) and CMS (Point 5) interaction regions. These magnets will provide the final beam squeeze to maximise the collision rate in the experiments. They are particularly challenging, because they will have to reach a field of nearly 12 T with an aperture that is more than double that of the current triplets.

In its final configuration, the new machine will have 36 new superferric corrector magnets: four quadrupoles and 32 higher-order magnets, up to dodecapoles. These magnets will also feature a much larger aperture than the ones currently used in the LHC, yet they are designed to be more stable and reliable, to withstand the tougher operating conditions of the new machine.

The HL-LHC will also need a more efficient collimation system, because the present one will not be able to handle the new beam intensity, which is twice the LHC’s nominal design value. For this reason, powerful dipoles will be installed at Point 7 of the ring, in the dispersion-suppression region. The idea is to replace an 8 T, 15 m-long standard LHC dipole with two 11 T, 5.5 m-long dipoles, thereby achieving the same beam-bending strength while freeing space for new collimators in the 4 m central slot. The new dipoles will have a peak field approaching 12 T, comparable to that of the new inner-triplet quadrupoles.
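The equivalence in bending strength is a matter of the field–length product (a simple check of the numbers quoted above):

\[
8\ \mathrm{T}\times 15\ \mathrm{m} = 120\ \mathrm{T\,m} \approx 2\times\left(11\ \mathrm{T}\times 5.5\ \mathrm{m}\right) = 121\ \mathrm{T\,m},
\]

so the pair of shorter, higher-field dipoles bends the beam by essentially the same angle while leaving the 4 m central slot free for the collimators.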

Superferric: strong and reliable

The first HL-LHC magnet, ready and working according to specifications, is a sextupole corrector. This first component is also rather unique because, unlike the superconducting magnets currently used in the LHC, it relies on a “superferric” heart.

Although the name might sound unfamiliar, superferric magnets were first proposed in the 1980s as a possible solution for high-energy colliders. However, many technical problems had to be overcome, and a suitable opportunity had to arise, before their use could become a reality.

In a standard superconducting magnet, iron is used only in the yoke, while in a superferric (or “iron-dominated”) magnet, iron is also used in the poles that shape the field, much as in classical resistive magnets. In the HL-LHC superferric correctors, the coils are made of Nb-Ti superconductor and will be operated at 1.9 K. The superferric design was selected among other options because it has a sharp fringe field and is very robust. Robustness is crucial for the HL-LHC, where the magnets will have to sustain the increased radiation load caused by the collisions of the high-intensity beams.

A superferric corrector magnet had been developed by CIEMAT for the sLHC-PP study (the study that preceded that of the HL-LHC, see project-slhc.web.cern.ch/project-slhc/), and that design was used as a starting point for the HL-LHC correctors. Subsequently, in the framework of a collaboration agreement for the HL-LHC project signed in 2013 between CERN and the Italian National Institute for Nuclear Physics (INFN), the LASA laboratory of the Milan section of the INFN has taken over as a partner in the project.

Recent tests carried out at LASA showed that the sextupole corrector magnet is highly stable: without quenching, it reached and surpassed the ultimate current of 150 A (132 A being the nominal operating value) required by the design specifications. In fact, the first training quench appeared well above 200 A.

Record dipoles

The HL-LHC is an important test bed for a new concept of dipoles. Built using superconducting niobium-tin (Nb3Sn) coils kept at a temperature of 1.9 K, the new dipoles will have to reach a bore field of 11 T in stable conditions.

If the expected requirements are met, niobium-tin magnets will be crucial to all future collider machines because, for the time being, this is the only technology able to produce magnetic fields greater than 10 T.

Following up on initial successful tests conducted at Fermilab on single-aperture magnets, CERN’s experts went on to design and manufacture the first 2 m-long model magnet. While relying on the coil technology developed at Fermilab, the CERN magnet includes some new design features: cable insulation made by braiding S2-glass on mica-glass tape, a new material for the coil wedge, more flexible coil end spacers and a new collaring concept for pre-loading the brittle niobium-tin coils.

Magnets reach their nominal operation after “training” – a procedure that repeatedly pushes the magnet to the highest field it can hold before quenching. Quench after quench, the magnets retain a memory of their previous performance and reach higher fields. In January this year, the first double-aperture (two-in-one) dipole showed full memory of the single-coil tests and established a record field for accelerator dipoles, reaching a stable operating field of 12.5 T. Even more relevant to future operation in the LHC, it passed the nominal field of 11 T with no quench, and reached the ultimate field of 12 T with only two quenches (see figure 1).

Although they will produce a much higher field, the HL-LHC 11 T dipoles are otherwise very similar to the standard LHC dipoles, because they must fit in the continuous cryostat and be powered in series with the rest of the LHC dipoles in a sector. In parallel, the members of the HL-LHC project are developing a bypass cryostat to host the new collimators, which will be inserted between two 11 T dipoles. The new components will allow the machine to cope with the increased number of particles drifting out of the primary beam and hitting the magnets in the cleaning insertions at Point 7. The work is so well advanced that the first collimators should be installed in the dispersion-suppression region of the LHC ring during the next long shutdown of the machine, scheduled for 2019–2020. They will improve the performance of the LHC during Run 3 and will put the new Nb3Sn technology to the test, in preparation for the final configuration of the HL-LHC.

With an aperture of 150 mm, the focussing quadrupoles currently being developed by the US-LARP collaboration (BNL, FNAL and LBNL) together with CERN take advantage of the most advanced magnet technologies. Like the 11 T dipoles, they use Nb3Sn coils and will operate at 1.9 K. Because of the high field and large aperture, these magnets store twice as much energy per unit length as the LHC dipoles. The mechanical structure that contains the forces and ensures the field shape is of a new type, first proposed by LBNL, called “keys and bladders”. Based on force control rather than on dimension control, as in a classical “collars” structure, it is very well suited to the mechanical characteristics of Nb3Sn (a very brittle material) and is easy to implement on a limited number of magnets. A special tungsten absorber system, integrated in the beam screen, shields these magnets from the “heavy rain” of collision debris, which will be five times more intense than in the LHC.

At the beginning of March, the LARP teams at Fermilab succeeded in training the first 1.5 m-long model. Designed and manufactured by a joint CERN–LARP team, this is the first accelerator-quality model of the inner-triplet magnet with its final cross-section. Following a very smooth training curve (see figure 2), the model magnet surpassed the operating gradient, which corresponds to a peak field of 11.4 T, and actually reached 12.5 T. Together with the HL-LHC’s companion 11 T dipole, these are the first accelerator-quality magnets built to reach such fields.

Building on the proven performance of the Nb3Sn technology, experts from both sides of the Atlantic will go on to build the actual-length quadrupoles. At Fermilab, the final US magnets will measure 4.2 m in length, while the CERN team aims at manufacturing 7.2 m-long quadrupoles to halve the number of magnets to be installed.

In addition to securing the future of collider physics, for which the success of high-field Nb3Sn magnets in the HL-LHC is a key ingredient, the new powerful magnets will pave the way for applications in other fields, including medicine. Nb3Sn technology is already at the core of ultra-high-field magnets used in various domains, but not yet in magnetic resonance imaging (MRI), which today represents the largest commercial use of superconductivity. The hope is that its development for the HL-LHC will also boost the wider use of Nb3Sn magnets in the medical sector: thanks to the higher magnetic fields that can be achieved with Nb3Sn, MRI systems using this technology would be able to provide more detailed images and faster scanning, e.g. for functional imaging. The challenge is now within reach.

The HL-LHC: a bright vision

The LHC is one of the world’s largest and most complex scientific instruments. Its design and construction required more than 20 years of hard work and the unique expertise of a number of experts. Following on from the discovery of the Higgs boson in 2012, the machine continues to run at unprecedented energy to give physicists access to phenomena that have so far remained out of reach.

The full exploitation of the LHC and its high-luminosity upgrade programme, the High Luminosity LHC (HL-LHC), have been identified as one of Europe’s highest priorities for the next decade in the European Strategy for Particle Physics (CERN Courier July/August 2013 p9) adopted by CERN Council in the special session held in Brussels on 30 May 2013. The HL-LHC was also recently selected as one of the 29 landmark projects of the European Strategy Forum on Research Infrastructures (ESFRI) 2016 Roadmap.

Although it concerns only about 5% of the current machine, the HL-LHC is a major upgrade programme requiring a number of key innovative technologies, each one an exceptional technological challenge that involves several institutes around the world.

At the heart of the new configuration are the powerful magnets – both dipoles and quadrupoles – that will have to operate at unprecedented field values: 11 and 12 T, respectively. In particular, the quadrupoles, also called “inner triplets”, which will be installed on both sides of the collision points, are crucial for achieving the planned leap in integrated luminosity: from the 300 fb–1 of the LHC by the end of its initial run to the 3000 fb–1 of the HL-LHC. Their aperture will be more than double that of the current triplets – a requirement that would scare many magnet experts, because the stored energy grows with the square of the magnetic field and with the magnet aperture.
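The scaling mentioned above can be made explicit (a minimal sketch based on the standard magnetic energy density, not a formula quoted in the article). The energy density of a magnetic field is B²/2μ0, so the energy stored per unit length in the bore region alone grows with the square of the field and with the area over which that field extends:

\[
\frac{E}{\ell} \;\gtrsim\; \frac{B^{2}}{2\mu_{0}}\,\pi r^{2},
\qquad
\left.\frac{B^{2}}{2\mu_{0}}\right|_{B=12\ \mathrm{T}} \approx 5.7\times10^{7}\ \mathrm{J\,m^{-3}},
\]

so raising the field from about 8 T to 12 T alone more than doubles the stored energy, before the larger aperture is even taken into account.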

The overall increase in luminosity cannot be reached without revolutionising the superconducting technologies currently used in particle accelerators. The new magnets rely on niobium-tin (Nb3Sn) superconducting cables, instead of the LHC’s niobium-titanium alloy. The first model, with a full-size cross-section but shorter than the actual magnet (1 m long compared with the final 4.2 or 7 m), has just proven that the technology works well, even beyond expectations. Similarly good results were obtained in January by the experts working on the 11 T dipoles that will make room for the new collimation system of the dispersion suppressor, which is being entirely redesigned (see “Super-magnets at work” in this issue).

Another key element of the new machine is the crab cavities. Unlike standard radiofrequency cavities, crab cavities rotate the bunches by giving them a transverse deflection. This is used to increase the luminosity at the collision points and to reduce the parasitic beam–beam effects that limit the collision efficiency of the accelerator. The crab-cavity concept was pioneered at the KEKB machine, but the HL-LHC will be the first proton collider to implement it.
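The luminosity gain comes from compensating the geometric overlap loss caused by the crossing angle (the expression below is the standard textbook reduction factor, with symbols assumed here rather than taken from the article). Bunches of length σz colliding with a full crossing angle θc and transverse beam size σ* at the interaction point lose luminosity by the factor

\[
F \;=\; \frac{1}{\sqrt{1+\left(\dfrac{\theta_{c}\,\sigma_{z}}{2\,\sigma^{*}}\right)^{2}}},
\]

and crab cavities tilt the bunches so that they overlap effectively head-on at the collision point, recovering most of this factor.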

The current operation of the LHC is often disrupted by failures of the machine’s powering system, caused in part by the high levels of radiation produced by the high-energy, high-intensity circulating beams. With even higher luminosity, this problem could prevent the accelerator from performing reliably. New superconducting cables based on magnesium diboride (MgB2) have already demonstrated the transport of electrical currents of 20 to 100 kA at a convenient temperature of 20 K. They will make it possible to move the power converters from the LHC tunnel to a new service gallery, thereby facilitating technical and maintenance operations and reducing the radiation dose to personnel.

All in all, more than 1.2 km of the current ring will need to be replaced with new components. Using cutting-edge technologies, it will be possible for scientists to significantly extend the discovery potential of the LHC (e.g. providing about a 30% higher mass reach for new particles) without replacing the full ring. This is also a challenge for the experiments, which will have to upgrade their inner detectors and other components to face the higher collision rate (CERN Courier January/February 2016 p26).

Based on innovative technological solutions, the HL-LHC will also allow physicists to study in depth the properties of the Higgs boson and any possible new particles that the LHC may discover in future runs. In addition, it will play a decisive role in the future of experimental particle physics because it is the ideal test bed for both technology demonstration and for the design of future accelerators beyond the LHC.

The promising results obtained so far have been possible thanks to the collaborative effort of several institutes in Europe and around the world. It is indeed amazing to realise that since its inception, the HL-LHC has brought together more than 250 scientists from 25 countries. This is confirmation that, today, no big scientific endeavour, however bright and smart it might be, can actually be pursued without the contribution of the whole community.

Laser Experiments for Chemistry and Physics

By R N Compton and M A Duncan
Oxford University Press

9780198742982

The book provides an introduction to the characteristics and operation of lasers through laboratory experiments for undergraduate students in physics and chemistry.

After a first section reviewing the properties of light, the history of laser invention, the atomic, molecular and optical principles behind how lasers work, as well as the kinds of lasers that are available today, the text presents a rich set of experiments on various topics: thermodynamics, chemical analysis, quantum chemistry, spectroscopy and kinetics.

Each chapter gives the historical and theoretical background to the topics covered by the experiments, and variations to the prescribed activities are suggested.

Both of the authors began their research careers at the time when laser technology was taking off, and witnessed advances in the development and application of this new technology to many fields. In this book they aim to pass on some of their experience to new students, and to stimulate practical activities in optics and lasers courses.

Resummation and Renormalization in Effective Theories of Particle Physics

By Antal Jakovác and András Patkós
Springer

978-3-319-22620-0

The book collects notes written by the authors for a course on finite-temperature quantum fields and, more specifically, on the application of effective models of the strong and electroweak interactions to particle-physics phenomenology.

The topics selected reflect the research interests of the authors; nevertheless, in their opinion, the material covered in the volume can help master’s students in physics to improve their ability to deal with reorganisations of the perturbation series of renormalisable theories.

The book is made up of eight chapters organised in four parts. A historical overview of effective theories (scientific theories that aim to model certain effects without attempting to model their underlying causes) opens the text; two chapters then provide the basics of quantum field theory necessary for following the directions of contemporary research. The third part introduces three different, widely used approaches to improving the convergence properties of renormalised perturbation theory. Finally, results that emerge from the application of these techniques to the thermodynamics of the strong and electroweak interactions are reviewed in the last two chapters.

Holograms: A Cultural History

By Sean F Johnston
Oxford University Press


The book is a sort of biography of holograms, peculiar optical “objects” that have crossed the border of science to enter other cultural and anthropological fields, such as art, visual technology, pop culture, magic and illusion. No other visual experience is like interacting with holograms – they have the power to fascinate and amuse people. Not only physicists and engineers, but also artists, hippies, hobbyists and illusionists have played with, and dreamed about, holograms.

This volume can be considered as complementary to a previous book by the same author, a professor at the University of Glasgow, called Holographic Visions: A History of New Science. While the first book gave an account of the scientific concepts behind holography, and of its development as a research subject and engineering tool, the present text focuses on the impact that holography has had on society and consumers of such technology.

The author explores how holograms found a place in distinct cultural settings, moving from being expressions of modernity to countercultural art means, from encoding tools for security to vehicles for mystery.

Clearly written and full of interesting factual information, this book is addressed to historians and sociologists of modern science and technology, as well as to enthusiasts who are interested in understanding the journey of this fascinating optical medium.

Theoretical Foundations of Synchrotron and Storage Ring RF Systems

By Harald Klingbeil, Ulrich Laier, and Dieter Lens
Springer
Also available at the CERN bookshop


This book is one of the few, if not the only one, dedicated to radiofrequency (RF) accelerator systems and longitudinal dynamics in synchrotrons, and it provides a self-contained and clear theoretical introduction to the subject. Some of these topics can be found in individual articles from specialised schools, but not in such a comprehensive form in a single source. The content of the book is based on a university course, and it is addressed to graduate students who want to study accelerator physics and engineering.

After a short introduction on accelerators, the second chapter provides a concise but complete overview of the mathematical-physics tools required in the following chapters, such as Fourier analysis and the Laplace transform. Ordinary differential equations and the basics of non-linear dynamics are presented along with the notions of phase space, phase flow and velocity vector fields, leading naturally to the continuity equation and the Liouville theorem. Hamiltonian systems are elegantly introduced, with the mathematical pendulum and an LC circuit used as examples. This second chapter provides the necessary background for any engineer or physicist wishing to enter the field of accelerator physics. The basic formulas and concepts of electromagnetism and special relativity are briefly recalled, and the text is completed by a useful set of tables and diagrams in the appendix. An extensive set of references is given, although a non-negligible number are in German and might not be of help to the English-speaking reader – a feature also found in other chapters.

In the third chapter, the longitudinal dynamics in synchrotrons is detailed. The basic equations and formulas describing synchrotron motion, bunch and bucket parameters are derived step-by-step, confirming the educational vocation of the book. The examples of a ramp and of multicavity operation are sketched out. I would have further developed the evolution of the RF parameters in a ramp using one of the GSI accelerators as a more concrete numerical example.

In the fourth chapter, the two most common types of RF cavities (ferrite-loaded and pillbox) are discussed in detail (in particular, the ferrite-loaded ones used in low- and medium-energy accelerators), providing detailed derivations of the various parameters and completing them with two examples referring to two specific applications.

The fifth chapter contains an interesting and thorough discussion of the theoretical description of beam manipulations in synchrotrons, with particular emphasis on the notion of adiabaticity, which is critical for emittance preservation when operating with high-brightness beams. This concept is normally dealt with in a qualitative way, whereas the book provides a more solid background derived from classical Hamiltonian mechanics. In the second part of the chapter, after an introduction to the description of a bunch by means of its moments, including the concept of RMS emittance, longitudinal bunch oscillations and their spectral representation are described, providing the basis for the study of longitudinal beam stability. Beam stability itself is not addressed in the book, and the notion of impedance is only briefly introduced for the case of space charge, although some references covering these subjects are provided.

The last two chapters are devoted to the engineering aspects of RF accelerator systems: power amplifiers and closed-loop controls. The chapter on power amplifiers is mainly focused on the solutions of interest for low- and medium-energy synchrotrons, whereas high-frequency narrowband power amplifiers like klystrons are very briefly discussed. The chapter on low-level RF is rather dense but still clearly written, and is built around a specific example of an amplitude control loop. That eases the understanding of concepts and criteria underlying feedback stability and the impact of time delays and disturbances. The necessary mathematical tools are presented with a due level of detail, before delving into the stability criteria and into a discussion of the chosen example.

The volume is completed by a rich appendix summarising basic concepts and formulas required elsewhere in the book (e.g. some notions of transverse beam dynamics and the characterisation of fixed points) or working out in detail some examples of subjects treated in the main text. Some handy refreshers on calculus and algebra are also provided.

This book undoubtedly fills a gap in the panorama of textbooks dedicated to accelerator physics. I would recommend it to any physicist or engineer entering the field. I enjoyed reading it as a comprehensive and clear introduction to some aspects of accelerator RF engineering, as well as to some of the theoretical foundations of accelerator physics and, in general, of classical mechanics.

50 Years of Quarks

By Harald Fritzsch and Murray Gell-Mann (eds)
World Scientific
Also available at the CERN bookshop


This book was written on the occasion of the golden anniversary of a truly remarkable year in fundamental particle physics: 1964 saw the discovery of CP violation in the decays of neutral kaons, of the Ω baryon (at the Brookhaven National Laboratory), and of cosmic microwave background radiation. It marked the invention of the Brout–Englert–Higgs mechanism, and the introduction of a theory of quarks as fundamental constituents of strongly interacting particles.

Harald Fritzsch and Murray Gell-Mann, the two fathers of quantum chromodynamics, look back at the events that led to the discovery, and eventually acceptance, of quarks as constituent particles. Why should we look back at the 1960s? Besides the fact that it is always worthwhile to reminisce about those times when theoretical physicists were truly eclectic, these stories are the testimony of a very active era, in which theoretical and experimental discoveries rapidly chased one another. What is truly remarkable is that, even in the absence of an underlying theory, piecing together sets of disparate experimental hints, the high-energy physics community was always able to provide a consistent description of the observed particles and their interactions. In fact, it was general principles such as causality, unitarity and Lorentz invariance that allowed far-reaching insights into analyticity, dispersion relations, the CPT theorem and the relation between spin and statistics to be obtained.

In this volume, Fritzsch and Gell-Mann present a collection of contributions written by renowned physicists (including S J Brodsky, J Ellis, H Fritzsch, S L Glashow, M Kobayashi, L B Okun, S L Wu, G Zweig and many others) that led to crucial developments in particle theory. The individual contributions in the book range from technical manuscripts, lecture notes and articles written 50 years ago, to personal, anecdotal and autobiographical accounts of endeavours in particle physics, emphasising how they interwove with the conception and eventually acceptance of the quark hypothesis. The book conveys the enthusiasm and motivation of the scientists involved in this journey, their triumph in cases of success, their amazement in cases of surprises or difficulties, and their disappointment in cases of failures. One realises that while quantum chromodynamics seems a simple and natural theory today, not everything was as easy as it now looks, 50 years later. In fact, the paradoxical properties of quarks, imprisoned for life in hadrons, had no precedent in the history of physics.

The last 50 years have witnessed spectacular progress in the description of the elementary constituents of matter and their fundamental interactions, with important discoveries that led to the establishment of the Standard Model of particle physics. This theory accurately describes all observable matter, namely quarks and leptons, and their interactions at colliders through the electromagnetic, weak and strong forces. Yet many open questions remain that are beyond the reach of our current understanding of the laws of physics. Of central importance now is the understanding of the composition of our universe, of dark matter and dark energy, of the hierarchy of masses and forces, and of a consistent quantum framework for the unification of all forces of nature, including gravity. The closing contributions of the book put this venture in the context of today’s high-energy physics programme, connecting it to the most popular ideas in the field, including supersymmetry, unification and string theory.

Open access ebooks

Open access (OA) publishing is proving to be a very successful publishing model in the scholarly scientific field: today, more than 10,000 journals are accessible in OA, according to the Directory of Open Access Journals. Building on this positive experience, ebooks are also becoming available under this free-access scheme.

The economic model is largely inspired by the well-established practice in scientific article publishing, and several publishers have expanded their catalogues to include OA books. Under an appropriate licensing system, the authors retain copyright but the content can be freely shared and reused with appropriate author credit.

OA ebooks, in addition to widening the diffusion of knowledge, avoid the production and distribution costs of paper books, which result in high prices for titles that are often acquired only by libraries. OA ebooks are also an ideal outlet for the publication of conference proceedings, maximising their visibility and bringing great benefits for library budgets.

Five key works written or edited by CERN authors are already profiting from the impact that comes from their free dissemination.

Three of them are already accessible online:

• Melting Hadrons, Boiling Quarks – From Hagedorn Temperature to Ultra-Relativistic Heavy-Ion Collisions at CERN: With a Tribute to Rolf Hagedorn by Johann Rafelski (ed), published by Springer (link.springer.com/book/10.1007%2F978-3-319-17545-4).

• 60 Years of CERN Experiments and Discoveries by Herwig Schopper and Luigi Di Lella (eds), published by World Scientific (dx.doi.org/10.1142/9441#t=toc).

• The High Luminosity Large Hadron Collider: The New Machine for Illuminating the Mysteries of Universe by Lucio Rossi and Oliver Brüning (eds), published by World Scientific (dx.doi.org/10.1142/9581#t=toc).

Two further OA titles will appear in 2016:

• The Standard Theory of Particle Physics: 60 Years of CERN by Luciano Maiani and Luigi Rolandi (eds), published by World Scientific.

• Technology Meets Research: 60 Years of Technological Achievements at CERN, Illustrated with Selected Highlights by Chris Fabjan, Thomas Taylor and Horst Wenninger (eds), published by World Scientific.

Members of the organising committee of a conference, looking for an OA outlet for the proceedings, and authors who are planning to publish a book, are invited to contact the CERN Library, so that the staff there can help them to negotiate conditions with potential publishers.

The LHC is restarting

The LHC, the last of all of CERN’s accelerators to do so, is resuming operation with beam as this issue goes to press. The year-end technical stop (YETS) started on 14 December 2015, and during the 11 weeks of scheduled maintenance several interventions took place across the accelerators and beamlines. These included maintenance of the cryogenic system at several points; the replacement of 18 magnets in the Super Proton Synchrotron (SPS); an extensive campaign to identify and remove thousands of obsolete cables; the replacement of the LHC beam absorbers for injection (TDIs), which absorb the SPS beam if a problem occurs and thus provide vital protection for the LHC; and the dismantling and reinstallation of 12 LHC collimators after modification of the vacuum chambers that had restricted their movement.

The YETS also gave the experiments the opportunity to carry out repairs and maintenance work in their detectors. In particular, this included fixing the ATLAS vacuum-chamber bellow and cleaning the cold box at CMS, which had caused problems for the experiment’s magnet during 2015.

Bringing beams back into the machine after a technical stop of a few weeks is no trivial matter. The Electrical Quality Assurance (ELQA) team needs to test the electrical circuits of the superconducting magnets, certifying their readiness for operation. After that, the powering tests can start – about 7000 tests in 12 days, a critical task for all of the teams involved, which relies on the availability of all of the sectors. About four weeks after the start of commissioning, the LHC will be ready to receive beams and keep them circulating for several hours in the machine (stable beams).

The goal of this second part of Run 2 is to reach 2700 bunches per beam at 6.5 TeV with the nominal 25 ns bunch spacing. In 2015, the machine reached a record of 2244 bunches in each beam just before the start of the YETS. In 2016, the operators will focus on ensuring maximum availability of the machine. To this end, beam-pipe scrubbing will be performed several times to keep electron-cloud effects under control. Thanks to the experience acquired in 2015, the operators will also be able to improve the injection process and to perform ramping and squeezing at the same time, thereby reducing the turnaround time between successive fills.

In addition to several weeks of steady standard 13 TeV operation with 2700 bunches per beam and β* = 40 cm, the accelerator schedule for 2016 includes a high-β* (~2.5 km) running period for TOTEM/ALFA, dedicated to the measurement of elastic proton–proton scattering in the Coulomb–nuclear interference region. The schedule also includes one month of heavy-ion running: although various configurations (Pb–Pb and p–Pb) are still under consideration, the period – November – has already been decided. As usual, the heavy-ion run will conclude the 2016 operation of the LHC, while the extended year-end technical stop (EYETS) will start in December and last about five months, until April 2017. Several upgrades by the experiments are already planned for the EYETS, including installation of the new pixel system at CMS.

The goal for the second part of Run 2 is to reach a luminosity of 1.3 × 10^34 cm–2 s–1, which, with about 2700 bunches and 25 ns spacing, is estimated to produce a pile-up of 40 events per bunch crossing. This should give an integrated luminosity of about 25 fb–1 in 2016, which should ensure a total of 100 fb–1 for Run 2 – planned to end in 2018.
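The quoted pile-up can be checked with a simple estimate (a back-of-the-envelope sketch; the inelastic cross-section of roughly 80 mb and the LHC revolution frequency of about 11 245 Hz are standard values assumed here, not figures given in the text). The average number of interactions per bunch crossing is the inelastic event rate divided by the bunch-crossing rate:

\[
\mu \;\approx\; \frac{\mathcal{L}\,\sigma_{\mathrm{inel}}}{n_{b}\,f_{\mathrm{rev}}}
\;\approx\; \frac{1.3\times10^{34}\ \mathrm{cm^{-2}\,s^{-1}} \times 8\times10^{-26}\ \mathrm{cm^{2}}}{2700 \times 11\,245\ \mathrm{s^{-1}}}
\;\approx\; 34,
\]

of the same order as the quoted value of about 40. Similarly, an integrated luminosity of 25 fb–1 corresponds to roughly 2 × 10^6 seconds – some three weeks – of running at this peak luminosity, spread over the year.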
