
Learning Scientific Programming With Python

By Christian Hill

Cambridge University Press

Science can hardly be done nowadays without computers to produce, analyse, process and visualise large experimental data sets. Scientists are increasingly expected to write their own programs in a language such as Python, which has become very popular among researchers across scientific domains. It is a high-level language that is relatively easy to learn, rich in functionality and fairly compact, and it is complemented by many additional modules, in particular scientific and visualisation tools covering a vast range of numerical computation, which make it very handy for scientists and engineers.

In this book, the author covers basic programming concepts such as numbers, variables, strings, lists, basic data structures, control flow and functions. He also deals with advanced concepts and idioms of the Python language and of the tools presented, enabling readers to gain proficiency quickly. The most advanced topics and functionalities are clearly marked, so they can be skipped on a first reading.

While discussing Python structures, the author explains how they differ from those of other languages, in particular C, which is useful for readers migrating from those languages to Python. The book focuses on version 3 of Python but, where needed, points out the differences from version 2, which is still widely used in the scientific community.

Once the basic concepts of the language are in place, the book moves on to the NumPy, SciPy and Matplotlib libraries for numerical programming and data visualisation. These modules are open source, commonly used by scientists and easy to obtain and install. The functionality of each is introduced thoroughly, with plenty of examples, a clear advantage over the terse reference documentation available on the web. NumPy is the de facto standard for general scientific programming and handles data structures such as multidimensional arrays very efficiently, while the SciPy library complements NumPy with more specific functionality for scientific computing, including the evaluation of special functions frequently used in science and engineering, minimisation, integration, interpolation and equation solving.
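
To give a flavour of the workflow these chapters teach, here is a minimal sketch of our own (not an example taken from the book) that combines a NumPy array with two SciPy routines, adaptive integration and root finding:

```python
import numpy as np
from scipy import integrate, optimize

# Illustrative sketch only (not from the book): a damped oscillation,
# sampled on a NumPy array, then integrated and root-found with SciPy.

def f(t):
    return np.exp(-t / 3.0) * np.sin(t)

x = np.linspace(0.0, 2.0 * np.pi, 200)   # evenly spaced sample points
y = f(x)                                  # vectorised evaluation, no explicit loop

area, err = integrate.quad(f, 0.0, 2.0 * np.pi)   # adaptive quadrature
root = optimize.brentq(f, 2.0, 4.0)               # zero crossing near pi

print(f"integral = {area:.4f} +/- {err:.1e}, root = {root:.4f}")
```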

Plotting data is essential for any scientific work. This is achieved with the Matplotlib module, probably the most popular plotting library for Python. Many kinds of graphics are nicely introduced in the book, from the most basic, such as 1D plots, to fairly complex 3D and contour plots. The book also discusses the use of IPython notebooks to build rich-media documents, interleaving text and formulas with code and images into shareable documents for scientific analysis.
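
As an illustration of the kind of basic plot the book starts with (again our own sketch, not one of the book's examples), a simple 1D Matplotlib figure takes only a few lines:

```python
import numpy as np
import matplotlib.pyplot as plt

# Illustrative sketch only: a basic 1D plot of the kind introduced early in the book.
x = np.linspace(0.0, 10.0, 500)
plt.plot(x, np.exp(-x / 3.0) * np.sin(2.0 * x), label=r"$e^{-x/3}\,\sin 2x$")
plt.xlabel("x")
plt.ylabel("y")
plt.legend()
plt.savefig("damped_oscillation.png", dpi=150)   # or plt.show() for interactive use
```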

The book has many relevant examples, developed from both science and engineering points of view. Each chapter concludes with a series of well-chosen exercises, whose complete step-by-step solutions are given at the end of the volume. In addition, each section includes a nice collection of problems without solutions.

The book is a very complete reference to the major features of the Python language and the most common scientific libraries. It is written in a clear, precise and didactic style that will appeal to those who, even if already familiar with the Python programming language, would like to develop their proficiency in numerical and scientific programming with the standard Python tools.

Reviews of Accelerator Science and Technology: Volume 7

By Alexander W Chao and Weiren Chou (eds)

World Scientific

Also available at the CERN bookshop

Volume 7 of Reviews of Accelerator Science and Technology is dedicated to colliders and provides an in-depth panorama of the different technologies developed since the construction in the 1960s of the first three: AdA in Italy, CBX in the US, and VEP-1 in the then Soviet Union.

Colliders have been crucial for proving the validity of the Standard Model, and they still define the energy frontier in particle physics because at present no machine can exceed the LHC's centre-of-mass energy of 13 TeV.

The book opens with an article by Burton Richter, a pioneer of high-energy colliders, who shares his viewpoint about their future. This is followed by contributions from leading experts worldwide, who discuss the characteristics, advantages and limits of machines that collide different types of particles. Proton–proton and proton–antiproton colliders are reviewed by Walter Scandale, electron–positron circular colliders by Katsunobu Oide, ion colliders by Wolfram Fischer and John M Jowett, and electron–proton and electron–ion colliders by Ilan Ben-Zvi and Vadim Ptitsyn. Akira Yamamoto and Kaoru Yokoya then discuss linear colliders, Robert B Palmer muon colliders, and Jeffrey Gronberg photon colliders.

A section of the book is dedicated to the accelerator physics that forms the basis of the design of these machines. In particular, Frank Zimmermann provides a general overview of collider-beam physics, while Eugene Levichev goes into more detail, discussing the technologies for circular colliders.

The volume concludes with an article by Kwang-Je Kim, Robert J Budnitz and Herman Winick on the life of Andy Sessler, an accelerator physicist regarded by his colleagues as an inspiring figure.

Comprehensive and containing contributions by high-profile experts, this book will be a good resource for students, physicists and engineers wishing to learn about colliders and accelerator physics.

Relativistic Quantum Mechanics: An Introduction to Relativistic Quantum Fields

By Luciano Maiani and Omar Benhar

CRC Press

Quantum field theory (QFT) is the mathematical framework that forms the basis of our current understanding of the fundamental laws of nature. Its present formulation is the achievement of almost a century of theoretical efforts, first initiated by the necessity of reconciling quantum mechanics with special relativity. Its success is exemplified by the Standard Model, a specific QFT that spectacularly accounts for all of the observations performed so far in particle-physics experiments over many orders of magnitude in energy. Learning and mastering QFT is therefore essential for anyone who wants to understand how nature works on the smallest scales.

This book gives a concise and self-contained introduction to the basic concepts of QFT. As mentioned in the preface, it is mainly addressed to students with different interests who are approaching the subject for the first time, and is based on a series of lecture courses taught by the authors over the course of a decade at the University of Rome La Sapienza. Topics are selected and presented following their historical development, and constant reference is made to the experiments that marked key advances, and sometimes breakthroughs, on the theoretical front. Some important subjects have been left out, but these can be taken up later by the reader for more in-depth study.

The book is conceived as the first of a series comprising two other texts on the more advanced topics of gauge theories and electroweak interactions (written in collaboration with the late Nicola Cabibbo). The authors do not indulge in technical discussions of the more formal aspects, but try to derive the main physics results with the minimum of mathematical machinery. Although some concepts, such as the scattering matrix and its definition through asymptotic states, would have benefitted from a more systematic discussion, the goal of giving an essential introduction to QFT and providing the reader with a solid foundation is achieved overall. The authors' experience, both as proficient teachers of the subject and as leading researchers in the field, is crucial to striking a good balance in establishing the QFT framework.

The first part of the book (chapters 1–3) is dedicated to a short review of classical dynamics in the relativistic limit. Starting from the principles of relativity and minimal action, the motion of point-like particles and the evolution of fields are described in their Lagrangian and Hamiltonian formulations. Special emphasis is given to symmetries and conservation laws. Quantisation is introduced in chapter 4 through the example of the scalar field by replacing the Poisson brackets with commutators of operators. Equal-time commutation rules are then used to define creation and destruction operators and the Fock space. Chapter 5 deals with the quantisation of the electromagnetic field. The approach is that of canonical formalism in the Coulomb gauge, but no mention is made of the complication due to the presence of constraints on fields.

Chapters 6 and 7 are dedicated to the Dirac equation and the quantisation of the Dirac field. Besides introducing the usual machinery of spinors and gamma matrices, they include a detailed analysis of the relativistic hydrogen atom as well as concise though important discussions about Wigner’s method of induced representations as applied to the Lorentz group, micro-causality and the relation between spin and statistics. The propagation of free fields is analysed in chapter 8, while the three chapters that follow introduce the reader to relativistic perturbation theory. Chapter 12 discusses discrete symmetries (C, P and T) in QFT, gives a proof of the CPT theorem and illustrates its consequences.

The last part of the book is dedicated to applications of QFT formalism to phenomenology. The authors give a detailed account of QED in chapter 14 by discussing a variety of physical processes. The reader is here introduced to the method of Feynman diagrams through explicit examples following a pragmatic approach. The following chapter deals with Fermi’s theory of weak interactions, again making use of several explicit examples of physical processes. Finally, chapters 13 and 16 are devoted to the theory and phenomenology of neutrinos. In particular, the last section discusses neutrino oscillations (both in a vacuum and through matter) and presents a thorough analysis of current experimental results. There is also a useful set of exercises at the end of each chapter.

Both the pragmatic approach and choice of topics make this book particularly suited for readers who want a concise and self-contained introduction to QFT and its physical consequences. Students will find it a valuable companion in their journey into the subject, and expert practitioners will enjoy the various advanced arguments that are scattered throughout the chapters and not commonly found in other textbooks.

All systems go for the High-Luminosity LHC

On 19 September, the European Investment Bank (EIB) signed a 250 million Swiss-franc (€230 million) credit facility with CERN to finance the High-Luminosity Large Hadron Collider (HL-LHC) project. The finance contract follows recent approval from CERN Council, and will allow CERN to carry out the work necessary for the HL-LHC within a constant CERN budget.

The HL-LHC is expected to produce data from 2026 onwards, with the overall goal of increasing the integrated luminosity recorded by the LHC by a factor of 10. Following approval of the HL-LHC as a priority project in the European Strategy Report for Particle Physics, this major upgrade is now gathering speed together with companion upgrade programmes of the LHC injectors and detectors. Engineers are currently putting the finishing touches to a full working model of an HL-LHC quadrupole, which will eventually be installed in the insertion regions close to the ATLAS and CMS experiments in order to focus the HL-LHC beam. Built in partnership with Fermilab, the magnets are based on an innovative niobium-tin superconductor (Nb3Sn) that can produce higher magnetic fields than the niobium-titanium magnets used in the LHC.

The contract signed between CERN and EIB falls under the InnovFin Large Projects facility, which is part of the new generation of financial instruments developed and supported under the European Union’s Horizon 2020 scheme. It’s the second EIB financing for CERN, following a loan of €300 million in 2002 for the LHC. “This loan under Horizon 2020, the EU’s research-funding programme, will help keep CERN and Europe at the forefront of particle-physics research,” says the European commissioner for research, science and innovation, Carlos Moedas. “It’s an example of how EU funding helps extend frontiers of human knowledge.”

First physics at HIE-ISOLDE begins

In early September, the first physics experiment using radioactive beams from the newly upgraded ISOLDE facility got under way: a study of tin, which is a special element because it has two doubly magic isotopes. ISOLDE is CERN’s long-running nuclear research facility, which for the past 50 years has allowed many different studies of the properties of atomic nuclei. The upgrade means the machine can now reach an energy of 5.5 MeV per nucleon, making ISOLDE the only Isotope Separator On-Line (ISOL) facility in the world capable of investigating heavy and super-heavy radioactive nuclei.

HIE-ISOLDE (High Intensity and Energy ISOLDE) is a major upgrade of the ISOLDE facility that will increase the energy, intensity and quality of the beams delivered to scientific users. “Our success is the result of eight years of development and manufacturing,” explains HIE-ISOLDE project-leader Yacine Kadi. “The community around ISOLDE has grown a lot recently, as more scientists are attracted by the possibilities that new higher energies bring. It’s an energy domain that’s not explored much, since no other facility in the world can deliver pure beams at these energies.”

The first run of the facility took place in October last year, but because the machine only had one cryomodule, it operated at an energy of 4.3 MeV per nucleon. Now, with the second cryomodule in place, the machine is capable of reaching up to 5.5 MeV per nucleon and therefore can investigate the structure of heavier isotopes. The rest of 2016 will be a busy time for HIE-ISOLDE, with scheduled experiments studying nuclei over a wide range of mass numbers – from 9Li to 142Xe. When two additional cryomodules are installed in 2017 and 2018, the facility will operate at 10 MeV per nucleon and be capable of investigating nuclei of all masses.

HIE-ISOLDE will run until mid-November, and all but one of the seven different experiments planned during this time will use the Miniball detection station.

Three-year extension for open-access initiative

In September, following three years of successful operation and growth, CERN announced the continuation of the global SCOAP3 open-access initiative for at least three more years. SCOAP3 (Sponsoring Consortium for Open Access Publishing in Particle Physics) is a partnership of more than 3000 libraries, funding agencies and research organisations from 44 countries that has made tens of thousands of high-energy physics articles publicly available at no cost to individual authors. Inspired by the collaborative model of the LHC, SCOAP3 is hosted at CERN under the oversight of international governance. It is primarily funded through the redirection of budgets previously used by libraries to purchase journal subscriptions.

Since 2014, in co-operation with 11 leading scientific publishers and learned societies, SCOAP3 has supported the transition to open access of many long-standing titles in the community. During this time, 20,000 scientists from 100 countries have benefited from the opportunity to publish more than 13,000 open-access articles free of charge.

With the strong consensus of the growing SCOAP3 partnership, and supported by increasing policy requirements for, and a global commitment to, open access in its Member States, CERN has now signed contracts with 10 scientific publishers and learned societies for a three-year extension of the initiative. “With its success, SCOAP3 has shown that its model of global co-operation is sustainable, in the same broad and participative way we build and operate large collaborations in particle physics,” says CERN’s director for research and computing, Eckhard Elsen.

LUX-ZEPLIN passes approval milestone

A next-generation dark-matter detector in the US called LUX-ZEPLIN (LZ), which will be at least 100 times more sensitive than its predecessor, is on schedule to begin its deep-underground hunt for WIMPs in 2020. In August, LZ received a US Department of Energy approval (“Critical Decision 2 and 3b”) concerning the project’s overall scope, cost and schedule. The latest approval step sets in motion the building of major components and the preparation of its nearly mile-deep cavern at the Sanford Underground Research Facility (SURF) in Lead, South Dakota.

The experiment, which is supported by a collaboration of more than 30 institutions and about 200 scientists worldwide, is designed to search for dark-matter signals from within a chamber filled with 10 tonnes of purified liquid xenon. LZ is named for the merger of two dark-matter-detection experiments: the Large Underground Xenon experiment (LUX) and the UK-based ZonEd Proportional scintillation in LIquid Noble gases (ZEPLIN) experiment. LUX, a smaller liquid-xenon-based underground experiment at SURF that earlier this year ruled out a significant region of WIMP parameter space, will be dismantled to make way for the new project.

“Nobody looking for dark-matter interactions with matter has so far convincingly seen anything, anywhere, which makes LZ more important than ever,” says LZ project-director Murdock Gilchriese of the University of California at Berkeley.

Large beams take LHC physics forward

Usually, the motto of the LHC operations team is “maximum luminosity”. For a few days per year, however, this motto is put aside to run the machine at very low luminosity. The aim is to provide data for the broad physics programme of the LHC’s “forward physics” experiments – TOTEM and ATLAS/ALFA. By running the LHC with larger beam sizes at the interaction points, corresponding to a lower luminosity, the dedicated TOTEM and ATLAS/ALFA detectors can probe the proton–proton elastic-scattering regime at small angles.

In elastic scattering, two protons survive their encounter intact and only change direction by exchanging momentum. TOTEM, which is located in the straight sections of the LHC on either side of CMS at Point 5, and ATLAS/ALFA at Point 1, are not able to study this process during normal operation. To facilitate the special run, which took place in the third week of September, the LHC team developed a dedicated machine configuration that delivers exceptionally large beams at the interaction points (IP) of ATLAS and CMS. The focusing at the IP is normally parameterised by β*: the higher the value of β*, the bigger the beams and, importantly, the lower the angular divergence. For this year’s high-β* run, its value had to be raised to 2.5 km, compared with around 1 km during LHC Run 1 at an energy of 8 TeV, because the higher energy of LHC Run 2 causes the two incoming protons to scatter at smaller angles. The measurements were carried out with very low-intensity beams, allowing TOTEM and ALFA to bring their “Roman Pot” detectors remarkably close to the beam.
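
For readers who want the formula behind this statement: the beam size at the interaction point scales as the square root of β*, while the angular divergence scales as its inverse square root, σ* = √(εβ*) and θ* = √(ε/β*), where ε is the geometric emittance. The short sketch below uses assumed, nominal-like numbers (not values quoted in this article) purely to illustrate the scaling:

```python
import math

# Illustrative, assumed numbers only (nominal-like LHC values, not figures
# quoted in this article). For protons the geometric emittance is roughly
# the normalised emittance divided by the Lorentz factor.
eps_n = 3.75e-6                 # assumed normalised emittance [m rad]
gamma = 6500.0 / 0.938272       # Lorentz factor of a 6.5 TeV proton
eps = eps_n / gamma             # geometric emittance [m rad]

for beta_star in (1000.0, 2500.0):           # beta* in metres (1 km and 2.5 km)
    size = math.sqrt(eps * beta_star)        # beam size at the interaction point
    div = math.sqrt(eps / beta_star)         # angular beam divergence
    print(f"beta* = {beta_star / 1000:.1f} km: "
          f"size = {size * 1e3:.2f} mm, divergence = {div * 1e6:.2f} microrad")
```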

In addition to the precise determination of the total proton–proton interaction probability at 13 TeV, TOTEM will focus on a detailed study of elastic scattering in the low-transferred momentum regime. The experiment will investigate how Coulomb scattering interferes with the nuclear component of the elastic interaction, which can shed light on the internal structure of the protons. TOTEM will also search for special states formed by three gluons.

ATLAS/ALFA also intends to carry out a precision measurement of the proton–proton total cross-section, and will use this to determine the absolute LHC luminosity at Point 1. For ATLAS/ALFA, the interesting part of the spectrum is at low values of transferred momentum, where Coulomb scattering is dominant. Since the Coulomb scattering cross-section is theoretically known, its measurement provides an independent estimate of the absolute luminosity of the LHC. This would provide an important cross-check of the luminosity calibration measurements performed via van der Meer scans during dedicated LHC fills.

ATLAS homes in on Higgs-quark couplings

Figure: boosted-decision-tree output.

The Higgs boson has been observed via its decays to photons, tau leptons, and Z and W bosons, which has allowed ATLAS to glean much information about the particle’s properties. So far, these properties agree with the predictions of the Standard Model (SM). However, there are several aspects of the Higgs boson that are still largely unexplored, most notably the coupling of the Higgs boson to quarks. The two heaviest quarks, the bottom and top, are particularly interesting because they have the largest couplings to the Higgs boson. If these couplings differ from the SM predictions, it could provide a first hint of new physics.

Observing the coupling of the Higgs boson to these two quark flavours is challenging, however. Although the Higgs decays to a pair of bottom quarks around 58% of the time, this decay has not yet been observed, because such decays manifest themselves as jets in the detector and this signature is overwhelmed by SM multi-jet production. As a result, physicists search for this decay by looking for production of the Higgs in association with a vector boson (W or Z) or a top-quark pair. The additional particles have a more distinctive decay signature, but this comes at the price of a much lower signal-production rate.

As for the top quark, the only way to directly measure its coupling to the Higgs at the LHC is to study events in which a Higgs is produced in association with a top-quark pair. As with bottom quarks, this process has not yet been observed. Indeed, even with these more distinctive decays, the background processes that mimic the signals are large, complex and difficult to model. In both the top and bottom production channels, the backgrounds are controlled using advanced machine-learning techniques to separate signal events from background (see figure).
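
The boosted-decision-tree output shown in the figure is the kind of discriminant such techniques produce. Purely as a toy illustration of the general idea (simulated Gaussian "events", in no way the ATLAS analysis), a boosted classifier can be trained and its per-event output evaluated as follows, here using scikit-learn's GradientBoostingClassifier:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# Toy illustration only: two Gaussian "event" populations stand in for
# simulated signal and background; this is not the ATLAS analysis.
rng = np.random.default_rng(seed=1)
signal = rng.normal(loc=+0.5, scale=1.0, size=(5000, 4))      # 4 input variables
background = rng.normal(loc=-0.5, scale=1.0, size=(5000, 4))

X = np.vstack([signal, background])
y = np.concatenate([np.ones(5000), np.zeros(5000)])           # 1 = signal, 0 = background

bdt = GradientBoostingClassifier(n_estimators=200, max_depth=3)
bdt.fit(X, y)

# The per-event classifier output is the discriminant one would histogram,
# analogous to the boosted-decision-tree output distribution in the figure.
scores = bdt.predict_proba(X)[:, 1]
print("mean score: signal =", scores[y == 1].mean(), " background =", scores[y == 0].mean())
```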

Both searches have now been carried out by ATLAS with data from LHC Run 2, revealing a sensitivity to the Higgs-boson couplings to top and bottom quarks that is competitive with the Run 1 searches. However, they are still not precise enough to determine whether there are any deviations from SM behaviour. With further improvements to the analyses, better understanding of the backgrounds and the unprecedented performance of the LHC, we should finally observe both of these processes at a high statistical significance later during Run 2. This will tell us whether the Higgs boson is indeed responsible for the masses of the quarks, as predicted in the SM, or whether there is new physics beyond it.

CMS investigates the width of the top quark

Twenty years after its discovery at the Tevatron collider at Fermilab, interest in studying the top quark at the LHC is higher than ever. This was illustrated by the plethora of new results presented by the CMS collaboration at the ICHEP conference in August and at TOP 2016, which took place in the Czech Republic from 19 to 23 September.

The top quark is the only fermion heavier than the W boson, and hence the only one whose weak decays do not involve a virtual particle. This leads to an unusually short lifetime (5 × 10–25 s) for a weak-mediated process, and provides a unique opportunity to probe the properties and couplings of a bare quark. In particular, the width of the top quark (which, as for all quantum resonances, is inversely proportional to its lifetime) may easily be affected by new-physics processes.
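
The relation in question is Γ = ħ/τ, so a lifetime of about 5 × 10–25 s corresponds to a width of order 1 GeV; a quick back-of-the-envelope check (our own arithmetic, not a number taken from the CMS publications):

```python
# Width-lifetime relation for an unstable particle: Gamma = hbar / tau.
# Back-of-the-envelope check only, not a number from the CMS papers.
HBAR_GEV_S = 6.582e-25      # reduced Planck constant in GeV s
tau = 5.0e-25               # approximate top-quark lifetime in seconds

gamma_t = HBAR_GEV_S / tau  # width in GeV
print(f"Gamma_t ~ {gamma_t:.2f} GeV")   # about 1.3 GeV, close to the SM expectation
```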

In a series of recent publications, the CMS collaboration has explored the width of the top quark in a model-independent way and searched for contributions from extremely rare processes mediated by so-called flavour-changing neutral currents (FCNCs).

The top-quark width is too narrow compared with the experimental resolution of the CMS detector to allow a precision measurement directly from the shape of the top’s invariant-mass distribution. CMS therefore considers alternative observables that provide complementary information on the top’s mass and width.

One of those observables is the invariant-mass distribution of the lepton and b-jet systems produced in top-quark pair decays, which has allowed the collaboration to place new bounds on a Standard Model-like top-quark width of 0.6 ≤ Γt ≤ 2.4 GeV, based on the first 13 fb–1 of data collected in 2016 at a collision energy of 13 TeV. In parallel, based on the LHC Run 1 data set recorded at lower energies, a set of dedicated searches for FCNC processes involving top quarks has been carried out. These analyses focus on the couplings of the top quark to the other up-type quarks (up, charm) and different neutral bosons: the gluon, the photon, the Z boson and the Higgs boson.

Another approach adopted by CMS was to search for the rare production of a single top quark in association with a photon and a Z boson with the 8 TeV data set. These channels exploit the large up-quark density in the proton, and to a lesser extent the charm-quark density, thereby compensating for the smallness of the FCNC couplings. Finally, events with the conventional signature of t-channel production (resulting in a single top-quark decay and a light-quark jet) were used to set constraints on FCNC and other anomalous couplings by simultaneously considering their effects on the production and decay of the top quark, using both the 7 and 8 TeV data sets.

Although no deviation from the background-only expectations has been observed in any of the analyses so far, the CMS collaboration is fast approaching sensitivity to the FCNC signals expected in some models with just the Run 1 data (see figure). All the analyses are statistically limited and will therefore benefit from more data, which will allow them to start effectively probing beyond-the-Standard-Model effects in the top-quark sector.
