
The CERN effect

Interest in CERN has evolved over the years. At its inception, the Organization’s founding member states clearly saw the new institution’s potential as a centre of excellence for basic research, a driver of innovation, a provider of first-class education and a catalyst for peace. After several decades of business as usual, CERN is again on the radar of its member-state governments. This is spurred on partly by the public interest that has made CERN something of a household name. But whether in the public spotlight or not, it is incumbent on CERN to spell out to all of its stakeholders why it represents such good value for money today, just as it did 60 years ago. Even though the reasons may be familiar to those working at CERN, they are not always so clear to government officials and policy makers, and it’s worth setting them out in detail.

First and foremost, CERN has made major contributions to how we understand the world that we live in. The discovery and detailed study of the weak vector bosons in the 1980s and 1990s, and the recent discovery of the Higgs boson, messenger of the Brout–Englert–Higgs mechanism, have contributed much to our understanding of nature at the level of the fundamental particles and their interactions, now rightfully codified in the Standard Model of particle physics. This on its own is a major cultural achievement, and it has taught us much about how we have arrived at this point in history, right from the moment it all began, some 13.8 billion years ago. Appreciation of this cultural contribution has never been higher than today. More than 100,000 people visit CERN every year, including hundreds of journalists reaching millions of people. None leave CERN unimpressed, and all are, without a doubt, culturally enriched by their experience.

Educating and innovating

CERN’s second major area of impact is education: the laboratory has trained many generations of top-level physicists, engineers and technicians. Some have remained at CERN, while others have gone on to pursue careers in basic research at universities and institutes elsewhere, thereby contributing to top-level education and multiplying the effect of their own experience at CERN. Many more, however, have made their way into industry, fulfilling an important mission for CERN – that of providing skilled people to advance the economies of our member states and collaborating nations. More than 500 doctoral degrees are awarded annually on the basis of work carried out at CERN experiments and accelerators. In 2015, more than 400 doctoral, technical and administrative students were welcomed by CERN, usually staying for between several months and a year. The CERN summer-student and teacher programmes, which provide short stints of intensive education, also welcome hundreds of students and high-school teachers every year.

A third important contribution of CERN is the innovation that results from research requiring technology at levels and in areas where no one has gone before. The best-known example of CERN technology is the World Wide Web, which has profoundly changed the way that our society works worldwide. But the web is just the tip of the iceberg. Advances in fields such as magnet technology, cryogenics, electronics, detector technology and statistical methods have also made their way into society in ways that are equally impactful, although less visible. While the societal benefits of techniques such as advanced photon and lepton detection may not seem immediately relevant beyond the realms of research, the impact they have had in medical imaging, for instance, is profound.

Often not very visible, but no less effective in contributing to our prosperity and well-being, developments such as these are a vital part of the research cycle. CERN is increasingly taking a proactive approach towards transferring its innovation, knowledge and skills to those who can make these count for society as a whole, and this is generally well appreciated. Recent initiatives include public–private partnerships such as OpenLab, Medipix and IdeaSquare, which provide low-entry-threshold mechanisms for companies to engage with CERN technology. In return, CERN benefits by stimulating the kind of industrial innovation that enables next-generation accelerators and detectors.

The recent Viewpoint by CERN Director-General, Fabiola Gianotti (CERN Courier March 2016 p5) gives a superb outline of the opportunities and challenges for particle physics during the coming years. Clearly it will require great dexterity to juggle the continuation of a state-of-the-art research programme at the LHC and a diverse range of other facilities, with greater engagement with important activities beyond CERN, such as the US neutrino programme, while at the same time preparing for future accelerators and detectors. This will stretch CERN’s capabilities to the limit. But it is precisely this challenge that will motivate the Organization to do better and innovate in all areas, with inevitable benefits for society. Scientific culture and societal impacts advancing hand-in-hand through cutting-edge research: it is this that makes CERN worthy of the support it receives from governments worldwide.

Quantum Confined Laser Devices: Optical Gain and Recombination in Semiconductors

By P Blood
Oxford University Press


This book provides a comprehensive discussion of quantum-confined semiconductor lasers, based on the author’s long and extensive experience in the field. In a pedagogical fashion, it takes the reader from the physics principles and processes exploited by lasers (giving a consistent treatment of both quantum-dot and quantum-well structures) to the operation of the most advanced devices.

The text begins with a short historical account of the birth and development of lasers in general (initially called “masers” because they were restricted to microwaves), and of the diode laser in particular. Thereafter, the book is organised into five sections. The first, dedicated to the diode laser, provides the framework for the whole volume. The second section describes the fundamental processes involved in the physics of lasers, a subject that is then treated in depth in the third part. The fourth section discusses the operation of laser devices and their characteristics (light–current curves, threshold current, efficiency, etc). Finally, the author tackles the important topics of recombination and optical gain, describing ways in which they can be measured on device structures and compared with theoretical predictions.

Full of detailed explanations, illustrations from model calculations and experimental observations, as well as a comprehensive set of exercises, the book is recommended to final-year undergraduate and PhD students, as well as researchers who are new to the field and need a complete overview of the subject.

Numerical Relativity: 100 Years of General Relativity – Vol. 1

By M Shibata
World Scientific


Numerical relativity is the field of theoretical physics in which Einstein’s equations and the associated matter field equations are solved by computer, because they are nonlinear partial-differential equations that cannot be solved analytically for general problems.
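As a reminder of what is actually being solved, the equations in question are the Einstein field equations coupled to local energy–momentum conservation for the matter, written here in geometrised units; the evolution formulations discussed in the book recast this system into forms suited to numerical integration.

```latex
% Einstein's field equations in geometrised units (G = c = 1): ten coupled,
% nonlinear partial-differential equations for the metric g_{\mu\nu},
% supplemented by local energy-momentum conservation for the matter fields.
G_{\mu\nu} \equiv R_{\mu\nu} - \tfrac{1}{2}R\,g_{\mu\nu} = 8\pi T_{\mu\nu},
\qquad
\nabla^{\mu} T_{\mu\nu} = 0 .
```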

The purpose of this volume is to describe the techniques of numerical relativity and to report the knowledge obtained from the numerical simulations performed so far. The first chapter offers an overview of the basics of general relativity, gravitational waves and relativistic astrophysics, which are the background of numerical relativity. Then, in the first part of the book (chapters 2 to 7), the author discusses the most used formulations and numerical methods, while in the second part (chapters 8 to 11), he reports on representative numerical-relativity simulations and the knowledge derived from them.

Particular importance is given to the results obtained by applying these simulation techniques to the study of black-hole formation, binary compact objects, and the merger of binary neutron stars and black holes. New frontiers in numerical relativity are also touched on in the last two chapters.

Combinatorial Identities for Stirling Numbers: The Unpublished Notes of H W Gould

By J Quaintance and H W Gould
World Scientific


Written by Henry Gould’s assistant Jocelyn Quaintance, this book is the result of the deep work and personal relationship between the great mathematician and the author. The two met when Quaintance, a recent PhD graduate looking for a research career and for an advisor, began collaborating with Gould, who shared his manuscripts: several handwritten volumes of combinatorial identities. Quaintance offered to edit a text collecting together all of that material, which led to the publication of this book.

The first eight chapters introduce readers to the special techniques that Gould used in proving his binomial identities. This first part is easily accessible to people who have taken basic courses in calculus and discrete mathematics. The second half of the book applies the techniques from the first part, and is particularly relevant for mathematics researchers. It focuses on the connection between various classes of Stirling numbers, and between them and Bernoulli numbers.
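To give a flavour of the connections treated in the second half, here is one classical identity of this kind (quoted in common notation, not necessarily Gould’s, with S(n, k) a Stirling number of the second kind and B_n a Bernoulli number under the convention B_1 = −1/2), expressing the Bernoulli numbers in terms of Stirling numbers:

```latex
% Bernoulli numbers from Stirling numbers of the second kind
% (convention B_1 = -1/2); e.g. n = 2 gives -1/2 + 2/3 = 1/6 = B_2.
B_{n} \;=\; \sum_{k=0}^{n} (-1)^{k}\,\frac{k!}{k+1}\,S(n,k) .
```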

Some of the demonstrations presented in the volume represent the only systematic record of Gould’s results. As such, this book is a unique work that could appeal to a wide audience: from graduate students to specialists in enumerative combinatorics, to enthusiasts of Gould’s work.

Advances of Atoms and Molecules in Strong Laser Fields

By Y Liu
World Scientific


The challenge of developing more intense, shorter-pulse lasers has already produced outstanding results and opened up completely new perspectives. In fact, the next generation of very-high-power laser facilities will provide the opportunity to explore ultrarelativistic dynamics and vacuum nonlinearities at unprecedented levels, moving towards the strong-field QED regime. At the same time, during the last few years, attosecond physics has provided a new, intriguing way to visualise both atoms and molecules, and the electromagnetic-field structure of the excitation wave packet itself, because this time domain is comparable with the classical periods of electrons orbiting the nucleus. This research field is so recent that the literature on the subject is not yet adequate, and in this sense the book partially fills the gap. It contains contributions from several Chinese groups, both experimental and theoretical, and reports on recent studies of bound-electron and molecular nonlinearities. The content is organised over eight chapters and spans a broad range of topics within this specialist subject.

Strong-field tunnelling is a possible key to the ionisation of neutrals, and it offers a sophisticated method to image and probe atomic and molecular quantum processes. In fact, the study of direct and rescattered (by the nucleus) electrons in the ionisation process makes it possible to resolve orbitals; in this context, it becomes important to go beyond the strong-field approximation and to evaluate the contribution of the long-range Coulomb field generated by the ion to the electron’s dynamical evolution (chapter 1).

Direct and rescattered electrons can be recorded together as a reference wave and a signal wave, respectively: the resulting interference patterns are the analogue of optical holography, reconstructing the illuminated object. The influence of the Coulomb field can be included either in a numerical solution of the time-dependent Schrödinger equation (TDSE) or in a more intuitive quantum-trajectories Monte Carlo method describing the formation mechanisms of the photoelectron angular distribution in above-threshold ionisation (chapter 2).
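As a concrete, if heavily simplified, illustration of the first of these approaches, the sketch below propagates a one-dimensional, single-active-electron TDSE through a strong few-cycle pulse using the split-operator FFT method. It is not taken from the book: the soft-core potential, grid, pulse parameters and the absence of absorbing boundaries are all simplifying assumptions made purely for illustration.

```python
# Minimal 1D TDSE sketch (atomic units): single active electron in a soft-core
# potential plus a strong few-cycle laser pulse, propagated with the
# split-operator FFT method. All parameters are illustrative assumptions.
import numpy as np

# --- grid and momentum space ---
N, L = 2048, 200.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)

# --- soft-core "atom" and ground state by imaginary-time relaxation ---
V0 = -1.0 / np.sqrt(x**2 + 2.0)
psi = np.exp(-x**2)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)
dt_im = 0.05
for _ in range(2000):
    psi *= np.exp(-0.5 * V0 * dt_im)
    psi = np.fft.ifft(np.exp(-0.5 * k**2 * dt_im) * np.fft.fft(psi))
    psi *= np.exp(-0.5 * V0 * dt_im)
    psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)

# --- real-time propagation in a sin^2-envelope pulse (length gauge) ---
E0, omega, cycles = 0.05, 0.057, 4        # ~800 nm carrier, illustrative field
T = 2 * np.pi / omega * cycles
dt = 0.05
for n in range(int(T / dt)):
    t = n * dt
    E = E0 * np.sin(np.pi * t / T)**2 * np.cos(omega * t)
    V = V0 + E * x                         # dipole coupling to the laser field
    psi *= np.exp(-0.5j * V * dt)
    psi = np.fft.ifft(np.exp(-0.5j * k**2 * dt) * np.fft.fft(psi))
    psi *= np.exp(-0.5j * V * dt)

# crude ionisation indicator: norm remaining near the core (no absorber used)
bound = np.abs(x) < 20.0
print("norm within |x| < 20 a.u.:", float(np.sum(np.abs(psi[bound])**2) * dx))
```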

Dissociation is a basic process of physical chemistry and, before the advent of new ultrafast tools, seemed completely out of scientists’ control, because its typical timescale lies in the femtosecond range. For an easier comparison of theoretical predictions and experimental results for a molecule interacting with a strong ultrashort laser pulse, it is necessary to start with the simplest system – the hydrogen molecular ion H₂⁺. In chapter 3, on the basis of a numerical analysis of the corresponding TDSE, the author suggests a pump–probe strategy to understand dissociation.

The theoretical discussion of double ionisation in a strong laser field is treated in chapters 4 and 5 for different kinds of atoms. In the high-Z case, experiments show a different degree of correlation between the two ejected electrons with respect to the low-Z case, owing to the greater importance of rescattering, as described by a semiclassical model. For the simpler systems H₂ and He, the TDSE is a powerful tool for calculating all of the main features of double ionisation (total and differential cross-sections, recoil-ion momentum spectra, two-electron angular distributions and two-electron interference phenomena).

A promising application of strong-field excitation of atoms and molecules is high-order harmonic generation (HHG), which converts the light frequency from the IR towards the X-ray regime, typically providing an XUV comb of harmonics of comparable intensity, either as a single attosecond pulse or as a train of attosecond pulses. This technique provides a tomographic image of molecular orbitals as an alternative to scanning tunnelling microscopy or angle-resolved photoelectron spectroscopy, as well as a way to study ultrafast electronic structures, electron dynamics and multichannel dynamics (chapters 6 and 7).

Finally, chapter 8 presents an interesting review of the properties of free-electron-laser radiation, showing how nuclear motion in photo-induced reactions can be monitored in real time, how the electronic dynamics can be extracted in molecular co-ordinates, and how site-specific information on the structural dynamics of chemical reactions can be obtained. The experiments are based on EUV pump–probe and optical pump–X-ray probe excitation techniques, and are located at FLASH (Hamburg) and LCLS (SLAC), respectively.

In summary, the book is a useful update for those interested in the specialised field of the interaction of atoms and molecules with femtosecond or sub-femtosecond high-intensity fields. The comprehensive bibliography allows the reader to gain a more exhaustive view of the subject.

The Thermophysical Properties of Metallic Liquids: Fundamentals (volume 1) and Predictive Models (volume 2)

By T Iida and R I L Guthrie
Oxford University Press


Authored by two leading experts in the field, these books provide a complete review of the static and dynamic thermophysical properties of metallic liquids. Divided into two volumes, the first one (Fundamentals) is intended as an introductory text in which the basic topics are covered: the structure of metallic liquids, their thermodynamic properties, density, velocity of sound, surface tension, viscosity, diffusion, and electrical and thermal conductivities. Essential concepts about the methods used to measure these experimental data are also presented.

In the second volume (Predictive Models), the authors explain how to develop reliable models of liquid metals, starting from the essential conditions for a model to be truly predictive. They use a statistical approach to rate the validity of different models. On the basis of this assessment, the authors have compiled tables of predicted values for the thermophysical properties of metallic liquids, which are included in the book. A large amount of experimental data are also given.

The two books are oriented particularly towards students of materials science and engineering, but also towards research scientists and engineers engaged in liquid-metal processing. They collect a large amount of information and are written in a clear and readable way, and are therefore bound to become an essential reference for students and researchers in the field.

Routledge Handbook of Public Communication of Science and Technology (2nd edition)

By M Bucchi and B Trench (eds)
Routledge


With scientists increasingly asked to engage the public and society at large with their research, and to include outreach plans as part of grant applications, it helps to have a guide to the various possibilities for involvement and the research behind them. The second edition of the Routledge Handbook of Public Communication of Science and Technology (henceforth referred to as “the Handbook”) provides a thorough introduction to public engagement – or outreach, as it is sometimes called – through a varied collection of articles on the subject. In particular, it draws attention to the issues underlying the old “deficit model of science communication”, which presupposes a knowledge deficit about science among the general public that must be filled by scientists providing facts, and facts alone. Although primarily targeting science-communication practitioners and academics researching the field, the Handbook can also help scientists to reflect on their outreach efforts and to appreciate the interplay between science and society.

Before plunging into the depths of the book, it is important to remember that the study of science communication is the study of evolving terminology. Historically, an effort was made to determine the “scientific literacy” of society, under the assumption that a society knowledgeable in the facts and methods of science would support research endeavours without much opposition. This approach was made obsolete by the introduction of the “public communication of science and technology” paradigm, which itself was superseded by what is today called “public engagement with science and technology”, or “public engagement” for short. The first chapter, written by the editors, is the best place to familiarise oneself with the various science-communication models, as well as the terms and phrases used throughout the Handbook. That said, those with backgrounds in natural sciences might feel somewhat out of their depth, due to a lack of definitions in the rest of the Handbook for words and phrases used on a daily basis by their social-science counterparts. However, this is largely mitigated by each chapter containing a wealth of notes and references at the end, pointing readers in the direction of further reading.

The chapters themselves are stand-alone articles by experts in their respective topics, many written in engaging, conversational styles. They cover everything from policy and participants, to the handling of “hot-button” issues, to research and assessment methodology. Readers of the Courier may find the chapters on science journalism, on public relations in science, on the role of scientists as public experts and on risk management particularly illuminating.

What the same readers might find missing from the book is a specific treatment of fundamental research: the Handbook focuses on domains of science – such as climate change – that tend to have a direct or immediate impact on society. Scientists from other areas of research might therefore consider shoehorning (perhaps non-existent) societal impact into their science-communication efforts, rather than learning how to adapt the lessons learnt from fields such as climate science to their own work. It is this reviewer’s hope that future editions of the Handbook address the science-communication challenges of more diverse areas of research, proposing ways in which scientists and practitioners can tackle them.

Overall, the Handbook gives readers valuable insight into science-communication research, and merits a place on the library shelves of every university and research institution.

CERN sets accelerator objectives for 2016

The LHC Performance Workshop took place in Chamonix from 25 to 28 January. Attended by representatives from across the accelerator sector, including members of the CERN Machine Advisory Committee, as well as by the LHC experiments, the workshop reviewed the 2015 performance, looked forward to 2016, and covered the status of both the LHC injector upgrade (LIU) and the High Luminosity LHC (HL-LHC) projects. It finished with a session dedicated to the next long shutdown (LS2), planned for 2019–2020.

For the LHC, 2015 was the first year of operation following the major interventions carried out during the long shutdown (LS1) of 2013–2014. At Chamonix, the year’s operations and operational efficiency were analysed with the aim of identifying possible improvements for 2016. The performance of key systems (e.g. machine protection, collimation, radio frequency, transverse dampers, magnetic circuits and beam diagnostics) has been good, but a push is nonetheless being made for better reliability, improved functionality and more effective monitoring.

The first year of operation also revealed a number of challenges, including the now-famous unidentified falling objects (UFOs), and an unidentified aperture restriction in an arc dipole called the unidentified lying object (ULO). Both problems are under control and there should be no surprises in 2016.

A dominating feature of 2015 was the electron cloud. The worst effects were suppressed by a systematic scrubbing campaign and a strategy that allowed scrubbing to continue under physics conditions at 6.5 TeV. This strategy delivered 2244 bunches and encouraging luminosity performance. The electron cloud has side effects, such as heat load on the cold sectors of the machine and beam instabilities, which have to be handled effectively to avoid compromising operations. In particular, the heat load on the beam screens that shield the walls of the beam pipes was a major challenge for the cryogenics teams, who were forced to operate their huge system close to its cooling-power limit. Plans for tackling the electron cloud in 2016 were discussed at the Chamonix meeting, including a short scrubbing run that should allow the conditions at the end of 2015 to be re-established. Further staged improvements will then be obtained by continued scrubbing while delivering luminosity to the experiments.

The machine configuration, planning and potential performance for 2016 and Run 2 were outlined. The LHC has shaken off the after-effects of LS1, and the clear hope is to enter a production phase in the coming years. Besides luminosity production, 2016 will include the usual mix of machine development, technical stops, special physics runs and an ion run. The special runs will include the commissioning of a machine configuration that will allow TOTEM and ALFA to probe very-low-angle elastic scattering.

Machine availability is key to efficient luminosity production, and a day was spent examining availability tracking and the performance of all key systems. Possible areas for improvement in the short and medium term were identified.

The LIU project has the job of upgrading the injectors to deliver the extremely challenging beams for the HL-LHC. The status of Linac4 and the necessary upgrades to the Booster, PS and SPS were presented. Besides the completion of Linac4 and its connection to the Booster, the upgrade programme comprises an impressive and extensive number of projects. The energy upgrade to the Booster will involve the replacement of its entire radio-frequency (RF) system with a novel solution based on a new magnetic alloy (Finemet). The PS will have to tackle the increased injection energy from the Booster, as well as upgrades to its RF and damper systems. The SPS foresees a major RF upgrade, a new beam dump, an extensive campaign of impedance reduction, and the deployment of electron-cloud reduction measures. The upgrade programme also targets ions as it plans improvements to Linac3 and LEIR, and looks at implementing new techniques to produce a higher number of intense ion bunches in the PS and SPS.

An in-depth survey of the potential performance limitations of the HL-LHC, and of the means to mitigate or circumvent them, was also presented. Although it is clear that the electron cloud will remain an issue, the experts gathered at Chamonix proposed a number of measures, including in-situ amorphous-carbon (a-C) coating and in-situ laser-engineered surface structures (LESS), as ways of tackling the electron cloud in the magnets of the insertion regions.

Besides the complete re-working of the high-luminosity insertions, key upgrades to the RF and collimation systems are also required. Here, plans have been base-lined and work is in progress to develop and produce the required hardware. An important novel contribution from RF is the production of crab cavities, which are designed to mitigate the effect of the large crossing angle at the high-luminosity interaction points. The preparation for the installation of test crab cavities in the SPS is well under way.

Ions will be an integral part of the HL-LHC programme, and the means to deliver the required beams and luminosity are taking shape. The recent successful Pb–Pb run at 5.02 TeV centre-of-mass energy per colliding nucleon pair and the quench tests performed during the same run have provided very useful input.

Although it will only start in 2019, planning for LS2 is already under way, and a dedicated session looked at the considerable amount of work foreseen for the next two-year stop of the accelerator complex. A major part of the effort will be devoted to the deployment of the LIU injector upgrades discussed previously. Looking at the experiments, ALICE and LHCb will perform major upgrades to their detectors and read-out systems. An impressive amount of consolidation work is also foreseen, notably major work in the heavily used PS and SPS experimental areas.

Besides the exploitation of the LHC in the short term, the workshop revealed that there is a huge amount of work going on to anticipate and assure the mid-term future of the laboratory, both at the high-energy frontier and in the extensive non-LHC physics programmes. The LIU upgrade and the consolidation effort will help to guarantee the future for, and offer potential performance improvements to, the extensive fixed-target facilities, including the Antiproton Decelerator and the new Extra Low ENergy Antiproton ring (ELENA), HIE-ISOLDE, nTOF and AWAKE.

Exploring the nature of the proposed magic number N = 32

Magic numbers appear in nuclei in which protons or neutrons completely fill a shell. The shell structure underlying magic numbers, which explains certain regularities observed in nuclei, was discovered independently in 1949 by M Goeppert-Mayer and J H D Jensen, who were awarded the Nobel prize in 1963. Nuclei containing a magic number of nucleons, namely 2, 8, 20, 28, 50 and 82, are spherical and present a very high degree of stability, which makes them very difficult to excite. The degree of “magicity” of a nucleus can be assessed by precise measurements of its shape, mass, excitation energies and electromagnetic observables – properties that can be studied with dedicated experiments at ISOLDE.

The calcium isotopic chain (Z = 20, a magic proton number) is a unique nuclear system in which to study how protons and neutrons interact inside the atomic nucleus: two of its stable isotopes are magic in both their proton and neutron numbers (40Ca with N = 20 and 48Ca with N = 28). Despite an excess of eight neutrons, 48Ca exhibits the striking feature that its mean square charge radius is identical to that of 40Ca. In addition, experimental evidence of doubly magic features in a short-lived calcium isotope, 52Ca (N = 32), was obtained in 2013 (Wienholtz et al. 2013 Nature 498 346). Determining the radii beyond 48Ca was therefore crucial from both an experimental and a theoretical point of view, and the new determination of the nuclear radius now challenges the magicity of the 52Ca isotope.

The measurements were performed using high-resolution bunched-beam collinear laser spectroscopy at the COLLAPS installation at ISOLDE, CERN. The charge radii of the 40–52Ca isotopes were obtained from optical isotope shifts extracted from fits to the experimental hyperfine spectra. Indeed, although the average distance between the electrons and the nucleus in an atom is about 5000 times larger than the nuclear radius, the size of the nuclear-charge distribution manifests itself as a perturbation of the atomic energy levels. A change in nuclear size between two isotopes gives rise to a shift of the atomic hyperfine-structure (hfs) levels. This shift between two isotopes – one million times smaller than the absolute transition frequency, and commonly known as the isotope shift – includes a part that is proportional to the change in the nuclear mean square charge radii. Measurement of such a tiny change is only possible using ultra-high-resolution techniques. With a production yield of only a few hundred ions per second, the measurement on 52Ca represents one of the highest sensitivities ever reached using fluorescence-detection techniques. The collinear laser-spectroscopy technique developed at ISOLDE has been established as a unique method to reach such high resolution, and has been applied with different detection schemes to study a variety of nuclear chains.
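For orientation, the relation exploited here is the standard first-order parametrisation of the optical isotope shift, in which a mass-shift term (governed by a transition-dependent constant K_MS) adds to a field-shift term proportional to the change in mean square charge radius (with electronic factor F); the specific atomic factors used in the analysis are not reproduced here.

```latex
% Standard first-order decomposition of the optical isotope shift between
% isotopes of masses m_A and m_A': mass shift plus field shift, the latter
% proportional to the change in the mean square nuclear charge radius.
\delta\nu^{A,A'} \;=\; \nu^{A'} - \nu^{A}
  \;=\; K_{\mathrm{MS}}\,\frac{m_{A'} - m_{A}}{m_{A}\,m_{A'}}
  \;+\; F\,\delta\langle r^{2}\rangle^{A,A'} .
```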

The resulting charge radius of 52Ca is found to be much larger than expected for a doubly magic nucleus, and largely exceeds the theoretical predictions. The large and unexpected increase of the size of the neutron-rich calcium isotopes beyond N = 28 challenges the doubly magic nature of 52Ca, and opens new and intriguing questions on the evolution of nuclear size away from stability, which are of importance for our understanding of neutron-rich atomic nuclei.

New ALPHA measurement of the charge of antihydrogen

The ALPHA collaboration has just published a new measurement of the charge of the antihydrogen atom. Although the Standard Model predicts that antihydrogen must be strictly neutral, only a few actual direct measurements have been performed so far to test this conjecture.

A glance at the Particle Data Book reveals that, according to the latest measurements, the antiproton charge can differ from the charge of the electron by at most 7 × 10⁻¹⁰ times the fundamental charge. The comparable number for the positron is somewhat larger, at 4 × 10⁻⁸. Note that studies with atoms of normal matter show that they are neutral to about one part in 10²¹. We are, therefore, unsurprisingly, way behind in our ability to study antimatter. Given that we still do not understand the baryon asymmetry, it is generally a good idea to take a hard look at antimatter, if you can get your hands on some.

Antihydrogen is unique in the laboratory in that it should be neutral, stable antimatter. Indeed, the charge–parity–time (CPT) symmetry requires antihydrogen to have the same properties as hydrogen, including charge neutrality. In ALPHA, we can produce antihydrogen atoms and catch them in a trap formed by superconducting magnets, and we can hold them for at least 1000 s.

The current article in Nature results from experiments in the recently commissioned ALPHA-2 machine, and uses a new technique proposed by ALPHA member Joel Fajans and colleagues at UC Berkeley. The new method, known as stochastic acceleration, involves subjecting the trapped antihydrogen atoms to electric-field pulses at various time intervals. If the antihydrogen is not really neutral, it will be “heated” by the repeated pulses until it finally escapes the trap and annihilates. Comparing the results of trials with and without the pulsed field, we can derive a limit on how “charged” antihydrogen might be. The answer so far: antihydrogen is neutral to 0.7 ppb (one standard deviation) of the fundamental charge. This is a factor of 20 improvement over our previous limit, set by using static electric fields to try to deflect antihydrogen when it is released from the trap.
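To see why repeated random pulses act as a charge diagnostic, the toy Monte Carlo below (not the ALPHA analysis: the trap depth, field amplitude, kick geometry and pulse count are all illustrative assumptions) gives an atom of putative charge Qe a randomly signed energy kick for each pulse and checks whether it random-walks above the trap depth – even a tiny residual charge eventually empties the trap.

```python
# Toy model of stochastic acceleration: an atom with residual charge Q*e gets
# a randomly signed energy kick per electric-field pulse and is lost once its
# energy exceeds the trap depth. All numbers are illustrative assumptions.
import random

E_CHARGE = 1.602e-19            # C
K_B = 1.381e-23                 # J/K
TRAP_DEPTH = 0.5 * K_B          # J, assumed ~0.5 K magnetic-trap depth
FIELD = 1.0e4                   # V/m, assumed pulse field amplitude
LENGTH = 0.02                   # m, assumed distance over which the field acts

def survives(q_frac, n_pulses=10000, e_start=0.05 * K_B):
    """True if an atom with charge Q = q_frac * e stays trapped."""
    energy = e_start
    kick = abs(q_frac) * E_CHARGE * FIELD * LENGTH   # energy gained per pulse
    for _ in range(n_pulses):
        energy += random.choice((-1.0, 1.0)) * kick  # random walk in energy
        energy = max(energy, 0.0)                    # reflect at zero energy
        if energy > TRAP_DEPTH:
            return False                             # escaped and annihilated
    return True

for q in (0.0, 1e-9, 1e-6, 1e-3):
    kept = sum(survives(q) for _ in range(200)) / 200
    print(f"Q = {q:g} e  ->  fraction surviving: {kept:.2f}")
```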

If we take another approach and assume that antihydrogen is indeed neutral, we can combine this result with ASACUSA’s measurement of the antiproton charge anomaly to improve the limit on the positron charge anomaly by a factor of about 25. Of course, we are looking for signs of new physics in the antihydrogen system – it is probably best not to assume anything.
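One way to see how such a combination works, sketched here under assumed conventions rather than reproducing the published analysis: write the antiproton and positron charges as q_p̄ = −e(1 + δ_p̄) and q_e⁺ = e(1 + δ_e⁺), so that the antihydrogen charge Qe = q_p̄ + q_e⁺ ties the three quantities together.

```latex
% Charge conservation links the antihydrogen charge Q (in units of e) to the
% antiproton and positron charge anomalies, so bounds on Q and on
% \delta_{\bar p} translate directly into a bound on \delta_{e^+}.
\delta_{e^{+}} = Q + \delta_{\bar p},
\qquad
|\delta_{e^{+}}| \;\le\; |Q| + |\delta_{\bar p}| .
```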
