By T Iida and R I L Guthrie, Oxford University Press
Authored by two leading experts in the field, these books provide a complete review of the static and dynamic thermophysical properties of metallic liquids. The work is divided into two volumes. The first (Fundamentals) is intended as an introductory text in which the basic topics are covered: the structure of metallic liquids, their thermodynamic properties, density, velocity of sound, surface tension, viscosity, diffusion, and electrical and thermal conductivities. Essential concepts behind the methods used to measure these experimental data are also presented.
In the second volume (Predictive Models), the authors explain how to develop reliable models of liquid metals, starting from the essential conditions for a model to be truly predictive. They use a statistical approach to rate the validity of different models. On the basis of this assessment, the authors have compiled tables of predicted values for the thermophysical properties of metallic liquids, which are included in the book. A large amount of experimental data is also given.
The two books are aimed particularly at students of materials science and engineering, but also at research scientists and engineers engaged in liquid metallic processing. They collect a large amount of information and are written in a clear and readable way; they are therefore bound to become an essential reference for students and researchers involved in the field.
With scientists increasingly asked to engage the public and society-at-large with their research, and include outreach plans as part of grant applications, it helps to have a guide to various involvement possibilities and the research behind them. The second edition of the Routledge Handbook of Public Communication of Science and Technology (henceforth referred to as “the Handbook”) provides a thorough introduction to public engagement – or outreach, as it is sometimes called – through a varied collection of articles on the subject. In particular, it brings to attention the underlying issues associated with the old “deficit model of science communication”, which presupposes a knowledge deficit about science among the general public that must be filled by scientists providing facts, and facts alone. Although primarily targeting science-communication practitioners and academics researching the field, the Handbook can also help scientists to reflect on their outreach efforts and to appreciate the interplay between science and society.
Before plunging into the depths of the book, it is important to remember that the study of science communication is the study of evolving terminology. Historically, an effort was made to determine the “scientific literacy” of society, under the assumption that a society knowledgeable in the facts and methods of science would support research endeavours without much opposition. This approach was made obsolete by the introduction of the “public communication of science and technology” paradigm, which itself was superseded by what is today called “public engagement with science and technology”, or “public engagement” for short. The first chapter, written by the editors, is the best place to familiarise oneself with the various science-communication models, as well as the terms and phrases used throughout the Handbook. That said, those with backgrounds in natural sciences might feel somewhat out of their depth, due to a lack of definitions in the rest of the Handbook for words and phrases used on a daily basis by their social-science counterparts. However, this is largely mitigated by each chapter containing a wealth of notes and references at the end, pointing readers in the direction of further reading.
The chapters themselves are stand-alone articles by experts in their respective topics, many written in engaging, conversational styles. They cover everything from policy and participants, to the handling of “hot-button” issues, to research and assessment methodology. Readers of the Courier may find the chapters on science journalism, on public relations in science, on the role of scientists as public experts and on risk management particularly illuminating.
What the same readers might find missing from the book is a specific treatment of fundamental research: the Handbook focuses on domains of science – such as climate change – that tend to have a direct or immediate impact on society. Scientists from other areas of research might therefore consider shoehorning (perhaps non-existing) societal impact into their science-communication efforts, rather than learning how to adapt the lessons learnt from fields such as climate science to their own work. It is therefore this reviewer’s desire that future editions of the Handbook address the science-communication challenges of more diverse areas of research, proposing ways in which scientists and practitioners can tackle them.
Overall, the Handbook gives readers valuable insight into science-communication research, and merits a place on the library shelves of every university and research institution.
The LHC Performance Workshop took place in Chamonix from 25 to 28 January. Attended by representatives from across the accelerator sector, including members of the CERN Machine Advisory Committee, as well as by the LHC experiments, the workshop covered a review of the 2015 performance and a look forward to 2016, together with the status of both the LHC injector upgrade (LIU) and High Luminosity LHC (HL-LHC) projects. It finished with a session dedicated to the next long shutdown (LS2), planned for 2019–2020.
For the LHC, 2015 was the first year of operation following the major interventions carried out during the long shutdown (LS1) of 2013–2014. At Chamonix, an analysis of the year’s operations and operational efficiency was performed, with the aim of identifying possible improvements for 2016. The performance of key systems (e.g. machine protection, collimation, radio frequency, transverse dampers, magnetic circuits and beam diagnostics) was good, but a push is nonetheless still being made for better reliability, improved functionality and more effective monitoring.
The first year of operation also revealed a number of challenges, including the now-famous unidentified falling objects (UFOs), and an unidentified aperture restriction in an arc dipole called the unidentified lying object (ULO). Both problems are under control and there should be no surprises in 2016.
A dominating feature of 2015 was the electron cloud. The worst effects were suppressed by a systematic scrubbing campaign and a strategy that allowed continued scrubbing in physics conditions at 6.5 TeV. This strategy delivered 2244 bunches and encouraging luminosity performance. The electron cloud has side effects such as heat load to the cold sectors of the machine and beam instabilities. These have to be effectively handled to avoid compromising operations. In particular, the heat load to the beam screens that shield the walls of the beam pipes was a major challenge for the cryogenics teams, who were forced to operate their huge system close to its cooling-power limit. Plans for tackling the electron cloud in 2016 were discussed at the Chamonix meeting, including a short scrubbing run that should allow the conditions at the end of 2015 to be re-established. Further staged improvement will be obtained by further scrubbing while delivering luminosity to the experiments.
The machine configuration, planning and potential performance for 2016 and Run 2 were outlined. The LHC has shaken off the after-effects of LS1, and the clear hope is to enter into a production phase in the coming years. Besides luminosity production, 2016 will include the usual mix of machine development, technical stops, special physics runs and an ion run. The special runs will include the commissioning of a machine configuration that will allow TOTEM and ALFA to probe very-low-angle elastic scattering.
Machine availability is key to efficient luminosity production, and a day was spent examining availability tracking and the performance of all key systems. Possible areas for improvement in the short and medium term were identified.
The LIU project has the job of upgrading the injectors to deliver the extremely challenging beams for the HL-LHC. The status of Linac4 and the necessary upgrades to the Booster, PS and SPS were presented. Besides the completion of Linac4 and its connection to the Booster, the upgrade programme comprises an impressive and extensive number of projects. The energy upgrade to the Booster will involve the replacement of its entire radio-frequency (RF) system with a novel solution based on a new magnetic alloy (Finemet). The PS will have to tackle the increased injection energy from the Booster, as well as upgrades to its RF and damper systems. The SPS foresees a major RF upgrade, a new beam dump, an extensive campaign of impedance reduction, and the deployment of electron-cloud reduction measures. The upgrade programme also targets ions as it plans improvements to Linac3 and LEIR, and looks at implementing new techniques to produce a higher number of intense ion bunches in the PS and SPS.
The potential performance limitations of the HL-LHC, and the means to mitigate or circumvent them, were surveyed in depth. Although it is clear that the electron cloud will remain an issue, the experts gathered at Chamonix proposed a number of measures, including in-situ amorphous-carbon (a-C) coating and in-situ laser-engineered surface structures (LESS), as ways of tackling the electron cloud in the magnets of the insertion regions.
Besides the complete re-working of the high-luminosity insertions, key upgrades to the RF and collimation systems are also required. Here, plans have been baselined and work is in progress to develop and produce the required hardware. An important novel contribution from RF is the production of crab cavities, which are designed to mitigate the effect of the large crossing angle at the high-luminosity interaction points. The preparation for the installation of test crab cavities in the SPS is well under way.
Ions will be an integral part of the HL-LHC programme, and the means to deliver the required beams and luminosity are taking shape. The recent successful Pb–Pb run at 5.02 TeV centre-of-mass energy per colliding nucleon pair and the quench tests performed during the same run have provided very useful input.
Although it will only start in 2019, planning for LS2 is already under way, and a dedicated session looked at the considerable amount of work foreseen for the next two-year stop of the accelerator complex. A major part of the effort will be devoted to the deployment of the LIU injector upgrade discussed previously. Looking at the experiments, ALICE and LHCb will perform major upgrades to their detectors and read-out systems. An impressive amount of consolidation work is also foreseen. Of note is major work in the much-solicited PS and SPS experimental areas.
Besides the exploitation of the LHC in the short term, the workshop revealed that there is a huge amount of work going on to anticipate and assure the mid-term future of the laboratory, both at the high-energy frontier and in the extensive non-LHC physics programmes. The LIU upgrade and the consolidation effort will help to guarantee the future for, and offer potential performance improvements to, the extensive fixed-target facilities, including the Antiproton Decelerator and the new Extra Low ENergy Antiproton ring (ELENA), HIE-ISOLDE, nTOF and AWAKE.
Magic numbers appear in nuclei in which protons or neutrons completely fill a shell. They were introduced in 1949, independently by M Goeppert-Mayer and J H D Jensen, to explain certain regularities observed in nuclei; the two were awarded the Nobel prize in 1963. Nuclei containing a magic number of nucleons, namely 2, 8, 20, 28, 50 and 82, are spherical and present a very high degree of stability, which makes them very difficult to excite. The degree of “magicity” of a nucleus can be assessed by precisely determining its shape, mass, excitation energy and electromagnetic observables – properties that can be studied in detail with dedicated experiments at ISOLDE.
The calcium isotopic chain (Z = 20, a magic proton number) is a unique nuclear system in which to study how protons and neutrons interact inside the atomic nucleus: two of its stable isotopes are magic in both their proton and neutron numbers (⁴⁰Ca with N = 20 and ⁴⁸Ca with N = 28). Despite an excess of eight neutrons, ⁴⁸Ca exhibits the striking feature of having a mean square charge radius identical to that of ⁴⁰Ca. In addition, experimental evidence of doubly magic features in a short-lived calcium isotope, ⁵²Ca (N = 32), was obtained in 2013 (Wienholtz et al. 2013 Nature 498 346). Determining the radius beyond ⁴⁸Ca was therefore crucial from both an experimental and a theoretical point of view. The new determination of the nuclear radius now challenges the magicity of the ⁵²Ca isotope.
The measurements were performed using high-resolution bunched-beam collinear laser spectroscopy at the COLLAPS installation at ISOLDE, CERN. The charge radii of the ⁴⁰⁻⁵²Ca isotopes were obtained from the optical isotope shifts extracted from fits to the experimental hyperfine spectra. Indeed, although the average distance between the electrons and the nucleus in an atom is about 5000 times larger than the nuclear radius, the size of the nuclear-charge distribution manifests itself as a perturbation of the atomic energy levels. A change in the nuclear size between two isotopes gives rise to a shift of the atomic hyperfine-structure (hfs) levels. This shift between two isotopes, commonly known as the isotope shift and a million times smaller than the absolute transition frequency, includes a part that is proportional to the change in the nuclear mean square charge radii. Measuring such a tiny change is only possible with ultra-high-resolution techniques. With a production yield of only a few hundred ions per second, the measurement on ⁵²Ca represents one of the highest sensitivities ever reached using fluorescence-detection techniques. The collinear laser-spectroscopy technique developed at ISOLDE has been established as a unique method for reaching such high resolution, and has been applied with different detection schemes to study a variety of nuclear chains.
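As a rough guide (the notation here is generic, not that of the COLLAPS analysis), the isotope shift between isotopes of mass numbers A and A′ is conventionally decomposed as

$$\delta\nu^{A,A'} \;=\; K_{\mathrm{MS}}\,\frac{m_{A'}-m_{A}}{m_{A}\,m_{A'}} \;+\; F\,\delta\langle r^{2}\rangle^{A,A'},$$

where the first term is the mass shift and the second, field-shift term is proportional to the change in the mean square charge radius, $\delta\langle r^{2}\rangle^{A,A'}$, through an electronic factor F; it is this second term that carries the nuclear-size information.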
The resulting charge radius of ⁵²Ca is found to be much larger than expected for a doubly magic nucleus, and largely exceeds the theoretical predictions. The large and unexpected increase in the size of the neutron-rich calcium isotopes beyond N = 28 challenges the doubly magic nature of ⁵²Ca, and opens new and intriguing questions on the evolution of nuclear size away from stability, which are of importance for our understanding of neutron-rich atomic nuclei.
The ALPHA collaboration has just published a new measurement of the charge of the antihydrogen atom. Although the Standard Model predicts that antihydrogen must be strictly neutral, only a few actual direct measurements have been performed so far to test this conjecture.
A glance at the Particle Data Book reveals that, according to the latest measurements, the antiproton charge can differ from the charge of the electron by at most 7 × 10⁻¹⁰ times the fundamental charge. The comparable number for the positron is somewhat larger, at 4 × 10⁻⁸. Note that studies with atoms of normal matter show that they are neutral to about one part in 10²¹. We are, therefore, unsurprisingly, way behind in our ability to study antimatter. Given that we still do not understand the baryon asymmetry, it is generally a good idea to take a hard look at antimatter, if you can get your hands on some.
Antihydrogen is unique in the laboratory in that it should be neutral, stable antimatter. Indeed, the charge–parity–time (CPT) symmetry requires antihydrogen to have the same properties as hydrogen, including charge neutrality. In ALPHA, we can produce antihydrogen atoms and catch them in a trap formed by superconducting magnets, and we can hold them for at least 1000 s.
The current article in Nature results from experiments in the recently commissioned ALPHA-2 machine, and uses a new technique proposed by ALPHA member Joel Fajans and colleagues at UC Berkeley. The new method, known as stochastic acceleration, involves subjecting the trapped antihydrogen atoms to electric-field pulses at various time intervals. If the antihydrogen is not really neutral, it will be “heated” by the repeated pulses until it finally escapes the trap and annihilates. Comparing the results of trials with and without the pulsed field, we can derive a limit on how “charged” antihydrogen might be. The answer so far: antihydrogen is neutral to 0.7 ppb (one standard deviation) of the fundamental charge. This is a factor of 20 improvement over our previous limit, set by using static electric fields to try to deflect antihydrogen when it is released from the trap.
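Schematically (in generic notation, not that of the ALPHA paper), writing the antiproton and positron charges as $q_{\bar p}=-e\,(1+\delta_{\bar p})$ and $q_{e^+}=e\,(1+\delta_{e^+})$, the antihydrogen charge in units of $e$ is

$$Q_{\bar{\mathrm H}} \;=\; \delta_{e^+}-\delta_{\bar p},$$

so a bound on $Q_{\bar{\mathrm H}}$, combined with an independent bound on the antiproton anomaly $\delta_{\bar p}$, translates into a bound on the positron anomaly $\delta_{e^+}$.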
If we take another approach and assume that antihydrogen is indeed neutral, we can combine this result with ASACUSA’s measurement of the antiproton charge anomaly to improve the limit on the positron charge anomaly by a factor of about 25. Of course, we are looking for signs of new physics in the antihydrogen system – it is probably best not to assume anything.
As the LHC delivered proton–proton collisions at the record energy of 13 TeV during the summer and autumn of last year, the experiments were eagerly scrutinising the new data. They were on alert for the signatures that would be left by new heavy particles, indicating a breakdown of the Standard Model in the new energy regime. A few days before CERN closed for the Christmas break, and only six weeks after the last proton–proton collisions of 2015, the ATLAS collaboration released the results of seven new searches for supersymmetric particles.
Supersymmetry (SUSY) predicts that, for every known elementary particle, there is an as-yet-undiscovered partner whose spin quantum number differs by half a unit. In most models, the lightest SUSY particle (LSP) is stable, electrically neutral and weakly interacting, hence it is a good dark-matter candidate. SUSY also protects the Higgs boson mass from catastrophically large quantum corrections, because the contributions from normal particles and their partners cancel each other out. The cancellation is effective only if some of the SUSY particles have masses in the range probed by the LHC. There are therefore well-founded hopes that SUSY particles might be detected in the higher-energy collisions of LHC Run 2.
The data collected by the ATLAS detector in 2015 are just an appetiser. The 3.2 fb⁻¹ of integrated luminosity available is an order of magnitude less than that collected in Run 1, and a small fraction of that expected by 2018. The first Run 2 searches for SUSY particles have focused on the partners of the quarks and gluons (called squarks and gluinos). They would be abundantly produced through strong interactions, with cross-sections up to 40 times larger than in Run 1. The sensitivity has also been boosted by detector upgrades (in particular, the new “IBL” pixel layer installed close to the beam pipe) and by improvements in the data analysis.
Squarks and gluinos would decay to quarks and the undetectable LSP, producing an excess of events with energetic jets and missing transverse momentum. The seven searches looked for such a signature, with different selections depending on the number of jets, b-tagged jets and leptons, to be sensitive to different production and decay modes. Six of the searches found event rates in good agreement with the Standard Model prediction, and placed new limits on squark and gluino masses. The figure shows the new limits for a gluino decaying to two b quarks and a neutralino LSP. For a light neutralino, the Run 1 limit of 1300 GeV on the gluino mass has been extended to 1780 GeV by the new results.
The seventh search looked for events with a Z boson, jets and missing transverse momentum, a final state in which a 3σ excess was observed in the Run 1 data. Intriguingly, the new data show a modest 2σ excess over the background prediction. This excess, together with a full investigation of all SUSY channels, makes the upcoming 2016 data eagerly awaited.
The LHC Run 1, famed for its discovery of the Higgs boson, came to a conclusion on Valentine’s Day (14 February) 2013. The little-known fact is that the last three days of the run were reserved neither for the highest energies nor for the heaviest ions, but for relatively low-energy proton–proton collisions at 2.76 TeV centre-of-mass energy. Originally designed as a reference run for heavy-ion studies, it also provided the perfect opportunity to bridge the wide gap in jet measurements between the Tevatron’s 1.96 TeV and the LHC’s 7 and 8 TeV.
Jet measurements are often plagued by large uncertainties arising from the jet-energy scale, which itself is subject to changes in detector conditions and reconstruction software. Because the 2.76 TeV run was an almost direct continuation of the 8 TeV proton–proton programme, it provided a rare opportunity to measure jets at two widely separated collision energies with almost identical detector conditions and with the same analysis software. CMS used this Valentine’s Day gift from the LHC to measure the inclusive jet cross-section over a wide range of angles (absolute rapidity |y| < 3.0) and for jet transverse momenta pT from 74 to 592 GeV, nicely complementing the measurements performed at 8 TeV. The data are compared with the theoretical prediction at next-to-leading-order QCD (the theory of the strong force) using different sets of parameterisations for the structure of the colliding protons. This measurement tests and confirms the predictions of QCD at 2.76 TeV, and extends the kinematic range probed at this centre-of-mass energy beyond those available from previous studies.
Calculating ratios of the jet cross-sections at different energies allows particularly high sensitivity to certain aspects of the proton structure. The main theory scale-uncertainties from missing orders of perturbative calculations mostly cancel out in the ratio, leaving exposed the nearly pure, so-called DGLAP, evolution of the proton parton density functions (PDFs). In particular, one can monitor directly the evolution of the gluon density as a function of the energy of the collisions. This lays a solid foundation for future searches for new physics, for which the parametrisations of the PDFs are the leading uncertainty. Also, the experimental uncertainties cancel out in the ratio if the conditions are stable enough, as indeed they were for this period of data-taking. This principle was proven by ATLAS with 2.76 TeV data collected in 2011 (2013 Eur. Phys. J. C73 2509), but with a data set 20 times smaller.
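Schematically (a sketch of the idea rather than the exact observable defined in the CMS paper), the quantity of interest is the ratio of inclusive jet cross-sections at the two centre-of-mass energies,

$$R(p_{\mathrm T},y) \;=\; \frac{\mathrm{d}^{2}\sigma/\mathrm{d}p_{\mathrm T}\,\mathrm{d}y\,\big|_{\sqrt{s}=2.76\ \mathrm{TeV}}}{\mathrm{d}^{2}\sigma/\mathrm{d}p_{\mathrm T}\,\mathrm{d}y\,\big|_{\sqrt{s}=8\ \mathrm{TeV}}},$$

in which correlated theoretical scale uncertainties and, under stable detector conditions, much of the jet-energy-scale uncertainty cancel, leaving the ratio sensitive mainly to the evolution of the PDFs.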
The figure demonstrates the excellent agreement of the ratio of the 2.76 and 8 TeV data with the QCD predictions, confirming the reduced QCD uncertainties that will benefit future searches for new physics. This opportunistic use of the 2.76 TeV data by CMS has again proven the versatility and power of the LHC programme – a true Valentine’s Day gift for jet aficionados.
High-energy scattering of partons (quarks and gluons) produces collimated cones of particles called jets, the production rate of which can be calculated using perturbative QCD techniques. In heavy-ion collisions, partons lose energy in the hot, dense quark–gluon plasma (QGP), leading to a modification of the jet-energy distribution. Measurement of the jet characteristics can therefore be used to probe QGP properties.
In non-central heavy-ion collisions, the overlap region between the two nuclei where nucleon–nucleon scattering takes place has a roughly elliptic shape, resulting in a longer average path length – and therefore larger energy loss – for jets and particles that are emitted along the major axis, than for those emitted along the minor axis of the interaction region. The resulting variation of the azimuthal jet distribution can be expressed as the jet v₂, the second coefficient of a Fourier expansion of the angular distribution. The magnitude of the jet v₂ depends on the path-length dependence of the jet energy loss, which differs among proposed energy-loss mechanisms and can be studied via model comparisons.
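For reference, the azimuthal distribution of jets with respect to the symmetry planes $\Psi_n$ of the collision is conventionally written as a Fourier series,

$$\frac{\mathrm{d}N}{\mathrm{d}\varphi} \;\propto\; 1 + 2\sum_{n\ge 1} v_{n}\cos\!\big[n\,(\varphi-\Psi_{n})\big],$$

so that a positive elliptic coefficient $v_{2}$ corresponds to more jets emerging along the minor axis of the overlap region, where the in-medium path length, and hence the energy loss, is smallest.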
The figure shows the new jet v₂ measurement of ALICE, using only charged particles for jet reconstruction, in semi-central collisions (30–50% collision centrality), compared with earlier jet v₂ results from ATLAS, using both charged and neutral fragments, and with the v₂ of ALICE and CMS, using single charged particles. The ALICE measurement covers the pT range between the charged-particle results and the ATLAS jet measurements. Jet measurements in this momentum range (20–50 GeV/c) are challenging due to the large background of soft particles in the event. This background is itself subject to azimuthal variations, which have to be carefully separated from the jet v₂ signal.
The significant positive v₂ for both jets and single charged particles indicates that in-medium parton energy loss is large, and that sensitivity to the collision geometry persists up to high pT. For a more quantitative interpretation in terms of density and path-length dependence, the experimental results will need to be compared with theoretical models that include the effects of parton energy loss as well as jet fragmentation. Larger data samples from Run 2 will further improve the measurement, giving more precise information about the nature of the QGP and its interactions with high-momentum quarks and gluons.
Owing to the large cross-section for charm production at the LHC, LHCb collected the world’s largest sample of charmed hadrons, allowing for stringent tests of the Cabibbo–Kobayashi–Maskawa (CKM) mechanism in the Standard Model (SM). The search for violation of the charge-parity (CP) symmetry in weak interactions is among the most relevant of such tests.
In recent years, LHCb confirmed unequivocally that CP violation occurs in the B⁰ system, and observed, for the first time, the same mechanism in B⁰s decays. All of the results match the SM predictions well. Although outstanding experimental precision has been achieved in the charm sector, clear evidence of CP violation has not yet been seen there. Mesons composed of a charm quark and an anti-up quark, the so-called D⁰ particles, constitute an interesting laboratory for this search. The D⁰ meson is the only particle in nature containing an up-type quark to exhibit matter–antimatter oscillation.
In the SM, in contrast to the case of beauty mesons, the weak decays of charmed mesons are not expected to produce large CP-violating effects. However, CP violation can be enhanced by transitions involving new particles beyond those already known.
In 2011, LHCb reported the first evidence for CP violation in the charm sector, measuring the difference of the time-integrated CP asymmetries in D⁰ → K⁻K⁺ and D⁰ → π⁻π⁺ decays to be significantly different from zero, ΔACP = [–0.82 ± 0.21 (stat.) ± 0.11 (syst.)]%. This result was later reinforced by new measurements from the CDF and Belle experiments. In 2014, however, LHCb published a more precise measurement, ΔACP = [+0.14 ± 0.16 (stat.) ± 0.08 (syst.)]%, reaching a precision of 2 × 10⁻³ with a central value closer to zero than that obtained previously.
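For reference, the time-integrated asymmetry for a final state $f$ and the difference quoted above are defined as

$$A_{CP}(f) \;=\; \frac{\Gamma(D^{0}\to f)-\Gamma(\overline{D}{}^{0}\to f)}{\Gamma(D^{0}\to f)+\Gamma(\overline{D}{}^{0}\to f)}, \qquad \Delta A_{CP} \;=\; A_{CP}(K^{-}K^{+})-A_{CP}(\pi^{-}\pi^{+}),$$

a difference in which many production and detection asymmetries largely cancel, making it a particularly clean observable.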
Now, using the full data sample collected in Run 1, LHCb has broken through the 10⁻³ barrier for the first time, reaching a precision of 9 × 10⁻⁴. The measured value of ΔACP is [–0.10 ± 0.08 (stat.) ± 0.03 (syst.)]%.
Although the evidence for CP violation in the charm sector is not confirmed, LHCb has brought charm physics to the frontier of experimental knowledge. Thanks to an upgraded detector, the experiment plans to collect an integrated luminosity of 50 fb⁻¹ over the next ten years or so, which will improve the precision of these results by an order of magnitude.
In the early morning of 3 December, scientists and engineers started the installation of KM3NeT (CERN Courier July/August 2012 p31). Once completed, it will be the largest detector of neutrinos in the Northern Hemisphere. Located in the depths of the Mediterranean Sea, the infrastructure will be used to study the fundamental properties of neutrinos and to map the high-energy cosmic neutrinos emanating from extreme cataclysmic events in space.
Neutrinos are the most elusive of elementary particles and their detection requires the instrumentation of enormous volumes: the KM3NeT neutrino telescope will occupy more than a cubic kilometre of seawater. It comprises a network of several hundred vertical detection strings, anchored to the seabed and kept taut by a submerged buoy. Each string hosts 18 light-sensor modules, equally spaced along its length. In the darkness of the abyss, the sensor modules register the faint flashes of Cherenkov light that signal the interaction of neutrinos with the seawater surrounding the telescope.
On board the Ambrosius Tide deployment boat, the first string – wound, like a ball of wool, around a spherical frame – arrived at the location of the KM3NeT-Italy site, south of Sicily. It was anchored to the seabed at a depth of 3500 m and connected to a junction box, already present on the sea floor, using a remotely operated submersible. The junction box is connected by a 100 km cable to the shore station located in Portopalo di Capo Passero in the south of Sicily.
After verification of the quality of the power and fibre-optic connections to the shore station, the go-ahead was given to trigger the unfurling of the string to its full 700 m height. During this process, the deployment frame is released from its anchor and floats towards the surface while slowly rotating. In doing so, the string unwinds from the spherical frame, eventually leaving behind a vertical string. The string was then powered on from the shore station, and the first data from the sensor modules started streaming to shore.
The successful acquisition of data from the abyss with the novel technology developed by the KM3NeT collaboration is a major milestone for the project. It represents the culmination of more than 10 years of research and development by the many research institutes that make up the international collaboration.