

Allegro Neutrino ou L’attrape-temps

By François Vannucci
L’Harmattan
Paperback: €27


Paris, in the 1950s. Michel is 11 years old and sees bubbles, of which he is very proud. A resolutely unscientific term, the word “bubble” denotes, for the narrator – Michel – “a myriad of luminous points dancing in every direction”, points of light that turn out, as the pages go by, to be “neutrinos”. And there we have it.

As you will have gathered, although it is written by a particle physicist specializing in neutrino physics, this book is a novel. The aim is not to teach you reams about these famous neutrinos, but to carry you off into a story in which they are the protagonists. And although the story is told by a young narrator with a passion for physics, he is nonetheless a child, and not (yet) a particle physicist.

The plot, if I may give the story such a novelistic label, is all in all quite simple. Michel, a schoolboy rather poor at maths but good at imagining things, lives in a tiny Paris flat with his parents. He walks to school, wears holes in his socks, goes with his mother to the market on Thursdays and to mass on Sundays, spends his summer holidays in the countryside, collects stamps, adores chocolate truffles and delights in the science stories of his uncle Albert, a work-shy civil servant and avid reader of popular-science magazines. But what really animates Michel, more than his own story, is his strange ability to see neutrinos.

But make no mistake: Michel’s neutrinos are a long way from the idea of them held at CERN. For Michel they are nothing more nor less than the constituents of the soul of living beings or, as the narrator also describes them, “our spiritual fuel”. Which also explains why the young emit more of them than the old, and why those who no longer emit any are dead. QED.

In the end, this book is a long journey inside the head of an 11-year-old boy, encountering his far-fetched ideas, his scientific experiments and deductions, his triumphant discoveries and his confrontations with the adult world. Some passages are genuinely delightful, and one ends up growing fond of young Michel, who keeps carefully tucked at the bottom of his pocket a chestnut, a marble and a box full of neutrinos.

Reviews of Accelerator Science and Technology: Volume 5 – Applications of Superconducting Technology to Accelerators

By Alexander W Chao and Weiren Chou (eds.)
World Scientific
Hardback: £98
E-book: £74

Reviews of Accelerator Science and Technology is a journal series that began in 2008 with the stated aim “to provide readers with a comprehensive review of the driving and fascinating field of accelerator science and technology” – in a “journal of the highest quality”. It made an excellent start, with the first volume presenting the history of accelerators, followed by one that focused on medical applications. With one volume published a year, there are now five in the series, which appears to show no signs of failing in its original goals. Each has communicated a specific topic through the words of highly respected experts in articles that are well illustrated and presented. The books they form hold the promise of becoming an unrivalled encyclopaedia of accelerators.


This latest volume is no exception. It looks at the role of superconductivity in particle accelerators and how this intriguing phenomenon has been harnessed in the pursuit of ever-increasing beam energy or intensity. It also considers the application of superconducting technology beyond the realm of accelerators, for example in medical scanners and fusion devices. As well as containing much technical detail it is also full of fascinating facts.

Exactly 100 years ago, Heike Kamerlingh Onnes speculated that a 10 T superconducting magnet “ought not to be far away”. The first contributions to this volume, in particular, outline some of the steps to 10 T – and why it took longer than Onnes had originally hoped for the industrial-scale production of high-field superconducting magnets to become reality. A major problem lay in finding superconducting materials with physical properties that allow large-scale fabrication into wires. The first commercially produced wires were of niobium-zirconium, as used in early superconducting magnets for bubble chambers. However, this alloy was soon superseded by niobium-titanium (NbTi) – the material of choice in high-energy physics for the past 40 years, culminating today in the superconducting magnets for the LHC, as well as the huge toroidal and solenoidal magnets for the ATLAS and CMS detectors. Now, R&D effort is turning to Nb3Sn, which can allow higher magnetic fields, for example for the High Luminosity LHC project.

In this context, it is worth realizing that the biggest market for superconducting magnets is for nuclear magnetic-resonance spectroscopy – and it is here that a field as high as 23.5 T has been reached in a magnet based on Nb3Sn. There is also interest in high magnetic fields for magnetic resonance imaging (MRI) in medicine. In MRI the signal strength is related to the polarization of the protons in whatever is being scanned. Increasing the magnetic field from the 1.5 T that is currently used routinely to 10 T results in a polarization that is almost seven times higher, as well as improved signal-to-noise, leading to a clear improvement in image quality. Upcoming developments include 6 T magnets based on Nb3Sn.
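The quoted factor follows from the standard estimate for the thermal polarization of spin-1/2 protons, which is linear in the field at body temperature (a textbook sketch, not a figure taken from the volume):

```latex
% Curie-law polarization of spin-1/2 protons at temperature T in a field B
P \;=\; \tanh\!\left(\frac{\hbar\gamma B}{2k_{B}T}\right)
  \;\approx\; \frac{\hbar\gamma B}{2k_{B}T}
  \qquad\text{(high-temperature limit)},
\qquad
\frac{P(10~\mathrm{T})}{P(1.5~\mathrm{T})} \;\approx\; \frac{10}{1.5} \;\approx\; 6.7 .
```

Here γ is the proton gyromagnetic ratio; the linearity in B is all that is needed for the “almost seven times” figure.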

The application of superconductivity in particle accelerators extends of course to the acceleration system, with the use of superconducting RF technology, first proposed in 1961. In this case, an important part of the R&D has focused on the physics and materials science of the surface – the surface resistance being a key parameter. So far there are no commercial applications for superconducting RF, but it has a role in many types of particle accelerators, from high-current storage rings at light sources to the high-energy machines of the future, such as the International Linear Collider (ILC).

Jefferson Lab’s Continuous Electron-Beam Accelerator Facility (CEBAF) is in a sense the “LHC” of superconducting RF, originally employing 360 five-cell 1.5 GHz cavities. It is currently undergoing an upgrade to 12 GeV with cavities that will operate at 19.2 MV/m. The European X-ray free-electron laser project, XFEL at DESY, will use 800 nine-cell 1.3 GHz cavities operating at more than 22 MV/m, but it would be dwarfed by an ILC with more than 15,000 cavities.

Besides the contributions on the major topics of superconducting magnets and RF, others are dedicated to cryogenic technology, industrialization and applications in medicine. In addition, following the journal’s tradition, there are articles that are not related to the overall theme but are of concern to the accelerator community worldwide. In this case, one article discusses the education and training of the next generation of accelerator physicists and engineers, while another reviews the history of the KEK laboratory in Japan. Altogether, this makes for more than a journal volume – in my opinion, it is a book, well worth reading.

Doing Physics: How Physicists Take Hold of the World (2nd edition)

By Martin H Krieger
Indiana University Press
Paperback: £16.99 $24.00
E-book: £14.99 $21.99

First published over two decades ago, Doing Physics has recently been released as a second edition. The book relates the concepts of physics to everyday experiences through a carefully selected series of analogies. It attempts to provide a non-scientific description of the methods employed by physicists to do their work, what motivates them and how they make sense of the world.


Martin Krieger began his academic career in experimental particle physics but quickly realized that he was not suited to working in large groups on experiments. Following his PhD, he moved into the social sciences and began working on computing models for city planning. He uses this experience to reflect on the way science is done from a social-science viewpoint. His aim is to explain how doing physics is part of familiar general culture.

Krieger claims that physicists employ a small number of everyday notions to “get a handle on the world” experimentally and conceptually. He argues further that these models and metaphors describe the way physicists actually view the world and that to see the world in such terms is to be trained as a physicist. The analogies he chooses to support his ideas are drawn from the diverse areas of economics, computing, anthropology, theatre and engineering. Each of the first five chapters of the book is devoted to exploring one of these analogies in detail.

The book begins with a discussion on division of labour according to the economist Adam Smith’s model of a pin factory. The description of physical situations in terms of interdependent particles and fields is analogous to the design of a factory with its division of labour among specialists. The second chapter considers physical degrees of freedom as the parts of a complex model such as a clockwork mechanism or a computer. Chapter three is devoted to the anthropological theory of kinship and marriage, comparing the rules of relationships to the rules of interaction for the families of elementary particles or for chemical species – who can marry whom is like what can interact with what. The conclusion is that anything that is not forbidden will happen. The theatrical world provides an analogy to creation, where a vacuum is represented by a simple stage setting on which something arises out of nothing. Finally, machine-tool design is used to describe the physicist’s toolkit, where the work of doing physics is like grasping the world with handles and probes.

In the second edition, Krieger has provided some minor revisions to the text and has added a brief chapter on the role of mathematics and formal models in physics. This additional discussion is based on work from two other books he has written in the intervening years. It is questionable whether the second edition is warranted. In this highly technical chapter Krieger goes so far as to discern an analogy of analogies in physics and mathematics – a so-called syzygy.

Krieger claims that the book is for high-school students and upwards. However, it seems more appropriate for a specialized audience. Doing Physics is aimed at sociologists and philosophers of science, rather than at the science community itself. Indeed, for some the experience of reading the book could bring to mind a well-known quote by Richard Feynman: “Philosophy of science is about as useful to scientists as ornithology is to birds.” For others, however, the book might provide some useful insights into patterns or relationships between physics and the everyday world that they have not previously considered.

Neutrinos head off again to Minnesota

In August, after a 16-month shutdown, Fermilab resumed operation of its Neutrinos at the Main Injector (NuMI) beamline and sent the first muon neutrinos to three neutrino experiments: MINERvA, MINOS+ and the new NOvA experiment. Numerous upgrades to the Fermilab accelerator complex have laid the groundwork for increasing the beam power of the NuMI beamline from about 350 kW to 700 kW. In addition, Fermilab has changed the NuMI horn and target configurations to deliver a higher-energy neutrino beam compared with pre-shutdown operation.

The NOvA experiment – still under construction – will study the properties of neutrinos, especially the elusive transition of muon neutrinos into electron neutrinos. The results will help to answer questions about the neutrino-mass hierarchy, neutrino oscillations and the role that neutrinos might have played in the evolution of the universe. The construction of the NOvA near and far detectors, both located 14 milliradians off the NuMI beam axis, is advancing quickly.

The near detector – located 100 m underground in a new cavern that has been excavated at Fermilab – has more than a quarter of its structure in place. Meanwhile, 810 km away in northern Minnesota, technicians have installed more than three quarters of the plastic structure that is the skeleton of the huge, 14,000 tonne far detector. More than 70% of the far detector’s plastic modules have been filled with 5.7 million litres of liquid scintillator and the first modules are recording data. The first part of the near detector will turn on before the end of the year.

The MINOS+ experiment uses the existing MINOS near and far detectors and takes advantage of the fact that the post-shutdown NuMI neutrino beam differs from earlier operation. The new beam, which is optimized for the NOvA experiment, yields higher-energy neutrinos at the location of the MINOS detector and should not show measurable oscillations. This means that MINOS+ can look for surprises. New types of neutrino interactions could deform the spectrum at the far detector’s distance of 735 km and the observation of additional neutrinos would indicate new physics. The experiment can even search for extra dimensions.

MINERvA – located in front of the MINOS near detector – is a dedicated neutrino-interaction experiment designed to study a range of nuclei. These measurements will not only improve understanding of the nucleus but will also be important inputs to neutrino-oscillation experiments. The MINERvA detector has several targets including helium, carbon, scintillator, water, steel and lead, followed by precise tracking and calorimetry. Previously, MINERvA took data in a beam around 3 GeV, where quasi-elastic, resonance and deep-inelastic scattering processes contribute roughly equally to the event rates. With the new, higher-energy neutrino beam, the event rate is much higher and the events are dominated by deep-inelastic scattering. While MINERvA will study all processes at higher energy, the huge increase in deep-inelastic scattering events in particular will allow precise measurements of the nuclear structure-functions.

Breaking news: The 2013 Nobel Prize in Physics


François Englert and Peter W Higgs have been awarded the 2013 Nobel Prize in Physics “for the theoretical discovery of a mechanism that contributes to our understanding of the origin of mass of subatomic particles, and which recently was confirmed through the discovery of the predicted fundamental particle, by the ATLAS and CMS experiments at CERN’s Large Hadron Collider”. The discovery was announced by the ATLAS and CMS collaborations at CERN on 4 July last year.

CLOUD shines new light on aerosol formation in atmosphere


The CLOUD experiment at CERN, which is studying whether cosmic rays have a climatically significant effect on aerosols and clouds, is also tackling one of the most challenging and long-standing problems in atmospheric science – understanding how new aerosol particles are formed in the atmosphere and the effect that these particles have on climate. In a major step forward, the CLOUD collaboration has made the first measurements – either in the laboratory or in the atmosphere – of the formation rates of atmospheric aerosol particles that have been identified with clusters of precisely known molecular composition.

Atmospheric aerosol particles cool the climate by reflecting sunlight and by forming smaller but more numerous cloud droplets, which makes clouds brighter and extends their lifetimes. By current estimates, about half of all cloud drops are formed on aerosol particles that were “nucleated” – that is, produced from the clustering of tiny concentrations of atmospheric molecules rather than being emitted directly into the atmosphere, as happens with sea-spray particles. Nucleation is therefore likely to be a key process in climate regulation. However, the physical mechanisms of nucleation are not understood, nor is it known which molecules participate in nucleation and whether they derive from natural sources or are emitted by human activities.

CLOUD has studied the formation of new atmospheric particles in a specially designed chamber under extremely well controlled laboratory conditions of temperature, humidity and concentrations of nucleating vapours. This chamber is the first to reach the challenging technical requirements on ultra-low levels of contaminants that are necessary to carry out these experiments in the laboratory. Using state-of-the-art instruments that are connected to the chamber, the experiment can measure extremely low concentrations of atmospheric vapours. It can also study the precise molecular make-up and growth of newly formed molecular clusters from single molecules up to stable aerosol particles.

This has enabled CLOUD to measure the formation of particles caused by sulphuric acid and tiny concentrations of dimethylamine, near the level of 1 molecule per trillion (10¹²) air molecules. The measurements, made at 278 K and 38% relative humidity, involved different combinations of sulphuric acid (H2SO4) and water (H2O), with ammonia (NH3) or dimethylamine (DMA). The figure shows the results from CLOUD together with various atmospheric measurements and theoretical expectations based on quantum chemical calculations of cluster binding energies. The results indicate that amines at typical atmospheric concentrations of only a few parts per trillion by volume combine with sulphuric acid to form highly stable aerosol particles at rates that are similar to those observed in the lower atmosphere. The figure also shows that these highly detailed measurements allow a fundamental understanding of the nucleation process at the molecular level because they can be reproduced by the theoretical calculations of molecular clustering.

Amines are atmospheric vapours that are closely related to ammonia. Derived largely from anthropogenic activities – mainly animal husbandry – they are also emitted by the oceans, the soil and from biomass burning. The results from CLOUD suggest that natural and anthropogenic sources of amines could influence climate. CLOUD has also found that ionization by cosmic rays has only a small effect on the formation rate of amine–sulphuric-acid particles, suggesting that cosmic rays are unimportant for the generation of these particular aerosol particles in the atmosphere.

• The CLOUD collaboration consists of the California Institute of Technology, Carnegie Mellon University, CERN, Finnish Meteorological Institute, Helsinki Institute of Physics, Johann Wolfgang Goethe University Frankfurt, Karlsruhe Institute of Technology, Lebedev Physical Institute, Leibniz Institute for Tropospheric Research, Paul Scherrer Institute, University of Beira Interior, University of Eastern Finland, University of Helsinki, University of Innsbruck, University of Leeds, University of Lisbon, University of Manchester, University of Stockholm and University of Vienna.

LHCb plans for cool pixel detector

As the first long shutdown since the start-up of the LHC continues, many teams at CERN are already preparing for the improvements in performance foreseen for when the machine restarts after the second long shutdown, in 2019. The LHCb collaboration, for one, has recently approved the choice of technology for the upgrade of its Vertex Locator (VELO), giving the go-ahead for a new pixel detector to replace the current microstrip device.


The collaboration is working towards a major upgrade of the LHCb experiment for the restart of data-taking in 2019. Most of the subdetectors and electronics will be replaced so that the experiment can read out collision events at the full rate of 40 MHz. The upgrade will also allow LHCb to run at higher luminosity and eventually accumulate an order of magnitude more data than was foreseen with the current set-up.

The job of the VELO is to peer closely at the collision region and reconstruct precisely the primary and secondary interaction vertices. The aim of the upgrade of this detector is to reconstruct events with high speed and precision, allowing LHCb to extend its investigations of CP violation and rare phenomena in the world of beauty and charm mesons.

The new detector will contain 40 million pixels, each measuring 55 μm square. The pixels will form 26 planes arranged perpendicularly to the LHC beams over a length of 1 m (see figure). The sensors will come so close to the interaction region that the LHC beams will have to thread their way through an aperture of only 3.5 mm radius.

Operating this close to the beams will expose the VELO to a high flux of particles, requiring new front-end electronics capable of spitting out data at rates of around 2.5 Tbits/s from the whole VELO. To develop suitable electronics, LHCb has been collaborating closely with the Medipix3 collaboration. The groups involved have recently celebrated the successful submission and delivery of the Timepix3 chip. The VeloPix chip planned for the read-out of LHCb’s new pixel detector will use numerous Timepix3 features. The design should be finalized about a year from now.

An additional consequence of the enormous track rate is that the VELO will have to withstand a considerable radiation dose. This means that it requires highly efficient cooling, which must also be extremely lightweight. LHCb has therefore been collaborating with CERN’s PH-DT group and the NA62 collaboration to develop the concept of microchannel cooling for the new pixel detector. Liquid CO2 will circulate in miniature channels etched into thin silicon plates, evaporating under the sensors and read-out chips to carry the heat away efficiently. The CO2 will be delivered via novel lightweight connectors that are capable of withstanding the high pressures involved. LHCb will be the first experiment to use evaporative CO2 cooling in this way, following on from the successful experience with CO2 cooling delivered via stainless steel pipes in the current VELO.
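To see roughly why two-phase cooling removes so much heat with so little material, a back-of-the-envelope estimate (illustrative numbers, not figures from the LHCb design) uses the latent heat of vaporization of CO2:

```latex
% Heat removed when a fraction x of the mass flow \dot{m} evaporates
\dot{Q} \;\approx\; \dot{m}\,x\,\Delta h_{\mathrm{vap}},
\qquad
\Delta h_{\mathrm{vap}}(\mathrm{CO_2},\ \text{around } -30\,^{\circ}\mathrm{C}) \;\approx\; 300~\mathrm{kJ/kg}.
```

A flow of only about 1 g/s that evaporates fully can therefore carry away of the order of 300 W, which is why thin silicon plates with miniature channels can cope with the heat load of densely packed sensors and read-out chips.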

All of these novel concepts combine to make a “cool” pixel detector, well equipped to do the job for the LHCb upgrade.

Genetic multiplexing: how to read more with less electronics


Modern physics experiments often require the detection of particles over large areas with excellent spatial resolution. This inevitably leads to systems equipped with thousands, if not millions, of read-out elements (strips, pixels) and consequently the same number of electronic channels. In most cases, it increases the total cost of a project significantly and can even be prohibitive for some applications.

In general, the size of the electronics can be reduced considerably by connecting several read-out elements to a single channel through an appropriate multiplexing pattern. However, any grouping implies a certain loss of information, which means that ambiguities can occur. Sébastien Procureur, Raphaël Dupré and Stéphan Aune at CEA Saclay and IPN Orsay have devised a method of multiplexing that overcomes this problem. Starting from the assumption that a particle leaves a signal on at least two neighbouring elements, they built a pattern in which the loss of information coincides exactly with this redundancy of the signal, therefore minimizing the ambiguities of localization. In this pattern, two given channels are connected to several strips in such a way that these strips are consecutive only once in the whole detector. The team has called this pattern “genetic multiplexing” for its analogy with DNA, as a sequence of channels uniquely codes the particle’s position.

Combinatorial considerations indicate that, using a prime number p of channels, a detector can be equipped with at most p(p–1)/2+1 read-out strips. Furthermore, the degree of multiplexing can be adapted easily to the incident flux. Simulations show that a reduction in the electronics by a factor of two can still be achieved at rates up to the order of 10 kHz/cm2.
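One way to see where the p(p–1)/2+1 bound comes from, and to generate a pattern with the required property, is to note that a strip-to-channel sequence in which every consecutive pair of channels occurs only once is a trail in the complete graph on p channels; for odd p an Eulerian circuit uses every one of the p(p–1)/2 channel pairs exactly once. The sketch below (Python; an illustrative construction, not necessarily the specific pattern published by Procureur et al.) builds such a maximal pattern and checks that it is unambiguous:

```python
# Illustrative sketch: realise a multiplexing pattern as an Eulerian circuit of
# the complete graph K_p on p channels, so that consecutive strips always map
# to a channel pair that occurs nowhere else in the detector. The maximum
# number of strips is then (edges of K_p) + 1 = p(p-1)/2 + 1.

def multiplexing_pattern(p):
    """Return a strip -> channel list of length p*(p-1)//2 + 1 (p odd)."""
    # remaining unused channel pairs, stored as an adjacency structure
    adj = {v: set(range(p)) - {v} for v in range(p)}
    # Hierholzer's algorithm: an Eulerian circuit exists because every
    # channel has even degree p-1 when p is odd
    stack, circuit = [0], []
    while stack:
        v = stack[-1]
        if adj[v]:
            u = adj[v].pop()
            adj[u].remove(v)
            stack.append(u)
        else:
            circuit.append(stack.pop())
    return circuit

def is_unambiguous(pattern):
    """Every consecutive channel pair must be distinct and occur only once."""
    pairs = [frozenset(pattern[i:i + 2]) for i in range(len(pattern) - 1)]
    return all(len(s) == 2 for s in pairs) and len(pairs) == len(set(pairs))

p = 61                                      # as in the 61-channel prototype below
pattern = multiplexing_pattern(p)
print(len(pattern), p * (p - 1) // 2 + 1)   # 1831 1831: the 1024-strip prototype
                                            # sits well under this maximum
print(is_unambiguous(pattern))              # True
```

The graph argument itself needs only that p is odd; the prime-number condition quoted above belongs to the authors’ own construction.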

The team has successfully built and tested a large, 50 × 50 cm2 Micromegas (micro-pattern gaseous detector) with such a pattern, the 1024 strips being read out with only 61 channels. The prototype showed the same spatial resolution as a non-multiplexed detector (Procureur et al. 2013). A second prototype, built with resistive-strip technology to achieve efficiencies close to 100%, will be tested soon.

The possibility of building large micro-pattern detectors with up to 30 times less electronics opens the door for new applications both within and beyond particle physics. In muon tomography, this multiplexing could be used to image large objects with an unprecedented accuracy, either by deflection (containers, trucks, manufacturing products) or by absorption (geological structures such as volcanoes, large monuments such as a cathedral roof). The reduction of the electronics and power consumption also suggests applications in medical imaging or dosimetry, where light, portable systems are required. Meanwhile, in particle physics this multiplexing could bring a significant reduction in the cost of electronics – after optimizing the number of channels with the incident flux – and simplifications in integration and cooling.

LBNE gains new partners from Brazil, Italy and UK

In mid-September, the Long Baseline Neutrino Experiment (LBNE) collaboration, based at Fermilab, welcomed the participation of 16 additional institutions from Brazil, Italy and the UK. The new members represent a significant increase in overall membership of more than 30% compared with a year ago. Now, more than 450 scientists and engineers from more than 75 institutions participate in the LBNE science collaboration. They come from universities and national laboratories in the US, India and Japan, as well as Brazil, Italy and the UK.

The swelling numbers strengthen the case to pursue an LBNE design that will maximize its scientific impact. In mid-2012, an external review panel recommended phasing LBNE to meet the budget constraints of the US Department of Energy (DOE). In December the project received the DOE’s Critical Decision 1 (CD-1) approval on its phase 1 design, which excluded both the near detector and an underground location for the far detector. However, the CD-1 approval explicitly allows for an increase in design scope if new partners are able to contribute additional resources. Under this scenario, goals for a new, expanded LBNE phase 1 bring back these excluded design elements, which are crucial to execute a robust and far-reaching neutrino, nucleon-decay and astroparticle-physics programme.

Volatile millisecond pulsar validates theory

For the first time, astronomers have caught a pulsar in a crucial transitional phase that explains the origin of the mysterious millisecond pulsars. The newly found pulsar swings back and forth between accretion-powered X-ray emission and rotation-driven radio emission, providing conclusive evidence for a 30-year-old model that explains the high spin rate of millisecond pulsars as the result of matter accretion from a companion star.

Pulsars are the highly magnetized, spinning remnants of supernova explosions of massive stars and are primarily observed as pulsating sources of radio waves. The radio emission is powered by the rotating magnetic field and focused in two beams that stem from the magnetic poles. Like a rotating lighthouse beacon, the rotation of the pulsar swings the emission cone through space, so that distant observers see regular pulses of radio waves (CERN Courier March 2013 p12). It is actually the kinetic rotational energy of the neutron star that is radiated away, leading to a gradual slow-down of the rotation. While pulsars spin rapidly at birth, they tend to rotate more slowly – with periods of up to a few seconds – as they age. For this reason, astronomers in the 1980s were puzzled by the discovery of millisecond pulsars – old but extremely quickly rotating pulsars with periods of a few thousandths of a second.

The mysterious millisecond pulsars can be explained through a theoretical model known as the “recycling” scenario. If a pulsar is part of a binary system and is accreting matter from a stellar companion via an accretion disc, then it might also gain angular momentum. This process can “rejuvenate” old pulsars, boosting their rotation and making their periods as short as a few milliseconds. This scenario relies on the existence of accreting pulsars in binary systems, which can be detected through the X rays that are emitted in the accretion process. The discovery in the 1990s of the first X-ray millisecond pulsars was the first evidence for this model but, until now, the search for a direct link between X-ray-bright millisecond pulsars in binary systems and the radio-emitting millisecond pulsars has been in vain.

Now, the missing link to prove the validity of the scenario has finally been discovered by the wide-field IBIS/ISGRI imager on board ESA’s INTEGRAL satellite. A new X-ray source appeared in images taken on 28 March 2013 at the position of the globular cluster M28. Subsequent observations by the XMM-Newton satellite found a modulation of its X-ray emission at a period of 3.9 ms, revealing that the neutron star spins incredibly fast, at more than 250 rotations per second. A very clear modulation of the delay in the pulse arrival time further showed that a low-mass companion star orbits the pulsar every 11 hours.

These results, obtained by an international team led by Alessandro Papitto from the Institute of Space Sciences in Barcelona, were then compared with the properties of a series of known radio pulsars in M28 and – luckily – one was found with precisely the same values. There is therefore no doubt that the radio and X-ray sources are the same pulsar, providing the missing link that validates the recycling scenario of millisecond pulsars. Follow-up radio observations by several antennae in Australia, the Netherlands and the US showed that the source does not exhibit radio pulsations when active in X-rays and vice versa. It was only at the end of the X-ray outburst, on 2 May, that radio pulsations resumed.

This bouncing behaviour is caused by the interplay between the pulsar’s magnetic field and the pressure of the accreted matter. When accretion dominates, the source emits X rays and radio emission is inhibited by the presence of the accretion disc, which closes the magnetic field lines. When accretion subsides, the magnetosphere pushes the disc back out and the rotation-powered radio emission can switch on again.
