Cosmic Ray Origin: Beyond the Standard Models

By Omar Tibolla et al. (eds)
Elsevier
Nuclear Physics B (Proc. Suppl.) 256–257 (2014)

Where do cosmic rays, discovered more than a century ago, come from? The standard model of their origin points to natural particle accelerators in the form of shock waves in supernova remnants, but there is mounting experimental evidence that there are other sources. This conference brought together a range of experts to examine the evidence and to consider some of the key questions. What other sources might there be in the Galaxy? What causes the knee? Where (in energy) is the transition to an extragalactic component? What extragalactic sources are conceivable?

The Beauty of Physics: Patterns, Principles, and Perspectives

By A R P Rau
Oxford University Press
Hardback: £25
Also available as an e-book

The selection of topics in this book reflects the author’s four-decade career in research physics and his resultant perspective on the subject. While aimed primarily at physicists, including junior students, it also addresses readers who are willing to think with symbols and simple algebra to understand the physical world. Each chapter, on themes such as dimensions, transformations, symmetries, or maps, begins with simple examples accessible to all, connecting them later to more sophisticated realizations in more advanced topics of physics.

Crackle and Fizz: Essential Communication and Pitching Skills for Scientists

By Caroline van den Brul
Imperial College Press
Hardback: £35
Paperback: £15
E-book: £11

The introduction of Crackle and Fizz sets out a trope that may sound familiar: a decade-old social faux pas between scientists and journalists at a dinner party, where a speed-dating format for presenting science was met with ire and derision, and nobody had a nice time. The claim is that this could have been a chance to start over, to reframe science communication and realign the expectations of those involved. But that framing overlooks the past few decades of development in the science-communication field, which has now reached a reflective maturity and an established presence across academia, industry and media. Unfortunately, the same erasure is a leitmotif in many of the chapters that follow.

Caroline van den Brul’s credentials are impressive, with years at the helm of BBC productions and engagement workshops. This history forms the backbone of the book, setting an anecdote-per-chapter rate that reads more like autobiography than an attempt to impart lessons or experience to the reader. The remaining space is given over to narrative devices useful in contextualizing topics, and to engagement from a practitioner’s perspective. However, these are explored only superficially and offer little variation. After many pages promoting the importance of clarity, the titular “Crackle” is eventually revealed in the final chapter to be a (somewhat forced) acronym that summarizes and distils all the preceding guidance. Had this been the starting point from which each aspect was explored in depth, the tone and flow of the book might have made for a more compelling read. Used as the conclusion, it feels condescendingly simplified. It is a shame that, considering van den Brul’s history, the final chapter is the main one worth reading.

Overall, the book feels less like the anticipated dive into years of experience, and more like a pre-lunch conference workshop. If you are in the first stages of incorporating engagement and communication into your current practice, working through each chapter’s closing questions could be of some use. Or, should you feel like refreshing your current framework, they might give you a moment’s pause and adjustment, but no more than any other evaluation.

A Chorus of Bells and Other Scientific Inquiries

By Jeremy Bernstein
World Scientific
Hardback: £25
E-book: £19

In this volume of essays, written across a decade, Bernstein covers a breadth of subject matter. The first part, on the foundations of quantum theory, reflects the author’s conversations with the late John Bell, who persuaded him that there is still no satisfactory interpretation of the theory. The second part deals with nuclear weapons, and includes an essay on the creation of the modern gas centrifuge by German prisoners of war in the Soviet Union. Two shorter sections follow: the first on financial engineering, with a profile of Louis Bachelier, the French mathematician who created the subject at the beginning of the 20th century; the second and final part is on the Higgs boson and its role in generating mass.

To Explain the World: The Discovery of Modern Science

By Steven Weinberg
Harper Collins/Allen Lane
Hardback: £20/$28.99
Also available at the CERN bookshop

Steven Weinberg’s most recent effort is neither a treatise on the history of science nor a philosophical essay. The author presents instead his own panoramic view of the meandering roads leading to the Newtonian synthesis between terrestrial and celestial physics, rightfully considered as the beginning of a qualitatively new era in the development of basic science.

The first and second parts of the book deal, respectively, with Greek physics and astronomy. The remaining two parts are dedicated to the Middle Ages and to the scientific revolution of Copernicus, Galileo and Newton. The aim is to distil those elements that are germane to the development of modern science. The style is more persuasive than assertive: excerpts from philosophers, poets and historians are abundantly quoted and reproduced, with the aim of corroborating the specific viewpoints conveyed in the text. A similar strategy is employed when dealing with the scientific concepts involved in the discussion. More than a third of the book’s 416 pages is given over to a series of 35 “technical notes” – a quick reminder of a variety of geometric, physical and astronomical themes (the Thales theorem, the careful explanation of epicycles for inner and outer planets, the theory of rainbows, and various other topics relevant to the main discussion of the text).

Passing before you through the pages, you will see not only Plato and Aristotle, but also Omar Khayyam, Albertus Magnus, Robert Grosseteste and many other progenitors of modern scientists. Nearly 2000 years separate the natural philosophy of the “Timaeus” from the birth of the scientific method. Many elements contributed serendipitously to the evolution leading from Plato to Galileo and Newton: the development of algebra and geometry, the divorce between science and religion, and an improved attitude of abstract thinkers towards technology. All of these aspects have certainly been important for the tortuous emergence of modern science. But are they sufficient to explain it? Scientists, historians and laymen will be able to draw their own lessons from the past as presented here, and this is just one of the intriguing aspects of this interdisciplinary book.

After reading this book quietly, you might be led to conclude that good scientific ideas and daring conjectures take a long time to mature. It has been an essential feature of scientific progress to understand which problems are ripe for study and which are not. No one could have made progress in understanding the nature of the electron before the advent of quantum mechanics. The plans for tomorrow require not only boldness and fantasy, but also a certain realism that can be trained by looking at the lessons of the past. Today’s most interesting questions may not be scientifically answerable tomorrow, and lasting progress does not come by looking along a single line of sight, but all around, where there are mature phenomena to be scrutinized. This seems to be true for science as a whole, and in particular for physics.

The Oskar Klein Memorial Lectures 1988–1999

By Gösta Ekspong (ed.)
World Scientific
Hardback: £45
E-book: £34

Perhaps every reader of CERN Courier has heard about the Klein–Gordon equation, the Klein–Nishina (Compton effect) cross-section, the Klein paradox and the Kaluza–Klein compactified five-dimensional unified theory of gravity, electricity and magnetism. However, few will know about the scientist, Oskar Klein (1894–1977), the pre-eminent and visionary Swedish theoretical physicist from Stockholm whose work continues to influence us to this day.

This book is needed. The reason is described eloquently in the contribution by Alan Guth, whose words I paraphrase: how many recognize Oskar as the first name of “this” Klein? Compare here (by birth year, within 10 years): Niels B (1885), Hermann W (1885), Erwin S (1887), Satyendra N B (1894), Wolfgang P (1900), Enrico F (1901), Werner H (1901), Paul A M D (1902), Eugene W (1902), Robert O (1904). Thanks to this book, Oskar K (1894) will take his place on this short list.

Part of the book collects together all of the Oskar Klein Memorial Lectures given since the series began at Stockholm University in 1988, through to 1999, by many well-known theoreticians, from Chen Ning Yang to Gerard ’t Hooft. Some of these lectures relate to Klein because he often happened to “be there” at the beginning of a new field in physics. For example, in early 1948, Klein recognized immediately, following the disambiguation of the pion and muon, that muon decay and common beta decay can be described by the same four-fermion interaction (see the contribution by T D Lee).

The other part of the book – a third of the 450 pages – is a biographical collection about Klein and his pivotal scientific articles (about a fifth of the volume), all presented in English, although Klein published in Danish, French, English, German and Swedish, as a check of the titles in his publication list reveals. Having Klein’s important work all in one place can lead to interesting insights: for me, finding that 24 December 1928 was a special birthday.

On this day, just eight weeks after the Klein–Nishina paper on the interaction of radiation with electrons, the paper on the Klein paradox reached the editors of Zeitschrift für Physik. Klein concludes: “…(the) difficulty of the relativistic quantum mechanics emphasized by Dirac can appear already in purely mechanical problems where no radiation processes are involved.” The yet-to-be-recognized and discovered antiparticle – the positron – was the “difficulty”, allowing for both radiative and field-instigated pair production (the “paradox”), when vacuum instability is inherent in a prescribed external field configuration.

The Klein-paradox result soon resurfaced in the work of Werner Heisenberg and Hans Euler, and of Julian Schwinger, on the vacuum properties of QED. Today, as we head towards the centenary of the Klein paradox, pair production in strong fields is being addressed as a priority within the large community interested in ultra-intense laser pulses.

Oskar Klein was always a colleague I wished I could meet, and finally, I have. Thank you, Gösta Ekspong, for this introduction to my new-found hero. While at first my profound personal interest in this book arose from curiosity originating from many years of working out the consequences of the Klein paradox in heavy-ion collisions, I now see how Klein can serve as a role model. This is the book to own for anyone interested in seeing further by “standing on the shoulders of giants”.

Stable beams at 13 TeV

At 10.40 a.m. on 3 June, the LHC operators declared “stable beams” for the first time at a beam energy of 6.5 TeV. It was the signal for the LHC experiments to start taking physics data for Run 2, this time at a collision energy of 13 TeV – nearly double the 7 TeV with which Run 1 began in March 2010. After a shutdown of almost two years and several months of recommissioning, first without and then with beam, the world’s largest particle accelerator was back in business. Under the gaze of the world via a live webcast and blog, the LHC’s two counter-circulating beams, each with three bunches of nominal intensity (about 10¹¹ protons per bunch), were taken through the full cycle from injection to collisions. This was followed by the declaration of stable beams and the start of Run 2 data taking.

The occasion marked the nominal end of an intense eight weeks of beam commissioning (CERN Courier May 2015 p5 and June 2015 p5) and came just two weeks after the first test collisions at the new record-breaking energy. On 20 May at around 10.30 p.m., protons collided in the LHC at 13 TeV for the first time. These test collisions were to set up various systems, in particular the collimators, and were established with beams that were “de-squeezed” to make them larger at the interaction points than during standard operation. This set-up was in preparation for a special run for the LHCf experiment (“LHCf makes the most of a special run”), and for luminosity calibration measurements by the experiments where the beams are scanned across each other – the so-called “van der Meer scans”.

Progress was also made on the beam-intensity front, with up to 50 nominal bunches per beam brought into stable beams by mid-June. There were some concerns that an unidentified obstacle in the beam pipe of a dipole in sector 8-1 could be affected by the higher beam currents. This proved not to be the case – at least so far. No unusual beam losses were observed at the location of the obstacle, and the steps towards the first sustained physics run continued.

The final stages of preparation for collisions involved setting up the tertiary collimators (CERN Courier September 2013 p37). These are situated on the incoming beam about 120–140 m from the interaction points, where the beams are still in separate beam pipes. The local orbit changes in this region both during the “squeeze” to decrease the beam size at the interaction points and after the removal of the “separation bumps” (produced by corrector magnets to keep the beams separated at the interaction points during the ramp and squeeze). This means that the tertiary collimators must be set up with respect to the beam, both at the end of the squeeze and with colliding beams. In contrast, the orbit and optics at the main collimator groupings in the beam-cleaning sections at points 7 and 3 are kept constant during the squeeze and during collisions, so their set-up remains valid throughout all of the high-energy phases.

By the morning of 3 June, all was ready for the planned attempt at the first “stable beams” of Run 2, with three bunches of protons at nominal intensity per beam. At 8.25 a.m., the injection of beams of protons from the Super Proton Synchrotron to the LHC was complete, and the ramp to increase the energy of each beam to 6.5 TeV began. However, the beams were soon dumped during the ramp by the software interlock system. The interlock was related to a technical issue with the interlocked beam-position monitor system, but this was rapidly resolved. About an hour later, at 9.46 a.m., three nominal bunches were once more circulating in each beam and the ramp to 6.5 TeV had begun again.

At 10.06 a.m., the beams had reached their top energy of 6.5 TeV and the “flat top” at the end of the ramp. The next step was the “squeeze”, using quadrupole magnets on both sides of each experiment to decrease the size of the beams at the interaction point. With this successfully completed by 10.29 a.m., it was time to adjust the beam orbits to ensure an optimal interaction at the collision points. Then at 10.34 a.m., monitors showed that the two beams were colliding at a total energy of 13 TeV inside the ATLAS and CMS detectors; collisions in LHCb and ALICE followed a few minutes later.

At 10.42 a.m., the moment everyone had been waiting for arrived – the declaration of stable beams – accompanied by applause and smiles all round in the CERN Control Centre. “Congratulations to everybody, here and outside,” CERN’s director-general, Rolf Heuer, said as he spoke with evident emotion following the announcement. “We should remember this was two years of teamwork. A fantastic achievement. I am touched. I hope you are also touched. Thanks to everybody. And now time for new physics. Great work!”

The eight weeks of beam commissioning had seen a sustained effort by many teams working nights, weekends and holidays to push the programme through. Their work involved optics measurements and corrections, injection and beam-dump set-up, collimation set-up, wrestling with various types of beam instrumentation, optimization of the magnetic model, magnet aperture measurements, etc. The operations team had also tackled the intricacies of manipulating the beams through the various steps, from injection through ramp and squeeze to collision. All of this was backed up by the full validation of the various components of the machine-protection system by the groups concerned. The execution of the programme was also made possible by good machine availability and the support of other teams working on the injector complex, cryogenics, survey, technical infrastructure, access, and radiation protection.

Over the two-year shutdown, the four large experiments ALICE, ATLAS, CMS and LHCb also went through an important programme of maintenance and improvements in preparation for the new energy frontier.

Alongside consolidation and improvements to 19 subdetectors, the ALICE collaboration installed a new dijet calorimeter to extend the coverage of the electromagnetic calorimeter, allowing measurement of the energy of photons and electrons over a larger angular range (CERN Courier May 2015 p35). The transition-radiation detector, which detects particle tracks and identifies electrons, has also been completed with the addition of five more modules.

A major step during the long shutdown for the ATLAS collaboration was the insertion of a fourth and innermost layer in the pixel detector, to provide the experiment with better precision in vertex identification (CERN Courier June 2015 p21). The collaboration also used the shutdown to improve the general ATLAS infrastructure, including electrical power, cryogenic and cooling systems. The gas system of the transition-radiation tracker, which contributes to the identification of electrons as well as to track reconstruction, was modified significantly to minimize losses. In addition, new chambers were added to the muon spectrometer, the calorimeter read-out was consolidated, the forward detectors were upgraded to provide a better measurement of the LHC luminosity, and a new aluminium beam pipe was installed to reduce the background.

To deal with the increased collision rate that will occur in Run 2 – which presents a challenge for all of the experiments – ATLAS improved the whole read-out system to be able to run at 100 kHz and re-engineered all of the data-acquisition software and monitoring applications. The trigger system was redesigned, going from three levels to two, while implementing smarter and faster selection algorithms. It was also necessary to reduce the time needed to reconstruct ATLAS events, despite the additional activity in the detector. In addition, an ambitious upgrade of simulation, reconstruction and analysis software was completed, and a new generation of data-management tools on the Grid was implemented.

The biggest priority for CMS was to mitigate the effects of radiation on the performance of the tracker, by equipping it to operate at low temperatures (down to –20 °C). This required changes to the cooling plant and extensive work on the environment control of the detector and cooling distribution to prevent condensation or icing (CERN Courier May 2015 p28). The central beam pipe was replaced by a narrower one, in preparation for the installation in 2016–2017 of a new pixel tracker that will allow better measurements of the momenta and points of origin of charged particles. Also during the shutdown, CMS added a fourth measuring station to each muon endcap, to maintain discrimination between low-momentum muons and background as the LHC beam intensity increases. Complementary to this was the installation at each end of the detector of a 125-tonne composite shielding wall to reduce neutron backgrounds. A luminosity-measuring device, the pixel luminosity telescope, was installed on either side of the collision point around the beam pipe.

Other major activities for CMS included replacing photodetectors in the hadron calorimeter with better-performing designs, moving the muon read-out to more accessible locations for maintenance, installation of the first stage of a new hardware triggering system, and consolidation of the solenoid magnet’s cryogenic system and of the power distribution. The software and computing systems underwent a significant overhaul during the shutdown to reduce the time needed to produce analysis data sets.

To make the most of the 13 TeV collisions, the LHCb collaboration installed the new HeRSCheL detector – High Rapidity Shower Counters for LHCb. This consists of a system of scintillators installed along the beamline up to 114 m from the interaction point, to define forward rapidity gaps. In addition, one section of the beryllium beam pipe was replaced, and the new beam-pipe support structure is much lighter.

The CERN Data Centre has also been preparing for the torrent of data expected from collisions at 13 TeV. The Information Technology department purchased and installed almost 60,000 new cores and more than 100 PB of additional disk storage to cope with the increased amount of data that is expected from the experiments during Run 2. Significant upgrades have also been made to the networking infrastructure, including the installation of new uninterruptible power supplies.

First stable beams was an important step for LHC Run 2, but there is still a long way to go before this year’s target of around 2500 bunches per beam is reached and the LHC starts delivering some serious integrated luminosity to the experiments. The LHC and the experiments will now run around the clock for the next three years, opening up a new frontier in high-energy particle physics.

• Compiled from articles in CERN’s Bulletin and other material on CERN’s website. To keep up to date with progress with the LHC and the experiments, follow the news at bulletin.cern.ch or visit www.cern.ch.

Turkey becomes associate member state of CERN

The Republic of Turkey became an associate member state of CERN on 6 May, following notification that Turkey has ratified an agreement signed last year, granting this status to the country. Turkey’s new status will strengthen the long-term partnership between CERN and the Turkish scientific community. Associate membership will allow Turkey to attend meetings of the CERN Council. Moreover, it will allow Turkish scientists to become members of the CERN staff, and to participate in CERN’s training and career-development programmes. Finally, it will allow Turkish industry to bid for CERN contracts, thus opening up opportunities for industrial collaboration in areas of advanced technology.

The road from CERN to space

Roberto Battiston

The Agenzia Spaziale Italiana (ASI) – the Italian Space Agency – has the tag line “The road to space goes through Italy.” Make a simple change and it becomes a perfectly apt summary of the career to date of the agency’s current president. For Roberto Battiston, the road to space goes through CERN.

As a physics student at the famous Scuola Normale in Pisa, which has provided many of CERN’s notable physicists, he studied the production of dimuons in proton collisions at the Intersecting Storage Rings, under the guidance of Giorgio Bellettini. For his PhD, he moved in 1979 to the University of Paris XI in Orsay, where his thesis was on the construction of the central wire-proportional chamber of UA2, the experiment that went on, with UA1, to discover the W and Z particles at CERN. Until 1995, his research focused on electroweak physics, first at the SLAC Linear Collider and then, back at CERN, at the L3 experiment at the Large Electron–Positron collider. However, at the point when the LHC project was on its starting blocks, his interest began to turn towards cosmic rays. With Sam Ting, who led the L3 experiment, Battiston became involved in the Alpha Magnetic Spectrometer, which as AMS-02 has now been taking data on board the International Space Station (ISS) for four years (CERN Courier July/August 2011 p18). Three years after the launch of AMS-02, Battiston found himself closer to space, at least metaphorically, when he was appointed president of ASI in May 2014.

The decision to move away from experiments at the LHC will surprise many people. How do you explain your unconventional choice?

The LHC, a machine of extraordinary importance, as its results have shown, was the obvious choice for someone who wanted to continue a research career in particle physics. But I chose to take a less-beaten path. In space, less has been researched and less has been discovered than at accelerators. I realized that, in both neutral and charged cosmic rays, we are presented with information that is waiting to be decoded, potentially hiding unforeseen discoveries. The universe is, by definition, the ultimate laboratory of physics, a place where, in the various phases of its evolution, matter and energy have reached all of the possible conditions one could imagine – conditions that we will never be able to reproduce artificially. For this reason, when I discussed with Sam Ting in 1994 what the most interesting new project would be – whether to go for an LHC experiment or, radically, for a new direction – I had no hesitation: space and space exploration immediately triggered my enthusiasm and curiosity. I absolutely do not regret this choice.

Was your experience and know-how as a high-energy physicist useful for the construction and, now, the operation of AMS?

The AMS detector was designed exactly like the LHC experiments. It has a magnetic spectrometer with a particle tracker and particle identifiers. Subdetectors are positioned before and after the magnet and the tracker, to identify the types of particles passing through the experiment. We use the same approach as at accelerators – 99% of the events are thrown away, the interesting ones being the few that remain. However, within these data, processes that we still do not know about remain potentially hidden. The challenge is to find new methods to look at this radiation and extract a signal, exactly as at the LHC. The difference is that the trigger rate is kilohertz in space, rather than gigahertz at the LHC: AMS gets one or two particles at a time instead of hundreds of thousands per event. Moreover, space offers some advantages and optimal conditions for detecting particles: surprisingly, it provides stable environmental conditions, so detectors that on the ground would suffer from environmental changes – such as excessive heat, atmospheric-pressure changes or humidity – enjoy ideal conditions in space. Silicon detectors, transition-radiation detectors, electromagnetic calorimeters and Cherenkov detectors have performed much better than the best detectors on the ground.

But in space you must face more complex challenges that put constraints on your instrument’s design?

Given the complexity of the current LHC experiments, the situation is comparable. Repairing a huge detector 100 m below ground is as difficult as repairing a detector in space. If something breaks down underground, dismantling the whole structure of a detector might require months if not a year. Everything in both environments must have sufficient reliability to operate for a long time. In space, radiation doses are relatively small compared with the doses that the detectors can sustain, but there are problems of the shock at launch, pressure drops, extreme temperatures and the ability to operate in a vacuum, so the tests that a detector must pass to be able to perform in space are severe. Shock and stress resistance at launch require the detectors to be more robust than those built to stay on Earth. Another huge difference is weight and power. On Earth there are no limits. In space, we must use low-weight instruments – a few tonnes compared with the 10,000 tonnes of the large LHC detectors. And because detectors in space are powered by solar panels, there are power limits – a few kilowatts compared with tens of megawatts at the LHC. So in space, resources are optimized to the last small part.

What about the choice of leading technology vs reliability, for an experiment in space?

It is true that in space we have instruments that are dated, technologically speaking. But AMS is an exception: we made the effort of bringing to space technology developed at CERN since 2000, which has shown itself to be 10–100 times more powerful and effective than current space standards.

Now, with AMS-02 successfully installed on the ISS and reaping promising results, you have been appointed president of the ASI, one of the large European space agencies. What can a physicist like yourself bring to the management of the space industry at the European and international level?

Space is a place where human dreams converge: from photographing the Moon, to walking on Mars, to taking a snapshot of the first instants of the universe – these are global dreams of humanity. Yet space is a different world from physics. In certain aspects, it’s wider. Particle physics is an international discipline, but it is so focused that the bases for discussion are limited, however fascinating and however important the consequences of finding a new brick in the construction of the universe might be. Space is particle physics multiplied to the nth power. It is a context, not just one discipline. Many different sectors interact, but each has its own dynamics – my leitmotif is “interdisciplinarity”. Many different things happen at a fast pace, which requires a great capacity for synthesis and the ability to process a lot of data in a short time. Decisions must be taken so fast that a well-trained brain is needed. I can only thank my tough training in physics research for this. The tough discipline at the basis of research at CERN and in astroparticle physics, the continuous challenge of having to solve complex problems, and the requirement of working in a large community of people with different characters, cultures and languages, typical of experimental physics, are assets within the context of a space agency.

How do large collaborations work in space research? Is it as global as the LHC?

The capability to keep the construction effort of very large accelerators or extremely complex detectors under direct control is still, today, an essential aspect of the high-energy physics community. Space research has not made the transition to a global collaboration in the same way as CERN, because it is still dominated by a strong element of international politics and national prestige. The amount of funding involved and the related industrial aspects and business pressures are so big, that decisions must be taken at the level of heads of state and government.

Is there a difference in approach between NASA and ESA?

They’re both huge agencies, although NASA has four times the budget of ESA. In the past, they’ve collaborated on large projects, but in the past 10 years this collaboration has dimmed, as is the case for LISA [the Laser Interferometer Space Antenna]. Sometimes, such projects are even done in competition, as in the case of WMAP and Planck. The US pulled out of Rosetta long ago, and is now focused on the James Webb Space Telescope. To do so, the US basically chose to stop most international collaborations in science, except for the ISS and exploration. The ISS exists because of a precise political will. It is a demonstration that collaboration in space is decided top-down instead of bottom-up, and it can hold or break according to politics.

AMS will soon be joined in space by new powerful instruments to study cosmic rays. Are we witnessing a change of focus, from particle physics in the lab back to the sky?

Space is a less-frequented frontier, and it is understandable that it is now attracting many physicists. Astroparticle physics is a bridge between the curiosity of particle physicists, who try to understand fundamental problems, and the tradition of astronomy of observing the universe. Two different aspects of physics converge here: deciphering versus photographing and explaining. In astroparticle physics we try to find traces of fundamental phenomena; in astrophysics, to explain what we are able to see.

So what would your advice be to young physics graduates? Where would they best fulfil their research ambitions today?

Physics in space is becoming enormously interesting, and not only for understanding the infinitely small and the infinitely large. In the coming decades, astrophysics and particles studied in space radiation will be the place from where surprises and important discoveries could come, although this will take time and more sophisticated technologies, because the limits of technology are farther from the limits of the observable phenomena in the universe than they are in the case of particle accelerators. Building a new accelerator will require decades and big investments, as well as new technologies, but most of all it will need a discovery indicating where to look. The resources required are so considerable that we will not be able to build such a machine just to explore and see what there is at higher energies, as we did many times in the past. This is less true in astrophysics: there will surely be decades of discoveries with more sophisticated instruments, and the frontiers are far from fully explored. However, physics keeps its outstanding fascination. With current computing capacity, the latest technologies, the present understanding of quantum mechanics, the interactions between physics and biology, and the amount of physics that can be done at the atomic and subatomic level – using many atoms together, cold systems and so on – there are many sectors in which an excellent physicist can find great satisfaction.

And after ASI, will you go back to particle physics?

For the moment I need to put all of my energy into the job that has just started. I have not lost the pleasure of discovery, and the main objective of the years ahead is to support the best ideas in space science and technology, trying to get results as quickly as possible. And of course, I will keep following AMS.

The Mu2e experiment: a rare opportunity

The Mu2e experiment at Fermilab recently achieved an important milestone, when it received the US Department of Energy’s critical-decision 2 (CD-2) approval in March. This officially sets the baselines in the scope, cost and schedule of the experiment. At the same time, the Mu2e collaboration was awarded authorization to begin fabricating one of the experiment’s three solenoids and to begin the construction of the experimental hall, which saw ground-breaking on 18 April (figure 1). The experiment will search with unprecedented sensitivity for the neutrinoless conversion of a muon into an electron.

Some history

The muon was first observed in 1937 in cosmic-ray interactions. The implications of this discovery, which took decades of additional progress in both experiment and theory to reveal, were profound and ultimately integral to the formulation of the Standard Model. Among the cornerstones of the model are symmetries in the underlying mathematics and the conservation laws they imply. This connection between theory (the mathematical symmetries) and experiment (the measurable conservation laws) was formalized by Emmy Noether in 1918, and is fundamental to particle physics. For example, the mathematics describing the motion of a system of particles gives the same answer regardless of where in the universe this system is placed. In other words, the equations of motion are symmetric, or invariant, to translations in space. This symmetry manifests itself as the conservation of momentum. A similar symmetry to translations in time is responsible for the conservation of energy. In this way, in particle physics, observations of conserved quantities offer important insights into the underlying mathematics that describe nature’s inner workings. Conversely, when a conservation law is broken, it often reveals something important about the underlying physics.
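
In Lagrangian language, this chain from symmetry to conservation law takes only a few lines. The sketch below is added here for illustration, using a single coordinate q: if the Lagrangian L(q, q̇) is unchanged by the translation q → q + ε, then ∂L/∂q = 0, and the Euler–Lagrange equation gives

\[
\frac{\mathrm{d}}{\mathrm{d}t}\,\frac{\partial L}{\partial \dot q} \;=\; \frac{\partial L}{\partial q} \;=\; 0
\quad\Longrightarrow\quad
p \;\equiv\; \frac{\partial L}{\partial \dot q} \;=\; \text{constant},
\]

so the momentum conjugate to q is conserved. Repeating the argument for translations in time yields conservation of energy.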

In the Standard Model there are three families of quarks and three families of leptons. Generically speaking, members of the same family interact preferentially with one another. However, it has long been known that quark families mix. The Cabibbo–Kobayashi–Maskawa matrix characterizes the degree to which a particular quark interacts with quarks of a different family. This phenomenon has profound implications, and plays a role in the electroweak interactions that power the Sun and in the origin of CP violation. For decades it appeared that lepton families did not mix: lepton-family number was always conserved in experiments. This changed with the observation that neutrinos mix (Fukuda et al. 1998, Ahmad et al. 2001). This discovery has profound implications; for example, neutrinos must have a finite mass, which requires the addition of a new field or a new interaction to the original Standard Model – the updated Standard Model is sometimes denoted the νSM. Indeed, the implications of neutrino mixing have yet to be revealed fully, and a vigorous worldwide experimental programme is aimed at further elucidating the physics underlying this phenomenon. As often happens in science, the discovery of neutrino oscillations gave rise to a whole new set of questions. Among them is this: if the quarks mix, and the neutral leptons (the neutrinos) mix, what about the charged leptons?

A probe of new physics

Searches for charged-lepton flavour violation (CLFV) have a long history in particle physics. When the muon was discovered, one suggestion was that it might be an excited state of the electron, and so experiments searched for μ → eγ decays (Hicks and Pontecorvo 1948, Sard and Althaus 1948). The non-observation of this reaction, and the subsequent realization that there are two distinct neutrinos produced in traditional muon decay, led physicists to conclude that the muon was a new type of lepton, distinct from the electron. This was an important step along the way to formulating a theory that included several families of leptons (and, eventually, quarks). Nevertheless, searches for CLFV have continued ever since, and it is easy to understand why. In the Standard Model, with massless neutrinos, CLFV processes are strictly forbidden. Therefore, any observation of a CLFV decay would signal unambiguous evidence of new physics beyond the Standard Model. Today, even with the introduction of neutrino mass, the situation is not significantly different. In the νSM, the rate of CLFV decays is proportional to (Δm²ᵢⱼ/MW²)², where Δm²ᵢⱼ is the mass-squared difference between the ith and jth neutrino, and MW is the mass of the W boson. The predicted rates are therefore in the region of 10⁻⁵⁰ or smaller – far below any experimental sensitivity currently conceivable. Therefore, it remains the case that any observation of a CLFV interaction would be a discovery of new physics.
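
The size of this suppression is easy to reproduce numerically. The short sketch below (not from the original article) plugs the atmospheric mass-squared splitting and the W mass into the scaling above; a genuine branching-ratio calculation would also carry neutrino-mixing factors of order one:

```python
# Order-of-magnitude estimate of the nuSM suppression of CLFV rates,
# using the scaling quoted in the text: rate ~ (dm2_ij / M_W^2)^2.
# Illustrative numbers only; mixing-matrix factors are omitted.

dm2_atm = 2.5e-3     # atmospheric mass-squared splitting, eV^2
m_w     = 80.4e9     # W-boson mass, eV

suppression = (dm2_atm / m_w**2) ** 2
print(f"(dm2/M_W^2)^2 ~ {suppression:.1e}")   # ~1.5e-49, i.e. of order 10^-50
```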

The case for pursuing CLFV searches is compelling. A wide variety of models of new physics predict large enhancements relative to the νSM (30–40 orders of magnitude) for CLFV interactions. Extra dimensions, little-Higgs models, leptoquarks, heavy neutrinos, grand unified theories, and all varieties of supersymmetric models predict CLFV rates to which upcoming experiments will have sensitivity (see, for example, Mihara et al. 2013). Importantly, ratios of various CLFV interactions can discriminate among the different models and offer insights into the underlying new physics, complementary to what experiments at the LHC, neutrino experiments, or astroparticle-physics endeavours can accomplish.

The most constraining limits on CLFV come from μ → eγ, muon-to-electron conversion, μ → 3e, K → ll′ and τ decays. In the coming decade, the largest improvements in sensitivity will come from the muon sector. In particular, there are plans for dramatic improvements in sensitivity for the muon-to-electron conversion process, in which the muon converts directly to an electron in the presence of a nearby nucleus with no accompanying neutrinos, μN → eN. The presence of the nucleus is required to conserve energy and momentum. The process is a coherent one and, apart from receiving a small recoil energy, the nucleus is unchanged from its initial state. The Mu2e experiment at Fermilab (Bartoszek et al. 2015) and the COMET experiment at the Japan Proton Accelerator Research Complex (Cui et al. 2009) both aim to improve the current state of the art by a factor of 10,000, starting in the next five years.

The Mu2e experiment

The Mu2e experiment will use the existing Fermilab accelerator complex to take 8-GeV protons from the Booster, rebunch them in the Recycler, and slow-extract them to the experimental apparatus from the Muon Campus Delivery Ring, which was formerly the anti-proton Accumulator/Debuncher ring for the Tevatron. Mu2e will collect about 4 × 10²⁰ protons on target, resulting in about 10¹⁸ stopped muons, which will yield a single-event sensitivity for μN → eN of 2.5 × 10⁻¹⁷ relative to normal muon nuclear capture (μN → νμN′). The expected background yield over the full physics run is estimated to be less than half an event. This gives an expected sensitivity of 6 × 10⁻¹⁷ at 90% confidence level and a 5σ discovery sensitivity for all conversion rates larger than about 2 × 10⁻¹⁶. For comparison, many of the new-physics models discussed above predict rates as large as 10⁻¹⁴, which would yield hundreds of signal events. This projected sensitivity is 10,000 times better than the world’s current best limit (Bertl et al. 2006), and will probe effective mass scales for new physics up to 10⁴ TeV/c², well beyond what experiments at the LHC can explore directly.
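
These headline figures are mutually consistent, as a little arithmetic shows. In the sketch below, the overall detection efficiency is inferred from the quoted numbers and is purely illustrative, not an official Mu2e parameter:

```python
# Back-of-envelope cross-check of the quoted Mu2e sensitivity numbers.
# The efficiency is *derived* from the published figures, not assumed.

protons_on_target = 4e20
stopped_muons     = 1e18      # ~2.5e-3 stopped muons per proton
capture_fraction  = 0.61      # fraction of stopped muons captured on Al
ses               = 2.5e-17   # quoted single-event sensitivity

captures = stopped_muons * capture_fraction
implied_efficiency = 1.0 / (ses * captures)   # SES = 1/(captures * eff)
print(f"stops per proton:   {stopped_muons / protons_on_target:.1e}")
print(f"implied efficiency: {implied_efficiency:.1%}")   # ~6.6%
```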

The Mu2e experimental concept is simple. Protons interact with a primary target to create charged pions, which are focused and collected by a magnetic field in a volume where they decay to yield an intense source of muons. The muons are transported to a stopping target, where they slow, stop and are captured in atomic orbit around the target nuclei. Mu2e will use an aluminium stopping target: the lifetime of the muon in atomic orbit around an aluminium nucleus is 864 ns. The energy of the electron from the CLFV interaction μN → eN – given by the mass of the muon less the atomic binding energy and the nuclear recoil energy – is 104.96 MeV. Because the nucleus is left unchanged, the experimental signature is a simple one – a mono-energetic electron and nothing else. Active detector components will measure the energy and momentum of particles originating from the stopping target and discriminate signal events from background processes.
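
The quoted 104.96 MeV can be reproduced from standard numbers. In the sketch below, the muonic-aluminium 1s binding energy is an approximate value assumed for illustration, and the recoil is treated non-relativistically:

```python
# Conversion-electron energy: muon mass minus atomic binding energy
# minus nuclear recoil, reproducing the quoted 104.96 MeV.

m_mu   = 105.658             # muon mass, MeV
e_bind = 0.48                # approx. 1s binding energy in muonic Al, MeV
m_al   = 26.98 * 931.494     # aluminium nuclear mass, MeV

e_no_recoil = m_mu - e_bind
e_recoil    = e_no_recoil**2 / (2 * m_al)   # non-relativistic recoil
e_ce        = e_no_recoil - e_recoil
print(f"recoil ~ {e_recoil:.2f} MeV, E_ce ~ {e_ce:.2f} MeV")   # ~104.96 MeV
```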

Because the signal is a single particle, there are no combinatorial backgrounds, a limiting factor for other CLFV reactions. The long lifetime of the muonic-aluminium atom can be exploited to suppress prompt backgrounds that would otherwise limit the experimental sensitivity. While the energy scale of the new physics that Mu2e aims to explore is at the tera-electron-volt level, the physical observables are at much lower energy. In Mu2e, 100 MeV is considered “high energy”, and the vast majority of background electrons have energies below mμ/2 ≈ 53 MeV.

Mu2e’s dramatic increase in sensitivity relative to similar experiments in the past is enabled by two important improvements in experimental technique: the use of a solenoid in the region of the primary target and the use of a pulsed proton beam. Currently, the most intense stopped-muon source in the world is at the Paul Scherrer Institut in Switzerland, where more than 10⁷ stopped muons per second are achieved using about 1 MW of protons. Using a concept first proposed some 25 years ago (Dzhilkibaev and Lobashev 1989), Mu2e will place the primary production target in a solenoidal magnetic field. This will cause low-energy pions to spiral around the target, where many will decay to low-energy muons, which then spiral down the solenoid field and stop in an aluminium target. This yields a very efficient muon beamline that is expected to deliver three orders of magnitude more stopped muons per second than past facilities, using only about 1% of the proton beam power.

A muon beam inevitably contains some pions. A pulsed beam helps to control a major source of background from the pions. A low-energy negative pion can stop in the aluminium target and fall into an atomic orbit. It annihilates very rapidly on the nucleus, producing an energetic photon a small percentage of the time. These photons can create a 105 MeV electron through pair production in the target, which can, in turn, fake a conversion electron. Pions at the target must be identified to high certainty or be eliminated. With a pulsed muon beam, the search for conversion electrons is delayed until almost all of the pions in the beam have decayed or interacted. The delay is about 700 ns, while the search period is about 1-μs long. The lifetime of muonic aluminium is long enough that most of the signal events occur after the initial delay. To prevent pions from being produced and arriving at the aluminium target during the measurement period, the beam intensity between pulses must be suppressed by 10 orders of magnitude.
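
The quoted timing still leaves a healthy fraction of the signal. A minimal estimate, assuming a single exponential with the 864 ns muonic-aluminium lifetime and ignoring the detailed pulse structure:

```python
# Fraction of muonic-Al decays/captures inside the delayed search window.
# Single-exponential sketch; the real pulse train repeats, so this is
# only an approximation.

import math

tau     = 864.0               # muonic-aluminium lifetime, ns
t_open  = 700.0               # delay before the search window opens, ns
t_close = t_open + 1000.0     # ~1-us-long search window

fraction = math.exp(-t_open / tau) - math.exp(-t_close / tau)
print(f"signal fraction in window ~ {fraction:.0%}")   # about 30%
```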

The Mu2e apparatus consists of three superconducting solenoids connected in series (figure 2). Protons arriving from the upper right strike a tungsten production target in the middle of the production solenoid. The resulting low-energy pions decay to muons, some of which spiral downstream through the “S”-shaped transport solenoid (TS) to the detector solenoid (DS), where they stop in an aluminium target. A strong negative magnetic-field gradient surrounding the production target increases the collection efficiency and improves muon throughput in the downstream direction. The curved portions of the TS, together with a vertically off-centre collimator, preferentially transmit low-momentum negative particles. A gradient surrounding the stopping target reflects some upstream-spiralling particles, improving the acceptance for conversion electrons in the detectors.

When a muon stops in the aluminium target, it emits X-rays while cascading through atomic orbitals to the 1s level. It then has 61% probability of being captured by the nucleus, and 39% probability of decaying without being captured. In the decay process, the distribution of decay electrons largely follows the Michel spectrum for free muon decay, and most of the electrons emitted have energies below 53 MeV. However, the nearby nucleus can absorb some energy and momentum, with the result that, with low probability, there is a high-energy tail in the electron distribution reaching all of the way to the conversion-electron energy, and this poses a potential background. Because the probability falls rapidly with increasing energy, this background can be suppressed with sufficiently good momentum resolution (better than about 1% at 105 MeV/c).
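
This is why the stated momentum resolution is sufficient. Near the endpoint, the decay-in-orbit spectrum is known to fall roughly as the fifth power of the distance to the endpoint energy, so the background integrated over a resolution window of width δ scales like δ⁶. The toy calculation below illustrates that scaling only; it is not a Mu2e background estimate:

```python
# Scaling of the decay-in-orbit (DIO) background with resolution window.
# Near the endpoint the spectrum falls ~ (E_ce - E)^5, so integrating
# over a window of width delta gives a yield ~ delta^6 (arbitrary units).

def dio_yield(delta_mev):
    """Relative DIO yield within delta_mev of the endpoint."""
    return delta_mev ** 6 / 6.0

for delta in (2.0, 1.0, 0.5):
    print(f"window {delta} MeV -> relative yield {dio_yield(delta):.3f}")
# Halving the window suppresses the DIO background by 2^6 = 64.
```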

Detector components

Inside the DS, particles that originate from the stopping target are measured in a straw-tube tracker followed by a barium-fluoride (BaF2) crystal calorimeter array. The inner radii of the tracker and calorimeter are left un-instrumented, so that charged particles with momenta less than about 55 MeV/c, coming from the beamline or from Michel decays in the stopping target, have low transverse momentum and spiral downstream harmlessly.
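
The 55 MeV/c figure follows directly from the radius of a helix in the solenoid field. A short sketch, assuming the roughly 1 T field in the tracker region of the DS:

```python
# Helix radius in a solenoid: r [m] = pT [GeV/c] / (0.3 * B [T]).
# Assumes B ~ 1 T in the tracker region (an approximation); shows why
# particles below ~55 MeV/c stay inside the un-instrumented inner radius.

def helix_radius_m(pt_gev_per_c, b_tesla=1.0):
    return pt_gev_per_c / (0.3 * b_tesla)

for pt_mev in (55, 105):
    r = helix_radius_m(pt_mev / 1000.0)
    print(f"pT = {pt_mev} MeV/c -> diameter {2 * r * 100:.0f} cm")
# 55 MeV/c: diameter ~37 cm, within the tracker's 39 cm inner radius.
# 105 MeV/c: diameter ~70 cm, sweeping through the 39-68 cm active region.
```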

The tracker is 3 m long, with inner and outer active radii of 39 cm and 68 cm, respectively. It consists of about 20,000 straw tubes 5 mm in diameter, which have 15-μm-thick mylar walls and range in length from 0.4 to 1.2 m (figure 3). They are oriented perpendicular to the solenoid axis. Conversion-electron candidates make between two and three turns of the helix in the 3-m length. The tracker provides better than 1 MeV/c (FWHM) resolution for 105 MeV/c electrons.

Situated immediately behind the tracker, the calorimeter provides sufficient energy and timing resolution to separate muons and pions from electrons with energy around 100 MeV. The BaF2 crystals have a fast component (decay time around 1 ns) that makes the Mu2e calorimeter tolerant of high rates without significantly affecting the energy or timing resolutions. Surrounding the DS and half the TS is a four-layer scintillator system that will identify through-going cosmic rays with 99.99% efficiency. A streaming data acquisition (DAQ) architecture will handle about 70 GB of data a second when beam is present. A small CPU farm will provide an online software trigger to reduce the accept rate to about 2 kHz. A dedicated detector system will monitor the suppression of out-of-time protons, while another will determine the number of stopped muons.

Having cleared the CD-2 milestone in March, the Mu2e collaboration is now focused on clearing the next hurdle – a CD-3 “construction readiness” review in early 2016. In preparation, prototypes of the tracker, calorimeter, cosmic-ray veto, DAQ and other important components are being built and tested. In addition, the fabrication of 27 coil modules that make up the “S” of the transport solenoid will begin soon, and the building construction will continue into 2016. The final solenoid commissioning is scheduled to begin in 2019, while detector and beamline commissioning are scheduled to begin in 2020.
