CMS identifies Higgs bosons decaying to bottom quarks

The mass of the Higgs boson discovered at CERN is close to 125 GeV. If it really is the Standard Model Higgs boson (H), it should decay predominantly into a bottom quark–antiquark pair (bb), with a probability of about 58%. Therefore, the observation and study of the H → bb decay, which involves the direct coupling of H to fermions and in particular to down-type quarks like d-, s- and b-quarks, is essential in determining the nature of the discovered boson. The inclusive observation of the decay H → bb is currently not achievable at the LHC: in proton–proton collisions, bb pairs are produced abundantly via the strong force as described by QCD, providing a completely irreducible background.

An intriguing and challenging way to search for H → bb is through the mechanism of vector-boson fusion (VBF). In this case, the signal features a four-jet final state: two b-quark (bb) jets originating from the Higgs-boson decay, and two light quark (qq) jets, predominantly in the forward and backward directions with respect to the beamline – a distinctive signature of VBF in proton collisions. An additional peculiar feature of VBF is that no QCD colour is exchanged in the processes. This leads to the expectation of a “rapidity gap” – that is, reduced hadronic activity between the two tagging qq jets, apart from Higgs boson decay products.

CMS has searched for these VBF-produced Higgs bosons decaying to b quarks in the 2012 8-TeV proton–proton collision data. This is the only fully hadronic final state that is employed to search for a Standard Model Higgs boson at the LHC. A crucial dedicated data-triggering strategy was put in place, both within standard “prompt” data streams and, in parallel, within “parked” data streams that were reconstructed later, during the LHC shutdown. Candidate events are required to have four jets with transverse momenta above optimized thresholds. Separation in terms of pseudorapidity (angle) and b-quark tagging criteria are employed to assign two jets to the bb system and the other two jets to the qq VBF-tagging jet system.

Selected events are passed to a multivariate boosted decision tree (BDT) trained to separate signal events from the large background of multi-jet events produced by QCD. The events are categorized according to the output values of the BDT, making no use of the kinematic information of the two b-jet candidates. Subsequently, the invariant-mass distribution of the two b-jets is analysed in each category, to search for a signal “bump” on top of the smooth background shape. The figure shows the results of the fit in the best signal category. They reveal an observed (expected) significance of the signal of 2.2 (0.8)σ, for a Higgs-boson mass of 125 GeV. A parallel measurement of Z → bb decays in the selected data samples, using the same signal-extraction technique, has been performed to validate the analysis strategy.
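
To illustrate the boosted-decision-tree idea (this is not the CMS implementation, which uses the experiment's own multivariate tools and real event kinematics), the sketch below boosts simple threshold cuts ("stumps") on one synthetic discriminating variable; the variable, its distributions and all numbers are invented for illustration only.

```python
# Minimal boosted-decision-stump classifier in plain NumPy, to illustrate
# the BDT idea; all data here are synthetic.
import numpy as np

rng = np.random.default_rng(1)
n = 2000
# Hypothetical discriminating variable (e.g. rapidity separation of the
# VBF tagging jets), larger on average for signal than for background.
x = np.concatenate([rng.normal(2.0, 1.0, n), rng.normal(4.0, 1.0, n)])
y = np.concatenate([-np.ones(n), np.ones(n)])   # -1 = background, +1 = signal

w = np.full(2 * n, 1.0 / (2 * n))               # AdaBoost event weights
thresholds = np.linspace(x.min(), x.max(), 50)
stumps, alphas = [], []
for _ in range(20):                              # 20 boosting rounds
    # choose the threshold cut with the smallest weighted error
    errs = [w[np.where(x > t, 1, -1) != y].sum() for t in thresholds]
    t_best = thresholds[int(np.argmin(errs))]
    pred = np.where(x > t_best, 1, -1)
    err = max(w[pred != y].sum(), 1e-12)
    alpha = 0.5 * np.log((1.0 - err) / err)      # weight of this stump
    w *= np.exp(-alpha * y * pred)               # up-weight misclassified events
    w /= w.sum()
    stumps.append(t_best)
    alphas.append(alpha)

# Weighted vote of all stumps; its value plays the role of the BDT output
# used to sort events into categories before the mass fit.
score = sum(a * np.where(x > t, 1, -1) for t, a in zip(stumps, alphas))
accuracy = float(np.mean(np.sign(score) == y))
```

Keeping the b-jet invariant mass out of the BDT inputs, as the analysis does, is what allows the mass spectrum in each output category to be fitted for a bump without the classifier sculpting it.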

The results of this search have been combined with results of other CMS searches for the decay of the Higgs boson to bottom quarks, produced in association with a vector boson, or with a top-quark pair. For mH = 125 GeV, the combination yields a fitted H → bb signal strength μ = 1.03 ± 0.44 relative to the expectations of the Standard Model, with a significance of 2.6σ. This is a convincing hint from the LHC for the coupling of the discovered boson to bottom quarks.

First full jet measurement in Pb–Pb collisions with ALICE

In high-energy collisions at the LHC, quarks and gluons occasionally scatter violently and produce correlated showers of particles, or “jets”. In proton–proton collisions, the rate of such scatters is precisely calculable using perturbative QCD. However, in heavy-ion collisions, jets should be modified, because the scattered quarks and gluons are expected to interact with the surrounding hot nuclear matter, the quark–gluon plasma (QGP). Jet measurements, together with model calculations of the “jet quenching” phenomenon, therefore provide important information about the properties of the QGP.

Fully reconstructed jets are measured in ALICE by high-precision tracking of charged particles in the central barrel, and by measuring the energy deposits of neutral particles in the electromagnetic calorimeter. This method of reconstructing jets differs from the more traditional approach with hadronic and electromagnetic calorimetry. It was first applied in ALICE to determine the production rate for jets in the case of proton–proton collisions (CERN Courier May 2013 p31). In heavy-ion collisions, measurements of jets are more challenging, because a single event contains multiple jets from independent nucleon–nucleon scatters, as well as combinatorial jets from the large and partially correlated underlying background of particles with low transverse momentum (pT).

ALICE has recently published results from the 2011 lead–lead (Pb–Pb) run, down to low jet pT, where jet quenching is expected to be most dramatic. Jets were reconstructed using the anti-kT algorithm with a resolution parameter of R = 0.2. Even for this rather small cone size, the average contribution of the background was measured to be 25±5 GeV/c in the 0–10% most central (highest multiplicity) Pb–Pb events. To deal with the background, the analysis first subtracted the average contribution in a given event jet-by-jet, and then corrected the resulting reconstructed jet spectrum for the background fluctuations and instrumental resolution via an unfolding procedure. This led to an overall systematic uncertainty of about 15–20%.
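
The subtraction step described above amounts to simple per-jet arithmetic. The sketch below uses invented jet values; only the background density is chosen so that, for R = 0.2, the average subtracted contribution comes out near the quoted 25 GeV/c.

```python
# Sketch of the jet-by-jet average-background subtraction (illustrative
# numbers, not ALICE data): each jet's raw transverse momentum is reduced
# by rho * A, the event's background pT density times the jet area.
import numpy as np

rho = 200.0                              # GeV/c per unit area (hypothetical);
                                         # for R = 0.2, rho * pi * R^2 ~ 25 GeV/c,
                                         # the average quoted for central Pb-Pb
raw_pt = np.array([55.0, 80.0, 42.0])    # raw jet pT in GeV/c (invented)
area = np.array([0.13, 0.12, 0.14])      # anti-kT jet areas for R = 0.2

corrected_pt = raw_pt - rho * area
# Residual region-to-region background fluctuations and the detector
# response are then removed from the jet *spectrum* statistically, by
# unfolding, not jet by jet.
```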

The nuclear modification of the jet yield (RAA) is quantified by the ratio of the jet spectrum measured in Pb–Pb collisions to that in proton–proton collisions scaled by the number of independent nucleon–nucleon collisions. The figure shows RAA for the 0–10% and 10–30% most central Pb–Pb collisions, together with two model calculations. It reveals that jets in Pb–Pb are strongly suppressed, almost independently of jet pT, with an average nuclear modification factor of 0.28±0.04 in the 0–10% and 0.35±0.04 in the 10–30% centrality classes. Both model calculations predict the observed level of jet suppression, although one of them expects a slightly steeper increase with pT than the data show. This new measurement, which uses jet constituents down to a few hundred MeV/c, even in Pb–Pb collisions, opens new perspectives for studying the QGP with ALICE.
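
The definition of RAA reduces to a single ratio. In this toy calculation, the yields and ⟨Ncoll⟩ are invented, chosen only so that the result lands near the measured central-collision value:

```python
# Toy illustration of the nuclear modification factor: the Pb-Pb jet
# yield divided by the pp yield scaled by the mean number of binary
# nucleon-nucleon collisions <Ncoll>. All numbers below are made up.
yield_pbpb = 7.0e-6          # jets per event per (GeV/c), hypothetical
yield_pp = 2.5e-6            # same units, hypothetical
n_coll = 10.0                # hypothetical <Ncoll> for the centrality class

r_aa = yield_pbpb / (n_coll * yield_pp)
# r_aa well below 1 (here 0.28) signals jet suppression; r_aa = 1 would
# mean Pb-Pb behaves like a superposition of independent pp collisions.
```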

ATLAS’s paths to the top-quark mass

The top quark is the heaviest elementary particle known currently, and its mass (mtop) is a fundamental parameter of the Standard Model. Its precise determination is essential for testing the consistency of the Standard Model and for constraining models of new physics. Now, ATLAS has released new measurements of mtop using events with one or two isolated charged leptons and jets in the final state – the lepton+jets and dilepton channels. The new results are based on proton–proton collision data taken at a centre-of-mass energy of 7 TeV.

The measurements were obtained from the direct reconstruction of the top-quark final states, and use calibrations based on Monte Carlo simulation. For the first time in the lepton+jets channel, mtop is determined simultaneously with a global jet-energy scale factor, exploiting information from the hadronically decaying W boson, and with a separate b-to-light-quark jet-energy scale factor – a technique that significantly reduces the corresponding systematic uncertainties on mtop. The measurement in the dilepton channel is based on the invariant mass of the two charged-lepton and b-quark-jet systems from top-quark-pair decays. The measurements in the two channels are largely uncorrelated, which allows their combination to yield a substantial improvement in precision. The result, mtop = 172.99±0.91 GeV, corresponds to a relative uncertainty of 0.5% (ATLAS 2015a).

These new measurements, together with the results from the fully hadronic decay channel (ATLAS 2015b), complete the suite of mtop results based on 7-TeV data that exploit top-quark-pair signatures. They are complemented by a result based on single-top-quark-enriched topologies, using 8-TeV data (ATLAS 2014a).

In the direct mass-reconstruction techniques described above, the extracted value of mtop corresponds to the parameter implemented in the Monte Carlo simulation (mMCtop), whose relationship with the top-mass parameter in the Standard Model Lagrangian is not completely clear. The uncertainty relating the top mass in the Standard Model to mMCtop is a matter of debate, but is often estimated to be about 1 GeV, which is comparable to the present experimental precision.

ATLAS follows complementary paths to measure mtop by comparing measurements of the inclusive and differential top-quark-pair production cross-sections with the corresponding theoretical calculations, which depend on the top-quark pole mass mpoletop. To date, the most precise mpoletop determination is obtained from the differential cross-section measurement of top-quark-pair events with one additional jet. Using 7-TeV data, this measurement yields mpoletop = 173.7 +2.3 −2.1 GeV (ATLAS 2014b), which is compatible with the results from the direct reconstruction of the top-quark decays. The figure shows the ATLAS results for mtop, together with results from the Tevatron and the world average.

Upcoming results exploiting the full 8-TeV data set, and data from LHC Run 2, will further improve understanding of the mass of the top quark and its theoretical interpretation.

COMPASS observes a new narrow meson

Mass spectrum for the f0(980)

The bulk of visible matter originates from the strong interactions between almost massless fundamental building blocks: quarks and antiquarks bound together by gluons. Although these interactions are described by QCD, the understanding of the underlying principle – of how exactly these building blocks form observable matter (hadrons), and which configurations are or are not realized in nature – has been a major challenge for a long time. The question of how hadrons are formed relates directly to the excitation spectrum of hadrons, in particular, mesons, which are made from quark–antiquark pairs. Theoretical predictions on the nature of hadronic bound-states, their masses and decays, have long been based on models, but direct QCD calculations performed on high-performance computers using a discretized space–time lattice are now also reaching a predictive level for new hadron states.

For many years, experiments have searched for hadronic bound states with exotic contents, such as gluon-only states (glueballs) or multi-quark states with a molecular nature. Some candidates have been found in studies of systems with light quarks (glueball and hybrid candidates) or, most recently, with heavy quarks, which revealed the first evidence for explicit multi-quark systems, based on the characteristic combination of charge and flavour.

Mass-dependent phase variation

The COMPASS collaboration has recently observed an unusual meson made from light quarks at a mass of 1.42 GeV/c². Because this mass region has been investigated for half a century, the new particle comes as a surprise; its discovery was made possible by the world’s largest data sample for such studies. The particle is called the a1(1420), reflecting its properties of unit spin/isospin and positive parity, characteristic of the “a” mesons. The finding was made using the COMPASS spectrometer to study peripheral (diffractive) reactions of pions with a momentum of 190 GeV/c on a liquid-hydrogen target at CERN’s Super Proton Synchrotron. Despite its production rate of only about 10⁻³ with respect to known mesons, the existence of the a1(1420) was clearly unravelled using an advanced analysis technique that allows a produced superposition of individual quantum states to be disentangled into the contributing components, both in terms of quantum numbers and decay paths. The unique signature for this observation is a strong narrow enhancement in the mass spectrum of the JPC = 1++ quantum state (figure opposite), in conjunction with an observed phase variation of about 180° – which any wave undergoes when its frequency (mass) passes through a resonance.

The a1(1420) is observed decaying only into the f0(980), which is often discussed as a molecular-type state, and an additional pion, so rendering it unique. Following first announcements of the finding, several explanations have already been put forward. They cover the interpretation of the a1(1420) as a molecular/tetraquark state partnering another known state f1(1420), as well as scenarios in which the a1(1420) is generated by long-range effects of different sorts, all involving the light meson a1(1260). However, despite some remarkable features, not all of the experimental findings can be reproduced by those explanations. Thus, the a1(1420) enters the club of resonances that are unexplained, although experimentally well established.

Laser set-up generates electron–positron plasma in the lab

More than 99% of the visible universe exists as plasma, the so-called fourth state of matter. Produced from the ionization of gases dominated by hydrogen and helium, these electron–ion plasmas are ubiquitous in the local universe. An exotic fifth state of matter, the electron–positron plasma, exists in the intense environments surrounding compact astrophysical objects, such as pulsars and black holes, and until recently such plasmas were exclusively the realm of high-energy astrophysics. However, an international team, led by Gianluca Sarri of Queen’s University Belfast, together with collaborators in the UK, US, Germany, Portugal and Italy, has at last succeeded in producing a neutral electron–positron plasma in a terrestrial laboratory experiment.

Electron–positron plasmas display peculiar features when compared with the other states of matter, on account of the symmetry between the negatively charged and positively charged particles, which in this case have equal mass but opposite charge. These plasmas play a fundamental role in the evolution of extreme astrophysical objects, including black holes and pulsars, and are associated with the emission of ultra-bright gamma-ray bursts. Moreover, it is likely that the early universe in the leptonic era – that is, in the minutes following approximately one second after the Big Bang – consisted almost exclusively of a dense electron–positron plasma in a hot photon bath.

While production of positrons has long been achievable, the formation of a plasma of charge-neutral electron–positron pairs has remained elusive, owing to the practical difficulties in combining equal numbers of these extremely mobile charges. However, the recent success was made possible by looking at the problem from a different perspective: instead of generating two separate electron and positron populations and recombining them, the team aimed to generate an electron–positron plasma directly, in situ.

In an experiment at the Central Laser Facility at the Rutherford Appleton Laboratory in the UK, Sarri and colleagues made use of a laser-induced plasma wakefield to accelerate an ultra-relativistic electron beam. They focused an ultra-intense and short laser pulse (around 40 fs) onto a mixture of nitrogen and helium gas to produce, in only a few millimetres, electrons with an average energy of the order of 500–600 MeV. This beam was then directed onto a thick slab of a material of high atomic number – lead, in this case – to initiate an electromagnetic cascade, in a mainly two-step process. First, high-energy bremsstrahlung photons are generated as electrons or newly generated positrons propagate through the electric fields of the nuclei. Then, electron–positron pairs are generated during the interactions of the high-energy photons with the same fields. Under optimum experimental conditions, the team obtained, at the exit of the lead slab, a beam of electrons and positrons in equal numbers and of sufficient density to allow plasma-like behaviour.
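
The two-step cascade can be caricatured by the textbook Heitler model, in which the particle count doubles every radiation length until the energy per particle falls below the material’s critical energy (about 7.4 MeV for lead). This is a back-of-the-envelope sketch, not the team’s modelling:

```python
# Heitler-model caricature of the electromagnetic cascade in the lead
# slab (a textbook toy, not the experiment's simulation): the particle
# count doubles every radiation length, alternating bremsstrahlung and
# pair production, until the energy per particle falls below E_c.
def heitler_shower(e0_mev, e_crit_mev=7.4):
    """Return (particle count at shower maximum, depth in radiation lengths)."""
    n, depth = 1, 0
    while e0_mev / n > e_crit_mev:
        n *= 2          # each generation doubles the multiplicity
        depth += 1
    return n, depth

# ~550 MeV electrons from the wakefield stage entering the lead slab:
n_max, t_max = heitler_shower(550.0)
```

Even this crude model captures why a few-centimetre slab of high-Z material suffices: the shower multiplies the incoming electrons into a mixed electron–positron population within a handful of radiation lengths.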

These results represent a real novelty for experimental physics, and pave the way for a new experimental field of research: the study of symmetric matter–antimatter plasmas in the laboratory. Not only will it allow a better understanding of plasma physics from a fundamental point of view, but it should also shed light on some of the most fascinating, yet mysterious, objects in the known universe.

• The Central Laser Facility is supported by the UK’s Science and Technology Facilities Council. This experiment is supported by the UK’s Engineering and Physical Science Research Council.

RHIC smashes record for polarized-proton collisions at 200 GeV

The Relativistic Heavy Ion Collider (RHIC) at Brookhaven National Laboratory has shattered its own record for producing polarized-proton collisions at 200 GeV collision energy. In the experimental run currently underway, accelerator physicists are delivering 1.2 × 10¹² collisions per week – more than double the number routinely achieved in 2012, the last run dedicated to polarized-proton experiments at this collision energy.

The achievement is, in part, the result of a method called “electron lensing”, which uses negatively charged electrons to compensate for the tendency of the positively charged protons in one circulating beam to repel the like-charged protons in the other beam when the two oppositely directed beams pass through one another in the collider. In 2012, these beam–beam interactions limited the ability to produce high collision rates, so the RHIC team commissioned electron lenses and a new lattice to mitigate the beam–beam effect. RHIC is now the first collider to use electron lenses for head-on beam–beam compensation. The team also upgraded the source that produces the polarized protons to generate and feed more particles into the circulating beams, and made other improvements in the accelerator chain to achieve higher luminosity.

With new luminosity records for collisions of gold beams, plus the first-ever head-on collisions of gold with helium-3, 2014 proved to be an exceptional year for RHIC. Now, the collider is on track towards another year of record performance, and research teams are looking forward to a wealth of new insights from the data to come.

Magnetic fields cast light on black hole’s edge

The Atacama Large Millimetre/submillimetre Array (ALMA) has revealed an intense magnetic field at the base of the relativistic jet powered by a supermassive black hole. Probing the physical conditions of a jet so close to the black hole is unprecedented, and confirms that magnetic fields have a driving role in the formation and collimation of the jet.

Supermassive black holes, often with masses billions of times that of the Sun, are located at the heart of almost all galaxies in the universe. These black holes can accrete huge amounts of matter from a surrounding disc. While most of this matter is fed into the black hole, some can escape moments before capture and be flung out into space at close to the speed of light in twin plasma jets, which can extend hundreds of thousands of light-years from their host galaxy (Picture of the month, CERN Courier January/February 2013 p14). How this happens is not well understood, although it is thought that strong magnetic fields, acting very close to the event horizon, play a crucial role in this process.

Up to now, only weak magnetic fields far from black holes – several light-years away – have been probed. A new study by astronomers from Chalmers University of Technology and Onsala Space Observatory in Sweden used ALMA to detect a polarization signal related to the strong magnetic field in a distant galaxy named PKS 1830-211. This quasar was chosen because it is located at a relatively high redshift and is gravitationally lensed. The redshift of z = 2.5 allows submillimetre emission from the distant source to be probed at frequencies 3.5 times higher than reachable by ALMA. An observation at 300 GHz (around 1 mm) therefore probes the terahertz frequency range (around 0.3 mm), where synchrotron self-absorption no longer hides the most intense jet region closest to the black hole.

The gravitational lens splits the remote source into two components, so that Ivan Martí-Vidal and colleagues could study the relative polarization of the two lensed images. This strategy allows them to be free of many calibration-related artefacts that would otherwise limit the analysis. Through repeated observations at different wavelengths, they found clear signals of Faraday rotation that are hundreds of times stronger than previously found in the universe. The strength of this wavelength dependence of the rotation of the polarization angle is given by the rotation measure (RM), which depends on the magnetic field strength multiplied by the electron density integrated along the line of sight.
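
The rotation-measure extraction rests on the quadratic wavelength dependence Δχ = RM · λ². A minimal numerical sketch, with illustrative numbers only:

```python
# Sketch of the Faraday-rotation relation behind the measurement: the
# polarization angle rotates by delta_chi = RM * lambda**2, so comparing
# two wavelengths yields the rotation measure RM. Numbers are illustrative.
rm_true = 1.0e8               # rad/m^2, the order ALMA finds in PKS 1830-211
lam1 = 1.00e-3                # observing wavelengths in metres (~1 mm band)
lam2 = 0.87e-3

chi1 = rm_true * lam1**2      # polarization-angle rotation at each wavelength
chi2 = rm_true * lam2**2
rm_measured = (chi1 - chi2) / (lam1**2 - lam2**2)
# RM is proportional to the electron density times the line-of-sight
# magnetic field, integrated along the path, which is what allows the
# field strength near the jet base to be inferred.
```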

The RM derived with ALMA in PKS 1830-211 is around 10⁸ rad/m², about 100,000 times greater than in the radio cores of other quasars. This huge difference is due to the new observations being performed at much higher frequencies, thereby probing a region only light-days away from the black hole, instead of the light-years probed when observing in the radio domain. Assuming that both the magnetic field and the electron density increase by about a factor of 300 from the radio core to the apex of the jet, the team obtains a magnetic field of at least a few tens of gauss near the base of the jet. While this is only an order-of-magnitude estimate, its relatively high value – although many billions of times weaker than in neutron stars – reinforces the idea that magnetic fields play an important role in the mechanism that launches the jet.

The road from CERN to space

Roberto Battiston

The Agenzia Spaziale Italiana (ASI) – the Italian Space Agency – has the tag line “The road to space goes through Italy.” Make a simple change and it becomes a perfectly apt summary of the career to date of the agency’s current president. For Roberto Battiston, the road to space goes through CERN.

As a physics student at the famous Scuola Normale in Pisa, which has provided many of CERN’s notable physicists, he studied the production of dimuons in proton collisions at the Intersecting Storage Rings, under the guidance of Giorgio Bellettini. For his PhD, he moved in 1979 to the University of Paris XI in Orsay, where his thesis was on the construction of the central wire-proportional chamber of UA2, the experiment that went on, with UA1, to discover the W and Z particles at CERN. Until 1995, his research focused on electroweak physics, first at the SLAC Linear Collider and then, back at CERN, at the L3 experiment at the Large Electron–Positron collider. However, at the point when the LHC project was on its starting blocks, his interest began to turn towards cosmic rays. With Sam Ting, who led the L3 experiment, Battiston became involved in the Alpha Magnetic Spectrometer, which as AMS-02 has now been taking data on board the International Space Station (ISS) for four years (CERN Courier July/August 2011 p18). Three years after the launch of AMS-02, Battiston found himself closer to space, at least metaphorically, when he was appointed president of ASI in May 2014.

The decision to move away from experiments at the LHC will surprise many people. How do you explain your unconventional choice?

The LHC, a machine of extraordinary importance, as its results have shown, was the obvious choice for someone who wanted to continue a research career in particle physics. But I chose to take a less beaten path. In space, less has been researched and less has been discovered than at accelerators. I realized that, in both neutral and charged cosmic rays, we are presented with information that is waiting to be decoded, potentially hiding unforeseen discoveries. The universe is, by definition, the ultimate laboratory of physics, a place where, in the various phases of its evolution, matter and energy have reached all of the possible conditions one could imagine – conditions that we will never be able to reproduce artificially. For this reason, when I was discussing with Sam Ting in 1994 about what would be the most interesting new project – whether to go for an LHC experiment or, radically, for a new direction – I had no hesitation: space and space exploration immediately triggered my enthusiasm and curiosity. I absolutely do not regret this choice.

Was your experience and know-how as a high-energy physicist useful for the construction and, now, the operation of AMS?

The AMS detector was designed exactly like the LHC experiments. It has an electromagnetic spectrometer with a particle tracker and particle identifiers. Subdetectors are positioned before and after the magnet and the tracker, to identify the types of particles passing through the experiment. We use the same approach as at accelerators – 99% of the events are thrown away, the interesting ones being the few that remain. However, within these data, processes that we still do not know about remain potentially hidden. The challenge is to find new methods to look at this radiation and extract a signal, exactly as at the LHC. The difference is that the trigger rate is kilohertz in space, rather than gigahertz at the LHC: AMS gets one or two particles at a time instead of hundreds of thousands per event. Moreover, space offers some advantages and optimal conditions for detecting particles: surprisingly, it provides stable environmental conditions, so detectors that on the ground would suffer from environmental changes – such as excessive heat, atmospheric-pressure changes or humidity – enjoy ideal conditions in space. Silicon detectors, transition-radiation detectors, electromagnetic calorimeters and Cherenkov detectors have performed much better than the best detectors on the ground.

But in space you must face more complex challenges that put constraints on your instrument’s design?

Given the complexity of the current LHC experiments, the situation is comparable. Repairing a huge detector 100 m below ground is as difficult as repairing a detector in space. If something breaks down underground, dismantling the whole structure of a detector might require months if not a year. Everything in both environments must have sufficient reliability to operate for a long time. In space, radiation doses are relatively small compared with the doses that the detectors can sustain, but there are problems of the shock at launch, pressure drops, extreme temperatures and the ability to operate in a vacuum, so the tests that a detector must pass to be able to perform in space are severe. Shock and stress resistance at launch require the detectors to be more robust than those built to stay on Earth. Another huge difference is weight and power. On Earth there are no limits. In space, we must use low-weight instruments – a few tonnes compared with the 10,000 tonnes of the large LHC detectors. And because detectors in space are powered by solar panels, there are power limits – a few kilowatts compared with tens of megawatts at the LHC. So in space, resources are optimized to the last small part.

What about the choice of leading technology vs reliability, for an experiment in space?

It is true that in space we have instruments that are dated, technologically speaking. But AMS is an exception: we made the effort of bringing to space technology developed at CERN since 2000, which has shown itself to be 10–100 times more powerful and effective than current space standards.

Now, with AMS-02 successfully installed on the ISS and reaping promising results, you have been appointed president of the ASI, one of the large European space agencies. What can a physicist like yourself bring to the management of the space industry at the European and international level?

Space is a place where human dreams converge: from photographing the Moon, to walking on Mars, to taking a snapshot of the first instants of the universe – these are global dreams of humanity. Yet, space is a different world from physics. In certain aspects, it’s wider. Particle physics is an international discipline, but is so focused that the bases for discussion are limited, however fascinating and however important might be the consequences of finding a new brick in the construction of the universe. Space is particle physics multiplied to the nth power. It is a context, not just one discipline. Many different sectors interact, but each has its own dynamics – my leitmotiv is “interdisciplinarity”. Many different things happen at a fast pace, which requires a great capacity for synthesis and ability to process a lot of data in a short time. Decisions must be taken so fast that a well-trained brain is needed. I can only thank my tough training in physics research for this. The tough discipline at the basis of research at CERN and in astroparticle physics, the continuous challenge of having to solve complex problems, the requirement of working in a large community made of people with different characters, cultures and languages, typical of experimental physics, are an asset within the context of a space agency.

How do large collaborations work in space research? Is it as global as the LHC?

The capability to keep the construction effort of very large accelerators or extremely complex detectors under direct control is still, today, an essential aspect of the high-energy physics community. Space research has not made the transition to a global collaboration in the same way as CERN, because it is still dominated by a strong element of international politics and national prestige. The amount of funding involved and the related industrial aspects and business pressures are so big, that decisions must be taken at the level of heads of state and government.

Is there a difference in approach between NASA and ESA?

They’re both huge agencies, although NASA has four times the budget of ESA. In the past, they’ve collaborated on large projects, but in the past 10 years this collaboration has dimmed, as is the case for LISA [the Laser Interferometer Space Antenna]. Sometimes, such projects are even done in competition, as in the case of WMAP and Planck. The US pulled out of Rosetta long ago, and is now focused on the James Webb Space Telescope. To do so, the US basically chose to stop most international collaborations in science, except for the ISS and exploration. The ISS exists because of a precise political will. It is a demonstration that collaboration in space is decided top-down instead of bottom-up, and it can hold or break according to politics.

AMS will soon be joined in space by new powerful instruments to study cosmic rays. Are we witnessing a change of focus, from particle physics in the lab back to the sky?

Space is a less-frequented frontier, and it is understandable that it is now attracting many physicists. Astroparticle physics is a bridge between the curiosity of particle physicists who try to understand fundamental problems and the tradition of astronomy to observe the universe. Two different aspects of physics converge here: deciphering versus photographing and explaining. In astroparticle physics, we try to find traces of fundamental phenomena; in astrophysics, we try to explain what we are able to see.

So what would your advice be to young physics graduates? Where would they best fulfil their research ambitions today?

Physics in space is becoming enormously interesting, spanning the understanding of both the infinitely small and the infinitely large. In the coming decades, astrophysics and particles studied in space radiation will be where surprises and important discoveries could come from, although this will take time and more sophisticated technologies, because the limits of technology are farther from the limits of the observable phenomena in the universe than in the case of particle accelerators. Building a new accelerator will require decades and big investments, as well as new technologies, but most of all it will need a discovery indicating where to look. The resources required are so considerable that we will not be able to build such a machine just to explore and see what there is at higher energies, as we did many times in the past. This is less true in astrophysics: there will surely be decades of discoveries with more sophisticated instruments, and the frontiers are far from fully explored. Meanwhile, physics keeps its outstanding fascination. With current computing capacity, the latest technologies, the present understanding of quantum mechanics, the interactions between physics and biology, and the amount of physics that can be done at the atomic and subatomic level – using many atoms together, cold systems and so on – there are many sectors in which an excellent physicist can find great satisfaction.

And after ASI, will you go back to particle physics?

For the moment I need to put all of my energy into the job that has just started. I have not lost the pleasure of discovery, and the main objective of the years ahead is to support the best ideas in space science and technology, trying to get results as quickly as possible. And of course, I will keep following AMS.

The Mu2e experiment: a rare opportunity

The Mu2e experiment at Fermilab recently achieved an important milestone, when it received the US Department of Energy’s critical-decision 2 (CD-2) approval in March. This officially sets the baselines in the scope, cost and schedule of the experiment. At the same time, the Mu2e collaboration was awarded authorization to begin fabricating one of the experiment’s three solenoids and to begin the construction of the experimental hall, which saw ground-breaking on 18 April (figure 1). The experiment will search with unprecedented sensitivity for the neutrinoless conversion of a muon into an electron.

Some history

The muon was first observed in 1937 in cosmic-ray interactions. The implications of this discovery, which took decades of additional progress in both experiment and theory to reveal, were profound and ultimately integral to the formulation of the Standard Model. Among the cornerstones of the model are symmetries in the underlying mathematics and the conservation laws they imply. This connection between theory (the mathematical symmetries) and experiment (the measurable conservation laws) was formalized by Emmy Noether in 1918, and is fundamental to particle physics. For example, the mathematics describing the motion of a system of particles gives the same answer regardless of where in the universe this system is placed. In other words, the equations of motion are symmetric, or invariant, to translations in space. This symmetry manifests itself as the conservation of momentum. A similar symmetry to translations in time is responsible for the conservation of energy. In this way, in particle physics, observations of conserved quantities offer important insights into the underlying mathematics that describe nature’s inner workings. Conversely, when a conservation law is broken, it often reveals something important about the underlying physics.

The implications of neutrino mixing have yet to be revealed fully.

In the Standard Model there are three families of quarks and three families of leptons. Generically speaking, members of the same family interact preferentially with one another. However, it has long been known that quark families mix. The Cabibbo–Kobayashi–Maskawa matrix characterizes the degree to which a particular quark interacts with quarks of a different family. This phenomenon has profound implications, and plays a role in the electroweak interactions that power the Sun and in the origin of CP violation. For decades it appeared that the lepton family did not mix: lepton family number was always conserved in experiments. This changed with the observation that neutrinos mix (Fukuda et al. 1998, Ahmad et al. 2001). This discovery has profound implications; for example, neutrinos must have a finite mass, which requires the addition of a new field or a new interaction to the original Standard Model – the updated Standard Model is sometimes denoted the νSM. Indeed, the implications of neutrino mixing have yet to be revealed fully, and a vigorous worldwide experimental programme is aimed at further elucidating the physics underlying this phenomenon. As often happens in science, the discovery of neutrino oscillations gave rise to a whole new set of questions. Among them is this: if the quarks mix, and the neutral leptons (the neutrinos) mix, what about the charged leptons?

A probe of new physics

Searches for charged-lepton flavour violation (CLFV) have a long history in particle physics. When the muon was discovered, one suggestion was that it might be an excited state of the electron, and so experiments searched for μ → eγ decays (Hicks and Pontecorvo 1948, Sard and Althaus 1948). The non-observation of this reaction, and the subsequent realization that there are two distinct neutrinos produced in traditional muon decay, led physicists to conclude that the muon was a new type of lepton, distinct from the electron. This was an important step along the way to formulating a theory that included several families of leptons (and, eventually, quarks). Nevertheless, searches for CLFV have continued ever since, and it is easy to understand why. In the Standard Model, with massless neutrinos, CLFV processes are strictly forbidden. Therefore, any observation of a CLFV decay would signal unambiguous evidence of new physics beyond the Standard Model. Today, even with the introduction of neutrino mass, the situation is not significantly different. In the νSM, the rate of CLFV decays is proportional to [Δm²ᵢⱼ/M²W]², where Δm²ᵢⱼ is the mass-squared difference between the ith and jth neutrino mass eigenstates, and MW is the mass of the W boson. The predicted rates are therefore in the region of 10⁻⁵⁰ or smaller – far below any experimental sensitivity currently conceivable. Therefore, it remains the case that any observation of a CLFV interaction would be a discovery of new physics.
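The scale of this suppression can be checked with a quick back-of-the-envelope calculation. The atmospheric mass-squared splitting used below is a standard value but is our assumed input, and the full branching-ratio formula carries additional prefactors that are omitted here; only the dominant scaling is reproduced:

```python
# Order-of-magnitude estimate of the nuSM suppression factor
# [dm2_ij / M_W^2]^2. Prefactors of the full formula are omitted;
# only the dominant scaling with the neutrino splitting is kept.
dm2_atm = 2.5e-3   # eV^2, atmospheric splitting (assumed input)
m_w     = 80.4e9   # eV, W-boson mass

suppression = (dm2_atm / m_w**2) ** 2
print(f"{suppression:.1e}")  # ~1.5e-49, i.e. in the region of 10^-50
```

Even generous changes to the inputs leave the result dozens of orders of magnitude below any conceivable experimental reach, which is what makes CLFV such a clean null test of the νSM.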

The case for pursuing CLFV searches is compelling. A wide variety of models of new physics predict large enhancements relative to the νSM (30–40 orders of magnitude) for CLFV interactions. Extra dimensions, little-Higgs models, leptoquarks, heavy neutrinos, grand unified theories, and all varieties of supersymmetric models predict CLFV rates to which upcoming experiments will have sensitivity (see, for example, Mihara et al. 2013). Importantly, ratios of various CLFV interactions can discriminate among the different models and offer insights into the underlying new physics complementary to what experiments at the LHC, neutrino experiments, or astroparticle-physics endeavours can accomplish.

The most constraining limits on CLFV come from μ → eγ, muon-to-electron conversion, μ → 3e, K → ll′ and τ decays. In the coming decade the largest improvements in sensitivity will come from the muon sector. In particular, there are plans for dramatic improvements in sensitivity for the muon-to-electron conversion process, in which the muon converts directly to an electron in the presence of a nearby nucleus with no accompanying neutrinos, μN → eN. The presence of the nucleus is required to conserve energy and momentum. The process is a coherent one and, apart from receiving a small recoil energy, the nucleus is unchanged from its initial state. The Mu2e experiment at Fermilab (Bartoszek et al. 2015) and the COMET experiment at the Japan Proton Accelerator Research Complex (Cui et al. 2009) both aim to improve the current state of the art by a factor of 10,000, starting in the next five years.

The Mu2e experiment

The Mu2e experiment will use the existing Fermilab accelerator complex to take 8-GeV protons from the Booster, rebunch them in the Recycler, and slow-extract them to the experimental apparatus from the Muon Campus Delivery Ring, which was formerly the antiproton Accumulator/Debuncher ring for the Tevatron. Mu2e will collect about 4 × 10²⁰ protons on target, resulting in about 10¹⁸ stopped muons, which will yield a single-event sensitivity for μN → eN of 2.5 × 10⁻¹⁷ relative to normal muon nuclear capture (μN → νμN′). The expected background yield over the full physics run is estimated to be less than half an event. This gives an expected sensitivity of 6 × 10⁻¹⁷ at 90% confidence level and a discovery sensitivity of 5σ to all conversion rates larger than about 2 × 10⁻¹⁶. For comparison, many of the new-physics models discussed above predict rates as large as 10⁻¹⁴, which would yield hundreds of signal events. This projected sensitivity is 10,000 times better than the world’s current best limit (Bertl et al. 2006), and will probe effective mass scales for new physics up to 10⁴ TeV/c², well beyond what experiments at the LHC can explore directly.
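These numbers can be cross-checked with simple arithmetic. The overall signal efficiency below is our back-calculated inference, not a figure quoted by the collaboration, and the 61% capture fraction is taken from later in the text:

```python
# Rough consistency check of the quoted sensitivity numbers.
# The single-event sensitivity (SES) is the conversion rate at which
# one signal event would be expected over the full run.
n_stopped = 1e18     # stopped muons over the full run (quoted)
f_capture = 0.61     # fraction captured on the Al nucleus (quoted)
ses       = 2.5e-17  # quoted single-event sensitivity

# Implied overall signal efficiency (our inference, not a quoted value)
eff = 1.0 / (n_stopped * f_capture * ses)
print(f"implied efficiency ~ {eff:.2f}")     # roughly 7%

# A new-physics conversion rate of 1e-14 would then yield:
print(f"signal events ~ {1e-14 / ses:.0f}")  # 400, i.e. hundreds
```

The back-calculated efficiency of a few per cent is plausible for a measurement that must reconstruct a single ~105 MeV/c electron within a delayed time window, and the event yield confirms the "hundreds of signal events" stated above.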

The Mu2e experimental concept is simple. Protons interact with a primary target to create charged pions, which are focused and collected by a magnetic field in a volume where they decay to yield an intense source of muons. The muons are transported to a stopping target, where they slow, stop and are captured in atomic orbit around the target nuclei. Mu2e will use an aluminium stopping target: the lifetime of the muon in atomic orbit around an aluminium nucleus is 864 ns. The energy of the electron from the CLFV interaction μN → eN – given by the mass of the muon less the atomic binding energy and the nuclear recoil energy – is 104.96 MeV. Because the nucleus is left unchanged, the experimental signature is a simple one – a mono-energetic electron and nothing else. Active detector components will measure the energy and momentum of particles originating from the stopping target and discriminate signal events from background processes.
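The quoted signal energy can be reconstructed approximately from this recipe. The hydrogen-like estimate of the muon's 1s binding energy below is our simplification; the exact value requires a full muonic-atom calculation:

```python
import math

# Reconstructing E = m_mu - (atomic binding) - (nuclear recoil)
# for muonic aluminium, using a hydrogen-like 1s binding estimate
# (our simplification) and leading-order recoil.
m_mu  = 105.658          # MeV, muon mass
m_al  = 26.98 * 931.494  # MeV, aluminium nuclear mass
alpha = 1 / 137.036      # fine-structure constant
z     = 13               # aluminium atomic number

e_bind   = 0.5 * (z * alpha) ** 2 * m_mu * (m_al / (m_al + m_mu))
e_recoil = m_mu ** 2 / (2 * m_al)
e_signal = m_mu - e_bind - e_recoil
print(f"E(conversion electron) ~ {e_signal:.2f} MeV")  # close to 104.96
```

The binding term (~0.47 MeV) and the recoil term (~0.22 MeV) together shift the signal about 0.7 MeV below the free muon mass, landing within a few keV of the quoted 104.96 MeV.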

Because the signal is a single particle, there are no combinatorial backgrounds, a limiting factor for other CLFV reactions. The long lifetime of the muonic-aluminium atom can be exploited to suppress prompt backgrounds that would otherwise limit the experimental sensitivity. While the energy scale of the new physics that Mu2e aims to explore is at the tera-electron-volt level, the physical observables are at much lower energy. In Mu2e, 100 MeV is considered “high energy”, and the vast majority of background electrons have energies below Mμ/2 ≈ 53 MeV.

Mu2e’s dramatic increase in sensitivity relative to similar experiments in the past is enabled by two important improvements in experimental technique: the use of a solenoid in the region of the primary target and the use of a pulsed proton beam. Currently, the most intense stopped-muon source in the world is at the Paul Scherrer Institut in Switzerland, which achieves more than 10⁷ stopped muons per second using about 1 MW of protons. Using a concept first proposed some 25 years ago (Dzhilkibaev and Lobashev 1989), Mu2e will place the primary production target in a solenoidal magnetic field. This will cause low-energy pions to spiral around the target, where many will decay to low-energy muons, which then spiral down the solenoid field and stop in an aluminium target. This yields a very efficient muon beamline that is expected to deliver three orders of magnitude more stopped muons per second than past facilities, using only about 1% of the proton beam power.

A muon beam inevitably contains some pions, and a pulsed beam helps to control a major source of background from them. A low-energy negative pion can stop in the aluminium target and fall into an atomic orbit. It is absorbed very rapidly by the nucleus, producing an energetic photon a small percentage of the time. These photons can create a 105 MeV electron through pair production in the target, which can, in turn, fake a conversion electron. Pions at the target must therefore be identified with high certainty or eliminated. With a pulsed muon beam, the search for conversion electrons is delayed until almost all of the pions in the beam have decayed or interacted. The delay is about 700 ns, while the search period is about 1 μs long. The lifetime of muonic aluminium is long enough that most of the signal events occur after the initial delay. To prevent pions from being produced and arriving at the aluminium target during the measurement period, the beam intensity between pulses must be suppressed by 10 orders of magnitude.
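A quick estimate shows why the 864 ns lifetime makes this scheme workable. The calculation below makes the simplifying assumption that every muon stops at the very start of a pulse; in practice muons stop throughout the pulse, so the usable fraction is somewhat higher:

```python
import math

# Fraction of muonic-aluminium atoms still present after the delay,
# assuming (as a simplification) that all muons stop at t = 0.
tau_mu_al = 864.0  # ns, muonic-aluminium lifetime (quoted in text)
delay     = 700.0  # ns, wait for pions to decay or interact

live_frac = math.exp(-delay / tau_mu_al)
print(f"fraction surviving the delay ~ {live_frac:.2f}")  # ~0.44
```

By contrast, the charged-pion lifetime of about 26 ns means the pion population falls by many orders of magnitude over the same 700 ns, which is the whole point of the delayed search window.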

The Mu2e apparatus consists of three superconducting solenoids connected in series (figure 2). Protons arriving from the upper right strike a tungsten production target in the middle of the production solenoid. The resulting low-energy pions decay to muons, some of which spiral downstream through the “S”-shaped transport solenoid (TS) to the detector solenoid (DS), where they stop in an aluminium target. A strong negative magnetic-field gradient surrounding the production target increases the collection efficiency and improves muon throughput in the downstream direction. The curved portions of the TS, together with a vertically off-centre collimator, preferentially transmit low-momentum negative particles. A gradient surrounding the stopping target reflects some upstream-spiralling particles, improving the acceptance for conversion electrons in the detectors.

When a muon stops in the aluminium target, it emits X-rays while cascading through atomic orbitals to the 1s level. It then has a 61% probability of being captured by the nucleus, and a 39% probability of decaying without being captured. In the decay process, the distribution of decay electrons largely follows the Michel spectrum for free muon decay, and most of the electrons emitted have energies below 53 MeV. However, the nearby nucleus can absorb some energy and momentum, with the result that, with low probability, there is a high-energy tail in the electron distribution reaching all the way to the conversion-electron energy, and this poses a potential background. Because the probability falls rapidly with increasing energy, this background can be suppressed with sufficiently good momentum resolution (better than about 1% at 105 MeV/c).

Detector components

Inside the DS, particles that originate from the stopping target are measured in a straw-tube tracker followed by a barium-fluoride (BaF₂) crystal calorimeter array. The inner radii of the tracker and calorimeter are left un-instrumented, so that charged particles with momenta less than about 55 MeV/c, coming from the beamline or from Michel decays in the stopping target, have low transverse momentum and spiral downstream harmlessly.

The tracker is 3 m long, with inner and outer active radii of 39 cm and 68 cm, respectively. It consists of about 20,000 straw tubes 5 mm in diameter, which have 15-μm-thick Mylar walls and range in length from 0.4 to 1.2 m (figure 3). They are oriented perpendicular to the solenoid axis. Conversion-electron candidates make between two and three turns of the helix in the 3-m length. The tracker provides better than 1 MeV/c (FWHM) resolution for 105 MeV/c electrons.
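The two-to-three-turn figure can be checked with helix kinematics. The 1 T field in the tracker region and the 60° pitch angle used below are illustrative assumptions, not values quoted in the text:

```python
import math

# Rough check that a ~105 MeV/c electron makes 2-3 helix turns in
# the 3-m tracker. Field strength and pitch angle are assumptions.
p = 0.105                        # GeV/c, conversion-electron momentum
b = 1.0                          # T, field at the tracker (assumed)
pitch_angle = math.radians(60)   # angle to the solenoid axis (assumed)

p_t = p * math.sin(pitch_angle)  # transverse momentum
p_z = p * math.cos(pitch_angle)  # longitudinal momentum
r = p_t / (0.3 * b)              # helix radius in metres (p in GeV/c)
step = 2 * math.pi * r * (p_z / p_t)  # axial advance per turn

print(f"radius ~ {r * 100:.0f} cm, turns in 3 m ~ {3.0 / step:.1f}")
```

With these inputs the helix radius comes out around 30 cm, so a track starting near the axis sweeps out to roughly 60 cm, spanning the 39–68 cm active annulus, and completes between two and three turns over the tracker length, consistent with the text.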

The final solenoid commissioning is scheduled to begin in 2019.

Situated immediately behind the tracker, the calorimeter provides sufficient energy and timing resolution to separate muons and pions from electrons with energy around 100 MeV. The BaF₂ crystals have a fast component (decay time around 1 ns) that makes the Mu2e calorimeter tolerant of high rates without significantly affecting the energy or timing resolutions. Surrounding the DS and half the TS is a four-layer scintillator system that will identify through-going cosmic rays with 99.99% efficiency. A streaming data-acquisition (DAQ) architecture will handle about 70 GB of data per second when beam is present. A small CPU farm will provide an online software trigger to reduce the accept rate to about 2 kHz. A dedicated detector system will monitor the suppression of out-of-time protons, while another will determine the number of stopped muons.

Having cleared the CD-2 milestone in March, the Mu2e collaboration is now focused on clearing the next hurdle – a CD-3 “construction readiness” review in early 2016. In preparation, prototypes of the tracker, calorimeter, cosmic-ray veto, DAQ and other important components are being built and tested. In addition, the fabrication of 27 coil modules that make up the “S” of the transport solenoid will begin soon, and the building construction will continue into 2016. The final solenoid commissioning is scheduled to begin in 2019, while detector and beamline commissioning are scheduled to begin in 2020.

Snapshots from the Long Shutdown

A view from the bottom of the ATLAS cavern

A view from the bottom of the ATLAS cavern, up to the LHC beam pipe as the experiment prepares for Run 2 of the LHC at full energy.

Construction of new panels of the pixel detector. The pixel detector is the innermost of ATLAS’s many layers, lying closest to the interaction point where particle collisions occur.

View of the ATLAS calorimeters from below as they were being moved to their final position before the detector closed for the LHC’s second run. Calorimeters measure energy carried by neutral and charged particles.

The ATLAS team watches as the first part of the Insertable B-Layer (IBL), a new component of the pixel subdetector, enters its support tube. The IBL was installed in May 2014, becoming the innermost layer of ATLAS’s inner detector region. It will provide an additional point for tracking particles. An additional point closer to the collision vertex significantly improves precision.

An ATLAS member vacuums the different sectors inside the 7000-tonne detector. Before the toroid magnets can be turned on for tests, the detector must be thoroughly cleaned. In December 2014, 110 ATLAS members worked in 10 different shifts for five days, cleaning and inspecting the detector and the cavern that houses it, to make sure that no object, however minuscule, had been left behind during the months of upgrade and maintenance.

A thin-gap chamber on one of the big wheels being replaced. The big wheels are the final layer of the muon spectrometer, which identifies muons and measures their momenta as they pass through the ATLAS detector. The muon spectrometer is the outermost component of the 25-m-tall and 46-m-long ATLAS detector.

ATLAS physicists Vincent Hedberg, left, and Giulio Avoni glue optical fibres for the construction of the LUCID calibration system. LUCID is a detector that will help ATLAS continue to measure luminosity with very high precision during the increased collision rates and increased energy expected in the next LHC run.

The vacuum group’s team members lead the installation of LUCID and the LHC beam pipe. The beam pipe delivers the proton–proton collisions to the heart of the detector.

Raphaël Vuillermet, the technical co-ordination team’s engineer, supervises the separation of the muon spectrometer’s big wheels from the cavern balcony. There are four moveable big wheels at each end of the ATLAS detector, each measuring 23 m in diameter. The wheels are separated to access the interior of the muon stations to change faulty chambers.

Members of the ATLAS muon team inspect the monitored drift tubes of the muon spectrometer before the shielding that encircles the beam pipe, where collisions occur, is installed. The shielding is designed to maintain the integrity of the beam and to protect the sensitive components of the detector near the beamline.
