60 Years of CERN Experiments and Discoveries

By Herwig Schopper and Luigi Di Lella (eds.)
World Scientific
Also available at the CERN bookshop


This book is a treasure trove of particle physics, highly recommended for physics teachers, graduate students and professionals in the field. Across 17 chapters, it offers a concise account of 60 years of particle physics at CERN from the point of view of the people in charge of the different experiments.

The first three chapters cover the present day at CERN: the full LHC programme, from the Higgs boson discovery to beauty physics and quark–gluon plasma. They draw a compact but precise picture of the four major LHC experiments (ATLAS, CMS, LHCb, ALICE), giving the reader genuinely useful information.

The surprises, at least for me, come in the chapters that follow. They explain physics that is already in textbooks, but provide a great deal of detail about each specific endeavour – a pleasure to read if you are interested not only in the results but also in the intellectual journey and historical context.

Chapters 4 (number of light neutrinos) and 5 (gauge-coupling-constants precision physics) are dedicated to LEP results, chapter 6 to the discovery of the W and Z bosons at the Super Proton Synchrotron (SPS), and chapter 7 to the fundamental neutral-current experiment at Gargamelle. Going back in time to Gargamelle, one can appreciate the ingenuity of the physics community struggling with the data to get a clearer picture of electroweak physics, at a time when the microelectronics revolution was still far off.

From chapter 8 to the end of the book, the reader picks up little gems. CERN is not only the LHC or LEP, but much more. Chapter 8 tells the story of neutrino physics at the SPS, in particular the precise measurement of the Weinberg angle and how that effort paved the way for today’s neutrino-oscillation experiments. Chapters 9 and 10 are dedicated to kaon physics, in particular to the direct measurement of CP violation in kaon decay by the NA31/NA48 collaborations at the SPS, and to discrete-symmetry (T, CPT and CP) measurements in the neutral-kaon system using the LEAR antiproton storage ring. Here, the reader discovers that the large statistics of π⁺π⁻ decays available at LEAR (since evolved into LEIR) enabled tests of the equivalence principle between particles and antiparticles, as well as of EPR correlations.

Chapter 11 highlights the physics discoveries at the Intersecting Storage Rings (ISR). Remembered as the first hadron collider and a technological feat, the ISR also made an important contribution to fundamental physics by discovering the rise of the proton–proton total cross-section. Chapters 12 and 13 step out of chronological order: chapter 13 concerns the discovery of partons in hadrons from the ISR to the SPS, while the details of hadron internal structure revealed by muon scattering at the SPS are given in chapter 12. The “modern” LHC parlance of “gluon colliders” can be traced back to the ISR, and jet production, now a workhorse of the LHC programme, was already evident in the SPS UA2 experiment. Deep inelastic scattering has been an active field at CERN for more than 35 years, and has had a fundamental impact on the present-day understanding of hadronic-matter structure.

But CERN is not only about colliders. Atomic physics is very much alive there, as is the study of exotic atoms (pionic, muonic, kaonic) and anti-atoms. Chapter 14 traces the history of antimatter and exotic matter at CERN, up to present-day experiments such as ALPHA and ATRAP, and even tests of the equivalence principle (does antimatter fall down?) with AEgIS or GBAR.

The technological challenges of muon storage and the g–2 measurement at CERN, a hot topic today, are covered in chapter 15, which contains two special-relativity surprises: a gamma-ray time-of-flight experiment from the Proton Synchrotron (PS) target, demonstrating the independence of c from the motion of the source, and time dilation in circular orbits measured via the muon lifetime in flight. Chapter 16 describes the beginning of the accelerator programme at CERN and the physics contribution of the CERN 600 MeV Synchrocyclotron (pion decays), in particular the first measurement of the muon anomalous magnetic moment.

Closing the book, chapter 17 discusses part of the nuclear-physics programme, specifically with ISOLDE – an “alive and kicking” experiment dedicated to the study of radioactive nuclei, mainly nuclear ground-state properties and excited nuclear states populated in radioactive decays, but now also leading the production of medical isotopes for fundamental studies in cancer research.

As a final remark, I enjoyed this book not only for the range of topics and extensive explanations, but also because it is easily readable – not an easy goal when the number of authors is so high. Definitely a must-read.

Modern Optics (2nd edition)

By B D Guenther
Oxford University Press


This book is the result of a one-semester course that has been taught by the author to juniors, seniors and first-year graduate students in physics and engineering at Duke University for 13 years.

It gives an overview of the fundamentals of optical science, the principles of which are explained using a rigorous approach based on Maxwell’s equations. Besides the classical topics, the book includes material not found in more conventional textbooks on the subject: nonlinear optics, guided waves, photonic structures, surface plasmons and more. Anisotropy is also discussed at length, despite requiring the use of tensors, because of its importance in modern optics.

This 2nd edition retains an emphasis on both the fundamental principles of optics and exposure to actual optical-engineering problems and solutions. It introduces a large number of applications such as laser optics, fibre optics and medical imaging, which makes the book appealing to engineering students and professors.

A selection of optional material has also been added in the appendices, catering to different interests and stimulating further reading. Many pictures, tables and diagrams accompany the text, making the exposition clear and complete.

Instantons and Large N: An Introduction to Non-Perturbative Methods in Quantum Field Theory

By Marcos Mariño
Cambridge University Press


Intended as a fundamental resource for graduate students in particle, theoretical and mathematical physics, the book gives a highly pedagogical introduction to some advanced topics of quantum field theory (QFT).

The standard approach to QFT, one of the pillars of modern physics, is the perturbative one. Although successful, it is not sufficient to address many important phenomena. In this book, the author gives an introduction to two methods that go beyond the standard perturbative framework: instantons and the large-N expansion.

The first part of the volume offers a detailed exposition of instantons in quantum mechanics, supersymmetric quantum mechanics, the large-order behaviour of perturbation theory, and Yang–Mills theories. In the second part, the large-N expansion in QFT is examined.

The topics are presented in a well-organised form, and each subject is explained with detailed mathematical derivations and then illustrated with a model or example in which it is implemented. This enables students to move easily through the text and gain practical experience with the most important tools of the field.

Apart from the basic building blocks in the theory of instantons and of large-N expansion, the choice of topics has been dictated by the author’s taste and expertise, as he himself admits. As a consequence, some subjects covered extensively elsewhere in the literature are left aside, while space has been given to topics not commonly treated in textbooks. Moreover, supersymmetry has been avoided as much as possible, by choice.

The Large Hadron Collider: Harvest of Run 1

By Thomas Schörner-Sadenius (ed.)
Springer


On the verge of obtaining new results from the first year of Run 2 of the LHC, a book summarising the results from Run 1 is highly anticipated.

The impressive effort needed to write such an overview must be acknowledged. The LHC experiments (ALICE, ATLAS, CMS, LHCb and TOTEM) have published more than 1000 results from Run 1, and producing a comprehensive review of them while ensuring that the book remains accessible to young researchers is a demanding task that requires careful editorial work. This seems to have been the intention of the authors and, in my opinion, it has been accomplished.

Individual chapters are written by teams of well-recognised experts in each specific field. The book starts with a short historical overview, describing the three-decade-long development of the LHC project, from first ideas to its realisation. The reader will find an interesting summary of the difficult financial situation the LHC had to confront, while facing stiff competition from similar accelerator projects (UNK, SSC).

Clearly, the legacy of Run 1 is marked by the discovery of the Higgs boson, so a long and interesting chapter is dedicated to the discovery and, later on, to the measurement of its properties. The volume also shows the impact of the LHC results on all of the different fronts of high-energy physics. The interplay between recent theory developments and experimental results is clearly presented; furthermore, each physics chapter opens with a short theoretical summary, underlining the pedagogical intention of the authors. Results are often contextualised by comparing them with the current status of each topic and by showing the prospects for future improved results.

Besides allowing senior researchers to quickly scan through the plethora of LHC results, the book will be particularly useful for young researchers trying to familiarise themselves with certain aspects of LHC physics. It stimulates further reading and gives a long list of references at the end of each chapter – in my opinion, this is a main bonus of the book.

Although the results from Run 1 at the LHC are destined to be quickly outdated by new results from Run 2, I believe that this book could serve for several years as initial reading for any physicist when first confronted with LHC physics, thanks to the historical and pedagogical point of view adopted.

A year of challenges and successes

2015 was a tough year for CERN’s accelerator sector. Besides assuring delivery of beam to the extensive non-LHC facilities such as the AD, ISOLDE, nTOF and the North Area, many teams also had to work hard to bring the LHC back into business after the far-reaching efforts of the long shutdown.

At the end of 2014 and start of 2015, the LHC was cooled down sector by sector and all magnet circuits were put through a campaign of powering tests to fully re-qualify everything. The six-month-long programme of rigorous tests involved the quench-protection system, power converters, energy extraction, UPS, interlocks, electrical quality assurance and magnet-quench behaviour. The powering-test phase eventually left all magnetic circuits fully qualified for 6.5 TeV.

Some understandable delay was incurred during this period, and three things can be highlighted. First was the decision to perform in situ tests of the consolidated splices – the so-called Copper Stabilizer Continuity Measurement (CSCM) campaign. These tests were a success and confirmed the quality of the work done during the shutdown.

Second, dipole-quench re-training took some time – in particular, the dipoles of sector 45 proved a little recalcitrant and reached the target 11,080 A after some 51 training quenches.

Third, after an impressive team effort co-ordinated by the machine-protection team to conceive, prototype, test and deploy the system, a small piece of metallic debris that was causing an earth fault in a dipole in sector 34 was successfully burnt away on the afternoon of Tuesday 31 March.

The first beam of 2015 went around the LHC on Easter Sunday, 5 April. Initial commissioning delivered the first beam at 6.5 TeV after five days, and first “stable beams” after two months of careful set-up and validation.

Ramp up

Two scrubbing runs delivered good beam conditions for around 1500 bunches per beam, after a concerted campaign to re-condition the beam vacuum. However, the electron cloud, anticipated to be more of a problem with the nominal 25 ns bunch-spacing beam, was still significant at the end of the scrubbing campaign.

The initial 50 ns and 25 ns intensity ramp-up phase was tough going and had to contend with a number of issues, including earth faults, unidentified falling objects (UFOs), an unidentified aperture restriction in a main dipole, and radiation affecting specific electronic components in the tunnel. Although operating the machine in these conditions was challenging, the teams succeeded in colliding beams with 460 bunches and delivered some luminosity to the experiments, albeit with poor efficiency.

The second phase of the ramp-up following the technical stop at the start of September was dominated by the electron cloud and the heat load that it generates in the beam screens of the magnets in the cold sectors. The challenge was then for cryogenics, which had to wrestle with transients and operation close to the cooling-power limits. The ramp-up in number of bunches was consequently slow but steady, culminating in a final figure for the year of 2244 bunches per beam.

Importantly, the electron cloud generated during physics runs at 6.5 TeV serves to slowly condition the surface of the beam screen and so reduce the heat load at a given intensity. As time passed, this effect opened up a margin for the use of more bunches. Cryogenics operations were therefore kept close to the acceptable maximum heat load, and at the same time in the most effective scrubbing regime.

The overall machine availability is a critical factor in integrated-luminosity delivery, and remained respectable, with around 32% of the scheduled time spent in stable beams during the final period of proton–proton physics from September to November. By the end of the 2015 proton run, 2244 bunches per beam were giving peak luminosities of 5.2 × 10³³ cm⁻²s⁻¹ in ATLAS and CMS, each of which was delivered an integrated luminosity of around 4 fb⁻¹ for the year. Levelled luminosity of 3 × 10³² cm⁻²s⁻¹ in LHCb and 5 × 10³⁰ cm⁻²s⁻¹ in ALICE was provided throughout the run.
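As a rough cross-check of these figures (a back-of-the-envelope sketch only – real fills start below peak luminosity and decay during a fill, so the true time in stable beams is considerably longer), one can ask how much time at the quoted peak luminosity the year’s integrated luminosity corresponds to:

```python
# Equivalent days in stable beams at peak luminosity needed to accumulate
# the quoted integrated luminosity (illustrative arithmetic, not an
# official accounting of LHC operations).
PEAK_LUMI = 5.2e33      # cm^-2 s^-1, 2015 peak in ATLAS and CMS
INT_LUMI_FB = 4.0       # fb^-1 delivered per experiment in 2015
CM2_PER_FB_INV = 1e39   # 1 fb^-1 = 1e39 cm^-2

seconds_at_peak = INT_LUMI_FB * CM2_PER_FB_INV / PEAK_LUMI
print(f"{seconds_at_peak / 86400:.1f} equivalent days at peak")  # ≈ 8.9
```

Set against the roughly two months of proton physics at 32% availability, this shows how far luminosity decay and the bunch-number ramp-up eat into the ideal figure.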

Also of note were dedicated runs at high β* for TOTEM and ALFA. These provided important data on elastic and diffractive scattering at 6.5 TeV, and interestingly a first test of the CMS-TOTEM Precision Proton Spectrometer (CT-PPS), which aims to probe double-pomeron exchange.

As is now traditional, the final four weeks of operations in 2015 were devoted to the heavy-ion programme. To make things more challenging, it was decided to include a five-day proton–proton reference run in this period. The proton–proton run was performed at a centre-of-mass energy of 5.02 TeV, giving the same nucleon–nucleon collision energy as that of both the following lead–lead run and the proton–lead run that took place at the start of 2013.

Good intensities

Both the proton reference run and the ion run demanded re-set-up and validation of the machine at new energies. Despite the time pressure, both runs went well and were counted a success. Performance with ions depends strongly on the beam from the injectors (source, Linac3, LEIR, PS and SPS), and extensive preparation allowed the delivery of good intensities, opening the way to a levelled design luminosity of 1 × 10²⁷ cm⁻²s⁻¹ for ALICE and more than 3 × 10²⁷ cm⁻²s⁻¹ for ATLAS and CMS. For the first time in an ion–ion run, LHCb also took data, following its participation in the proton–lead run. Dedicated ion machine development included crystal collimation and quench-level tests, the latter providing important input to future ion operation in the HL-LHC era.

The travails of 2015 have opened the way for a full production run in 2016. Following initial commissioning, a short scrubbing run should re-establish the electron-cloud conditions of 2015, allowing operation with 2000 bunches and more. This figure can then be increased incrementally towards the nominal 2700 as conditioning progresses. Following extensive machine-development campaigns in 2015, the β* will be reduced to 50 cm for the 2016 run. Nominal bunch intensity and emittance will bring the design peak luminosity of 1 × 10³⁴ cm⁻²s⁻¹ within reach. Reasonable machine availability and around 150 days of 13 TeV proton–proton physics should allow the 23 fb⁻¹ total delivered to ATLAS and CMS in 2012 to be exceeded.

Latest ATLAS results with 13 TeV proton–proton collisions at the LHC

Since the first ATLAS results from LHC Run 2 were presented at this summer’s conferences (EPS-HEP 2015 and LHCP 2015), with data corresponding to an integrated luminosity of approximately 80 pb⁻¹, the LHC has continued to ramp up in luminosity. The maximum instantaneous luminosity for 2015 was 5 × 10³³ cm⁻²s⁻¹, which already approaches the Run 1 record of 7 × 10³³ cm⁻²s⁻¹. ATLAS recorded more than 4 fb⁻¹ in 2015, with different physics analyses using from 3.32 to 3.60 fb⁻¹, depending on which parts of the detector were required to be fully operational with good data quality.

The main goals of the early measurements presented this summer were to study in detail the performance of the detector, to characterise the main Standard Model processes at 13 TeV, and to perform the first Run 2 searches for phenomena beyond the Standard Model. These early searches focused on processes such as the production of high-mass quantum and rotating black holes in dijet, multijet and lepton-jet event topologies, for which the higher centre-of-mass energy provided an immediate improvement in sensitivity beyond the reach of the Run 1 data.

The recently completed 2015 data set corresponds to more than 30 times the data available in the summer. With these data, the full Run 2 programme of measurements and searches has started, and the first results were presented by the collaboration at a joint ATLAS and CMS seminar on 15 December 2015 during CERN Council week.

These new results benefited from the first calibration of the electron, muon and jet reconstruction and trigger algorithms, performed in situ using the data. The new insertable B-layer of pixel detectors significantly improves the precision of track measurements near the interaction region, and is therefore crucial for tagging jets containing heavy quarks.

First measurements include the ZZ cross-section and single-top-quark production in the Wt channel at 13 TeV. Top-quark pair production has also been investigated in measurements where the top-quark pair is produced in association with additional jets. These measurements provide crucial checks of the modelling implemented in the state-of-the-art generators used to simulate these processes at NLO QCD precision, and can subsequently be used to further constrain physics beyond the Standard Model that would alter these production modes.

The new data also allowed the first measurements of the Higgs boson production cross-section at 13 TeV, inclusively in the diphoton and ZZ decay channels.

With the increased centre-of-mass energy, and the availability of significantly more data than in the summer, new-particle search results were awaited with much anticipation. A large number of searches for new phenomena motivated by theories beyond the Standard Model were completed in dijet, multijet, photon-jet, diphoton, dilepton, single-lepton and missing-transverse-energy channels. Searches for vector-boson pair (VV) and Higgs plus vector-boson (VH) topologies with boosted jets have also been completed. Searches for strongly produced supersymmetry (SUSY), using signatures with zero or one lepton, or a Z boson, together with jets and missing transverse energy, as well as topologies with b-jets, have improved on the sensitivity of Run 1. Finally, searches for Higgs bosons from extended electroweak symmetry-breaking sectors have been performed in final states with a pair of tau leptons, and in pairs of vector bosons.

So far, no definitive sign of new physics has been seen in the data, although two excesses have been observed. The first, with a significance of 2.2 standard deviations, appeared in the search for SUSY via gluino production with subsequent decays into a Z boson and missing energy; a 3-standard-deviation excess was observed in this channel in Run 1. The second was observed in the search for diphoton resonances, where a peak is seen at 750 GeV with a local significance of 3.6 standard deviations, corresponding to a global significance of 2.0 standard deviations. More data will be needed to probe the nature of these excesses.

Limits on a large variety of theories beyond the Standard Model have been derived. The ATLAS experiment is completing its measurements and search programme on the data collected in 2015, and is preparing for the data to come in 2016.

• For more details on the ATLAS results presented at the seminar, see https://twiki.cern.ch/twiki/bin/view/AtlasPublic/December2015-13TeV.

CMS presents new 13 TeV results at end-of-year jamboree

The first phase of collisions after the LHC restart earlier this year provided CMS with data at the novel energy of 13 TeV, enabling CMS to explore uncharted domains of physics. At the end of this exciting year, CMS and ATLAS presented comprehensive overviews of their latest results from analyses performed on the collected data. Here we highlight only a few of the key CMS results – refer to the further reading (below) for more.

Before exploring the “unknown”, CMS first strove to rediscover the “known”, as a means to validate the excellent performance of the detector after emerging from the consolidation and upgrade period of Long Shutdown 1. Convincing performance studies as well as early measurements had already been presented at this year’s summer conferences. Meanwhile, the studies and physics measurements continued as the size of the data sample increased over the course of the autumn. In total, CMS approved 33 new public results for the end-of-year jamboree, capping off a successful period of commissioning, data collection and analysis. In contrast to the studies performed for other Standard Model particles, CMS preferred to remain blinded for studies involving the LHC’s most famous particle, the Higgs boson discovered in 2012, because the collected data sample was not large enough for a Higgs boson signal to be detectable.

However, it was the anticipation of results from searches for new phenomena that filled CERN’s main auditorium beyond capacity. The CMS focus was on searches already sensitive to new physics with the small data sample collected in 2015. Hadron jets play a crucial role in searches for exotic particles such as excited quarks, whose observation would demonstrate that quarks are not elementary particles but composite objects, and for heavier cousins of the W boson. These new particles would reveal their presence by transforming into two particle jets (a “dijet”). The highest-mass dijet event observed by CMS is shown in the figure. In this study, CMS searched for bumps in the mass distribution of the dijet system. Seeing no significant excess over the background, a new CMS publication based on the 13 TeV data sets limits on the masses of these hypothetical particles, ranging from 2.6 TeV to 7 TeV depending on the new-physics model.

CMS also searched for the presence of heavy particles such as a Z´ (Z-prime) boson in the dilepton spectrum, in which unstable exotic particles would transform into pairs of electrons or muons. While CMS observed high-mass events, with dielectrons up to a mass of 2.9 TeV and dimuons up to 2.4 TeV, the data are compatible with the Standard Model and do not provide evidence for new physics.

Finally, CMS observed a slight excess in events with two photons, at a diphoton mass around 760 GeV. However, small fluctuations such as this have been observed regularly in the past, including in LHC Run 1, and often disappear as more data are collected. We are therefore still far from the threshold associated with a discovery, but the stage is set for great excitement and anticipation in the upcoming 2016 run of the LHC.

Fixed-target and heavy-ion physics with LHCb

LHCb

Beyond its rich programme of flavour physics based on proton–proton collisions, LHCb opened the door in 2015 to a new domain of exploration related to cosmic-ray and heavy-ion physics. Thanks to its forward coverage, the detector has access to a unique kinematic range in colliding-beam physics. In addition, using a system developed for precise luminosity measurements based on the beam-gas imaging method, neon, helium and argon gas have been injected during some periods into the interaction region, exploiting the LHC proton and ion beams for fixed-target physics at the highest available energies.

The measurement of proton–helium collisions has been motivated by recent results from AMS and other space detectors, which suggest that the antiproton yield in cosmic rays may exceed the expected value from secondary production in the interstellar medium. The accuracy of such predictions is limited by the poor knowledge of the proton–helium cross-section for proton energies at the TeV scale. By measuring proton–helium collisions, LHCb mimics the conditions for secondary production, and has the potential to help in the interpretation of these exciting results.

In proton–argon collisions, a nucleon–nucleon centre-of-mass energy of 110 GeV is reached, in between the energies achieved in experiments at the SPS in the 1980s and 1990s and those probed more recently at RHIC. While the energy densities produced are too low to create quark–gluon plasma (QGP), they allow the study of cold-nuclear-matter (CNM) effects, which are crucial for establishing QGP formation.

During the last weeks of the 2015 LHC physics programme, the LHCb collaboration also participated in the heavy-ion run, taking data both in fixed-target mode, recording lead–argon collisions at a centre-of-mass energy of 69 GeV, and in colliding-beam mode, collecting lead–lead collisions at 5 TeV. In both modes the energy densities are large enough to create a QGP; however, lead–argon collisions have lower multiplicities than lead–lead collisions, and are therefore easier to analyse. The experiment is able to reconstruct lead–lead collisions up to a centrality of about 50%. In fixed-target mode, the rapidity coverage of the LHCb detector in the nucleon–nucleon centre-of-mass frame is about –3 < y < 1; in colliding-beam mode, the range 2 < y < 5 is covered. The experiment has precise tracking, vertexing, calorimetry and powerful particle identification over the full detector acceptance.
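The fixed-target energies quoted above follow from simple kinematics: for a beam nucleon of energy E striking a nucleon at rest, √s_NN ≈ √(2Em_N), with the per-nucleon energy of a lead beam scaled by Z/A = 82/208. A short sketch (assuming the nominal 6.5 TeV proton-equivalent magnet setting; the ion run uses a slightly different equivalent energy, so the values agree only approximately):

```python
import math

M_N = 0.938  # nucleon mass in GeV (approximate)

def sqrt_s_nn_fixed_target(e_per_nucleon_gev):
    """Nucleon-nucleon centre-of-mass energy (GeV) for a beam on a target at rest."""
    return math.sqrt(2 * e_per_nucleon_gev * M_N + 2 * M_N**2)

# 6.5 TeV proton beam on a gas target: the quoted proton-argon figure
print(f"p-gas:  {sqrt_s_nn_fixed_target(6500):.0f} GeV")   # ~110 GeV

# Lead beam: energy per nucleon scales by Z/A = 82/208
e_pb = 6500 * 82 / 208
print(f"Pb-gas: {sqrt_s_nn_fixed_target(e_pb):.0f} GeV")   # ~69 GeV
```

The same Z/A scaling explains why the colliding-beam lead–lead energy sits near 5 TeV per nucleon pair rather than 13 TeV.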

Comparison of collisions in the various configurations allows QGP effects to be disentangled from CNM effects. The various beam configurations are summarised in the diagram.

The focus of LHCb measurements will, on the one hand, be on hard probes such as open heavy-flavour states and quarkonia, which can be carried out down to very low pT. On the other hand, open questions in the soft sector of QCD can be addressed, which cannot be treated perturbatively. LHCb is looking forward to exciting measurements in a variety of beam configurations in the years ahead.

A new chapter opens with more Pb–Pb collisions for a precision study of hot and dense QCD matter

ALICE

After the restart of the LHC physics programme in June 2015 with world-record proton–proton collisions at √s = 13 TeV, nuclear beams reappeared in the LHC tunnel in November 2015, followed by first collisions between 208Pb ions. With an unprecedented centre-of-mass energy of 5.02 TeV in the nucleon–nucleon system, the collection of these data marks the beginning of a new chapter in the precision study of the properties of hot and dense hadronic matter, and in the quest to understand QCD confinement.

Measurement of the inclusive production of charged hadrons in high-energy nucleus–nucleus reactions is a key observable for characterising the global properties of the collision, in particular whenever the collision energy increases significantly (here by almost a factor of two with respect to LHC Run 1). Particle production at collider energies originates from the interplay of perturbative (hard) and non-perturbative (soft) QCD processes. Soft scattering processes and parton hadronisation dominate the bulk of particle production at low transverse momenta, and can only be modelled phenomenologically. On the other hand, as the collision energy increases, the role of hard processes – parton scatterings with large momentum transfer – grows. Such measurements, which contribute essential information for estimating the initial energy density leading to the formation and evolution of the quark–gluon plasma and its relation to the collision geometry, also provide valuable insight into the initial-state partonic structure of the colliding nuclei.

The ALICE experiment has measured the centrality dependence of the inclusive charged-particle density (dNch/dη) at mid-rapidity (|η| < 0.5) in Pb–Pb collisions at √s_NN = 5.02 TeV. For an event sample corresponding to the most central 5% of the hadronic cross-section, the pseudorapidity density of primary charged particles at mid-rapidity is 1943 ± 54, which corresponds to 10.2 ± 0.3 per participating nucleon pair. This represents an increase by a factor of about 2.4 relative to p–Pb collisions at the same collision energy, and of about 1.2 relative to central Pb–Pb collisions at 2.76 TeV. Previous measurements were performed by ALICE, ATLAS and CMS at the LHC at √s_NN = 2.76 TeV, and at lower energies in the range √s_NN = 17–200 GeV by SPS and RHIC experiments. The figure shows a compilation of results on mid-rapidity charged-particle density for the most central nucleus–nucleus collisions and for elementary proton–proton and proton(deuteron)–nucleus collisions. Particle production in nucleus–nucleus collisions increases more rapidly with the centre-of-mass energy (per nucleon pair) than in proton–proton and proton(deuteron)–nucleus collisions, in agreement with expectations from the power-law extrapolation of lower-energy results. The characteristics of the centrality dependence of dNch/dη and a comparison with several phenomenological models are reported in a recent publication by the ALICE collaboration.
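The two quoted densities are mutually consistent, as a quick arithmetic check shows (the number of participating nucleons is not stated in the text; the ≈381 below is simply the value these two numbers imply, close to typical Glauber-model estimates for 0–5% central Pb–Pb):

```python
# Consistency check of the quoted ALICE numbers: if dNch/deta = 1943 in the
# 0-5% most central events corresponds to 10.2 per participating nucleon
# pair, the implied number of participants is
dn_deta = 1943.0
per_pair = 10.2
n_part = 2 * dn_deta / per_pair
print(round(n_part))  # 381
```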

MicroBooNE records first neutrino events

MicroBooNE, an experiment designed to measure neutrinos and antineutrinos generated by Fermilab’s Booster accelerator (CERN Courier September 2014 p8), has recorded its first neutrino events. MicroBooNE is the first of three neutrino detectors in the lab’s new short-baseline neutrino (SBN) programme, recommended by the 2014 report of the US Particle Physics Project Prioritization Panel (P5). The SBN project will comprise the ICARUS detector (currently being refurbished at CERN) as the far detector, MicroBooNE as the intermediate detector and SBND as the near detector.

Designed to search for sterile neutrinos and other new physics phenomena in low-energy neutrino oscillations, the SBN programme aims to confirm or refute the hints of a fourth type of neutrino first reported by the LSND collaboration at Los Alamos National Laboratory, and resolve the origin of a mysterious low-energy excess of particle events seen by the MiniBooNE experiment, which used the same short-baseline neutrino beam line at Fermilab.

MicroBooNE uses a 10.4 m-long liquid-argon time-projection chamber (TPC) filled with 170 tonnes of liquid argon. The TPC probes neutrino oscillations by reconstructing particle tracks as finely detailed 3D images. When a neutrino hits the nucleus of an argon atom, its collision creates a spray of subatomic particles. Tracking and identifying those particles allows scientists to reveal the type and properties of the neutrino that produced them.

The MicroBooNE time-projection chamber is the largest ever built in the US and is equipped with 8256 delicate gold-plated wires. The three layers of wires capture pictures of particle interactions at different points in space and time. The superb resolution of the time-projection chamber will allow scientists to check whether the excess of MiniBooNE events – recorded with a Cherenkov detector filled with mineral oil – is due to photons or electrons.

MicroBooNE will collect data for several years, and computers will sift through thousands of neutrino interactions recorded every day. It will be the first liquid-argon detector to measure neutrino interactions from a neutrino beam with particle energies of less than 800 MeV.

Construction is under way for the two buildings that will house the other detectors of the SBN programme: the new 260 tonne Short-Baseline Near Detector (110 m from the neutrino production target) and the 760 tonne ICARUS detector (600 m) that took data at the Gran Sasso National Laboratory in Italy from 2009 to 2012. Like MicroBooNE (470 m from the target), they are both liquid-argon TPCs.

The MicroBooNE collaboration comprises 138 scientists from 28 institutions, while more than 200 scientists from 45 institutions are collaborating on the SBN programme. The experience and knowledge they will gain is relevant for the forthcoming Deep Underground Neutrino Experiment (DUNE), which will use four 10,000 tonne liquid-argon TPCs to examine neutrino oscillations over a much longer distance (1300 km) and a much higher and broader energy range (0.5–10 GeV).
