CLOUD shines new light on aerosol formation in atmosphere

The CLOUD experiment at CERN, which is studying whether cosmic rays have a climatically significant effect on aerosols and clouds, is also tackling one of the most challenging and long-standing problems in atmospheric science – understanding how new aerosol particles are formed in the atmosphere and the effect that these particles have on climate. In a major step forward, the CLOUD collaboration has made the first measurements – either in the laboratory or in the atmosphere – of the formation rates of atmospheric aerosol particles that have been identified with clusters of precisely known molecular composition.

Atmospheric aerosol particles cool the climate by reflecting sunlight and by forming smaller but more numerous cloud droplets, which makes clouds brighter and extends their lifetimes. By current estimates, about half of all cloud drops are formed on aerosol particles that were “nucleated” – that is, produced from the clustering of tiny concentrations of atmospheric molecules rather than being emitted directly into the atmosphere, as happens with sea-spray particles. Nucleation is therefore likely to be a key process in climate regulation. However, the physical mechanisms of nucleation are not understood, nor is it known which molecules participate in nucleation and whether they derive from natural sources or are emitted by human activities.

CLOUD has studied the formation of new atmospheric particles in a specially designed chamber under extremely well controlled laboratory conditions of temperature, humidity and concentrations of nucleating vapours. This chamber is the first to meet the challenging technical requirements on ultra-low levels of contaminants that are necessary to carry out these experiments in the laboratory. Using state-of-the-art instruments connected to the chamber, the experiment can measure extremely low concentrations of atmospheric vapours. It can also study the precise molecular make-up and growth of newly formed molecular clusters, from single molecules up to stable aerosol particles.

This has enabled CLOUD to measure the formation of particles caused by sulphuric acid and tiny concentrations of dimethylamine, near the level of one molecule per trillion (10¹²) air molecules. The measurements, made at 278 K and 38% relative humidity, involved different combinations of sulphuric acid (H2SO4) and water (H2O), with ammonia (NH3) or dimethylamine (DMA). The figure shows the results from CLOUD together with various atmospheric measurements and theoretical expectations based on quantum-chemical calculations of cluster binding energies. The results indicate that amines at typical atmospheric concentrations of only a few parts per trillion by volume combine with sulphuric acid to form highly stable aerosol particles, at rates similar to those observed in the lower atmosphere. The figure also shows that these highly detailed measurements allow a fundamental understanding of the nucleation process at the molecular level, because they can be reproduced by the theoretical calculations of molecular clustering.

Amines are atmospheric vapours closely related to ammonia. Derived largely from anthropogenic activities – mainly animal husbandry – they are also emitted by the oceans, the soil and biomass burning. The results from CLOUD suggest that both natural and anthropogenic sources of amines could influence climate. CLOUD has also found that ionization by cosmic rays has only a small effect on the formation rate of amine–sulphuric-acid particles, suggesting that cosmic rays are unimportant for the generation of these particular aerosol particles in the atmosphere.

• The CLOUD collaboration consists of the California Institute of Technology, Carnegie Mellon University, CERN, Finnish Meteorological Institute, Helsinki Institute of Physics, Johann Wolfgang Goethe University Frankfurt, Karlsruhe Institute of Technology, Lebedev Physical Institute, Leibniz Institute for Tropospheric Research, Paul Scherrer Institute, University of Beira Interior, University of Eastern Finland, University of Helsinki, University of Innsbruck, University of Leeds, University of Lisbon, University of Manchester, University of Stockholm and University of Vienna.

ICRC 2013: from Earth to the Galaxy and beyond

At the traditional dinner party they danced to samba music while holding caipirinhas. During the day, the more than 700 physicists who attended the 33rd International Cosmic Ray Conference (ICRC 2013) in Rio de Janeiro listened carefully to the 400 scheduled talks in a variety of plenary and parallel sessions on 2–9 July. Instead of caipirinhas, they held laptops and notepads as they focused on the important findings and data presented at the first ICRC to be held in South America.

Organized under the auspices of the International Union of Pure and Applied Physics (IUPAP) and its C4 Commission on Cosmic Rays, ICRC 2013 was hosted by the Centro Brasileiro de Pesquisas Físicas – an institute of the ministry of science, technology and innovation – the Federal University of Rio de Janeiro and the Brazilian Physical Society. It was sponsored by the National Council for Scientific and Technological Development (CNPq), the Coordination for Improvement of Higher Education Personnel (CAPES) and the Research Support Foundation of the state of Rio de Janeiro (FAPERJ).

The location in South America was not the only “first”. The organization of the 33rd ICRC had a scientific programme committee for the first time, consisting of leading experts in solar and heliospheric physics, cosmic-ray physics, gamma-ray astronomy, neutrino astronomy and dark-matter physics. Also for the first time, ICRC included research on dark matter as a main branch of the programme. For this reason, ICRC 2013 adopted the subtitle “The Astroparticle Physics Conference”. This might also become the C4 Commission’s new name, as Johannes Knapp, the commission’s chair, announced during the closing session. The commission organized a poll during the nine days of the conference in which all registered participants could vote on changing the name from “Cosmic Rays” to “Astroparticle Physics”. The majority voted for the change and the commission is now consulting IUPAP on the matter. To maintain tradition, the conference’s main title – ICRC – will remain unchanged.

ICRC 2013 was certainly a success. During the plenary session on results from the Pierre Auger Observatory, Antoine Letessier-Selvon of CNRS and Université Pierre et Marie Curie presented evidence of what could be called “the muon problem”. It concerns the conflict between the prediction from Monte Carlo simulations of the number of muons in the surface Cherenkov detectors and the value extracted from the experimental data, which is about a factor of 1.5 higher. Letessier-Selvon argued that a change in composition at higher energies is not sufficient to explain the discrepancy.

The ground-based gamma-ray experiments HESS, MAGIC and VERITAS have added new gamma-ray sources – both in the Galaxy and beyond it – to the catalogue, which now totals about 150 sources. Teams at the northern-hemisphere observatories reported flaring of the blazar Mkn 421 in April this year, while MAGIC registered another flare in November 2012 in IC 310 – an extragalactic source that it had previously discovered. Miguel Mostafa of Colorado State University presented “first light” results – in fact, gamma rays – from the High Altitude Water Cherenkov Observatory, installed at an altitude of 4150 m in Mexico. It is designed to detect ultra-high-energy gamma rays and is sensitive to energies above 300 GeV. With only about one third of the detector in operation, the collaboration was still able to present its view of the Mkn 421 flare of April.

Data from Voyager 1

In neutrino research, the IceCube experiment has some thrilling results. Spencer Klein of the Lawrence Berkeley National Laboratory and the University of California, Berkeley presented the 28 events that were detected with energies above 50 TeV, which include the previously revealed events above 1 PeV (CERN Courier July/August 2013 p5). Klein also spoke of the observation of another very-high-energy event in the ongoing analysis of 2012 data – but its characteristics remain “top secret”.

Another highlight of ICRC 2013 was the presentations by Nobel laureate Sam Ting and the Alpha Magnetic Spectrometer (AMS) collaboration of the first results from two years of AMS-02 operation on the International Space Station (ISS). The main goal is to perform a high-precision, large-statistics and long-duration study of cosmic nuclei, elementary charged particles and gamma rays. At the conference the collaboration presented high-precision measurements of the fluxes, ratios and anisotropies of electrons and positrons, as well as first results on proton and helium fluxes (CERN Courier October 2013 p22).

Moving further out in space, Ed Stone from Caltech presented the saga of the Voyager 1 spacecraft, launched in 1977, which is now at the edge of the solar system. The data clearly show a “wall” characterizing the heliosheath. It is astonishing that Voyager 1 is still collecting data after all these years – with a 1970s on-board computer and a power source that is still very much alive, having survived passage through the harsh environments of Jupiter and Saturn. Stone was seen not only by the conference participants but also by the 40 million viewers who watched an interview with him during a popular programme on Brazilian TV.

The parallel sessions included presentations on a plethora of new projects ranging from next-generation imaging air-Cherenkov telescopes, represented by the Cherenkov Telescope Array, to the Extreme Universe Space Observatory onboard the Japanese Experiment Module (JEM-EUSO). To be installed on the ISS, JEM-EUSO is designed to measure ultra-high-energy cosmic rays through the fluorescence of the extensive air showers that they produce – an expression of optimism in the future of the field.

The 34th ICRC will be held in The Hague, the Netherlands, in July 2015 and will be followed two years later by the 35th meeting in Busan, Korea. Although there will be no samba or caipirinhas, there will surely be the same level of results and commitment from astroparticle physicists worldwide.

Awards for astroparticle physics

Besides the announcements of important findings and experiments, the conference was the occasion for the traditional awards for outstanding contributions in astroparticle physics. Six people were honoured, from more than 30 nominations.

Aya Ishihara, from Chiba University, received an IUPAP Young Scientist Award for her outstanding work on the search for ultra-high-energy neutrinos and the detection of the two neutrino events at >1 PeV with the IceCube detector. A second Young Scientist Award went to Daniel Mazin, from IFAE Barcelona, for his outstanding work on gamma-ray blazars and extragalactic background light, using the MAGIC Cherenkov telescopes.

Rolf Bühler, from DESY Zeuthen, received the Shakti Duggal Award for his outstanding work on the variability of the emission from the Crab nebula and extragalactic background light, using the HESS and Fermi telescopes. The O’Ceallaigh Medal was awarded to Edward Stone, from Caltech, for his contributions to cosmic-ray physics and specifically his leading role in the Voyager mission.

Motohiko Nagano, from ICRR Tokyo and Fukui University, received the Yodh Prize for his pioneering leadership in the experimental study of the highest-energy cosmic rays. Sunil Gupta, from TIFR Mumbai, was awarded the Homi Bhabha Medal and Prize for his contributions to non-thermal astrophysics and his leading role in the development of gamma-ray astronomy.

New high-precision constraints on charm CP violation

There are four neutral mesons that allow particle–antiparticle transitions – mixing – and so make ideal laboratories for studies of matter–antimatter asymmetries (CP violation). Indeed, such an asymmetry has already been observed for three of these mesons: K0, B0 and B0s. So far, searches for CP violation in the fourth neutral meson – the charm meson D0 – have not revealed a positive result. However, being the only one of the four systems to contain up-type quarks, the D0 meson provides unique access to effects from physics beyond the Standard Model.

The LHCb collaboration recently presented two new sets of measurements at the CHARM 2013 conference, held in Manchester on 31 August–4 September. Both measurements use several million decays of D0 mesons into two charged mesons. The first is based on D0 → K+π– decays and their charge conjugates, from data recorded in 2011 and 2012. Owing to the Cabibbo-Kobayashi-Maskawa mechanism, the direct decay is suppressed relative to its Cabibbo-favoured counterpart. However, the final state can also be reached through mixing of the D0 meson into its antimeson, followed by the favoured decay D̄0 → K+π–.

These two components and their interference are distinguished through analysis of the decay-time structure of the decay – comparison of the structure for D0 and D̄0 decays measures CP violation. The results give the best measurements to date of the mixing parameters in this system and are consistent with no CP violation at an unprecedented level of sensitivity (LHCb 2013a).

The second measurement is based on decays into a pair of kaons or a pair of pions and uses data recorded in 2011. The asymmetry between the mean lifetimes measured in D0 and D̄0 decays is related to a parameter, AΓ, which is the asymmetry between the inverse effective lifetimes of decays to the specific final state. It is a measure of so-called indirect CP violation. The results for the two final states are AΓ(KK) = (–0.35±0.62±0.12) × 10–3 and AΓ(ππ) = (0.33±1.06±0.14) × 10–3 (LHCb 2013b). This is the first time that a search for indirect CP violation in charm mesons has reached a sensitivity of better than 10–3.
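As a rough numerical illustration of what AΓ represents, the sketch below computes the asymmetry from two effective lifetimes and combines the quoted statistical and systematic uncertainties in quadrature. The lifetimes and the sign convention are hypothetical, chosen only to show the arithmetic; this is not the LHCb analysis.

```python
import math

def a_gamma(tau_d0, tau_d0bar):
    """Lifetime asymmetry between D0 and D0bar decays to a common final
    state, in one common sign convention (conventions vary)."""
    return (tau_d0bar - tau_d0) / (tau_d0bar + tau_d0)

# Hypothetical effective lifetimes in picoseconds -- a tiny difference
# translates into an asymmetry of order 10^-4:
asym = a_gamma(0.4101, 0.4104)
print(asym)

# Total uncertainty on the quoted A_Gamma(KK) value, adding the
# statistical and systematic errors in quadrature:
total_err = math.hypot(0.62e-3, 0.12e-3)
print(total_err)
```

The quoted AΓ(KK) central value of –0.35 × 10–3 lies well within this total uncertainty of about 0.63 × 10–3, which is why the measurement is consistent with no CP violation.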

The combination of previous measurements performed by the Heavy Flavor Averaging Group hinted at potentially nonzero values for the parameters of CP-violation in D0 mixing, |q/p| and φ. As the figure shows, the new results from LHCb do not support this indication. However, they provide extremely stringent limits on the underlying parameters of charm mixing, therefore constraining the room for physics beyond the Standard Model.

Daya Bay releases new results

The international Daya Bay collaboration has announced new results, including its first data on how neutrino oscillation varies with neutrino energy, which allow the mass splitting between different neutrino types to be measured. The mass splitting sets the frequency of neutrino oscillation, while the mixing angles set the amplitude; both are crucial for understanding the nature of neutrinos.

The Daya Bay experiment, which is run by a collaboration of more than 200 scientists from six regions and countries, is located close to the Daya Bay and Ling Ao nuclear power plants, 55 km north-east of Hong Kong. It measures neutrino oscillation using electron antineutrinos created by six powerful nuclear reactors. As the antineutrinos travel up to 2 km to the underground detectors, some transform into another type and so apparently disappear. The rate at which they transform is the basis for measuring the mixing angle, while the mass splitting is determined by studying how the rate of transformation depends on the antineutrino energy.
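The energy dependence that Daya Bay exploits follows from the standard short-baseline survival probability, P ≈ 1 – sin²2θ13 sin²(1.267 Δm²ee L/E). The sketch below is illustrative only: the baselines are round numbers, not the actual detector positions, and the slow solar-driven (θ12) term is neglected.

```python
import math

def survival_prob(L_m, E_MeV, sin2_2theta13=0.090, dm2_ee_eV2=2.54e-3):
    """Approximate reactor-antineutrino survival probability at short
    baselines, neglecting the theta12 term. L in metres, E in MeV."""
    phase = 1.267 * dm2_ee_eV2 * L_m / E_MeV
    return 1.0 - sin2_2theta13 * math.sin(phase) ** 2

# For a typical 4 MeV antineutrino, a detector a few hundred metres from
# the reactors sees almost no deficit, while one ~2 km away sees the
# oscillation near its maximum:
print(survival_prob(360, 4.0))    # close to 1
print(survival_prob(1650, 4.0))   # noticeably below 1
```

Comparing the deficit at different baselines and energies is what pins down both sin²2θ13 (the depth of the dip) and |Δm²ee| (its position in L/E).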

Daya Bay’s first results were announced in March 2012 and established an unexpectedly large value for the mixing angle θ13 – the last of the three long-sought neutrino mixing angles. The new results, which were announced at the XVth International Workshop on Neutrino Factories, Super Beams and Beta Beams (NuFact2013) in Beijing, give a more precise value: sin²2θ13 = 0.090±0.009. The improvement in precision results both from having more data to analyse and from the additional measurements of how the oscillation process varies with neutrino energy.

The KamLAND experiment in Japan and other solar-neutrino experiments have previously measured the mass splitting Δm²₂₁ by observing the disappearance of electron antineutrinos from reactors some 160 km from the detector and the disappearance of electron neutrinos from the Sun. The long-baseline experiments MINOS in the US and Super-Kamiokande and T2K in Japan have determined the effective mass splitting |Δm²μμ| using muon neutrinos. The Daya Bay collaboration has now measured the magnitude of the mass splitting |Δm²ee| to be (2.54±0.20) × 10–3 eV².

The result establishes that the electron neutrino mixes with all three mass states and is consistent with the splitting measured with muon neutrinos by MINOS. Precision measurements of the energy dependence should further the goal of establishing the hierarchy of the three neutrino mass states.

Enlightening the dark

Dual-phase TPC

Numerous astronomical observations indicate that about one quarter of the energy content of the universe is made up of a mysterious substance known as dark matter. The Planck collaboration recently measured this fraction to be 26.8%, slightly greater than the previous value from nine years of observations by the Wilkinson Microwave Anisotropy Probe (WMAP). Dark matter, which is five times more abundant than baryonic matter, provides compelling evidence for new physics and could be made of a new particle not present in the Standard Model. Theories beyond the Standard Model, such as supersymmetric models or theories with extra dimensions, naturally predict promising candidates: so-called weakly interacting massive particles (WIMPs), which are stable or have lifetimes longer than the age of the universe.

There are several complementary strategies to detect dark matter. The ATLAS and CMS experiments at the LHC search for such particles produced in proton–proton collisions. Indirect searches, for example by the AMS-02 or IceCube detectors, aim at detecting the products of dark-matter annihilation in cosmic rays.

Because dark-matter particles are expected to be abundant in the Galaxy, with an energy density of about 0.3 GeV/c²/cm³ at the location of the Sun, the most direct strategy is to look for their interactions in laboratory-based detectors. In general, it is possible to study spin-independent WIMP–nucleon interactions – which scale with the square of the target’s mass number, A – or spin-dependent couplings to unpaired nucleons in the target nucleus. Because of their nonrelativistic Maxwellian velocity distribution, with a typical speed of around 220 km/s, and because WIMPs interact significantly only with nuclei (and not with electrons), the expected signal is a featureless exponential nuclear-recoil spectrum. The recoil energies depend on the mass of the WIMP and on the target material, and are typically of the order of a few tens of kilo-electron-volts.
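The “few tens of kilo-electron-volts” scale can be recovered from simple kinematics. The sketch below uses the zeroth-order Lewin–Smith form of the spectrum, dR/dE ∝ exp(–E/(E0·r)); the parameter values are illustrative assumptions, not numbers taken from the article.

```python
def recoil_scale_keV(m_chi_GeV, A, v0_km_s=220.0):
    """Characteristic nuclear-recoil energy E0*r of the featureless
    exponential spectrum, for WIMP mass m_chi and target mass number A."""
    c = 299792.458                                       # speed of light, km/s
    m_N = 0.9315 * A                                     # nuclear mass, GeV/c^2
    r = 4.0 * m_chi_GeV * m_N / (m_chi_GeV + m_N) ** 2   # kinematic factor
    E0_keV = 0.5 * m_chi_GeV * (v0_km_s / c) ** 2 * 1e6  # GeV -> keV
    return E0_keV * r

# A 100 GeV/c^2 WIMP scattering on xenon (A ~ 131) gives a recoil scale
# of a few tens of keV, as quoted in the text:
print(recoil_scale_keV(100, 131))
```

The kinematic factor r is maximal when the WIMP and nuclear masses match, which is one reason heavy targets such as xenon are well suited to weak-scale WIMP masses.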

Because the expected interaction rates are small, a sensitive WIMP detector needs to feature a large target mass, an ultralow background and a low energy threshold. In addition, it should allow the distinction of the nuclear-recoil signal (from WIMPs and also from background neutrons) from the overabundant electronic-recoil background from γ and β radiation.

XENON100 detector

The most sensitive dark-matter detector to date is XENON100, which is operated by the XENON collaboration and situated at the Italian Laboratori Nazionali del Gran Sasso (LNGS), under about 1.3 km of rock that provides a natural shield from cosmic rays. The experiment searches for WIMP interactions in a target of 62 kg of liquid xenon. The noble gas xenon is cooled to around –90°C to bring it to the liquid state, with a density of around 3 g/cm³. Its high mass number, A, of around 130 makes it one of the heaviest of all target materials for dark-matter detection.

XENON100 is operated as a dual-phase time-projection chamber (TPC), as figure 1 illustrates. Particle interactions excite the liquid xenon, leading to prompt scintillation light, and also ionize the target atoms. A uniform electric field causes the ionization electrons to drift away from the interaction site to the top of the TPC. Here a strong electric field extracts them into the xenon-gas phase above the liquid. Subsequent scattering on the gas atoms leads to signal amplification and a secondary scintillation signal, which is directly proportional to the ionization extracted. Both the prompt and secondary scintillation light are detected by two arrays of low-radioactivity photomultipliers (PMTs), which are installed above and below the cylindrical target of around 30 cm height and 30 cm diameter (figure 2). The PMTs are immersed in the liquid and gaseous xenon to achieve the highest-possible light-detection efficiency and therefore the lowest threshold. The 3D position of the interaction vertex is obtained by combining the time difference between the prompt and the secondary scintillation signal with the hit pattern of the localized secondary signal on the array of 98 PMTs above the target. The number of secondary signals defines the event multiplicity.
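The 3D reconstruction described above can be sketched in a few lines. The drift velocity and PMT coordinates below are illustrative assumptions (not XENON100 calibration values), and real analyses fit the hit pattern with likelihood methods rather than a simple centroid:

```python
DRIFT_VELOCITY_MM_PER_US = 1.7   # assumed liquid-xenon drift velocity

def depth_mm(drift_time_us):
    """Interaction depth below the liquid surface, from the time
    difference between the prompt (S1) and secondary (S2) signals."""
    return DRIFT_VELOCITY_MM_PER_US * drift_time_us

def xy_centroid(pmt_xy_mm, pmt_signal_pe):
    """Crude (x, y) estimate: signal-weighted centroid of the hit
    pattern on the top PMT array."""
    total = sum(pmt_signal_pe)
    x = sum(p[0] * s for p, s in zip(pmt_xy_mm, pmt_signal_pe)) / total
    y = sum(p[1] * s for p, s in zip(pmt_xy_mm, pmt_signal_pe)) / total
    return x, y

# A ~100 us drift then corresponds to a vertex roughly 17 cm below the
# liquid surface of the ~30 cm-tall target:
print(depth_mm(100.0))
print(xy_centroid([(-10.0, 0.0), (10.0, 0.0), (0.0, 20.0)], [1.0, 1.0, 2.0]))
```

Combining the depth from the drift time with the (x, y) from the top-array hit pattern is what gives the event-by-event 3D vertex used for fiducialization.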

The detector was built from materials selected for their low intrinsic radioactivity. Thanks to its novel detector design – placing most radioactive components outside of a massive passive shield – and the self-shielding provided by the liquid xenon, XENON100 features the lowest published background of all dark-matter experiments. The self-shielding is exploited by selecting only events that interact with the inner part of the detector (“fiducialization”) and by rejecting all events that exhibit a coincident signal in the active veto, which is made of 99 kg of liquid xenon that surrounds the target. Because of their small cross-section, WIMPs will interact only once in the detector, so background can be reduced further by selecting single-scatter interactions with a charge-to-light ratio typical for the expected nuclear-recoil events.

In the summer of 2012, the XENON collaboration published results from a search for spin-independent WIMP–nucleon interactions based on 225 live days of data (XENON collaboration 2012). No indication of dark matter was found but the derived upper limits are the most stringent to date for WIMP masses above 7 GeV/c². The same data have now been interpreted in terms of spin-dependent interactions and the results published recently (XENON collaboration 2013). This latest analysis requires knowledge of the axial-vector coupling and the nuclear structure of the two xenon isotopes with unpaired nucleons, 129Xe and 131Xe. Improved calculations based on chiral effective-field-theory currents were employed here. Compared with older calculations, these yield superior agreement between calculated and measured nuclear energy spectra (Menendez et al. 2012).

The specific nuclear structure of the relevant xenon isotopes leads to different sensitivities for the two extreme cases that are usually considered. For the case where WIMPs are assumed to couple to protons only, the new XENON100 limit is competitive with other results (figure 3). Indirect dark-matter searches looking for signals from the annihilation of WIMPs trapped in the Sun (which mainly consists of protons) are particularly sensitive to this channel. For the neutron-only coupling, XENON100 sets a new best limit for most masses, improving the previous constraints by more than an order of magnitude (figure 3).

While XENON100 continues to take science data at LNGS, the development of a larger liquid-xenon detector is well under way. XENON1T will be about 35 times larger than XENON100, with a TPC of around 100 cm in height and diameter. The aim is to reach a dark-matter sensitivity two orders of magnitude better than the current best value. This will probe a significant part of the theoretically favoured WIMP parameter space but will require the radioactive background of the new instrument to be 100 times lower than that of XENON100. The greatly increased liquid xenon target mass of more than two tonnes helps to achieve this goal.

Illustration of the XENON1T detector

The largest background challenge comes from uniformly distributed traces of radioactive radon (mainly 222Rn) and krypton (85Kr, present in natural krypton at a fraction of about 10–11) dissolved in the xenon, because the background from these isotopes cannot be reduced by target fiducialization. To achieve the background goals for XENON1T, the contamination will be reduced below the level of one part in 10¹² – radon by careful material selection and surface treatment, and krypton by cryogenic distillation. Additionally, all of the construction materials for the detector are being carefully selected on the basis of their intrinsic radioactivity, using ultrasensitive germanium detectors; a few of the world’s most sensitive such detectors are owned and operated by institutions in the XENON collaboration.

The XENON1T detector will be placed inside a large water shield to protect it from environmental radioactivity (figure 4). The water will be equipped with PMTs to tag muons via emission of Cherenkov light, because muon-induced neutrons could mimic WIMP signals. The construction of the water tank is underway in Hall B of LNGS and will be finished by the end of 2013. Together with the XENON1T service building, it will be the first visible landmark of the experiment underground. The other XENON1T systems – from detector and cryogenics to massive facilities for the storage and purification of xenon – are currently being designed, built, commissioned and tested at the various collaborating institutions. In particular, the challenges associated with building a TPC of 100 cm drift length, which will be the longest liquid xenon-based TPC ever, are being addressed with dedicated R&D set-ups.

Once the main underground facilities are erected, the XENON1T low-background cryostat – to contain the TPC and more than three tonnes of xenon – will be installed inside the water shield. The infrastructure for the storage, purification and liquefaction of xenon has been designed to handle more than double the amount initially used in XENON1T. Its commissioning underground is expected to be completed by the summer of 2014. The timeline foresees commissioning of the full XENON1T experiment by the end of 2014 and first data by early 2015. After two years of data-taking, XENON1T will reach a sensitivity of 2 × 10–47 cm² for spin-independent WIMP–nucleon cross-sections at a WIMP mass of 100 GeV/c². This is a factor of 100 better than the current best WIMP result, from XENON100.

AMS-02 provides a precise measure of cosmic rays

More than 100 years have passed since the discovery of cosmic rays by Victor Hess in 1912 and there are still no signs of decreasing interest in the study of the properties of charged leptons, nuclei and photons from outer space. On the contrary, the search for a better understanding of the long-standing questions – the origin of ultra-high-energy cosmic rays, the composition as a function of energy, the existence of a maximum energy, the acceleration mechanisms, the propagation and confinement in the Galaxy, the extragalactic origin, and so on – is more pertinent than ever. In addition, ambitious new experimental initiatives are starting to produce results that could cast light on more recent challenging questions, such as the nature of dark matter, the apparent absence of antimatter in the explored universe and the search for new forms of matter.

The 33rd International Conference on Cosmic Rays (ICRC 2013) – The Astroparticle Physics Conference – took place in Rio de Janeiro on 2–9 July and provided a high-profile platform for the presentation of a wealth of results from solar and heliospheric physics, through cosmic-ray physics and gamma-ray astronomy to neutrino astronomy and dark-matter physics. A full session was devoted to the presentation of new results from the Alpha Magnetic Spectrometer, AMS-02. Sponsored by the US Department of Energy and supported financially by the relevant funding and space agencies in Europe and Asia, this experiment was deployed on the International Space Station (ISS) on 19 May 2011 (figure 1). The results, which were presented for the first time at a large international conference, are based on the data collected by AMS-02 during its first two years of operation on the ISS.

AMS experiment

AMS-02 is a large particle detector by space standards, built using the concepts and technologies developed for experiments at particle accelerators but adapted to the extremely hostile environment of space. Measuring 5 × 4 × 3 m³, it weighs 7.5 tonnes. Reliability, performance and redundancy are the key features for the safe and successful operation of this instrument in space.

The main scientific goal is to perform a high-precision, large-statistics and long-duration study of cosmic nuclei (from hydrogen to iron and beyond), elementary charged particles (protons, antiprotons, electrons and positrons) and γ rays. In particular, AMS-02 is designed to measure the energy- and time-dependent fluxes of cosmic nuclei to an unprecedented degree of precision, to understand better the propagation models, the confinement mechanisms of cosmic rays in the Galaxy and the strength of the interactions with interstellar media. A second high-priority research topic is an indirect search for dark-matter signals based on looking at the fluxes of particles such as electrons, positrons, protons, antiprotons and photons.

Another important item on the list of priorities – which will be addressed in future – is the search for cosmic antimatter nuclei. This variety of matter is apparently absent in the region of the universe currently explored but – according to the Big Bang theory – it should have been highly abundant in the early phases of the universe. Last but not least, AMS-02 will explore the possible existence of new phenomena or new forms of matter, such as strangelets, which this state-of-the-art instrument will be in a unique position to unravel.

The AMS-02 detector was designed, built and is now operated by a large international collaboration led by Nobel laureate Samuel C C Ting, involving researchers from institutions in America, Europe and Asia. The detector components were constructed and tested in research centres around the world, with large facilities being built or refurbished for this purpose in China, France, Germany, Italy, Spain, Switzerland and Taiwan. The final assembly took place at CERN, benefiting from the laboratory’s significant expertise and experience in the technologies of detector construction. The instrument was then tested extensively with cosmic rays and particle beams at CERN, in the Maxwell electromagnetic compatibility chamber and the large-space thermal simulator at ESA-ESTEC in Noordwijk, as well as in the large facilities at the NASA Kennedy Space Center in the US.

The construction of AMS-02 has stimulated the development of important and novel technologies in advanced instrumentation. These include the first operation in space of a large two-phase CO2 cooling system for the silicon tracker and the two-gas (Xe-CO2) system for the operation of the transition-radiation detector, as well as the overall thermal system. The latter must protect the experiment from the continual changes of temperature that the detector undergoes at every position in its orbit, which affect various parts of the detector subsystems in a manner that is not easy to reproduce. Radiation-tolerant fast electronics, a sophisticated trigger, redundant data-acquisition systems, the associated protocols for communication with the NASA on-board hardware and a high-rate downlink system for the real-time transmission of data from AMS-02 to the NASA ground facilities are a few further examples that illustrate the complexity and the kind of challenges that the project has had to meet.

Positron flux

The operation of the Payload Operation and Control Center (POCC) at CERN, 24 hours a day and 365 days a year, in permanent connection with the ISS and the NASA Johnson Space Center, has also been a major endeavour. Fast processing of data on reception at the Science Operation Center at CERN has been a formidable tour de force, resulting in the timely reconstruction of 36.5 × 10⁹ cosmic rays during the period 19 May 2011 – August 2013.

After almost 28 months of operation, AMS-02 – with its 300,000 electronics channels, 650 computers, 1100 thermal sensors and 400 thermostats – has worked flawlessly. To maintain performance and reliability, three space-flight simulators operate continuously at CERN, at the NASA Johnson Space Center and at the NASA Marshall Space Flight Center, where they test and certify the numerous upgrades of the software packages for the on-board computers and the communication interfaces and protocols.

First results

At ICRC 2013, the AMS collaboration presented data on two important areas of cosmic-ray physics. One addresses the fluxes, ratios and anisotropies of leptons, while the other concerns charged cosmic nuclei (protons, helium, boron, carbon). The following presents a brief summary of the results and of some of the most critical experimental challenges.

Proton flux

In the case of electrons and positrons, efficient instrumental handles for the suppression of the dominant backgrounds are: the minimal amount of material in the transition-radiation and time-of-flight detectors; the magnet location, separating the transition-radiation detector and the electromagnetic calorimeter; and the capability to match the value of the particle momentum reconstructed in the nine tracker layers of the silicon spectrometer with the value of the energy of the particle showering in the electromagnetic calorimeter.

The performance of the transition-radiation detector yields a high proton rejection (larger than 10³) at 90% positron efficiency in the rigidity range of interest. The calorimeter, with its 17 radiation lengths, provides a rejection factor better than 10³ for protons with momenta up to 10³ GeV/c. The combination of the two leads to an overall proton-rejection factor of 10⁶ for most of the energy range under study.
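The arithmetic behind these numbers is simply the multiplication of the two independent rejection factors. The following back-of-the-envelope sketch (the proton-to-positron flux ratio is an assumed round number for illustration, not an AMS-02 value) shows why an overall rejection of 10⁶ keeps the residual proton contamination of the positron sample small:

```python
# Two independent detector handles, each rejecting protons by a factor ~10^3
# (numbers taken from the text), combine multiplicatively.
trd_rejection = 1e3   # transition-radiation detector, at 90% positron efficiency
ecal_rejection = 1e3  # electromagnetic calorimeter, 17 radiation lengths

overall_rejection = trd_rejection * ecal_rejection  # ~10^6

# Assumed proton-to-positron flux ratio, for illustration only:
proton_to_positron_flux = 1e4

# Residual fraction of protons surviving in the selected positron sample:
contamination = proton_to_positron_flux / overall_rejection  # ~1%
```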

A precision measurement of the positron fraction in primary cosmic rays, based on the sample of 6.8 million positron and electron events in the energy range of 0.5–350 GeV – collected during the initial 18 months of operation on the ISS – was recently published and presented at the conference (Aguilar et al. 2013 and Kounine ICRC 2013). The positron-fraction spectrum (figure 2) does not exhibit fine structure and the highly precise determination shows that the positron fraction steadily increases from 10–250 GeV, while from 20–250 GeV the slope decreases by an order of magnitude. The AMS-02 measurements have extended the energy ranges covered by recent experiments to higher values and reveal a different behaviour in the high-energy region of the spectrum.

AMS-02 has also extended the measurements of the positron spectrum to 350 GeV – that is, above the energy range of determinations by other experiments. The individual electron and positron spectra, with the E³ multiplication factor, and the combined spectrum were presented at the conference (Schael, Bertucci ICRC 2013). Figure 3 shows the electron spectrum, which appears to follow a smooth, slowly falling curve up to 500 GeV. The positron spectrum, by contrast, rises up to 10 GeV, flattens between 10 and 30 GeV, then rises again above 30 GeV (figure 4). For the time being, it is not obvious that the models or simple parametric estimations that are currently used to describe the positron fraction can also describe the behaviour of the individual electron and positron spectra.

Helium spectrum

Using a larger data sample, comprising of the order of 9 million electrons and positrons, the collaboration has performed a preliminary measurement of the combined flux of electrons and positrons in the energy range 0.5–700 GeV (Bertucci ICRC 2013). The data do not show significant structures, although a change in the spectral index with increasing lepton energy is clearly observed. However, the positron flux increases with energy, and a promising approach to identifying the physics origin of this behaviour lies in determining the size of a possible anisotropy, arising in primary sources, in the arrival directions of positrons and electrons measured in galactic co-ordinates. AMS-02 has obtained a limit on the dipole anisotropy parameter d < 0.030 at the 95% confidence level for energies above 16 GeV (Casaus ICRC 2013).

Turning to cosmic nuclei, the first AMS-02 measurements of the proton and helium fluxes were presented at the conference (Haino, Choutko ICRC 2013). The rigidity ranges were 1 GV – 1.8 TV for protons and 2 GV – 3.2 TV for helium (figures 5 and 6). In both cases, the experiment observed gradual changes of the fluxes owing to solar modulation, as well as drastic changes after large solar flares. Otherwise, the spectra are fairly smooth and do not exhibit breaks or fine structures of the kind reported by other recent experiments.

Boron-to-carbon ratio

The ratio of the boron to carbon fluxes is particularly interesting because it carries important information about the production and propagation of cosmic rays in the Galaxy. Boron nuclei are produced mainly by spallation of heavier primary elements on the interstellar medium, whereas primary cosmic rays – such as carbon and oxygen – are predominantly produced at the source. Precision measurements of the boron-to-carbon ratio therefore provide important input for determining the characteristics of the cosmic-ray sources by deconvoluting the propagation effects from the measured data. The capability of AMS-02 to make multiple independent determinations of the electric charges of the cosmic rays allows a separation of carbon from boron with a contamination of less than 10⁻⁴. Figure 7 presents a preliminary measurement of the boron-to-carbon ratio in the kinetic-energy interval 0.5–670 GeV/n (Oliva ICRC 2013).

For the future

After nearly 28 months of successful operation, the results presented at ICRC 2013 already give a taste of the scientific potential of the AMS-02 experiment. In the near future, the measurements sketched in this article will extend the energy or rigidity coverage and the study of systematic uncertainties will be finalized. The experiment will measure the fluxes of more cosmic nuclei with unprecedented precision to constrain further the size and energy dependence of the underlying background processes.

By the end of the decade AMS-02 will have collected more than 150 × 10⁹ cosmic-ray events

High on the priority list for AMS-02 is the measurement of the antiproton flux and the antiproton/proton ratio – a relevant and most sensitive quantity for disentangling, among the possible sources, those that induce the observed increase of the positron flux with energy. With the growing data sample and a deeper assessment of the systematic uncertainties, the searches for cosmic antinuclei will become extremely important, as will the search for unexpected new signatures.

By the end of the decade AMS-02 will have collected more than 150 × 10⁹ cosmic-ray events. In view of what has been achieved so far, it is reasonable to be fairly confident that this massive amount of new and precise data will contribute significantly to a better understanding of the ever exciting and lively field of cosmic rays.

Quarks, gluons and sea in Marseilles

Participants

The regular “DIS” workshops on Deep-Inelastic Scattering and related subjects usually bring together a unique mix of international communities and cover a spectrum of topics ranging across proton structure, strong interactions and physics at the energy frontier. DIS2013 – the 21st workshop – which took place in the Palais des Congrès in Marseilles earlier this year was no exception. Appropriately, this large scientific event formed part of a rich cultural programme in the city that was associated with its status as European Capital of Culture Marseilles-Provence 2013.

A significant part of the programme was devoted to recent and exciting experimental results, which together with theoretical advances and the outlook for the future created a vibrant scientific atmosphere. The workshop began with a full day of plenary reports on hot topics, followed by two and a half days of parallel sessions that were organized around seven themes: structure functions and parton densities; small-x, diffraction and vector mesons; electroweak physics and beyond the Standard Model; QCD and hadronic final states; heavy flavours; spin physics; future experiments.

Higgs and more

The meeting provided the opportunity to discuss in depth the various connections between strong interactions, proton structure and recent experimental results at the LHC. In particular, the discovery of a Higgs boson and the subsequent studies of its properties attracted a great deal of interest, including from the perspective of the connections with proton structure. A tremendous effort was made in the past year to provide an improved theory, study the constraints from the wealth of new experimental data and adopt a more robust methodology in analyses that determine the proton’s parton distribution functions (PDFs). The PDFs are an essential ingredient for most LHC analyses, from characterization of the Higgs boson to self-consistency tests of the Standard Model. The first “safari” into the new territory at the LHC and the impressive final results with the full data set from Fermilab’s Tevatron have revealed no new phenomena so far. However, it might well be that the search for new physics – which will be re-launched at higher energies during the next LHC run – will be affected by the precision with which the structure of the proton is known.

The goal of the spin community is to produce a 3D picture of the proton with high precision

The most recent experimental results from the continuing analysis of data from HERA – the electron–proton collider that ran at DESY during 1992–2007 – were presented. In particular, both the H1 and ZEUS collaborations have now published measurements at high photon virtualities (Q²). While the refined data from HERA form its immensely valuable legacy, the transfer of the baton to the LHC has already begun. A large number of recent results – in particular from the LHC – provide further constraints on the PDFs, such as in the case of final states with weak bosons or top quarks, which are already in the regime of precision measurements with about 1% accuracy. Stimulated by an active Standard Model community, with many groups working on the determination of PDFs (such as ABM, MSTW and CTEQ), and by the release of common analysis tools such as HERAFitter, the new measurements from the LHC are rapidly being interpreted in terms of valuable PDF constraints, as figure 1 shows. More exclusive final states have the potential to complement inclusive measurements: for instance, measurements of W production in association with a charm quark could shed new light on the strangeness content of the proton. A huge step in the precision of PDF determination – which might be essential for studying new physics – complemented by a standalone programme at the energy frontier, would be possible at the proposed Large Hadron Electron Collider (LHeC), which could also provide a new opportunity to study Higgs-boson couplings.

Gluon density in the proton

The understanding of proton structure would not be complete without understanding its spin. Polarized experiments – including fixed-target DIS experiments at Jefferson Lab and CERN, as well as the polarized proton–proton programme at Brookhaven’s Relativistic Heavy-Ion Collider (RHIC) – continue to provide new data and to open new fields. The goal is to understand the parton contributions to the proton’s spin, long considered a “puzzle” because of the unexpected way that it is shared among the quarks – which contribute only about a quarter – the gluons and the orbital angular momentum. Recent, more precise measurements of W-boson production in polarized proton–proton collisions at RHIC have the potential to constrain further the valence-quark contributions, while semi-inclusive DIS measurements at fixed-target experiments (for instance, using final states with charm mesons) continue to reduce the uncertainty on the gluon contribution. The goal of the spin community – manifest in the project for a polarized Electron-Ion Collider (EIC) – is to produce a 3D picture of the proton with high precision, using a large number of observables across an extended phase space.

Impressive precision

The current scientific landscape includes many experiments that are based on hadronic interactions, with the LHC taking these studies to the highest energies. These are reaching impressive, increasing precision across a large phase space, not only in final states with jets but also in more exclusive configurations including photons, weak bosons or tagged heavy flavours. The measurements performed in diffraction – by now a classical laboratory for QCD tests – are also available from the LHC in inclusive and semi-inclusive final states and reinforce the global understanding of the strong interactions. An interesting case concerns double-parton interactions, where the final state originates from not one but two parton–parton collisions – a contribution that in some cases can pollute final-state configurations (including boson or Higgs production). Although the measurements are not yet precise enough to identify kinematical dependencies or parton–parton correlations, they are beginning to unveil this contribution, which may prove in future to be related to profound aspects of the proton structure, such as the generalized parton distributions and the proton spin.

A global picture and complete understanding of the strong force can only emerge by using all of the available configurations and energies. In particular, the measurements of the hadronic final states performed in electron–proton collisions at HERA and the refined measurements at the Tevatron provide an essential testing ground for the increasingly precise calculations. Figure 2 illustrates this statement, presenting measurements of the strong coupling from collider experiments – including the most recent measurements from the LHC.

The high-energy heavy-ion collisions at both RHIC and the LHC have been a constant source of new results and paradigms during the past few years and this proved equally true for the DIS2013 conference. Probes such as mesons or jets “disappear” when high densities of the collision fireball are reached. The set of such probes has been consolidated at the LHC, where the experimental capabilities and large phase space allow further measurements involving strangeness, charm or inclusive particle production. In addition, the recently achieved proton–lead collisions provide new testing grounds for the collective behaviour of the quarks and gluons at high densities.

Measurements of the strong coupling constant

A total of 300 talks were given covering the seven themes of the workshop, distributed across two and a half days of parallel sessions, a few of which combined different themes. As tradition requires at DIS workshops, the presentations were followed by intense debates on classic and new issues, including a satellite workshop on the HERAFitter project. On the last day, the working group convenors summarized the highlights of the rich scientific programme of the parallel sessions.

The conference ended with a session on future experiments, in which – together with upgrades of the LHC experiments and other interesting projects related to new capabilities for QCD-related studies (AFTER, CHIC, COMPASS, NA62, nuSTORM, etc.) – the two projects for new colliders, the EIC and the LHeC, were discussed. Rolf Heuer, CERN’s director-general, presented the recently updated European Strategy for Particle Physics. The programme at the energy frontier with the LHC will be followed for at least 20 years and studies for further projects are ongoing. Chris Quigg of Fermilab then gave an inspiring outlook talk, with hints of a possible QCD-like walk on the new-physics frontier. In the evening, Heuer gave a talk on recent discoveries at the LHC for the general public, to an audience of more than 200 people.

In addition to the workshop sessions, participants enjoyed a dinner in the Pharo castle – with a splendid view of the old and new harbours of Marseilles – where they found out why the French national anthem is called La Marseillaise. There was also half a day of free time for most of the participants – except maybe for convenors who had to prepare their summary reports – with two excursions organized at Cassis and in the historic centre of Marseilles.

In summary, the DIS2013 workshop once again allowed an insightful journey around the fundamental links between QCD, proton structure and physics at the energy frontier – an interface that will continue to grow and create new research ideas and projects in the near future. The next – 22nd – DIS workshop will be held in Warsaw in April 2014.

Conference time in Stockholm

Stockholm

When the Swedish warship Vasa capsized in Stockholm harbour on her maiden voyage in 1628, many hearts must have also sunk metaphorically, as they did at CERN in September 2008 when the LHC’s start-up came to an abrupt end. Now, the raised and preserved Vasa is the pride of Stockholm and the LHC – following a successful restart in 2009 – is leading research in particle physics at the high-energy frontier. This year the two icons crossed paths when the International Europhysics Conference on High-Energy Physics, EPS-HEP 2013, took place in Stockholm on 18–24 July, hosted by the KTH (Royal Institute of Technology) and Stockholm University. Latest results from the LHC experiments featured in many of the parallel, plenary and poster sessions – and the 750 or so participants had the opportunity to see the Vasa for themselves at the conference dinner. There was, of course, much more and this report can only touch on some of the highlights.

Coming a year after the first announcement of the discovery of a “Higgs-like” boson on 4 July 2012, the conference was the perfect occasion for a birthday celebration for the new particle. Not only has its identity been more firmly established in the intervening time – it almost certainly is a Higgs boson – but many of its attributes have been measured by the ATLAS and CMS experiments at the LHC, as well as by the CDF and DØ collaborations using data collected at Fermilab’s Tevatron. At 125.5 GeV/c², its mass is known to within 0.5% precision – better than for any quark – and several tests by ATLAS and CMS show that its spin-parity, Jᴾ, is compatible with the 0⁺ expected for a Standard Model Higgs boson. These results exclude other models at greater than 95% confidence level (CL), while a new result from DØ rejects a graviton-like 2⁺ at >99.2% CL.

The mass of the top quark is in fact so large – 173 GeV/c² – that it decays before forming hadrons

The new boson’s couplings provide a crucial test of whether it is the particle responsible for electroweak-symmetry breaking in the Standard Model. A useful parameterization for this test is the ratio of the observed signal strength to the Standard Model prediction, μ = (σ × BR)/(σ × BR)SM, where σ is the cross-section and BR the branching fraction. The results for the five major decay channels measured so far (γγ, WW*, ZZ*, bb̄ and ττ) are consistent with the expectations for a Standard Model Higgs boson, i.e. μ = 1, to 15% accuracy. Although it is too light to decay to the heaviest quark – top, t – and its antiquark, the new boson can in principle be produced together with a tt̄ pair, so yielding a sixth coupling. While this is a challenging channel, new results from CMS and ATLAS are starting to approach the level of sensitivity for the Standard Model Higgs boson, which bodes well for its future use.
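As a minimal sketch of this parameterization (the rates below are placeholder numbers chosen for illustration, not measured ATLAS or CMS values), the signal-strength ratio for a single channel is simply:

```python
# Illustrative only: the signal-strength ratio mu for one decay channel.
def signal_strength(observed_rate, sm_rate):
    """mu = (sigma x BR)_observed / (sigma x BR)_SM; mu = 1 means SM-like."""
    return observed_rate / sm_rate

# Assumed cross-section x branching-fraction values, arbitrary units:
mu_gamma_gamma = signal_strength(observed_rate=52.0, sm_rate=50.0)
print(mu_gamma_gamma)  # 1.04 -> consistent with mu = 1 at the ~15% level quoted
```

In practice each channel's μ carries its own statistical and systematic uncertainties, and the experiments combine the channels in a global fit rather than channel by channel.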

The mass of the top quark is in fact so large – 173 GeV/c² – that it decays before forming hadrons, making it possible to study the “bare” quark. At the conference, the CMS collaboration announced the first observation, at 6.0σ, of the associated production of a top quark and a W boson, in line with the Standard Model’s prediction. Both ATLAS and CMS had previously found evidence for this process but not at this significance. The DØ collaboration presented its latest results on the lepton-based forward–backward asymmetry in tt̄ production, which had previously indicated some deviation from theory. The new measurement, based on the full data set of 9.7 fb⁻¹ of proton–antiproton collisions at the Tevatron, gives an asymmetry of (4.7 ± 2.3 (stat.) +1.1/–1.4 (syst.))%, which is consistent with predictions from the Standard Model at next-to-leading order.

Venue for the conference dinner

The study of B hadrons, which contain the next heaviest quark, b, is one of the aspects of flavour physics that could yield hints of new physics. One of the highlights of the conference was the announcement of the observation of the rare decay mode B0s → μμ by both the LHCb and CMS collaborations, at 4.0 and 4.3σ, respectively. While there had been hopes that this decay channel might open a window on new physics, the long-awaited results align with the predictions of the Standard Model. The BaBar and Belle collaborations also reported on their precise measurements of the decay B → D(*)τντ at SLAC and KEK, respectively, which together disagree with the Standard Model at the 4.3σ level. The results rule out one model that adds a second Higgs doublet to the Standard Model (2HDM type II) but are consistent with a different variant, 2HDM type III – a reminder that the highest energies are not the only place where new physics could emerge.

Precision, precision

Precise measurements require precise predictions for comparison and here theoretical physics has seen a revolution in calculating next-to-leading order (NLO) effects, involving a single loop in the related Feynman diagrams. Rapid progress during the past few years has meant that the experimentalists’ wish-list for QCD calculations at NLO relevant to the LHC is now fulfilled, including such high-multiplicity final states as W + 4 jets and even W + 5 jets. Techniques for calculating loops automatically should in future provide a “do-it-yourself” approach for experimentalists. The new frontier for the theorists, meanwhile, is at next-to-NLO (NNLO), where some measurements – such as pp → tt̄ – are already at an accuracy of a few per cent and some processes – such as pp → γγ – could have large corrections, up to 40–50%. So a new wish-list is forming, which will keep theorists busy while the automatic code takes over at NLO.

With a measurement of the mass for the Higgs boson, small corrections to the theoretical predictions for many measurable quantities – such as the ratio between the masses of the W and the top quark – can now be calculated more precisely. The goal is to see if the Standard Model gives a consistent and coherent picture when everything is put together. The GFitter collaboration of theorists and experimentalists presented its latest global Standard Model fit to electroweak measurements, which includes the legacy both from the experiments at CERN’s Large Electron–Positron Collider and from the SLAC Large Detector, together with the most recent theoretical calculations. The results for 21 parameters show little tension between experiment and the Standard Model, with no discrepancy exceeding 2.5σ, the largest being in the forward–backward asymmetry for bottom quarks.

There is more to research at the LHC than the deep and persistent probing of the Standard Model. The ALICE, LHCb, CMS and ATLAS collaborations presented new results from high-energy lead–lead and proton–lead collisions at the LHC. The most intriguing results come from the analysis of proton–lead collisions and reveal features that previously were seen only in lead–lead collisions, where the hot dense matter that was created appears to behave like a perfect liquid. The new results could indicate that similar effects occur in proton–lead collisions, even though far fewer protons and neutrons are involved. Other results from ALICE included the observation of higher yields of J/ψ particles in heavy-ion collisions at the LHC than at Brookhaven’s Relativistic Heavy-Ion Collider, even though the densities are much higher at the LHC. The measurements in proton–lead collisions should cast light on this finding by allowing the initial-state effects of cold nuclear matter to be disentangled from those of the hot, dense medium.

Supersymmetry and dark matter

The energy frontier of the LHC has long promised the prospect of physics beyond the Standard Model, in particular through evidence for a new symmetry – supersymmetry. The ATLAS and CMS collaborations presented their extensive searches for supersymmetric particles in which they have explored a vast range of masses and other parameters but found nothing. However, assumptions involved in the work so far mean that there are regions of parameter space that remain unexplored. So while supersymmetry may be “under siege”, its survival is certainly still possible. At the same time, creative searches for evidence of extra dimensions and many kinds of “exotics” – such as excited quarks and leptons – have likewise produced no signs of anything new.

Aula Magna lecture theatre

However, evidence that there must almost certainly be some kind of new particle comes from the existence of dark, non-hadronic matter in the universe. Latest results from the Planck mission show that this should make up some 26.8% of the universe – about 4% more than previously thought. This drives the search for the weakly interacting massive particles (WIMPs) that could constitute dark matter, which is becoming a worldwide effort. Indeed, although the Higgs boson may have been top of the bill for hadron-collider physics, more generally, the number of papers with dark matter in the title is growing faster than those on the Higgs boson.

While experiments at the LHC look for the production of new kinds of particles with the correct properties to make dark matter, “direct” searches seek evidence of interactions of dark-matter particles in the local galaxy as they pass through highly sensitive detectors on Earth. Such experiments are showing an impressive evolution with time, increasing in sensitivity by about a factor of 10 every two years and now reaching cross-sections down to 10⁻⁸ pb. Among the many results presented, an analysis of 140.2 kg-days of data in the silicon detectors of the CDMS II experiment revealed three WIMP-candidate events with an expected background of 0.7. A likelihood analysis gives a 0.19% probability for the background-only hypothesis.
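For orientation, a naive Poisson counting estimate for three observed events on an expected background of 0.7 can be sketched as below. Note that the 0.19% quoted by CDMS II comes from a full likelihood analysis that also exploits the measured event properties, so it is considerably more stringent than this simple counting probability:

```python
# Counting-only estimate: probability of seeing n or more events
# from a Poisson-distributed background with the given mean.
from math import exp, factorial

def poisson_p_at_least(n, mean):
    """P(N >= n) for a Poisson distribution with the given mean."""
    return 1.0 - sum(mean**k * exp(-mean) / factorial(k) for k in range(n))

p = poisson_p_at_least(3, 0.7)
print(round(p, 3))  # 0.034, i.e. about 3.4% from counting alone
```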

Neutrinos are the one type of known particle that provides a view outside the Standard Model

“Indirect” searches, by contrast, involve in particular the search for signals from dark-matter annihilation in the cosmos. In 2012, an analysis of publicly available data from 43 months of the Fermi Large Area Telescope (LAT) indicated a puzzling signal at 130 GeV, with the interesting possibility that these γ rays could originate from the annihilation of dark-matter particles. A new analysis by the Fermi LAT team of four years’ worth of data gives preliminary indications of an effect with a local significance of 3.35σ, but the global significance is less than 2σ. The HESS II experiment is currently accumulating data and might soon be able to cross-check these results.

With their small but nonzero mass and consequent oscillations from one flavour to another, neutrinos are the one type of known particle that provides a view outside the Standard Model. At the conference, the T2K collaboration announced the first definitive observation, at 7.5σ, of the transition νμ → νe in the high-energy νμ beam that travels 295 km from the Japan Proton Accelerator Research Complex to the Super-Kamiokande detector. Meanwhile, the Double Chooz experiment, which studies νe produced in a nuclear reactor, has refined its measurement of θ₁₃, one of the parameters characterizing neutrino oscillations, by using two independent methods that allow much better control of the backgrounds. The GERDA collaboration uses yet another means to investigate whether neutrinos are their own antiparticles, by searching for the neutrinoless double-beta decay of the isotope ⁷⁶Ge in a detector in the INFN Gran Sasso National Laboratory. The experiment has completed its first phase and finds no sign of this process but now provides the world’s best lower limit on the half-life, at 2.1 × 10²³ years.

On the other side of the world, deep in the ice beneath the South Pole, the IceCube collaboration has recently observed oscillations of neutrinos produced in the atmosphere. More exciting, arguably, is the detection of 28 extremely energetic neutrinos – including two with energies above 1 PeV – but the evidence is not yet sufficient to claim observation of neutrinos of extraterrestrial origin.

Towards the future

In addition to the sessions on the latest results, others looked to the continuing health of the field with presentations of studies on novel ideas for future particle accelerators and detection techniques. These topics also featured in the special session for the European Committee for Future Accelerators, which looked at future developments in the context of the update of the European Strategy for Particle Physics. A range of experiments at particle accelerators currently takes place on two frontiers – high energy and high intensity. Progress in probing physics that lies at the limit of these experiments will come both from upgrades of existing machines and at future facilities. These will rely on new ideas being investigated in current accelerator R&D and will also require novel particle detectors that can exploit the higher energies and intensities.

Paris Sphicas and Peter Higgs

For example, two proposals for new neutrino facilities would allow deeper studies of neutrinos – including the possibility of CP violation, which could cast light on the dominance of matter over antimatter in the universe. The Long-Baseline Neutrino Experiment (LBNE) would create a beam of high-energy νμ at Fermilab and detect the appearance of νe with a massive detector that is located 1300 km away at the Sanford Underground Research Facility. A test set-up, LBNE10, has received funding approval. A complementary approach providing low-energy neutrinos is proposed for the European Spallation Source, which is currently under construction in Lund. This will be a powerful source of neutrons that could also be used to generate the world’s most intense neutrino beam.

The LHC was first discussed in the 1980s, more than 25 years before the machine produced its first collisions. Looking to the long-term future, other accelerators are now on the drawing board. One possible option is the International Linear Collider, currently being evaluated for construction in Japan. Another option is to create a large circular electron–positron collider, 80–100 km in circumference, to produce Higgs bosons for precision studies.

The main physics highlights of the conference were reflected in the 2013 EPS-HEP prizes, awarded in the traditional manner at the start of the plenary sessions. The EPS-HEP prize honoured both ATLAS and CMS – for the discovery of the new boson – and three of their pioneering leaders (Michel Della Negra, Peter Jenni and Tejinder Virdee). François Englert and Peter Higgs were there to present this major prize and took part later in a press conference together with the prize winners. Following the ceremony, Higgs gave a talk, “Ancestry of a New Boson”, in which he recounted what led to his paper of 1964 and also cast light on why his name became attached to the now-famous particle. Other prizes acknowledged the measurement of the all-flavour neutrino flux from the Sun, as well as the observation of the rare decay B0s → μμ, work in 4D field theories and outstanding contributions to outreach. In a later session, a prize sponsored by Elsevier was awarded for the best four posters out of the 130 that were presented by young researchers in the dedicated poster sessions.

To close the conference, Nobel Laureate Gerard ‘t Hooft presented his outlook for the field. This followed the conference summary by Sergio Bertolucci, CERN’s director for research and computing, in which he also thanked the organizers for the “beautiful venue, the fantastic weather and the perfect organization” and acknowledged the excellent presentations from the younger members of the community. The baton now passes to the organizing committees of the next EPS-HEP conference, which will take place in Vienna on 22–29 July 2015.

• This article has touched on only some of the physics highlights of the conference. For all of the talks, see http://eps-hep2013.eu/.

Strangely beautiful dimuons

Display of a Bs → μμ candidate event

Since its birth, the Standard Model of particle physics has proved to be remarkably successful at describing experimental measurements. Through the prediction and discovery of the W and Z bosons, as well as the gluon, it continues to reign. The recent discovery of a Higgs boson with a mass of 126 GeV by the ATLAS and CMS experiments indicates that the last piece of this jigsaw puzzle has been put into place. Yet, despite its incredible accuracy, the Standard Model must be incomplete: it offers no explanation for the cosmological evidence of dark matter, nor does it account for the dominance of matter over antimatter in the universe. The quest for what might lie beyond the Standard Model forms the core of the LHC physics programme, with ATLAS and CMS systematically searching for the direct production of a plethora of new particles that have been predicted by various proposed extensions to the model.

Complementary methods

As a consequence of its excellent performance – including collisions at much higher energies than previously achieved and record integrated luminosities – the LHC also provides complementary and elegant approaches to finding evidence of physics beyond the Standard Model, namely precision measurements and studies of rare decays. Through Heisenberg’s uncertainty principle, quantum loops can appear in the diagrams that describe Standard Model decays, allowing the decays to be influenced by particles that are absent from both the initial and final states. This experimentally well established concept opens a window on the effects of undiscovered particles or of other new physics in well known Standard Model processes. Because these effects are predicted to be small, the proposed new-physics extensions remain consistent with existing observations. Now, the high luminosity of the LHC and the unprecedented precision of the experiments allow these putative effects to be probed at levels never reached in previous measurements. Indeed, this is the prime field of study of the LHCb experiment, which is dedicated to the precision measurement of decays involving the heavy quarks, beauty (b) and charm (c). The general-purpose LHC experiments can also compete in these studies, especially where the final states involve muons.

Behind the seemingly simple decay topology hides a tricky experimental search aimed at finding a few signal events in an overwhelming background

A rare confluence of factors makes the decay of beauty mesons into dimuon (μ+μ–) final states an ideal place to search for this sort of evidence for physics beyond the Standard Model. The decays of B0 (a beauty antiquark, b̄, and a down quark, d) and Bs (a b̄ and a strange quark, s) to μ+μ– are suppressed in the Standard Model yet several proposed extensions predict a significant enhancement (or an even stronger suppression) of their branching fractions. A measurement of the branching fraction for either of these decays that is inconsistent with the Standard Model’s prediction would be a clear sign of new physics – a realization that sparked off a long history of searches. For the past 30 years, a dozen experiments at nearly as many particle colliders have looked for these elusive decays and established limits that have improved by five orders of magnitude as the sensitivities approach the values predicted by the Standard Model (figure 2). Last November, LHCb found the first clear evidence for the decay Bs → μμ, at the 3.5σ level. Now both the CMS and LHCb collaborations have updated their results for these decays.

Behind the seemingly simple decay topology hides a tricky experimental search aimed at finding a few signal events in an overwhelming background: only three out of every thousand million Bs mesons are expected to decay to μμ, with the rate being even lower for the B0. The challenge is therefore to collect a huge data sample while efficiently retaining the signal and filtering out the background.

Several sources contribute to the large background. B hadrons decay semi-leptonically to final states with one genuine muon, a neutrino and additional charged tracks that could be misidentified as muons, therefore mimicking the signal’s topology. Because the emitted neutrino escapes with some energy, these decays create a dimuon peak that is shifted to a lower mass than that of the parent particle. The decays Λb → pμν form a dangerous background of this kind because the Λb is heavier than the B mesons, so these decays can contribute to the signal region. Two-track hadronic decays of B0 or Bs mesons also add to the background if both tracks are mistaken for muons. This “peaking background” – fortunately rare – is tricky because it exhibits a shape that is similar to that which is expected for the signal events. The third major background contribution arises from events with two genuine muons produced by unrelated sources. This “combinatorial” background leads to a continuous dimuon invariant-mass distribution, overlapping with the B0 and Bs mass windows, which is reduced by various means as discussed below.

The first hurdle to cross in finding the rare signal events is to identify potential candidates during the bursts of proton–proton collisions in the detectors. Given the peak luminosities reached in 2012 (up to 8 × 1033 cm–2 s–1), the challenge for CMS was to select by fast trigger the most interesting 400 events a second for recording on permanent storage and prompt reconstruction, with around 10 per second reserved for the B → μμ searches. With its smaller event size, LHCb could afford a higher output rate from its trigger, recording several kilohertz with a significant fraction dedicated to dimuon signatures.

Results of the Bs → μμ measurements (figure 3)

The events selected by the trigger are then filtered according to the properties of the two reconstructed muons to reject as much background as possible while retaining as many signal events as possible. In particular, hadrons misidentified as muons are suppressed strongly through stringent selection criteria applied on the number of hits recorded in the tracking and muon systems, on the quality of the track fit and on the kinematics of the muons. In LHCb, information from the ring-imaging Cherenkov detectors further suppresses misidentification rates. Additional requirements ensure that the two oppositely charged muons have a common origin that is consistent with being the decay point of a (long-lived) B meson. The events are also required to have candidate tracks that are well isolated from other tracks in the detector, which are likely to have originated from unrelated particles or other proton–proton collisions (pile-up). This selection is made possible by the precise measurements of the momentum and impact parameter provided by the tracking detectors in both experiments. The good dimuon-mass resolution (0.6% at mid-rapidity for CMS and 0.4% for LHCb) limits the amount of combinatorial background that remains under the signal peaks. Figure 1 shows event displays from the two experiments, each including a displaced dimuon compatible with being a B → μμ decay.

The final selection of events in both experiments is made with a multivariate “boosted decision tree” (BDT) algorithm, which discriminates signal events from background by considering many variables. Instead of applying selection criteria independently on the measured value of each variable, the BDT combines the full information, accounting for all of the correlations to maximize the separation of signal from background. CMS applies a loose selection on the BDT discriminant to ensure a powerful background rejection at the expense of a small loss in signal efficiency. Both experiments categorize events in bins of the BDT discriminant. LHCb has a higher overall efficiency, which together with the larger B cross-section in the forward region compensates for the lower integrated luminosity, so the final sensitivity is similar for both experiments.
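The idea behind the BDT selection can be illustrated with a toy sketch – not the experiments’ actual analysis code, and with entirely hypothetical input variables and distributions – using scikit-learn’s gradient-boosted trees to combine several discriminating quantities into a single discriminant:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(42)
n = 5000

# Toy stand-ins for typical discriminating variables (hypothetical):
# vertex-fit quality, muon isolation and pointing angle.
def make_sample(is_signal, n):
    if is_signal:
        vtx_chi2 = rng.exponential(1.0, n)    # good vertices: small chi2
        isolation = rng.beta(5, 2, n)         # signal muons: well isolated
        pointing = rng.exponential(0.01, n)   # small pointing angle
    else:
        vtx_chi2 = rng.exponential(3.0, n)
        isolation = rng.beta(2, 2, n)
        pointing = rng.exponential(0.05, n)
    return np.column_stack([vtx_chi2, isolation, pointing])

X = np.vstack([make_sample(True, n), make_sample(False, n)])
y = np.concatenate([np.ones(n), np.zeros(n)])

bdt = GradientBoostingClassifier(n_estimators=100, max_depth=3)
bdt.fit(X, y)

# The BDT output is one discriminant combining all variables and their
# correlations; events are then categorized in bins of this discriminant.
scores = bdt.predict_proba(X)[:, 1]
print("mean score, signal:     %.2f" % scores[y == 1].mean())
print("mean score, background: %.2f" % scores[y == 0].mean())
```

The point of the combined discriminant, as described above, is that correlated variables are exploited jointly rather than cut on one at a time.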

The observable that is sensitive to potential new-physics contributions is the rate at which the B0 or Bs mesons decay to μμ, which requires a knowledge of the total numbers of B0 and Bs mesons that are produced. To minimize measurement uncertainties, these numbers are evaluated by reconstructing events where B mesons decay through the J/ψK channel, with the J/ψ decaying to two muons. This signature has many features in common with the signal being sought but has a much higher and well known branching fraction. The last ingredient required is the fraction of Bs produced relative to B+ or B0 mesons, which LHCb has determined in independent analyses. This procedure provides the necessary “normalization” without using the total integrated luminosity or the beauty production cross-section. LHCb also uses events with the decay B0 → K+π– to provide another handle on the normalization.
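The normalization logic can be shown as a back-of-the-envelope calculation. The yields and efficiency ratio below are hypothetical placeholders; the branching fractions and production fraction are only rough, approximate values, and this is a sketch of the method rather than either experiment’s computation:

```python
# All yields below are hypothetical, for illustration only.
N_sig  = 15      # fitted Bs -> mumu signal yield (hypothetical)
N_norm = 4.0e5   # yield of B+ -> J/psi K+, J/psi -> mumu (hypothetical)
eff_ratio = 0.5  # (efficiency_norm / efficiency_sig) from simulation (hypothetical)

# Approximate, well measured quantities (rough values):
BF_norm = 1.0e-3 * 5.9e-2  # BF(B+ -> J/psi K+) x BF(J/psi -> mumu)
fs_over_fu = 0.26          # Bs production fraction relative to B+

# Normalizing to the reference channel cancels the luminosity and the
# beauty production cross-section, as described in the text.
BF_sig = (N_sig / N_norm) * eff_ratio * BF_norm / fs_over_fu
print("BF(Bs -> mumu) ~ %.1e" % BF_sig)
```

With numbers of this order, the result lands naturally in the 10–9 range quoted in the article.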

Results

Both collaborations use unbinned maximum-likelihood fits to the dimuon-mass distribution to measure the branching fractions. The combinatorial background shape in the signal region is evaluated from events observed in the dimuon-mass sidebands, while the shapes of the semileptonic and peaking backgrounds are based on Monte Carlo simulation and are validated with data. The magnitude of the peaking background is constrained from measurements of the fake muon rate using data control samples, while the levels of semileptonic and combinatorial backgrounds are determined from the fit together with the signal yields.
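A minimal sketch of an unbinned maximum-likelihood fit on toy data conveys the idea: a Gaussian signal shape on top of a falling combinatorial background, with a single free signal fraction (the real fits float many more parameters and shapes):

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

rng = np.random.default_rng(1)

# Toy dimuon masses (MeV): Gaussian signal at the Bs mass on top of an
# exponentially falling combinatorial background, in a mass window.
lo, hi = 5200.0, 5550.0
m_sig = rng.normal(5366.0, 25.0, 40)
m_bkg = lo + rng.exponential(400.0, 960)
m = np.concatenate([m_sig, m_bkg])
m = m[(m > lo) & (m < hi)]

def sig_pdf(x):
    # Signal shape fixed from "simulation" (toy values here)
    return norm.pdf(x, 5366.0, 25.0)

def bkg_pdf(x):
    # Exponential normalized over the mass window [lo, hi]
    return np.exp(-(x - lo) / 400.0) / (400.0 * (1 - np.exp(-(hi - lo) / 400.0)))

def nll(f_sig):
    # Unbinned negative log-likelihood for the signal fraction f_sig
    return -np.sum(np.log(f_sig * sig_pdf(m) + (1 - f_sig) * bkg_pdf(m)))

res = minimize_scalar(nll, bounds=(1e-6, 0.5), method="bounded")
print("fitted signal fraction: %.3f" % res.x)
```

Being unbinned, the fit uses each event’s mass directly rather than histogram contents, which matters when the signal yield is only a handful of events.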

Both collaborations use all good data collected in 2011 and 2012. For CMS, this corresponds to samples of 5 fb–1 and 20 fb–1, respectively, while for LHCb the corresponding luminosities are 1 fb–1 and 2 fb–1. The data are divided into categories based on the BDT discriminant, where the more signal-like categories provide the highest sensitivity. In the fit to the CMS data, events with both muons in the central region of the detector (the “barrel”) are separated from the others (the “forward” regions). Given their excellent dimuon-mass resolution, the barrel samples are particularly sensitive to the signal. All of the resulting mass distributions (12 in total for CMS and eight for LHCb) are then simultaneously fit to measure the B0 → μμ and Bs → μμ branching fractions, yielding the results that are shown in figure 3.

For both experiments, the fits reveal an excess of Bs → μμ events over the background-only expectation, corresponding to a branching fraction BF(Bs → μμ) = 3.0+1.0–0.9 × 10–9 in CMS and 2.9+1.1–1.0 × 10–9 in LHCb, where the uncertainties reflect statistical and systematic effects. These measurements have significances of 4.3σ and 4.0σ, respectively, evaluated as the ratio between the likelihood obtained with a free Bs → μμ branching fraction and that obtained by fixing BF(Bs → μμ) = 0. The results have been combined to give BF(Bs → μμ) = 2.9±0.7 × 10–9 (CMS+LHCb).
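The significances quoted above come from a likelihood-ratio test. A toy Poisson counting example (numbers hypothetical) shows how Wilks’ theorem turns the log-likelihood difference between the best fit and the background-only hypothesis into a significance in σ:

```python
import math

# Hypothetical counting experiment: n events observed where the
# background-only expectation is b.
n, b = 25, 10.0

# Twice the log-likelihood ratio between the best-fit signal (s = n - b)
# and the background-only hypothesis; by Wilks' theorem its square root
# is the significance in standard deviations.
q0 = 2.0 * (n * math.log(n / b) - (n - b))
Z = math.sqrt(q0)
print("significance: %.1f sigma" % Z)
```

These toy numbers happen to give roughly 4σ, the same ballpark as the real measurements, though the actual evaluation profiles many nuisance parameters.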

Both CMS and LHCb reported this long-sought observation at the EPS-HEP conference in Stockholm in July and in back-to-back publications submitted to Physical Review Letters (CMS collaboration 2013, LHCb collaboration 2013).

The combined measurement of Bs → μμ by CMS and LHCb is consistent with the Standard Model’s prediction, BF(Bs → μμ) = 3.6±0.3 × 10–9, showing that the model continues to resist attempts to see through its thick veil. The same fits also measure the B0 → μμ branching fraction. They reveal no significant evidence of this decay and set upper limits at the 95% confidence level of 1.1 × 10–9 (CMS) and 0.74 × 10–9 (LHCb). These limits are also consistent with the Standard Model, although the measurement fails to reach the precision required to probe the prediction.

While the observation of a decay that has been sought for so long and by so many experiments is a thrilling discovery, it is also a bittersweet outcome. Much of the appeal of the Bs → μμ decay channel was in its potential to reveal cracks in the Standard Model – something that the measurement has so far failed to provide. However, the story is far from over. As the LHC continues to provide additional data, the precision with which its experiments can measure these key branching fractions will improve steadily and increased precision means more stringent tests of the Standard Model. While these results show that deviations from the expectations cannot be large, even a small deviation – if measured with sufficient precision – could reveal physics beyond the Standard Model.

Additionally, the next LHC run will provide the increase in sensitivity that the experiments need to measure B0 → μμ rates at the level of the Standard Model’s prediction. New physics could be lurking in that channel. Indeed, the prediction for the ratio of the Bs → μμ and B0 → μμ decay rates is well known, so a precise measurement of this quantity is a long-term goal of the LHC experiments. And even in the scenario where the Standard Model continues its undefeated triumphant path, theories that go beyond it must still describe the existing data. Tighter experimental constraints on these branching fractions would be powerful in limiting the viable extensions to the Standard Model and could point towards what might lie beyond today’s horizon in high-energy physics. With the indisputable observation of Bs → μμ decays, experimental particle physics has reached a major milestone in a 30-year-long journey. This refreshing news motivates the LHC experimental teams to continue forward into the unknown.

Surprising studies in multiplicity

One of the key ways of looking into what happens when high-energy hadrons collide is to measure the relationship between the number, or multiplicity, of particles produced and their momentum transverse to the direction of the colliding beams. The results cast light on processes ranging from the interactions of individual partons (quarks and gluons) to the collective motion of hot, dense matter containing hundreds of partons. The ALICE experiment is investigating effects across the range of possibilities, using data collected with proton–proton (pp), proton–lead (pPb) and lead–lead collisions (PbPb) in the LHC – and the results are showing some surprises.

A correlation between the average transverse momentum 〈pT〉 and the charged particle multiplicity Nch was first observed at CERN’s SppS collider and has since been measured in proton–proton and proton–antiproton collisions over a range of centre-of-mass energies, culminating recently at the LHC. The strong correlation observed led to a change in paradigm in the modelling of such collisions, with the proposal of mechanisms that go beyond independent parton–parton collisions.

In pp collisions, one way to understand the production of high multiplicities is through multiple parton interactions, but the incoherent superposition of such interactions would lead to the same 〈pT〉 for different values of multiplicity. The observation of a strong correlation thus led to the introduction, within the models of the PYTHIA event simulator, of colour reconnections between hadronizing strings. In this mechanism, which can be interpreted as a collective final-state effect, strings from independent parton interactions do not independently produce hadrons, but fuse before hadronization. This leads to fewer, but more energetic, hadrons. Other models that employ similar mechanisms of collective behaviour also describe the data.

In PbPb collisions, high-multiplicity events are the result of a superposition of (single) parton interactions taking place in a large number of nucleon–nucleon collisions. In this case, substantial rescattering of constituents is thought to lead to a redistribution of the particle spectrum, with most particles being part of a locally thermalized medium that exhibits collective, hydrodynamic-type, behaviour. The moderate increase of 〈pT〉 seen in PbPb collisions (shown in figure 1 for Nch around 10 or larger) is thus usually attributed to collective flow.

Now, the first measurements by ALICE of two-particle correlations in the intermediate system of pPb collisions have sparked an intense debate about the role of initial- and final-state effects. The pPb data on 〈pT〉 indeed exhibit features of both pp and PbPb collisions, at low and high multiplicities, respectively. However, the saturation trend of 〈pT〉 versus Nch is less pronounced in pPb collisions than in PbPb and at high multiplicities leads to a much higher value of 〈pT〉 than in PbPb. Is this nevertheless a fingerprint of collective effects in pPb collisions? Predictions that incorporate collective effects within the hadron interaction model EPOS describe the data well, but alternative explanations, based on initial-state effects (gluon saturation), have also been put forward and are being tested by these data (ALICE collaboration 2013a).

Other recent measurements of particle production in proton–nucleus collisions have shown unexpected behaviour that is reminiscent of quark–gluon plasma (QGP) signatures. But what could cause such behaviour and is a QGP the only possible explanation? To answer this in more detail, it is important to separate particle species, as collective phenomena should follow an ordering in mass. To this end, ALICE has measured the transverse-momentum spectra of identified particles in pPb collisions at √sNN = 5.02 TeV and their dependence on multiplicity (ALICE collaboration 2013b).

The measurements show that the identified particle spectra become progressively harder with multiplicity, just as in PbPb collisions, where the hardening is more pronounced for particles of higher mass. In heavy-ion collisions, this mass ordering is interpreted as a sign of a collective radial expansion of the system. To check if such an idea describes the observations, a blast-wave parameterization can be used. This assumes a locally thermalized medium that undergoes a collective expansion in a common velocity field, followed by an instantaneous common freeze-out.
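The blast-wave picture can be sketched numerically. The snippet below implements the standard Schnedermann–Sollfrank–Heinz spectrum with illustrative parameter values (not ALICE fit results) and shows the mass ordering of 〈pT〉 that a common flow velocity field produces:

```python
import numpy as np
from scipy.special import i0, k1

def trapezoid(y, x):
    """Trapezoidal integration over the last axis."""
    return np.sum(0.5 * (y[..., 1:] + y[..., :-1]) * np.diff(x), axis=-1)

# Illustrative parameters, not fitted values:
T_kin = 0.10   # kinetic freeze-out temperature (GeV)
beta_s = 0.6   # surface expansion velocity (units of c)
n_prof = 1.0   # velocity-profile exponent

def blast_wave(pT, mass):
    """dN/(pT dpT), up to normalization (Schnedermann-Sollfrank-Heinz form)."""
    mT = np.sqrt(pT**2 + mass**2)
    r = np.linspace(1e-4, 1.0, 200)          # radius in units of the source size R
    rho = np.arctanh(beta_s * r**n_prof)     # radial boost-angle profile
    integrand = (r * mT[:, None]
                 * i0(pT[:, None] * np.sinh(rho) / T_kin)
                 * k1(mT[:, None] * np.cosh(rho) / T_kin))
    return trapezoid(integrand, r)

pT = np.linspace(0.1, 3.0, 100)
for name, mass in [("pion", 0.140), ("kaon", 0.494), ("proton", 0.938)]:
    spec = blast_wave(pT, mass)
    mean_pT = trapezoid(pT * spec * pT, pT) / trapezoid(spec * pT, pT)
    print("%-7s <pT> = %.2f GeV" % (name, mean_pT))
```

With a common velocity field, heavier particles pick up more momentum from the flow, so their 〈pT〉 comes out larger – the mass ordering invoked in the text.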

As figure 2 shows, the blast-wave fit describes the spectra well at low pT, where hydrodynamics-like behaviour should dominate. The description fails at higher momenta, however, where the non-thermal components should contribute significantly. But are QGP-like interpretations such as this one unique in describing these measurements? The colour-reconnection mechanism present in PYTHIA, discussed above, leads qualitatively to similar features to those observed in the data.

The presence of flow and of a QGP in high multiplicity pPb collisions is thus not ruled out, but since other non-QGP effects could mimic collective phenomena, further investigation is needed. Nevertheless, these results are certainly a crucial step towards a better comprehension not only of pPb collisions but also of high-energy collisions involving nuclei in general.
