PHYSTAT-nu 2019 was held at CERN from 22 to 25 January. Counted among the 130 participants were LHC physicists and professional statisticians, as well as neutrino physicists from across the globe. The inaugural PHYSTAT meeting took place at CERN in 2000, and the series has gone from strength to strength since, with workshops devoted to specific topics in particle-physics data analysis. The latest event is the third in the series to focus on statistical issues in neutrino experiments. The workshop concentrated on the statistical tools used in data analyses, rather than on experimental details and results.
Modern neutrino physics is geared towards understanding the nature and mixing of the three neutrinos’ mass and flavour eigenstates. This mixing can be inferred by observing “oscillations” between flavours as neutrinos travel through space. Neutrino experiments come in many types and scales, but they tend to have one calculation in common: whether the neutrinos are created in an accelerator, in a nuclear reactor or by any number of astrophysical sources, the number of events expected in the detector is the product of the neutrino flux and the interaction cross section. Given the ghostly nature of the neutrino, this calculation presents subtle statistical challenges. To cancel common systematics, many facilities place two or more detectors at different distances from the neutrino source. However, as was shown for the NOvA and T2K experiments, which compete to observe CP violation using accelerator-neutrino beams, it is difficult to correlate the neutrino yields in the near and far detectors. Full cancellation of the systematic uncertainties is complicated by the different detector acceptances, possible differences in detector technology, and the varying compositions of neutrino interaction modes. In the coming years the two experiments plan to combine their data in a global analysis to increase their discovery power – lessons can be learnt from the LHC experience.
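As a rough numerical sketch of that headline calculation, the snippet below folds a flux, a cross section and an exposure into an expected event count at a far detector, modulated by the standard two-flavour oscillation survival probability. Every number in it is an invented placeholder, merely of a plausible order of magnitude for a long-baseline beam experiment; none is a value from NOvA, T2K or any other experiment.

```python
import numpy as np

def survival_probability(L_km, E_GeV, sin2_2theta=0.95, dm2_eV2=2.5e-3):
    """Two-flavour nu_mu survival probability in the standard approximation:
    P = 1 - sin^2(2*theta) * sin^2(1.267 * dm^2 * L / E)."""
    return 1.0 - sin2_2theta * np.sin(1.267 * dm2_eV2 * L_km / E_GeV) ** 2

# Illustrative placeholder inputs, not values from any real experiment.
flux = 1.0e-14           # nu_mu per cm^2 per proton-on-target at the far site
cross_section = 1.0e-38  # energy-averaged cross section per nucleon (cm^2)
n_targets = 6.0e32       # nucleons in the detector's fiducial volume
pot = 1.0e21             # accelerator exposure (protons on target)

# Expected far-detector count: flux x cross section x targets x exposure,
# reduced by the probability that the nu_mu has not oscillated away.
n_expected = (flux * cross_section * n_targets * pot
              * survival_probability(L_km=810.0, E_GeV=2.0))
print(f"Expected nu_mu events: {n_expected:.1f}")
```

Comparing such a prediction between near and far sites is what allows many flux and cross-section uncertainties to cancel, at least partially.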
The problem of modelling the interactions of neutrinos with nuclei – essentially the problem of calculating the cross section in the detector – forces researchers to face the thorny statistical challenge of producing distributions that are unadulterated by detector effects. Such “unfolding” corrects kinematic observables for detector acceptance and smearing, but these corrections can introduce large uncertainties. To counter this, strong “regularisation” is often applied, biasing the results towards the smooth spectra of Monte Carlo simulations. PHYSTAT-nu attendees agreed that it is desirable to publish unregularised results alongside unfolded measurements. “Response matrices” may also be released, allowing physicists outside an experimental collaboration to smear their own models and compare them with detector-level data. Another major issue in modelling neutrino–nucleus interactions is the “unknown unknowns”. As Kevin McFarland of the University of Rochester reflected in his summary talk, it is important not to estimate your uncertainty from a survey of theory models. “It’s like trying to measure the width of a valley from the variance of the position of sheep grazing on it. That has an obvious failure mode: sheep read each other’s papers.”
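To make the forward-folding idea concrete, here is a minimal toy in which a released response matrix smears a “true” spectrum into detector space, while a naive unregularised unfolding by matrix inversion exhibits the noise amplification that motivates regularisation. The five-bin matrix, its resolution and acceptance, and the spectra are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy response matrix R[i, j]: probability that an event generated in
# true-kinematics bin j is reconstructed in detector-level bin i.
# Its shape, resolution and acceptance are invented for illustration.
n_bins = 5
R = np.zeros((n_bins, n_bins))
for j in range(n_bins):
    for i in range(n_bins):
        R[i, j] = np.exp(-0.5 * ((i - j) / 0.8) ** 2)  # Gaussian smearing
    R[:, j] *= 0.9 / R[:, j].sum()  # 90% acceptance: 10% of events are lost

true_spectrum = np.array([100.0, 300.0, 500.0, 300.0, 100.0])

# Forward-folding: smear a theory prediction into detector space and
# compare it directly with detector-level data; no unfolding required.
predicted_reco = R @ true_spectrum

# Naive unregularised unfolding by matrix inversion: Poisson fluctuations
# in the observed counts are amplified into large bin-to-bin swings,
# which is why strong (and potentially biasing) regularisation is used.
observed = rng.poisson(predicted_reco)
unfolded = np.linalg.solve(R, observed.astype(float))
print("true     :", true_spectrum)
print("unfolded :", np.round(unfolded, 1))
```

Publishing R alongside detector-level data lets theorists run the first comparison themselves, sidestepping the inversion entirely.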
An important advance for current and future neutrino experiments would be to set up a statistics committee, as was done at the Tevatron and, more recently, at the LHC experiments. This PHYSTAT-nu workshop could be the first real step towards this exciting scenario.
The next PHYSTAT workshop will be held at Stockholm University from 31 July to 2 August on the subject of statistical issues in direct-detection dark-matter experiments.