
Topics

Antimatter research leaps ahead

The 13th Low Energy Antiproton Physics (LEAP) conference was held from 12–16 March at the Sorbonne University International Conference Center in Paris. A large part of the conference focused on experiments at the CERN Antiproton Decelerator (AD), in particular the outstanding results recently obtained by ALPHA and BASE.

One of the main goals of this field is to explain the lack of antimatter observed in the present universe, which demands that physicists look for any difference between matter and antimatter beyond their opposite quantum numbers. Specifically, experiments at the AD make ultra-precise measurements to test charge–parity–time (CPT) invariance and, soon, via the free fall of antihydrogen atoms, the gravitational equivalence principle; any difference between matter and antimatter found in these tests would point to new physics.

The March meeting began with talks about antimatter in space. AMS-02 results, based on a sample of 3.49 × 10⁵ antiprotons detected during the past four years onboard the International Space Station, showed that antiprotons, protons and positrons have the same rigidity spectrum in the energy range 60–500 GeV. This is not expected in the case of pure secondary production and could be a hint of dark-matter interactions (CERN Courier December 2016 p31). The development of facilities at the AD, including the new ELENA facility, and at the Facility for Antiproton and Ion Research (FAIR), was also described. FAIR, under construction in Darmstadt, Germany, will increase the antiproton flux by at least a factor of 10 compared to ELENA and allow new physics studies focusing, for example, on the interactions between antimatter and radioactive beams (CERN Courier July/August 2017 p41).

Talks covering experimental results and the theory of antiproton interactions with matter, and the study of the physics of antihydrogen, were complemented with discussions on other types of antimatter systems, such as purely leptonic positronium and muonium. Measurements of these systems offer tests of CPT in a different sector, but their short-lived nature could make experiments here even more challenging than those on antihydrogen.

Stefan Ulmer and Christian Smorra from the AD's BASE experiment described how they managed to keep antiprotons in a magnetic trap for more than 400 days at an astonishingly low pressure of 5 × 10⁻¹⁹ mbar. No gauge can measure such a value; it is inferred from the storage lifetime of the antiprotons and their probability of annihilation with residual gas in the trap. The feat allowed the team to set the best direct limit so far on the lifetime of the antiproton: 21.7 years (indirect observations from astrophysics indicate an antiproton lifetime in the megayear range). The BASE measurement of the proton-to-antiproton charge-to-mass ratio (CERN Courier September 2015 p7) is consistent with CPT invariance and, with a precision of 0.69 × 10⁻¹², is the most stringent test of CPT with baryons. The BASE comparison of the magnetic moments of the proton and the antiproton at the level of 2 × 10⁻¹⁰ is another impressive achievement and is also consistent with CPT (CERN Courier March 2017 p7).
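The logic of such a storage-based limit can be sketched with a toy calculation (illustrative only, not BASE's actual analysis or numbers): if N trapped antiprotons all survive a storage time T without a single annihilation, the expected number of decays N·T/τ cannot exceed the Poisson upper limit for zero observed events, which bounds the lifetime τ from below.

```python
import math

def lifetime_lower_limit(n_antiprotons, storage_years, cl=0.90):
    """Crude lower limit on the antiproton lifetime tau when no decay
    is observed: with tau >> T the expected number of losses is
    n*T/tau, which must not exceed the Poisson upper limit for zero
    observed events, -ln(1 - cl) (about 2.30 at 90% confidence)."""
    poisson_upper = -math.log(1.0 - cl)
    return n_antiprotons * storage_years / poisson_upper

# Hypothetical example: 16 antiprotons stored for 1.1 years
print(f"tau > {lifetime_lower_limit(16, 1.1):.1f} years (90% CL)")
```

The limit scales linearly with the number of stored antiprotons times the storage time, which is why long storage at extreme vacuum is what makes the measurement possible.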

Three new results were also presented from ALPHA, which has now achieved stable enough operation in the manipulation of antihydrogen atoms to perform spectroscopy on 15,000 antiatoms. Tim Friesen presented the hyperfine spectrum and Takamasa Momose the spectroscopy of the 1S–2P transition. Chris Rasmussen presented the 1S–2S lineshape, which gives a resonant frequency consistent with that of hydrogen at a precision of 2 × 10⁻¹², or an energy level of 2 × 10⁻²⁰ GeV, already exceeding the precision on the mass difference between neutral kaons and antikaons. ALPHA's rapid progress suggests hydrogen-like precision in antihydrogen is achievable, opening unprecedented tests of CPT symmetry (CERN Courier March 2018 p30).

The next edition of the LEAP conference will take place at Berkeley in the US in August 2020. Given the recent pace of research in this relatively new field of fundamental exploration, we can look forward to a wealth of new results between now and then.

The history and future of the PHYSTAT series

Most particle-physics conferences emphasise the results of physics analyses. The PHYSTAT series is different: speakers are told not to bother about the actual results, but are reminded that the main topics of interest are the statistical techniques used, the resulting uncertainty on measurements, and how systematics are incorporated. What makes good statistical practice so important is that particle-physics experiments are expensive in human effort, time and money. It is thus very worthwhile to use reliable statistical techniques to extract the maximum information from data (but no more).

Origins

Late in 1999, I had the idea of a meeting devoted solely to statistical issues, and in particular to confidence intervals and upper limits for parameters of interest. With the help of CERN’s statistics guru Fred James, a meeting was organised at CERN in January 2000 and attracted 180 participants. It was quickly followed by a similar one at Fermilab in the US, and further meetings took place at Durham (2002), SLAC (2003) and Oxford (2005). These workshops dealt with general statistical issues in particle physics, such as: multivariate methods for separating signal from background; comparisons between Bayesian and frequentist approaches; blind analyses; treatment of systematics; p-values or likelihood ratios for hypothesis testing; goodness-of-fit techniques; the “look elsewhere” effect; and how to combine results from different analyses.

Subsequent meetings were devoted to topics in specific areas within high-energy physics. Thus, in 2007 and 2011, CERN hosted two more meetings focusing on issues relevant for data analysis at the Large Hadron Collider (LHC), and particularly on searches for new physics. At the 2011 meeting, a whole day was devoted to unfolding, that is, correcting observed data for detector smearing effects. More recently, two PHYSTAT-ν workshops took place at the Institute for Physics and Mathematics of the Universe in Japan (2016) and at Fermilab (2017). They concentrated on issues that arise in analysing data from neutrino experiments, which are now reaching exciting levels of precision. In between these events, there were two smaller workshops at the Banff International Research Station in Canada, which featured the “Banff Challenges” – in which participants were asked to decide which of many simulated data sets contained a possible signal of new physics.

The PHYSTAT workshops have largely avoided parallel sessions so that participants can hear all of the talks. From the very first meetings, the atmosphere has been enhanced by the presence of statisticians; more than 50 have participated in the various meetings over the years. Most of the workshops start with an introductory statistics lecture to help people with less experience in this field follow the subsequent talks and discussions. The workshops then traditionally close with a pair of summary talks, one given by a statistician and one by a particle physicist.

A key role

PHYSTAT has played a role in the evolution of the way particle physicists employ statistical methods in their research, and has also had a real influence on specific topics. For instance, at the SLAC meeting in 2003, Jerry Friedman (a SLAC statistician who was previously a particle physicist) spoke about boosted decision trees for separating signal from background; such algorithms are now very commonly used for event selection in particle physics. Another example is unfolding, which was discussed at the 2011 meeting at CERN; the Lausanne statistician Victor Panaretos spoke about theoretical aspects, and subsequently his then student Mikael Kuusela became part of the CMS experiment and has provided much valuable input to analyses involving unfolding. PHYSTAT is also one of the factors that has helped raise the level of respectability with which statistics is regarded by particle physicists. Thus, graduate summer schools (such as those organised by CERN) now have lecture courses on statistics, some conferences include plenary talks on statistics, and books on particle-physics methodology have chapters devoted to the subject. With the growth in size and complexity of data in this field, a thorough grounding in statistics is going to become even more important.
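The boosted-decision-tree approach Friedman described can be illustrated with a minimal sketch using scikit-learn on synthetic data (the variable names and Gaussian toy distributions are invented for illustration; real analyses use far more refined inputs):

```python
# Toy signal/background separation with a boosted decision tree.
# Two overlapping 2D Gaussians stand in for kinematic variables.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 2000
background = rng.normal(loc=0.0, scale=1.0, size=(n, 2))
signal = rng.normal(loc=1.5, scale=1.0, size=(n, 2))  # shifted mean
X = np.vstack([background, signal])
y = np.concatenate([np.zeros(n), np.ones(n)])  # 0 = bkg, 1 = sig

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# An ensemble of shallow trees, each correcting its predecessors
bdt = GradientBoostingClassifier(n_estimators=100, max_depth=3)
bdt.fit(X_train, y_train)
print(f"test accuracy: {bdt.score(X_test, y_test):.2f}")
```

The appeal of the method for event selection is that it combines many weak, easily interpretable cuts into a single powerful discriminant without requiring the analyst to tune multidimensional cut boundaries by hand.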

Recently, Olaf Behnke of DESY in Hamburg has taken over the organisation and planning of the PHYSTAT programme. Ideas already under discussion include a monthly lecture series, a further PHYSTAT-ν workshop at CERN in January 2019, a PHYSTAT-LHC meeting in autumn 2019 and possibly a workshop devoted to statistical issues in dark-matter experiments. In all probability, the future of PHYSTAT is bright.

Accelerator aficionados meet in Vancouver

The 9th International Particle Accelerator Conference (IPAC18) was held in Vancouver, Canada, from 29 April to 4 May. Hosted by TRIUMF and jointly sponsored by the IEEE Nuclear and Plasma Sciences Society and the APS Division of Physics of Beams, the event attracted more than 1210 delegates from 31 countries, plus 80 industry-exhibit staff. The scientific programme included 63 invited talks and 62 contributed oral presentations, organised into eight main categories. While it is impossible to summarise the full programme in a short article, below are some of the highlights from IPAC18 that demonstrate the breadth and vibrancy of the accelerator field at this time.

A foray into the future of accelerators by Stephen Brooks of Brookhaven National Laboratory was a walk on the wild side. The idea of a single-particle collider was presented as a possibility to achieve diffraction-limited TeV beams to bridge the potential "energy desert" between current technology and the next energy regime of interest. Relevant technological and theoretical challenges were discussed, including multiple ideas for overcoming emittance growth from synchrotron radiation, focusing beams (via gravitational lensing!) and obtaining nucleus-level alignment, as well as how to reduce the cost of future accelerators.

The rise of X-ray free-electron lasers in the past decade, opening new scientific avenues in areas of broad societal relevance, was a strong theme of the conference. In the session devoted to photon sources and electron accelerators, Michael Spata described Jefferson Laboratory's 12 GeV upgrade of CEBAF, which began full-power operation in April after overcoming numerous challenges (including the installation and operation of a new 4 kW helium liquefier, and field-emission limitations in the cryomodules). James Rosenzweig (UCLA) described progress towards an all-optical "fifth-generation" light source. Here, a TW laser pulse would be split in two, with half used to accelerate high-quality electron bunches as they co-propagate in a tapered undulator, and the other half striking the accelerated electron beam head-on so that the back-scattered photons are shifted to much shorter wavelengths. The scheme could lead to a compact, tunable multi-MeV gamma-ray source, and successful demonstrations have already taken place at the RUBICONICS test stand at UCLA.

Concerning novel particle sources and acceleration techniques, plasma-wakefield acceleration featured prominently. CERN's Marlene Turner described progress at the AWAKE experiment, which aims to use a high-energy proton beam to generate a plasma wake that in turn accelerates an electron beam. Last year the AWAKE team demonstrated self-modulation of the proton beam and measured the formation of the plasma wakefield; the team has now installed the equipment to test the acceleration of an injected electron beam, a test expected to be completed in 2018. Felicie Albert of Lawrence Livermore National Laboratory also described the use of laser-wakefield technology to generate betatron X-rays, which could enable new measurements at X-ray free-electron lasers.

With IPAC18 coinciding with TRIUMF's 50th anniversary (CERN Courier May 2018 p31), laboratory director Jonathan Bagger described the evolution of TRIUMF from its founding in 1968 by three local universities to the present-day set-up with 20 member universities, users drawn from 38 countries and an annual budget of CA$100 million. Also in the hadron-accelerator session was a talk by Sergei Nagaitsev of Fermilab about the path to the Long-Baseline Neutrino Facility, which is actually three parallel paths: one for the proton beams (PIP-II), one for the detector (DUNE), to be located in the Homestake mine in South Dakota 1300 km away, and one for the facilities at Fermilab and Homestake. The three projects will engage more than 175 institutions from around the world with the aim of investigating leptonic CP violation and the mass hierarchy in the neutrino sector. The International Facility for Antiproton and Ion Research (FAIR), under construction in Germany (CERN Courier July/August 2007 p4), was another focus of this session, with Mei Bai of GSI Darmstadt summarising the significant upgrade of the heavy-ion synchrotron SIS18 that will drive the world's most intense uranium beams for future FAIR operation.

In the session devoted to beam dynamics and electromagnetic fields, Valery Telnov (Budker Institute) sounded a cautionary note about bremsstrahlung at the interaction points of future electron–positron colliders such as FCC-ee, where it will limit beam lifetimes, whereas in present-generation colliders such as SuperKEKB lifetimes are dominated by synchrotron radiation in the arcs. Tessa Charles of the University of Melbourne, meanwhile, introduced the method of "caustics" to understand and optimise longitudinal beam-dynamics problems, such as how to minimise coherent-synchrotron-radiation effects in recirculation arcs.

The proton linac for the European Spallation Source (ESS), under construction in Sweden, was presented by Morten Jensen during the session on accelerator technology. He outlined the variety of radio-frequency (RF) power sources used in the ESS proton linac and the development of the first-ever MW-class inductive output tubes (IOTs) for the linac's high-beta cavities, which have been tested at CERN and reached record-beating performances of 1.2 MW output for 8.3 kW input power. Pending the development of a production series, the accelerator community may have a new RF workhorse.

As indicated, these are just a few of the many scientific highlights from IPAC18. Industry was also a major presence. In an industry panel discussion, speakers talked about successful models for technology transfer, while talks such as that from Will Kleeven (IBA) described the Rhodotron, a compact industrial CW electron accelerator producing intense beams with energies of around 1–10 MeV, which has key industrial applications including polymer cross-linking, sterilisation, food treatment and container security scanning.

IPAC is committed to welcoming young researchers, offering more than 100 student grants and heavily discounted fees for all students. Almost 1500 posters were presented by authors from 233 institutions over four days. The regional attendance distribution was 24% from Asia, 41% from Europe and 35% from the Americas, demonstrating the truly international nature of our field. The 10th IPAC will take place in Melbourne, Australia, on 19–24 May 2019.

Richard Taylor 1929–2018

Richard E Taylor died at the age of 88 on 22 February at his home on the Stanford campus in the US. Taylor was the co-recipient of the 1990 Nobel Prize in Physics, along with Henry Kendall and Jerome Friedman of MIT, for their discoveries of scaling in deep-inelastic electron–proton scattering. It was these results that led to the experimental demonstration of the existence of quarks.

Taylor was born in Medicine Hat, Alberta, Canada, to Clarence and Delia Taylor. He was interested in a career as a surgeon, but an early explosion while using a chemistry set as a child cost him parts of two fingers and the thumb on his left hand – and thus pushed him towards a career in science. He was an undergraduate at the University of Alberta, receiving a bachelor's and then a master of science degree in 1952. At Alberta, he married Rita Bonneau in 1951.

Taylor then went to Stanford, working at the Stanford High Energy Physics Laboratory (HEPL). In 1958 he was invited by colleagues at the École Normale Supérieure to work on experiments at their new accelerator in Orsay, the Laboratoire de l'Accélérateur Linéaire. After three years he returned to the US, spending a year at Lawrence Berkeley National Laboratory, and then returned to Stanford to complete his PhD under Robert Mozley in 1962. Wolfgang Panofsky invited him to join the core group building the Stanford Linear Accelerator Center (SLAC), roughly 1 km west of the main Stanford campus. Taylor was given responsibility for the "Beam Switchyard" at the end of the linear accelerator, which analysed and steered beams to experiments, and for the large "End Station A" and its electron spectrometers. Taylor organised a talented group at SLAC, including David Coward and Herbert (Hobie) DeStaebler, which carried out the design and construction of these major facilities. The three electron spectrometers, with momentum ranges centred around 1.6, 8 and 20 GeV/c, made the critical measurements that established SLAC at the forefront of particle physics.

Taylor led his group at SLAC into a collaboration with Caltech and MIT that foresaw the rise of powerful particle-physics collaborations now at the scale of a few thousand physicists for the major LHC experiments. That collaboration proposed and carried out a series of experiments beginning with the elastic scattering of electrons off protons at high momentum transfer in 1967. The measurements extended those made by Richard Hofstadter at HEPL, but led to no surprises.

The proposal for deep-inelastic scattering had no mention of point-like particles in the nucleon. The inelastic cross sections beyond the nucleon resonances were unexpectedly large and flat with increasing momentum transfer, especially when compared to elastic scattering. The data also displayed a simplifying feature called scaling – a prediction by Bjorken from current algebra – suggesting that deep-inelastic cross sections could be expressed as a function of one kinematic variable. These results were extended by Taylor’s group and MIT into more kinematic regions and to studies of the neutron with a deuterium target.
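Bjorken's scaling prediction can be stated compactly in standard modern notation (an editorial gloss, not a formula from the original papers discussed here):

```latex
% Deep-inelastic structure functions, which could in principle depend
% on both the momentum transfer Q^2 and the energy transfer \nu,
% instead collapse onto functions of the single dimensionless variable
%   x = Q^2 / (2 M \nu),
% in the scaling limit (Q^2, \nu \to \infty at fixed x):
F_2(x, Q^2) \;\longrightarrow\; F_2(x)
```

In Feynman's parton picture, x acquires the interpretation of the fraction of the nucleon's momentum carried by the struck point-like constituent, which is why the flat momentum-transfer dependence of the cross sections was such a striking signature.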

At the “Rochester Conference” in Vienna in 1968, Panofsky summed up the first public results of the experiments with the comment: “Therefore, theoretical speculations are focused on the possibility that these data might give evidence on the behaviour of point-like, charged structures within the nucleon.” Following a visit to SLAC in August 1968, Richard Feynman introduced his “naïve parton theory” in which electrons scattered from point-like free partons give both the observed weak momentum-transfer dependence and scaling. Subsequent experiments by Taylor and collaborators allowed the two nucleon structure functions to be separated, determining that the partons were spin-½ particles. Evaluations of sum rules derived by Bjorken and Kurt Gottfried were consistent with charge assignments in the nascent quark model. Finally, the Gargamelle neutrino and antineutrino results at CERN confirmed the Gell-Mann–Zweig quark model, and these experiments collectively gave rise to the Standard Model of particle physics.

Taylor’s connections to Paris, and later DESY and CERN, continued as a theme through his life. He was awarded a doctorate (Honoris Causa) by the Université de Paris-Sud. After becoming a member of the SLAC faculty in 1968, Taylor won a Guggenheim fellowship and spent a sabbatical year at CERN. He received an Alexander von Humboldt award and spent the 1981–1982 academic year at DESY. Taylor’s group at SLAC was a lively place, with many young European visitors who became staunch colleagues and friends.

In 1978, an experiment at SLAC led by Charles Prescott and Taylor demonstrated parity violation in polarised electron–deuterium scattering – a very challenging experiment that followed negative results from atomic-physics experiments. Parity violation was an essential component of the unification of the electromagnetic and weak interactions, another pillar of the Standard Model, which led to the Nobel Prize for Sheldon Glashow, Abdus Salam and Steven Weinberg in 1979.

Taylor was also awarded the W K H Panofsky Prize, and was a fellow of the American Physical Society, the American Association for the Advancement of Science, the Royal Society and the Royal Society of Canada. He was also a member of the American Academy of Arts and Sciences and the Canadian Association of Physicists, a foreign associate of the National Academy of Sciences, and a Companion of the Order of Canada.

Taylor stayed rooted to his Canadian origins, often vacationing in Medicine Hat where he maintained a home and enjoyed fly fishing in the local streams. He always saw himself as an experimentalist, saying in a 2008 Nobel-prize interview: “My job was to measure things and to make sure that the measurements were right. It is the job of the theoretical community to understand why things are the way that I see them when I do experiments.”

Taylor was a large man and liked to play up a reputation of being somewhat fierce. His friends and colleagues all knew him as a gentle soul, caring deeply for SLAC and always promoting the younger generations of scientists. He is survived by his wife Rita and son Ted.

Ferdinand Hahn 1959–2018

It was with great sadness that we learned that Ferdi Hahn passed away on 4 March. He was an enthusiastic and highly skilled colleague, and an openhearted friend.

Ferdi first came to CERN in 1987 as a technical student from the University of Wuppertal, joining the barrel-RICH project for the DELPHI experiment at LEP. As part of his diploma thesis, he participated in the photon-detector project SYBIL, a TPC-like drift chamber with single-photoelectron detection that served as a prototype of the DELPHI barrel-RICH system.

Here, Ferdi became thoroughly acquainted with all hardware and software aspects of such a test programme, from innumerable technical matters to the analysis of the data taken. When a part of the detector was not delivered in time, Ferdi immediately drove 800 km to the company and back again to allow data taking to start on schedule in 1989. From 1990, as a CERN fellow, he was heavily involved in the commissioning of the drift tubes of the RICH detector, a particularity of the DELPHI experiment, followed by the development of the temperature control of the barrel RICH. Ferdi later completed his PhD with a measurement of the differential cross-sections of charged kaons and protons using the DELPHI detector, taking advantage of its unique RICH system.

In 1995 Ferdi joined the CERN physics department as a member of the DELPHI gas group. As a section leader in the support groups for CERN experiments and deputy group leader of the DELPHI detector unit, he perfected the operation of the many complex DELPHI gas systems. He also set up the LHC experiments' gas working group, which led to a professional and efficient gas system for all LHC detectors.

After having led the detector technology group of the physics department between 2007 and 2008, Ferdi then took over the technical coordination of the NA62 experiment with considerable commitment and great competence in many experimental aspects. Through the preparation of the Technical Design Report and the coordination of the entire installation of the experiment, his exquisite ability to bring collaborators from all kinds of cultures together was clearly an asset for the success of the project.

Knowing that the NA62 experiment was operating smoothly, Ferdi happily agreed to support the physics department as deputy head in 2015. As part of the management, he was in charge of the coordination of the technical groups in the department, including the planning of personnel. With his pleasant manner, patience and exemplary communication skills, he solved numerous tricky problems.

Ferdi was treasured as a close colleague by many; it was a pleasure to work with him. His open character and smile made it easy to discuss subjects, even when they involved complicated issues. He was enthusiastic and full of energy, always ready to help. His friendly way of dealing with people was backed up by a deep competence in technical issues. He was one of a kind and will be sadly missed.

SuperKEKB steps out at the intensity frontier

On 26 April the SuperKEKB accelerator at the KEK laboratory in Japan collided its first beams of electrons and positrons, marking the start of an ambitious data-taking campaign that will allow ultraprecise measurements of Standard Model (SM) parameters.

These are the first particle collisions at KEK in eight years, following the closure in 2010 of the KEKB machine to prepare for its next phase. Many subsystems of the accelerator had to be upgraded, the most important involving the use of nanobeam technology to squeeze the vertical beam size at the interaction point to around 50 nm – 20 times smaller than it was at KEKB. This required a complicated system of superconducting final-focus magnets and low-emittance beams (CERN Courier September 2016 p32).

SuperKEKB will work at the so-called intensity frontier to produce copious amounts of B and D mesons and τ leptons, enabling precise measurements of rare decays that test the SM with unprecedented sensitivity. Since the first beams were stored over a month ago, KEK teams have worked to tune the two beams for first collisions at the centre of the Belle II detector – the “super-B factory” upgrade of its predecessor, Belle. When fully commissioned, Belle II will detect and reconstruct events at the much higher rates provided by the 40-fold higher design luminosity of SuperKEKB compared to KEKB. The Belle II outer detector is already in place, but the full inner detector will not be installed until the end of the year, and the first physics run with the complete detector is projected to start in February 2019.

In 2009 KEKB achieved a record instantaneous luminosity of 2.1 × 10³⁴ cm⁻² s⁻¹, but SuperKEKB is targeting 8 × 10³⁵ cm⁻² s⁻¹. The huge increase is projected to deliver to Belle II a dataset of about 50 billion BB̄ meson pairs – 50 times larger than the entire data sample of the KEKB/Belle project – in about 10 years of operation.
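The quoted event yield follows from simple arithmetic: expected events equal integrated luminosity times cross-section. A sketch, assuming SuperKEKB's widely quoted integrated-luminosity target of 50 ab⁻¹ and an e⁺e⁻ → Υ(4S) → BB̄ cross-section of roughly 1.1 nb (both figures are assumptions for illustration, not from this article):

```python
# Event-yield estimate: N = L_int * sigma.
# Assumed inputs (not from the article): 50 ab^-1 integrated
# luminosity and sigma ~ 1.1 nb at the Upsilon(4S) resonance.
AB_INV_TO_B_INV = 1e18               # 1 ab^-1 = 1e18 b^-1
integrated_lumi_b_inv = 50 * AB_INV_TO_B_INV
sigma_bbbar_b = 1.1e-9               # 1.1 nb expressed in barns
n_pairs = integrated_lumi_b_inv * sigma_bbbar_b
print(f"~{n_pairs:.1e} B-meson pairs")  # tens of billions
```

The result lands in the tens of billions, consistent with the "about 50 billion" pairs quoted above.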

According to Belle II spokesperson Tom Browder, it is not realistic to expect design luminosity straight away. “There will be a number of steps as the beam is progressively squeezed to smaller and smaller sizes, and we fight through each new technical challenge with nanobeams,” he explains. “Our luminosity profile assumes that we progressively resolve these problems at the same rate as KEKB or PEP-II [at SLAC]. In this sense, our programme resembles that of the LHC.”

Belle II has physics goals related to those of the LHCb experiment (CERN Courier April 2018 p23), pursued in the cleaner environment of electron–positron collisions but with a lower production rate of heavy hadrons than at the LHC. Examples include investigating whether there are new CP-violating phases in the quark sector, whether there are sources of lepton-flavour violation (LFV) beyond the SM, whether there is a dark sector of particle physics at the same mass scale as ordinary matter, and whether there are flavour-changing neutral currents beyond the SM. Browder says the Belle II collaboration expects to work on all of these goals, especially LFV studies, and to catch up with LHCb on certain measurements as soon as a significant amount of luminosity is achieved. “Even with the very early data samples, the team should be able to have impactful results on the dark sector and new hadrons,” he says.

LHC physics soars ahead

On 28 April, 13 days ahead of schedule, operators at CERN’s Large Hadron Collider (LHC) successfully injected 1200 bunches of protons into the machine and brought them into collision – formally marking the beginning of the LHC’s 2018 physics season, and the final leg of Run 2.

Stable beams were first declared on 17 April, when the LHC experiments started to take data with three bunches per beam at very low luminosities. A stepwise increase of the number of bunches resulted in the maximum number of 2556 bunches per beam being reached on 5 May.

During the last steps of the intensity ramp-up, the average peak luminosity for ATLAS and CMS was close to 2.1 × 10³⁴ cm⁻² s⁻¹ – equalling or even surpassing the record peak luminosity with stable beams reached in 2017 – although the final calibration of the luminosity measurements still needs to be performed.

For the rest of the year, the LHC is dedicated to production mode for physics, with the operation of the machine being consolidated in parallel. As the Courier went to press in mid-May, the integrated luminosity for ATLAS and CMS had already surpassed 10 fb⁻¹ (with a target of 60 fb⁻¹ planned for 2018).

The faster-than-anticipated commissioning phase of the 2018 LHC restart has led to a revised machine schedule: the LHC will provide 131 days of physics operations with 25 ns-spaced proton beams, 17 days of special runs with protons, and 24 days of lead–lead collisions at the end of the year (the proton run will finish on 28 October). From December, the machine will enter Long Shutdown 2 in preparation for its high-luminosity upgrade.

KLOE-2 completes data-taking at Frascati Φ-factory

On 30 March the KLOE-2 experiment concluded its data-taking campaign at the electron–positron collider DAΦNE at the INFN National Laboratory of Frascati (LNF), Italy. This marks the conclusion of a two-decade-long period of activity at the Frascati lab, which began with the first data collected by KLOE in 1999 and then continued with KLOE-2 from November 2014. In total, an integrated luminosity of around 8 fb⁻¹ (corresponding to around 24 billion φ mesons) was acquired, representing the largest sample ever collected at the φ-resonance peak.

In terms of machine physics, the KLOE/DAΦNE programme has brought a wealth of results and a few world firsts. The KLOE-2 run saw the first application of the “crab-waist” concept – an interaction scheme developed in Frascati with the transverse dimensions of the beams and their crossing angle tuned to maximise the machine luminosity – in the presence of a high-field detector solenoid. The implementation of this innovative configuration by the DAΦNE team allowed KLOE-2 to collect an integrated luminosity of 5.5 fb−1 in a period of just over three years.

Record performances for an electron–positron collider running at such centre-of-mass energies (approximately 1 GeV) were achieved in terms of peak luminosity (2.4 × 10³² cm⁻² s⁻¹) and maximum daily integrated luminosity (14 pb⁻¹ per day).

The general-purpose KLOE detector, comprising a 4 m-diameter drift chamber surrounded by a lead–scintillating-fibre electromagnetic calorimeter with very good energy and timing performance at low energies, underwent several upgrades, including a cylindrical gas-electron-multiplier (GEM) detector for the inner tracker. Installed to improve vertex reconstruction near the interaction region, this made KLOE-2 the first high-energy experiment to use GEM technology with a cylindrical geometry – a novel idea developed at LNF.

Like that of its predecessor KLOE, the KLOE-2 data sample is rich in physics. The analysis of KLOE data provided, and continues to provide, a variety of significant results on: neutral and charged kaon properties; tests of discrete symmetries; tests of the unitarity of the quark-mixing matrix; light-scalar-meson spectroscopy; η-meson decays; hadronic cross sections and the anomalous magnetic moment (g−2) of the muon; and searches for dark photons.

Analysis of the KLOE-2 data is ongoing, in particular to extend the KLOE physics programme in precision tests of fundamental discrete symmetries and of the quantum coherence of entangled neutral-kaon pairs. The roughly 60-strong collaboration will also study rare KS and η-meson decays and strong interactions in low-energy processes, in addition to γγ physics and the search for possible manifestations of dark matter.

Overall, the KLOE programme has involved hundreds of Italian and foreign physicists in a challenging human and scientific enterprise. But activities at the DAΦNE accelerator complex do not stop here. They are now continuing with the PADME and Siddharta-2 experiments, designed, respectively, to search for dark photons and to study exotic atoms and strong interactions at low energies. The Frascati laboratory is also planning to revamp the DAΦNE complex into a world-class test facility for R&D in accelerator physics, and is applying to host the future EuPRAXIA infrastructure for a European plasma-based free-electron laser.

“The KLOE experiment has been a scientific milestone for the laboratory and for particle physics,” says LNF director, Pierluigi Campana. “DAΦNE will continue to produce physics for PADME and Siddharta-2, and we are thinking towards its future after 2020.”

OPERA concludes on tau appearance

The OPERA experiment, located at the Gran Sasso Laboratory of the Italian National Institute for Nuclear Physics (INFN), was designed to prove conclusively that muon-neutrinos can oscillate into tau-neutrinos by studying a beam of muon-neutrinos sent from CERN, 730 km away.

In a paper published on 22 May describing the experiment’s final results on neutrino oscillations, the OPERA collaboration reported the observation of a total of 10 candidate events for muon- to tau-neutrino conversion. This result demonstrates unambiguously that muon-neutrinos morph into tau-neutrinos on their way from CERN to Gran Sasso.

The OPERA collaboration observed the first tau-neutrino event (evidence of muon-neutrino oscillation) in 2010, followed by four additional events reported between 2012 and 2015. A new analysis strategy applied to the full data sample collected between 2008 and 2012 led to the new total of 10 candidate events, with an extremely high level of significance. “We also report the first direct observation of the tau-neutrino lepton number, the parameter that discriminates neutrinos from antineutrinos,” says Giovanni de Lellis, OPERA spokesperson. “It is extremely gratifying to see today that our legacy results largely exceed the level of confidence we had envisaged in the experiment proposal.”

Beyond its contribution to neutrino physics, OPERA pioneered the use of large-scale emulsion films with fully automated and high-speed readout technologies with submicrometre accuracy. These technologies are now used in a wide range of other scientific areas, from dark-matter searches to investigations of volcanoes, and from the optimisation of hadron therapy for cancer treatment to the exploration of secret chambers in the Great Pyramid. The OPERA collaboration has also made its data public through the CERN open data portal, allowing researchers outside the collaboration to conduct novel research and offering tools such as a visualiser to help adapt the datasets for educational use.

US and India team up on neutrino physics

On 16 April, US energy secretary Rick Perry and Indian Atomic Energy Secretary Sekhar Basu signed an agreement in New Delhi to expand the two countries’ collaboration in neutrino science. It opens the way for jointly advancing the Long-Baseline Neutrino Facility (LBNF) and the international Deep Underground Neutrino Experiment (DUNE) in the US and the India-based Neutrino Observatory (INO).

More than 1000 scientists from over 170 institutions in 31 countries work on LBNF/DUNE, construction for which got under way in July 2017. The project will direct the world’s most intense beams of neutrinos from Fermilab accelerators (driven by the new PIP-II machine) to detectors 1300 km away. INO scientists, meanwhile, will observe neutrinos that are produced in Earth’s atmosphere. Scientists from more than 20 institutions are working on INO, which is currently going through approval procedures.

The India–US agreement builds on one signed in 2013 authorising the joint development and construction of particle-accelerator components. Scientists from four institutions in India – BARC in Mumbai, IUAC in New Delhi, RRCAT in Indore and VECC in Kolkata – are contributing to the design and construction of magnets and superconducting particle-accelerator components for PIP-II at Fermilab and the next generation of particle accelerators in India.

Under the new agreement, US and Indian institutions will expand this collaboration to include neutrino research projects. DUNE, located about 1.5 km underground, will use almost 70,000 tonnes of liquid argon to detect neutrinos, and an additional detector will measure the neutrino beam at Fermilab as it leaves the accelerator complex. Prototype neutrino detectors are already under construction at CERN, which is also a partner in LBNF/DUNE. INO will use a different technology: an iron calorimeter. Its detector will feature what could be the world’s biggest magnet, allowing INO to become the first experiment able to distinguish between signals from atmospheric neutrinos and antineutrinos, which are produced when cosmic rays strike the atmosphere.

More than a dozen Indian institutions are involved in the collaboration on neutrino research. According to former INO spokesperson Naba Mondal of the Saha Institute of Nuclear Physics, “this agreement is a positive step towards making INO a global centre for fundamental research. Students working at INO will get opportunities to interact with international experts.”
