Quantum Fields: From the Hubble to the Planck Scale

By Michael Kachelriess
Oxford University Press

This book treats two fields of physics that are usually taught separately – quantum field theory (QFT) on one side and cosmology and gravitation on the other – in a more unified manner. Kachelriess takes this unusual approach because he is convinced that, beyond studying a subject in depth, the real difficulty often lies in fitting the pieces into a general picture. He therefore makes an effort to introduce QFT together with its most important applications to cosmology and astroparticle physics in a coherent framework.

The path-integral approach is employed from the start and the use of tools such as Green’s functions in quantum mechanics and in scalar field theory is illustrated. Massless spin-1 and spin-2 fields are introduced on an equal footing, and gravity is presented as a gauge theory in analogy with the Yang–Mills case. The book also deals with various concepts relevant to modern research, such as helicity methods and effective theories, as well as applications to advanced research topics.

This volume can serve as a textbook for courses in QFT, astroparticle physics and cosmology, and students interested in working at the interface between these fields will certainly appreciate the uncommon approach. It was also the intention of the author to make the book suitable for self-study, so all explanations and derivations are given in detail. Nevertheless, a solid knowledge of calculus, classical and quantum mechanics, electrodynamics and special relativity is required.

What goes up… Gravity and Scientific Method

By Peter Kosso
Cambridge University Press

Peter Kosso states that his book is “about the science of gravity and the scientific method”; I would say that it is about how scientific knowledge develops over time, using the historical evolution of our understanding of gravity as a guiding thread. The author has been a professor of philosophy and physics, with expert knowledge of how the scientific method works, and this book was born out of his classes. The topic is presented in a clear way, with certain subjects explored more than once as if to ensure that the student gets the point. The text was probably repeatedly revised to remove any wrinkles in its surface and provide smooth reading, setting out a few basic concepts along the way. The downside of this “textbook style” is that it is unexpectedly dry for a book aimed at a broad audience.

As the author explains, a scientific observation must refer to formal terms with universally agreed meaning, ideally quantifiable in a precise and systematic way, to facilitate the testing of hypotheses. Thinking in the context of a certain theory will specify the important questions and guide the collection of data, while irrelevant factors are to be ignored (Newton’s famous apple could just as well have been an orange, for example). But theoretical guidance comes with the risk that the answers might too easily conform to expectations and, indeed, the nontrivial give-and-take between theory and observation is a critical part of scientific practice. In particular, the author insists that it is naïve to think that a theory is abandoned or significantly revised as soon as an experimental observation disagrees with the corresponding prediction.

Considering that the scientific method is the central topic of this book, it is surprising to notice that no reference is made to Karl Popper and many other relevant thinkers; this absence is even more remarkable since, on the contrary, Thomas Kuhn is mentioned a few times. One might expect such a book to reflect a basic Enlightenment principle more faithfully: the price of acquiring knowledge is that it will be distorted by the conditions of its acquisition, so that keeping a critical mind is a mandatory part of the learning process. For instance, when the reader is told that the advancement of science benefits from the authority of established science (the structural adhesive of Kuhn’s paradigm), it would have been appropriate to also mention the “genetic fallacy” committed when we infer the validity and credibility of an idea from our knowledge of its source. The author could then have pointed the interested reader to suitable literature, one option (among many) being Kuhn vs. Popper: The Struggle for the Soul of Science by Steve Fuller.

What goes up… is certainly an excellent guide to the science of gravity and its historical evolution, from the standpoint of a 21st-century expert. It is interesting, for instance, to compare the “theories of principle” of Aristotle and Einstein with the “constructive theory” of Newton. While Newton started from a wealth of observations and looked for a universal description, unifying the falling apple with the orbiting Moon, Einstein gave more importance to the beauty of the concepts at the heart of relativity than to its empirical success. I enjoyed reading about the discovery of Neptune from the comparison between the precise observations of the orbit of Uranus and the Newtonian prediction, and about the corresponding (unsuccessful) search for the planet Vulcan, supposedly responsible for Mercury’s anomalous orbit until general relativity provided the correct explanation. And it is fascinating to read about the “direct observation” of dark matter in the context of the searches for Neptune and Vulcan. It is important (but surely not easy) to ensure “that a theory is accurate in the conditions for which it is being used to interpret the evidence”, and that it is “both well-tested and independent of any hypothesis for which the observations are used as evidence”.

The text is well written and accessible. My teenage children learned about non-Euclidean geometry from figures in the book and were intrigued by the thought that gravity is not a force field but rather a metric field, which determines the straightest possible lines (geodesics) between two points in space–time. I think, however, that progress in humankind’s understanding of gravity and related topics could be narrated in a more captivating way. People who prefer more vivid and passionate accounts of the lives and achievements of Copernicus, Brahe, Kepler, Galileo, Newton and many others would more likely enjoy The Sleepwalkers by Arthur Koestler or From the Closed World to the Infinite Universe by Alexandre Koyré. I also strongly recommend chapter one of Only the Longest Threads by Tasneem Zehra Husain, a delightful account of Newton’s breakthrough from the perspective of someone living in the early 18th century.

Welcome to the Universe

by Neil deGrasse Tyson, Michael A Strauss and J Richard Gott
Princeton University Press

It is commonly believed that popular-science books should abstain as much as possible from using equations, apart from the most iconic ones, such as E = mc². The three authors of Welcome to the Universe boldly defy this stereotype in a book that is intended to guide readers with no previous scientific education from the very basics (the first chapters explain scientific notation, how to round numbers and some trigonometry) to cutting-edge research in astrophysics and cosmology.

This book reflects the content of a course that the authors gave for a decade to non-science majors at Princeton University. They are a small dream team of teachers and authors: Tyson is a star of astrophysics outreach, Strauss a renowned observational astronomer and Gott a theoretical cosmologist with other successful popular-science books to his name. The authors split the content of the book into three equal parts (stars and planets, galaxies, relativity and cosmology), making no attempt at stylistic uniformity. Apparently this was the intention, as they keep their distinct voices and refer frequently to their own research experiences to engage the reader. Despite this, the logical flow remains coherent, with a smooth progression in complexity.

Welcome to the Universe promises and delivers a lot. Non-scientist readers will get a rare opportunity to be taken from a basic understanding of the subject to highly advanced content, not only giving them the “wow factor” (although the authors do appeal to this a lot) but also approaching the level of depth of a master’s course in physics. A representative example is the lengthy derivation of E = mc², the popular formula that everyone is familiar with but few know how to explain. And while that particular example is probably demanding for the layperson, most chapters are very pleasant to read, with a good balance of narration and analysis. The authors also make a point of explaining why recognised geniuses such as Einstein and Hawking got their fame in the first place. Scientifically-educated readers will find many insights in this volume too.

While I generally praise this book, it does have a few weak points. Some of the explanations are non-rigorous and confusing at the same time (an example of this is the sentence: “the formula has a constant h that quantises energy”). In addition, an entire chapter boasts of the role of one of the authors in the debate on whether Pluto has the status of a planet or not, which I found a bit out of place. But these issues are more irritating than harmful, and overall this book achieves an excellent balance between clarity and accuracy. The authors introduce several original analogies and provide an excellent non-technical explanation of the counterintuitive behaviour of the outer parts of a dying star, which expand while the inner parts contract.

I also appreciated the general emphasis on how measurements are done in practice, including an interesting digression on how Cavendish measured Newton’s constant more than two centuries ago. However, there are places where one feels the absence of such an explanation: for example, the practical limitations of measuring the temperatures of distant bodies are glossed over with a somewhat patronising “all kinds of technical reasons”.

This text comes with a problem book that is a real treasure trove. The exercises proposed are very diverse, reflecting the variety of audiences that the authors clearly target with their book. Some are meant to practise basic competences with units, orders of magnitude and rounding. Others demand that readers think outside the box (e.g. playing with geodesics in flatland to construct an object that is larger inside than outside, then estimating its mass using only trigonometry). For some of the quantitative exercises, the solution is provided twice: once in a lengthy way and once in a clever way. People more versed in literature than mathematics will find an exercise that asks them to write a scientifically accurate, short science-fiction story (guidelines for grading are offered to the teachers) and one that simply asks, “If you could travel in time, which epoch would you visit and why?”

The book ends with a long and inspiring digression on the role of humans in the universe, and Gott’s suggestion of using the Copernican principle to predict the longevity of civilisations – and of pretty much everything – is definitely food for thought.
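
For readers curious about the arithmetic behind Gott’s argument, a standard rendering of the “delta t” version (my summary, not a quotation from the book) goes as follows: if the moment of observation is assumed to fall at a random point within a phenomenon’s total lifetime, then with 95% confidence

\[
\frac{t_{\text{past}}}{39} \;<\; t_{\text{future}} \;<\; 39\,t_{\text{past}},
\]

since the random fraction r = t_past/t_total lies between 0.025 and 0.975 with 95% probability and t_future = t_past(1−r)/r. Applied to, say, a 50-year-old institution, this predicts a remaining lifetime of between roughly 1.3 and 1950 years.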

Loops and Legs in Quantum Field Theory

The meeting poster. Credit: H Klaes

The international conference Loops and Legs in Quantum Field Theory 2018 took place from 29 April to 4 May near Rheinfels Castle in St Goar on the Rhine, Germany. The conference brought together more than 100 researchers from 18 countries to discuss the latest results in precision calculations for particle physics at colliders and the associated mathematical, computer-algebraic and numerical calculation technologies. It was the 14th conference in the series, and 87 talks were delivered.

Organised biennially by the theory group of DESY at Zeuthen, Loops and Legs is usually held in remote parts of the German countryside to provide a quiet atmosphere and room for intense scientific discussions. The first conference took place in 1992, just as the HERA collider started up, and the next event, close to the start of LEP2 in 1994, concentrated on precision physics at e⁺e⁻ colliders. Since 1996, general precision calculations for physics at high-energy colliders have formed its focus.

This year, the topics covered new results on: the physics of jets; hadronic Higgs-boson and top-quark production; multi-gluon amplitudes; multi-leg two-loop QCD corrections; electroweak corrections at hadron colliders; the Z resonance in e⁺e⁻ scattering; soft resummation; e⁺e⁻ → tt̅; precision determinations of parton distribution functions; the heavy-quark masses and the fundamental coupling constants; g−2; and NNLO and N³LO QCD corrections for various hard processes.

On the technologies side, analytic multi-summation methods, Mellin–Barnes techniques, the solution of large systems of ordinary differential equations and large-scale computer-algebra methods were discussed, as well as unitarity methods, cut methods for evaluating Feynman integrals, and new developments in the field of elliptic integral solutions. These techniques finally allow analytic and numerical calculations of the scattering cross-sections for the key processes measured at the LHC.

All of these results are indispensable to make the LHC, in its high-luminosity phase, a real success and to help hunt down signs of physics beyond the Standard Model (CERN Courier April 2017 p18). The calculations need to match the experimental precision in measurements of the couplings and masses, in particular for the top quark and the Higgs sector, and they require an ever more precise understanding of the strong interactions.

Since the first event, when the most advanced results were single-scale two-loop corrections in QCD, the field has taken a breathtaking leap to inclusive five-loop results – like the β functions of the Standard Model, which control the running of the coupling constants to high precision – to mention only one example. In general, the various subfields of this discipline witness a significant advance every two years or so. Many promising young physicists and mathematicians participate and present results. The field became interdisciplinary very rapidly because of the technologies needed, and now attracts many scientists from computing and mathematics.
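
As a reminder of what “running” means here, the textbook one-loop renormalisation-group evolution of the strong coupling (shown purely for illustration; the results mentioned above extend such series to five loops) reads

\[
\frac{\mathrm{d}\alpha_s}{\mathrm{d}\ln Q^2} = -b_0\,\alpha_s^2 + \mathcal{O}(\alpha_s^3),
\qquad b_0 = \frac{33 - 2n_f}{12\pi},
\qquad
\alpha_s(Q^2) = \frac{\alpha_s(\mu^2)}{1 + b_0\,\alpha_s(\mu^2)\,\ln(Q^2/\mu^2)},
\]

where n_f is the number of active quark flavours; the five-loop β functions add the corrections through order α_s⁶ to the first equation.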

The theoretical problems, in turn, also trigger new research, for example in algebraic geometry, number theory and combinatorics. This will be even more the case with future projects, like an ILC, and planned machines such as the FCC, which require even higher precision. The next conference will be held at the beginning of May 2020.

Heavy-flavour highlights from Beauty 2018

Beauty 2018

The international conference devoted to B physics at frontier machines, Beauty 2018, was held in La Biodola on Isola d’Elba, Italy, from 6 to 11 May, organised by INFN Pisa. The aims of the conference series are to review the latest results in heavy-flavour physics and discuss future directions. This year’s edition, the 17th in the series, attracted around 80 scientists from all over the world. The programme comprised 58 invited talks, of which 13 were theory-based.

Heavy-flavour decays, in particular those of hadrons that contain b quarks, offer powerful probes of physics beyond the Standard Model (SM). In recent years, several puzzling anomalies have emerged from LHCb and b-factory data (CERN Courier April 2018 p23), and discussion of these set the scene for a very inspiring atmosphere at the conference. In particular, the ratios of branching fractions RD(*) = BR(B → D(*)τν)/BR(B → D(*)lν), where l = μ, e, provide a test of lepton universality and, intriguingly, their combined experimental values now lie about 4σ away from the SM expectations. Furthermore, the ratio RK = BR(B⁺ → K⁺μ⁺μ⁻)/BR(B⁺ → K⁺e⁺e⁻) and the corresponding measurement, RK*, yield results that are each around 2.5σ away from unity. Other potential deviations from the SM are seen in the observable P5′ of the angular distribution of decay products in the rare decay B⁰ → K*⁰μ⁺μ⁻, as well as in measurements of related decay channels. Hence, the release of new LHCb results from LHC Run 2 is eagerly awaited later this year.

The rare decay Bs⁰ → μ⁺μ⁻, already observed at the 6σ level two years ago by a combined analysis of CMS and LHCb data, has now been observed by LHCb alone at a level greater than 5σ, and is consistent with the SM. The effective lifetime of the decay offers additional tests of new physics, and a first measurement has now been made: 2.04 ± 0.44 (stat) ± 0.05 (syst) ps – also consistent with the SM but with large uncertainties.

Theoretical overview talks put recent results such as those above in context. Regarding the flavour anomalies, models involving leptoquarks and new Z´ bosons are currently receiving much attention. Impressive progress has also been made in lattice-QCD calculations and in our understanding of hadronic form factors, which are crucial as inputs for theoretical predictions. Continued interplay between theory and experiment will be essential to understand the emerging data from the LHC and also from the Belle-II experiment in Japan, which has recently started taking data (CERN Courier June 2018 p7).

Concerning CP violation in the b sector, LHCb reported a new world-best determination of the angle γ of the unitarity triangle from a combination of measurements, which differs from the prediction based on other unitarity-triangle constraints by around 2σ. Regarding CP violation in Bs⁰ → J/ψφ decays, which is predicted to be very small in the SM, the experimental knowledge from a combination of LHC experiments has now reached φs = −21 ± 31 mrad, which is compatible with the SM.

Presentations were also devoted to hadron spectroscopy and exotic states, where there has been huge interest since the recent discovery of pentaquark-like states by LHCb (CERN Courier April 2017 p31). The udsb tetraquark candidate reported by the D0 experiment at Fermilab just over two years ago has not been confirmed in LHC data and, significantly, neither by its sister experiment CDF. A plethora of other new results were reported at Beauty 2018, including from LHCb: a doubly-charmed baryon, Ξcc⁺⁺, and a Ξb** state, as well as a spectroscopy “gold mine” of X, Y and Z states from BES-III in China. Kaon physics was also discussed. With the completion of its 2016 data analysis, the NA62 experiment at CERN has reached SM sensitivity for the ultra-rare K⁺ → π⁺νν̄ decay channel. A single candidate event was found with 0.15 background events expected, and an upper limit on the branching ratio of 14 × 10⁻¹⁰ at 95% confidence has been set.

The future experimental programme of flavour physics is full of promise. One of the highlights of the conference was a report on first data from Belle-II; further exciting options will emerge beyond 2021 when LHC Run 3 commences, with LHCb running at an increased luminosity of 2 × 10³³ cm⁻² s⁻¹ and an improved trigger, and with high-luminosity upgrades to ATLAS and CMS to follow. The scientific programme of Beauty 2018 was complemented by a variety of social events, which, coupled with the stimulating presentations, made the conference a huge success at this exciting time for B physics.

LHCP reports from Bologna

LHCP participants

Some 450 researchers from around the world headed to historic Bologna, Italy, on 4–9 June to attend the sixth Large Hadron Collider Physics (LHCP) conference. The many talks demonstrated the breadth of the LHC physics programme, as the collider’s experiments dig deep into the high-energy 13 TeV dataset and look ahead to opportunities following the high-luminosity LHC upgrade.

Both ATLAS and CMS have now detected the Higgs boson’s direct Yukawa coupling to the top quark, following earlier analyses, and the results are in agreement with the prediction from the Standard Model (SM). Further results on Higgs interactions included the determination by ATLAS of the boson’s coupling to the tau lepton with high significance, which agrees well with the previous observation by CMS. Measuring the coupling between the Higgs and the other SM particles is a key element of the LHC physics programme, with bottom quarks now in the collaborations’ sights.

The Bologna event also saw news on the spectroscopy front. CMS reported that it has resolved, for the first time, the J = 1 and J = 2 states of the χb(3P) particle, using 13 TeV data corresponding to an integrated luminosity of 80 fb⁻¹. The measured mass difference between the two states, 10.60 ± 0.64 (stat) ± 0.17 (syst) MeV, is consistent with most theoretical calculations (see CMS resolves inner structure of bottomonium). Meanwhile, the LHCb collaboration reported the measurement of the lifetime of the doubly charmed baryon Ξcc⁺⁺ discovered by the collaboration last year, obtaining a value of 0.256 +0.024 −0.022 (stat) ± 0.014 (syst) ps, which is within the predicted SM range (see Charmed baryons strike back).

The Cabibbo–Kobayashi–Maskawa (CKM) matrix, which quantifies the couplings between quarks of different flavours and possible charge-parity (CP) violation in the quark system, was another focus of the conference. LHCb presented a new measurement of the gamma angle, which is the least well measured of the three angles defining the CKM unitarity triangle and is associated with the up–bottom quark matrix element. The collaboration obtained a value of 74° with an uncertainty of about 5°, making it the most precise measurement of gamma from a single experiment.
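
For readers unfamiliar with the notation, the angle γ is conventionally defined in terms of CKM matrix elements as (a standard definition, not specific to the LHCb analysis)

\[
\gamma \;\equiv\; \arg\!\left( -\,\frac{V_{ud}\,V_{ub}^{*}}{V_{cd}\,V_{cb}^{*}} \right),
\]

which is why it is said to be associated with the up–bottom element V_ub.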

Nucleus–nucleus collisions also shone, with the ALICE collaboration showcasing measurements of the charged-particle multiplicity density, the nuclear modification factor and anisotropic flow in Xe–Xe collisions at a centre-of-mass energy of 5.44 TeV per nucleon pair (see Anisotropic flow in Xe–Xe collisions). These and other heavy-ion measurements are providing deeper insight into extreme states of matter such as the quark–gluon plasma.

Searches for physics beyond the SM by the LHC experiments so far continue to come up empty-handed, slicing into the allowed parameter space of many theoretical models such as those involving dark matter. However, as was also emphasised at this year’s LHCP, there are many possible models and the range of parameters they span is large, requiring researchers to deploy “full ingenuity” in searching for new physics.

These are just a few of the many highlights of this year’s LHCP, which also included updates on the experiments’ planned upgrades for the high-luminosity LHC and perspectives on physics opportunities at future colliders.

Neutrino physics shines bright in Heidelberg

Heidelberg

The 28th International Conference on Neutrino Physics and Astrophysics took place in Heidelberg, Germany, on 4–9 June. It was organised by the Max Planck Institute for Nuclear Physics and the Karlsruhe Institute of Technology. With 814 registrations, 400 posters and the presence of the Nobel laureates Art McDonald and Takaaki Kajita, it was the best attended of the series to date – and it showcased many new results.

Several experiments presented results for the first time at Neutrino 2018. T2K in Japan and NOvA in the US updated their results, strengthening the indications of leptonic CP violation and of a normal neutrino mass ordering, and improving the precision of the atmospheric oscillation parameters. Taken together with the Super-Kamiokande results on atmospheric neutrino oscillations, these experiments provide a 2σ indication of leptonic CP violation and a 3σ indication of normal mass ordering. In particular, NOvA presented the first 4σ evidence of ν̅μ → ν̅e transitions compatible with three-neutrino oscillations.

The next-generation long-baseline experiments DUNE and Hyper-Kamiokande, in the US and Japan, respectively, were discussed in depth. These experiments have the capability to measure CP violation and the mass ordering in the neutrino sector with a sensitivity of more than 5σ, and they hold great potential for other searches, such as proton decay, supernova, solar and atmospheric neutrinos, and indirect dark-matter searches.

All the reactor experiments – Daya Bay, Double Chooz and RENO – have improved their results, providing precision measurements of the oscillation parameter θ13 and of the reactor antineutrino spectrum. The Daya Bay experiment, having accumulated 1958 days of data with more than four million antineutrino events on tape, is able to measure the reactor mixing angle and the effective mass splitting with precisions of 3.4% and 2.8%, respectively. The next-generation reactor experiment JUNO, which aims to start taking data in 2021, was also presented.

The third day of the conference focused on neutrinoless double-beta decay (NDBD) experiments and neutrino telescopes. EXO, KamLAND-Zen, GERDA, the Majorana Demonstrator, CUORE and SNO+ presented their latest NDBD search results, which probe whether neutrinos are Majorana particles, and their plans for the short-term future. The new GERDA results push the NDBD lifetime limit from germanium detectors to 0.9 × 10²⁶ years (90% CL), obtained in a nearly background-free regime that points the way towards next-generation NDBD experiments. CUORE also updated its tellurium-based limit, to 0.15 × 10²⁶ years.
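
To see how such half-life limits connect to neutrino properties, the standard relation assumed in interpreting these searches (quoted here for orientation, with its usual caveats) is

\[
\left[ T^{0\nu}_{1/2} \right]^{-1} \;=\; G^{0\nu}\,\bigl|M^{0\nu}\bigr|^{2}\,\frac{|m_{\beta\beta}|^{2}}{m_e^{2}},
\qquad
m_{\beta\beta} \;=\; \sum_i U_{ei}^{2}\,m_i ,
\]

where G^{0ν} is a phase-space factor, M^{0ν} a nuclear matrix element and m_ββ the effective Majorana mass; translating a lifetime limit into a mass limit therefore carries sizeable nuclear-physics uncertainties.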

Neutrino telescopes are of great interest for multi-messenger studies of astrophysical objects at high energies. Both IceCube in Antarctica and ANTARES in the Mediterranean were discussed, together with their planned successors, IceCube-Gen2 and KM3NeT. IceCube has already collected 7.5 years of data, yielding 103 selected events (60 of which have an energy above 60 TeV) and a best-fit power-law spectrum of index −2.87. IceCube does not provide any evidence for neutrino point sources, and the measured νe:νμ:ντ neutrino-flavour composition is 0.35:0.45:0.2. A recent development in neutrino physics has been the first observation of coherent elastic neutrino–nucleus scattering, as discussed by the COHERENT experiment (CERN Courier October 2017 p8), which opens the possibility of searches for new physics.

A very welcome development at Neutrino 2018 was the presentation of preliminary results from the KATRIN collaboration about the tritium beta-decay end-point spectrum measurement, which allows a direct measurement of neutrino masses. The experiment has just been inaugurated at KIT in Germany and aims to start data taking in early 2019 with a sensitivity of about 0.24 eV after five years. The strategic importance of a laboratory measurement of neutrino masses cannot be overestimated.

A particularly lively session at this year’s event was the one devoted to sterile-neutrino searches. Five short-baseline nuclear-reactor experiments (DANSS, NEOS, STEREO, PROSPECT and SoLid) presented their latest results and plans regarding the so-called reactor antineutrino anomaly. These experiments aim to detect the oscillation effects of sterile neutrinos at reactors, free from any assumption about antineutrino fluxes. No evidence for sterile oscillations was reported, with the exception of the DANSS experiment, which reported a 2.8σ effect that is not in good agreement with previous measurements of this anomaly. These experiments are only at the beginning of data taking and more refined results are expected in the near future, even though it is unlikely that any of them will be able to provide a definitive sterile-neutrino measurement with a sensitivity much greater than 3σ.
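
In the two-flavour approximation usually quoted by these collaborations, the electron-antineutrino survival probability at short baselines takes the standard form

\[
P(\bar{\nu}_e \to \bar{\nu}_e) \;\simeq\; 1 - \sin^2 2\theta_{14}\,
\sin^2\!\left( \frac{1.27\,\Delta m^2_{41}\,[\mathrm{eV}^2]\;L\,[\mathrm{m}]}{E\,[\mathrm{MeV}]} \right),
\]

so that comparing spectra measured at different distances L (or in different detector segments) tests for oscillations without any reference to the predicted reactor flux.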

Further discussion was prompted by results from MiniBooNE at Fermilab, which reported a 4.8σ excess of electron-like events when combining its neutrino and antineutrino runs. The result is compatible with the 3.8σ excess reported about 20 years ago by the LSND experiment, which took data in a neutrino beam created by pion decays at rest at Los Alamos. Concerns arise from the fact that even sterile-neutrino oscillations do not fit the data very well, while backgrounds potentially do (the MicroBooNE experiment is taking data at Fermilab with the specific purpose of precisely measuring the MiniBooNE backgrounds). Furthermore, as discussed by Michele Maltoni in his talk about the global picture of sterile neutrinos, no sterile-neutrino model can simultaneously accommodate the presumed evidence of νμ → νe oscillations from MiniBooNE and the null results reported by several different experiments (among them MiniBooNE itself) on νμ disappearance at the same Δm².

The lively sessions at Neutrino 2018, summarised in two beautiful closing talks by Francesco Vissani (theory) and Takaaki Kajita (experiment), underlined the vitality of the field at this time (see A golden age for neutrinos).

A golden age for neutrinos

Prototype detector module

On 3 July 1998, researchers working on the Super-Kamiokande experiment in Japan announced the first evidence for atmospheric-neutrino flavour oscillations. Since neutrinos can only oscillate among different flavours if at least some of them have a non-zero mass, the result proved that neutrinos are massive, albeit with very small mass values. This is not expected in the Standard Model.

Neutrino physics was already an active field, but the 1998 observation sent it into overdrive. The rich scientific programme and record attendance of the Neutrino 2018 conference in Heidelberg last month (see Neutrino physics shines bright in Heidelberg) is testament to our continued fascination with neutrinos. Many open questions remain: what generates the tiny masses of the known neutrinos, and what is their mass ordering? Are there more than the three known neutrino flavours, such as additional sterile or right-handed versions? Is there CP violation in the neutrino sector and, if so, how large is it? In addition, there are solar neutrinos, atmospheric neutrinos, cosmic/supernova neutrinos, relic neutrinos, geo-neutrinos, reactor neutrinos and accelerator-produced neutrinos – allowing for a plethora of experimental and theoretical activity.

Many of these questions are expected to be answered in the next decade thanks to vigorous experimental efforts. Concerning neutrino-flavour oscillations, new results are anticipated in the short term from the accelerator-based T2K and NOvA experiments in Japan and the US, respectively. These experiments probe the CP-violating phase in the neutrino-flavour mixing matrix and the ordering of the neutrino mass states; evidence for large CP violation could be established, in particular thanks to the planned ND280 near-detector upgrade of T2K.

Albert De Roeck

The next generation of accelerator-based experiments is already under way. The Deep Underground Neutrino Experiment (DUNE) in South Dakota, US, which will use a neutrino beam sent from Fermilab, is taking shape and two large prototypes of the DUNE far detector are soon to be tested at CERN. In Japan, plans are shaping up for Hyper-Kamiokande, a large detector with a fiducial volume around 10 times larger than that of Super-Kamiokande, and this effort is complemented with other sensitivity improvements and a possible second detector in Korea for analysing a neutrino beam sent from J-PARC in Japan. These experiments, which are planned to come online in 2026, will allow precision neutrino-oscillation measurements and provide decisive statements on the neutrino mass hierarchy and CP-violating phase.

Important insights are also expected from reactor sources. In China, the JUNO experiment should start in 2021 and could settle the mass-hierarchy question and determine complementary oscillation parameters. Meanwhile, very-short-baseline reactor experiments – such as PROSPECT, STEREO, SoLid, NEOS and DANSS – are soon to join the hunt for sterile neutrinos. Together with detectors at the short-baseline neutrino beam at Fermilab (SBND, MicroBooNE and ICARUS), the next few years should see conclusive results on the existence of sterile neutrinos. In particular, the recently reported update on the intriguing excess seen by the MiniBooNE experiment will be scrutinised.

Double-beta-decay experiments, which test whether neutrinos have a Majorana mass term, continue to achieve ever-increasing sensitivities; the proposed SHiP experiment would search for right-handed neutrinos; and KATRIN in Germany has just started its campaign to measure the mass of the electron antineutrino with sub-eV precision. The interplay with astronomy and cosmology, using detectors such as IceCube and KM3NeT, which survey atmospheric neutrinos, further underlines the vibrancy and breadth of modern neutrino physics. Also, the European Spallation Source, under construction in Sweden, is investigating the possibility of a precise neutrino-measurement programme.

Neutrino experiments are spread around the globe, but Europe is a strong player. A discussion forum on neutrino physics for the update of the European strategy for particle physics will be hosted by CERN on 22–24 October. Clearly, neutrino science promises many exciting results in the near future.

CERN marks beginning of a luminous future

Time capsule

A ceremony at CERN on 15 June celebrated the start of civil-engineering works for the high-luminosity upgrade of the Large Hadron Collider (HL-LHC). The upgrade will allow about 10 times more data to be accumulated by the LHC experiments between 2026 and 2036, corresponding to a total integrated luminosity of 3000 fb⁻¹, thereby enhancing the chances of discovery and bringing increased precision to measurements.

The HL-LHC project began in earnest in November 2011 and is today an international endeavour involving 29 institutes in 13 countries. Two years later, the project was identified as one of the main priorities of the European Strategy for Particle Physics. The upgrade, targeting a luminosity of at least 5 × 10³⁴ cm⁻² s⁻¹, was formally approved by the CERN Council in June 2016.
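
As a rough order-of-magnitude check (assuming an effective running time of order 10⁷ s per year, a typical figure rather than an official HL-LHC number), the target luminosity corresponds to

\[
\mathcal{L}_{\mathrm{int}} \;\sim\; 5\times10^{34}\,\mathrm{cm^{-2}\,s^{-1}} \times 10^{7}\,\mathrm{s}
\;=\; 5\times10^{41}\,\mathrm{cm^{-2}} \;=\; 500\ \mathrm{fb^{-1}}\ \text{per year},
\]

since 1 fb⁻¹ = 10³⁹ cm⁻²; once operational efficiency and luminosity levelling are folded in, this is broadly consistent with accumulating 3000 fb⁻¹ over roughly a decade of HL-LHC running.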

Although it concerns only about 5% of the current machine, the HL-LHC is a major upgrade requiring a number of innovative technologies, many of which pave the way for future higher-energy colliders. At its heart are powerful new dipole and quadrupole magnets that operate at unprecedented fields of 11 and 11.5 T, respectively, and which employ novel niobium-tin superconducting cables. The quadrupoles, which will be installed on both sides of the collision points, will squeeze the proton beams to increase the probability of a collision (CERN Courier March 2017 p23).

Sixteen brand-new radio-frequency “crab cavities” will also be installed around the ATLAS and CMS experiments to maximise the overlap of the proton bunches at the collision points (CERN Courier May 2018 p18). Their function is to tilt the bunches so that they appear to move sideways, and the first ever tests of this technology in a proton beam were successfully carried out at the Super Proton Synchrotron in May.

To prepare the CERN accelerator complex for the immense challenges of the HL-LHC, the LHC Injectors Upgrade project (LIU) was launched in 2010. In addition to enabling the necessary injector chains to deliver the HL-LHC beams, the LIU project is also tasked with replacing ageing equipment and improving radioprotection measures (CERN Courier October 2017 p32).

Overall, more than 1.2 km of the current LHC will need to be replaced with new components. This requires civil-engineering work at two main sites, in Switzerland and in France, involving the construction of new buildings, shafts, caverns and underground galleries (CERN Courier March 2017 p28). The LHC will continue to operate until early December.

“The High-Luminosity LHC will extend the LHC’s reach beyond its initial mission, bringing new opportunities for discovery, measuring the properties of particles such as the Higgs boson with greater precision, and exploring the fundamental constituents of the universe ever more profoundly,” said CERN Director-General Fabiola Gianotti during the ceremony.

On 25 June, the Canadian government announced a contribution of C$10 million to the HL-LHC, with an additional C$2 million in in-kind contributions. Working with Canadian researchers and industry, the TRIUMF laboratory will lead the production of five cryogenic modules for the HL-LHC crab cavities.

Muons accelerated in Japan

Installation

Muons have been accelerated by a radio-frequency accelerator for the first time, in an experiment performed at the Japan Proton Accelerator Research Complex (J-PARC) in Tokai, Japan. The work paves the way for a compact muon linac that would enable precision measurements of the muon anomalous magnetic moment and the electric dipole moment.

Around 15 years ago, the E821 storage-ring experiment at Brookhaven National Laboratory (BNL) reported the most precise measurement of the muon anomalous magnetic moment (g−2). With an impressive precision of 0.54 parts per million (ppm), the measured value differs from the Standard Model prediction by more than three standard deviations. Following a major effort over the past few years, the BNL storage ring has been transported to Fermilab, upgraded, and has recently started taking data to improve on the precision of E821. In the BNL/Fermilab setup, a beam of protons strikes a fixed target to create pions, which decay into muons with aligned spins. The muons are then transferred to the 14 m-diameter storage ring, which uses electrostatic focusing to provide vertical confinement, and their magnetic moments are measured as they precess in a magnetic field.

The new J-PARC experiment, E34, proposes to measure the muon g−2 with an eventual precision of 0.1 ppm by storing ultra-cold muons in a magnet of just 0.66 m diameter, aiming to reach the BNL precision in a first phase. The muons are produced by laser-ionising muonium atoms (bound states of a positive muon and an electron); because the muonium atoms are created at rest, the resulting muon beam has very little spread in the transverse direction – thus eliminating the need for electrostatic focusing.

Turning on the RFQ

The ultracold muon beam is stored in a high-precision magnet, where the spin precession of the muons is measured by detecting their decays. This low-emittance technique, which allows a smaller magnet and lower muon energies, enables researchers to circumvent some of the dominant systematic uncertainties of the previous g−2 measurement. To avoid decay losses, the J-PARC approach requires the muons to be accelerated with a conventional radio-frequency accelerator.
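
The systematics argument can be made concrete with the standard spin-precession expression used by storage-ring g−2 experiments (quoted here for illustration, with signs depending on convention):

\[
\vec{\omega}_a \;=\; -\frac{q}{m_\mu}\left[ a_\mu \vec{B}
- \left( a_\mu - \frac{1}{\gamma^2 - 1} \right) \frac{\vec{\beta}\times\vec{E}}{c} \right],
\qquad a_\mu \equiv \frac{g-2}{2}.
\]

BNL and Fermilab suppress the electric-field term by running at the “magic” momentum (γ ≈ 29.3), whereas the J-PARC scheme removes it altogether by dispensing with electric focusing – one of the systematic effects the ultracold-beam approach is designed to avoid.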

In October 2017, a team comprising physicists from Japan, Korea and Russia successfully demonstrated the first acceleration of negative muonium ions, reaching an energy of 90 keV. The experiment was conducted using a radio-frequency quadrupole linac (RFQ) installed at a muon beamline at J-PARC, which is driven by a high-intensity pulsed proton beam. Negative muonium ions were first accelerated electrostatically and then injected into the RFQ, after which they were guided to a detector through a transport beamline. The accelerated negative muonium ions were identified from their time of flight: because a particle’s velocity at a given energy is uniquely determined by its mass, its type can be identified by measuring its velocity (see figure).
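
A simple non-relativistic estimate shows why time of flight separates the species so cleanly (the 1 m flight path below is purely illustrative, not the actual beamline length): since 90 keV is tiny compared with the Mu⁻ rest energy of roughly 106 MeV,

\[
\frac{v}{c} \;\simeq\; \sqrt{\frac{2E_k}{mc^2}} \;\approx\; \sqrt{\frac{2\times 0.09\ \mathrm{MeV}}{106\ \mathrm{MeV}}} \;\approx\; 0.04,
\]

i.e. about 1.2 × 10⁷ m/s, giving a time of flight of order 80 ns over 1 m, whereas electrons of the same kinetic energy travel at more than half the speed of light and arrive far earlier, so accelerated negative muonium ions appear as a well separated peak in the time-of-flight spectrum.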

The researchers are now planning to further accelerate the beam from the RFQ. In addition to precise measurements in particle physics, the J-PARC result offers new muon-accelerator applications including the construction of a transmission muon microscope for use in materials and life-sciences research, says team member Masashi Otani of KEK laboratory. “Part of the construction of the experiment has started with partial funding, which includes the frontend muon beamline and detector. The experiment can start properly three years after full funding is provided.”

Muon acceleration is also key to a potential muon collider and neutrino factory, for which it is proposed that the large transverse emittance of the muon beam can be reduced using ionisation cooling (see Muons cooled for action).
