Birth of a symmetry

Weinberg’s paper “A Model of Leptons”, published in Physical Review Letters (PRL) on 20 November 1967, determined the direction of high-energy particle physics through the final decades of the 20th century. Just two and a half pages long, it is one of the most highly cited papers in the history of theoretical physics. Its contents are the core of the Standard Model of particle physics, now almost half a century old and still passing every experimental test.

Most particle physicists today have grown up with the Standard Model’s orderly account of the fundamental particles and interactions, but things were very different in the 1960s. Quantum electrodynamics (QED) had been well established as the description of the electromagnetic interaction, but there were no mature theories of the strong and weak nuclear forces. By the 1960s, experimental discoveries showed that the weak force shares some features with QED, in particular that it might be mediated by a vector boson analogous to the photon. Theoretical arguments also suggested that QED’s underlying “U(1)” group structure could be generalised to the larger group SU(2), but there was a serious problem with such a scheme: the W boson suspected to mediate the weak force would have to be very massive empirically, whereas the mathematical symmetry of the theory required it to be massless like the photon.

The importance of symmetries in understanding the fundamental forces was already becoming clear at the time, in particular how nature might hide its symmetries. Could “hidden symmetry” lead to a massive W boson while preserving the mathematical consistency of the theory? It was arguably Weinberg’s developments, in 1967, that brought this concept to life.

Strong inspiration

Weinberg’s inspiration was an earlier idea of Nambu in which fermions – such as the proton or neutron – can behave like a left- or right-handed screw as they move. If mass is ignored, these two “chiral” states act independently and the theory leads to the existence of a particle with properties similar to those of the pion – specifically a pseudoscalar, which means that it has no spin and its wavefunction changes sign under mirror symmetry. Nambu’s original investigations, however, had not examined how the three versions of the pion, with positive, negative or zero charge, shared their common “pion-ness” when interacting with one another. This commonality, or symmetry, is mathematically expressed by the group SU(2), which had been known in nuclear physics since the 1930s and in mathematics for much longer.

It was this symmetry that Weinberg used as his point of departure in building a theory of the strong force, where nucleons interact with pions of all charges and the proton and neutron themselves form two “faces” of the underlying SU(2) structure. Empirical observations of the interactions between pions and nucleons showed that the underlying symmetry of SU(2) tended to act on the left- or right-handed chiral possibilities independently. The mathematical structure of the resulting equations to describe this behaviour, as Weinberg discovered, is called SU(2)×SU(2).

However, in nature this symmetry is not perfect because nucleons have mass. Had they been massless, they would have travelled at the speed of light, the left- and right-handed possibilities acting truly independently of one another and the symmetry left intact. That nucleons have a mass, so that the left and right states get mixed up when perceived by observers in different inertial frames, breaks the chiral symmetry. Nambu had investigated this effect as far back as 1959, but without the added richness of the SU(2)×SU(2) mathematical structure that Weinberg brought to the problem. Weinberg had been investigating this more sophisticated theory in around 1965, initially with considerable success. He derived theorems that explained the observed interactions of pions and nucleons at low energies, such as in nuclear physics. He was able to predict how pions behaved when they scattered from one another and, with a few well-defined assumptions, paved the way for a whole theory of hadronic physics at low energies.

Meanwhile, in 1964, Brout and Englert, Higgs, Kibble, Guralnik and Hagen had demonstrated that the vector bosons of a Yang–Mills theory (one that is like QED but where attributes such as electric charge can be exchanged by the vector bosons themselves) put forward a decade earlier could become massive without spoiling the fundamental gauge symmetry. This “mass-generating mechanism” suggested that a complete Yang–Mills theory of the strong interaction might be possible. In addition to the well-known pion, examples of massive vector particles that feel the strong force had already been found, notably the rho-meson. Like the pion, this too occurs in three charged varieties: positive, negative and zero. Superficially these rho-mesons had the hallmarks of being the gauge bosons of the strong interactions, but they also have mass. Was the strong interaction the theatre for applying the mass-generating mechanism?

Despite at first seeming so promising, the idea failed to fit the data. For some phenomena the SU(2)×SU(2) symmetry was empirically broken, yet for others, where spin played no role, it worked perfectly. When these patterns were incorporated into the mathematics, the rho-meson stubbornly remained massless, contrary to reality.

Epiphany on the road

In the middle of September 1967, while driving his red Camaro to work at MIT, Weinberg realised that he had been applying the right ideas to the wrong problem. Instead of the strong interactions, for which the SU(2)×SU(2) idea refused to work, the massless photon and the hypothetical massive W boson of the electromagnetic and weak interactions fitted perfectly with this picture. To call this possibility “hypothetical” hardly does justice to the time: the W boson was not discovered until 1983, and in 1967 was so disregarded as to receive at best a passing mention, if any, in textbooks.

Weinberg needed a concrete model to illustrate his general idea. The numerous strongly interacting hadrons that had been discovered in the 1950s and 1960s were, for him, a quagmire, so he restricted his attention to the electron and neutrino. Here too it is worth recalling the state of knowledge at the time. The constituent quark model with three flavours – up, down and strange – had been formulated in 1964, but was widely disregarded. The experiments at SLAC that would help establish these constituents were a year away from announcing their results, and Bjorken’s ideas of a quark model, articulated at conferences that summer, were not yet widely accepted either. Finally, with only three flavours of quark, Weinberg’s ideas would lead to empirically unwanted “strangeness-changing neutral currents”. All these problems would eventually be solved, but in 1967 Weinberg made a wise choice to focus on leptons and leave quarks well alone.

Following the discovery of parity violation in the 1950s, it was clear that the electron can spin like a left- or right-handed screw, whereas the massless neutrino is only left-handed. The left–right symmetry, which had been a feature of the strong interaction, was gone. Instead of two SU(2) groups, the mathematics now needed only one, the second being replaced by the unitary group U(1). So Weinberg set up the equations of SU(2)×U(1) – the same structure that, unknown to him, had been proposed by Sheldon Glashow in 1961 and by Abdus Salam and John Ward in 1964 in attempts to marry the electromagnetic and weak interactions. His theory, like theirs, required two massive electrically charged bosons – the W+ and W− carriers of the weak force – and two neutral bosons: the massless photon and a massive Z0. If correct, it would show that the electromagnetic and weak forces are unified, taking physics a step closer to the goal of a single theory of all fundamental interactions.

“The history of attempts to unify weak and electromagnetic interactions is very long, and will not be reviewed here.” So began the first footnote in Steven Weinberg’s seminal November 1967 paper, which led to him being awarded the 1979 Nobel Prize in Physics with Salam and Glashow. Weinberg’s footnote mentioned Fermi’s primitive idea for unification in 1934, and also the model that Glashow proposed in 1961.

Clarity of thought

Weinberg started his paper by articulating the challenge of unifying the electromagnetic and weak forces as both an opportunity and a threat. He focused on the leptons – those fermions, such as the electron and neutrino, which do not feel the strong force. “Leptons interact only with photons, and with the [weak] bosons that presumably mediate weak interactions. What could be more natural than to unite these spin-one bosons [the photon and the weak bosons] into a multiplet,” he pondered. That was the opportunity. The threat was that “standing in the way of this synthesis are the obvious differences in the masses of the photon and [weak] boson.”

Weinberg then suggests a solution: perhaps “the symmetries relating the weak and electromagnetic interactions are exact [at a fundamental level] but are [hidden in practice]”. He then draws attention to the ideas of Higgs, Brout, Englert, Guralnik, Hagen and Kibble, and uses these to give masses to the W and Z in his model. In a further important insight, Weinberg shows how this symmetry-breaking mechanism leaves the photon massless.

His opening paragraph ended with the prescient observation that: “The model may be renormalisable.” The argument upon which this remark is based appears at the very end of the paper, although with somewhat less confidence than the promise hinted at in the opening. He begins the final paragraph with a question: “Is this model renormalisable?” The extent of his intuition is revealed in his argument: although the presence of a massive vector boson hitherto had been a scourge, the theory with which he had begun had no such mass and, as such, was “probably renormalisable”. So, he pondered: “The question is whether this renormalisability is lost [by the spontaneous breaking of the symmetry].” And the conclusion: “If this model is renormalisable, what happens when we extend it…to the hadrons?”

By speculating that his model may be renormalisable, Weinberg was hugely prescient, as ’t Hooft and Veltman would prove four years later. And perhaps it was a chance encounter at the Solvay Congress in Belgium two weeks before his paper was submitted that helped convince Weinberg that he was on the right track.

Solvay secrets

By the end of September 1967, Weinberg had his ideas in place as he set off to Belgium to attend the 14th Solvay Congress on Fundamental Problems in Elementary Particle Physics, held in Brussels from 2 to 7 October. He did not speak about his forthcoming paper, but did make some remarks after other talks, in particular following a presentation by Hans-Peter Dürr about a theorem of Jeffrey Goldstone and spontaneous symmetry breaking. During a general discussion session following Dürr’s talk, Weinberg mused: “This raises a question I can’t answer: are such models renormalisable?” He continued with a similar argument to that which later appeared in his paper, ending with: “I hope someone will be able to find out whether or not [this] is a renormalisable theory of weak and electromagnetic interactions.”

There was remarkably little reaction to Weinberg’s remarks, and he himself has recalled “a general lack of interest”. The only recorded statement came from François Englert, who insisted that the theory is renormalisable; then, remarkably, there is no further discussion. Englert and Robert Brout, then relatively junior scientists, had both attended the same Brussels meeting.

At some point during the Solvay conference, Weinberg presented a hand-written draft of his paper to Dürr, and 40 years later I obtained a copy by a roundabout route. Weinberg himself had not seen it in all that time, and thought that all record of his Nobel-winning manuscript had been lost. The original manuscript is notable for there being no sign of second thoughts, or editing, which suggests that it was a provisional final draft of an idea that had been worked through in the preceding days. The only hint of modification after the first draft had been written is a memo squeezed in at the end of a reference to Higgs, to include references to Brout and Englert, and to Guralnik, Hagen and Kibble, for the idea of spontaneous symmetry breaking, on which the paper was based. Weinberg’s intuition about the renormalisability of the model is already present in this manuscript, and is identical to what appears in his PRL paper. There is no mention of Glashow’s SU(2)×U(1) model in the draft, but this is included in the version that was published in PRL the following month. This is the only substantial difference. This manuscript was submitted to the editors of PRL on Weinberg’s return to the US, and received by them on 17 October. It appeared in print on 20 November.

Lasting impact

Weinberg’s genius was to assemble the various pieces of a jigsaw and display the whole picture. The basic idea of mass generation was due to the assorted theorists mentioned above, in the summer of 1964. However, a crucial feature of Weinberg’s model was the trick of being able to give masses to the W and Z while leaving the photon massless. This extension of the mass-generating mechanism was due to Tom Kibble, in 1967, which Weinberg recognises and credits.

As was the case with his comments in Brussels the previous month, Weinberg’s paper appeared in November 1967 to a deafening silence. “Rarely has so great an accomplishment been so widely ignored,” wrote Sidney Coleman in Science in 1979. Today, Weinberg’s paper has been cited more than 10,000 times. Having been cited but twice in the four years from 1967 to 1971, suddenly it became so important that researchers have cited it three times every week throughout half a century. There is no parallel for this in the history of particle physics. The reason is that in 1971 an event took place that has defined the direction of the field ever since: Gerard ’t Hooft made his debut, and he and Martinus Veltman demonstrated the renormalisability of spontaneously broken Yang–Mills theories. A decade later the W and Z bosons were discovered by experiments at CERN’s Super Proton Synchrotron. A further 30 years were to pass before the discovery of the Higgs boson at the Large Hadron Collider completed the electroweak menu. And in the meantime, completing the Standard Model, quantum chromodynamics was established as the theory of the strong interactions, based on the group SU(3).

This episode in particle physics is not only one of the seminal breakthroughs in our understanding of the physical world, but touches on the profound link between mathematics and nature. On one hand it shows how it is easier to be Beethoven or Shakespeare than to be Steven Weinberg: change a few notes in a symphony or a phrase in a play, and you can still have a wonderful work of art; change a few symbols in Weinberg’s equations and the edifice falls apart – for if nature does not read your creation, however beautiful it might be, its use for science is diminished. Like all great theorists, Weinberg revealed a new aspect of reality by writing symbols on a sheet of paper and manipulating them according to the logic of mathematics. It took decades of technological progress to enable the discoveries of W and Higgs bosons and other entities that were already “known” to mathematics 50 years ago.

• This article draws on material from Frank Close’s history of the path to discovery of the Higgs boson: The Infinity Puzzle (Oxford University Press).

Symmetries, groups and massive insight led to electroweak unification

Weinberg’s 1967 achievement is rooted in the notation of group theory, which is the mathematical language describing the symmetries of a system, and built upon many earlier successes including that of quantum electrodynamics (QED). QED is perhaps the simplest example of the general class of “gauge theories”. Since the all-important electric charge in QED is a single number, it can be described mathematically in terms of the first unitary group, U(1). In the 1950s Yang and Mills constructed “non-abelian” generalisations of QED in which the U(1) number was replaced by matrices, such as in the groups SU(2) or SU(3). The weak force exhibited tantalising hints that an SU(2) generalisation of QED might be involved, but there was a serious problem: a “W boson” – the analogue of QED’s photon – would have to be very massive empirically, whereas the mathematical symmetry of the theory required it to be massless – like the photon. The only way to give the W and Z particles mass yet leave the photon massless was if nature contained “hidden symmetries” that were somehow broken.
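The essential difference between U(1) and SU(2) can be seen in a few lines of arithmetic: U(1) “charges” are plain numbers and always commute, whereas the SU(2) generators – the Pauli matrices – do not. A minimal sketch (using only built-in Python complex arithmetic; the matrix helpers are written out here for illustration):

```python
# Why SU(2) is "non-abelian": its generators, the Pauli matrices,
# do not commute, unlike the single phase number of U(1).

def matmul(a, b):
    """Multiply two 2x2 complex matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def commutator(a, b):
    """[a, b] = ab - ba, the measure of non-commutativity."""
    ab, ba = matmul(a, b), matmul(b, a)
    return [[ab[i][j] - ba[i][j] for j in range(2)] for i in range(2)]

# The three Pauli matrices, which generate SU(2)
sigma1 = [[0, 1], [1, 0]]
sigma2 = [[0, -1j], [1j, 0]]
sigma3 = [[1, 0], [0, -1]]

# The SU(2) structure relation: [sigma1, sigma2] = 2i * sigma3
c = commutator(sigma1, sigma2)
assert all(c[i][j] == 2j * sigma3[i][j] for i in range(2) for j in range(2))

# By contrast, two U(1) charges (plain numbers) always commute:
assert (0.5 * 0.3) - (0.3 * 0.5) == 0
```

It is this non-commutativity that lets the gauge bosons of a Yang–Mills theory carry and exchange charge themselves, unlike the electrically neutral photon of U(1).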

In 1961 Goldstone discovered a theorem suggesting, inter alia, that a theory of the weak force involving hidden symmetry is impossible. However, in 1963, condensed-matter theorist Philip Anderson pointed out that superconductivity manages to evade Goldstone’s theorem, and demonstrated this mathematically in a theory without relativity. The following year several theorists, including Peter Higgs, generalised Anderson’s insights to include relativity. Among the implications were that a theory involving fermions with no mass – with so-called chiral symmetry – could hide this property in empirically consistent ways when particles become massive; that there should be a massive boson without spin (the Higgs boson); and that the W boson could also gain mass while preserving the underlying mathematical symmetry of the theory. It was Weinberg’s 1967 paper that brought all of these pieces together, and today we know that nature follows this path, with the weak and electromagnetic interactions described by a single SU(2)×U(1) structure. At the time, however, the breakthrough was hardly noticed.

Optical survey pinpoints dark-matter structure

During the last two decades the WMAP and Planck satellites have produced detailed maps of the density distribution of the universe when it was only 380,000 years old – the moment electrons and protons recombined into neutral hydrogen, producing today’s cosmic microwave background (CMB). The CMB measurements show that the distribution of both normal and dark matter in the universe is inhomogeneous, which is explained via a combination of inflation, dark matter and dark energy: initial quantum fluctuations in the very early universe expanded and continued to grow as gravity pulled matter together while dark energy worked to force it apart. Data from the CMB have allowed cosmologists to predict a range of cosmological parameters such as the fractions of dark energy, dark matter and normal matter.

Now, using new optical measurements of the current universe from the international Dark Energy Survey (DES), these predictions can be tested independently. DES is an ongoing, five-year survey that aims to map 300 million galaxies and tens of thousands of galaxy clusters using a 570 megapixel camera to capture light from galaxies eight billion light-years away (see figure). The camera, one of the most powerful in existence, was built and tested at Fermilab in the US and is mounted on the 4 m Blanco telescope in Chile.


To measure how the clumps seen in the CMB evolved from the early universe into their current state, the DES collaboration first mapped the distribution of galaxies in the universe precisely. The researchers then produced detailed maps of the matter distribution using weak gravitational lensing, which measures small distortions of the optical image due to the mass between an observer and multiple sources. The galaxies observed by DES are elongated by only a few per cent due to lensing and, since galaxies are intrinsically elliptical, the lensing cannot be measured from any individual galaxy; instead, the shear is extracted statistically, by averaging over many galaxies whose randomly oriented intrinsic shapes cancel out.
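The statistical idea can be sketched in a toy model: treat each galaxy's observed ellipticity as a large, randomly oriented intrinsic part plus a tiny shear common to all galaxies in a patch of sky. The numbers below are invented for illustration, not DES values:

```python
# Toy model of weak-lensing shear estimation: averaging observed
# ellipticities over many galaxies cancels the random intrinsic
# shapes and leaves the (few-per-cent) lensing shear.
import cmath
import random

random.seed(42)

TRUE_SHEAR = 0.02 + 0.0j   # a few-per-cent distortion, as in the article
N_GALAXIES = 200_000

observed = []
for _ in range(N_GALAXIES):
    theta = random.uniform(0, cmath.pi)        # random galaxy orientation
    intrinsic = 0.3 * cmath.exp(2j * theta)    # intrinsic ellipticity
    observed.append(intrinsic + TRUE_SHEAR)    # weak-shear approximation

estimate = sum(observed) / N_GALAXIES
# The random intrinsic shapes average towards zero, so the mean
# ellipticity approaches the shear (up to ~0.3/sqrt(N) noise).
assert abs(estimate - TRUE_SHEAR) < 0.005
```

The residual scatter of roughly 0.3/√N is why surveys need tens of millions of galaxies to reach cosmologically useful precision.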

The first year of DES data, which includes measurements of 26 million galaxies, has allowed researchers to measure cosmological parameters such as the matter density with a precision comparable to that achieved using the CMB data. The matter-density parameter, which indicates the total fraction of matter in the universe, measured using optical light is found to be fully compatible with Planck data based on measurements of microwave radiation emitted around 13 billion years ago. Combining the measurements of Planck and DES places further constraints on this crucial parameter, indicating that only about 30% of the universe consists of matter while the rest consists of dark energy. The results are also compatible for other important cosmological parameters, such as the amplitude of the initial density fluctuations, and further constrain measurements of the Hubble constant and even the sum of the neutrino masses.

The DES results allow for a fully independent measurement of parameters initially derived using a map of the early universe. With the DES data sample set to grow from 26 million to 300 million galaxies, cosmological parameters will be measured with even higher precision and allow more detailed comparisons with the CMB data.

ITER’s massive magnets enter production

The ITER site

It is 14 m high, 9 m wide and weighs 110 tonnes. Fresh off a production line at ASG in Italy, and coated in epoxy Kapton-glass panels (image top left), it is the first superconducting toroidal-field coil for the ITER fusion experiment under construction in Cadarache, Southern France. The giant D-shaped ring contains 4.5 km of niobium-tin cable (each containing around 1000 individual superconducting wires) wound into a coil that will carry a current of 68,000 A, generating a peak magnetic field of 11.8 T to confine a plasma at a temperature of 150 million degrees. The coil will soon be joined by 18 others like it, 10 manufactured in Europe and nine in Japan. After completion at ASG, the European coils will be shipped to SIMIC in Italy, where they will be cooled to 78 K, tested and welded shut in a 180 tonne stainless-steel armour. They will then be impregnated with special resin and machined using one of the largest machines in Europe, before being transported to the ITER site.

Science doesn’t get much bigger than this, even by particle-physics standards. ITER’s goal is to demonstrate the feasibility of fusion power by maintaining a plasma in a self-sustaining “ignition” phase. The project was established by an international agreement ratified in 2007 by China, the European Union (EU), Euratom, India, Japan, Korea, Russia and the US. Following years of delay relating to the preferred site and project costs, ITER entered construction a decade ago and is scheduled to produce first plasma by December 2025. The EU contribution to ITER, corresponding to roughly half the total cost, amounts to €6.6 billion for construction up to 2020.

Fusion for energy

The scale of ITER’s components is staggering. The vacuum vessel that will sit inside the field coils is 10 times bigger than anything before it, measuring 19.4 m across, 11.4 m high and requiring new welding technology to be invented. The final ITER experiment will weigh 23,000 tonnes, almost twice that of the LHC’s CMS experiment. The new toroidal-field coil is the first major magnetic element of ITER to be completed. A series of six further poloidal coils, a central solenoid and a number of correction coils will complete ITER’s complex magnetic configuration. The central solenoid (a 1000 tonne superconducting electromagnet in the centre of the machine) must be strong enough to contain a force of 60 MN – twice the thrust of the Space Shuttle at take-off.

Vacuum-pressure impregnation tooling

Fusion for Energy (F4E), the EU organisation managing Europe’s contribution to ITER, has been collaborating with industrial partners such as ASG Superconductors, Iberdrola Ingeniería y Construcción, Elytt Energy, CNIM, SIMIC, ICAS consortium and Airbus CASA to deliver Europe’s share of components in the field of magnets. At least 600 people from 26 companies have been involved in the toroidal-field coil production, and the first coil is the result of almost a decade of work. This involved, among other things, developing new ways to jacket superconducting cables based on materials that are brittle and much more difficult to handle than niobium-titanium. In total, 100,000 km of niobium-tin strands are necessary for ITER’s toroidal-field magnets, increasing worldwide production by a factor of 10.

Since 2008, F4E has signed ITER-related contracts reaching approximately €5 billion, with the magnets amounting to €0.5 billion. Firms that are involved, such as SIMIC where the coils will be tested and Elytt, which has developed some of the necessary tooling, have much to gain from collaborating in ITER. According to Philippe Lazare, CEO of CNIM Industrial Systems Division: “In order to manufacture our share of ITER components, we had to upgrade our industrial facilities, establish new working methods and train new talent. In return, we have become a French reference in high-precision manufacturing for large components.”

CERN connection

Cooling the toroidal-field magnets requires about 5.8 tonnes of helium at a temperature of 4.5 K and a pressure of 6 bar, putting helium in a supercritical phase slightly warmer than it is in the LHC. But ITER’s operating environment is totally different to an accelerator, explains head of F4E’s magnets project team Alessandro Bonito-Oliva: “The magnets have to operate subject to lots of heat generated by neutron irradiation from the plasma and AC losses generated inside the cable, which has to be removed, whereas at CERN you don’t have this problem. So the ITER coolant has to be fairly close to the wire – this is why we used forced-flow of helium inside the cable.” A lot of ITER’s superconductor technology work was driven by CERN in improving the characteristics of superconductors, says Bonito-Oliva: “High-energy physics mainly looks for very high current performance, while in fusion it is also important to minimise the AC losses, which generally brings a reduction of current performance. This is why Nb3Sn strands for fusion and accelerators are slightly different.”

CERN entered formal collaboration with ITER in March 2008 via a co-operation agreement concerning the design of high-temperature superconducting current leads and other magnet technologies, with CERN’s superconducting laboratory in building 163 becoming one of the “reference” laboratories for testing ITER’s superconducting strands. Niobium-tin is the same material that CERN is pursuing for the high-field magnets of the High Luminosity LHC and also a possible future circular collider, although the performance demands of accelerator magnets require significant further R&D. Head of CERN’s technology department, Jose Miguel Jimenez, who co-ordinates the collaboration between CERN and ITER, says that in addition to helping with the design of the cable, CERN played a big role in advising for high-voltage testing of the cable insulation and, in particular, with the metallurgical aspect. “Metallurgy is one of the key areas of technology transfer from CERN to ITER. Another is the HTS current leads, which CERN has helped to design in collaboration with the Chinese group working on the ITER tokamak, and in simulating the heat transfer under real conditions,” he explains. “We also helped with the cryoplants, magnetic-field quality, and on central interlocks and safety systems based on our experience with the LHC.”

ATLAS finds evidence for Higgs to bb

Five years ago, the ATLAS and CMS collaborations at the LHC announced the discovery of a new particle with properties consistent with those of a Standard Model Higgs boson. Since then, based on proton–proton collision data collected at energies of 7 and 8 TeV during LHC Run 1 and at 13 TeV during Run 2, many measurements have confirmed this hypothesis. Several decay modes of the Higgs boson have been observed, but the dominant decay into pairs of b quarks, which is expected to contribute at a level of 58%, had up to now escaped detection – largely due to the difficulty in observing this decay mode at a hadron collider.

On 6 July, at the European Physical Society conference in Venice, the ATLAS collaboration announced that they had found evidence for H → bb, representing an immense analysis achievement. By far the largest source of Higgs bosons is their production via gluon fusion, gg → H → bb, but this is overwhelmed by the huge background of bb events, which are produced at a rate 10 million times higher. The associated production of a Higgs with a W or Z vector boson (jointly denoted V) offers the most sensitive alternative, despite having a production rate roughly 20 times lower than gluon fusion, because the vector bosons are detected via their decay to leptons and therefore allow efficient triggering and background rejection. Nevertheless, the signal remains orders of magnitude smaller than the backgrounds, which arise from the associated production of vector bosons with jets and from top-quark production.

To find evidence for the H → bb decay in the VH production channel, it is necessary to use detailed information on the properties of the decay products. The jets arising from b quarks contain b hadrons, whose long lifetime can be used in sophisticated b-tagging algorithms to discriminate them from jets originating from the fragmentation of gluons or other quark species. These algorithms have benefitted significantly from the new innermost pixel layer installed in ATLAS before Run 2. The kinematic properties of the decay products can also be used to enhance the signal-to-background ratio. The property with the most discriminatory power is the invariant mass of the two-b-jet system, which for the signal accumulates at the mass of the Higgs boson (see figure). To increase the sensitivity of the analysis, this mass is used together with several other kinematic variables as input to a multivariate analysis.
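The invariant mass that provides this discriminating power is computed from the jets' four-momenta via m² = (ΣE)² − |Σp|². A minimal sketch, with made-up jet four-momenta (E, px, py, pz) in GeV rather than real ATLAS data:

```python
# Invariant mass of a system of jets from their four-momenta.
# For H -> bb signal events this quantity clusters near the Higgs
# mass; for background it forms a smooth continuum.
import math

def invariant_mass(jets):
    """m^2 = (sum E)^2 - |sum p|^2 for a list of (E, px, py, pz)."""
    E  = sum(j[0] for j in jets)
    px = sum(j[1] for j in jets)
    py = sum(j[2] for j in jets)
    pz = sum(j[3] for j in jets)
    return math.sqrt(max(E * E - px * px - py * py - pz * pz, 0.0))

# Two back-to-back jets (taken massless for simplicity) from the
# decay of a Higgs boson at rest:
jet1 = (62.5, 0.0, 0.0,  62.5)
jet2 = (62.5, 0.0, 0.0, -62.5)
m_bb = invariant_mass([jet1, jet2])   # reconstructs the parent mass, 125 GeV
```

In a real detector, jet-energy resolution smears this peak, which is one reason the mass is combined with other kinematic variables in a multivariate analysis rather than used alone.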

Based on data collected during the first two years of LHC Run 2 in 2015 and 2016, evidence for the H → bb decay is obtained at the level of 3.5σ, slightly increased to 3.6σ after combination with the Run 1 results (compared to an expected significance of 4σ). The measured signal yield is in agreement with the Standard Model expectation, within an uncertainty of 30%. The associated VZ production, with Z → bb, allows for a powerful cross-check of the analysis, as the final states are very similar except for the location of the two-b-jet mass peak (see figure); VZ production is observed with a significance of 5.8σ in the Run 2 data, in agreement with the Standard Model prediction.

This analysis opens a way to study about 90% of the Higgs boson decays expected in the Standard Model, which is a sharp increase from the approximately 30% observed previously. With much more data expected by the end of Run 2 in 2018, a definitive 5σ observation of the H → bb decay may be in sight, with the increased precision providing new opportunities to challenge the Standard Model.

Evidence suggests all stars born in pairs

The reason why some stars are born in pairs while others are born singly has long puzzled astronomers. But a new study suggests that no special conditions are required: all stars start their lives as part of a binary pair. The result has implications not only in the field of star evolution but also for studies of binary neutron-star and binary black-hole formation. It also suggests that our own Sun was born together with a companion that has since disappeared.

Stars are born in dense molecular clouds measuring light-years across, within which denser regions can collapse under their own gravity to form high-density cores opaque to optical radiation, which appear as dark patches. When the densities reach the level where hydrogen fusion begins, the cores can form stars. Although young stars already emit radiation before the onset of the hydrogen-burning phase, it is absorbed in the dense clouds that surround them, making star-forming regions difficult to study. Yet, since clouds that absorb optical and infrared radiation re-emit it at much longer wavelengths, it is possible to probe them using radio telescopes.

Sarah Sadavoy of the Max Planck Institute for Astronomy in Heidelberg and Steven Stahler of the University of California at Berkeley used data from the Very Large Array (VLA) radio telescopes in New Mexico, together with submillimetre-wavelength data from the James Clerk Maxwell Telescope (JCMT) in Hawaii, to study the dense gas clumps and the young stars forming in them in the Perseus cluster – a star-forming region about 600 light-years away. Data from the JCMT show the location of dense cores in the gas, while the VLA provides the location of the young stars within them.

Studying the multiplicity as well as the location of the young stars inside the dense regions, the researchers found a total of 19 binary systems, 45 single-star systems and five systems with a higher multiplicity. Focusing on the binary pairs, they observed that the youngest binaries typically have a large separation of 500 astronomical units (500 times the Sun–Earth distance). Furthermore, the young stars were aligned along the long axis of the elongated cloud. Older binary systems, with an age between 500,000 and one million years, were found typically to be closer together, with the axis between the two stars oriented randomly.

After cataloguing all the young stars, the team compared the observed star multiplicity and the features seen in the binary pairs to simulations of stars being formed either as single or binary systems. The only way the model could reproduce the data was if its starting conditions contained no single stars but only stars that started out as part of wide binaries, implying that all stars are formed as part of a binary system. After formation, the stars either move closer to one another into a close binary system or move away from each other. The latter option is likely to be what happened in the case of the Sun, its companion having drifted away long ago.

If indeed all stars are formed in pairs, it would have big implications for models of stellar birth rates in molecular clouds as well as for the formation of binary systems of compact objects. The studied nearby Perseus cluster could, however, just be a special case, and further studies of other star-forming regions are therefore required to know if the same conditions exist elsewhere in the universe.

LHCb discovers new baryon

The LHCb collaboration has discovered a new weakly decaying particle: a baryon called the Ξcc++, which contains two charm quarks and an up quark. The discovery of the new particle, which was observed decaying to the final state Λc+K−π+π+ and is predicted by the Standard Model, was presented at the European Physical Society conference in Venice on 6 July.

Although the quark model of hadrons predicts the existence of doubly heavy baryons – three-quark states that contain two heavy (c or b) quarks – this is the first time that such states have been observed unambiguously with overwhelming statistical significance (well in excess of 5σ with respect to background expectations). The properties of the newly discovered Ξcc++ baryon shed light on a long-standing puzzle surrounding the experimental status of doubly charmed baryons, opening an exciting new branch of investigation for LHCb.

The team scrutinised large high-purity samples of Λc+ → pK−π+ decays in LHC data recorded at 8 and 13 TeV in 2012 and 2016, respectively, and discovered an isolated narrow structure in the Λc+K−π+π+ mass spectrum (associating the Λc+ baryon with further particles) at a mass of around 3620 MeV/c². After eliminating all known potential artificial sources, the collaboration concluded that the highly significant peak is a previously unobserved state. Corroboration that it is the weakly decaying Ξcc++ came from examining a subset of data in which the reconstructed baryons lived for a measurable period before decaying. Such a requirement eliminates all promptly decaying particles, leaving only long-lived ones that are the hallmark of weak transitions.

Although the existence of baryons with valence-quark content ccu and ccd (corresponding to the Ξcc++ and its isospin partner Ξcc+) is expected, the experimental status of these states has been controversial. In 2002, the SELEX collaboration at Fermilab in the US claimed the first observation of this class of particle by observing a significant peak of about 16 events at a mass of 3519 ± 1 MeV/c² in the Λc+K−π+ mass spectrum, which they identified as the closely related state Ξcc+. Puzzlingly, the short lifetime (which was too small to be measured at SELEX) and the very large production rate of the state seemed not to match theoretical expectations for the Ξcc+. Despite SELEX's confirmation of the observation in a second decay mode, all subsequent searches – including efforts at the FOCUS, BaBar and Belle experiments – failed to find evidence for doubly charmed baryons. That left both theorists and experimentalists awaiting a firm observation by a more powerful heavy-flavour detector such as LHCb. Although the new result from LHCb does not fully resolve the puzzle (with a mass difference of 103 ± 2 MeV/c², LHCb's Ξcc++ and SELEX's Ξcc+ seem irreconcilable as isospin partners), the discovery is a crucial step towards an empirical understanding of the nature of doubly heavy baryons.

ATLAS probes Higgs boson at 13 TeV

The ATLAS collaboration has released new results on measurements of the properties of the Higgs boson using the full LHC proton–proton collision data set collected at a centre-of-mass energy of 13 TeV in 2015 and 2016, corresponding to an integrated luminosity of 36.1 fb⁻¹.

One of the most sensitive measurement channels involves Higgs boson decays via two Z bosons to four leptons (two pairs of oppositely charged electrons or muons). Although only occurring in about one in every 8000 Higgs decays, it gives the cleanest signature of all the Higgs decay modes.
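The "one in every 8000" figure can be cross-checked from the branching fractions involved. The sketch below uses approximate round values for BR(H → ZZ*) and BR(Z → ℓℓ) as illustrative assumptions, not the exact numbers used in the ATLAS analysis:

```python
# Rough cross-check of the "one in every 8000" figure for H -> ZZ* -> 4 leptons.
# Branching fractions are approximate illustrative values (assumptions),
# not the precise inputs of the ATLAS measurement.

BR_H_ZZ = 0.026        # H -> ZZ* at m_H = 125 GeV (approx.)
BR_Z_ll = 0.0337 * 2   # Z -> ee or Z -> mumu (~3.37% each)

BR_4l = BR_H_ZZ * BR_Z_ll ** 2
print(f"BR(H -> 4l) ~ {BR_4l:.2e}, i.e. about 1 in {1/BR_4l:,.0f} Higgs decays")
```

With these inputs the result lands in the 8000–9000 range, consistent with the figure quoted above.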

Using this channel, ATLAS measured both the inclusive and differential cross-sections for Higgs boson production. Although these have been measured before at lower LHC collision energy, the increased integrated luminosity and larger cross-section compared to LHC Run 1 allow their magnitudes to be determined with increased precision. In total, around 70 Higgs-boson-to-four-lepton events were extracted from a fit to the four-lepton invariant-mass distribution, allowing the inclusive cross-section to be measured with an accuracy of about 16%.

Candidate Higgs boson events were corrected for detector measurement effects and classified according to their kinematic properties to measure differential production cross-sections. Among these, the measurement of the momentum of the Higgs boson transverse to the beam axis probes different Higgs boson production mechanisms. By measuring the number and properties of jets produced in these events, Higgs boson production via the fusion of two gluons was studied. The measured inclusive and differential cross-sections were found to be in agreement with the Standard Model (SM) predictions. The results were used to constrain possible anomalous Higgs boson interactions with SM particles.

Astronomers spot first failed supernova

Massive stars are traditionally expected to end their life cycle by triggering a supernova, a violent event in which the stellar core collapses into a neutron star, potentially followed by a further collapse into a black hole. During this process, a shock wave ejects large amounts of material from the star into interstellar space at high velocities, producing heavy elements along the way, while the supernova outshines all the stars in its host galaxy combined.

In the past few years, however, there has been mounting evidence that not all massive-star deaths are accompanied by these catastrophic events. Instead, it seems that for some stars only a small part of their outer layers is ejected before the rest of the star collapses into a massive black hole. For instance, there are hints that the birth rate and supernova rate of massive stars do not match. Furthermore, results from the LIGO gravitational-wave observatory in the US indicate the existence of black holes with masses more than 30 times that of the Sun, which is easier to explain if stars can collapse without a large explosion.


Motivated by this indirect evidence, researchers from Ohio State University began a search for stars that quietly form a black hole without triggering a supernova. Using the Large Binocular Telescope (LBT) in Arizona, in 2015 the team identified its first candidate. The star, called N6946-BH1, was approximately 25 times more massive than the Sun and lived in the Fireworks galaxy, which is known for hosting a large number of supernovae. Having previously shown a stable luminosity, the star brightened during 2009, although not to the level expected for a supernova, before disappearing completely at optical wavelengths in 2010 (see image).

The lack of emission observed by the LBT triggered follow-up searches for the star using both the Hubble Space Telescope (HST) and the Spitzer Space Telescope (SST). While the HST found no sign of the star at optical wavelengths, the SST did observe infrared emission. A careful analysis of the data disfavoured alternative explanations, such as a large dust cloud obscuring the optical emission from the star, and the infrared data were also shown to be compatible with emission from remaining matter falling into a black hole.

If the star did indeed directly collapse into a black hole, as these findings suggest, the in-falling matter is expected to radiate in the X-ray region. The team is therefore waiting for observations from the space-based Chandra X-ray Observatory to search for this emission.

If confirmed in X-ray data, this result would be the first measurement of the birth of a black hole and the first measurement of a failed supernova. The results would explain why we observe fewer supernovae than expected and could reveal the origin of the massive black holes responsible for the gravitational waves seen by LIGO, in addition to having implications for the production of heavy elements in the universe.

The Higgs adventure: five years in

Where were you on 4 July 2012, the day the Higgs boson discovery was announced? Many people will be able to answer without referring to their diary. Perhaps you were among the few who had managed to secure a seat in CERN’s main auditorium, or who joined colleagues in universities and laboratories around the world to watch the webcast. For me, the memory is indelible: 3.00 a.m. in Watertown, Massachusetts, huddled over my laptop at the kitchen table. It was well worth the tired eyes to witness remotely an event that will happen once in a lifetime.

“I think we have it, no?” was the question posed in the CERN auditorium on 4 July 2012 by Rolf Heuer, CERN’s Director-General at the time. The answer was as obvious as the emotion on faces in the crowd. The then ATLAS and CMS spokespersons, Fabiola Gianotti and Joe Incandela, had just presented the latest Higgs search results based on roughly two years of LHC operations at energies of 7 and 8 TeV. Given the hints for the Higgs presented a few months earlier in December 2011, the frenzy of rumours on blogs and intense media interest during the preceding weeks, and a title for the CERN seminar that left little to the imagination, the outcome was anticipated. This did not temper excitement.

Since then, we have learnt much about the properties of this new scalar particle, yet we are still at the beginning of our understanding. It is the final and most interesting particle of the Standard Model of particle physics (SM), and its connections to many of the deepest current mysteries in physics mean the Higgs will remain a focus of activities for experimentalists and theorists for the foreseeable future.

Speculative theories

The Higgs story began in the 1960s with speculative ideas. Theoretical physicists understood how the symmetries of materials can spontaneously break down, such as the spontaneous alignment of atoms when a magnet is cooled from high temperatures, but it was not yet understood how this might happen for the symmetries present in the fundamental laws of physics. Then, in three separate publications by Brout and Englert, by Higgs, and by Guralnik, Hagen and Kibble in 1964, the broad particle-physics structures for spontaneous symmetry breaking were fleshed out. In this and subsequent work it became clear that a scalar field was a cornerstone of the general symmetry-breaking mechanism. This field may be excited and oscillate, much like the ripples that appear on a disturbed pond, and the excitation of the Higgs field is known as the Higgs boson.

As the detailed theoretical structure of symmetry breaking in nature was later developed, in particular by Weinberg, Glashow, Salam, ’t Hooft and Veltman, the precise role of the Higgs in the SM evolved to its modern form. In addition to explaining what we see in modern particle detectors, the Higgs plays a leading role in the evolution of the universe. In the hot early epoch an infinitesimally small fraction of a second after the Big Bang, the Higgs field spontaneously “slipped” from having zero average value everywhere in space to having an average value equivalent to about 246 GeV. When this happened, any field that was previously kept massless by the SU(2) × U(1) gauge symmetries of the SM instantly became massive.
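The 246 GeV figure, and the strength of the Higgs boson's self-interaction, follow from simple tree-level relations of the SM Higgs sector. The sketch below is a back-of-the-envelope check under those assumptions, using the measured Fermi constant and Higgs mass:

```python
import math

# Tree-level relations of the SM Higgs sector (a sketch, not a precision fit):
#   v = (sqrt(2) * G_F)^(-1/2)   vacuum expectation value
#   m_H^2 = 2 * lambda * v^2     quartic self-coupling
G_F = 1.1663787e-5   # Fermi constant, GeV^-2
m_H = 125.0          # measured Higgs boson mass, GeV

v = 1 / math.sqrt(math.sqrt(2) * G_F)   # ~246 GeV
lam = m_H ** 2 / (2 * v ** 2)           # quartic coupling, ~0.13

print(f"v ~ {v:.1f} GeV, lambda ~ {lam:.3f}")
```

The vacuum expectation value comes out at about 246 GeV, and the quartic coupling at roughly 0.13, i.e. the Higgs self-interaction is neither tiny nor strongly coupled.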

Before delving further into the vital role of the Higgs, it is worth revisiting a couple of common misconceptions. One is that the Higgs boson gives mass to all particles. Although all of the known massive fundamental particles obtain their mass by interacting with the pervasive Higgs field, there are non-elementary particles, such as the proton, whose mass is dominated by the binding energy of the strong force that holds its constituent gluons and quarks together. So very little of the mass we see in nature comes directly from the Higgs field. Another misconception is that the Higgs boson gives mass to everything it interacts with. On the contrary, the Higgs has very important interactions with two massless fundamental fields: the photon and the gluon. The Higgs is not charged under the forces associated with the photon and the gluon (quantum electrodynamics and quantum chromodynamics), and therefore cannot give them mass, but it can still interact with them. Indeed, somewhat ironically, it was precisely its interactions with massless gluons and photons that revealed the existence of the Higgs boson in the summer of 2012.

The one remaining unmeasured free parameter of the SM at that time, which governs which production and decay modes the particle can have, was the Higgs boson mass. In the early days it was not at all clear what the mass of the Higgs boson would be, since in the SM this is an input parameter of the theory. Indeed, in 1975, in the seminal paper about its experimental phenomenology by Ellis, Gaillard and Nanopoulos, it is notable that the allowed Higgs mass range at that time spanned four orders of magnitude, from 18 MeV to over 100 GeV, with experimental prospects in the latter energy range opaque at best (figure 1).

How the Higgs was found

By 4 July 2012 the picture was radically different. The Higgs no-show at previous colliders, including LEP at CERN and the Tevatron at Fermilab, had cornered its mass to be greater than 114 GeV and not to lie between 147 and 180 GeV, while theoretical limits on the allowed properties of W- and Z-boson scattering required it to be below around 800 GeV. If nature used the SM version of the Higgs mechanism, there was nowhere left to hide once CERN's LHC switched on. In the end, the Higgs weighed in at the relatively light mass of 125 GeV. How the different Higgs cross-sections, which are related to the production rates for various processes, depend on the mass is shown in figure 2, left.

Producing the Higgs alone would not be sufficient for discovery. It would also have to be observed, which depends on the different fractional ways in which the Higgs boson will decay (figure 2, right). If heavy, one would have to search for decays to the weak gauge bosons, W and Z; if lighter, a cocktail of decays would light up detectors. Going further, if thousands of Higgs bosons could be produced, then decays to pairs of photons may show up. Thus, by the time of the LHC operation, the basic theoretical recipe was relatively simple: pick a Higgs mass, calculate the SM predictions and search.

On the other hand, the experimental recipe was far from simple. The LHC, a particle accelerator capable of colliding protons at energies far beyond anything previously achieved, was a necessity. But energy alone was not enough, as sufficient numbers of Higgs bosons also had to be produced. Although occurring at a low rate, Higgs decays into pairs of massless photons would prove to be experimentally clean and furnish the best opportunity for discovery. Once detection efficiencies, backgrounds and requirements of statistical significance are folded into the mix, on the order of 100,000 Higgs bosons would be required for discovery. This was a tall order, yet it is what the accelerator teams delivered to the detectors.

With the accelerator running, it remained to observe the thing. This would push ingenuity to its limits. Physicists on the ATLAS and CMS detectors would need to work night and day to filter through the particle detritus from innumerable proton–proton collisions to select data sets of interest. The search set tremendous challenges for the energy-resolution and particle-identification capabilities of the detectors, not to mention dealing with enormous volumes of data. In the end, the result of this labour reduced to a couple of plots (figure 3). The discovery was clear for each collaboration: a significance pushing the 5σ “discovery” threshold. In further irony for the mass-giving Higgs, the discovery was driven primarily by the rare but powerful diphoton decays, followed closely by Higgs decays to Z bosons. Global media erupted in a science-fuelled frenzy. It turns out that everyone gets excited when a fundamental building block of nature is discovered.

The hard work begins

The joy in the experimental and theoretical communities in the summer of 2012 was palpable. If we were to liken early studies of the electroweak forces to listening to a crackling radio, LEP had given us black and white TV and the LHC was about to show us the world in full cinematic colour. Particle physicists now had the work they had waited a lifetime to do. Is it the SM Higgs boson, or something else, something exotic? All we knew at the time was that there was a new boson, with mass of roughly 125 GeV, that decayed to photons and Z bosons.

Despite the huge success of the SM, there was every reason to hope that the new boson would not be of the common variety. The Higgs brings us face-to-face with questions that the SM cannot answer, such as what constitutes dark matter (observed to make up roughly 80% of all the matter in the universe). Unlike the other SM particles, the Higgs is uncharged and without spin, and can therefore interact easily with any other neutral scalar particles. This makes it a formidable tool in the hunt for dark matter – a possibility we often call the "Higgs portal". The ATLAS and CMS collaborations have been busy exploring the Higgs portal and we now know that the Higgs decay rate into invisible new dark particles must be less than 34% of its total rate into known particles. This is an incredible thing to know for a particle that is itself so elusive, and a significant early step for dark-sector physics.

Another deep puzzle, even more esoteric than dark matter and which has driven the theoretical community to distraction for decades, is called the hierarchy problem. We know that at higher energies (smaller sizes) there must be more structure to the laws of nature: the scale of quantum gravity, the Planck scale, is one example, but there are hints of others. For any other SM particle, this new physics at high energies has no dramatic effect, since fundamental particles with nonzero spin possess special protective symmetries that shield them from large quantum corrections. But the Higgs possesses no such symmetry, and is thus a sensitive creature: quantum-mechanical effects will give large corrections to its mass, pulling it all the way up to the masses of the new particles it is interacting with. That has clearly not happened, given the mass we measure in experiments, so what is going on?

Thus the discovery of the Higgs brings the hierarchy problem to the fore. If the Higgs is composite, being made up of other particles, in a similar fashion to the ubiquitous QCD pion, then the problem simply goes away because there is no fundamental scalar in the first place. Another popular theory, supersymmetry, postulates new space–time symmetries, which protect the Higgs boson from these quantum corrections and could modify its properties. Measurements of the Higgs interactions thus indirectly probe this deepest of questions in modern particle physics. For example, we now know the interaction between the Higgs boson and the Z boson to an accuracy at the level of 10%, a significant constraint on these theories.

It is also crucial that we understand the way the Higgs interacts with fermions. Anyone who has ever looked up the masses of the quarks and leptons will see that they follow cryptic hierarchical patterns, while families of fermions can also mix into one another through the emission of a W boson in peculiar patterns that we do not yet understand. By playing a star role in generating particle masses, and as a supporting actor by also generating the mixings, the Higgs could shed light on these mysteries.

At the time of the Higgs discovery in 2012, the only interactions we were certain of concerned bosons: photons, W and Z bosons, and, to a certain degree, gluons. There was emerging evidence for interactions with top quarks, but it was circumstantial, coming from the role of the top quark in the quantum-mechanical process that generates Higgs interactions with gluons and photons. After a four-year wait, in 2016 ATLAS and CMS combined forces to reach the first 5σ direct discovery of Higgs interactions with a fermion: the τ lepton, to be precise. This was a significant milestone, not least because it also happened to give the first direct evidence of Higgs interactions with leptons.


The scope of the Higgs programme has also broadened since the early days of the discovery. This applies not only to the precision with which certain couplings are measured, but also to the energy at which they are measured. For example, when the Higgs boson is produced via the fusion of two gluons at the LHC, additional gluons or quarks may be emitted at high energies. By observing such “associated production” we may gain information about the magnitude of a Higgs interaction and about its detailed structure. Hence, if new particles that influence Higgs boson interactions exist at high energies, probing Higgs couplings at high energies may reveal their existence. The price to be paid for associated production is that the probability, and hence the rate, is low (figure 2). As an ever increasing number of Higgs production events have been recorded at the LHC in the past five years, this has allowed physicists to begin mapping the nature of the Higgs boson’s interactions.

What’s next?

We have much to anticipate. Although the Higgs is too light to be able to decay into pairs of top quarks, experimentalists will study its interactions with the top quark by observing Higgs produced in association with pairs of top quarks. Another anticipated discovery, which is difficult to pick out above other background processes, is the decay of the Higgs to bottom quarks. Amazingly, despite the incredibly rare signal rate, the upgraded High-Luminosity LHC will be able to discover Higgs decays to muons. This would be the first observation of Higgs interactions with the second generation of fermions, pointing a floodlight towards the flavour puzzle. These measurements will bring the overall picture of how the Higgs generates particle masses into closer focus. Even now, after only five years, the picture is becoming clear: Higgs physics is becoming a precision science at the LHC (figure 4).

There is more to Higgs physics than a shopping list of couplings, however. By the end of the LHC’s operation in the mid-2030s, more than one hundred million Higgs bosons will have been produced. That will allow us to search for extremely rare and exotic Higgs production and decay modes, perhaps revealing a first crack in the SM. On the opposing flank, by observing the standard production processes in extreme kinematic corners, such as Higgs production at very high momentum, we will be able to measure its interactions over a range of energies. In both cases the challenge will not only be experimental, as the SM predictions must also keep pace with the accuracy of the measurements – a fact which is already driving revolutions in our theoretical understanding.
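The "more than one hundred million" figure is a simple rate-times-luminosity estimate. The sketch below uses assumed round numbers for the total Higgs production cross-section and the High-Luminosity LHC integrated-luminosity target, not official projections:

```python
# Order-of-magnitude check of the total Higgs yield by the mid-2030s.
# Both inputs are assumed round numbers for illustration.

sigma_H = 55e3     # total Higgs production cross-section, fb (~55 pb at 13-14 TeV)
int_lumi = 3000    # HL-LHC target integrated luminosity, fb^-1

n_higgs = sigma_H * int_lumi   # expected number of Higgs bosons produced
print(f"~{n_higgs:.1e} Higgs bosons")
```

With these inputs the estimate comfortably exceeds 10⁸ Higgs bosons, matching the order of magnitude quoted above.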

Setting our sights on the distant future of Higgs physics, it would be remiss to overlook the “white whale” of Higgs physics: the Higgs self-interaction. In yet another unique twist, the Higgs is the only particle in the SM that can scatter off itself (figure 5). In contrast, gluons only interact with other non-identical gluons. If we could access the Higgs self-interactions, by determining how a Higgs boson scatters on itself in measurements of Higgs boson pair-production processes, we would be measuring the shape of the Higgs scalar potential. This is tremendously important because, in theory, it determines the fate of the entire universe: if the scalar potential “turns back over” again at high field values, it would imply that we live in a metastable state. There is mounting evidence, in the form of the measured SM parameters such as the mass of the top quark, that this may be the case. Unfortunately, with the LHC we will not be able to measure this interaction well enough to definitively determine the shape of the Higgs scalar potential, and so we must ultimately look to future colliders to answer this question, among others.

The Higgs is the keystone of the SM and therefore everything we learn about this new particle is central to the deepest laws of nature. When huddled over my laptop at 3.00 a.m. on 4 July 2012, I was 27 years old and in the first year of my first postdoctoral position. To me, and presumably the rest of my generation, it felt like a new scientific continent had been discovered, one that would take a lifetime to explore. On that day we finally knew it existed. Today, after five years of feverish exploration, we have in our hands a sketch of the coastline. We have much to learn before the mountains and valleys of the enigmatic Higgs boson are revealed.

Search for sterile neutrinos triples up


This summer, two 270 m³ steel containment vessels are making their way by land, sea and river from CERN in Europe to Fermilab in the US, a journey that will take five weeks. Each vessel houses one of the 27,000-channel precision wire chambers of the ICARUS detector, which uses advanced liquid-argon technology to detect neutrinos. Having already operated successfully in the CERN to Gran Sasso neutrino beam from 2010 to 2012, and having spent the past two years being refurbished at CERN, ICARUS will team up with two similar detectors at Fermilab to deliver a new physics opportunity: the ability to resolve some intriguing experimental anomalies in neutrino physics and perform the most sensitive search to date for eV-scale sterile neutrinos. This new endeavour, comprising three large liquid-argon detectors (SBND, MicroBooNE and ICARUS) sitting in a single intense neutrino beam at Fermilab, is known as the Short-Baseline Neutrino (SBN) programme.

The sterile neutrino is a hypothetical particle, originally introduced by Bruno Pontecorvo in 1967, which doesn’t experience any of the known forces of the Standard Model. Sterile-neutrino states, if they exist, are not directly observable since they don’t interact with ordinary matter, but the phenomenon of neutrino oscillations provides us with a powerful probe of physics beyond the Standard Model. Active–sterile mixing, just like standard three-neutrino mixing, could generate additional oscillations among the standard neutrino flavours but at wavelengths that are distinct from the now well-measured “solar” and “atmospheric” oscillation effects. Anomalies exist in the data of past neutrino experiments that present intriguing hints of possible new physics. We now require precise follow-up experiments to either confirm or rule out the existence of additional, sterile-neutrino states.    

On the scent of sterile states

The discovery nearly two decades ago of neutrino-flavour oscillations led to the realisation that each of the familiar flavours (νe, νμ, ντ) is actually a linear superposition of states of distinct masses (ν1, ν2, ν3). The wavelength of an oscillation is determined by the difference in the squared masses of the participating mass states, mi² – mj². The discoveries that were awarded the 2015 Nobel Prize in Physics correspond to the atmospheric mass-splitting Δm²ATM = |m3² – m2²| = 2.5 × 10⁻³ eV² and the solar mass-splitting Δm²SOLAR = m2² – m1² = 7.5 × 10⁻⁵ eV², so-named because of how they were first observed. Any additional and mostly sterile mass states, therefore, could generate a unique oscillation driven by a new mass scale in the neutrino sector: m²mostly sterile – m²mostly active.

The most significant experimental hint of new physics comes from the LSND experiment performed at the Los Alamos National Laboratory in the 1990s, which observed a 3.8σ excess of electron antineutrinos appearing in a mostly muon antineutrino beam in a region where standard mixing would predict no significant effect. Later, in the 2000s, the MiniBooNE experiment at Fermilab found excesses of both electron neutrinos and electron antineutrinos, although there is some tension with the original LSND observation. Other hints come from the apparent anomalous disappearance of electron antineutrinos over baselines less than a few hundred metres at nuclear-power reactors (the "reactor anomaly"), and the lower than expected rate in radioactive-source calibration data from the gallium-based solar-neutrino experiments GALLEX and SAGE (the "gallium anomaly"). Numerous other searches in appearance and disappearance channels have been conducted at various neutrino experiments with null results (including ICARUS when it operated in the CERN to Gran Sasso beam), and these have thus constrained the parameter space where light sterile neutrinos could still be hiding. A global analysis of the available data now limits the possible sterile–active mass-splitting, m²mostly sterile – m²mostly active, to a small region around 1–2 eV².


Long-baseline accelerator-based neutrino experiments such as NOvA at Fermilab, T2K in Japan, and the future Deep Underground Neutrino Experiment (DUNE) in the US, which will involve detectors located 1300 km from the source, are tuned to observe oscillations related to the atmospheric mass-splitting, Δm²ATM ~ 10⁻³ eV². Since the mass-squared difference between the participating states and the length scale of the oscillation they generate are inversely proportional to one another, a short-baseline accelerator experiment such as SBN, with detector distances of the order 1 km, is most sensitive to an oscillation generated by a mass-squared difference of order 1 eV² – exactly the region we want to search.
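This inverse scaling can be made concrete with the standard two-flavour oscillation formula, P = sin²(2θ) · sin²(1.27 Δm² L/E), with Δm² in eV², L in km and E in GeV. A minimal sketch (the mixing angle below is an arbitrary illustrative value; only the L/E scaling matters here):

```python
import math

# Two-flavour oscillation probability: P = sin^2(2θ) * sin^2(1.27 Δm² L / E),
# with Δm² in eV², L in km, E in GeV. sin2_2theta is an illustrative assumption.
def osc_prob(dm2_ev2, L_km, E_GeV, sin2_2theta=0.01):
    return sin2_2theta * math.sin(1.27 * dm2_ev2 * L_km / E_GeV) ** 2

# First oscillation maximum: 1.27 * Δm² * L / E = π/2, solved for L.
def first_max_km(dm2_ev2, E_GeV):
    return math.pi * E_GeV / (2 * 1.27 * dm2_ev2)

# For a ~700 MeV beam, a 1 eV² splitting peaks at a baseline of order 1 km,
# while the atmospheric splitting needs hundreds of km at the same energy.
print(first_max_km(1.0, 0.7))      # sterile-scale splitting
print(first_max_km(2.5e-3, 0.7))   # atmospheric-scale splitting
```

This is why SBN's sub-kilometre to 600 m baselines sit right at the first oscillation maximum for an eV²-scale splitting, whereas NOvA, T2K and DUNE need baselines of hundreds of kilometres.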

Three detectors, one beam

The SBN programme has been designed to definitively address this question of short-baseline neutrino oscillations and test the existence of light sterile neutrinos with unprecedented sensitivity. The key to SBN’s reach is the deployment of multiple high-precision neutrino detectors, all of the same technology, at different distances along a single high-intensity neutrino beam. Use of an accelerator-based neutrino source has the bonus that both electron-neutrino appearance and muon-neutrino disappearance oscillation channels can be investigated simultaneously.

The neutrino source is Fermilab’s Booster Neutrino Beam (BNB), which has been operating at high rates since 2002, providing beam to multiple experiments. The BNB is generated by directing 8 GeV protons from the Booster onto a beryllium target and magnetically focusing the resulting hadrons, whose decays produce a broad-energy neutrino beam peaked around 700 MeV and composed of roughly 99.5% muon neutrinos and 0.5% electron neutrinos.

The three SBN detectors are each liquid-argon time projection chambers (LArTPCs) located along the BNB neutrino path (see images above). MicroBooNE, an 87 tonne active-mass LArTPC, is located 470 m from the neutrino production target and has been collecting data since October 2015. The Short-Baseline Near Detector (SBND), a 112 tonne active-mass LArTPC to be sited 110 m from the target, is currently under construction and will provide the high-statistics characterisation of the un-oscillated BNB neutrino fluxes that is needed to control systematic uncertainties in searches for oscillations at the downstream locations. Finally, ICARUS, with 476 tonnes of active mass and located 600 m from the BNB target, will achieve a sufficient event rate at the downstream location where a potential oscillation signal may be present. Many of the upgrades to ICARUS implemented during its time at CERN over the past few years are in response to unique challenges presented by operating a LArTPC detector near the surface, as opposed to the underground Gran Sasso laboratory where it operated previously.

The SBN programme is being realised by a large international collaboration of researchers with major detector contributions from CERN, the Italian INFN, Swiss NSF, UK STFC, and US DOE and NSF. At Fermilab, new experimental halls to house the ICARUS and SBND detectors were constructed in 2016 and are now awaiting the LArTPCs. ICARUS and SBND are expected to begin operation in 2018 and 2019, respectively, with approximately three years of ICARUS data needed to reach the programme’s design sensitivity.
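The near/far logic of this layout can be illustrated with the two-flavour short-baseline probability P = sin²2θ · sin²(1.27 Δm² L/E). In the sketch below the oscillation parameters are purely illustrative values near the LSND-allowed region, not measured quantities:

```python
import math

def osc_prob(sin2_2theta, dm2_ev2, L_km, E_gev):
    """Two-flavour short-baseline oscillation probability."""
    return sin2_2theta * math.sin(1.27 * dm2_ev2 * L_km / E_gev) ** 2

# Illustrative parameters (not fit results) near the LSND-allowed region.
sin2_2theta, dm2, E = 0.003, 1.3, 0.7   # amplitude, eV^2, GeV

# The three SBN baselines, in km.
for name, L in [("SBND", 0.110), ("MicroBooNE", 0.470), ("ICARUS", 0.600)]:
    print(f"{name}: P = {osc_prob(sin2_2theta, dm2, L, E):.2e}")
```

With these inputs the appearance probability at SBND’s 110 m is more than an order of magnitude below that at ICARUS’s 600 m, which is why the near detector effectively measures the unoscillated flux while the far detector sits near the oscillation maximum.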

A rich physics programme

In a combined analysis, the three SBN detectors allow for the cancellation of common systematic uncertainties and can therefore test the νμ → νe oscillation hypothesis at a level of 5σ or better over the full range of parameter space originally allowed at 99% C.L. by the LSND data. Recent measurements, especially from the NEOS, IceCube and MINOS experiments, have constrained the possible sterile-neutrino parameters significantly, and the sensitivity of the SBN programme is highest near the most favoured values of Δm². In addition to νe appearance, SBN also has the sensitivity to νμ disappearance needed to confirm an oscillation interpretation of any observed appearance signal, thus providing a more robust result on sterile-neutrino-induced oscillations (figure 1).

SBN was conceived to unravel the physics of light sterile neutrinos, but the scientific reach of the programme is broader than just the searches for short-baseline neutrino oscillations. The SBN detectors will record millions of neutrino interactions that can be used to make precise measurements of neutrino–argon interaction cross-sections and perform detailed studies of the rather complicated physics involved when neutrinos scatter off a large nucleus such as argon. The SBND detector, for example, will record of the order of 100,000 muon-neutrino interactions and 1000 electron-neutrino interactions per month. For comparison, existing muon-neutrino measurements of these interactions are based on only a few thousand total events and there are no measurements at all with electron neutrinos. The position of the ICARUS detector also allows it to see interactions from two neutrino beams running concurrently at Fermilab (the Booster and Main Injector neutrino beams), allowing for a large-statistics measurement of muon and electron neutrinos in a higher-energy regime that is important for future experiments.

In fact, the science programme of SBN has several important connections to the future long-baseline neutrino experiment at Fermilab, DUNE. DUNE will deploy multiple 10 kt LArTPCs 1.5 km underground in South Dakota, 1300 km from Fermilab. The three detectors of SBN present an R&D platform for advancing this exciting technology and are providing direct experimental activity for the global DUNE community. In addition, the challenging multi-detector oscillation analyses at SBN will be an excellent proving ground for sophisticated event reconstruction and data-analysis techniques designed to maximally exploit the excellent tracking and calorimetric capabilities of the LArTPC. From the physics point of view, discovering or excluding sterile neutrinos plays an important role in the ability of DUNE to untangle the effects of charge-parity violation in neutrino oscillations, a primary physics goal of the experiment. Also, precise studies of neutrino–argon cross-sections at SBN will help control one of the largest sources of systematic uncertainties facing long-baseline oscillation measurements.    

Closing in on a resolution

The hunt for light sterile neutrinos has continued for several decades now, and global analyses are regularly updated with new results. The original LSND data still contain the most significant signal, but the resolution on Δm² was poor and so the range of values allowed at 99% C.L. spans more than three orders of magnitude. Today, only a small region of mass-squared values remains compatible with all of the available data, and a new generation of improved experiments, including the SBN programme, is under way or has been proposed that can rule on sterile-neutrino oscillations in exactly this region.

There is currently a lot of activity in the sterile-neutrino area. The nuPRISM and JSNS2 proposals in Japan could also test for νμ → νe appearance, while new proposals like the KPipe experiment, also in Japan, can contribute to the search for νμ disappearance. The MINOS+ and IceCube detectors, both of which have already set strong limits on νμ disappearance, still have additional data to analyse. A suite of experiments is already under way (NEOS, DANSS, Neutrino-4) or in the planning stages (PROSPECT, SoLid, STEREO) to test for electron-antineutrino disappearance over short baselines at reactors, and others are being planned that will use powerful radioactive sources (CeSOX, BEST). These electron-neutrino and -antineutrino disappearance searches are highly complementary to the search modes being explored at SBN.

The Fermilab SBN programme offers world-leading sensitivity to oscillations in two different search modes at the most relevant mass-splitting scale as indicated by previous data. We will soon have critical new information regarding the possible existence of eV-scale sterile neutrinos, resulting in either one of the most exciting discoveries in particle physics in recent years or the welcome resolution of a long-standing puzzle in neutrino physics.

LArTPCs rule the neutrino-oscillation waves
A schematic diagram of the ICARUS liquid-argon time projection chamber (LArTPC) detector, in which drifting electrons create signals on three rotated wire planes.

The concept of the LArTPC for neutrino detection was first conceived by Carlo Rubbia in 1977, followed by many years of pioneering R&D activity and the successful operation of the ICARUS detector in the CNGS beam from 2010 to 2012, which demonstrated the effectiveness of single-phase LArTPC technology for neutrino physics. A LArTPC provides both precise calorimetric sampling and 3D tracking similar to the extraordinary imaging features of a bubble chamber, and is also fully electronic and therefore potentially scalable to large, several-kilotonne masses. Charged particles propagating in the liquid argon ionise argon atoms, and the freed electrons drift under the influence of a strong, uniform electric field applied across the detector volume. The drifted ionisation electrons induce signals or are collected on planes of closely spaced sense wires located on one side of the detector boundary, with the wire signals proportional to the amount of energy deposited in a small cell. The very low electron drift speeds, in the range of 1.6 mm/μs, require a continuous read-out time of 1–2 milliseconds for a detector a few metres across. This creates a challenge when operating these detectors at the surface, as the SBN detectors will be at Fermilab, so photon-detection systems will be used to collect fast scintillation light and time each event.
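The millisecond read-out time quoted above follows directly from the drift geometry. A tiny sketch, assuming a 2 m drift length as a representative few-metre scale (not a quoted ICARUS dimension):

```python
def drift_time_ms(drift_length_m, v_drift_mm_per_us=1.6):
    """Time for ionisation electrons to cross the full drift volume,
    given the drift length in metres and drift speed in mm/us."""
    drift_length_mm = drift_length_m * 1000.0
    time_us = drift_length_mm / v_drift_mm_per_us
    return time_us / 1000.0  # convert microseconds to milliseconds

# A ~2 m drift at 1.6 mm/us takes ~1.25 ms, consistent with the
# 1-2 ms continuous read-out window quoted in the text.
print(drift_time_ms(2.0))  # 1.25
```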

 
