On 11 December 2018, 25 years after its inaugural meeting, the BaBar collaboration came together at the SLAC National Accelerator Laboratory in California to celebrate its many successes. David Hitlin, BaBar’s first spokesperson, described the inaugural meeting of what was then called the Detector Collaboration for the PEP-II “asymmetric” electron–positron collider, which took place at SLAC at the end of 1993. By May 1994 the collaboration had chosen the name BaBar in recognition of its primary goal to study CP violation in the neutral B-B̅ meson system. Jonathan Dorfan, PEP-II project director, recounted how PEP-II was constructed by SLAC, LBL and LLNL. Less than six years later, PEP-II and the BaBar detector were built and the first collision events were collected on 26 May 1999. Twenty-five years on, and BaBar has now chalked up more than 580 papers on CP violation and many other topics.
The “asymmetric” descriptor of the collider refers to Pier Oddone’s concept of using unequal electron and positron beam energies, with the collision energy tuned to 10.58 GeV – the mass of the ϒ(4S) meson, just above the threshold for producing a pair of B mesons. The resulting boost of the ϒ(4S) system along the beam axis enabled measurements of the distance between the points where the two mesons decay, which is critical for the study of CP violation. Equally critical was the entanglement of the B meson and anti-B meson produced in the ϒ(4S) decay: tagging the flavour of one meson at its decay determines whether it was the B0 or B̅0 that decayed to the CP final state under study.
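As a rough numerical illustration of why the asymmetry matters (a back-of-the-envelope sketch assuming the nominal PEP-II beam energies of about 9.0 and 3.1 GeV and a B0 lifetime of roughly 1.5 ps, numbers not quoted above):

```python
# Rough estimate of the boost of the Y(4S) at an asymmetric collider and of
# the resulting separation between the two B-meson decay vertices.
# Beam energies and lifetime are assumed nominal values, not taken from the article.
import math

E_ELECTRON, E_POSITRON = 9.0, 3.1   # GeV (assumed nominal PEP-II energies)
TAU_B = 1.5e-12                     # s, approximate B0 lifetime
C = 2.998e8                         # m/s

p_lab = E_ELECTRON - E_POSITRON                 # net longitudinal momentum (massless beams)
e_lab = E_ELECTRON + E_POSITRON                 # total energy
sqrt_s = math.sqrt(e_lab**2 - p_lab**2)         # ~10.58 GeV, the Y(4S) mass
beta_gamma = p_lab / sqrt_s                     # boost of the Y(4S) in the lab frame

dz = beta_gamma * C * TAU_B                     # mean decay-vertex separation along the beam
print(f"sqrt(s) ~ {sqrt_s:.2f} GeV, beta*gamma ~ {beta_gamma:.2f}, <dz> ~ {dz*1e6:.0f} microns")
```

The resulting separation of order 250 μm is what makes a time-dependent measurement of the two decays feasible with a silicon vertex detector; with equal beam energies the B mesons would be produced almost at rest and the vertices would be unresolvable.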
By October 2000 PEP-II had achieved its design luminosity of 3 × 10³³ cm⁻² s⁻¹ and less than a year later BaBar published its observation of CP violation in the B0 meson system based on a sample of 32 × 10⁶ pairs of B0-B̅0 mesons – on the same day that Belle, its competitor at Japan’s KEK laboratory, published the same observation. These results led to Makoto Kobayashi and Toshihide Maskawa sharing the 2008 Nobel Prize in Physics. The ultimate luminosity achieved by PEP-II, in 2006, was 1.2 × 10³⁴ cm⁻² s⁻¹. BaBar continued to collect data on or near the ϒ(4S) meson until 2007 and in 2008 collected large samples of ϒ(2S) and ϒ(3S) mesons before PEP-II was shut down. In total, PEP-II produced 471 × 10⁶ B-B̅ pairs for BaBar studies – as well as a myriad of other events for other investigations.
The anniversary event also celebrated technical innovations, including “trickle injection” of beam particles into PEP-II, which provided a nearly 40% increase in integrated luminosity; BaBar’s impressive particle identification, made possible by the DIRC detector; and the implementation of a computing model – spurred by PEP-II delivering significantly more than design luminosity – whereby countries provided in-kind computing support via large “Tier-A” centres. This innovation paved the way for CERN’s Worldwide LHC Computing Grid.
Notable physics results from BaBar include the first observation in 2007 of D–D̅ mixing, while in 2008 the collaboration discovered the long-sought ηb, the lowest-energy particle of the bottomonium family. The team also searched for lepton-flavour violation in tau–lepton decays, publishing in 2010 what remain the most stringent limits on the τ → μγ and τ → eγ branching fractions. In 2012, making it onto Physics World’s top-ten physics results of the year, the BaBar collaboration made the first direct observation of time-reversal violation by measuring the rates at which the B0 meson changes quantum states. Also published in 2012 was evidence for an excess of B̅ → D(*)τ–ν̅τ decays, which challenges lepton universality and is an important part of the current Belle II and LHCb physics programmes. Several years after data-taking ended, it was recognised that BaBar’s data could also be mined for evidence of dark-sector objects such as dark photons, leading to the publication of two significant papers in 2014 and 2017. Another highlight, published last year, is a joint BaBar–Belle paper that resolved an ambiguity concerning the quark-mixing unitarity triangle.
Although BaBar stopped collecting data in 2008, this highly collegial team of researchers continues to publish impactful results. Moreover, BaBar alumni continue to bring their experience and expertise to subsequent experiments, ranging from ATLAS, CMS and LHCb at the LHC, Belle II at SuperKEKB, and long-baseline neutrino experiments (T2K, DUNE, HyperK) to dark-matter (LZ, SCDMS) and dark-energy (LSST) experiments in particle astrophysics.
There has never been a better time to be a physicist. The questions on the table today are not about this-or-that detail, but profound ones about the very structure of the laws of nature. The ancients could (and did) wonder about the nature of space and time and the vastness of the cosmos, but the job of a professional scientist isn’t to gape in awe at grand, vague questions – it is to work on the next question. Having ploughed through all the “easier” questions for four centuries, these very deep questions finally confront us: what are space and time? What is the origin and fate of our enormous universe? We are extremely fortunate to live in the era when human beings first get to meaningfully attack these questions. I just wish I could adjust when I was born so that I could be starting as a grad student today! But not everybody shares my enthusiasm. There is cognitive dissonance. Some people are walking around with their heads hanging low, complaining about being disappointed or even depressed that we’ve “only discovered the Higgs and nothing else”.
So who is right?
It boils down to what you think particle physics is really about, and what motivates you to get into this business. One view is that particle physics is the study of the building blocks of matter, in which “new physics” means “new particles”. This is certainly the picture of the 1960s leading to the development of the Standard Model, but it’s not what drew me to the subject. To me, “particle physics” is the study of the fundamental laws of nature, governed by the still mysterious union of space–time and quantum mechanics. Indeed, from the deepest theoretical perspective, the very definition of what a particle is invokes both quantum mechanics and relativity in a crucial way. So if the biggest excitement for you is a cross-section plot with a huge bump in it, possibly with a ticket to Stockholm attached, then, after the discovery of the Higgs, it makes perfect sense to take your ball and go home, since we can make no guarantees of this sort whatsoever. We’re in this business for the long haul of decades and centuries, and if you don’t have the stomach for it, you’d better do something else with your life!
Isn’t the Standard Model a perfect example of the scientific method?
Sure, but part of the reason for the rapid progress in the 1960s is that the intellectual structure of relativity and quantum mechanics was already sitting there to be explored and filled in. But these more revolutionary discoveries took much longer, involving a wide range of theoretical and experimental results far beyond “bump plots”. So “new physics” is much more deeply about “new phenomena” and “new principles”. The discovery of the Higgs particle – especially with nothing else accompanying it so far – is unlike anything we have seen in any state of nature, and is profoundly “new physics” in this sense. The same is true of the other dramatic experimental discovery in the past few decades: that of the accelerating universe. Both discoveries are easily accommodated in our equations, but theoretical attempts to compute the vacuum energy and the scale of the Higgs mass pose gigantic, and perhaps interrelated, theoretical challenges. While we continue to scratch our heads as theorists, the most important path forward for experimentalists is completely clear: measure the hell out of these crazy phenomena! From many points of view, the Higgs is the most important actor in this story amenable to experimental study, so I just can’t stand all the talk of being disappointed by seeing nothing but the Higgs; it’s completely backwards. I find that the physicists who worry about not being able to convince politicians are (more or less secretly) not able to convince themselves that it is worth building the next collider. Fortunately, we do have a critical mass of fantastic young experimentalists who believe it is worth studying the Higgs to death, while also exploring whatever might be at the energy frontier, with no preconceptions about what they might find.
What makes the Higgs boson such a rich target for a future collider?
It is the first example we’ve seen of the simplest possible type of elementary particle. It has no spin, no charge, only mass, and this extreme simplicity makes it theoretically perplexing. There is a striking difference between massive and massless particles that have spin. For instance, a photon is a massless particle of spin one; because it moves at the speed of light, we can’t “catch up” with it, and so we only see it have two “polarisations”, or ways it can spin. By contrast the Z boson, which also has spin one, is massive; since you can catch up with it, you can see it spinning in any of three directions. This “two not equal to three” business is quite profound. As we collide particles at ever increasing energies, we might think that their masses are irrelevant tiny perturbations to their energies, but this is wrong, since something must account for the extra degrees of freedom.
The whole story of the Higgs is about accounting for this “two not equal to three” issue, to explain the extra spin states needed for massive W and Z particles mediating the weak interactions. And this also gives us a good understanding of why the masses of the elementary particles should be pegged to that of the Higgs. But the huge irony is that we don’t have any good understanding for what can explain the mass of the Higgs itself. That’s because there is no difference in the number of degrees of freedom between massive and massless spin-zero particles, and related to this, simple estimates for the Higgs mass from its interactions with virtual particles in the vacuum are wildly wrong. There are also good theoretical arguments, amply confirmed in analogous condensed-matter systems and elsewhere in particle physics, for why we shouldn’t have expected to see such a beast lonely, unaccompanied by other particles. And yet here we are. Nature clearly has other ideas for what the Higgs is about than theorists do.
Is supersymmetry still a motivation for a new collider?
Nobody who is making the case for future colliders is invoking, as a driving motivation, supersymmetry, extra dimensions or any of the other ideas that have been developed over the past 40 years for physics beyond the Standard Model. Certainly many of the versions of these ideas, which were popular in the 1980s and 1990s, are either dead or on life support given the LHC data, but others proposed in the early 2000s are alive and well. The fact that the LHC has ruled out some of the most popular pictures is a fantastic gift to us as theorists. It shows that understanding the origin of the Higgs mass must involve an even larger paradigm change than many had previously imagined. Ironically, had the LHC discovered supersymmetric particles, the case for the next circular collider would be somewhat weaker than it is now, because that would (indirectly) support a picture of a desert between the electroweak and Planck scales. In this picture of the world, most people wanted a linear electron–positron collider to measure the superpartner couplings in detail. It’s a picture people very much loved in the 1990s, and a picture that appears to be wrong. Fine. But when theorists are more confused, it’s the time for more, not fewer, experiments.
What definitive answers will a future high-energy collider give us?
First and foremost, we go to high energies because it’s the frontier, and we look around for new things. While there is absolutely no guarantee we will produce new particles, we will definitely stress test our existing laws in the most extreme environments we have ever probed. Measuring the properties of the Higgs, however, is guaranteed to answer some burning questions. All the drama revolving around the existence of the Higgs would go away if we saw that it had substructure of any sort. But from the LHC, we have only a fuzzy picture of how point-like the Higgs is. A Higgs factory will decisively answer this question via precision measurements of the coupling of the Higgs to a slew of other particles in a very clean experimental environment. After that the ultimate question is whether or not the Higgs looks point-like even when interacting with itself. The simplest possible interaction between elementary particles is when three particles meet at a space–time point. But we have actually never seen any single elementary particle enjoy this simplest possible interaction. For good reasons going back to the basics of relativity and quantum mechanics, there is always some quantum number that must change in this interaction – either spin or charge quantum numbers change. The Higgs is the only known elementary particle allowed to have this most basic process as its dominant self-interaction. A 100 TeV collider producing billions of Higgs particles will not only detect the self-interaction, but will be able to measure it to an accuracy of a few per cent. Just thinking about the first-ever probe of this simplest possible interaction in nature gives me goosebumps.
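For reference, the strength of that triple-Higgs vertex is fixed in the Standard Model by expanding the Higgs potential about the vacuum expectation value v ≈ 246 GeV (a textbook result, not part of the interview):

```latex
V(h) \;=\; \tfrac{1}{2} m_h^2\, h^2 \;+\; \frac{m_h^2}{2v}\, h^3 \;+\; \frac{m_h^2}{8v^2}\, h^4 .
```

A few-per-cent measurement of the h³ term at a 100 TeV collider therefore tests directly whether the self-interaction follows this minimal pattern or deviates from it.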
What are the prospects for future dark-matter searches?
Beyond the measurements of the Higgs properties, there are all sorts of exciting signals of new particles that can be looked for at both Higgs factories and 100 TeV colliders. One I find especially important is WIMP dark matter. There is a funny perception, somewhat paralleling the absence of supersymmetry at the LHC, that the simple paradigm of WIMP dark matter has been ruled out by direct-detection experiments. Nope! In fact, the very simplest models of WIMP dark matter are perfectly alive and well. Once the electroweak quantum numbers of the dark-matter particles are specified, you can unambiguously compute what mass an electroweak charged dark-matter particle should have so that its thermal relic abundance is correct. You get a number between 1 and 3 TeV, far too heavy to be produced in any sizeable numbers at the LHC. Furthermore, they happen to have minuscule interaction cross sections for direct detection. So these very simplest theories of WIMP dark matter are inaccessible to the LHC and direct-detection experiments. But a 100 TeV collider has just enough juice to either see these particles, or rule out this simplest WIMP picture.
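As a very rough illustration of that statement (a back-of-the-envelope estimate, not a real relic-abundance calculation; the coupling strength and the canonical thermal cross-section below are assumed standard values):

```python
# Crude estimate of the mass scale of a thermal, electroweak-charged relic:
# require an annihilation cross-section of weak strength, ~pi*alpha_2^2/m^2,
# to match the canonical thermal-relic value of ~3e-26 cm^3/s.
import math

ALPHA_2 = 0.034           # SU(2) coupling alpha_2 = g^2/(4*pi), assumed value
SIGMAV_THERMAL = 3e-26    # cm^3/s, canonical thermal-relic annihilation rate
GEV2_TO_CM3_S = 1.17e-17  # conversion factor: (hbar*c)^2 * c per GeV^-2

m_wimp = math.sqrt(math.pi * ALPHA_2**2 * GEV2_TO_CM3_S / SIGMAV_THERMAL)  # GeV
print(f"rough WIMP mass scale: {m_wimp/1e3:.1f} TeV")   # ~1 TeV
```

Careful calculations that include the full electroweak multiplet structure, coannihilation and Sommerfeld effects push this to the quoted 1–3 TeV range, depending on whether the dark matter sits in a doublet or a triplet of the weak interaction.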
What is the cultural value of a 100 km supercollider?
Both the depth and visceral joy of experiments in particle physics are revealed in how simple it is to explain: we smash things together with the largest machines that have ever been built, to probe the fundamental laws of nature at the tiniest distances we’ve ever seen. But it goes beyond that to something more important about our self-conception as people capable of doing great things. The world has all kinds of long-term problems, some of which might seem impossible to solve. So it’s important to have a group of people who, over centuries, give a concrete template for how to go about grappling with and ultimately conquering seemingly impossible problems, driven by a calling far larger than themselves. Furthermore, suppose it’s 200 years from now, and there are no big colliders on the planet. How can humans be sure that the Higgs or top particles exist? Because it says so in dusty old books? There is an argument to be made that as we advance we should be able to do the things we did in the past. After all, the last time that fundamental knowledge was shoved in old dusty books was in the dark ages, and that didn’t go very well for the West.
What about justifying the cost of the next collider?
There are a number of projects and costs we could be talking about, but let’s call it $5–25 billion. Sounds like a lot, right? But the global economy is growing, not shrinking, and the cost of accelerators as a fraction of GDP has barely changed over the past 40 years – even a 100 TeV collider is in this same ballpark. Meanwhile the scientific issues at stake are more profound than they have been for many decades, so we certainly have an honest science case to make that we need to keep going.
People sometimes say that if we don’t spend billions of dollars on colliders, then we can do all sorts of other experiments instead. I am a huge fan of small-scale experiments, but this argument is silly because science funding is infamously not a zero-sum game. So, it’s not a question of, “do we want to spend tens of billions on collider physics or something else instead”, it is rather “do we want to spend tens of billions on fundamental physics experiments at all”.
Another argument is that we should wait until some breakthrough in accelerator technology, rather than just building bigger machines. This is naïve. Of course miracles can always happen, but we can’t plan science around miracles. Similar arguments were made around the time of the cancellation of the Superconducting Super Collider (SSC) 30 years ago, with prominent condensed-matter physicists saying that the SSC should wait for the development of high-temperature superconductors that would dramatically lower the cost. Of course those dreamed-of practical superconductors never materialised, while particle physics went from strength to strength with the best technology available.
What do you make of claims that colliders are no longer productive?
It would be only to the good to have a no-holds-barred, public discussion about the pros and cons of future colliders, led by people with a deep understanding of the relevant technical and scientific issues. It’s funny that non-experts don’t even make the best arguments for not building colliders; I could do a much better job than they do! I can point you to an awesomely fierce debate about future colliders that already took place in China two years ago (Int. J. Mod. Phys. A 31 1630053 and 1630054). C N Yang, who is one of the greatest physicists of the 20th century and enormously influential in China, came out with a strong attack on colliders, not only in China but more broadly. I was delighted. Having a serious attack meant there could be a serious response, masterfully provided by David Gross. It was the King Kong vs Godzilla of fundamental physics, played out on the pages of major newspapers in China, fantastic!
What are you working on now?
About a decade ago, after a few years of thinking about the cosmology of “eternal inflation” in connection with solutions to the cosmological constant and hierarchy problems, I concluded that these mysteries can’t be understood without reconceptualising what space–time and quantum mechanics are really about. I decided to warm up by trying to understand the dynamics of particle scattering, like collisions at the LHC, from a new starting point, seeing space-time and quantum mechanics as being derived from more primitive notions. This has turned out to be a fascinating adventure, and we are seeing more and more examples of rather magical new mathematical structures, which surprisingly appear to underlie the physics of particle scattering in a wide variety of theories, some close to the real world. I am also turning my attention back to the goal that motivated the warm-up, trying to understand cosmology, as well as possible theories for the origin of the Higgs mass and cosmological constant, from this new point of view. In all my endeavours I continue to be driven, first and foremost, by the desire to connect deep theoretical ideas to experiments and the real world.
The study of the production of quarkonia, the bound states of heavy quark–antiquark pairs, is an important goal of the ALICE physics programme. The quarkonium yield is suppressed in heavy-ion collisions when compared with proton–proton collisions because the binding force is screened by the hot and dense medium. This suppression is expected to be greatest for the most central events, in which the heavy ions collide head-on.
The ALICE collaboration has recently analysed the suppression of inclusive bottomonium (bb̅) production in lead–lead collisions relative to proton–proton collisions. This reduction is quantified in terms of the nuclear modification factor RAA, which is the ratio of the measured yield in lead–lead to proton–proton collisions corrected by the number of binary nucleon–nucleon collisions. An RAA value of unity would indicate no suppression, whereas zero indicates full suppression. The bottomonium states ϒ(1S) and ϒ(2S) were measured via their decays to muon pairs at a centre-of-mass energy per nucleon–nucleon pair of 5.02 TeV, in the rapidity range 2.5 < y < 4 and with transverse momenta up to 15 GeV/c. No significant variation of RAA is observed as a function of transverse momentum or rapidity; however, production is increasingly suppressed with increasing centrality (figure 1), with RAA decreasing from 0.60 ± 0.10 (stat) ± 0.04 (syst) for peripheral collisions (the 50–90% centrality class) to 0.34 ± 0.03 (stat) ± 0.02 (syst) for the 10% most central collisions.
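In symbols (the standard definition, consistent with the description above), for a given centrality class:

```latex
R_{AA} \;=\; \frac{N_{\Upsilon}^{\mathrm{PbPb}}}{\langle N_{\mathrm{coll}}\rangle \; N_{\Upsilon}^{pp}} ,
```

where the numerator and denominator are the per-event ϒ yields in lead–lead and proton–proton collisions, and ⟨N_coll⟩ is the average number of binary nucleon–nucleon collisions estimated for that centrality class.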
Theoretical models must deal with the competing effects of melting and (re)generation of the ϒ within the quark–gluon plasma, the shadowing of parton densities in the initial state and “feed-down” from higher resonance states. Owing to uncertainties in the parton densities, it is not yet known whether the direct production of ϒ(1S) is suppressed, or merely the feed-down from ϒ(2S) and other higher-mass states. Nevertheless, the precision of these measurements imposes significant new constraints on the modelling of ϒ production in lead–lead collisions.
Newly published results from the MINOS+ experiment at Fermilab in the US cast fresh doubts on the existence of the sterile neutrino – a hypothetical fourth neutrino flavour that would constitute physics beyond the Standard Model. MINOS+ studies how muon neutrinos oscillate into other neutrino flavours as a function of distance travelled, using magnetised-iron detectors located 1 and 735 km downstream from a neutrino beam produced at Fermilab.
Neutrino oscillations, predicted more than 60 years ago and finally confirmed in 1998, explain the observed transmutation of neutrinos from one flavour to another as they travel. Tantalising hints of new-physics effects in short-baseline accelerator-neutrino experiments have persisted since 1995, when the Liquid Scintillator Neutrino Detector (LSND) at Los Alamos National Laboratory reported an excess of 88 ± 23 electron-antineutrino events in a muon–antineutrino beam. This suggested that muon antineutrinos were oscillating into electron antineutrinos along the way, but not in the way expected if there are only three neutrino flavours.
The plot thickened in 2007 when another Fermilab experiment, MiniBooNE, an 818 tonne mineral-oil Cherenkov detector located 541 m downstream from Fermilab’s Booster neutrino beamline, began to see a similar effect. The excess grew, and last November the MiniBooNE collaboration reported a 4.5σ deviation from the predicted event rate for the appearance of electron neutrinos in a muon neutrino beam. In the meantime, theoretical revisions in 2011 meant that measurements of neutrinos from nuclear reactors also show deviations suggestive of sterile-neutrino interference: the so-called “reactor anomaly”.
Tensions have been running high. The latest results from MINOS+, first reported in 2017 and recently accepted for publication in Physical Review Letters, fail to confirm the MiniBooNE signal. The MINOS+ results are also consistent with those from a comparable analysis of atmospheric neutrinos in 2016 by the IceCube detector at the South Pole. “LSND, MiniBooNE and the reactor data are fairly compatible when interpreted in terms of sterile neutrinos, but they are in stark conflict with the null results from MINOS+ and IceCube,” says theorist Joachim Kopp of CERN. “It might be possible to come up with a model that allows compatibility, but the simplest sterile neutrino models do not allow this.” In late February, the long-baseline T2K experiment in Japan joined the chorus of negative searches for the sterile neutrino, although excluding a different region of parameter space.
Whereas MiniBooNE and LSND sought to observe a second-order flavour transition (in which a muon neutrino morphs into a sterile and then electron neutrino), MINOS+ and IceCube are sensitive to a first-order muon-to-sterile transition that would reduce the expected flux of muon neutrinos. Such “disappearance” experiments are potentially more sensitive to sterile neutrinos, provided systematic errors are carefully modelled.
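In the standard “3+1” short-baseline approximation (textbook formulas, not specific to either experiment), the two channels are tied together by the elements of the enlarged mixing matrix:

```latex
P(\nu_\mu \to \nu_e) \;\simeq\; 4\,|U_{e4}|^2|U_{\mu4}|^2 \,\sin^2\!\left(\frac{\Delta m^2_{41} L}{4E}\right),
\qquad
P(\nu_\mu \to \nu_\mu) \;\simeq\; 1 - 4\,|U_{\mu4}|^2\bigl(1-|U_{\mu4}|^2\bigr)\,\sin^2\!\left(\frac{\Delta m^2_{41} L}{4E}\right).
```

A null result for muon-neutrino disappearance therefore bounds |U_{μ4}|², which in turn caps the appearance amplitude that the LSND and MiniBooNE signals require – the origin of the tension described above.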
“The MiniBooNE observations interpreted as a pure sterile neutrino oscillation signal are incompatible with the muon-neutrino disappearance data,” says MINOS+ spokesperson Jenny Thomas of University College London. “In the event that the most likely MiniBooNE signal were due to a sterile neutrino, the signal would be unmissable in the MINOS/MINOS+ neutral-current and charged-current data sets.” Taking into account simple unitarity arguments, adds Thomas, the latest MINOS+ analysis is incompatible with the MiniBooNE result at the 2σ level, and at the 3σ level below a “mass-splitting” of 1 eV² (see figure 1).
The sterile-neutrino hypothesis is also in tension with cosmological data, says theorist Silvia Pascoli of Durham University. “Sterile neutrinos with these masses and mixing angles would be copiously produced in the early universe and would make up a significant fraction of hot dark matter. This is somewhat at odds with cosmological observations.”
One possibility for the surplus of electron-neutrino-like events in MiniBooNE is insufficient accuracy in the way neutrino–nucleus interactions in the detector are modelled – a challenge for neutrino-oscillation experiments generally. According to MiniBooNE collaborator Teppei Katori, one effect proposed to account for the MiniBooNE anomaly is neutral-current single-gamma production. “This rare process is of much theoretical interest, both within and beyond the Standard Model, but the calculations are not yet tractable at low energies (around 1 GeV), as they lie in the non-perturbative QCD region,” he says.
MINOS+ is now analysing its final dataset and working on a direct comparison with MiniBooNE to look for electron-neutrino appearance as well as the present study of muon-neutrino disappearance. Clarification could also come from other short-baseline experiments at Fermilab, in particular MicroBooNE, which has been operating since 2015, and the two liquid-argon detectors ICARUS and SBND (CERN Courier June 2017 p25). The most exciting possibility is that new physics is at play. “One viable explanation requires a new neutral-current interaction mediated by a new GeV-scale vector boson and sterile neutrinos with masses in the hundreds of MeV,” explains Pascoli. “So far this has not been excluded. And it is theoretically consistent. We have to wait and see.”
On 18 February the CMS and MoEDAL collaborations at CERN signed an agreement that will see a 6 m-long section of the CMS beam pipe cut into pieces and fed into a SQUID in the name of fundamental research. The 4 cm-diameter beryllium tube – which was in place from 2008 until its replacement by a new beam pipe for LHC Run 2 in 2013 – is now under the proud ownership of MoEDAL spokesperson Jim Pinfold and colleagues, who will use it to search for the existence of magnetic monopoles.
Magnetic monopoles with multiple magnetic charge, if produced in high-energy particle collisions at the LHC, are so highly ionising that they could stop in the material surrounding the collision points and bind there with the beryllium nuclei of the beam pipe. To detect the trapped monopoles, Pinfold and coworkers will pass the beam-pipe material through superconducting loops and look for a non-decaying current using highly precise SQUID-based magnetometers.
Materials from the CDF and D0 detectors at the Tevatron and from the H1 detector at HERA were subjected to such searches during the 1990s, and the first pieces of beam pipe from the LHC experiments, taken from the CMS region, were tested in 2012. But these were from regions far from the collision point, whereas the new study will use material surrounding the CMS central-interaction region. “It’s the most directly exposed piece of material of the experiment that the monopoles encounter when produced and moving away from the collision point,” says Albert De Roeck of CMS and MoEDAL, who was involved in the previous LHC and HERA studies. “Although no signs of monopoles have shown up in data so far, this new study pushes the search for monopoles with magnetic charge well beyond the five Dirac charges currently achievable with the MoEDAL detector.”
MoEDAL technical coordinator Richard Soluk and a small team of technicians will first cut the beampipe into bite-sized pieces at a special facility constructed at the Centre for Particle Physics at the University of Alberta, Canada, where they have to be especially careful because beryllium is highly toxic. The resulting pieces, carefully enshrined in plastic, will then be shipped back to Europe to the SQUID Magnetometer Laboratory at ETH Zurich, where the freshly sliced beam pipe will undergo a short measurement campaign planned for early summer. “On the analysis front we have to estimate how many monopoles would have been trapped in the beam pipe during its deployment at CMS as a function of monopole mass, spin, magnetic charge, kinetic energy and production mechanism,” says Pinfold.
The latest search is complementary to general monopole searches that have already been carried out by the ATLAS and MoEDAL collaborations. Deployed at LHC Point 8, MoEDAL contains more than 100 m² of nuclear-track detectors that are sensitive only to new physics and has a dedicated trapping detector consisting of around one tonne of aluminium.
“Most modern theories such as GUTs and string theory require the existence of monopoles,” says Pinfold. “The monopole is the most important particle not yet found.”
On 16 June 2018, a bright burst of light was observed by the Asteroid Terrestrial-impact Last Alert System (ATLAS) telescope in Hawaii, which automatically searches for optical transient events. The event, which received the automated catalogue name “AT2018cow”, immediately attracted a lot of attention and acquired a shorter name: “the Cow”. While transient objects are observed in the sky every day – caused, for example, by nearby asteroids or supernovae – two factors make the Cow intriguing. First, the very short time it took for the event to reach its extreme brightness and fade away again indicates that it was unlike anything observed before. Second, it took place relatively close to Earth, 200 million light-years away in a star-forming arm of a galaxy in the Hercules constellation, making it possible to study the event across a wide range of wavelengths.
Soon after the ATLAS detection, the object was observed by more than 20 different telescopes around the world, revealing it to be 10–100 times brighter than a typical supernova. In addition to optical measurements, the object was observed for several days by space-based X- and gamma-ray telescopes such as NuSTAR, XMM-Newton, INTEGRAL and Swift, which also observed it in the UV energy range, as well as by radio telescopes on Earth. The IceCube observatory in Antarctica also identified two possible neutrinos coming from the Cow, although the detection is still compatible with a background fluctuation. The combination of all the data – demonstrating the power of multi-messenger astronomy – confirmed that this was not an ordinary supernova, but potentially something completely different.
Bright spark
While standard supernovae take several days to reach maximum brightness, the Cow did so in just 1.5 days, after which its brightness also faded much faster than that of a typical supernova. Another notable feature was the lack of heavy-element decays. Normally, elements such as 56Ni produced during the explosion are the main source of a supernova’s brightness, but the Cow only revealed signs of lighter elements such as hydrogen and helium. Furthering the event’s mystique is the variability of the X-ray emission several days after its discovery, which is a clear sign of an energy source at its centre. Half a year after its discovery, two opposing theories aim to explain these features.
The first theory holds that an unlucky object was destroyed when it came too close to a black hole – a phenomenon called a tidal disruption event. The fast increase in brightness excludes normal stars. On the other hand, a more compact object such as a neutron star – a very dense star consisting of neutron matter – cannot explain the hydrogen and helium observed in the remnant, since it contains essentially none of these elements. The remaining possibility is a white dwarf, a dense star remaining after a normal star has ceased fusion, kept from gravitational collapse into a neutron star or black hole by the electron-degeneracy pressure in its core. The observed emission from the Cow could be explained if a white dwarf was torn apart by tidal forces in the vicinity of a massive black hole. One problem with this theory, however, is the event’s location, since black holes of the size required for such an event are normally not found in the spiral arms of galaxies.
The opposing theory is that the Cow was a special type of supernova in which either a black hole or a quickly rotating, highly magnetic neutron star – a magnetar – is produced. While the bright emission in the optical and UV bands is produced by the supernova-like event, the variable X-ray emission is produced by radiating gas falling into the compact object. Normally the debris of a supernova blocks most of the light from reaching us, but the progenitor of the Cow was likely a relatively low-mass star that produced little debris. A hint of its low mass was also found in the X-ray data. If so, this would be the first observation of the birth of a compact object, making these data very valuable for further theoretical development. Such magnetar sources could also be responsible for ultra-high-energy cosmic rays as well as high-energy neutrinos, two of which might have been observed already. The debate on the nature of the Cow continues, but the wealth of information gathered so far indicates the growing importance of multi-messenger astronomy.
Precision measurements of diboson processes at the LHC are powerful probes of the gauge structure of the Standard Model at the multi-TeV energy scale. Among the most interesting directions in the diboson physics programme is the study of gauge-boson polarisation. The existence of three polarisation states is predicted by the Standard Model. The transverse polarisation is composed of right- and left-handed states, with spin either parallel or antiparallel to the momentum vector of the boson. The third state, a longitudinally-polarised component, is generated when the W and Z bosons acquire mass through electroweak symmetry breaking, and is therefore under particular scrutiny.
New phenomena can alter the polarisation predicted by the Standard Model due to interference between new-physics amplitudes and diagrams with gauge-boson self-interactions. WZ production, with its clean experimental signature, offers a sensitive way to search for such anomalies by providing a direct probe of the WWZ gauge coupling, due to the s-channel “Z-strahlung” contribution, where the W radiates a Z.
Building on precision WZ measurements previously reported by the ATLAS and CMS collaborations, a recent ATLAS result constitutes the most precise WZ measurement at a centre-of-mass energy of 13 TeV, and provides the first measurement of the polarisation of pair-produced vector bosons in hadron collisions. Based on 36.1 fb⁻¹ of data collected in 2015 and 2016 by the ATLAS detector, and using leptonic decay modes of the gauge bosons to electrons or muons, ATLAS has achieved a precision of 4.5% for the WZ cross section measured in a fiducial phase space closely matching the detector acceptance. The kinematics of WZ events, including the underlying dynamics of accompanying hadronic jets, has been studied in detail by measuring the cross section as a function of several observables.
The polarisation states of the W and Z bosons can be probed through the distribution of the decay angle of the leptons relative to the direction of the parent boson (figure 1, left). A binned profile-likelihood fit of templates describing the three helicity states allowed ATLAS to extract the W and Z polarisations in the fiducial measurement region. Because of the incomplete knowledge of the neutrino momentum originating from the W-boson decay, it is more difficult to measure the helicity fractions of the W than of the Z. The fraction of longitudinally polarised W bosons in WZ events is found to be 0.26 ± 0.06 (figure 1, right), while the longitudinal fraction of the Z boson is found to be 0.24 ± 0.04. The analysis leads to an observed significance of 4.2 standard deviations for the presence of longitudinally polarised W bosons, and 6.5 standard deviations for longitudinally polarised Z bosons.
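As a toy illustration of the template-fit idea (a sketch only, not the ATLAS analysis: the angular shapes are the standard leading-order ones and all yields are invented), the helicity fractions can be extracted from a binned cosθ distribution roughly as follows:

```python
# Toy template fit for helicity fractions f0 (longitudinal), fL and fR from
# a binned cos(theta) distribution. Shapes are the leading-order lepton
# decay-angle distributions; the data are pseudo-data, not ATLAS data.
import numpy as np
from scipy.optimize import minimize

def shape_long(c):  return 0.75 * (1.0 - c**2)     # longitudinal
def shape_left(c):  return 0.375 * (1.0 - c)**2    # one transverse state
def shape_right(c): return 0.375 * (1.0 + c)**2    # the other (labels swap under c -> -c)

edges = np.linspace(-1.0, 1.0, 21)
centres = 0.5 * (edges[:-1] + edges[1:])
width = edges[1] - edges[0]

def expected(f0, fL, n_tot):
    fR = 1.0 - f0 - fL
    dens = f0 * shape_long(centres) + fL * shape_left(centres) + fR * shape_right(centres)
    return n_tot * dens * width

rng = np.random.default_rng(1)
data = rng.poisson(expected(f0=0.25, fL=0.55, n_tot=20000))  # pseudo-data

def nll(params):                      # Poisson negative log-likelihood (up to a constant)
    f0, fL, n_tot = params
    fR = 1.0 - f0 - fL
    if min(f0, fL, fR, n_tot) <= 0:
        return 1e12
    mu = expected(f0, fL, n_tot)
    return float(np.sum(mu - data * np.log(mu)))

fit = minimize(nll, x0=[0.3, 0.4, float(data.sum())], method="Nelder-Mead")
f0, fL, n_tot = fit.x
print(f"f0 = {f0:.3f}, fL = {fL:.3f}, fR = {1.0 - f0 - fL:.3f}")
```

The real measurement must in addition reconstruct the W rest frame despite the unmeasured neutrino, and propagate detector response and backgrounds through the templates, which is why the W fractions carry larger uncertainties than the Z ones.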
Improved precision
The measurements are dominated by statistical uncertainties, but future datasets will improve precision and allow the collaboration to probe new-physics effects in events where both the Z and the W are longitudinally polarised. The ultimate target is to measure the scattering of longitudinally polarised vector bosons: this would be a direct test of electroweak symmetry breaking.
The Standard Model (SM) allows neutral flavoured mesons such as the D0 to oscillate into their antiparticles. Having first observed this process in 2012, the LHCb collaboration has recently made some of the world’s most precise measurements of this behaviour, which is potentially sensitive to new physics. The oscillation of the D0 (cu̅) into its antiparticle, the D̅0 (c̅u), occurs through the exchange of massive virtual particles. These might include as-yet undiscovered particles, so the measurements are sensitive to non-Standard Model dynamics at large energy scales. By examining D0 and D̅0 mesons separately, it is also possible to search for the violation of charge–parity (CP) symmetry in the charm sector. Such effects are predicted to be very small. Therefore, given LHCb’s current level of experimental precision, any sign of CP violation would be a clear indication of physics beyond the Standard Model.
Due to quantum-mechanical mixing between the neutral charm meson’s mass and flavour eigenstates, the probabilities of observing either it or its antiparticle vary as a function of time. This mixing can be described by two parameters, x and y, which relate the properties of the mass eigenstates: x is the normalised difference in mass, and y is the normalised difference in width, or inverse lifetime. The mixing rate is very slow, making these parameters difficult to measure. Isolating the differences between the D0 and D̅0 mesons is an even greater challenge. For these two papers, LHCb was able to achieve small statistical uncertainties thanks to the large samples of charm mesons collected during Run 1, and minimised systematic uncertainties by measuring ratios of yields to cancel detector effects.
In the first paper, LHCb physicists studied the effective lifetime of the mesons. As a consequence of mixing, the effective decay width to CP-even final states, such as K+K– and π+π–, differs from the average width measured in decays such as D0 → K– π+. The parameter yCP, which in the limit of CP symmetry is equal to y, can be deduced from the ratio of decay rates to these two final states as a function of time. LHCb measured yCP with the same precision as all previous measurements combined, obtaining a value consistent with the world-average value of y.
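In the usual conventions (standard definitions, not numbers from these papers), the mixing parameters and the observable measured here are:

```latex
x = \frac{\Delta m}{\Gamma}, \qquad y = \frac{\Delta\Gamma}{2\Gamma}, \qquad
y_{CP} \;\simeq\; \frac{\tau(D^0 \to K^-\pi^+)}{\tau(D^0 \to K^+K^-)} - 1 ,
```

where Δm and ΔΓ are the mass and width differences between the two mass eigenstates, Γ is their average width and τ denotes an effective lifetime; in the limit of CP symmetry y_CP = y.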
In the second analysis, LHCb reconstructed D0 decays into the final state K0S π+π– to measure the parameter x, which had not previously been shown to differ from zero. In this mode, mixing manifests as small variations in the decay rate in different parts of phase space as a function of time. Measuring it requires good control over experimental effects as a function of both phase space and decay time. LHCb achieved this by measuring the ratios of the yields in complementary regions of phase space (mirrored in the Dalitz plane) as a function of time. The measured value of x is the world’s most precise, and in combination with previous measurements there is now evidence that it differs from zero.
As well as the mixing itself, both analyses are also sensitive to mixing-induced CP violation. While CP violation was not observed, the limits on its parameters were greatly improved (figure 1). This is a good example of how different decay modes give complementary information and, when taken together, can have a big impact. LHCb will continue to perform measurements with additional modes and the larger samples collected in Run 2.
The fundamental structure of nucleons is determined by the properties and dynamics of their constituent quarks and gluons, as described by QCD. The gluon’s self-interaction complicates this picture considerably. Non-linear recombination reactions, in which two gluons fuse, are predicted to lead to a saturation of the gluon density. This largely unexplored phenomenon is expected to occur when the gluons in a hadron overlap transversally, and is enhanced for heavy nuclei. Gluon saturation may be studied in proton–lead collisions at the LHC in the kinematic region where the gluon density is high and the gluons have sizable transverse dimensions.
Gluon saturation has been a focal point for the heavy-ion community for decades. Precision measurements at HERA, RHIC and previously at the LHC agree with the predictions of saturation models; however, the measurements do not allow an unambiguous interpretation of whether gluon saturation occurs in nature. This is a strong motivation both for the LHC experiments and for the planned Electron–Ion Collider (CERN Courier October 2018 p31).
The CMS collaboration recently submitted a paper on gluon saturation in proton–lead collisions to the Journal of High Energy Physics (JHEP). The collisions used for this analysis were recorded in 2013 at a centre-of-mass energy per nucleon–nucleon pair of about 5 TeV, using the CMS experiment’s CASTOR calorimeter. This is a very forward calorimeter of CMS, where “forward” refers to regions of the detector that are close to the beam pipe. As a result, unlike any other LHC experiment, CMS can measure jets at very forward angles (−6.6 < η < −5.2) and with transverse momenta (pT) as low as 3 GeV. This is the first time that a jet-energy spectrum measurement from the CASTOR calorimeter has been submitted to a journal.
Forward jets with small pT probe gluons in the high-density regime, where they have large transverse extent, making CASTOR ideal for a study of gluon saturation. Colliding protons with lead ions further enhances the sensitivity of the CASTOR jet spectra to saturation effects, enabling this measurement to overcome the ambiguity associated with the interpretation of the previous results.
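A rough leading-order estimate (generic collider kinematics, not a number quoted by the analysis) shows why such low-pT forward jets are interesting: a jet of transverse momentum pT and rapidity y originates from partons carrying momentum fractions of roughly

```latex
x_{1,2} \;\approx\; \frac{p_T}{\sqrt{s_{NN}}}\, e^{\pm y}
\quad\Rightarrow\quad
x \;\approx\; \frac{3~\mathrm{GeV}}{5000~\mathrm{GeV}}\, e^{-6} \;\approx\; 1.5\times10^{-6}
```

on one side of the collision – deep in the region where gluon densities are largest and where saturation effects, if they occur, should be strongest.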
The jet-energy spectrum obtained using CASTOR was compared to two saturation models (figure 1, left): the “Katie KS” model and predictions from the AAMQS collaboration, the latter based on the colour-glass-condensate framework. In the Katie KS model, the strength of the non-linear gluon recombination reactions can be varied; the linear and non-linear predictions differ by an order of magnitude in the lowest energy bins of the spectrum, which correspond to low-pT jets, and converge at the highest energies, confirming the high sensitivity of the measurement to gluon saturation. The AAMQS predictions progressively underestimated the data, by up to an order of magnitude, in the region most strongly affected by saturation. Overall, neither model described the spectrum correctly.
The spectrum was also compared to two cosmic-ray models (EPOS-LHC and QGSJetII-04) and to the HIJING event generator (figure 1, right). The former models underestimated the data by over two orders of magnitude, while HIJING, which incorporates an implementation of nuclear shadowing, agreed well with the data. Nuclear shadowing is an interference effect between the nucleons of a heavy ion. Like gluon saturation, it is expected to lead to a decrease in the probability for a proton–lead collision to occur; however, further data analysis is required for more definite conclusions on nuclear shadowing.
These results establish CASTOR jets as an experimental reality, and their sensitivity to saturation effects encourages further, more refined CASTOR jet studies.
It is 20 years since the discovery that the expansion of the universe is accelerating, yet physicists still know precious little about the underlying cause. In a classical universe with no quantum effects, the cosmic acceleration can be explained by a constant that appears in Einstein’s equations of general relativity, albeit one with a vanishingly small value. But clearly our universe obeys quantum mechanics, and the ability of particles to fluctuate in and out of existence at all points in space leads to a prediction for Einstein’s cosmological constant that is 120 orders of magnitude larger than observed. “It implies that at least one, and likely both, of general relativity and quantum mechanics must be fundamentally modified,” says Clare Burrage, a theorist at the University of Nottingham in the UK.
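The “120 orders of magnitude” figure comes from a textbook comparison of a Planck-scale cutoff with the observed dark-energy scale; a crude version of that arithmetic (with assumed round numbers) looks like this:

```python
# Compare a naive vacuum energy density with a Planck-scale cutoff, ~M_Pl^4,
# to the observed dark-energy density, ~(2 meV)^4. Round numbers assumed.
import math

M_PLANCK_GEV = 1.2e19   # Planck mass in GeV, order of magnitude
DE_SCALE_GEV = 2e-12    # ~2 meV, the observed dark-energy scale

ratio = (M_PLANCK_GEV / DE_SCALE_GEV) ** 4
print(f"naive prediction / observation ~ 10^{math.log10(ratio):.0f}")
```

Lowering the cutoff softens the mismatch – a TeV-scale cutoff still leaves a discrepancy of some 60 orders of magnitude – which is why the problem is usually framed as a clash between quantum field theory and the observed acceleration rather than a detail of any particular model.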
With no clear alternative theory available, all attempts to explain the cosmic acceleration introduce a new entity called dark energy (DE) that makes up 70% of the total mass-energy content of the universe. It is not clear whether DE is due to a new scalar particle or a modification of gravity, or whether it is constant or dynamic. It’s not even clear whether it interacts with other fundamental particles or not, says Burrage. Since DE affects the expansion of space–time, however, its effects are imprinted on astronomical observables such as the cosmic microwave background and the growth rate of galaxies, and the main approach to detecting DE involves looking for possible deviations from general relativity on cosmological scales.
Unique environment
Collider experiments offer a unique environment in which to search for the direct production of DE particles, since they are sensitive to a multitude of signatures and therefore to a wider array of possible DE interactions with matter. Like other signals of new physics, DE (if accessible at small scales) could manifest itself in high-energy particle collisions either through direct production or via modifications of electroweak observables induced by virtual DE particles.
Last year, the ATLAS collaboration at the LHC carried out a first collider search for light scalar particles that could contribute to the accelerating expansion of the universe. The results demonstrate the ability of collider experiments to access new regions of parameter space and provide complementary information to cosmological probes.
Unlike dark matter, for which there exist many new-physics models to guide searches at collider experiments, few such frameworks exist that describe the interaction between DE and Standard Model (SM) particles. However, theorists have made progress by allowing the properties of the prospective DE particle and the strength of the force that it transmits to vary with the environment. This effective-field-theory approach integrates out the unknown microscopic dynamics of the DE interactions.
The new ATLAS search was motivated by a 2016 model by Philippe Brax of the Université Paris-Saclay, Burrage, Christoph Englert of the University of Glasgow, and Michael Spannowsky of Durham University. The model provides the most general framework for describing DE theories with a scalar field and contains as subsets many well-known specific DE models – such as quintessence, galileon, chameleon and symmetron. It extends the SM lagrangian with a set of higher dimensional operators encoding the different couplings between DE and SM particles. These operators are suppressed by a characteristic energy scale, and the goal of experiments is to pinpoint this energy for the different DE–SM couplings. Two representative operators predict that DE couples preferentially to either very massive particles like the top quark (“conformal” coupling) or to final states with high-momentum transfers, such as those involving high-energy jets (“disformal” coupling).
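Schematically, and following the effective-field-theory literature rather than any expression quoted here (normalisations and conventions vary between papers), the two representative operators take forms like

```latex
\mathcal{L} \;\supset\; \frac{\partial_\mu\phi\,\partial^\mu\phi}{M_1^{4}}\, T^{\nu}_{\;\nu}
\;+\; \frac{\partial_\mu\phi\,\partial_\nu\phi}{M_2^{4}}\, T^{\mu\nu},
```

where φ is the DE scalar, T^{μν} is the Standard Model energy–momentum tensor and M1, M2 are the suppression scales to be constrained: the first, conformal-type term weights heavy particles such as the top quark through the trace, while the second, disformal term weights final states with large momentum transfer.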
Signatures
“In a big class of these operators the DE particle cannot decay inside the detector, therefore leaving a missing energy signature,” explains Spyridon Argyropoulos of the University of Iowa, who is a member of the ATLAS team that carried out the analysis. “Two possible signatures for the detection of DE are therefore the production of a pair of top-antitop quarks or the production of high-energy jets, associated with large missing energy. Such signatures are similar to the ones expected by the production of supersymmetric top quarks (“stops”), where the missing energy would be due to the neutralinos from the stop decays or from the production of SM particles in association with dark-matter particles, which also leave a missing energy signature in the detector.”
The ATLAS analysis, which was based on 13 TeV LHC data corresponding to an integrated luminosity of 36.1 fb⁻¹, re-interprets the result of recent ATLAS searches for stop quarks and dark matter produced in association with jets. No significant excess over the predicted background was observed, setting the most stringent constraints on the suppression scale of conformal and disformal couplings of DE to normal matter in the context of an effective field theory of DE. The results show that the characteristic energy scale must be higher than approximately 300 GeV for the conformal coupling and above 1.2 TeV for the disformal coupling.
The search for DE at colliders is only at the beginning, says Argyropoulos. “The limits on the disformal coupling are several orders of magnitude higher than the limits obtained from other laboratory experiments and cosmological probes, proving that colliders can provide crucial information for understanding the nature of DE. More experimental signatures and more types of coupling between DE and normal matter have to be explored, and more optimal search strategies could be developed.”
With this pioneering interpretation of a collider search in terms of dark-energy models, ATLAS has become the first experiment to probe all forms of matter in the observable universe, opening a new avenue of research at the interface of particle physics and cosmology. A complementary laboratory measurement is also being pursued by CERN’s CAST experiment, which studies a particular incarnation of DE (chameleon) produced via interactions of DE with photons.
But DE is not going to give up its secrets easily, cautions theoretical cosmologist Dragan Huterer at the University of Michigan in the US. “Dark energy is normally considered a very large-scale phenomenon, but you may justifiably ask how the study of small systems in a collider can say anything about DE. Perhaps it can, but in a fairly model-dependent way. If ATLAS finds a signal that departs from the SM prediction it would be very exciting. But linking it firmly to DE would require follow-up work and measurements – all of which would be very exciting to see happen.”