Galactic map sheds light on dark energy

The largest 3D map of distant galaxies ever made has allowed one of the most precise measurements yet of dark energy, which is currently driving the accelerating expansion of the universe. The new measurements, which were carried out by the Baryon Oscillation Spectroscopic Survey (BOSS) programme of the Sloan Digital Sky Survey-III, took five years to make and include 1.2 million galaxies over one quarter of the sky – equating to a volume of 650 cubic billion light-years.

BOSS measures the expansion rate by determining the size of baryonic acoustic oscillations, which are remnants of primordial acoustic waves. “We see a dramatic connection between the sound-wave imprints seen in the cosmic microwave background and the clustering of galaxies 7–12 billion years later,” says Rita Tojeiro, co-leader of the BOSS galaxy-clustering working group. “The ability to observe a single well-modelled physical effect from recombination until today is a great boon for cosmology.”

The map shows galaxies being pulled towards each other by dark matter, while on much larger scales it reveals the effect of dark energy ripping the universe apart. It also reveals the coherent movement of galaxies toward regions of the universe with more matter, with the observed amount of in-fall explained well by general relativity. The results have been submitted to the Monthly Notices of the Royal Astronomical Society.

Belle II super-B factory experiment takes shape at KEK

Since CERN’s LHC switched on in the autumn of 2008, no new particle colliders have been built. SuperKEKB, under construction at the KEK laboratory in Tsukuba, Japan, is soon to change that. In contrast to the LHC, which is a proton–proton collider focused on producing the highest energies possible, SuperKEKB is an electron–positron collider that will operate at the intensity frontier to produce enormous quantities of B mesons.

At the intensity frontier, physicists search for signatures of new particles or processes by measuring rare or forbidden reactions, or by finding deviations from Standard Model (SM) predictions. Provided the couplings of the new particles are large, the “mass reach” of such searches can be as high as 100 TeV/c² – well beyond the reach of direct searches at current colliders. The flavour sector provides a particularly powerful way to address the many deficiencies of the SM: at the cosmological scale, the puzzle of the baryon–antibaryon asymmetry remains unexplained by known sources of CP violation; the SM does not explain why there should be only three generations of elementary fermions or why there is an observed hierarchy in the fermion masses; the theory falls short in accounting for the smallness of neutrino masses; and it is not clear whether there is only a single Higgs boson.

SuperKEKB follows in the footsteps of its predecessor KEKB, which recorded more than 1000 fb⁻¹ (one inverse attobarn, ab⁻¹) of data and achieved a world-record instantaneous luminosity of 2.1 × 10³⁴ cm⁻² s⁻¹. The goals for SuperKEKB are even more ambitious: its design luminosity of 8 × 10³⁵ cm⁻² s⁻¹ is 40 times that of the previous B-factory experiments, and the machine will operate in “factory” mode with the aim of recording an unprecedented data sample of 50 ab⁻¹.
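As a rough consistency check, these two numbers fix the beam time needed to reach the 50 ab⁻¹ target. The 10⁷-second “Snowmass year” of effective physics running used below is a conventional rule of thumb, not a figure from the article:

```python
# Back-of-envelope running time for the 50 ab^-1 goal at design luminosity.
L_design = 8e35              # design luminosity, cm^-2 s^-1
AB_INV_IN_CM2 = 1e42         # 1 ab^-1 = 1e42 cm^-2
target_ab = 50               # integrated-luminosity goal, ab^-1

beam_seconds = target_ab * AB_INV_IN_CM2 / L_design
print(beam_seconds)          # 6.25e7 s of collisions at design luminosity

# Assuming ~1e7 s of effective physics running per calendar year
# (the conventional "Snowmass year" - an assumption, not from the article),
# this is roughly six years of data-taking, consistent with operating
# into the late 2020s.
print(beam_seconds / 1e7)
```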

The trillions of electron–positron collisions provided by SuperKEKB will be recorded by an upgraded detector called Belle II, which must cope with the much larger beam-related backgrounds of the high-luminosity environment. Belle II, the first “super-B factory” experiment, is designed to deliver performance better than, or comparable to, that of the previous Belle experiment at KEKB and of BaBar at SLAC in Stanford, California. With the SM of weak interactions now well established, Belle II will focus on the search for new physics beyond the SM.

SuperKEKB was formally approved in October 2010, began construction in November 2011 and achieved its “first turns” in February this year (CERN Courier April 2016 p11). By the completion of the initial accelerator commissioning before Belle-II roll-in (so-called Phase 1), the machine was storing a current of 1000 mA in its low-energy positron ring (LER) and 870 mA in its high-energy electron ring (HER). As currently scheduled, SuperKEKB will produce its first collisions in late 2017 (Phase 2), and the first physics run with the full detector in place will take place in late 2018 (Phase 3). The experiment will operate until the late 2020s.

B-physics background

The Belle experiment took data at the KEKB accelerator between 1999 and 2010. At roughly the same time, the BaBar experiment operated at SLAC’s PEP-II accelerator. In 2001, these two “B factories” established the first signals of CP violation – and thereby of matter–antimatter asymmetries – in the B-meson sector. They also provided the experimental foundation for the 2008 Nobel Prize in Physics, awarded to theorists Makoto Kobayashi and Toshihide Maskawa for explaining CP violation through a complex phase in the weak interaction.

In addition to the observation of large CP violation in the low-background “golden” B → J/ψ KS-type decay modes, these B-factory experiments allowed many important measurements of weak interactions involving bottom and charm quarks as well as τ leptons. The B factories also discovered an unexpected crop of new strongly interacting particles known as the X, Y and Z states. Since 2008, a third major B factory, LHCb, has entered the game. One of the four main LHC detectors, LHCb has made a large number of new measurements of B and Bs mesons and b baryons produced in proton–proton collisions. The experiment has tightly constrained new-physics phases in the mixing-induced weak decays of Bs mesons, confirmed Belle’s discovery of the four-quark state Z(4430), and discovered the first two clear pentaquark states. Belle II is expected to be equally prolific and, together with LHCb, may discover signals of new physics in the coming decade.

Asymmetric collisions

The accelerator technology underpinning B factories is quite different from that of high-energy hadron colliders. B factories coherently produce quantum-mechanically entangled pairs of B and B̄ mesons, and measurements of time-dependent CP asymmetries require knowing the difference in the decay times of the two B mesons. With equal-energy beams, the B mesons travel only tens of microns from their production point and their decay vertices cannot be separated experimentally in silicon vertex detectors. To allow the experiments to observe the time difference, i.e. the spatial separation of the B vertices, the beams have asymmetric energies, so that the centre-of-mass system is boosted along the axis of the detector. For example, PEP-II collided 9 GeV electron and 3.1 GeV positron beams, while at KEKB the beam energies were 8 GeV and 3.5 GeV.

Charged particles within a beam undergo thermal motion just like gas molecules: they scatter to generate off-momentum particles at a rate given by the density and the temperature of the beam. Such off-momentum particles reduce the beam lifetime, increase beam sizes and generate detector background. To maximise the beam lifetime and reduce intra-beam scattering, SuperKEKB will collide 7 and 4 GeV electron and positron beams, respectively.
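The boost that these asymmetric energies buy can be estimated directly. For head-on beams with energies E⁻ and E⁺ (electron mass negligible), the centre-of-mass energy is √s ≈ 2√(E⁻E⁺) and the boost of the centre-of-mass frame is βγ = (E⁻ − E⁺)/√s. A minimal sketch comparing the three machines (the formula is standard kinematics; the comparison itself is not from the article):

```python
import math

def boost_betagamma(e_minus_gev, e_plus_gev):
    """beta*gamma of the centre-of-mass frame for head-on e+e- beams,
    neglecting the electron mass (an excellent approximation at GeV energies)."""
    sqrt_s = 2 * math.sqrt(e_minus_gev * e_plus_gev)  # ~ m(Upsilon(4S)) = 10.58 GeV
    return (e_minus_gev - e_plus_gev) / sqrt_s

print(round(boost_betagamma(9.0, 3.1), 3))   # PEP-II
print(round(boost_betagamma(8.0, 3.5), 3))   # KEKB
print(round(boost_betagamma(7.0, 4.0), 3))   # SuperKEKB
```

SuperKEKB’s milder asymmetry thus trades some boost (and hence some vertex separation) for the longer beam lifetime described above; the improved vertex resolution of the upgraded detector helps to compensate.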

Two strategies were employed at the B factories to separate the incoming and outgoing beams: PEP-II used magnetic separation in a strong dipole magnet near the interaction point, while KEKB used a crossing angle of 22 mrad. SuperKEKB will extend the approach of KEKB with a crossing angle of 83 mrad, with separate beamlines for the two rings and no shared magnets between them. While the beam currents will be somewhat higher at SuperKEKB than they were at KEKB, the most dramatic improvement in luminosity is the result of very flat low-emittance “cool beams” and much stronger focusing at the interaction point. Specifically, SuperKEKB uses the nano-beam scheme inspired by the design of Italian accelerator physicist Pantaleo Raimondi, which promises to reduce the vertical beam size at the interaction point to around 50 nm – 20 times smaller than at KEKB.

Although the former TRISTAN (and KEKB) tunnels were reused for the SuperKEKB facility, many of the other accelerator components are new or upgraded from KEKB. For example, the 3 km-circumference vacuum chamber of the LER is new and is equipped with an antechamber and titanium-nitride coating to fight the problem of photoelectrons. This process, in which low-energy electrons generated as photoelectrons or by ionisation of the residual gas in the beam pipe are attracted by the positively charged beam to form a cloud around it, was a scourge for the B factories and is also a major problem for the LHC. Many of the LER magnets are new, while a significant number of the HER magnets were rearranged to achieve a lower emittance; all are powered by newly designed power supplies with ppm-level precision. The RF system has been rearranged, with a new digital control system, to double the beam current, and many beam diagnostics and control systems were rebuilt from scratch.

During Phase 1 commissioning, after many iterations the LER optics were corrected to achieve design emittance. To achieve low-emittance positron beams, a new damping ring has been constructed that will be brought into operation in 2017. To meet the charge and emittance requirements of SuperKEKB, the linac injector complex has been upgraded and includes a new low-emittance electron gun. Key components of the accelerator – including the beam pipe, superconducting magnets, beam feedback and diagnostics – were developed in collaboration with international partners in Italy (INFN Frascati), the US (BNL), and Russia (BINP), and further joint work, which will also involve CERN, is expected.

During Phase 1, intensive efforts were made to tune the machine to minimise the vertical emittances in both rings, via measurements and corrections using orbit-response matrices. The estimated vertical emittances were below 10 pm in both rings, close to the design values. There were discrepancies, however, with the beam sizes measured by X-ray size monitors, especially in the HER; these are under investigation.

The early days of Belle and BaBar were plagued by beam-related backgrounds resulting from the then-unprecedented beam currents and strong beam focusing. In the case of Belle, the first silicon vertex detector was destroyed by an unexpected synchrotron-radiation “fan” produced by the electron beam passing through a steering magnet. Fortunately, the Belle team was able to build a replacement detector quickly and move on to compete with BaBar in the race to measure CP asymmetries in the B sector. As a result of these past experiences, we have adopted a rather conservative commissioning strategy for the SuperKEKB/Belle-II facility. This year, during Phase 1 of operation, a special-purpose device called BEAST II, consisting of seven types of background-measurement devices, was installed at the interaction point to characterise the expected Belle-II backgrounds.

At the beginning of next year, the Belle-II outer detector will be “rolled in” to the beamline and all components except the vertex detectors will be installed. The superconducting final-focus quadrupole magnets are among the most challenging parts of the accelerator. In autumn 2017, these final-focus magnets will be integrated with Belle II and the first runs of Phase 2 will commence. A new suite of background detectors will be installed, including a cartridge containing samples of the Belle-II vertex detectors. The first goal of the Phase-2 run is to achieve a luminosity above 10³⁴ cm⁻² s⁻¹ and to verify that the backgrounds are low enough for the vertex detector to be installed.

Belle reborn

With Belle II expected to face beam-related backgrounds 20 times higher than at Belle, the detector has been reborn to achieve the experiment’s main physics goals – namely, to measure rare or forbidden decays of B and D mesons and the τ lepton with better accuracy and sensitivity than before. While Belle II reuses Belle’s spectrometer magnet, many state-of-the-art technologies have been included in the detector upgrade. A new vertex-detector system comprising a two-layer pixel detector (PXD) based on “DEPFET” technology and a four-layer double-sided silicon-strip detector (SVD) will be installed. With the beam-pipe radius of SuperKEKB having been reduced to 10 mm, the first PXD layer can be placed just 14 mm from the interaction point to improve the vertex resolution significantly. The outermost SVD layer is located at a larger radius than the equivalent system at Belle, resulting in higher reconstruction efficiency for KS mesons, which is important for many CP-violation measurements.

A new central drift chamber (CDC) has been built with smaller cell sizes to be more robust against the higher level of beam-background hits. The new CDC has a larger outer radius (1111.4 mm, compared with 863 mm in Belle) and 56 measurement layers rather than 50, resulting in improved momentum resolution. Combined with the vertex detectors, Belle II has improved D*-meson reconstruction and hence better full-reconstruction efficiency for B mesons, which often include D*s among their weak-interaction decay products.

Because good particle identification is vital for identifying rare processes in the presence of very large backgrounds (for example, the measurement of B → Xd γ must contend with B → Xs γ background processes that are an order of magnitude larger), two newly developed ring-imaging Cherenkov detectors have been introduced at Belle II. The first, the time-of-propagation (TOP) counter, is installed in the barrel region and consists of a finely polished, optically flat quartz radiator and an array of pixelated micro-channel-plate photomultiplier tubes that can measure the propagation time of internally reflected Cherenkov photons with a resolution of around 50 ps. The second, the aerogel ring-imaging Cherenkov counter (A-RICH), is located in Belle II’s forward endcap region and will detect Cherenkov photons produced in an aerogel radiator with hybrid avalanche-photodiode sensors.

The electromagnetic calorimeter (ECL) reuses Belle’s thallium-doped caesium-iodide crystals, with new waveform-sampling read-out electronics implemented to resolve overlapping signals, so that π⁰ and γ reconstruction is not degraded even in the high-background environment. The flux return of the Belle-II solenoid magnet, which surrounds the ECL, is instrumented to detect KL mesons and muons (KLM). All of the endcap KLM layers and the innermost two layers of the barrel KLM were replaced with new scintillator-based detectors read out by solid-state photomultipliers. Signals from all of the Belle-II sub-detectors are read out through a common optical data-transfer system and backend modules. GRID computing distributed over KEK, Asia, Australia, Europe and North America will be used to process the large data volumes produced at Belle II by high-luminosity collisions, which are expected to be around 1.8 GB/s – similar to the rate at LHCb.

Construction of the Belle-II experiment is in full swing, with fabrication and installation of sub-detectors progressing from the outer to the inner regions. A recent milestone was the completion of the TOP installation in June, while installation of the CDC, A-RICH and endcap ECL will follow soon. The Belle-II detector will be rolled into the SuperKEKB beamline in early 2017 and beam collisions will start later in the year, marking Phase 2. After verifying the background conditions in beam collisions, Phase 3 will see the installation of the vertex-detector system, after which the first physics run can begin towards the end of 2018.

Unique data set

As a next-generation B factory, Belle II will serve as our most powerful probe yet of new physics in the flavour sector, and may discover new strongly interacting particles such as tetraquarks, molecules or perhaps even hybrid mesons. Collisions at SuperKEKB will be tuned to centre-of-mass energies corresponding to the masses of the Υ resonances, with most data to be collected at the Υ(4S) resonance. This is just above the threshold for producing quantum-correlated B-meson pairs with no fragmentation particles, which are optimal for measuring weak-interaction decays of B mesons.

SuperKEKB is both a super-B factory and a τ-charm factory: it will produce a total of 50 billion bb̄, cc̄ and τ⁺τ⁻ pairs over a period of eight years, and a team of more than 650 collaborators from 23 countries is already preparing to analyse this unique data set. The key open questions to be addressed include the search for new CP-violating phases in the quark sector, lepton-flavour violation and left–right asymmetries (see panel opposite).

Rare charged-B decays to leptonic final states are flagship measurements of the Belle-II research programme. The leptonic decay B → τν occurs in the SM via a W-annihilation diagram, with an expected branching fraction of (0.82 +0.05 −0.03) × 10⁻⁴, which would be modified if a non-standard particle such as a charged Higgs interfered with the W. Since the final state contains multiple neutrinos, it is measurable only at an electron–positron collider, where the centre-of-mass energy is precisely known. Belle II should reach a precision of 3% on this measurement, and should also observe the channel B → μν for tests of lepton-flavour universality.
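The quoted SM value can be reproduced from the standard tree-level formula for a leptonic B decay, Γ ∝ G_F² m_B m_τ² f_B² |V_ub|² (1 − m_τ²/m_B²)². A hedged sketch with representative inputs – the values of f_B and |V_ub| below are typical lattice/world-average numbers chosen for illustration, not figures taken from the article:

```python
import math

# SM branching fraction for B+ -> tau+ nu via W annihilation.
# All inputs are representative values, used here for illustration only.
G_F   = 1.1663787e-5              # Fermi constant, GeV^-2
m_B   = 5.27934                   # B+ mass, GeV
m_tau = 1.77686                   # tau mass, GeV
f_B   = 0.186                     # B decay constant, GeV (lattice-QCD estimate)
V_ub  = 3.7e-3                    # |V_ub| (illustrative)
tau_B = 1.638e-12 / 6.582119e-25  # B+ lifetime, s -> GeV^-1 (divide by hbar)

width = (G_F**2 * m_B * m_tau**2 / (8 * math.pi)
         * (1 - m_tau**2 / m_B**2)**2 * f_B**2 * V_ub**2)
bf = width * tau_B
print(bf)   # ~0.8e-4, consistent with the quoted SM expectation
```

Note how the rate scales as |V_ub|² f_B², which is why the theoretical uncertainty on the SM prediction is dominated by those two inputs.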

Perhaps the most interesting searches at Belle II will be in the analogous semi-leptonic decays, B → D*τν and B → Dτν, which are similarly sensitive to charged Higgs bosons. Recently, the combined measurements of these processes from BaBar, Belle and LHCb have pointed to a curious 4σ deviation of the decay rates from the SM prediction (see figure X). No such deviation is seen in B → τν, making it difficult to resolve the nature of the potential underlying new physics, and the Belle-II data set will be required to settle the issue.

Another 4σ anomaly persists in the B → K* ℓ⁺ℓ⁻ flavour-changing neutral-current loop processes observed by LHCb, which may be explained by the actions of new gauge bosons. By allowing the study of closely related processes, Belle II will be able to confirm whether this really is a sign of new physics and not an artifact of the theoretical predictions. The more precisely calculable inclusive transitions b → sγ and b → s ℓ⁺ℓ⁻ will be compared with the exclusive modes measured by LHCb. The ultimate data set will also give access to B → K*νν̄ and B → Kνν̄, channels that are experimentally challenging but theoretically the most precise.

Beyond the Standard Model

There are many reasons to choose Belle II to address these and other puzzles of the SM, and in general the experiment will complement the physics reach of LHCb. The lower-background environment at Belle II compared with LHCb allows researchers to reconstruct final states containing neutral particles, for instance, and to design efficient triggers for the analysis of τ leptons. With asymmetric beam energies, the Lorentz boost of the electron–positron system is ideal for measurements of lifetimes, mixing parameters and CP violation.

The B factories established the existence of matter–antimatter asymmetries in the b-quark sector, in addition to the CP violation discovered 52 years ago in the s-quark sector. They also established that a single irreducible complex phase in the weak interaction is sufficient to explain all CP-violating effects observed to date, completing the SM description of the weak-interaction couplings of quarks. To move beyond this picture, two super-B factories were initially proposed: one at Tor Vergata near Frascati in Italy, and one at KEK in Japan. Although the former facility was not funded, there was both synergy and competition between the two designs. The super-B factory at KEK now carries forward the legacy of the B factories, with Belle II and LHCb both vying to establish the first solid evidence of new physics beyond the SM.

Key physics questions to be addressed by SuperKEKB and Belle II

• Are there new CP-violating phases in the quark sector?
The amount of CP violation (CPV) in the SM quark sector is orders of magnitude too small to explain the baryon–antibaryon asymmetry. New insights will come from examining the difference between B⁰ and B̄⁰ decay rates, namely via measurements of time-dependent CPV in penguin transitions (second-order W interactions) of b → s and b → d quarks. CPV in charm mixing, which is negligible in the SM, will also provide information on the up-type quark sector. Another key area will be to understand the mechanisms that produced the large amounts of CPV observed by the B factories and LHCb in the time-integrated rates of hadronic B decays, such as B → Kπ and B → Kππ.

• Does nature have multiple Higgs bosons?
Many extensions to the SM predict charged Higgs bosons in addition to the observed neutral SM-like Higgs. Extended Higgs sectors can also introduce extra sources of CP violation. The charged Higgs will be searched for in flavour transitions to τ leptons, including B → τν, as well as B → Dτν and B → D*τν, where 4σ anomalies have already been observed.

• Does nature have a left–right symmetry, and are there flavour-changing neutral currents beyond the SM?
The LHCb experiment finds 4σ evidence for new physics in the decay B → K*μ⁺μ⁻, which is sensitive to new heavy particles beyond the SM. Left–right symmetry models provide interesting candidates for this anomaly. Such extensions to the SM introduce new heavy bosons that couple predominantly to right-handed fermions, allowing a new pattern of flavour-changing currents, and can be used to explain neutrino-mass generation. To further characterise the potential new physics, processes with reduced theoretical uncertainty need to be examined, such as the inclusive b → s ℓ⁺ℓ⁻ and b → sνν̄ transitions and time-dependent CPV in radiative B-meson decays. Complementary constraints from electroweak precision observables and from direct searches at the LHC have already pushed the mass limits for left–right models to several TeV.

• Are there sources of lepton-flavour violation (LFV) beyond the SM?
LFV is a key prediction of many neutrino mass-generation mechanisms, and may enhance τ → μγ to the level of 10⁻⁸. Belle II will analyse τ-lepton decays in a number of searches, including LFV, CP violation and measurements of the electric dipole moment and (g−2) of the τ. The expected sensitivities to τ decays at Belle II will be unrivalled, thanks to correlated production with minimal collision background. The detector will provide sensitivities seven times better than Belle for background-limited modes such as τ → μγ (to about 5 × 10⁻⁹) and up to 50 times better for the cleanest searches, such as τ → eee (at the level of 5 × 10⁻¹⁰).
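The quoted improvement factors follow from the usual luminosity scaling: background-limited sensitivities improve as the square root of the data-set ratio, while background-free ones improve linearly. A quick check (taking Belle’s final data set as roughly 1 ab⁻¹, an assumption rather than a figure from the panel):

```python
import math

L_belle  = 1.0    # ab^-1, approximate Belle data set (assumption)
L_belle2 = 50.0   # ab^-1, the SuperKEKB target stated in the article
ratio = L_belle2 / L_belle

# Background-limited searches (e.g. tau -> mu gamma): sensitivity ~ sqrt(L)
print(math.sqrt(ratio))   # ~7, matching "seven times better than Belle"
# Background-free searches (e.g. tau -> eee): sensitivity ~ L
print(ratio)              # 50, matching "up to 50 times better"
```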

• Is there a dark sector of particle physics at the same mass scale as ordinary matter?
Belle II has unique sensitivity to dark matter via missing-energy decays. While most searches for new physics at Belle II are indirect, some models predict new particles at the MeV-to-GeV scale – including weakly and non-weakly interacting massive particles that couple to the SM via new gauge symmetries. These models often predict a rich sector of hidden particles that include dark-matter candidates and gauge bosons. Belle II is implementing a new trigger system to capture these elusive events.

• What is the nature of the strong force in binding hadrons?
With the B factories and hadron colliders having discovered a large number of states that were not predicted by the conventional meson interpretation – changing our understanding of QCD in the low-energy regime – quarkonium is high on the agenda at Belle II. A clean way of studying the new particles is to produce them near resonance, achievable by adjusting the machine energy, and Belle II has good detection capabilities for all neutral and charged particles.

ESO signs largest ever ground-based astronomy contract

The European Extremely Large Telescope (E-ELT) will be the largest optical/near-infrared telescope in the world, boasting a primary mirror 39 m in diameter. Its aim is to measure the properties of the first stars and galaxies and to probe the nature of dark matter and dark energy, in addition to tracking down Earth-like planets.

At a ceremony in Garching bei München, Germany, on 25 May, the European Southern Observatory (ESO) signed a contract with the ACe Consortium for the construction of the dome and telescope structure of the E-ELT. With an approximate value of €400 million, it is the largest contract ever awarded by ESO and the largest contract ever in ground-based astronomy. The occasion also saw the unveiling of the construction design of the E-ELT, which is due to enter operation in 2024.

The construction of the E-ELT dome and telescope structure can now commence, taking telescope engineering into new territory. The contract includes not only the enormous 85 m-diameter rotating dome, with a total mass of around 5000 tonnes, but also the telescope mounting and tube structure, with a total moving mass of more than 3000 tonnes. Both of these structures are by far the largest ever built for an optical/infrared telescope and dwarf all existing ones.

The E-ELT is being built on Cerro Armazones, a 3000 m-high peak about 20 km from ESO’s Paranal Observatory. The access road and the levelling of the summit have already been completed, and work on the dome is expected to start on site in 2017.

Neutron-star mergers create heaviest elements

Some of the heaviest chemical elements originate from rapid neutron capture, but the precise location where this cosmic alchemy takes place has been debated for several decades. While core-collapse supernovae were thought to be the prime production site, a new study suggests that elements heavier than zinc originate from the merger of two neutron stars. Such a dramatic event would have been responsible for the extreme heavy-element enrichment observed in several stars of an ancient dwarf galaxy called Reticulum II.

Nuclear fusion in the cores of massive stars produces elements up to and including iron, the stable nucleus with the highest binding energy per nucleon. Building heavier nuclei by fusion requires an input of energy to compensate for the loss of nuclear binding, so stellar fusion essentially stops at iron. Under certain conditions, however, stars can produce heavier elements through the capture of protons or neutrons.

Neutron capture, which is unaffected by Coulomb repulsion, occurs either slowly (s) or rapidly (r). Slow neutron captures occur at a pace that allows the nucleus to undergo beta decay prior to a new capture, and therefore to grow following the line of nuclear stability. The r-process, on the other hand, causes a nucleus to accumulate many additional neutrons prior to radioactive decay. The relative abundance of certain elements therefore tells researchers whether nucleosynthesis followed an s- or an r-process. The rare-earth element europium is a typical r-process element, as are gold, lead and uranium.
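The distinction can be phrased as a competition of timescales: if a nucleus typically beta-decays before it captures another neutron, nucleosynthesis follows the slow path along the valley of stability; if captures win, it follows the rapid path. A toy illustration – the timescales below are order-of-magnitude placeholders, not values from the article:

```python
def capture_process(t_capture_s, t_beta_s):
    """Classify neutron-capture nucleosynthesis by comparing the mean time
    between neutron captures with the beta-decay lifetime of the nucleus."""
    return "s-process" if t_capture_s > t_beta_s else "r-process"

YEAR = 3.15e7  # seconds per year

# Low neutron density (e.g. an evolved-star interior, illustrative numbers):
# centuries between captures vs beta lifetimes of days -> the nucleus decays first.
print(capture_process(t_capture_s=100 * YEAR, t_beta_s=10 * 86400))   # s-process

# Extreme neutron flux (e.g. neutron-star-merger ejecta): captures in
# milliseconds -> many neutrons pile on before any decay occurs.
print(capture_process(t_capture_s=1e-3, t_beta_s=10 * 86400))         # r-process
```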

For the r-process to work, nuclei need to be under heavy neutron bombardment, in conditions found only in dramatic events such as core-collapse supernovae or mergers of two neutron stars. The supernova hypothesis has long been the favoured candidate for the r-process site, whereas rarer events, such as encounters between a neutron star and a black hole, have only been considered since the 1970s. One way to distinguish between the two hypotheses is to study low-metallicity galaxies in which the enrichment of heavy elements is low. This enables astrophysicists to determine whether the enrichment is a continuous process or the result of rare events, which would produce stronger differences from one galaxy to another.

Alexander Ji from the Massachusetts Institute of Technology, US, and colleagues were lucky to find extreme relative abundances of r-process elements in stars located in the ultra-faint dwarf galaxy Reticulum II. Although nearby and in orbit around the Milky Way, this galaxy was only recently discovered and found to be among the most metal-poor galaxies known. This means that Reticulum II formed all of its stars within roughly the first three billion years after the Big Bang, and has therefore been enriched in elements heavier than helium by only a few generations of stars.

High-resolution spectroscopic measurements of the nine brightest stars in Reticulum II carried out by the team indicate a very strong excess of europium and barium relative to iron in seven of the stars. These abundances exceed by two to three orders of magnitude those in any other ultra-faint dwarf galaxy, suggesting that a single rare event produced the r-process elements. The results also show that this event could have been a neutron-star merger, but not an ordinary core-collapse supernova. Although it is not possible to conclude that the majority of our gold and uranium comes from neutron-star mergers, the study certainly lends weight to that hypothesis in the 60-year-long debate about the origin of the r-process elements.

Protons accelerated to PeV energies

The High Energy Stereoscopic System (HESS) – an array of Cherenkov telescopes in Namibia – has detected gamma-ray emission from the central region of the Milky Way at energies never reached before. The likely source of this diffuse emission is the supermassive black hole at the centre of our Galaxy, which would have accelerated protons to peta-electron-volt (PeV) energies.

The Earth is constantly bombarded by high-energy particles (protons, electrons and atomic nuclei). Being electrically charged, these cosmic rays are randomly deflected by the turbulent magnetic field pervading our Galaxy, making it impossible to identify their sources directly and creating a century-long mystery as to their origin. A way to overcome this limitation is to look at the gamma rays produced by the interaction of cosmic rays with light and gas in the neighbourhood of their sources. These gamma rays travel in straight lines, undeflected by magnetic fields, and can therefore be traced back to their origin.

When a very-high-energy gamma ray reaches the Earth, it interacts with a molecule in the upper atmosphere, producing a shower of secondary particles that emit a short pulse of Cherenkov light. By detecting these flashes of light using telescopes equipped with large mirrors, sensitive photodetectors, and fast electronics, more than 100 sources of very-high-energy gamma rays have been identified over the past three decades. HESS is the only state-of-the-art array of Cherenkov telescopes that is located in the southern hemisphere – a perfect viewpoint for the centre of the Milky Way.

Earlier observations have shown that cosmic rays with energies up to approximately 100 tera-electron-volts (TeV) are produced by supernova remnants and pulsar-wind nebulae. Although theoretical arguments and direct measurements of cosmic rays suggest a galactic origin of particles up to PeV energies, the search for such a “Pevatron” accelerator has so far been unsuccessful.

The HESS collaboration has now found evidence for a Pevatron in the central 33 light-years of the Galaxy. This result, published in Nature, is based on deep observations – obtained between 2004 and 2013 – of the surrounding giant molecular clouds, which extend over approximately 500 light-years. The production of PeV protons is deduced from the measured spectrum of gamma rays, a power law extending to multi-TeV energies with no sign of a high-energy cut-off. The spatial localisation comes from the observation that the cosmic-ray density decreases as 1/r, where r is the distance from the galactic centre. This 1/r profile indicates a quasi-continuous injection of protons from a central source over at least the past 1000 years.
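The link between a 1/r profile and continuous injection is a standard diffusion result: a point source injecting particles at a constant rate Q into a medium with diffusion coefficient D builds up a steady-state density n(r) = Q/(4πDr), whereas a single burst gives a flat Gaussian core. A minimal sketch, with Q and D as arbitrary placeholders:

```python
import math

def n_steady(r, Q=1.0, D=1.0):
    """Steady-state density from a point source injecting at constant rate Q
    into a medium with diffusion coefficient D: n(r) = Q / (4*pi*D*r)."""
    return Q / (4 * math.pi * D * r)

# Hallmark of continuous injection: doubling the distance halves the density.
print(n_steady(2.0) / n_steady(1.0))   # 0.5

def n_burst(r, t=1.0, D=1.0):
    """Density a time t after a single burst: a Gaussian, n ~ exp(-r^2/4Dt),
    which is essentially flat well inside the diffusion radius sqrt(4Dt)."""
    return math.exp(-r**2 / (4 * D * t)) / (4 * math.pi * D * t)**1.5

# Near the source a burst profile is nearly constant, unlike the 1/r fall-off.
print(n_burst(0.2) / n_burst(0.1))     # close to 1
```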

Given these properties, the most plausible source of PeV protons is Sagittarius A*, the supermassive black hole at the centre of our Galaxy. According to the authors, the acceleration could originate in the accretion flow in the immediate vicinity of the black hole or further away, where a fraction of the material falling towards the black hole is ejected back into the environment. However, to account for the bulk of PeV cosmic rays detected on Earth, the currently quiet supermassive black hole would have had to be much more active in the past million years. If true, this finding would dramatically influence the century-old debate concerning the origin of these enigmatic particles.

At the heart of every LHC collision

At the heart of every LHC collision are the constituents of protons: the quarks and gluons, collectively known as partons. These partons can undergo hard-scattering processes, producing a plethora of final states ranging from the massless to the very massive, such as W and Z bosons or top-quark pairs. Understanding these production cross-sections and their evolution as a function of the centre-of-mass energy, √s, of the LHC is an important ingredient in all of the measurements performed by ATLAS, including searches for new physics beyond the Standard Model.

Figure 1 illustrates some of the cross-section measurements made by ATLAS at √s = 7, 8 and 13 TeV. The new 13 TeV data collected in 2015 greatly extend the lever arm of the investigation of the √s evolution, with increased cross-sections for W and Z bosons and top-quark pairs by factors of approximately two and three, respectively, from their values at 8 TeV.

The final states observed from hard scattering tell a story of which partons participated in the collisions: top-quark production is related to the gluon content of the proton, Z-boson production provides insight into the quark sea, and W-boson production probes the relationship between the valence quarks. These measurements are pieces of the proton puzzle, and because the √s evolution changes the range of parton momentum fractions probed by the collisions, the 13 TeV data open up a new kinematic region of investigation.

Via hard scattering, one can also test the predictions of perturbative QCD – a key component of the Standard Model. Single-boson and diboson production are currently predicted at next-to-next-to-leading order (NNLO), and top-quark pair production at NNLO plus next-to-next-to-leading-logarithm (NNLL) accuracy. As √s increases, the mix of hard-scattering processes changes, and precision measurements become increasingly dependent on knowledge of the growing electroweak corrections, currently available at NLO. With higher √s, rarer processes such as Z-boson pair production (ZZ) become more accessible and open an enticing window onto potential new physics.

As is evident from figure 1, the results match the Standard Model expectations well. Apart from a common beam-luminosity uncertainty, the measurements at 13 TeV have an experimental precision ranging from under 1% for Z bosons, to 3% for W bosons and top-quark pairs, to 14% for ZZ – the latter still being dominated by statistical uncertainties. Measuring ratios of cross-sections, however, benefits from the cancellation of many experimental uncertainties. This is evident from the W+/W− cross-section ratio at 13 TeV, which has a total systematic uncertainty of less than 1% – rivalling the precision of current parton-distribution-function predictions – although its central value lies consistently below those predictions. Results such as those presented here will contribute significantly to the understanding of the large 13 TeV data set expected in the coming years.
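The power of a cross-section ratio can be illustrated with standard error propagation: a fully correlated component, such as the shared beam-luminosity uncertainty, drops out of the ratio entirely. The uncertainty values in this sketch are purely illustrative, not the ATLAS numbers:

```python
import math

def ratio_relative_uncertainty(rel_uncorr_num, rel_uncorr_den):
    """Relative uncertainty on a cross-section ratio R = sigma_A / sigma_B.

    Uncorrelated components add in quadrature; a fully correlated
    component (e.g. the common beam-luminosity uncertainty) cancels
    exactly between numerator and denominator, so it does not appear.
    """
    return math.sqrt(rel_uncorr_num**2 + rel_uncorr_den**2)

# Illustrative numbers, not the ATLAS values: 0.5% uncorrelated on each
# of the W+ and W- cross-sections, plus a 2.1% luminosity uncertainty
# common to both measurements.
lumi = 0.021
single = math.sqrt(0.005**2 + lumi**2)            # one cross-section: ~2.2%
ratio = ratio_relative_uncertainty(0.005, 0.005)  # the ratio: ~0.7%
print(f"{single:.1%} on a single cross-section vs {ratio:.1%} on the ratio")
```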

CALET sees events in millions

Just a few months after its launch and the successful completion of the on-orbit commissioning phase aboard the International Space Station, the CALorimetric Electron Telescope (CALET) has started observations of high-energy charged particles and photons coming from space. To date, more than a hundred million events at energies above 10 GeV have been recorded and are under study.

CALET is a space mission led by JAXA with the participation of the Italian Space Agency (ASI) and NASA. CALET is also a CERN-recognised experiment; the collaboration used CERN’s beams to calibrate the instrument, which was launched from the Tanegashima Space Center on 19 August 2015 on board the Japanese H2-B rocket. After berthing with the ISS a few days later, CALET was robotically extracted from the transfer vehicle HTV5, operated by JAXA, and installed on the external platform JEM-EF of the Japanese module (KIBO). The check-out phase went smoothly, and after data calibration and verification, CALET moved to regular observation mode in mid-October 2015. The data-taking will go on for a period of two years initially, with a target of five years.

The first data sets are confirming that all of the instruments are working extremely well.

CALET is designed to study electrons, nuclei and γ-rays coming from space. In particular, one of its main goals is to perform precision measurements of the detailed shape of the electron spectrum above 1 TeV. High-energy electrons are expected to come from within a few thousand light-years of Earth, because they quickly lose energy as they travel through space. Their detection might reveal the presence of nearby astronomical sources where electrons are accelerated. The high end of the spectrum is particularly interesting because it could provide a clue to possible signatures of dark matter.

The first data sets are confirming that all of the instruments are working extremely well. The event image above (raw data) shows the detailed shape of the development of a shower of secondary particles generated by the impact of a candidate electron with an estimated energy greater than 1 TeV. The high-resolution energy measurement is provided by CALET’s deep, homogeneous calorimeter equipped with lead-tungstate (PbWO4) crystals preceded by a high-granularity (1 mm scintillating fibres) pre-shower calorimeter with advanced imaging capabilities. The depth of the instrument ensures good containment of electromagnetic showers in the TeV region.

In the coming months, thanks to its ability to identify cosmic nuclei from hydrogen to beyond iron, CALET will be able to study the high-energy hadronic component of cosmic rays. CALET will focus on the deviation from a pure power law that has recently been observed in the energy spectra of light nuclei. It will extend the present data to energies in the multi-TeV region with accurate measurements of the curvature of the spectrum as a function of energy, and of the abundance ratio of secondary to primary nuclei – an important ingredient for understanding cosmic-ray propagation in the Galaxy.

Gamma-ray excess is not from dark matter

An excess of gamma rays at energies of a few GeV was found to be a good candidate for a dark-matter signal. Two years later, a pair of research articles refute this interpretation by showing that the excess photons detected by the Fermi Gamma-ray Space Telescope are not smoothly distributed as expected for dark-matter annihilation. Their clustering reveals instead a population of unresolved point sources, likely millisecond pulsars.

The Milky Way is thought to be embedded in a dark-matter halo with a density gradient increasing towards the galactic centre. The central region of our Galaxy is therefore a prime target to find an electromagnetic signal from dark-matter annihilation. If dark matter is made of weakly interacting massive particles (WIMPs) heavier than protons, such a signal would naturally be in the GeV energy band. A diffuse gamma-ray emission detected by the Fermi satellite and having properties compatible with a dark-matter origin created hope in recent years of finally detecting this elusive form of matter more directly than only through gravitational effects.

Two independent studies published in Physical Review Letters now disprove this interpretation. Using different statistical-analysis methods, the two research teams found that the gamma rays of the excess emission at the galactic centre are not distributed as expected from dark matter. Both find evidence for a population of unresolved point sources instead of a smooth distribution.

The first study, led by Richard Bartels of the University of Amsterdam, the Netherlands, uses a wavelet transformation of the Fermi gamma-ray images. The technique consists of convolving the photon-count map with a wavelet kernel shaped like a Mexican hat, with a width tuned near the Fermi angular resolution of 0.4° in the relevant energy band of 1–4 GeV. The intensity distribution of the derived wavelet peaks is found to be inconsistent with a truly diffuse origin of the emission. Instead, the distribution suggests that the entire excess emission is due to a population of mostly undetected point sources with characteristics matching those of millisecond pulsars.
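The essence of the wavelet technique can be sketched in a few lines of Python: correlate the photon-count map with a zero-mean Mexican-hat kernel and look for strong localised peaks. This toy version works in pixel units on an invented count map (it is not the analysis code of the paper) and shows why a point source responds strongly while diffuse emission averages to roughly zero:

```python
import numpy as np

def mexican_hat_2d(half_width, sigma):
    """Zero-mean 2D Mexican-hat kernel: (2 - r^2/s^2) * exp(-r^2 / (2 s^2)).

    Because its integral vanishes, a locally flat (diffuse) count map
    produces a response consistent with zero.
    """
    y, x = np.mgrid[-half_width:half_width + 1,
                    -half_width:half_width + 1].astype(float)
    r2 = (x**2 + y**2) / sigma**2
    return (2.0 - r2) * np.exp(-r2 / 2.0)

def wavelet_coefficient(counts, i, j, kernel):
    """Wavelet coefficient at pixel (i, j): correlate the kernel with the
    local patch of the photon-count map."""
    h = kernel.shape[0] // 2
    patch = counts[i - h:i + h + 1, j - h:j + h + 1]
    return float((patch * kernel).sum())

# Toy count map: flat diffuse emission plus one unresolved point source.
rng = np.random.default_rng(1)
counts = rng.poisson(5.0, size=(101, 101)).astype(float)
counts[50, 50] += 200.0  # the point source

# Kernel width tuned near the instrument resolution (in pixels here; for
# Fermi this corresponds to ~0.4 deg in the 1-4 GeV band).
kernel = mexican_hat_2d(15, 3.0)
on_source = wavelet_coefficient(counts, 50, 50, kernel)
off_source = wavelet_coefficient(counts, 20, 20, kernel)
assert on_source > 100.0        # strong positive peak at the source
assert abs(off_source) < 100.0  # diffuse region: response near zero
```

In the published analysis it is the statistical distribution of many such peak intensities, not a single coefficient, that discriminates point sources from diffuse emission.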

In the coming decade, new facilities at radio frequencies will be able to detect hundreds of new millisecond pulsars in the central region of the Milky Way.

These results are corroborated by a second study, led by Samuel Lee of the Broad Institute in Cambridge and Princeton University. This US team used a new statistical method – called a non-Poissonian template fit – to estimate the contribution of unresolved point sources to the gamma-ray excess at the galactic centre. The team’s results predict a new population of hundreds of point sources hiding below the detection threshold of Fermi. Detecting the brightest of them in the years to come with ongoing observations would confirm this prediction.

In the coming decade, new facilities at radio frequencies will be able to detect hundreds of new millisecond pulsars in the central region of the Milky Way. This would definitively rule out the dark-matter interpretation of the GeV excess seen by Fermi. In the meantime, the quest towards identifying the nature of dark matter will go on, but little by little the possibilities are narrowing down.

LIGO: a strong belief

On 11 February, the Laser Interferometer Gravitational-Wave Observatory (LIGO) and Virgo collaborations published a historic paper in which they reported a gravitational-wave signal emitted by the merger of two black holes. The signal was observed with 5σ significance and constitutes the first direct observation of gravitational waves.

This result comes after 20 years of hard work by a large collaboration of scientists operating the two LIGO observatories in the US. Barry Barish, Linde professor of physics, emeritus, at the California Institute of Technology and former director of the Global Design Effort for the International Linear Collider (ILC), led the LIGO endeavour from 1994 to 2005. On the day of the official announcement to the scientific community and the public, Barish was at CERN to give a landmark seminar that captivated the whole audience gathered in the packed Main Auditorium.

The CERN Courier had the unique opportunity to interview Barish just after the announcement.

Professor Barish, this achievement comes after 20 years of hard work, uncertainties and challenges. This is what research is all about, but what was the greatest challenge you had to overcome during this long period?

It really was to do anything that takes 20 years and still be supported and have the energy to reach completion. We started long before that, but the project itself started in 1994. LIGO is an incredible technical achievement. The idea that you can take on high risk in such a scientific endeavour requires a lot of support, diligence and perseverance. In 1994, we convinced the US National Science Foundation to fund the project, which became the biggest programme it had ever funded. After that, it took us 10 years to build it and make it work well, plus 10 years to improve the sensitivity and bring it to the point where we were able to detect gravitational waves. And along the way, no one had ever done this before.

Indeed, the experimental set-up we used to detect the gravitational signal is an enormous extrapolation from anything that was done before. As a physicist, you learn that extrapolating by a factor of two can be within reach, but a factor of 10 already sounds like a dream. If you compare the first 40 m interferometer we built on the Caltech campus with the two 4000 m interferometers we have now, you already have an idea of the enormous leap we had to make. The factor-of-100 leap in size also involved at least that much in complexity and sophistication, eventually achieving more than 10,000 times the sensitivity of the original 40 m prototype.

The two signals were perfectly consistent, and this gave us total trust in our data.

The experimental confirmation of the existence of the gravitational waves could have a profound impact on the future of astrophysics and gravitational physics. What do you think are the most important consequences of the discovery?

The discovery opens two new areas of research for physics. One is on the general-relativity theory itself. Gravitational waves are a powerful way of testing the heart of the theory by investigating the strong-field realm of gravitational physics. Even with just this first event – the merging of two black holes – we have created a true laboratory where you can study all of this, and understanding general relativity at an absolutely fundamental level is now opening up.

The second huge consequence of the discovery is that we can now look at the universe with a completely new “telescope”. So far, we have used and built all kinds of telescopes: infrared, ultraviolet, radio, optical… And the idea of recent years has been to look at the same things in different bandwidths.

However, no previous instrument could have seen what we saw with the LIGO interferometers. Nature has been so generous with us that the very first event we saw is new astrophysics, as astronomers had never seen stellar black holes of these masses. With just the first glimpse at the universe with gravitational waves, we now know that such black holes exist in pairs and that they can merge. This is all new astrophysics. When we designed LIGO, we thought that the first source of gravitational waves we would see would be merging neutron stars. That would still have been a huge discovery, but it would not have brought new astrophysical information. We have been really lucky.

Over the next century, this field will provide a completely new way of doing an incredible amount of new science. And somehow we had a glimpse of that with the first single event.

What were your feelings upon seeing the event on your screen?

We initially thought that it could be some crazy instrumental effect. We had to worry about many possible instrumental glitches, including whether someone had purposely injected a fake event into our data stream. To carefully check the origin of the signal, we tracked back the formation of the event data from the two interferometers, and we could see that the two signals were recorded within seven milliseconds of each other – consistent with the time we expect for the same event to appear at the second interferometer. The two signals were perfectly consistent, and this gave us total trust in our data.

I must admit that I was personally worried because, in physics, it is always very dangerous to claim anything with only one event. However, we performed the analysis in the most rigorous way and, indeed, we followed the normal publication path, namely the submission of the paper to the referees. They confirmed that what we submitted was scientifically well-justified. In this way, we had the green light to announce the discovery to the public.

At the seminar you were welcomed very warmly by the audience. It was a great honour for the CERN audience to have you give the talk in person, just after your colleagues’ announcement in the US. What are you bringing back from this experience?

I was very happy to be presenting this important achievement in the temple of science. The thing that made me feel that we made the case well was that people were interested in what we have done and are doing. In the packed audience, nobody seemed to question our methodology, analysis or the validity of our result. We have one single event, but this was good enough to convince me and also my colleagues that it was a true discovery. I enjoyed receiving all of the science questions from the audience – it was really a great moment for me.

• The LIGO and Virgo collaborations are currently working on analysing the rest of the data from the run that ended on 12 January. New information is expected to be published in the coming months. In the meantime, the discovery event is available in open data (see https://losc.ligo.org) for anyone who wants to analyse it.

Neutrons in full flight at CERN’s n_TOF facility

Accurate knowledge of the interaction probability of neutrons with nuclei is a key input in many fields of research. At CERN’s n_TOF facility, pulsed proton bunches from the Proton Synchrotron (PS) hit a spallation target and produce beams of neutrons with unique characteristics. This allows scientists to perform high-resolution measurements, particularly on radioactive samples.

The story of the n_TOF facility goes back to 1998, when Carlo Rubbia and colleagues proposed the idea of building a neutron facility to measure neutron-reaction data needed for the development of an energy amplifier. The facility eventually became fully operational in 2001, with a scientific programme covering neutron-induced reactions relevant for nuclear astrophysics, nuclear technology and basic nuclear science. During the first major upgrade of the facility in 2009, the old spallation target was removed and replaced by a new target with an optimised design, which included a decoupled cooling and moderation circuit that allowed the use of borated water to reduce the background due to in-beam hydrogen-capture γ rays. A second improvement was the construction of a long-awaited “class-A” workplace, which made it possible to use unsealed radioactive isotopes in the first experimental area (EAR1) at 200 m from the spallation target. In 2014, n_TOF was completed with the construction of a second, vertical beamline and a new experimental area – EAR2.

One of the most striking features of neutron–nucleus interactions is the resonance structure observed in the reaction cross-sections at low incident neutron energies. Because the electrically neutral neutron has no Coulomb barrier to overcome, and has a negligible interaction with the electrons in matter, it can directly penetrate and interact with the atomic nucleus, even at very low kinetic energies of the order of electron-volts. The cross-sections can show variations of several orders of magnitude on an energy scale of only a few eV. The origin of these resonances is related to the excitation of nuclear states in the compound nuclear system formed by the neutron and the target nucleus, at excitation energies lying above the neutron binding energy of typically several MeV. In figure 1, the main cross-sections for a typical heavy nucleus are shown as a function of energy. The position and extent of the resonance structures depend on the nucleus. Also shown on the same energy scale are Maxwellian neutron energy distributions for neutrons fully moderated by water at room temperature, for fission neutrons, and for typical neutron spectra in the region from 5 to 100 keV, corresponding to the temperatures in stellar environments of importance for nucleosynthesis.
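The temperatures quoted above map onto neutron energies through kT. A small sketch using standard constants (the 30 keV value is simply a representative stellar-nucleosynthesis energy within the 5–100 keV window, not a number from the text):

```python
import math

K_B_EV = 8.617e-5  # Boltzmann constant in eV per kelvin

def maxwellian(E, kT):
    """Maxwell-Boltzmann energy distribution f(E) ~ sqrt(E) exp(-E/kT),
    normalised to unit integral over E (energies in consistent units)."""
    return 2.0 / math.sqrt(math.pi) * math.sqrt(E) / kT**1.5 * math.exp(-E / kT)

# Moderation by room-temperature water (T ~ 293 K): kT ~ 25 meV, the
# classic "thermal" neutron energy.
kT_room = K_B_EV * 293.0
print(f"kT at room temperature: {kT_room * 1e3:.1f} meV")

# A stellar environment with kT = 30 keV corresponds to a temperature
# of a few 10^8 K.
T_star = 30e3 / K_B_EV
print(f"T for kT = 30 keV: {T_star:.1e} K")

# Sanity check: the energy distribution peaks at E = kT/2.
assert maxwellian(kT_room / 2.0, kT_room) > maxwellian(kT_room, kT_room)
```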

The wide neutron energy range is one of the key features of the n_TOF facility.

In nuclear astrophysics, an intriguing topic is understanding the formation of nuclei present in the universe and the origin of chemical elements. Hydrogen and smaller amounts of He and Li were created in the early universe by primordial nucleosynthesis. Nuclear reactions in stars are at the origin of nearly all other nuclei, and most nuclei heavier than iron are produced by neutron capture in stellar nucleosynthesis. Neutron-induced reaction cross-sections also reveal the nuclear-level structure in the vicinity of the neutron binding energy of nuclei. Insight into the properties of these levels brings crucial input to nuclear-level density models. Finally, neutron-induced reaction cross-sections are a key ingredient in applications of nuclear technology, including future developments in medical applications and the transmutation of nuclear waste, accelerator-driven systems and nuclear-fuel-cycle investigations.

The wide neutron energy range is one of the key features of the n_TOF facility. The kinetic energy of the particles is directly related to their time of flight: the start time is given by the impact of the proton beam on the spallation target, and the arrival time is measured in the EAR1 and EAR2 experimental areas. The highest neutron energies are directly related to the 20 GeV/c proton-induced spallation reactions in the lead target. Neutrons are subsequently partially moderated to cover the full energy range. Energies as low as about 10 meV, corresponding to long times of flight, can be exploited and measured at n_TOF because the pulsed bunches sent by the PS are spaced by multiples of 1.2 s. This allows long times of flight to be measured without any overlap into the next neutron cycle.
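The time-of-flight/energy relation can be made explicit with relativistic kinematics. Assuming the 200 m EAR1 flight path, flight times range from microseconds at MeV energies to a sizeable fraction of a second at thermal energies, which is why the 1.2 s bunch spacing matters:

```python
import math

C = 299_792_458.0   # speed of light, m/s
M_N_EV = 939.565e6  # neutron rest mass, eV/c^2

def neutron_energy_ev(flight_path_m, tof_s):
    """Relativistic neutron kinetic energy (eV) from the time of flight."""
    beta = flight_path_m / (tof_s * C)
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    return (gamma - 1.0) * M_N_EV

def time_of_flight_s(flight_path_m, energy_ev):
    """Inverse relation: flight time (s) for a given kinetic energy."""
    gamma = 1.0 + energy_ev / M_N_EV
    beta = math.sqrt(1.0 - 1.0 / gamma**2)
    return flight_path_m / (beta * C)

L = 200.0  # EAR1 flight path in metres
# A 1 MeV neutron covers 200 m in roughly 15 microseconds...
print(f"1 MeV:  {time_of_flight_s(L, 1.0e6) * 1e6:6.1f} us")
# ...while a 25 meV thermal neutron needs about a tenth of a second,
# still comfortably inside the 1.2 s spacing of the PS proton bunches.
print(f"25 meV: {time_of_flight_s(L, 0.025) * 1e3:6.1f} ms")
```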

Higher flux

Another unique characteristic of n_TOF is the very high number of neutrons per proton burst, also called the instantaneous neutron flux. For research with radioactive samples irradiated by the neutron beam, the high flux results in a very favourable ratio between the number of signals due to neutron-induced reactions and those due to radioactive-decay events, which contribute to the background. While the long flight path of EAR1 (200 m from the spallation target) results in a very high kinetic-energy resolution, the short flight path of EAR2 (20 m from the target) gives a neutron flux higher than that of EAR1 by a factor of about 25. The neutron fluxes in EAR1 and EAR2 are shown in figure 2. The higher flux opens the possibility of measurements on samples with very low mass or low reaction cross-sections within a reasonable time. The roughly 10 times shorter flight distance also ensures that the entire neutron energy region is measured in a 10 times shorter interval. For measurements of neutron-induced cross-sections on radioactive nuclei, this means 10 times fewer detector signals due to radioactivity. The combination of the higher flux and the shorter time interval therefore increases the signal-to-noise ratio by a factor of 250 for radioactive samples. This characteristic of EAR2 was, for example, used in the first cross-section measurement in 2014, when the fission cross-section of the highly radioactive isotope 240Pu was successfully measured. An earlier attempt at this measurement in EAR1 was not conclusive. An example from 2015 is the measurement of the (n,α) cross-section of 7Be, another highly radioactive isotope, relevant for the cosmological lithium problem in Big Bang nucleosynthesis.
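The factor of 250 is simple arithmetic, sketched below for clarity (the function name is ours, not n_TOF terminology):

```python
def signal_to_noise_gain(flux_gain, time_window_shrink):
    """Gain in signal-to-noise for a radioactive sample.

    Reaction signals scale with the instantaneous neutron flux, while the
    radioactive-decay background scales with the length of the time window
    needed to cover the neutron energy range, which shrinks in proportion
    to the flight path.
    """
    return flux_gain * time_window_shrink

# EAR2 vs EAR1: ~25x the instantaneous flux, and a 10x shorter flight
# path means the same energy range is recorded in a 10x shorter window.
gain = signal_to_noise_gain(25.0, 10.0)
print(gain)  # 250.0
```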

The most important neutron-induced reactions that are measured at n_TOF are neutron-capture and neutron-fission reactions. Several detectors have been developed for this purpose. A 4π calorimeter consisting of 40 BaF2 crystals has been in use for capture measurements since 2004. Several types of C6D6-based liquid-scintillator detectors are also used for measurements of capture γ rays. Different detectors have been developed for charged particles. For fission measurements, ionisation chambers, parallel-plate avalanche counters and the fission-fragment spectrometer STEFF have been operational. MicroMegas-based detectors have been used for fission and (n,α) measurements. Silicon detectors for measuring (n,α) and (n,p) reactions have been developed and used more recently, even for in-beam measurements.

The measurements at CERN’s neutron time-of-flight facility n_TOF, with its unique features, contribute substantially to our knowledge of neutron-induced reactions. This goes hand-in-hand with cutting-edge developments in detector technology and analysis techniques, the design of challenging experiments, and the training of a new generation of physicists working in neutron physics. This work has been actively supported since the beginning of n_TOF by the European Framework Programmes. A future development currently under study is a possible upgrade of the spallation target to optimise the characteristics of the neutron beam in EAR2. The n_TOF collaboration, consisting of about 150 researchers from 40 institutes, looks forward to another year of experiments in both EAR1 and EAR2, continuing its 15-year history of measuring high-quality neutron-induced reaction data.


Further Reading
CERN-Proceedings-2015-001, p32
