

QCD and Heavy Quarks: In Memoriam Nikolai Uraltsev

By I I Bigi, P Gambino and T Mannel (eds)
World Scientific


The book collects articles on QCD and heavy-quark physics written in memory of Nikolai Uraltsev, who passed away unexpectedly in February 2013. Uraltsev was an outstanding theorist with acute intuition, who dedicated his career to phenomenological particle physics, in particular to quantum chromodynamics and its non-perturbative properties, and he is considered one of the fathers of the heavy-quark expansion. With this book, Uraltsev’s closest colleagues and friends intended to honour his groundbreaking work, as well as to offer testimonies of their personal relationships with him.

The text gives an overview of some aspects of QCD, including CP violation in hadronic processes and hadronic matrix elements in weak decays. Three selected works by Uraltsev are also reproduced in the appendix.

Quantum Field Theory and the Standard Model

By Matthew D Schwartz
Cambridge University Press
Also available at the CERN bookshop


Providing a comprehensive and modern introduction to quantum field theory, this textbook covers the development of particle physics from its foundations to the recent discovery of the Brout–Englert–Higgs boson. Based on a course taught by the author at Harvard University for many years, the text starts from the principle that quantum field theory (QFT) is primarily a theory of physics and, as such, it provides a set of tools for performing practical calculations. The book develops field theory, quantum electrodynamics, renormalisation and the Standard Model, including modern approaches and state-of-the-art calculation techniques.

With a combination of intuitive explanations of abstract concepts, experimental data and mathematical rigour, the author makes the subject accessible to students with different backgrounds and interests.

Gaseous Radiation Detectors: Fundamentals and Applications (Cambridge Monographs on Particle Physics, Nuclear Physics and Cosmology)

By Fabio Sauli
Cambridge University Press
Also available at the CERN bookshop


In the last few decades, rapid and revolutionary developments have taken place in the field of gaseous detectors. Multiwire proportional chambers were invented at the end of the 1960s, and these detectors and their descendants (drift chambers, time-projection chambers, ring-imaging Cherenkov detectors, etc) rapidly replaced cloud and bubble chambers, as well as spark counters, in many high-energy physics experiments. At the end of the last century, resistive-plate chambers and micropattern detectors were introduced, opening up new avenues in applications.

Surprisingly, for a long time no books were published on gaseous detectors and their fast evolution. As a result, despite thousands of scientific publications covering the rapid and exciting developments in the field, no simple, accessible account was available to a wide audience of non-specialists, including students.

Then, suddenly, an “explosion” took place: several books dedicated to modern gaseous detectors and their applications appeared on the market, almost all at the same time.

Sauli’s book is certainly one of the best of these. The author, a leading figure in the field, has succeeded in writing a remarkable and charming book, which I strongly recommend to anyone interested in learning about recent progress, open questions and future perspectives of gaseous detectors. Throughout its 490 pages, it offers a broad coverage of the subject.

The first five chapters focus on fundamentals: the interaction of charged particles and photons with matter, the drift and diffusion of electrons and ions, and avalanche multiplication. This first part of the book offers a refreshing mix of basic facts and up-to-date research, but avoids giving too much space to formulas and complicated mathematics, so that non-specialists can also benefit from reading it.

The remaining eight chapters are dedicated to specific detectors, from single-wire proportional counters to state-of-the-art micropattern gaseous detectors. This latter part of the book gives exhaustive detail, describing the design and operational features of each device, including signal development, time and position resolution, and other important characteristics. The last chapter deals with degradation and ageing – serious problems that detectors can experience if the gas composition and construction materials are not chosen carefully.

This fascinating book is easy to read, making it suitable for everyone and, I believe, for young people in particular. I was especially impressed by the care with which the author prepared many of the figures, which in some cases include details that I have not seen in previous texts of this kind. The high-quality figures and photographs contribute significantly to making the book well worth reading. In my opinion, it is not only remarkably complementary to other recently published monographs, but could also serve as a main textbook for those who are new to the field.

The only omission I have noticed in this otherwise wide-ranging and well-researched book is the lack of discussion of secondary processes and ion backflow, which are very important in the operation of some modern photosensitive detectors, for example the ALICE and COMPASS ring-imaging detectors.

There could be a few other improvements in a future edition. For instance, it would be useful to expand the description of the growing applications of gaseous detectors, especially resistive-plate chambers and micropattern detectors.

All in all, this is a highly recommendable book, which provides an interesting guided tour from the past to present day of gaseous detectors and the physics behind their operation.

High Luminosity LHC moves forward

October 2015 was a turning point for the High Luminosity LHC (HL-LHC) project, marking the end of the European-funded HiLumi LHC Design Study and the transition to the construction phase – a change also reflected in the recently presented redesigned logo.

So far, the LHC has delivered only 10% of the total planned number of collisions. To extend its discovery potential even further, the LHC will undergo a major upgrade – the HL-LHC – around 2025, which will increase the integrated luminosity by a factor of 10 beyond the original design value (from 300 to 3000 fb⁻¹). The HL-LHC machine will provide more accurate measurements and will enable the scientific community to study new phenomena discovered by the LHC, as well as new rare processes. The HiLumi upgrade programme relies on a number of key innovative technologies, such as cutting-edge 12 tesla superconducting magnets, very compact and ultra-precise superconducting cavities for beam rotation, and 100 m-long high-power superconducting links with zero energy dissipation. In addition, the higher luminosities will place new demands on vacuum, cryogenics and machine protection, and will require new concepts for collimation and beam diagnostics, advanced modelling of the intense beams and novel beam-crossing schemes to maximise the physics output of the collisions.

From design to construction

The green light for the beginning of this new HL-LHC phase, marked by the prototyping and industrialisation of the main hardware, was given with the approval of the first version of the Technical Design Report – the document that describes in detail how the LHC upgrade programme will be carried out. This happened at the 5th Joint HiLumi LHC-LARP Annual Meeting, which took place at CERN from 26 to 30 October and brought together more than 200 experts from all over the world to discuss the results and achievements of the HiLumi LHC Design Study. In the final stage of the more than four-year-long design phase, an international board of independent experts carried out an in-depth cost-and-schedule review. As a result, the total cost of the project – amounting to CHF 950 million – will be included in the CERN budget until 2026.

In addition to the project-management work package (WP), a total of 17 WPs involving more than 200 researchers and engineers addressed the technological and technical challenges of the upgrade. During the 48 months of the HiLumi Design Study, the accelerator-physics and performance team defined the parameter sets and machine optics that will allow the HL-LHC to reach the very ambitious performance target of an integrated luminosity of 250 fb⁻¹ per year. The study of beam–beam effects confirmed the feasibility of the nominal scenario based on the baseline β* levelling mechanism, providing sufficient operational margin for operation with the new ATS (Achromatic Telescopic Squeezing) scheme at the nominal levelling luminosity of 5 × 10³⁴ cm⁻²s⁻¹, with the possibility of reaching up to 50% more.

The magnet-design activity, focusing on the insertion magnets, launched the fabrication of short hardware models of the Nb3Sn triplet quadrupoles (QXF), the separation dipole, the two-in-one large-aperture quadrupole and the 11 T dipole for the dispersion-suppressor collimators. Single short coils in the mirror configuration have already been tested successfully for the triplet. The first QXF model, containing two CERN and two LARP coils, was assembled in the US over the summer and is being tested this autumn, while a short model of the 11 T dipole fabricated at CERN reached 12 T. To protect the magnets from the higher beam currents, the collimation team focused on the design and verification of a new generation of collimators, presenting a complete technical solution for collimation in and around the HL-LHC insertions that provides improved flexibility against optics changes.

The crab-cavity activity finalised and launched the manufacturing of the crab-cavity interfaces, including the helium vessels and the cryo-module assembly. All cavity parts stamped in the US will also be assembled and surface-processed there, together with electron-beam welding and testing. Last but not least, as part of its efforts to develop a superconducting transmission line, the cold-powering activity reached a world-record current of 20 kA at 24 K in a 40 m-long MgB2 electrical transmission line. The team has finalised the development and launched the procurement of the first MgB2 PIT round wires – an important achievement that will enable the start of large-scale cabling activity in industry, as required for the production of a prototype cold-powering system for the HL-LHC.

In addition to the technological challenges, the HL-LHC project has also seen an important expansion of the civil-engineering and technical infrastructure at P1 (ATLAS) and P5 (CMS), with new tunnels and underground halls needed to house the new cryogenic equipment, the electrical power supply and various plants for electricity, cooling and ventilation.

A winning combination

Such an extensive technical, technological and civil endeavour would not be possible without collaboration with industry. To address the specific technical and procurement challenges, the HL-LHC project is working closely with leading companies in the fields of superconductivity, cryogenics, electrical power engineering and high-precision mechanics. To enhance co-operation with industry on the production of key technologies that commercial partners have not yet taken up because of their novelty and low production volumes, the newly launched, EU-funded QUACO project is bringing together several research infrastructures with similar technical requirements in magnet development to act as a single buyer group.

A new record for the RMC test magnet at CERN

High magnetic fields are the Holy Grail of high-energy accelerators. The strength of the dipole field determines the maximum energy the beam can reach on a given orbit, while large-aperture, high-gradient quadrupoles with high peak fields govern the beam focusing at the interaction points. This is why, this September, members of the CERN Magnet Group in the Technology Department had big grins on their faces when the RMC racetrack test magnet attained a peak field of 16.2 T – twice the nominal field of the LHC dipoles and the highest field ever reached in this configuration.

This result was achieved thanks to a different superconductor – the intermetallic and brittle compound Nb3Sn – used for the coils, and to the new “bladder-and-key” technology developed at LBNL to withstand the extremely powerful electromagnetic forces.

The beginning of this success story dates back more than 10 years, when experts started to realise that Nb–Ti alloy, the workhorse of the LHC (and of all superconducting accelerators until then), and the conventional collar structure enclosing the superconducting coils in a locked, laminated assembly, would soon run out of steam. A technological quantum leap was needed.

Seeds sown

The first seeds of a European programme were sown in 2004, when a group of seven European laboratories and universities (CCLRC RAL, CEA, CERN, CIEMAT, INFN, Twente University and Wrocław University), under the co-ordination of CEA Saclay, decided to join forces to develop the technologies for the next generation of high-field magnets. Initially conceived to develop a 15 T dipole with a bore of 88 mm, the EU-funded NED JRA programme subsequently became an R&D programme to develop a new conductor. Its main result was an industrial Nb3Sn powder-in-tube (PIT) conductor with high current densities, designed to reach fields of up to 15 T.

Three of the NED JRA partners – CEA, CERN and RAL – saw the importance of exploiting the new technology and continued the R&D beyond the NED JRA programme. Inspired by programmes at other laboratories, in particular LBNL, they started to develop a sub-scale model magnet with racetrack coils: the short model coil (SMC). This intermediate step allowed the partners to learn the basic principles of Nb3Sn coil construction. In fact, the SMC became a fast-turnaround test-bed for medium-sized cables, and is still in use at CERN. In 2011, the second SMC assembly successfully achieved 12.5 T; in a subsequent SMC assembly in 2012, the field went up to 13.5 T. With these results, CERN and its European partners demonstrated that they were on track to master Nb3Sn magnet technology.

Towards high fields

Since 2009, CERN and CEA have continued to work on the technology, initially within the FP7-EuCARD project and today within the scope of the CEA–CERN magnet collaboration. The focus of the FP7-EuCARD high-field magnet (HFM) work package became the construction of a 13 T dipole magnet with a 100 mm aperture – FRESCA2 – which will be used to upgrade the FRESCA cable-test facility at CERN. To achieve the 13 T objective, the CERN–CEA team designed the magnet for a field of 15 T using state-of-the-art Nb3Sn technology: a 1 mm wire supplied by the only two manufacturers in the world capable of meeting the critical-current specification, one in Germany (powder-in-tube, or PIT, wire) and one in the US (rod restack process, or RRP, wire). The cable for FRESCA2 was designed to have 40 strands and to carry nearly 20 kA at 1.9 K in a magnetic field of 16 T – an impressive set of values compared with the LHC, where the dipole cables carry 13 kA at 1.9 K in a magnetic field of 9 T.

In spite of the engineering margins in the design, FRESCA2 proved to be a challenging goal. CERN therefore decided to design and construct an intermediate-step magnet, consisting of two flat racetrack coils and no bore, made with the same 40-strand cable and fabrication procedures as FRESCA2. This magnet was named the “racetrack model coil” (RMC).

Two initial assembly configurations were built using either RRP or PIT cables, and then a third one – called RMC_03 – was trained up to a maximum current of 18.5 kA at a temperature of 1.9 K. Based on field calculations, this current corresponds to peak fields of 16 T in the coil wound with PIT cable and 16.2 T in the coil wound with RRP cable. With this result – a new record for this configuration – CERN has caught up with LBNL in the domain of high dipole fields.

Nb3Sn will be used to build the QXF interaction-region quadrupoles and the 11 T dispersion-suppressor dipoles for the high-luminosity upgrade of the LHC (CERN Courier January/February 2013 p28). The RMC record paves the way for a promising demonstration of this technology in future developments. In particular, cables of the same type as those for FRESCA2 are also being considered for the Future Circular Collider (FCC) studies (CERN Courier April 2014 p16).

A lot of hard work remains before CERN and its collaborating partners will be able to achieve a 16 T field inside a beam aperture with the required field quality for an accelerator, so the development work on FRESCA2 continues. The coils are under construction and a test station is being built in SM18 to host the giant magnet, which should be ready for testing by next summer.

CMS data-scouting and a search for low-mass dijet resonances

Proton beams crossed inside each of the CMS and ATLAS detectors 20 million times a second during the 2012 LHC proton–proton run. However, the physics programme of CMS is based on only a small subset of these crossings, corresponding to about 1000 events per second for the highest beam intensities attained that year. This restriction is due to technological limits on the speed at which information can be recorded: the CMS detector has around 70 million electronics channels, yielding up to about half a million bytes per event, a data volume that makes it impossible to record every event that occurs. A so-called trigger system is used in real time to select which events to retain, typically requiring at least one object with large transverse momentum relative to the proton-beam axis. This requirement is effective at reducing the event rate, but it also reduces the sensitivity to new phenomena that occur at smaller transverse-momentum scales, and hence to the production of new particles, or “resonances”, below certain mass values. While many important studies have been performed with the standard triggers, the reduction they impose seriously limits the sensitivity to resonances with masses below around 1 TeV that decay to a two-jet (“dijet”) final state, where a “jet” refers to a collimated stream of particles, such as pions and kaons, that is the signature of an underlying quark or gluon.

To recover sensitivity to events that would otherwise be lost, in 2011 CMS implemented a new triggering scheme referred to as “data scouting”. A dedicated trigger algorithm was developed to retain events with a sum of jet transverse energies above the relatively low threshold of 250 GeV, at a rate of about 1000 events per second. To compensate for this large rate and to remain within the limits imposed by the available bandwidth and disk-writing speed, the event size was reduced by a factor of 1000 by retaining only the jet energies and momenta reconstructed at the high-level trigger stage. Because of the minimal amount of information recorded, no subsequent offline data processing was possible, and the scouted data were suitable for only a few studies, such as the dijet-resonance search. The resonance search was implemented directly in the CMS data-quality monitoring system so that, should deviations from the Standard Model expectation be observed, the trigger could be adjusted to collect the events in the full event format.
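
A rough back-of-the-envelope estimate, based on the figures quoted above (an illustration rather than an official CMS accounting), shows why the reduced event size matters:

1000 events/s × ~500 kB/event ≈ 0.5 GB/s in the full event format,
1000 events/s × ~0.5 kB/event ≈ 0.5 MB/s in the scouted format,

so the additional 1000 scouting events per second cost only about a thousandth of the bandwidth of the standard data stream.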

The first results using data scouting were reported by CMS in 2012, based on 0.13 fb⁻¹ of proton–proton collisions at a centre-of-mass energy √s = 7 TeV, collected during the last 16 hours of the 2011 run. New results on dijet resonances have now been presented, which employ data scouting on a much larger sample of 18.8 fb⁻¹ collected at √s = 8 TeV in 2012. The results are summarised in the figure, which shows exclusion limits on the coupling strength (gB) of a hypothetical baryonic Z′B boson that decays to a dijet final state, as a function of the Z′B mass. The CMS results, shown in comparison with previous results, demonstrate the success of the data-scouting method: using very limited disk-writing resources, corresponding to only about 10% of what is typically allocated for a CMS analysis, the exclusion limits for low-mass resonances (below around 1 TeV) are improved by more than a factor of four. Although no evidence for a new particle has been found, data scouting has established itself as a valuable tool in the search for new physics at the LHC.

LHCb measures the effective double parton scattering cross-section with unprecedented precision

In high-energy hadron–hadron collisions, each incoming hadron behaves as a loosely bound system of massless partons – quarks, antiquarks and gluons. The interaction of the incoming hadrons is described as pair-wise interactions of partons from one hadron with partons from the other. This model agrees well with a large body of precise experimental data, in particular on the production of heavy-flavoured particles, which has been studied at the Tevatron and at the LHC. All experimental data agree well with the dominant contribution coming from a single pair-wise collision of gluons producing a single charm–anticharm (cc) or bottom–antibottom (bb) pair – the so-called single parton scattering (SPS) paradigm. However, the production of multiple cc pairs in single hadron–hadron collisions has been reported by the NA3 and WA75 collaborations, which observed J/ψJ/ψ pairs and events with three charmed mesons, respectively; Bc-meson production likewise requires both a cc and a bb pair. Yet these processes can still proceed through a single parton scattering of gluons.

At higher energies, the probability of a second hard parton interaction becomes non-negligible. Evidence for this kind of process, named double parton scattering (DPS), was obtained long ago by the AFS and UA2 collaborations. At the LHC, where the energies of the colliding protons are much larger, the DPS contribution is expected to be more prominent, and even dominant for some processes.

Assuming that the partons are independent, the rate of a DPS process is proportional to the product of the rates of the two independent pair-wise parton collisions. A consequence is that the corresponding proportionality coefficient is universal – independent of the collision energy and of the process considered. The inverse of this coefficient has the dimension of an area and is named the “effective DPS cross-section”, σeff.
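
In schematic form (a standard expression in the DPS literature rather than a formula quoted from the LHCb analysis), the cross-section for producing two final states A and B via DPS is written as

σDPS(A + B) = (m/2) × σ(A) × σ(B) / σeff,

where m = 1 if A and B are identical processes and m = 2 otherwise, so measuring σ(A), σ(B) and the combined rate σ(A + B) directly determines σeff.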

The significant role of DPS processes at the LHC has been demonstrated by the LHCb collaboration through measurements of the simultaneous production of J/ψ mesons and charm hadrons. The measured rates are found to be 30 times larger than predicted by the SPS paradigm and in agreement with DPS predictions, showing the dominance of the DPS contribution in these processes. The value of σeff extracted from these data is in excellent agreement with those determined by the CDF collaboration from the study of jet events, but is significantly more precise.

Recently, LHCb has extended this analysis to the simultaneous production of Υ mesons and charm hadrons. Such final states require the simultaneous production of cc and bb pairs. The full Run 1 data set has been used in this analysis.

The measured production rates significantly exceed the theoretical predictions based on the SPS approach, but agree well with the DPS paradigm.

The measured σeff is in very good agreement with all previous determinations and is among the most precise measurements of this important parameter. The current best measurements of σeff are summarised in the figure.

ATLAS observes long-range elliptic anisotropies in √s = 13 and 2.76 TeV pp collisions

Measurements of two-particle angular correlations in proton–proton (pp) collisions at the LHC have shown a feature commonly called the “ridge”: an enhancement in the production of particle pairs at small azimuthal-angle separation, Δφ, that extends over a large pseudorapidity separation, Δη. This feature has now also been demonstrated, and the two-particle correlation function measured, in pp collisions at √s = 13 TeV. The ridge has been observed and studied in more detail in proton–nucleus (p+A) and nucleus–nucleus (A+A) collisions, where a long-range correlation is also observed on the away-side (Δφ ~ π). Both the near- and away-side ridges in p+A and A+A collisions have been shown to result from a modulation of the single-particle azimuthal-angle distributions, whose convolution produces the observed features in the two-particle Δφ distribution. Prior to this measurement, however, it was not known whether the ridge in pp collisions arises from similar single-particle modulations, or even whether it is present on the away-side at all.

Recently, ATLAS has analysed the long-range component of two-particle correlations in pp collisions at 2.76 TeV and 13 TeV in different intervals of the charged-particle multiplicity, N_ch^rec. The yield, Y(Δφ), of particles associated with a “trigger” particle, for |Δη| > 2, is shown in the figure. A peak at Δφ ~ 0 is associated with the ridge, while the peak at Δφ ~ π contains known contributions from dijets and, possibly, contributions from an away-side ridge. To disentangle the ridge and jet contributions, a new template-fitting procedure, demonstrated in the figure, was used. The Y(Δφ) distributions were fitted by a sum (red curve) of two components: the Y(Δφ) measured in low-multiplicity (0 < N_ch^rec < 20) collisions (open points), which accounts for the “jet” contribution, and a sinusoidally modulated component (blue dashed lines), which represents a long-range correlation like that observed in p+A and A+A collisions. These template fits successfully describe the two-particle correlations in all N_ch^rec intervals at both energies. Furthermore, the sinusoidal component was found to be present in all N_ch^rec intervals, indicating that the long-range correlation is a feature present at all multiplicities and not only in rare high-multiplicity events. The fractional amplitudes of the sinusoidal components are observed to be nearly constant with multiplicity and to be approximately the same at the two centre-of-mass energies.
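
Schematically (in notation chosen here for illustration, following the usual presentation of such template fits rather than quoting the ATLAS paper), the fit function has the form

Y_templ(Δφ) = F × Y_periph(Δφ) + G [1 + 2 v2,2 cos(2Δφ)],

where Y_periph(Δφ) is the yield measured in the low-multiplicity interval, F and G are fit parameters, and v2,2 quantifies the amplitude of the sinusoidal (elliptic) modulation.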

These results suggest that the ridges in pp, p+A and A+A collisions arise from similar mechanisms. The observed weak dependence of the fractional amplitudes on Nrecch and centre-of-mass energy should provide a strong constraint on the physical mechanism responsible for producing the ridge.

ALICE looks to the skies (part II)

The ALICE experiment, designed for the LHC heavy-ion programme, is particularly well-suited for the detection and study of very high-energy cosmic events. The apparatus is located in a cavern 52 m underground, with 28 m of overburden rock, offering excellent conditions for the detection of muons produced by the interaction of cosmic rays in the upper atmosphere.

During pauses in LHC operation (with no beam circulating) between 2010 and 2013, the experiment collected cosmic-ray data for 30 days of effective time. Dedicated triggers were constructed from the information delivered by three detectors: ACORDE (A COsmic Ray DEtector), the TOF (time-of-flight) detector and the SPD (silicon pixel detector). The tracks of muons crossing the ALICE apparatus were reconstructed from the signals recorded by the TPC (time projection chamber). The unique ability of the TPC to track events with a large number of muons – unimaginable with standard cosmic-ray apparatus – has opened up the opportunity to study the muon multiplicity distribution (MMD), and in particular rare events with extremely high muon density.

Atmospheric muons are created in extensive air showers that originate from the interaction of primary cosmic rays with nuclei in the upper atmosphere. The MMD has been measured by several experiments in the past, in particular by the ALEPH and DELPHI detectors at LEP. Neither of these experiments was able to identify the origin of the high-multiplicity events it observed. In particular, ALEPH concluded that the bulk of its data could be described using standard hadronic-production mechanisms, but not the highest-multiplicity events, for which the measured rate exceeds the model predictions by over an order of magnitude, even when assuming that the primary cosmic rays are composed solely of iron nuclei.

The MMD measured from the data set collected by ALICE exhibits a smooth distribution up to a muon multiplicity of around 70. At larger multiplicities, five events were detected with more than 100 muons, confirming the detection of similar events by ALEPH and DELPHI. The event with the highest multiplicity (276 muons), shown in the figure, corresponds to a density of around 18 muons/m².

These events raised the question of whether the data can be explained by a standard cosmic-ray composition and conventional hadronic-interaction models, or whether more exotic mechanisms are required. To answer this, as a first step, the MMD was reproduced at low-to-intermediate multiplicities using the standard event generator CORSIKA, with QGSJET as the hadronic-interaction model. CORSIKA simulates the development of extensive air showers following the collision of a cosmic ray with nuclei in the atmosphere, and the shower particles are tracked through the atmosphere until they reach the ground.

These simulations successfully described the magnitude and shape of the measured MMD in the low-to-intermediate multiplicity region, so the same model was then used to explore the origin of the five high-multiplicity events. This investigation revealed that such rare events can only be produced by primary cosmic rays with energies above 10,000 TeV. More importantly, the observed detection rate of one event every 6.2 days is reproduced quite well by the simulations when all primary cosmic rays are assumed to be iron nuclei (a heavy composition); for protons (a light composition), the expected rate would be one event every 11.6 days.

Hence, for the first time, the rate of these rare events has been satisfactorily reproduced using conventional hadronic-interaction models. However, the large uncertainty in the measured rate (50%) prevents a firm conclusion on the exact composition of these events, although heavy nuclei are, on average, the most likely candidates. This conclusion is consistent with the deduced energy of the primaries being above 10,000 TeV, a range in which the heavy component of cosmic rays prevails.

The data collected this year will be extremely valuable for performing more detailed analyses, using all of the measured variables and different hadronic-interaction models, and will therefore allow further progress in understanding the origin of high-muon-multiplicity events.

Eleven-year search misses gravitational waves

A pair of supermassive black holes (SMBHs) in close orbit around each other is expected to produce gravitational waves. The background “rumble” of gravitational waves resulting from many such systems in the universe should be detectable via precise timing of radio pulsars. But an 11-year search has now failed to detect the predicted signal, suggesting the need to revise current models of black-hole binaries in merging galaxies.

There is observational evidence for the presence of an SMBH – a black hole with a mass of at least one million solar masses – in almost every spiral galaxy. When two galaxies merge, their black holes are therefore thought to be drawn together and to form an orbiting pair (CERN Courier November 2015 p17). The separation of the two black holes would then decrease with time, first via interactions with nearby stars and gas, and finally via the emission of gravitational waves.

According to the general theory of relativity, gravitational waves are space–time distortions that travel at the speed of light. Precisely 100 years after Einstein’s publication, they have not yet been directly detected. Millisecond pulsars offer an indirect way to detect them: such neutron stars spin hundreds of times per second and produce extremely regular trains of radio pulses (CERN Courier November 2013 p11). A gravitational wave passing between the Earth and a millisecond pulsar squeezes and stretches space, changing the distance between them by about 10 m. This changes, very slightly, the arrival times of the pulsar’s signals on Earth, with the variation occurring on timescales of about 0.1 to 30 years. The method requires the pulse arrival times to be measured with a precision of the order of 10 ns.
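
As a rough illustration based on the numbers above (not a calculation taken from the study itself), a 10 m change in the Earth–pulsar distance corresponds to a shift in the light-travel time of

Δt ≈ 10 m / (3 × 10⁸ m/s) ≈ 30 ns,

which is why the pulse arrival times must be measured to a precision of order 10 ns to resolve such a signal.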

Using the 64 m Parkes radio telescope in Australia, scientists monitored 24 millisecond pulsars for 11 years, focusing on the four with the highest timing precision, but could not find any sign of gravitational waves. The study, published in Science, was led by Ryan Shannon of the Commonwealth Scientific and Industrial Research Organisation and the International Centre for Radio Astronomy Research, Australia. It aimed to detect the stochastic background of gravitational waves resulting from merging galaxies throughout the universe.

The upper limit obtained on the amplitude of the gravitational-wave background is below the expectations of current models. A possible explanation of the discrepancy is linked to the environment of the black-hole pairs at the centres of merging galaxies: in the presence of more surrounding gas, the black holes would lose more orbital energy through friction, so their orbits would shrink more quickly, shortening the time during which gravitational waves are emitted. Another possibility is that the galaxy merger rate is lower than expected.

Whatever the explanation, the result implies that the detection of gravitational waves by timing pulsars will require more intensive monitoring. It has, however, no implications for ground-based gravitational-wave detectors such as Advanced LIGO (the Laser Interferometer Gravitational-Wave Observatory), which look for higher-frequency signals generated by other sources, such as coalescing neutron stars.
