New Zealand company MARS Bioimaging Ltd has used technology developed at CERN to perform the first colour 3D X-ray of a human body, offering more accurate medical diagnoses. Father-and-son researchers Phil and Anthony Butler from Canterbury and Otago universities in New Zealand spent a decade building their product using Medipix read-out chips, which were initially developed to meet the needs of particle tracking in experiments at the Large Hadron Collider.
The CMOS-based Medipix read-out chip works like a camera, detecting and counting each individual particle that hits its pixels when the shutter is open. The resulting high-resolution, high-contrast images make it well suited to medical-imaging applications. Successive generations of the chip have been developed over the past 20 years, with many applications outside high-energy physics. The latest generation, Medipix3, was developed by a collaboration of more than 20 research institutes, including the University of Canterbury.
MARS Bioimaging Ltd was established in 2007 to commercialise Medipix3 technology. The firm’s product combines spectroscopic information generated by a Medipix3-enabled X-ray detector with powerful algorithms to generate 3D images. The colours represent different energy levels of the X-ray photons as recorded by the detector, hence identifying different components of body parts such as fat, water, calcium and disease markers.
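As a rough illustration of the principle, and with the caveat that MARS's actual reconstruction and material-decomposition algorithms are far more sophisticated (and not described here), a photon-counting detector delivers a small spectrum per pixel, and the relative weight of low- and high-energy counts can be mapped to a colour. The bin layout, colour assignment and function below are hypothetical:

```python
# Toy illustration only: map per-pixel photon counts in a few energy bins to a
# pseudo-colour. Bin layout and colour assignment are hypothetical; the real
# MARS pipeline performs full material decomposition.
import numpy as np

def pseudo_colour(counts):
    """counts: array of shape (ny, nx, n_bins), photon counts per pixel and per
    energy bin from a photon-counting detector. Returns an RGB image in which
    the spectral shape (relative low/mid/high-energy content) sets the hue and
    the total number of transmitted photons sets the brightness, so materials
    with different attenuation spectra appear in different colours."""
    total = counts.sum(axis=-1, keepdims=True)
    frac = counts / np.clip(total, 1, None)              # spectral shape per pixel
    rgb = np.stack([frac[..., 0],                        # low-energy fraction  -> red
                    frac[..., counts.shape[-1] // 2],    # mid-energy fraction  -> green
                    frac[..., -1]],                      # high-energy fraction -> blue
                   axis=-1)
    brightness = np.log1p(total) / np.log1p(total).max() # overall transmission
    return rgb * brightness

# toy example: a 2x2-pixel image with 4 energy bins
counts = np.random.poisson(lam=50, size=(2, 2, 4))
image = pseudo_colour(counts)
```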
So far, researchers have been using a small version of the MARS scanner to study cancer, bone and joint health, and vascular diseases that cause heart attacks and strokes. In the coming months, however, orthopaedic and rheumatology patients in New Zealand will be scanned by the new apparatus in a world-first clinical trial. “In all of these studies, promising early results suggest that when spectral imaging is routinely used in clinics it will enable more accurate diagnosis and personalisation of treatment,” said Anthony Butler.
High-energy gamma rays provide a window into the physics of cosmic objects at extreme energies, such as black holes, supernova remnants and pulsars. In addition to revealing the nature of such objects, high-energy gamma-ray signals test general relativity and the Standard Model of particle physics. Take for example gamma-ray bursts, which can last from 10 milliseconds to several hours and are emitted by sources located up to several billion light-years away from Earth. A comparison between the arrival times of the bursts’ X-rays and gamma rays has been used to exclude modifications of Einstein’s general relativity that predict different arrival times. Also, in some theories in which dark matter is in the form of weakly interacting massive particles (WIMPs), dark-matter particles can annihilate into gamma-ray photons and other Standard Model particles. Significant effort is therefore being spent in searches for dark-matter annihilation signals in the gamma-ray band, including searches towards the Milky Way centre, which is estimated to contain a large amount of dark matter.
Studies of individual gamma-ray emitting sources and diffuse gamma-ray emission, which could include a galactic dark-matter annihilation signal, have benefited greatly from the launch of the Large Area Telescope on board NASA’s Fermi Gamma-ray Space Telescope (Fermi-LAT) in June 2008. Fermi-LAT, which observes gamma rays with energies from about 20 MeV to 1 TeV, has discovered more than 3000 point sources that have enabled researchers to significantly improve models of known galactic and extragalactic gamma-ray-emitting objects. But Fermi-LAT has also thrown up some surprising discoveries (figure 1). One of these is the so-called Fermi bubbles – two large gamma-ray lobes above and below the galactic centre that, intriguingly, have no clear counterpart in the X-ray and radio bands.
A second unexpected discovery by Fermi-LAT was an excess of gamma-ray radiation near the galactic centre with an energy of a few GeV. Interestingly, the excess has properties that are consistent with an annihilation signal from dark-matter particles with a mass of a few tens of GeV. The excess is visible up to 10 or 15 degrees away from the galactic centre – an elephant at a distance of four metres from an observer would have a similar apparent size. The Fermi bubbles, spanning 110 degrees from the northern to the southern edge, have an apparent size comparable to that of an elephant located one metre away.
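The elephant comparisons are just small-angle arithmetic: the apparent angular size of an object follows from its physical size and its distance. A quick check, assuming an elephant roughly 2.5 m across (an assumption, as is the exact extent being compared):

```python
import math

def angular_size_deg(size_m, distance_m):
    """Full apparent angular size, in degrees, of an object of a given physical
    size seen from a given distance."""
    return math.degrees(2 * math.atan(size_m / (2 * distance_m)))

ELEPHANT = 2.5  # assumed elephant size in metres (illustrative)

# ~35 degrees: comparable to the full ~20-30 degree extent of the GeV excess
print(angular_size_deg(ELEPHANT, 4.0))

# ~100 degrees: comparable to the ~110 degree span of the Fermi bubbles
print(angular_size_deg(ELEPHANT, 1.0))
```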
Finally, there is a third, even larger, feature in the gamma-ray, radio and X-ray bands called Loop I. The challenge of explaining these three “elephants” in the gamma-ray sky has puzzled physicists and astronomers for years – tens of years in the case of Loop I. Are the features related to each other? Are they located near the galactic centre or close to us? And is the GeV gamma-ray excess caused by dark-matter annihilation or by astrophysical phenomena such as pulsars?
Loop I
The largest of the gamma-ray elephants, Loop I, has been known since the 1950s from its radio emission (figure 2a). Its large angular size – it stretches up to 80 degrees above the galactic plane – could easily be explained if it were a nearby feature. For instance, it could be the combined emission from a “superbubble”, the collective remnant of several supernova explosions taking place in a localised region. Such a bubble easily reaches a size of a few hundred light-years, and if the distance to the bubble was also a few hundred light-years, then it would appear very large, up to 90 degrees in angular size. In this scenario, the galactic magnetic field would drape around the expanding bubble and high-energy cosmic-ray electrons from sources in the galactic disk, compressed by the expansion of the bubble, would produce synchrotron emission that would appear as a huge, ring-like structure in the sky. A possible location of the underlying supernova explosions would be the Scorpius–Centaurus stellar cluster located a few hundred light-years away from Earth.
Loop I, or at least its brightest part, known as the North Polar Spur, is also seen at other wavelengths, in particular at gamma-ray (figure 1) and soft X-ray (figure 2b) wavelengths. While the gamma rays can be produced through inverse Compton emission by the same cosmic-ray electrons that produce the synchrotron radio emission, the soft X-ray emission is probably produced by hot interstellar gas. The approximate angular alignment between the radio and X-ray emissions of the North Polar Spur suggests that they both belong to Loop I. Yet there are several differences between the X-ray and radio emissions. For example, a bright, ring-like feature in X-rays that crosses the North Polar Spur could be explained by the collision of the hypothetical Loop I superbubble with another bubble containing the solar system, the local hot bubble. One can even trace back the motion of stars within a few hundred light-years of us to find a population of stars whose members could have exploded as supernovae up to about 10 million years ago and inflated the local hot bubble.
However, apparent X-ray absorption at the southern part of the North Polar Spur by neutral gas located along the line of sight points to a different interpretation. Detailed spectral modelling of this absorption has recently shown that the amount of gas required to explain the absorption puts the X-ray emitting structure at distances far beyond a few hundred light-years. This lower bound on the distance to the X-ray structure favours models of Loop I as a galactic-scale phenomenon, for example the product of a large-scale outflow from the galactic-centre region, as opposed to the nearby superbubble. More X-ray data is needed to pin down the nature of Loop I, but if this feature is indeed a large-scale galactic structure, then it might be related to the second elephant in the sky – the Fermi bubbles.
Fermi bubbles
The Fermi bubbles consist of two large gamma-ray lobes above and below the galactic centre, each of which is slightly larger than the distance from Earth to the galactic centre (about 25,000 light-years). They appear smaller than Loop I and were discovered in 2010 with about a year and a half of Fermi-LAT data. From observations of galaxies other than the Milky Way, we know of two possible mechanisms for creating such bubbles: emission from a supermassive black hole at the galactic centre, or a period of intensive star formation (a starburst) and supernova explosions. Which of these processes is responsible for the formation of the Fermi bubbles in our galaxy is not yet known.
Even the mechanism for producing the gamma rays in the first place is not yet resolved: it could be due to interactions between cosmic-ray protons and galactic gas, or inverse Compton scattering of high-energy electrons off interstellar radiation fields. Both of these options have caveats. For the first, it’s unclear, for instance, how one can collect and keep the high density of cosmic rays required to compensate for the low density of gas at large distances from the galactic plane. It’s also unclear whether the pressure of cosmic rays will expel the gas and create a cavity that will make the gas density even lower. For the inverse-Compton-scattering hypothesis, one would need electrons with energies up to 1 TeV. If these electrons were accelerated to such energies at the beginning of the expansion of the Fermi bubbles, then the bubbles’ expansion velocity would be about 10,000 km s–1 – at least 10 times larger than the typical observed outflow velocities.
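The quoted velocity can be reproduced with an order-of-magnitude estimate: electrons at 1 TeV lose their energy to synchrotron and inverse Compton radiation within a few hundred thousand years, so to reach the tops of the bubbles before cooling they would have to be transported at thousands to tens of thousands of km/s. A minimal sketch, using an illustrative value of about 1 eV/cm³ for the combined radiation and magnetic energy density (not a figure taken from the article):

```python
import math

SIGMA_T    = 6.652e-25   # Thomson cross-section, cm^2
M_E_C2     = 8.187e-7    # electron rest energy, erg
C          = 2.998e10    # speed of light, cm/s
ERG_PER_EV = 1.602e-12
YEAR       = 3.156e7     # seconds
LIGHT_YEAR = 9.461e17    # cm

E_e   = 1e12 * ERG_PER_EV   # a 1 TeV electron
gamma = E_e / M_E_C2        # Lorentz factor, ~2e6
u     = 1.0 * ERG_PER_EV    # assumed radiation + magnetic energy density,
                            # ~1 eV/cm^3 (illustrative, not from the article)

# combined synchrotron + inverse-Compton cooling time: t = 3 m_e c^2 / (4 sigma_T c gamma u)
t_cool = 3 * M_E_C2 / (4 * SIGMA_T * C * gamma * u)
print(f"cooling time ~ {t_cool / YEAR:.1e} yr")   # a few 1e5 yr

# speed needed to reach the top of a bubble ~25,000 light-years above the plane
# before the electrons cool: of order 1e4 km/s for these assumptions
v = 25_000 * LIGHT_YEAR / t_cool
print(f"required speed ~ {v / 1e5:.0f} km/s")
```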
Moreover, even though the Fermi bubbles are similar in shape to gamma-ray lobes in other galaxies, which are typically visible in X-ray and radio wavelengths, they have no clear counterpart in X-rays and radio waves at high latitudes. Perhaps the Fermi bubbles are unique to the Milky Way. Then again, perhaps astronomers have simply struggled to detect gamma-ray lobes in other galaxies that are “quiet” in the radio and X-ray bands.
A study of the gamma-ray emission from the Fermi bubbles at low latitudes could shed light on their origin, as it may point to the supermassive black hole at the galactic centre or to a region away from the centre, which would support the starburst scenario. Although the diffuse foreground and background gamma-ray emission from the Milky Way near the galactic centre is very bright, making it hard to interpret the observations, several analyses of Fermi-LAT gamma-ray data have revealed an increased intensity of gamma-ray emission from the Fermi bubbles near the galactic plane and a displacement of the emission relative to the galactic centre (figure 3). The higher intensity of the emission at the base of the Fermi bubbles opens up the possibility for a detection with ground-based very-high-energy gamma-ray Cherenkov telescopes, such as the upcoming Cherenkov Telescope Array, which is expected to start taking data with a partial array in 2022 and with the full array in 2025. At low energies, below 100 GeV, the flux from the base of the Fermi bubbles may also be confused with the third elephant in the sky – the galactic-centre GeV excess.
Galactic-centre GeV excess
The first hints of an extended excess of gamma rays from the centre and bulge of the galaxy at energies around a few GeV and with an almost spherical morphology were presented in 2009, before the discovery of the Fermi bubbles. However, given that the diffuse foreground gamma-ray emission along the galactic plane is very bright, and also rather uncertain towards the galactic centre, it took a long time to prove that the excess is not caused by mis-modelling of foreground components (such as inverse Compton scattering of high-energy electrons and hadronic interactions of the stationary distribution of cosmic rays along the line of sight). The spectrum of the excess has a peak at a few GeV, hence the name “GeV excess”, whereas the components of the diffuse foreground have a power-law structure around a few GeV.
Intriguingly, the combined GeV centre and bulge emission has properties that are largely compatible with expectations from a dark-matter annihilation signal: the emission is extended, up to at least 10–15 degrees away from the galactic centre, with a profile that is consistent with that from a slightly contracted dark-matter-halo profile (figure 4). At energies below about 1 GeV, the gamma-ray emission grows steeply, and has a maximum at a few GeV with a cut-off or a significant softening at higher energies, which is expected for a signal from dark-matter annihilation.
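For annihilating dark matter, the predicted intensity towards a given direction scales with the integral of the density squared along the line of sight (often called the J-factor), which is how a halo profile translates into an angular profile on the sky. A minimal sketch, assuming a generalised NFW profile with an inner slope of 1.2 and an illustrative scale radius (both assumptions, not fitted values):

```python
import numpy as np

R_SUN = 8.2      # kpc, distance from the Sun to the galactic centre
R_S   = 20.0     # kpc, assumed NFW scale radius (illustrative)
SLOPE = 1.2      # assumed inner slope of a slightly contracted profile

def rho(r):
    """Generalised NFW density profile (arbitrary normalisation)."""
    x = r / R_S
    return x**(-SLOPE) * (1 + x)**(SLOPE - 3)

def j_factor(psi_deg, s_max=100.0, n=4000):
    """Line-of-sight integral of rho^2 at angle psi from the galactic centre;
    the annihilation intensity in that direction is proportional to this."""
    psi = np.radians(psi_deg)
    s = np.linspace(1e-3, s_max, n)  # distance along the line of sight, kpc
    r = np.sqrt(R_SUN**2 + s**2 - 2 * R_SUN * s * np.cos(psi))  # galactocentric radius
    return np.trapz(rho(r)**2, s)

# intensity relative to 10 degrees, showing how steeply the signal falls off
for psi in (1, 5, 10, 15):
    print(psi, j_factor(psi) / j_factor(10))
```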
Given the high stakes of claiming a discovery of a dark-matter annihilation signal, corroborating evidence for this hypothesis must be found, or alternative astrophysical explanations must be confidently excluded. Unfortunately, neither has happened so far. Quite the contrary: there are several sources of gamma-ray emission near the galactic centre that could, within uncertainties, together account for all of the bulge and centre emission. For example, massive molecular-gas clouds near the galactic centre show clear indications of star-formation activity, which results in cosmic-ray production and associated gamma-ray emission in the inner galaxy. While the hadronic cosmic rays from such activity are unlikely to explain the GeV excess, because their gamma-ray emission is not as extended as the excess, inverse Compton emission from cosmic-ray electrons linked with the same activity can extend over many degrees and is expected to contribute to the GeV emission. However, given that the energy spectrum expected for this inverse Compton emission is significantly flatter than that of the observed GeV excess, it is unlikely that this component accounts completely for the GeV-excess emission.
Arguably, the most plausible explanation for the GeV-excess emission from the galactic bulge and centre is a population of thousands of millisecond pulsars – highly magnetised neutron stars with a rotational period of 1–10 ms. They can emit gamma rays for billions of years before they lose energy, and their gamma-ray spectrum, as observed by Fermi-LAT, is similar to the spectrum of the GeV excess. It is plausible that millisecond pulsars in the bulge follow a spatial distribution similar to that of the majority of bulge stars. Indeed, recent analyses showed that the profile of the GeV-excess emission in the inner galaxy is better described by the boxy stellar bulge than by a spherically symmetric dark-matter profile. Moreover, several detailed statistical analyses found evidence that the emission is more likely to come from a population of numerous but faint point sources, such as millisecond pulsars in the bulge, than from truly diffuse emission, such as that resulting from the annihilation of dark-matter particles (figure 5).
Future observations with radio telescopes such as MeerKAT in South Africa, expected to start taking data this year, and its successor, the Square Kilometre Array (SKA), the first construction phase of which is expected to end in 2020, should be able to test whether millisecond pulsars exist in the inner galaxy and can explain the GeV excess.
Additional multi-wavelength observations will provide new information about the three elephants in the sky. In particular, the eROSITA experiment, the successor to the ROSAT X-ray satellite, will survey the whole sky in X-rays and will be an order of magnitude more sensitive than ROSAT. With the eROSITA data, astronomers will search for a possible cavity carved out by cosmic rays in the Fermi bubbles and will estimate the distance to the North Polar Spur using the absorption of soft X-rays from the spur by the gas distributed along the line of sight.
Possible connections
On the high-energy gamma-ray front, the upcoming Cherenkov Telescope Array is expected to detect the Fermi bubbles near the galactic plane above a few hundred GeV. This detection should help to answer the question of whether the Fermi bubbles are linked to the galaxy’s central supermassive black hole or to a different source away from the galactic centre. On the other side of the electromagnetic spectrum, the new generation of radio-telescope arrays, MeerKAT and SKA, should, as mentioned, be able to confirm or rule out the millisecond-pulsar hypothesis for the GeV excess. If the millisecond-pulsar hypothesis is excluded, then the dark-matter interpretation will remain as one of the plausible explanations. By contrast, a confirmation of the millisecond-pulsar hypothesis will significantly constrain the dark-matter hypothesis.
The presence of the three elephants in the gamma-ray sky in approximately the same direction raises the question of whether they are connected. One of the possible connections between the Fermi bubbles and Loop I is that Loop I is created by galactic gas pushed away by the expansion of the bubbles. In this case, the two elephants would become one, where Loop I is an outer part and the Fermi bubbles are an inner part. This scenario looks especially plausible for the northern bubble because Loop I extends beyond it.
The overlap between the GeV excess and the Fermi bubbles in the galactic-centre region provides the exciting possibility of a connection between the two. Models that try to explain the GeV excess with an additional population of cosmic-ray electrons, star formation and cosmic-ray acceleration processes, can connect the gamma-ray emission in the bulge with that at higher latitudes in the Fermi bubbles. Also, the mechanism underpinning the formation of the bubbles – whether it is linked to activity of the galaxy’s central supermassive black hole or to a burst of star formation – might affect the properties of the GeV excess. Future observations and analyses will help to settle the nature – common or not – of the three elephants in the sky, and might point to new physics such as dark-matter annihilation in the Milky Way. Studying the gamma-ray sky will no doubt be an exciting journey for many years to come.
A new experiment at Fermilab in the US, designed to make the most precise measurement of the muon’s magnetic moment, has completed its first physics data-taking campaign, showing promising results. Experiment E989 is a reincarnation of the muon g-2 experiment at Brookhaven National Laboratory (BNL), which ran in the late 1990s and early 2000s and found the muon’s anomalous magnetic moment, aμ, to be approximately 3.5 sigma above the Standard Model prediction. The Fermilab experiment aims to resolve this long-standing discrepancy, revealing whether it is due to a statistical fluctuation or to the existence of new particles that are influencing the muon’s behaviour.
The international E989 collaboration hopes to measure aμ to a final precision of 140 parts per billion, improving on the BNL result by a factor of four. Following months of commissioning efforts beginning last autumn, the experiment started taking data in February. Its net accumulated dataset is already almost twice that obtained by BNL, although much of the initial run involved varying the operating conditions to optimise data collection and explore systematics.
The principle behind the Fermilab and BNL experiments is the same: muons start with their spins aligned with their direction of motion, but as they journey around the storage ring they precess at a frequency proportional to the magnetic field and to the value of aμ. At experiment E989, muons are vertically focused in the ring via a system of electric quadrupoles, and the precession frequency is determined using a set of 24 electromagnetic calorimeters located along the inner circumference of the ring. The new experiment reuses the 1.45 T superconducting storage ring from BNL, which was shipped from Long Island to Chicago in 2015 and has since been rebuilt, its magnetic field now shimmed to a uniformity that exceeds BNL’s by a factor of three. Nearly all of the other aspects of the experiment are new.
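To leading order the anomalous precession frequency is ωa ≈ aμ eB/mμ (the electric-field focusing term is suppressed because the muons are stored at the so-called magic momentum). A quick numerical check with the BNL value of aμ and the 1.45 T field, purely for orientation:

```python
import math

a_mu = 11659208e-10     # anomalous magnetic moment, approximately the BNL value
e    = 1.602176634e-19  # elementary charge, C
m_mu = 1.883531627e-28  # muon mass, kg
B    = 1.45             # storage-ring field, T

omega_a = a_mu * e * B / m_mu      # anomalous precession angular frequency, rad/s
f_a     = omega_a / (2 * math.pi)  # ~0.23 MHz, one g-2 cycle every ~4.4 microseconds
print(f"f_a = {f_a / 1e3:.0f} kHz, period = {1e6 / f_a:.2f} us")
```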
The Fermilab Muon Campus – which will also serve the Muon-to-Electron Conversion experiment in the future – provides an intense polarised muon beam that is devoid of the pion contamination that challenged the BNL measurement. Bunches of muons are injected into the storage ring and then “kicked” during their first rotation around the ring. “This is one of the most challenging aspects and one that the collaboration continues to develop because the kick quality affects the net storage efficiency and the momentum distribution,” explains E989 member and former co-spokesperson David Hertzog.
A representative sample from a 60-hour-long dataset (see figure) demonstrates precession-frequency modulation on top of an exponentially decaying muon population. The collaboration is now evaluating data samples and developing different and independent approaches to extract the precession frequency and minimise systematic uncertainties. E989 researchers are also working to evaluate the average magnetic field and important beam-dynamics parameters.
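At leading order, the time distribution behind such a “wiggle plot” is an exponential decay of the stored-muon population modulated at ωa, because the number of high-energy decay positrons tracks the muon spin direction. A toy sketch of that five-parameter model, with illustrative values rather than fitted E989 numbers:

```python
import numpy as np

TAU_MU  = 2.197e-6   # muon lifetime at rest, s
GAMMA   = 29.3       # Lorentz factor of stored muons at the magic momentum
OMEGA_A = 1.44e6     # anomalous precession angular frequency, rad/s (see above)

def wiggle(t, n0=1e5, asym=0.35, phi=2.0):
    """Expected counts of high-energy decay positrons per time bin: exponential
    decay of the stored muons (time-dilated lifetime gamma*tau) modulated at
    omega_a, because the decay asymmetry follows the muon spin direction.
    n0, asym and phi are illustrative, not measured values."""
    return n0 * np.exp(-t / (GAMMA * TAU_MU)) * (1 + asym * np.cos(OMEGA_A * t + phi))

t = np.arange(0.0, 300e-6, 0.149e-6)    # a ~300 microsecond fill in ~150 ns bins
expected = wiggle(t)                    # the smooth "wiggle" curve
observed = np.random.poisson(expected)  # Poisson-fluctuated toy data, as would be fitted
```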
In parallel, theorists are working hard on Standard Model calculations to reduce the uncertainties in the predicted value of aμ – in particular concerning hadronic corrections, which are the most challenging to evaluate due to the complexities of quantum chromodynamics (QCD). In June, Alexander Keshavarzi from the University of Liverpool, UK, and colleagues used electron–positron collision data to reevaluate the hadronic contribution to aμ, leading to the highest precision prediction so far. The following month, Thomas Blum of the University of Connecticut, US, and co-workers in the RBC and UKQCD collaborations reported a complete first-principles calculation of the leading-order hadronic contribution to aμ from lattice QCD and quantum electrodynamics, showing improved precision.
Physicists will have to wait a bit longer for E989 to release a first measurement of aμ, however. “Until we can closely examine the data quality – both precession data from detectors and field data from NMR probes – we are unable to predict the timetable,” says Hertzog. “Our aim is sometime in 2019, but we will unblind only after we are certain that the analysis is complete – so stay tuned.”
Einstein’s theory of gravity, general relativity, is known to work well on scales smaller than an individual galaxy. For example, the orbits of the planets in our solar system and the motion of stars around the centre of the Milky Way have been measured precisely and shown to follow the theory. But general relativity remains largely untested on larger length scales. This makes it hard to rule out alternative theories of gravity, which modify how gravity works over large distances to explain away mysterious cosmic substances such as dark matter. Now a precise test of general relativity on a galactic scale excludes some of these alternative theories.
Using data from the Hubble Space Telescope, a team led by Thomas Collett from the University of Portsmouth in the UK has found that a nearby galaxy dubbed ESO 325-G004 is surrounded by a ring-like structure known as an Einstein ring – a striking manifestation of gravitational lensing. As the light from a background object passes a foreground object, the gravity of the foreground object bends and magnifies the light of the background one into a ring. The ring system found by Collett’s group is therefore a perfect laboratory with which to test general relativity on galactic scales.
But it isn’t easy to make such a test, because the size and structure of the ring depend on several factors, including the distance of the background galaxy from Earth, and the distance, mass and shape of the foreground (lensing) galaxy. In previous tests the uncertainty on some of these factors resulted in large systematic errors in the modelling of the gravitational-lensing effect, allowing only weak constraints to be placed on alternative theories of gravity. Now Collett and colleagues’ discovery of an Einstein ring around a relatively close galaxy, ESO 325-G004, along with high-resolution observations of that same galaxy taken with the Multi Unit Spectroscopic Explorer (MUSE) on the European Southern Observatory (ESO) Very Large Telescope, has allowed the most precise test of general relativity outside the Milky Way.
The researchers derived the distances of the background galaxy and the lensing galaxy from measurements of their redshifts. Measuring the mass and the shape of the lensing galaxy is more complex, but was made possible here thanks to the MUSE observations that allowed the team to perform measurements of the motions of the stars that make up the galaxy relative to the galaxy’s centre. Since these motions are governed by the gravitational fields inside the galaxy, they can be used to indirectly measure the mass and shape of ESO 325-G004.
The team put all of these measurements together and determined the gravitational effect that ESO 325-G004 should have on the background galaxy’s light if general relativity holds true. The result, which technically tests the scale invariance of a parameter of general relativity called gamma, is in almost perfect agreement with general relativity, with an uncertainty of only 9%. Not only does it show that gravity behaves on galactic scales in the same way as it does in our solar system, it also disfavours alternative gravity models, in particular those that attempt to remove the need for dark energy.
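For a spherically symmetric lens, the ring radius follows from the Einstein-radius formula, with the light bending scaled by (1 + gamma)/2 relative to the general-relativistic value (gamma = 1 in general relativity). A numerical sketch with illustrative lens mass and distances, not the published ESO 325-G004 values:

```python
import math

G     = 6.674e-11  # m^3 kg^-1 s^-2
C     = 2.998e8    # m/s
M_SUN = 1.989e30   # kg
MPC   = 3.086e22   # m

def einstein_radius_arcsec(M, D_l, D_s, D_ls, gamma=1.0):
    """Einstein radius in arcseconds for a lens of mass M (kg) and
    angular-diameter distances to the lens (D_l), to the source (D_s) and
    between them (D_ls), all in metres; gamma = 1 is general relativity."""
    theta = math.sqrt((1 + gamma) / 2 * 4 * G * M / C**2 * D_ls / (D_l * D_s))
    return math.degrees(theta) * 3600

# illustrative values only: a ~1.5e11 solar-mass lens a few hundred Mpc away
M    = 1.5e11 * M_SUN
D_l  = 140 * MPC
D_s  = 600 * MPC
D_ls = 470 * MPC
print(einstein_radius_arcsec(M, D_l, D_s, D_ls))  # a few arcseconds
```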
On 28 June, the US Department of Energy and the Italian Embassy, on behalf of the Italian Ministry of Education, Universities and Research, signed a collaboration agreement concerning the international Short Baseline Neutrino (SBN) programme hosted at Fermilab. The SBN programme, started in 2015, comprises the development, installation and operation of three neutrino detectors on the Fermilab site: the Short Baseline Near Detector, located 110 m from the neutrino beam source; MicroBooNE, located 470 m from the source; and ICARUS, located 600 m from the source. ICARUS was refurbished at CERN last year after a long and productive scientific life at Gran Sasso National Laboratory.
The SBN programme aims to search for exotic and highly non-reactive sterile neutrinos and to resolve anomalies observed in previous experiments (CERN Courier June 2017 p25). Thanks to their different distances from the source, and because they all employ the same liquid-argon technology, the three SBN detectors will be able to distinguish whether their measurements are due to transformations between neutrino types involving a sterile neutrino or to other previously unknown neutrino interactions.
The signing of the SBN programme agreement is an addendum to a broader collaboration agreement on neutrino research that the US and Italy signed in 2015.
The Brout–Englert–Higgs mechanism solves the apparent theoretical impossibility of allowing weak vector bosons (the W and Z) to acquire mass. The discovery of the Higgs boson in 2012 via its decays into photons, Z and W pairs was therefore a triumph of the Standard Model (SM), which is built upon this mechanism. But the Higgs field is also predicted to provide mass to charged fermions (quarks and leptons) via “Yukawa couplings”, with interaction strengths proportional to the particle mass. The observation by ATLAS and CMS of the Higgs boson decaying into pairs of τ leptons provided the first direct evidence of this type of interaction and, since then, both experiments have confirmed the Yukawa coupling between the Higgs boson and the top quark.
Six years after the Higgs-boson discovery, ATLAS had directly observed decay modes accounting for only about 30% of the Higgs boson’s decays predicted by the SM. However, the favoured decay of the Higgs boson into a pair of b quarks, which is predicted to account for almost 60% of all its decays, had remained elusive until now. Observing this decay mode and measuring its rate is essential to confirm (or not) the mass generation for fermions via Yukawa interactions, as predicted in the SM.
At the 2018 International Conference on High Energy Physics (ICHEP) held in Seoul on July 4–11, ATLAS reported for the first time the observation of the Higgs boson decaying into pairs of b quarks at a rate consistent with the SM prediction. Evidence of the H→bb decay was earlier provided at the Tevatron in 2012, and one year ago by the ATLAS and CMS collaborations, independently. Given the abundance of H→bb decays, why did it take so long to achieve this observation?
The main reason is that the most copious production process for the Higgs boson in proton–proton collisions leads to a pair of particle jets originating from the fragmentation of b quarks (b-jets), and these are almost indistinguishable from the overwhelming background of b-quark pairs produced via the strong interaction. To overcome this challenge, it was necessary to consider production processes that are less copious, but exhibit features not present in strong interactions. The most effective of these is the associated production of the Higgs boson with a W or Z boson. The leptonic decays W→lν, Z→ll and Z→νν (where l stands for an electron or a muon) allow for efficient triggering and a powerful reduction of strong-interaction backgrounds.
However, the Higgs-boson signal remains orders of magnitude smaller than the remaining backgrounds arising from top-quark or vector-boson production, which can lead to similar signatures. One way to discriminate the signal from such backgrounds is to select on the mass, mbb, of pairs of b-jets identified by sophisticated b-tagging algorithms. When all WH and ZH channels are combined and the backgrounds (apart from WZ and ZZ production) subtracted from the data, the mbb distribution (figure, left) exhibits a clear peak arising from Z-boson decays to b-quark pairs, which validates the analysis procedure. The shoulder on the upper side is consistent in shape and rate with the expectation from Higgs-boson production.
Since this is not yet statistically sufficient to constitute an observation, the mass of the b-jet pair is combined with other kinematic variables that show distinct differences between the signal and the various backgrounds. This combination of multiple variables is performed using boosted decision trees for which a combination of all channels, reordered in terms of signal-to-background ratio, is shown in the right figure. The signal closely follows the distribution predicted by the SM with the presence of H→bb decays.
The analysis of 13 TeV data collected by ATLAS during Run 2 of the LHC between 2015 and 2017 leads to a significance of 4.9σ. This result was combined with those from a similar analysis of Run 1 data and from other searches by ATLAS for the H→bb decay mode, namely where the Higgs boson is produced in association with a top quark pair or via vector boson fusion. The significance achieved by this combination is 5.4σ, qualifying for observation.
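For a rough sense of how such significances behave, the discovery significance of a counting excess is often approximated by the Asimov formula, and independent channels combine approximately in quadrature. This is only a back-of-the-envelope illustration; the ATLAS result comes from a full profile-likelihood fit, and the event counts below are invented:

```python
import math

def asimov_z(s, b):
    """Approximate discovery significance for s expected signal events on top
    of b expected background events."""
    return math.sqrt(2 * ((s + b) * math.log(1 + s / b) - s))

# invented numbers: a small signal on a large background in one channel
print(asimov_z(200, 5000))         # ~2.8 sigma

# naive quadrature combination: e.g. 4.9 sigma (Run 2 VH analysis) together
# with roughly 2.3 sigma from the other H->bb searches gives about 5.4 sigma
print(math.sqrt(4.9**2 + 2.3**2))  # ~5.4
```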
Furthermore, combining the present analysis with others that target Higgs-boson decays to pairs of photons and Z bosons measured at 13 TeV yields the observation at 5.3σ of associated ZH or WH production, in agreement with the SM prediction. ATLAS has now observed all four primary Higgs-boson production modes at hadron colliders: fusion of gluons to a Higgs boson; fusion of weak bosons to a Higgs boson; associated production of a Higgs boson with two top quarks; and associated production of a Higgs boson with a weak boson. With these observations, a new era of detailed measurements in the Higgs sector opens up, through which the SM will be further challenged.
During its closed session on 14 June, the CERN Council decided, by consensus, the venues and dates for two key meetings concerning the upcoming update of the European strategy for particle physics. An open symposium, during which the high-energy physics community will be invited to debate scientific input into the strategy update, will take place in Granada, Spain, on 13–16 May 2019. The European strategy group’s drafting session will take place early the following year, on 20–24 January 2020, in Bad Honnef, Germany.
In addition, a special session organised by the European Committee for Future Accelerators on 14 July 2019, during the European Physical Society conference on high-energy physics in Ghent, Belgium, will provide a further opportunity for the community to feed into the drafting session (CERN Courier April 2018 p7).
Completing the Large Hadron Collider (LHC) in autumn 2008, a project involving a vast international collaboration and a ten-figure yet extremely tight budget, meant overcoming unprecedented obstacles. When the LHC project started in earnest in late 1994, many of the most important technologies, production methods and instruments necessary to build and operate a multi-TeV proton collider simply did not exist. CERN therefore had to navigate the risks of lowest-bidder economics, and balance the need for innovation and creativity against quality control and strict procurement procedures. The impact of long lead times for essential components and tooling, in addition to contingency for business failures, cost overruns and disputes, also had to be minimised.
Procurement for the LHC demanded a new philosophy, especially regarding the management of risk, to keep the LHC close to budget. Excluding personnel costs, the total amount charged to the CERN budget for the LHC was 4.3 billion Swiss francs, which includes: a share of R&D expenses; machine construction, tests and pre-operation; LHC computing; and a contribution to the cost of the detectors. Associated procurement activities covered everything from orders for a few tens of Swiss francs to contracts exceeding 100 million Swiss francs each, from purchases of a single unit to the series manufacturing of hundreds of thousands of components delivered over periods of several years. To give some figures, the construction of the LHC required: 1170 price enquiries and tender invitations to be issued; the negotiation, drafting and placing of 115,700 purchase orders and 1040 contracts; and the commitment of 6364 different suppliers and contractors, not including subcontractors.
Unconventional contracting
CERN’s organisational model also required LHC spending to take account of many national interests and to ensure a fair industrial return to Member States. In addition, CERN made special arrangements with a number of non-Member States for the handling of their respective additional contributions, part of which was provided in cash and part as in-kind deliverables. Procurement for the main components of the LHC fell into three distinct categories: civil engineering; superconducting magnets and their associated components; and cryogenics.
Although the main tunnel for the LHC already existed, the total value of necessary civil-engineering activities was around 500 million Swiss francs, requiring an unconventional division of tasks between CERN, consulting engineers and contractors (figure 1). The next major procurement task was to supply CERN with the LHC’s superconducting magnets, the contractual, technical and logistical challenges of which are difficult to exaggerate. The LHC contains some 1800 superconducting twin-aperture main dipole and quadrupole magnets, as well as their ancillary corrector magnets, all of which are very large and needed to be assembled with absolute precision. The total value of the magnets amounted to approximately 50% of the value of the whole LHC machine, with two thirds of this amount taken up by the dipoles alone (figure 2). CERN opted for an unusual policy to manufacture the LHC magnets, acting both as supplier and client to contractors, and the perils of this approach became apparent when one of the contractors unexpectedly became insolvent.
Problems also impacted the third major LHC procurement stage: the unprecedented cryogenics system required to cool the superconducting magnets to their 1.9 K operating temperature. A 27 kilometre-long helium distribution line called the QRL was designed to distribute the cryogenic cooling power to the LHC (figure 3), and, since several firms in CERN Member States were competent in such technology, CERN outsourced the task. But, by the spring of 2003, serious technical production problems, in addition to the insolvency of one of the subcontractors, forced CERN to take on a number of QRL tasks itself to keep the LHC on track.
At the end of 2018, the LHC will enter a two-year shutdown to prepare for the high-luminosity LHC (HL-LHC) upgrade, which aims to increase the total amount of data collected by the LHC by a factor of 10 and enable the machine to operate into the 2030s. Following five years of design study and R&D, the HL-LHC project was formally approved by the CERN Council in June 2016 with a budget of 950 million Swiss francs (excluding the upgrade of injectors and experiments). Tendering for civil engineering and for construction and testing of the main hardware components has started, and some of the contracts are in place. A total of around 90 invitations to tender and price enquiries have been issued, and orders and contracts for some 131 million Swiss francs have already been placed. In June, a groundbreaking ceremony at CERN marked the beginning of HL-LHC civil engineering.
From a procurement point of view, the HL-LHC is a very different beast to the LHC. First, despite the relatively large total project cost, the production volumes of components required for HL-LHC are much smaller. Hence, although the HL-LHC will rely on a number of key innovative technologies to modify the most complex and critical parts of the LHC (see box), these concern just 1.2 km of the total machine’s 27 km circumference. Second, the HL-LHC project is being executed roughly two decades later, in a totally different technological and industrial landscape.
A key factor in much of CERN’s procurement activity is that each new accelerator or upgrade brings more challenging requirements and performance targets, pushing industry to its limits. In the case of the LHC, the large production volumes were an incentive for potential suppliers to invest time and resources, but this is not always the case with the much smaller volumes of the HL-LHC. Sometimes the market is simply not willing to invest the time and money required because the perceived market is too small, which can lead to CERN designing its own prototypes or working alongside industry for many years to ensure that firms build the necessary competence and skills. Whereas in the days of LHC procurement companies were more willing to take a long-term view, today many companies’ objectives are based on shorter-term results.
This makes it increasingly important for CERN to convey the many other benefits of collaborating on projects such as the HL-LHC. Not only is there kudos to be gained by being associated with projects at the limits of technology, but there are clear commercial pay-offs. A study related to LHC procurement and contracting, published in 2003, demonstrated clear benefits to CERN suppliers: some 38% had developed new products, 44% had improved technological learning, 60% had acquired new customers thanks to the CERN contracts, and all firms questioned had derived great value from using CERN as a marketing reference. A more recent study, a cost–benefit analysis of the LHC and its upgrade being conducted by economists at the University of Milan, provides evidence of a positive and statistically significant correlation between LHC procurement and supplier R&D effort, innovation capacity, productivity and economic performance (see “LHC upgrade brings benefits beyond physics”).
The success of any major big-science project depends on the quality and competence of the suppliers and contractors. There is no “one-size-fits-all” solution in procurement for different requirements and, if a strategy does not work as planned because of unforeseeable conditions, it must be changed. The 36-strong CERN procurement team maintains a supplier database and organises and attends industry events to connect with businesses. It also uses national industrial liaison officers to help find suitable companies in those countries and reaches out to other research labs, all while involving engineers and physicists in the search for new potential suppliers. In the end, the realisation of major international projects such as the LHC and HL-LHC is all about good teamwork between the people responsible for the various activities within the host facility and their suppliers and contractors.
Parts of this article were drawn from the recently republished book The Large Hadron Collider: A Marvel of Technology, edited by L Evans.
CERN is a unique international research infrastructure whose societal impacts go well beyond advancing knowledge in high-energy physics. These do not just include technological spillovers and benefits to industry, or unique inventions such as the World Wide Web, but also the training of skilled individuals and wider cultural effects. The scale of modern particle-physics research is such that single projects, such as the Large Hadron Collider (LHC) at CERN, offer an opportunity to weigh up the returns on public investment in fundamental science.
Recently, the European Commission (EC) introduced requirements for large research infrastructures to estimate their socioeconomic impact. A quantitative estimate can be obtained via a social cost–benefit analysis (CBA), a well-established methodology in economics. Successfully passing a social CBA test is required for co-financing major projects with the European Regional Development Fund and the Cohesion Fund. The EC’s Horizon 2020 programme also specifically mentions that the preparatory phase of new projects that are members of the European Strategy Forum on Research Infrastructures (ESFRI) should include a social CBA.
Against this background, our team at the University of Milan in Italy was invited by CERN’s Future Circular Collider (FCC) study to carry out a social CBA of the high-luminosity LHC (HL-LHC) upgrade project, also preparing the ground for further analysis of larger, post-LHC projects. Involving three years of work and extending an initial study concerning the LHC carried out between 2014 and 2016, the report assesses the HL-LHC’s economic costs and benefits until 2038, once the machine has ceased operations. Here, we summarise the main findings of our analysis, which also includes the most comprehensive survey to date concerning the public’s willingness to pay for CERN investment projects.
Estimating value
Since the aim of the HL-LHC project is to extend the discovery potential of the LHC after 2025, it is also expected to prolong the LHC’s impact on society. To evaluate such an effect, we require a CBA model that estimates the expected net present value (NPV) of a project at the end of a defined observation period. The NPV is calculated from the net flow of discounted benefits generated by the investment. Uncertainty in the estimation of costs and benefits is tackled with Monte Carlo simulations based on probabilities attached to the variables underlying the analysis. For the HL-LHC, the relevant benefits were taken to be: the value of training for early-stage researchers; technological or industrial spillovers to industry; cultural effects for the public; academic publications for scientists; and the public-good value for citizens (figure 1). A research infrastructure passes the CBA test when, over time, the cumulative benefits exceed its costs to society, i.e. when the expected NPV is greater than zero. By design, a CBA does not attempt to value the scientific discoveries and results themselves; its aim is to quantify the additional benefits that flow from this type of investment.
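A minimal sketch of such a probabilistic NPV calculation: each Monte Carlo trial draws annual costs and benefits from assumed distributions, discounts them to present value and sums them, and the resulting NPV distribution gives both an expected value and the probability that benefits exceed costs. The discount rate and distributions below are placeholders, not the values used in the study:

```python
import numpy as np

rng = np.random.default_rng(1)

N_TRIALS = 100_000
YEARS    = np.arange(2018, 2039)   # observation period up to 2038
R        = 0.03                    # assumed social discount rate (placeholder)
DISCOUNT = 1.0 / (1 + R) ** (YEARS - YEARS[0])

def simulate_npv():
    """One Monte Carlo trial: draw annual costs and benefits (placeholder
    distributions, in MCHF/year), discount them and return the NPV."""
    costs    = rng.normal(loc=150, scale=30, size=YEARS.size)
    benefits = rng.lognormal(mean=np.log(180), sigma=0.4, size=YEARS.size)
    return np.sum((benefits - costs) * DISCOUNT)

npv = np.array([simulate_npv() for _ in range(N_TRIALS)])
print("expected NPV (MCHF):", npv.mean())
print("P(NPV > 0):", (npv > 0).mean())
```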
Two scenarios were considered: a baseline scenario with the HL-LHC upgrade and a counterfactual scenario that includes the operation of the LHC until the end of its life without the upgrade. In both scenarios, the total costs include past and future expenditures attributed to the LHC accelerator complex and by the four main LHC experiment collaborations: ATLAS, CMS, LHCb and ALICE. The difference between the total cost (which includes capital and operational expenditures) in the two scenarios is about 2.9 billion Swiss francs.
HL-LHC benefits
For the HL-LHC, one of the most significant benefits, representing at least a third of the total, was the value of training for early-stage researchers (figure 2). It was shown that the 2038 cohort of early-stage researchers will enjoy a “salary premium” due to their experience at the HL-LHC or LHC until 2080, as confirmed by surveys of students, former students and more than 330 team leaders.
The economic benefit from industrial spillovers and from software and communication technologies is another major factor, together representing 40% of the project’s total benefits. Software and communication technology represents 24% of the total benefits in this category, while the rest comes from the additional profits for high-tech companies involved in the HL-LHC (figure 3). We looked at the value of high-tech procurement contracts for the HL-LHC, drawing on three different empirical analyses: an econometric study of company accounts over the long term, before and after the first contract with CERN; a survey of more than 650 CERN suppliers; and 28 case studies. In the case of the HL-LHC, incremental profits for firms represent 16% of the total benefits from sales to customers other than CERN, and this percentage increases to 29% if we consider the difference between the HL-LHC and the counterfactual scenario of no HL-LHC upgrade.
CERN and society
Cultural effects, while uncertain because they depend on future announcements of discoveries and communication strategies, were estimated to contribute 13% to the total HL-LHC benefits. More than half of this comes from onsite visitors to CERN and its travelling exhibitions.
Contributing just 2% of the total benefits in the HL-LHC scenario, scientific publications (relating to their quantity and citations, not their contents) represent the smallest overall socioeconomic benefit category. This is expected given the relatively small size of the high-energy physics community compared to other social groups.
The public-good value of HL-LHC, estimated to be 12% of the total, was inferred from a survey of taxpayers’ willingness to pay for a research infrastructure such as CERN. A first estimate was carried out in our assessment of the LHC benefits published in 2016, but recently we have refined this estimate based on an extensive survey in one of CERN’s two host states, France (see box). A similar survey is planned in CERN’s other host state, Switzerland.
Taking all this into account, including the uncertainties in critical variables and relying on Monte Carlo simulations to estimate the probabilities of costs, benefits and the NPV of the project, our analysis showed that the HL-LHC has a clear, quantifiable economic benefit for society (figure 4). Overall, the ratio between the incremental benefits and incremental costs of the HL-LHC with respect to the continued operation of the LHC under normal consolidation (i.e. without high-luminosity upgrade) is 1.8. This means that each Swiss franc invested in the HL-LHC upgrade project pays back approximately 1.8 Swiss francs in societal benefits, mainly stemming from the value of the skills acquired by students and postdocs, and from industrial spillovers. The study is also based on very conservative assumptions about the potential benefits.
What conclusions should CERN draw from this analysis? First, given that the benefits to early-stage researchers are the single most important societal benefit, CERN could invest more in activities facilitating the transition to the international job market. Similarly, cooperative relations with suppliers of technology and the development of innovative software, data storage, networking and computing solutions are strategic levers that CERN could use to boost its social benefits. Finally, cultural effects, especially those related to onsite visitors and social media, have great potential for generating societal benefits, hence outreach and communication strategies are important.
There are also lessons regarding CERN’s investments in future particle accelerators. The HL-LHC project yields significant socio-economic value, well in excess of its costs and in addition to its scientific output. Extrapolating these results, it can be expected that future colliders at CERN, like those considered by the FCC study, would bring the same kind of social benefits, but on a bigger scale. Further research is needed on the socio-economic impact of new long-term investment scenarios.
On 28 June, 200 servers from the CERN computing centre were donated to Kathmandu University (KU) in Nepal. The equipment, which is no longer needed by CERN, will contribute towards a new high-performance computing facility for research and educational purposes.
With more than 15,000 students across seven schools, KU is the second largest university in Nepal. But its infrastructure and resources for carrying out research are still minimal compared with universities of similar size in Europe and the US. For example, the KU school of medicine is forced to periodically delete medical imaging data because disk storage is at a premium, undermining the value of the data for preventative screening of diseases or for population-health studies. Similarly, R&D projects in the schools of science and engineering fulfil their needs by borrowing computing time abroad, either through online data transfer, which is limited by the available bandwidth, or by physically taking data tapes to institutes abroad for analysis.
“We cannot emphasise enough the need for a high-performance computing facility at KU, and, speaking of the larger national context, in Nepal,” says Rajendra Adhikari, an assistant professor of physics at KU. “The server donation from CERN to KU will have a historically significant impact in fundamental research and development at KU and in Nepal.”
A total of 184 CPU servers and 16 disk servers, in addition to 12 network switches, were shipped from CERN to KU. The CPU servers’ capacity represents more than 2500 processor cores and 8 TB of memory, while the disk servers will provide more than 700 TB of storage. The total computing capacity is equivalent to more than 2000 typical desktop computers.
Since 2012, CERN has regularly donated computing equipment that no longer meets its highly specific requirements but is still more than adequate for less exacting environments. To date, a total of 2079 servers and 123 network switches have been donated to countries and international organisations, namely Algeria, Bulgaria, Ecuador, Egypt, Ghana, Mexico, Morocco, Pakistan, the Philippines, Senegal, Serbia, the SESAME laboratory in Jordan, and now Nepal. In the process leading up to the KU donation, the government of Nepal and CERN signed an International Cooperation Agreement to formalise their relationship (CERN Courier October 2017 p28).
“It is our hope that the server handover is one of the first steps of scientific partnership. We are committed to accelerate the local research programme, and to collaborate with CERN and its experiments in the near future,” says Adhikari.