
ATLAS observes Higgs-boson decay to b quarks

A report from the ATLAS experiment

The Brout–Englert–Higgs mechanism solves the apparent theoretical impossibility of allowing weak vector bosons (the W and Z) to acquire mass. The discovery of the Higgs boson in 2012 via its decays into photons, Z and W pairs was therefore a triumph of the Standard Model (SM), which is built upon this mechanism. But the Higgs field is also predicted to provide mass to charged fermions (quarks and leptons) via “Yukawa couplings”, with interaction strengths proportional to the particle mass. The observation by ATLAS and CMS of the Higgs boson decaying into pairs of τ leptons provided the first direct evidence of this type of interaction and, since then, both experiments have confirmed the Yukawa coupling between the Higgs boson and the top quark.
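
As a reminder of the underlying structure (standard textbook notation, not taken from the ATLAS report), the Yukawa interaction for a fermion f can be written schematically as

\[
\mathcal{L}_{\rm Yukawa} \supset -\frac{y_f}{\sqrt{2}}\,(v + h)\,\bar{f}f , \qquad m_f = \frac{y_f\, v}{\sqrt{2}} ,
\]

so the coupling of the physical Higgs boson h to a fermion pair is m_f/v, i.e. proportional to the fermion mass, with v ≈ 246 GeV the vacuum expectation value of the Higgs field.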

Six years after the Higgs-boson discovery, ATLAS had observed decay modes accounting for only about 30% of the Higgs-boson decays predicted by the SM. However, the favoured decay of the Higgs boson into a pair of b quarks, which is predicted to account for almost 60% of all possible decays, had remained elusive until now. Observing this decay mode and measuring its rate is mandatory to confirm (or not) the mass generation for fermions via Yukawa interactions, as predicted in the SM.

At the 2018 International Conference on High Energy Physics (ICHEP) held in Seoul on July 4–11, ATLAS reported for the first time the observation of the Higgs boson decaying into pairs of b quarks at a rate consistent with the SM prediction. Evidence of the H→bb decay was earlier provided at the Tevatron in 2012, and one year ago by the ATLAS and CMS collaborations, independently. Given the abundance of H→bb decays, why did it take so long to achieve this observation?

The main reason is that the most copious production process for the Higgs boson in proton–proton collisions leads to a pair of particle jets originating from the fragmentation of b quarks (b-jets), and these are almost indistinguishable from the overwhelming background of b-quark pairs produced via the strong interaction. To overcome this challenge, it was necessary to consider production processes that are less copious, but exhibit features not present in strong interactions. The most effective of these is the associated production of the Higgs boson with a W or Z boson. The leptonic decays W→lν, Z→ll and Z→νν (where l stands for an electron or a muon) allow for efficient triggering and a powerful reduction of strong-interaction backgrounds.

Results

However, the Higgs-boson signal remains orders of magnitude smaller than the remaining backgrounds arising from top-quark or vector-boson production, which can lead to similar signatures. One way to discriminate the signal from such backgrounds is to select on the mass, m_bb, of pairs of b-jets identified by sophisticated b-tagging algorithms. When all WH and ZH channels are combined and the backgrounds (apart from WZ and ZZ production) subtracted from the data, the m_bb distribution (figure, left) exhibits a clear peak arising from Z-boson decays to b-quark pairs, which validates the analysis procedure. The shoulder on the upper side is consistent in shape and rate with the expectation from Higgs-boson production.

Since this is not yet statistically sufficient to constitute an observation, the mass of the b-jet pair is combined with other kinematic variables that show distinct differences between the signal and the various backgrounds. This combination of multiple variables is performed using boosted decision trees; the resulting discriminant, combining all channels and with bins ordered by signal-to-background ratio, is shown in the right figure. The signal closely follows the distribution predicted by the SM with the presence of H→bb decays.
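
Purely as an illustration of the boosted-decision-tree technique mentioned above (this is not the ATLAS analysis code; the input variables, toy dataset and settings below are hypothetical), a discriminant combining the di-jet mass with other kinematic variables could be trained along these lines:

```python
# Illustrative sketch of a BDT discriminant combining kinematic variables,
# assuming scikit-learn is available; inputs and variables are hypothetical.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Toy "events": columns stand in for variables such as m_bb, pT(V), dR(b,b), ...
n_sig, n_bkg = 5000, 50000
X_sig = rng.normal(loc=[125.0, 150.0, 1.0], scale=[15.0, 60.0, 0.4], size=(n_sig, 3))
X_bkg = rng.normal(loc=[90.0, 100.0, 1.8], scale=[40.0, 70.0, 0.8], size=(n_bkg, 3))
X = np.vstack([X_sig, X_bkg])
y = np.concatenate([np.ones(n_sig), np.zeros(n_bkg)])

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0)

# Boosted decision trees: an ensemble of shallow trees built sequentially
bdt = GradientBoostingClassifier(n_estimators=200, max_depth=3, learning_rate=0.1)
bdt.fit(X_train, y_train)

# The BDT output is a single discriminant per event; an analysis then bins
# events by this score (or by signal-to-background ratio) before the fit.
scores = bdt.predict_proba(X_test)[:, 1]
print("mean score, signal-like events:    ", scores[y_test == 1].mean())
print("mean score, background-like events:", scores[y_test == 0].mean())
```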

The analysis of 13 TeV data collected by ATLAS during Run 2 of the LHC between 2015 and 2017 leads to a significance of 4.9σ. This result was combined with those from a similar analysis of Run 1 data and from other searches by ATLAS for the H→bb decay mode, namely where the Higgs boson is produced in association with a top-quark pair or via vector-boson fusion. The significance achieved by this combination is 5.4σ, qualifying for observation.

Furthermore, combining the present analysis with others that target Higgs-boson decays to pairs of photons and Z bosons measured at 13 TeV yields the observation at 5.3σ of associated ZH or WH production, in agreement with the SM prediction. ATLAS has now observed all four primary Higgs-boson production modes at hadron colliders: fusion of gluons to a Higgs boson; fusion of weak bosons to a Higgs boson; associated production of a Higgs boson with two top quarks; and associated production of a Higgs boson with a weak boson. With these observations, a new era of detailed measurements in the Higgs sector opens up, through which the SM will be further challenged.

Dates fixed for strategy update

During its closed session on 14 June, the CERN Council decided, by consensus, the venues and dates for two key meetings concerning the upcoming update of the European strategy for particle physics. An open symposium, during which the high-energy physics community will be invited to debate scientific input into the strategy update, will take place in Granada, Spain, on 13–16 May 2019. The European strategy group’s drafting session will take place early the following year, on 20–24 January 2020, in Bad Honnef, Germany.

In addition, a special session organised by the European Committee for Future Accelerators on 14 July 2019, during the European Physical Society conference on high-energy physics in Ghent, Belgium, will provide a further opportunity for the community to feed into the drafting session (CERN Courier April 2018 p7).

Procurement at the forefront of technology

Prototype quadrupole magnet

The completion of the Large Hadron Collider (LHC) in autumn 2008, involving a vast international collaboration and a ten-figure – yet extremely tight – budget, presented unprecedented obstacles. When the LHC project started in earnest in late 1994, many of the most important technologies, production methods and instruments necessary to build and operate a multi-TeV proton collider simply did not exist. CERN therefore had to navigate the risks of lowest-bidder economics, and balance the need for innovation and creativity versus quality control and strict procurement procedures. The impact of long lead times for essential components and tooling, in addition to contingency for business failures, cost overruns and disputes, also had to be minimised.

Procurement for the LHC demanded a new philosophy, especially regarding the management of risk, to keep the LHC close to budget. Excluding personnel costs, the total amount charged to the CERN budget for the LHC was 4.3 billion Swiss francs, which includes: a share of R&D expenses; machine construction, tests and pre-operation; LHC computing; and a contribution to the cost of the detectors. Associated procurement activities covered everything from orders for a few tens of Swiss francs to contracts exceeding 100 million Swiss francs each, from purchases of a single unit to the series manufacturing of hundreds of thousands of components delivered over periods of several years. To give some figures, the construction of the LHC required: 1170 price enquiries and tender invitations to be issued; the negotiation, drafting and placing of 115,700 purchase orders and 1040 contracts; and the commitment of 6364 different suppliers and contractors, not including subcontractors.

Unconventional contracting

CERN’s organisational model also required LHC spending to take account of many national interests and to ensure a fair industrial return to Member States. In addition, CERN made special arrangements with a number of non-Member States for the handling of their respective additional contributions, part of which was provided in cash and part as in-kind deliverables. Procurement for the main components of the LHC fell into three distinct categories: civil engineering; superconducting magnets and their associated components; and cryogenics.

Fig. 1.

Although the main tunnel for the LHC already existed, the total value of necessary civil-engineering activities was around 500 million Swiss francs, requiring an unconventional division of tasks between CERN, consulting engineers and contractors (figure 1). The next major procurement task was to supply CERN with the LHC’s superconducting magnets, the contractual, technical and logistical challenges of which are difficult to exaggerate. The LHC contains some 1800 superconducting twin-aperture main dipole and quadrupole magnets, as well as their ancillary corrector magnets, all of which are very large and needed to be assembled with absolute precision. The total value of the magnets amounted to approximately 50% of the value of the whole LHC machine, with two thirds of this amount taken up by the dipoles alone (figure 2). CERN opted for an unusual policy to manufacture the LHC magnets, acting both as supplier and client to contractors, and the perils of this approach became apparent when one of the contractors unexpectedly became insolvent.

Fig. 2.

Problems also impacted the third major LHC procurement stage: the unprecedented cryogenics system required to cool the superconducting magnets to their 1.9 K operating temperature. A 27 kilometre-long helium distribution line called the QRL was designed to distribute the cryogenic cooling power to the LHC (figure 3), and, since several firms in CERN Member States were competent in such technology, CERN outsourced the task. But, by the spring of 2003, serious technical production problems, in addition to the insolvency of one of the subcontractors, forced CERN to take on a number of QRL tasks itself to keep the LHC on track.

Fig. 3.

At the end of 2018, the LHC will enter a two-year shutdown to prepare for the high-luminosity LHC (HL-LHC) upgrade, which aims to increase the total amount of data collected by the LHC by a factor of 10 and enable the machine to operate into the 2030s. Following five years of design study and R&D, the HL-LHC project was formally approved by the CERN Council in June 2016 with a budget of 950 million Swiss francs (excluding the upgrade of injectors and experiments). Tendering for civil engineering and for construction and testing of the main hardware components has started, and some of the contracts are in place. A total of around 90 invitations to tender and price enquiries have been issued, and orders and contracts for some 131 million Swiss francs have already been placed. In June, a groundbreaking ceremony at CERN marked the beginning of HL-LHC civil engineering.

From a procurement point of view, the HL-LHC is a very different beast to the LHC. First, despite the relatively large total project cost, the production volumes of components required for the HL-LHC are much smaller. Hence, although the HL-LHC will rely on a number of key innovative technologies to modify the most complex and critical parts of the LHC (see box), these concern just 1.2 km of the machine’s total 27 km circumference. Second, the HL-LHC project is being executed roughly two decades later, in a totally different technological and industrial landscape.

A key factor in much of CERN’s procurement activities is that each new accelerator or upgrade brings more challenging requirements and performance targets, pushing industry to its limits. In the case of the LHC, the large production volumes were an incentive for potential suppliers to invest time and resources, but this is not always the case with the much smaller volumes of the HL-LHC. Sometimes the market is simply not willing to invest the time and money required because the perceived market is too small, which can lead to CERN designing its own prototypes or working alongside industry for many years to ensure that firms build the necessary competence and skills. Whereas in the days of LHC procurement, companies were more willing to take a long-term view, today many companies’ objectives are based on shorter-term results.

This makes it increasingly important for CERN to convey the many other benefits of collaborating on projects such as the HL-LHC. Not only is there kudos to be gained by being associated with projects at the limits of technology, but there are clear commercial pay-offs. A study related to LHC procurement and contracting, published in 2003, demonstrated clear benefits to CERN suppliers: some 38% had developed new products, 44% had improved technological learning, 60% had acquired new customers thanks to the CERN contracts, and all firms questioned had derived great value from using CERN as a marketing reference. A more recent cost–benefit analysis of the LHC and its upgrade is being conducted by economists at the University of Milan, and is providing evidence of a positive and statistically significant correlation between LHC procurement and supplier R&D effort, innovation capacity, productivity and economic performance (see “LHC upgrade brings benefits beyond physics”).

The success of any major big-science project depends on the quality and competence of its suppliers and contractors. There is no “one-size-fits-all” solution in procurement for different requirements and, if a strategy does not work as planned because of unforeseeable conditions, it must be changed. The 36-strong CERN procurement team maintains a supplier database and organises and attends industry events to connect with businesses. It also works with national industrial liaison officers to help find suitable companies in their respective countries and reaches out to other research labs, all while involving engineers and physicists in the search for new potential suppliers. In the end, the realisation of major international projects such as the LHC and HL-LHC is all about good teamwork between the people responsible for the various activities within the host facility and their suppliers and contractors.

Parts of this article were drawn from the recently republished book The Large Hadron Collider: A Marvel of Technology, edited by L Evans.

LHC upgrade brings benefits beyond physics

Summer students

CERN is a unique international research infrastructure whose societal impacts go well beyond advancing knowledge in high-energy physics. These do not just include technological spillovers and benefits to industry, or unique inventions such as the World Wide Web, but also the training of skilled individuals and wider cultural effects. The scale of modern particle-physics research is such that single projects, such as the Large Hadron Collider (LHC) at CERN, offer an opportunity to weigh up the returns on public investment in fundamental science.

Recently, the European Commission (EC) introduced requirements for large research infrastructures to estimate their socioeconomic impact. A quantitative estimate can be obtained via a social cost–benefit analysis (CBA), a well-established methodology in economics. Successfully passing a social CBA test is required for co-financing major projects with the European Regional Development Fund and the Cohesion Fund. The EC’s Horizon 2020 programme also specifically mentions that the preparatory phase of new projects that are members of the European Strategy Forum on Research Infrastructures (ESFRI) should include a social CBA.

Fig. 1.

Against this background, our team at the University of Milan in Italy was invited by CERN’s Future Circular Collider (FCC) study to carry out a social CBA of the high-luminosity LHC (HL-LHC) upgrade project, also preparing the ground for further analysis of larger, post-LHC projects. Involving three years of work and extending an initial study concerning the LHC carried out between 2014 and 2016, the report assesses the HL-LHC’s economic costs and benefits until 2038, once the machine has ceased operations. Here, we summarise the main findings of our analysis, which also includes the most comprehensive survey to date concerning the public’s willingness to pay for CERN investment projects.

Estimating value

Since the aim of the HL-LHC project is to extend the discovery potential of the LHC after 2025, it is also expected to prolong its impact on society. To evaluate such an effect, we require a CBA model that estimates the expected net present value (NPV) of a project at the end of a defined observation period. The NPV is calculated from the net flow of discounted benefits generated by the investment. Uncertainty surrounding the estimation of costs and benefits is tackled with Monte Carlo simulations based on probabilities attached to the variables underlying the analysis. For the HL-LHC, the relevant benefits were taken to be: the value of training for early-stage researchers; technological or industrial spillovers to industry; cultural effects for the public; academic publications for scientists; and the public-good value for citizens (figure 1). A research infrastructure passes the CBA test when, over time, the cumulated benefits exceed its costs for society, i.e. when the expected NPV is greater than zero. By design, a CBA does not assign a value to scientific discoveries and results themselves; the aim of such studies is to quantify the additional benefits that flow from this type of investment.
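
In formula terms, the analysis computes the expected value of NPV = Σ_t (B_t − C_t)/(1 + r)^t over many simulated realisations of the yearly benefit and cost streams B_t and C_t. A minimal sketch of such a calculation, with purely illustrative numbers and distributions (the actual study uses detailed, empirically calibrated inputs), might look as follows:

```python
# Minimal Monte Carlo sketch of an expected-NPV calculation for a cost-benefit
# analysis; all numbers and distributions below are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(42)

years = np.arange(2018, 2039)      # observation period up to 2038
t = years - years[0]               # years from the start of the appraisal
discount_rate = 0.03               # social discount rate (assumed)
n_draws = 100_000                  # Monte Carlo samples

# Uncertain yearly costs and benefits (million CHF), drawn per simulation
costs = rng.normal(loc=140.0, scale=30.0, size=(n_draws, len(years)))
benefits = rng.lognormal(mean=np.log(160.0), sigma=0.35, size=(n_draws, len(years)))

discount = (1.0 + discount_rate) ** (-t)           # discount factor per year
npv = ((benefits - costs) * discount).sum(axis=1)  # one NPV per simulation

print(f"expected NPV: {npv.mean():.0f} MCHF")
print(f"P(NPV > 0):   {(npv > 0).mean():.2f}")     # probability of passing the CBA test
```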

Fig. 2.

Two scenarios were considered: a baseline scenario with the HL-LHC upgrade and a counterfactual scenario that includes the operation of the LHC until the end of its life without the upgrade. In both scenarios, the total costs include past and future expenditures attributed to the LHC accelerator complex and by the four main LHC experiment collaborations: ATLAS, CMS, LHCb and ALICE. The difference between the total cost (which includes capital and operational expenditures) in the two scenarios is about 2.9 billion Swiss francs.

HL-LHC benefits

For the HL-LHC, one of the most significant benefits, representing at least a third of the total, was the value of training for early-stage researchers (figure 2). It was shown that the 2038 cohort of early-stage researchers will enjoy a “salary premium”, thanks to their experience at the HL-LHC or LHC, until 2080, as confirmed by surveys of students, former students and more than 330 team leaders.

Fig. 3.

The economic benefit from industrial spillovers and from software and communication technologies is another major factor, together representing 40% of the project’s total benefits. Software and communication technology represents 24% of the total benefits in this category, while the rest comes from the additional profits for high-tech companies involved in the HL-LHC (figure 3). We looked at the value of high-tech procurement contracts for the HL-LHC, drawing on three different empirical analyses: an econometric study of company accounts over the long term, before and after the first contract with CERN; a survey of more than 650 CERN suppliers; and 28 case studies. In the case of the HL-LHC, incremental profits for firms from sales to customers other than CERN represent 16% of the total benefits, and this percentage increases to 29% if we consider the difference between the HL-LHC and the counterfactual scenario of no HL-LHC upgrade.

CERN and society

Cultural effects, while uncertain because they depend on future announcements of discoveries and communication strategies, were estimated to contribute 13% to the total HL-LHC benefits. More than half of this comes from onsite visitors to CERN and its travelling exhibitions.

Contributing just 2% of the total benefits in the HL-LHC scenario, scientific publications (relating to their quantity and citations, not their contents) represent the smallest overall socioeconomic benefit category. This is expected given the relatively small size of the high-energy physics community compared to other social groups.

The public-good value of HL-LHC, estimated to be 12% of the total, was inferred from a survey of taxpayers’ willingness to pay for a research infrastructure such as CERN. A first estimate was carried out in our assessment of the LHC benefits published in 2016, but recently we have refined this estimate based on an extensive survey in one of CERN’s two host states, France (see box). A similar survey is planned in CERN’s other host state, Switzerland.

Fig. 4.

Taking all this into account, including the uncertainties in critical variables and relying on Monte Carlo simulations to estimate the probabilities of costs, benefits and the NPV of the project, our analysis showed that the HL-LHC has a clear, quantifiable economic benefit for society (figure 4). Overall, the ratio between the incremental benefits and incremental costs of the HL-LHC with respect to the continued operation of the LHC under normal consolidation (i.e. without high-luminosity upgrade) is 1.8. This means that each Swiss franc invested in the HL-LHC upgrade project pays back approximately 1.8 Swiss francs in societal benefits, mainly stemming from the value of the skills acquired by students and postdocs, and from industrial spillovers. The study is also based on very conservative assumptions about the potential benefits.

What conclusions should CERN draw from this analysis? First, given that the benefits to early-stage researchers are the single most important societal benefit, CERN could invest more in activities facilitating the transition to the international job market. Similarly, cooperative relations with suppliers of technology and the development of innovative software, data storage, networking and computing solutions are strategic levers that CERN could use to boost its social benefits. Finally, cultural effects, especially those related to onsite visitors and social media, have great potential for generating societal benefits, hence outreach and communication strategies are important.

Exhibition

There are also lessons regarding CERN’s investments in future particle accelerators. The HL-LHC project yields significant socio-economic value, well in excess of its costs and in addition to its scientific output. Extrapolating these results, it can be expected that future colliders at CERN, like those considered by the FCC study, would bring the same kind of social benefits, but on a bigger scale. Further research is needed on the socio-economic impact of new long-term investment scenarios.

Boosting high-performance computing in Nepal

Computing equipment

On 28 June, 200 servers from the CERN computing centre were donated to Kathmandu University (KU) in Nepal. The equipment, which is no longer needed by CERN, will contribute towards a new high-performance computing facility for research and educational purposes.

With more than 15,000 students across seven schools, KU is the second largest university in Nepal. But its infrastructure and resources for carrying out research are still minimal compared to universities of similar size in Europe and the US. For example, the KU school of medicine is forced to periodically delete medical imaging data because disk storage is at a premium, undermining the value of the data for preventative screening of diseases or for population health studies. Similarly, R&D projects in the schools of science and engineering fulfil their needs by borrowing computing time abroad, either through online data transfer, which is limited by the available bandwidth, or by physically taking data tapes to institutes abroad for analysis.

“We cannot emphasise enough the need for a high-performance computing facility at KU, and, speaking of the larger national context, in Nepal,” says Rajendra Adhikari, an assistant professor of physics at KU. “The server donation from CERN to KU will have a historically significant impact in fundamental research and development at KU and in Nepal.”

A total of 184 CPU servers and 16 disk servers, in addition to 12 network switches, were shipped from CERN to KU. The CPU servers’ capacity represents more than 2500 processor cores and 8 TB of memory, while the disk servers will provide more than 700 TB of storage. The total computing capacity is equivalent to more than 2000 typical desktop computers.

Since 2012, CERN has regularly donated computing equipment that no longer meets its highly specific requirements but is still more than adequate for less exacting environments. To date, a total of 2079 servers and 123 network switches have been donated to countries and international organisations, namely Algeria, Bulgaria, Ecuador, Egypt, Ghana, Mexico, Morocco, Pakistan, the Philippines, Senegal, Serbia, the SESAME laboratory in Jordan, and now Nepal. In the process leading up to the KU donation, the government of Nepal and CERN signed an International Cooperation Agreement to formalise their relationship (CERN Courier October 2017 p28).

“It is our hope that the server handover is one of the first steps of scientific partnership. We are committed to accelerate the local research programme, and to collaborate with CERN and its experiments in the near future,” says Adhikari.

Search for WISPs gains momentum

MADMAX

Understanding the nature of dark matter is one of the most pressing problems in physics. This strangely nonreactive material is estimated, from astronomical observations, to make up 85% of all matter in the universe. The known particles of the Standard Model (SM) of particle physics, on the other hand, account for a paltry 15%.

Physicists have proposed many dark-matter candidates. Two in particular stand out because they arise in extensions of the SM that solve other fundamental puzzles, and because there are a variety of experimental opportunities to search for them. The first is the neutralino, which is the lightest supersymmetric partner of the SM neutral bosons. The second is the axion, postulated 40 years ago to solve the strong CP problem in quantum chromodynamics (QCD). While the neutralino belongs to the category of weakly interacting massive particles (WIMPs), the axion is the prime example of a very weakly interacting sub-eV particle (WISP).

Neutralinos as WIMPs have dominated the search for cold dark matter since the mid-1980s, when it was realised that massive particles with a cross section of the order of the weak interaction would result in precisely the right density to explain dark matter. There have been tremendous efforts to hunt for WIMPs both at hadron colliders, especially now at CERN’s Large Hadron Collider (LHC), and in large underground detectors, such as CDMS, CRESST, DARKSIDE, LUX, PandaX and XENON. However, up to now, no WIMP has been observed (CERN Courier July/August 2018 p9).

Fig. 1.

Very light bosons as WISPs are a firm prediction of models that solve problems of the SM by the postulation of a new symmetry which is broken spontaneously in the vacuum. Such extensions contain an additional scalar field with a potential shaped like a Mexican hat – similar to the Higgs potential in the SM (figure 1). This leads to spontaneous breaking of symmetry at a scale corresponding to the radius of the trough of the hat: excitations in the direction along the trough correspond to a light Nambu–Goldstone (NG) boson, while the excitation in the radial direction perpendicular to the trough corresponds to a heavy particle with a mass determined by the symmetry-breaking scale. The strengths of the interactions between such light bosons and regular SM particles are inversely proportional to the symmetry-breaking energy scale and are therefore very weak. Being light, very weakly interacting and cold due to their non-thermal production history, these particles qualify as natural WISP cold dark-matter candidates.
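
Schematically, and in standard notation rather than that of any specific model, the potential described here can be written as

\[
V(\phi) = \lambda \left( |\phi|^{2} - \frac{v^{2}}{2} \right)^{2},
\qquad \phi = \frac{1}{\sqrt{2}}\,(v + \rho)\, e^{\,i a / v},
\]

where v sets the symmetry-breaking scale (the radius of the trough), ρ is the heavy radial excitation with mass of order √λ·v, and a is the excitation along the trough: the light Nambu–Goldstone boson.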

Primordial production

In fact, WISP dark matter is inevitably produced in the early universe. When the temperature in the primordial plasma drops below the symmetry-breaking scale, the boson fields are frozen at a random initial value in each causally-connected region. Later, they relax towards the minimum of their potential at zero fields and oscillate around it. Since there is no significant damping of these field oscillations via decays or interactions, the bosons behave as a very cold dark-matter fluid. If symmetry breaking occurs after the likely inflationary-expansion epoch of the universe (corresponding to a post-inflationary symmetry-breaking scenario), WISP dark matter would also be produced by the decay of topological defects from the realignment of patches of the universe with random initial conditions. A huge region in parameter space spanned by WISP masses and their symmetry-breaking scales can give rise to the observed dark-matter distribution.
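
For orientation, in the standard misalignment picture the homogeneous boson field φ obeys the textbook equation of motion

\[
\ddot{\phi} + 3H\dot{\phi} + m^{2}\phi = 0 ,
\]

so the field remains frozen at its random initial value while the Hubble rate H exceeds the boson mass m, and starts to oscillate once H drops below m; the energy stored in these oscillations then redshifts like pressureless matter, which is why such fields behave as cold dark matter.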

The axion is a particularly well-motivated example of a WISP. It was proposed to explain the results of searches for a static electric dipole moment of the neutron, which would constitute a CP-violating effect of QCD. The size of this CP violation, parameterised by the angle θ, is predicted to have an arbitrary value between –π and π, yet experiments show its absolute value to be less than 10⁻¹⁰. If θ is replaced by a dynamical field, θ(x), as proposed by Peccei and Quinn in 1977, QCD dynamics ensures that the low-energy effective potential of the axion field has an absolute minimum at θ = 0. Therefore, in vacuum, the CP-violating effects due to the θ angle in QCD disappear – providing an elegant solution to the strong CP problem. The axion is the inevitable particle excitation of θ(x), and its mass is determined by the unknown breaking scale of the global symmetry.
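
For reference, and in standard notation, the QCD θ-term and its replacement by the axion field read

\[
\mathcal{L}_{\theta} = \theta\, \frac{g_s^{2}}{32\pi^{2}}\, G^{a}_{\mu\nu} \tilde{G}^{a\,\mu\nu}
\;\;\longrightarrow\;\;
\frac{a(x)}{f_a}\, \frac{g_s^{2}}{32\pi^{2}}\, G^{a}_{\mu\nu} \tilde{G}^{a\,\mu\nu},
\]

where f_a is the symmetry-breaking (decay) constant; QCD dynamics generates a potential that drives the effective angle θ + a/f_a to zero, and the resulting axion mass scales inversely with f_a.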

Fig. 2.

Lattice-QCD calculations performed last year precisely determined the temperature and corresponding time after the Big Bang when axion cold dark-matter could have formed as a function of the axion mass. It was found that, in the post-inflationary symmetry breaking scenario, the axion mass has to exceed 28 μeV; otherwise, the predicted amount of dark matter overshoots the observed amount. Taking into account the additional production of axion dark-matter from the decay of topological defects, an axion with a mass between 30 μeV and 10 meV may account for all of the dark matter in the universe. In the pre-inflationary symmetry breaking scenario, smaller masses are also possible.

Axions are not the only WISP species that could account for dark matter. There could be axion-like particles (ALPs), which are very similar to axions but do not solve the CP problem of QCD, or lightweight, weakly interacting, so-called hidden photons, for example. String theory suggests a plenitude of ALPs, which could have couplings to photons, leptons or light quarks.

Due to their tiny masses, WISPs might also be produced inside stars or alter the propagation of photons in the universe. Observations of stellar evolution hint at such signals: red giants, helium-burning stars and white dwarfs seem to be experiencing unseen energy losses exceeding those expected from neutrino emission. Intriguingly, these anomalies can be explained in a unified manner by the existence of a sub-keV-mass axion or ALP with a coupling to both electrons and photons. Additionally, observations suggest that the propagation of TeV photons in the universe suffers less than expected from interactions with the extragalactic background light. This, in turn, could be explained by the conversion of photons into ALPs and back in astrophysical magnetic fields, interestingly with about the same axion–photon coupling strength as indicated by the observed stellar anomalies. Both effects have been known for almost 10 years. They are scientifically disputed, but a WISP explanation has not yet been excluded.

Experimental landscape

Most experiments searching for WISPs exploit their possible mixing with photons. Given the small masses and feeble interactions of axions and ALPs, however, building experiments that are sensitive enough to detect them is a considerable challenge. In the 1980s, Pierre Sikivie of the University of Florida in the US suggested a way forward based on the conversion of axions to photons: in a static magnetic field, the axion can “borrow” a virtual photon from the field and turn into a real photon (figure 2). Most experiments search for axions and ALPs in this way, with three main approaches being pursued: haloscopes, which look directly for dark-matter WISPs in the galactic halo of our Milky Way; helioscopes, which search for ALPs or axions emitted by the Sun; and laboratory experiments, which aim to generate and detect ALPs in a single setup.

Fig. 3.

Direct axion dark-matter searches differ in two aspects from WIMP dark-matter searches. First, axion dark matter would convert to photons, while WIMPs are scattered off matter. Second, the particle-number density for axion dark-matter, due to its low mass, is about 15 orders of magnitude larger than it is for WIMP dark matter. In fact, cold dark-matter axions and ALPs behave like a highly degenerate Bose–Einstein condensate with a de Broglie wavelength of the order of metres or kilometres for μeV and neV masses, respectively. Dark-matter axions and ALPs are thus much better pictured as a classical-field oscillation. In a magnetic field, they induce tiny electric-field oscillations with a frequency determined by the axion mass. If the de Broglie wavelength of the dark-matter axion is larger than the experimental setup, the tiny oscillations are spatially coherent in the experiment and can, in principle, be “easily” detected using a resonant microwave cavity tuned to the correct but unknown frequency. The sensitivity of such an experiment increases with the magnetic field strength squared, the volume of the cavity and its quality factor. Unfortunately, since the range of axion mass predicted by theories is huge, methods are required to tune the cavity to the frequency range corresponding to the respective axion masses.
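
The scaling quoted above is commonly summarised in the haloscope literature by the expected signal power

\[
P_{\rm sig} \;\propto\; g_{a\gamma\gamma}^{2}\, \frac{\rho_a}{m_a}\, B_0^{2}\, V\, C\, Q ,
\]

where g_aγγ is the axion–photon coupling, ρ_a the local dark-matter density, m_a the axion mass, B_0 the magnetic field, V the cavity volume, C a mode-dependent form factor of order unity and Q the loaded quality factor (up to the effective quality factor, about 10⁶, of the axion signal itself).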

This cavity approach has been the basis of most searches for axion dark matter in the past decades, in particular at the Axion Dark Matter Experiment (ADMX) at the University of Washington, US. By using a tuning rod inside the cavity to change the resonance frequency and, recently, by reducing noise in its detector system, the ADMX team has shown that it can reach sensitivity to dark-matter axions. ADMX, which has been pioneering the field for two decades, is currently taking data and could find dark-matter axions at any time, provided the axion mass lies in the range 2–10 μeV. Meanwhile, the HAYSTAC collaboration at Yale University has very recently demonstrated that the same experimental approach can be extended up to an axion mass of around 30 μeV. Since smaller-volume cavities (usually with lower quality factors) are needed to probe higher frequencies, however, the single-cavity approach is limited to axion masses below about 40 μeV. One novel method to probe higher masses is to use multiple matched cavities, an approach being pursued, for example, by ADMX and the South Korean Center for Axion and Precision Physics.

Transitions

A different way to exploit the tiny electric-field oscillations from dark-matter axions in a strong magnetic field is to use transitions between materials with different dielectric constants: at surfaces, the axion-induced electromagnetic oscillations have a discontinuity, which is to be balanced by radiation from the surface. For a mirror with a surface area of 1 m² in a 10 T field, this would lead to an undetectable emission of around 10⁻²⁷ W if axions make up all of the dark matter. Furthermore, the emission power does not depend on the axion mass. In principle, if a parabolic mirror with a surface area of 10,000 m² could be magnetised with a 10 T field, the predicted radiation power (10⁻²³ W) could be focused and detected using state-of-the-art amplification techniques, but such an experiment seems impractical at present.
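
The two power figures quoted above follow from a simple proportionality of the emitted power to the mirror area and to the square of the magnetic field. A quick back-of-the-envelope check, anchored to the 10⁻²⁷ W reference value given in the text, could look like this:

```python
# Back-of-the-envelope scaling of the power radiated from a magnetised mirror,
# anchored to the reference value quoted in the text (~1e-27 W for 1 m^2 at 10 T);
# the scaling P ∝ A * B^2 is the point being illustrated, not a precise prediction.
P_REF = 1e-27           # W, for the reference configuration below
A_REF, B_REF = 1.0, 10.0  # m^2, T

def mirror_power(area_m2: float, b_field_t: float) -> float:
    """Radiated power in watts, scaled from the reference configuration."""
    return P_REF * (area_m2 / A_REF) * (b_field_t / B_REF) ** 2

print(mirror_power(1.0, 10.0))       # ~1e-27 W: single mirror, as in the text
print(mirror_power(10_000.0, 10.0))  # ~1e-23 W: the hypothetical 10,000 m^2 dish
```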

Fig. 4.

Alternatively, many magnetised dielectric discs in parallel can be placed in front of a mirror (figure 3): since the emission from all surfaces is coherent, constructive interference can boost the signal sensitivity for a given frequency range determined by the spacing between the discs. First studies performed in recent years at the Max Planck Institute for Physics in Munich have revealed that, for axion masses around 100 μeV, the sensitivity could be good enough to cover the predicted dark-matter axion mass range. The MADMAX (Magnetized Disc and Mirror Axion Experiment) collaboration, formed in October 2017, aims to use this approach to close the sensitivity gap in the well-motivated range for dark-matter axions with masses around 100 μeV. First design studies indicate that it is feasible to build a dipole magnet with the required properties using established niobium–titanium superconductor technology. As a first step, a prototype experiment is planned, consisting of a booster with a reduced number of discs installed inside a prototype magnet. The experiment will be located at DESY in Hamburg, and first measurements sensitive to new ALP parameter ranges are planned within the next few years.

Model independent searches

These direct searches for axion dark matter are very promising, but they are hampered at present by the unknown axion mass and rely on cosmological assumptions. Other, less-model dependent, experiments are required to further probe the existence of ALPs.

Fig. 5.

ALPs with energies of the order of a few keV could be produced in the solar centre, and could be detected on Earth by pointing a strong dipole magnet at the Sun: axions entering the magnet could be converted into photons in the same way they are in cavity experiments. The difference is that the Sun would emit relativistic axions with an energy spectrum very similar to the thermal spectrum in its core, so experiments need to detect X-ray photons and are sensitive to axion masses up to a maximum depending on the length of the apparatus (figure 4, top). This helioscope technique was brought to the fore by the CERN Axion Solar Telescope (CAST), shown in figure 5, which began operations in 2002 and has excluded axion masses above 0.02 eV. As a successor, the International Axion Observatory (IAXO) was formally founded in July 2017 and received an advanced grant from the European Research Council earlier this year. The near-term goal of the collaboration is to build a scaled-down prototype version of the experiment, called babyIAXO, which is under discussion for possible location at DESY.
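
For completeness, the standard probability (in natural units) for an axion of energy E to convert into a photon while crossing a transverse magnetic field B over a length L, which underlies both helioscopes and the light-shining-through-a-wall setups described below, is

\[
P_{a\to\gamma} = \left( \frac{g_{a\gamma\gamma} B L}{2} \right)^{2}
\left( \frac{\sin(qL/2)}{qL/2} \right)^{2},
\qquad q \simeq \frac{m_a^{2}}{2E} ,
\]

so the conversion remains coherent, and the sensitivity flat in mass, only as long as qL ≲ 1; this is why the maximum accessible axion mass depends on the length of the apparatus.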

Fig. 6.

The third, laboratory-based, approach to search for WISPs also aims to generate and detect ALPs without any model assumption. In the first section of such an experiment, laser light is sent through a strong magnetic field so that ALPs might be generated via interactions of optical photons with the magnetic field. The second section is separated from the first one by a light-tight wall that can only be penetrated by ALPs. These would stream through a strong magnetic field behind the wall, allowing them to be re-converted into photons and giving the impression of light shining through a wall (figure 4, bottom).

Such experiments have been performed since the early 1990s, but no hint of any ALP has shown up. Today, the most advanced project in this laboratory-based category is ALPS II, currently being set up at DESY (figure 6). This experiment will use two optical resonators implemented into the apparatus: one to “recycle” the light in front of the wall, and one to increase the re-conversion probability of ALPs into photons behind it, allowing ALPS II to reach sensitivities beyond the ALP–photon coupling limits from helioscopes. It also plans to use 20 dipoles from the former HERA collider, each of which has to be mechanically straightened, to generate the magnetic field.

Gaining momentum

Fig. 7.

Searches for very lightweight axions and ALPs, potentially explaining all of the dark matter around us, are strongly gaining momentum. CERN has been supporting such activities in the past (with solar-axion and dark-matter searches at CAST, and the OSQAR and CROWS experiments using the shining-light-through-walls approach) and is also involved in the R&D phase for next-generation experiments such as IAXO (CERN Courier September 2014 p17). With the new initiatives of MADMAX and IAXO, both of which could be located at DESY, and the ALPS II experiment under construction there, experimental axion physics in Europe is set to probe a large fraction of a well-motivated parameter space (figure 7). Together with complementary experiments worldwide, the next 10 years or so should shine a bright light on WISPs as the solution to the dark-matter riddle, with thrilling data runs expected to start in the early 2020s.

Cosmic Anger: Abdus Salam – The First Muslim Nobel Scientist

by Gordon Fraser. Oxford University Press. Hardback ISBN 9780199208463 £25 ($49.95).

The late Abdus Salam – the only Nobel scientist from Pakistan – came from a small place in the Punjab called Jhang. The town is also famous for “Heer-Ranjha”, a legendary love story of the Romeo-and-Juliet style that has a special romantic appeal in the countryside around the town. Salam turned out to be another “Ranjha” from Jhang, whose first love happened to be theoretical physics. Cosmic Anger, Salam’s biography by Gordon Fraser, is a new, refreshing look at the life of this scientific genius from Pakistan.

I have read several articles and books about Salam and also met him several times, but I still found Fraser’s account instructive. What I find intriguing and interesting about Cosmic Anger is first the title, and second that each chapter of the book gives sufficient background and historical settings of the events that took place in the life of Salam. In this regard the first three chapters are especially interesting, in particular the third, where the author talks about Messiahs, Mahdis and Ahmadis. This shows in a definitive way the in-depth knowledge that Fraser has about Islam and the region where Salam was born.

In chapter 10, Fraser discusses the special relationship between Salam and the former President of Pakistan, Ayub Khan. I feel that more emphasis was required about the fact that for 16 years, from 1958 to 1974, Salam had the greatest influence on the scientific policies of Pakistan. On 4 August 1959, while inaugurating the Atomic Energy Commission, President Ayub said: “In the end, I must say how happy I am to see Prof. Abdus Salam in our midst. His attainments in the field of science at such a young age are a source of pride and inspiration for us and I am sure that his association with the commission will help to impart weight and prestige to the recommendations.” Salam was involved in setting up the Atomic Energy Commission and other institutes such as the Pakistan Institute of Nuclear Science and Technology and the Space and Upper Atmosphere Research Commission in Pakistan.

Finally, I find the book to be a well written account of the achievements of a genius who was a citizen of the world, destined to play a memorable role in the global development of science and technology. At the same time, in many ways Salam was very much a Pakistani. In the face of numerous provocations and frustrations, he insisted on keeping his nationality. He loved the Pakistani culture, its language, its customs, its cuisine and its soil where he was born and is buried.

Gravitational Waves Vol 1: Theory and Experiments

By Michele Maggiore, Oxford University Press. Hardback ISBN 9780198570745 £45 ($90).

This is a complete book for a field of physics that has just reached maturity. Gravitational wave (GW) physics recently arrived at a special stage of development. On the theory side, most of the generation mechanisms have been understood and some technical controversies have been settled. On the experimental side, several large interferometers are now operating around the world, with sensitivities that could allow the first detection of GWs, even if with a relatively low probability. The GW community is also starting vigorous upgrade programmes to bring the detection probability to certitude in less than a decade from now.

The need for a textbook that treats the production and detection of GWs systematically is clear. Michele Maggiore has succeeded in doing this in a way that is fruitful not only for the young physicist starting to work in the field, but also for the experienced scientist needing a reference book for everyday work.

In the first part, on theory, he uses two complementary approaches: geometrical and field-theoretical. The text fully develops and compares both, which is of great help for a deep understanding of the nature of GWs. The author also derives all equations completely, leaving just the really straightforward algebra for the reader. A basic knowledge of general relativity and field theory is the only prerequisite.

Maggiore explains thoroughly the generation of gravitational radiation by the most important astrophysical sources, including the emitted power and its frequency distribution. One full chapter is dedicated to the Hulse–Taylor binary pulsar, which constituted the first evidence for GW emission. The “tricky” subject of post-Newtonian sources is also clearly introduced and developed. Exercises that are completely worked out conclude most of these theory chapters, enhancing the pedagogical character of the book.

The second part is dedicated to experiments and starts by setting up a background of data-analysis techniques, including noise spectral density, matched filtering, probability and statistics, all of which are applied to pulse and periodic sources and to stochastic backgrounds. Maggiore treats resonant mass detectors first, because they were the first detectors chronologically to have the capability of detecting signals, even if only strong ones originating in the neighbourhood of our galaxy. The study of resonant bar detectors is instructive and deals with issues that are also very relevant to understanding interferometers. The text clearly explains fundamental physics issues, such as approaching the quantum limits and quantum non-demolition measurements.

The last chapter is devoted to a complete and detailed study of the large interferometers – the detectors of the current generation – which should soon make the first detection of GWs. It discusses many details of these complex devices, including their coupling to gravitational waves, and it makes a careful analysis of all of the noise sources.

Lastly, it is important to remark on a little word that appears on the cover: “Volume 1”. As the author explains in the preface, he is already working on the second volume. This will appear in a few years and will be dedicated to astrophysical and cosmological sources of GWs. The level of this first book allows us to expect an interesting description of all “we can learn about nature in astrophysics and cosmology, using these tools”.

The Cosmological Singularity

By Vladimir Belinski and Marc Henneaux
Cambridge University Press

This monograph discusses at length the structure of the general solution of the Einstein equations with a cosmological singularity in Einstein-matter systems in four and higher space–time dimensions, starting from the fundamental work of Belinski (the book’s lead author), Khalatnikov and Lifshitz (BKL) – published in 1969.

The text is organised in two parts. The first, comprising chapters one to four, is dedicated to an exhaustive presentation of the BKL analysis. The authors begin by deriving the oscillatory, chaotic behaviour of the general solution for pure Einstein gravity in four space–time dimensions, following the original approach of BKL. In chapters two and three, homogeneous cosmological models and the nature of the chaotic behaviour near the cosmological singularity are discussed. Throughout these first three chapters, the properties of the general solution of the Einstein equations are studied in the case of empty space in four space–time dimensions. The fourth chapter instead deals with different systems: perfect fluids in four space–time dimensions; gauge fields of the Yang–Mills and electromagnetic types and scalar fields, also in four space–time dimensions; and pure gravity in higher dimensions.

The second part of the book (chapters five to seven) is devoted to a model in which the chaotic oscillations discovered by BKL can be described in terms of a “cosmological billiard” system. In chapter five, the billiard description is provided for pure Einstein gravity in four dimensions, without any simplifying symmetry assumption, while the following chapter extends this analysis to arbitrary higher space–time dimensions and to general systems containing gravity coupled to matter fields. Finally, chapter seven covers the intriguing connection between the BKL asymptotic regime and Coxeter groups of reflections in hyperbolic space. Four appendices complete the treatment.

Quite technical and advanced, this book is meant for theoretical and mathematical physicists working on general relativity, supergravity and cosmology.

Gravitational Lensing

By Scott Dodelson
Cambridge University Press

Based on university lectures given by the author, this book provides an overview of gravitational lensing, which has emerged as a powerful tool in astronomy with numerous applications, ranging from the quest for extrasolar planets to the study of the cosmic mass distribution.

Gravitational lensing is a consequence of general relativity (GR): the gravitational field of a massive object causes light rays passing close to it to bend and refocus somewhere else. As a consequence, any treatment of this topic has to make reference to GR theory; nevertheless, as the author highlights, not much formalism is required to learn how to apply lensing to specific problems. Thus, using very little GR and not too complex mathematics, this text presents the basics of gravitational lensing, focusing on the equations needed to understand the phenomenon. It then dives into a number of applications, including multiple images, time delays, exoplanets, microlensing, cluster masses, galaxy shape measurements, cosmic shear and lensing of the cosmic microwave background.

Written with a pedagogical approach, this book is meant as a textbook for one-semester undergraduate or graduate courses. But it can also be used for independent study by researchers interested in entering this fascinating and fast-evolving field.
