Statistical Data Analysis for the Physical Sciences

By Adrian Bevan
Cambridge University Press
Hardback: £40 $75
Paperback: £18.99 $31.99

E-book: $26
Also available at the CERN bookshop

The numerous foundational errors and misunderstandings in this book make it inappropriate for use by students or research physicists at any level. There is space here to indicate only a few of the more serious problems.

The fundamental concepts – probability, probability density function (PDF) and likelihood function – are confused throughout. Likelihood is defined as being “proportional to probability”, and both are confused with a PDF in section 3.8(6). Exercise 3.11 invites the reader to “re-express the PDF as a likelihood function”, which is absurd because the two are functions of different kinds of arguments.
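
To make the distinction concrete, a minimal sketch (my illustration, not the book's):

```python
# The same expression f(x | theta) is a PDF when read as a function of
# the data x with theta fixed, and a likelihood when read as a function
# of the parameter theta with x fixed. Illustrated with a unit Gaussian.
from scipy.stats import norm

theta = 0.0   # fixed parameter: a PDF in x, which integrates to 1 over x
pdf_in_x = [norm.pdf(x, loc=theta, scale=1.0) for x in (-1.0, 0.0, 1.0)]

x_obs = 0.5   # fixed observation: a likelihood in theta, which in
              # general does NOT integrate to 1 over theta
likelihood_in_theta = [norm.pdf(x_obs, loc=t, scale=1.0) for t in (-1.0, 0.0, 1.0)]
```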

Probability and probability density are confused most notably in section 5.5 (χ² distribution), where the “probability of χ²” is given as the value of the PDF instead of its integral from χ² to infinity. (The latter quantity is in fact the p value, which is introduced later in section 8.2, but is needed here already.) The student who evaluates the PDFs labelled P(χ², ν) in figure 5.6 to do exercises 5.10 to 5.12 will get the wrong answers, but the numbers given in table E11 – miraculously – are correct p values. Fortunately the formulas in the book were not used for the tables.
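
The correct calculation is a one-liner in any statistics library; a sketch using scipy, with illustrative numbers:

```python
# The "probability of chi-squared" needed here is the upper-tail integral
# of the PDF from the observed value to infinity (the p value), not the
# value of the PDF itself. The inputs below are illustrative.
from scipy.stats import chi2

chi2_obs, ndf = 10.0, 5
p_value   = chi2.sf(chi2_obs, ndf)   # integral from chi2_obs to infinity
pdf_value = chi2.pdf(chi2_obs, ndf)  # the density: not a probability
print(f"p value = {p_value:.4f}, PDF value = {pdf_value:.4f}")
```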

From the beginning there is confusion about what is Bayesian and what is not. Bayesian probability is defined correctly as a degree of belief, but Bayes’s theorem is introduced in the section entitled “Bayesian probability”, even though it can be used equally well in frequentist statistics, and in fact nearly all of the examples use frequentist probabilities. The different factors in Bayes’s theorem are given Bayesian names (one of which is wrong: the likelihood function is inexplicably called “a priori probability”), but the examples labelled “Bayesian” do not use the theorem in a Bayesian way. Worse, the example 3.7.4, labelled Bayesian, confuses the two arguments of conditional probability throughout, and equation 3.17 is wrong (as can be seen by comparing it with P(A) in section 3.2, which is correct). On the other hand, in section 8.7.1 a similar example – with frequentist probabilities again – is presented clearly and correctly. Example 3.7.5 (also labelled Bayesian) is, as far as I can see, nonsense (what is outcome A?).

The most serious errors occur in chapter 7 (confidence intervals). Confidence intervals are frequentist by definition, otherwise they should be called credible intervals. But the treatment here is a curious mixture of Bayesian, frequentist and pure invention. The definition of the confidence level (CL) is novel and involves integration under a PDF that could be the Bayesian posterior but in some examples turns out to be a likelihood function. Coverage is then defined in a frequentist-inspired way (invoking repeated experiments), but it is not the correct frequentist definition. The Feldman–Cousins (F–C) frequentist method is presented without having described the more general Neyman construction on which it is based. A good treatment of the Neyman construction would have allowed the reader to understand coverage better, which the book identifies correctly as the most important property of confidence intervals. It is true that for discrete (e.g. Poisson) data, the F–C method in general over-covers, but it should also have been stated that for this case any method (including Bayesian) that covers for all parameter values must over-cover for some. The “coverage” that this book claims to be exact for Bayesian methods is not an accepted definition because it represents subjective belief only and does not have the frequentist properties required by physicists.
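
For reference, the frequentist definition of coverage can be stated, and checked, in a few lines. A minimal sketch using the standard exact central interval for a Poisson mean (my illustration, not the book's method or the F–C construction):

```python
# Frequentist coverage: at fixed true mu, the fraction of repeated
# experiments whose interval contains mu must be at least the stated CL.
# Uses the exact (Garwood) central interval for a Poisson mean; its
# over-coverage for discrete data illustrates the point made above.
import numpy as np
from scipy.stats import chi2

def central_interval(n, cl=0.68):
    """Exact central confidence interval for a Poisson mean, given count n."""
    alpha = 1.0 - cl
    lo = 0.0 if n == 0 else 0.5 * chi2.ppf(alpha / 2.0, 2 * n)
    hi = 0.5 * chi2.ppf(1.0 - alpha / 2.0, 2 * n + 2)
    return lo, hi

rng = np.random.default_rng(1)
mu_true = 3.0
counts = rng.poisson(mu_true, size=100_000)           # repeated experiments
intervals = {n: central_interval(n) for n in range(counts.max() + 1)}
covered = sum(intervals[n][0] <= mu_true <= intervals[n][1] for n in counts)
print(f"coverage at mu = {mu_true}: {covered / counts.size:.3f}")  # >= 0.68
```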

The Physics of Reality: Space, Time, Matter, Cosmos

By Richard L Amoroso, Louis H Kauffman, Peter Rowlands (eds.)
World Scientific
Hardback: £111
E-book: £83

As the proceedings of the 8th Symposium Honoring Mathematical Physicist Jean-Pierre Vigier, this book introduces a new method in theory formation, completing the tools of epistemology. Like Vigier himself, the Vigier symposia are noted for addressing avant-garde, cutting-edge topics in contemporary physics. In this volume, several important breakthroughs are introduced for the first time. The most interesting is a continuation of Vigier’s pioneering work on tight-bound states in hydrogen. The new experimental protocol described not only promises empirical proof of large-scale extra dimensions in conjunction with avenues for testing string theory, but also implies the birth of unified field mechanics, ushering in a new age of discovery.

Countdown to physics

Following the restart of the first elements in CERN’s accelerator complex in June, beams are now being delivered to experiments from the Proton Synchrotron (PS) and the PS Booster.

First in line were experiments in the East Area of the PS, where the T9 and T10 beam lines are up and running. These test beams serve projects such as the Advanced European Infrastructures for Detectors at Accelerators (AIDA), which looks at new detector solutions for future accelerators, and the ALICE collaboration’s tests of components for its inner tracking system. By the evening of 14 July, beam was hitting the East Area’s target, and the next day beams were back in T9 and T10.

Next to receive beams for physics were experiments at the neutron time-of-flight facility, n_TOF, and the Isotope mass Separator On-Line facility, ISOLDE. On 25 July, detectors measured the first neutron beam in n_TOF’s new Experimental Area 2 (EAR2). It was a low-intensity beam, but it showed that the whole chain – from the spallation target to the experimental hall, including the sweeping magnet and the collimators – is working well. Built about 20 m above the neutron production target, EAR2 is a bunker connected to the underground facilities via a vertical flight path through a duct 80 cm in diameter, where the beamline is installed. At n_TOF, neutron-induced reactions are studied with high accuracy, thanks to the high instantaneous neutron flux that the facility provides. The first experiments will be installed in EAR2 this autumn and the schedule is full until the end of 2015.
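
As a rough illustration of the facility's principle (not taken from n_TOF documentation), the neutron kinetic energy follows from the measured flight time over the known path length:

```python
# Illustrative only: inferring neutron kinetic energy from time of flight
# over a known path length (about 20 m for EAR2, as stated above).
# Relativistic kinematics throughout.
import math

M_N_MEV = 939.565     # neutron rest mass [MeV]
C = 299_792_458.0     # speed of light [m/s]

def kinetic_energy_mev(flight_path_m, time_of_flight_s):
    beta = flight_path_m / (time_of_flight_s * C)
    gamma = 1.0 / math.sqrt(1.0 - beta ** 2)
    return M_N_MEV * (gamma - 1.0)

# e.g. a neutron covering 20 m in 1.5 microseconds:
print(f"{kinetic_energy_mev(20.0, 1.5e-6):.3f} MeV")
```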

A week later, on 1 August, ISOLDE restarted its physics programme with beams from the PS Booster, after a shutdown of almost a year and a half during which many improvements were made. One of the main projects was the installation of new robots for handling the targets, which become highly radioactive. The previous robots were more than 20 years old and beginning to suffer from the effects of radiation. The long shutdown of CERN’s accelerator complex, LS1, provided the perfect opportunity to replace them with more modern robots with electronic-sensor feedback. On the civil-engineering side, three ISOLDE buildings have been demolished and replaced with a single building that includes a new control room, a data-storage room, three laser laboratories, and a biology and materials laboratory. In the ISOLDE hall, new permanent experimental stations have also been installed. Almost 40 experiments are planned for the remainder of 2014.

After the PS, the Super Proton Synchrotron (SPS) will be next to receive beam. On 27 June, the SPS closed its doors to the LS1 engineers, bringing almost 17 months of activities to an end. The machine has now entered the hardware-testing phase, in preparation for a restart in October.

Meanwhile at the LHC, early August saw the start of the cool-down of a third sector, sector 1-2. By the end of August, five sectors of the machine should be in the process of cooling down, with one (sector 6-7) already cold. The copper-stabilizer continuity measurements (CSCM) have been completed in the first sector (6-7), with no defects found, and CSCM tests are to start in the second sector in mid-August. Elsewhere in the machine, the last pressure tests were carried out on 31 July, and the last short-circuit tests should be complete by mid-August.

Precise measurements of top-quark production

The top quark is the heaviest known fundamental particle. Its mass of about 173 GeV is much larger than that of the other quarks, and comparable to those of the W, Z and Higgs bosons. The copious production of top quark–antiquark pairs via the strong interaction in proton–proton collisions at the LHC allows a rich programme of studies, but it also makes top pairs one of the key backgrounds to be understood in searches for physics beyond the Standard Model. In a recent paper, the ATLAS collaboration reports precise measurements of the top-pair cross-section – i.e. the production rate – at centre-of-mass energies (√s) of both 7 and 8 TeV, using the full data samples recorded in 2011 and 2012.

The measurements are made using a distinctive final state in which one top quark decays to an electron, a neutrino and a b quark, and the other to a muon, a neutrino and a b quark. This gives rise to events with an opposite-sign electron–muon pair and collimated jets of particles “tagged” as likely to have originated from b quarks. Counting events with exactly one and with exactly two such b-tagged jets reduces the uncertainties associated with jet reconstruction and b-quark tagging, compared with earlier measurements at the LHC and at the Tevatron at Fermilab. The total uncertainties are around 4%, giving the most precise top-pair production measurements to date.
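
In simplified form, and neglecting backgrounds, the counting method solves two equations simultaneously for the cross-section and the b-tagging efficiency. The sketch below uses illustrative numbers, not the values in the ATLAS paper:

```python
# Simplified sketch of the two-equation counting method described above.
# N1 and N2 are the e-mu events with exactly one and exactly two b-tagged
# jets, eps_emu is the e-mu selection efficiency and C_b a tagging
# correlation factor close to 1:
#   N1 = L * sigma * eps_emu * 2 * eps_b * (1 - C_b * eps_b)
#   N2 = L * sigma * eps_emu * C_b * eps_b**2
# so the b-tagging efficiency is measured from the data themselves.

def solve_sigma_epsb(n1, n2, lumi, eps_emu, c_b=1.0):
    x = 2.0 * n2 / (n1 + 2.0 * n2)     # x = C_b * eps_b, from the ratio N1/N2
    eps_b = x / c_b
    sigma = n2 / (lumi * eps_emu * c_b * eps_b ** 2)
    return sigma, eps_b

# Illustrative inputs: 21000 one-tag and 11000 two-tag events,
# 20 fb^-1 (= 20000 pb^-1), 0.8% e-mu selection efficiency.
sigma, eps_b = solve_sigma_epsb(21_000, 11_000, lumi=20.0e3, eps_emu=0.008)
print(f"sigma = {sigma:.1f} pb, eps_b = {eps_b:.2f}")
```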

Theoretical predictions for the top-pair cross-section are now available at next-to-next-to-leading-order (NNLO) accuracy in QCD, with uncertainties of about 5%. The results are in good agreement with these predictions, and give sensitivity to the fraction of the proton momentum carried by gluons. As the figure shows, the cross-section predictions depend on the assumed mass of the top quark, mt, so the measurements can be interpreted as a determination of mt, giving mt = 172.9 +2.5/−2.6 GeV. This technique measures the top-quark pole mass, and the resulting value is in good agreement with values obtained from direct reconstruction of top-quark decay products, which involves different theoretical assumptions. Finally, the agreement between measurements and QCD predictions leaves little room for additional top-quark production from physics processes beyond the Standard Model, such as supersymmetry. For example, the measurements exclude supersymmetric partners of the top quark with masses between mt and 177 GeV that decay to top quarks and invisible neutralinos – a mass range that is difficult to address with more traditional searches.
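
The mass extraction amounts to finding where the falling theory curve crosses the measured cross-section. A sketch under stated assumptions – the local mt⁻⁴ scaling and the reference values below are rough illustrations, not the paper's inputs:

```python
# Sketch of the pole-mass extraction: the NNLO prediction falls with the
# assumed top mass, so the measured cross-section picks out the mass at
# which theory and measurement agree. Numbers are illustrative only.
from scipy.optimize import brentq

def sigma_theory_pb(mt_gev, sigma_ref=253.0, mt_ref=172.5):
    return sigma_ref * (mt_ref / mt_gev) ** 4   # approximate local scaling

sigma_measured = 242.4   # illustrative 8 TeV cross-section in pb

mt_pole = brentq(lambda m: sigma_theory_pb(m) - sigma_measured, 160.0, 185.0)
print(f"extracted pole mass ~ {mt_pole:.1f} GeV")
```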

CMS releases final Run 1 results on H → γγ

The CMS collaboration achieved an important milestone this summer with the completion of the analysis of the last of the five main channels that contributed to the discovery of a Higgs boson in July 2012 and to the subsequent measurements of the particle’s properties.

The results of the final analysis in the diphoton decay channel, H → γγ, were presented at the 2014 International Conference on High Energy Physics in Valencia and, at the same time, submitted for publication (CMS 2014a). This is one of the two Higgs-decay channels – the other being H → ZZ → four leptons – that have very good mass resolution and therefore allow the unambiguous detection of the Higgs boson and a precise measurement of its mass. However, H → γγ is probably the most difficult decay to exploit at the LHC. It requires a great deal of effort on the optimization and calibration of the electromagnetic calorimeter for photon identification and energy measurement, as well as highly sophisticated analysis methods designed to beat the large backgrounds from sources other than the Higgs.

The first preliminary results on the full Run 1 data were presented by CMS in March 2013. Since then, a large amount of work has gone into all aspects of the analysis: the understanding of the photon energy scale was greatly improved, exclusive selections addressing all possible production processes were deployed, and major improvements were made in the statistical treatment of the background estimation. All of these changes led to an increase in sensitivity of approximately 25% and to a reduction of the systematic uncertainty in the mass measurement by a factor of three.

The analysis is based on various multivariate discriminants that are mainly used to separate events into a total of 25 exclusive categories that not only increase the sensitivity but also allow measurement of the different production processes for the Higgs boson in the H → γγ channel alone. The expected final sensitivity for the observation has increased from 4.2σ for the preliminary result to 5.2σ. The data show a 5.7σ excess at the Higgs boson mass of 125 GeV, therefore providing the definitive observation of the Higgs boson in the diphoton decay channel alone.

The final results of the analysis indicate that the yield of diphoton decays relative to the predictions of the Standard Model (the signal-strength modifier) is 1.14 +0.26/−0.23 – in very good agreement with the Standard Model. In addition, the mass of the Higgs boson is measured to be 124.70±0.34 GeV – the most precise measurement to date.

The figure shows the combined weighted diphoton mass distribution, where a large excess in the region of 125 GeV is clearly visible. The publication presents a host of additional measurements, including the signal-strength modifiers associated with different production mechanisms, direct upper bounds on the Higgs boson width, the search for quasi-degenerate states decaying into two photons, and a spin analysis.

CMS also performed a preliminary combination of these results with the previously published results for the other channels (CMS 2014b). The overall signal strength from this combination is found to be 1.00±0.13, again in striking agreement with the predictions of the Standard Model.

The 1-2-3 of Ds meson spectroscopy

The LHCb collaboration has shown that a D0K structure with invariant mass 2860 MeV/c² is composed of two resonances, one with spin 1 and the other with spin 3. This is the first time that a heavy-flavoured spin-3 particle has been observed, and it should lead to new insights into hadron spectroscopy.

The LHCb experiment is designed primarily to study CP violation and rare decays of b hadrons. However, the large samples of decays collected are also allowing detailed studies of the spectroscopy of lighter particles produced in various decay channels. LHCb has already determined the quantum numbers of the X(3872) particle and established that the Z(4430)+ state is indeed a resonance. Now, for the first time, the collaboration has used amplitude-analysis techniques to study Bs0 → D0K−π+ decays. The well-defined initial and final states allow the determination of the spin and parity of any intermediate D0K resonance through the angular orientation of the decay products.

The figure shows the angular distribution of events in a peak with D0K invariant mass around 2860 MeV/c². The data points are well fitted by a model that includes both spin-1 and spin-3 particles (solid blue curve). Models with only a spin-1 (red curve) or only a spin-3 (green curve) resonance are excluded with a significance of more than 10σ. A similar analysis of the angular distribution for events around the D*s2(2573) peak establishes, for the first time, that this resonance is indeed spin 2. In addition, the mass of this resonance is determined much more precisely than in previous measurements, suggesting that renaming it the D*s2(2568) might be in order.
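
As a sketch of why the decay angles discriminate between spins (interference terms are omitted here for simplicity), the helicity-angle distribution for a spin-J resonance decaying to two pseudoscalars follows the square of the Legendre polynomial P_J(cos θ):

```python
# Minimal sketch: the angular shapes that separate the spin hypotheses.
# For a spin-J resonance decaying to two pseudoscalars, the resonant term
# varies as P_J(cos theta)**2 (interference with other amplitudes ignored).
import numpy as np
from scipy.special import eval_legendre

cos_theta = np.linspace(-1.0, 1.0, 201)
shape_spin1 = eval_legendre(1, cos_theta) ** 2   # two lobes, one node
shape_spin3 = eval_legendre(3, cos_theta) ** 2   # four lobes, three nodes

# A fit of the measured angular distribution to a sum of such terms (plus
# interference) is what distinguishes the spin-1 + spin-3 model from
# either hypothesis alone.
```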

The identification of a spin-3 resonance at a mass of 2860 MeV/c² fits with the theoretical expectation for the ³D₃ state – in the spectroscopic notation (2S+1)LJ, where S is the sum of the quark spins, L is the orbital angular momentum between the quarks and J is the total spin. It remains to be seen whether the production rate can be explained, because states with spin greater than two have never previously been observed in B-meson decays. With further analyses of the large samples available from LHCb and its upgrade, a new era of heavy-flavour spectroscopy could be beginning.
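
A quick consistency check of the notation: for given S and L, the total spin J runs from |L − S| to L + S, so the D-wave (L = 2) states with S = 1 include J = 3.

```python
# Enumerate the allowed total spins J for quark-spin sum S and orbital
# angular momentum L; the 3D3 state corresponds to S = 1, L = 2, J = 3.
def allowed_J(S, L):
    return list(range(abs(L - S), L + S + 1))

print(allowed_J(S=1, L=2))  # [1, 2, 3]
```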

MicroBooNE detector is moved into place

The particle detector for MicroBooNE, a new short-baseline neutrino experiment at Fermi National Accelerator Laboratory, was gently lowered into place on 23 June. It is expected to detect its first neutrinos this winter.

The detector – a time-projection chamber surrounded by a 12-m-long cylindrical vessel – was carefully transported by truck across the Fermilab site, from the assembly building where the detector was constructed to the experimental hall nearly 5 km away. The 30-tonne object was then hoisted up by a crane, lowered through the open roof of a new building and placed into its permanent home, directly in the path of Fermilab’s Booster neutrino beamline.

When filled with 170 tonnes of liquid argon, MicroBooNE will look for low-energy neutrino oscillations to help to resolve the origin of a mysterious low-energy excess of particle events seen by the MiniBooNE experiment, which used the same beam line and relied on a Cherenkov detector filled with mineral oil.

The MicroBooNE time-projection chamber is the largest ever built in the US and is equipped with 8256 delicate gold-plated wires. The three layers of wires will capture pictures of particle interactions at different points in space and time. The superb resolution of the time-projection chamber will allow scientists to check whether the excess of MiniBooNE events is due to photons or electrons.

Using one of the most sophisticated processing programs ever designed for a neutrino experiment, computers will sift through the thousands of neutrino interactions recorded every day and create 3D images of the most interesting ones. The MicroBooNE team will use that data to learn more about neutrino oscillations and to narrow the search for a hypothesized fourth type of neutrino.

MicroBooNE is a cornerstone of Fermilab’s short-baseline neutrino programme, which could also see the addition of two more neutrino detectors along the Booster neutrino beamline, to refute or confirm hints of a fourth type of neutrino first reported by the LSND collaboration at Los Alamos National Laboratory. In its recent report, the Particle Physics Project Prioritization Panel (P5) expressed strong support for a short-baseline neutrino programme at Fermilab. The report was commissioned by the High Energy Physics Advisory Panel, which advises both the US Department of Energy and the National Science Foundation on funding priorities.

The detector technology used in MicroBooNE will serve as a prototype for a much larger liquid-argon detector that has been proposed as part of a long-baseline neutrino facility to be hosted at Fermilab. The P5 report strongly supports this larger experiment, which will be designed and funded through a global collaboration.

Global strategies for particle physics

The International Committee for Future Accelerators (ICFA) has issued a statement that endorses the strategic plans for the future of high-energy physics in Europe, Asia and the US. It also reaffirms ICFA’s support of the International Linear Collider (ILC) and its encouragement of international studies of future circular colliders.

The statement was issued at ICFA’s first meeting after the publication of the “P5” roadmap for the future of US particle physics. Previously published Asian and European strategies share common priorities. These strategies, which are the result of processes that involved each region’s particle-physics communities, provide guidelines for governments to make decisions in science policy.

• For the ICFA statement, see www.fnal.gov/directorate/icfa/ICFA_Statement_20140706.pdf.

Study hints at cosmic-ray ‘hotspot’

The distribution of ultra-high-energy cosmic rays (UHECRs) recorded by the Telescope Array (TA) in the northern sky displays an intriguing “hotspot” in the direction of the constellation Ursa Major. The 19 events out of 72 with energies above 57 EeV (1 EeV = 10¹⁸ eV) clustering in a circle 40° in diameter represent a statistical excess of 5.1σ. The calculated probability of such a cluster occurring by chance in an isotropic distribution is only 3.7 in 10,000, but there is no obvious association with known sources in this field.

The origin of cosmic rays has intrigued physicists since their discovery in 1912. The difficulty is that, unlike light rays, these charged particles are deflected by the Galaxy’s magnetic field, so their arrival directions are randomized. It is only recently, through indirect methods, that the Fermi Gamma-ray Space Telescope was able to find evidence for cosmic rays being accelerated in supernova remnants (CERN Courier April 2013 p12).

UHECRs – particles with energies above 1 EeV – are thought to be of different origin. Those with the highest energies (E > 60 EeV) are the least affected by magnetic fields and should roughly keep their original direction, pointing back towards their source. They are also interesting because they cannot come from distances much greater than 300 million light-years, beyond which they would interact with the cosmic microwave background to produce pions. In 2007 the collaboration behind the Pierre Auger Observatory (PAO) announced a correlation between the distribution of UHECRs in the southern sky and nearby active galactic nuclei (CERN Courier December 2007 p5). After the initial enthusiasm, however, additional data slightly weakened the significance of this result rather than increasing it. What is clear, nevertheless, is that the Galactic plane is not the prime source of UHECRs. They must therefore originate somewhere in the large-scale structures of the local universe.

A new study of the distribution of UHECRs now claims a strongly anisotropic distribution, with a large “hotspot” centred on the well-known constellation of Ursa Major. The study uses data obtained in the years 2008–2013 by the TA – the northern-sky analogue of the PAO. Covering an area of around 700 km² in Utah, with 3 m² scintillation detectors placed every 1.2 km on a square grid, the TA is currently the largest UHECR detector in the northern hemisphere. During the six-year study, only 72 events with energies above 57 EeV were detected. Assuming an uncertainty in direction of 20° – so that a circle 40° in diameter is associated with each event – the researchers found a clustering of events, extending over roughly 40°, with a statistical excess of 5.1σ. To account better for random clustering, the collaboration simulated a million Monte Carlo data sets of 72 spatially random events in the field of view, and obtained 365 instances of a clustering, at any scale, stronger than the observed one. This corresponds to a chance probability of only 3.7 × 10⁻⁴, equivalent to a one-sided significance of 3.4σ.
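
The relation between the quoted numbers is the standard one-sided Gaussian conversion; a one-line check with scipy:

```python
# Convert the Monte Carlo chance probability to a one-sided significance.
from scipy.stats import norm

p_chance = 3.7e-4                         # 365 out of a million trial data sets
print(f"{norm.isf(p_chance):.1f} sigma")  # ~3.4, as quoted
```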

The collaboration, with physicists mainly from the Universities of Tokyo and Utah, notes that there are no specific sources at the position of the excess. If the hotspot is real, it might be associated with the supergalactic plane, which contains local galaxy clusters such as the Ursa Major, Coma and Virgo clusters. This would imply a deflection of more than around 40° from this plane to the observed hotspot – too large an angle for protons, and a possible indication that the cosmic rays are heavier nuclei, which are deflected more strongly by magnetic fields. The TAx4 project, a proposed extension of the TA, would increase the detection rate and so help to confirm the existence of the excess.

IAXO: the International Axion Observatory

The recent discovery of a Higgs boson at CERN appears to represent the summit of the successful experimental verification of the Standard Model of particle physics. However, although essentially all of the data from particle accelerators are so far in perfect agreement with the model’s predictions, a number of important theoretical and observational considerations point to the necessity of physics beyond the Standard Model. An especially powerful argument comes from cosmology. The currently accepted cosmological model invokes two exotic ingredients – dark matter and dark energy – that pervade the universe. In particular, the observational evidence for dark matter (via its gravitational effects on visible matter) is now overwhelming, even though the particle-physics nature of both dark matter and dark energy remains a mystery.

At the same time, the theoretical foundations of the Standard Model have shortcomings that prompt theorists to propose and explore hypothetical ways to extend it. Supersymmetry is one such hypothesis, which also naturally provides particles as candidates for dark matter, known as weakly interacting massive particles (WIMPs). Other extensions to the Standard Model predict particles that could lie hidden at the low-energy frontier, of which the axion is the prototype. The fact that supersymmetry has not yet been observed at the LHC, and that no clear signal of WIMPs has appeared in dark-matter experiments, has increased the community’s interest in searching for axions. However, there are independent and powerful motivations for axions, and dark matter composed of both WIMPs and axions is viable, implying that they should not be considered as alternative, exclusive solutions to the same problem.

After more than a decade of searching for solar axions, CAST has put the strongest limits yet on axion–photon coupling

Axions appear in extensions of the Standard Model that include the Peccei–Quinn mechanism, which provides the most promising solution so far to one of the puzzles of the Standard Model: why do the strong interactions appear not to violate charge–parity symmetry when, according to QCD, the standard theory of the strong interactions, they should? Unlike many particles predicted by theories beyond the Standard Model, axions should be light, so it might seem that they would have been detected already. Nevertheless, they could exist and still have gone unnoticed, because they naturally couple only very weakly to Standard Model particles.

A generic property of axions is that they couple to photons in such a way that axion–photon conversion (and vice versa) can occur in the presence of strong magnetic or electric fields. This phenomenon is the basis of axion production in stars, as well as of most strategies for detecting axions. Magnets are therefore at the core of any axion experiment, as is the case for axion helioscopes, which look for axions from the Sun. This is the strategy followed by the CERN Axion Solar Telescope (CAST), which uses a decommissioned LHC test magnet (CERN Courier April 2010 p22). After more than a decade of searching for solar axions, CAST has set the strongest limits yet on the axion–photon coupling across a range of axion masses, surpassing previous astrophysical limits for the first time and probing relevant axion models of sub-electron-volt mass. However, improving on these results and going deeper into unexplored axion parameter space requires a completely new experiment.

The International Axion Observatory (IAXO) aims for a signal-to-noise ratio 10⁵ times better than that of CAST. Such an improvement is possible only by building a large magnet, together with optics and detectors that optimize the axion helioscope’s figure of merit, while building on the experience and concepts of the pioneering CAST project.

The central component of IAXO is a superconducting toroidal magnet. The detector relies on a high magnetic field distributed across a large volume to convert solar axions into detectable X-ray photons. The magnet’s figure of merit is proportional to the square of the product of magnetic field and length, multiplied by the cross-sectional area filled with the field. This consideration leads to a 25-m-long, 5.2-m-diameter toroid assembled from eight coils, generating 2.5 T in eight bores of 600 mm diameter and giving a figure of merit 300 times better than that of the CAST magnet. The toroid’s stored energy is 500 MJ.
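
A rough back-of-the-envelope check of the quoted factor of 300 (my own estimate: the ~20 m effective field length, and the CAST parameters of 9 T over 9.26 m with two 14.5 cm² apertures, are assumptions taken from published CAST descriptions, not from the text):

```python
# Magnet figure of merit f ~ (B*L)^2 * A, compared between IAXO and CAST.
import math

def magnet_fom(B_tesla, L_m, A_m2):
    return (B_tesla * L_m) ** 2 * A_m2

A_iaxo = 8 * math.pi * 0.3 ** 2   # eight bores of 600 mm diameter [m^2]
A_cast = 2 * 14.5e-4              # assumed CAST aperture: two LHC-dipole bores

# Assumed effective field length of ~20 m for the 25-m-long IAXO cold mass.
ratio = magnet_fom(2.5, 20.0, A_iaxo) / magnet_fom(9.0, 9.26, A_cast)
print(f"IAXO / CAST figure of merit ~ {ratio:.0f}")   # order 300
```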

The design is inspired by the barrel and endcap toroids of the ATLAS experiment at the LHC – the largest superconducting toroids ever built and currently in operation at CERN. The superconductor used is a NbTi/Cu-based Rutherford cable co-extruded with aluminium – a successful technology common to most modern detector magnets. The IAXO detector needs to track the Sun for the longest possible period, so to allow rotation around two axes the 250-tonne magnet is supported at its centre of mass by a system of the kind used for large telescopes (figure 1). The necessary services for vacuum, helium supply, current and controls rotate together with the magnet.

Each of the eight magnet bores will be equipped with X-ray focusing optics that exploit the fact that, at X-ray energies, the index of refraction of most materials is slightly less than unity. By working at shallow (grazing) angles of incidence, it is possible to make mirrors with high reflectivity. Such mirrors are commonly used at synchrotrons and free-electron lasers to condition or focus intense X-ray beams for user experiments, but IAXO requires optics with much larger apertures. For nearly 50 years, the X-ray astronomy and astrophysics community has been building telescopes following the design principle of Hans Wolter, in which two mirrors with conic-section profiles provide true-imaging optics. This class of optics allows “nesting” – that is, placing concentric, co-focal X-ray mirrors inside one another to achieve high throughput.
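
The grazing-incidence condition can be made quantitative: with refractive index n = 1 − δ, total external reflection occurs below a critical grazing angle of about √(2δ). The δ below is an assumed, illustrative order of magnitude for a metal coating at a few keV, not a value from the text:

```python
# Critical grazing angle for total external reflection of X-rays.
import math

delta = 3e-5                         # illustrative index decrement (assumed)
theta_c = math.sqrt(2.0 * delta)     # critical grazing angle [rad]
print(f"theta_c ~ {math.degrees(theta_c):.2f} deg")   # a fraction of a degree
```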

IAXO will use CERN’s expertise efficiently to venture deep into unexplored axion parameter space

The IAXO collaboration envisions using optics similar to those used on NASA’s NuSTAR – an X-ray astrophysics satellite with two focusing telescopes that operate in the 3–79 keV band. NuSTAR’s optics consist of thousands of thermally formed glass substrates deposited with multilayer coatings to enhance the reflectivity above 10 keV (figure 2). For IAXO, the multilayer coatings will be designed to match the softer 1–10 keV solar-axion spectrum.

At the focal plane of each of the optics, IAXO will have small time-projection chambers read out by pixelized Micromegas planes. These detectors (figure 2) have been developed extensively within the CAST collaboration and show promise for detecting X-rays with a record-low background level of 10⁻⁸–10⁻⁷ counts/keV/cm²/s. This is achieved by the use of radiopure detector components, appropriate shielding, and offline discrimination algorithms applied to the 3D event topology registered in the gas by the pixelized read-out.

Beyond the baseline described above, additional enhancements are being considered to explore extensions of the physics case for IAXO. Because a high magnetic field in a large volume is an essential component in any axion experiment, IAXO could evolve into a generic “axion facility” and facilitate various detection techniques. Most intriguing is the possibility of hosting microwave cavities and antennas to search for dark-matter axions in mass ranges that are complementary to those in previous searches.

The growing IAXO collaboration has recently finished the conceptual design of the experiment, and last year a Letter of Intent was submitted to the SPS and PS Experiments Committee of CERN. The committee acknowledged the physics goals of IAXO and recommended proceeding with the next stage – the creation of the Technical Design Report. These are the first steps towards the realization of the most ambitious axion experiment so far.

After more than three decades, the axion hypothesis remains one of the most compelling portals to new physics beyond the Standard Model, and must be considered seriously. IAXO will use CERN’s expertise efficiently to venture deep into unexplored axion parameter space. Complementing the successful high-energy frontier at the LHC, the IAXO facility would open a new window on the dark universe.
