FuSuMaTech initiative levels up

On 1 April more than 90 delegates gathered at CERN to discuss perspectives on superconducting magnet technology. The workshop marked the completion of phase 1 of the Future Superconducting Magnet Technology (FuSuMaTech) Initiative, launched in October 2017.

FuSuMaTech is a Horizon 2020 Future Emerging Technologies project co-funded by the European Commission, with the support of industrial partners ASG, Oxford Instruments, TESLA, SIGMAPHI, ELLYT Energy and BILFINGER, and academic partners CERN, CEA, STFC, KIT, PSI and CNRS. It aims to strengthen the field of superconductivity for projects such as the High-Luminosity LHC and the Future Circular Collider, while demonstrating the benefits of this investment to society at large.

“The need to develop higher performing magnets for future accelerators is certain, and cooperation will be essential,” said Han Dols of CERN’s knowledge transfer group. “The workshop helps reiterate common areas of interest between academia and industry, and how they might benefit from each other’s know-how. And just as importantly,” continued Dols, “FuSuMaTech is seeking to demonstrate the benefits of this investment by setting up demonstrator projects.”

The successful preparation of 10 project proposals for both R&D actions and industrial applications is one of the main achievements of FuSuMaTech phase 1, noted project coordinator Antoine Dael. These projects include new designs for MRI gradient coils, the design of 14 and 16 T MRI magnets, and a conceptual design for new mammography magnets. The proposals also cover new developments, such as a hybrid low–high-temperature-superconductor magnet, an e-infrastructure for collecting material properties and a pulsed-heat-pipe cooling system.

In phase 2 of FuSuMaTech, launched with the signing of a declaration of intention between the FuSuMaTech partners on 1 April, the 10 project proposals prepared during phase 1 will evolve into independent projects and make use of other European Union programmes. “We were really impressed with the interest we got from organisations outside of the project,” said Dael. “We currently have six industrial partners, two more have already contacted us today, and we expect others.”

Accelerator community comes together in Melbourne

More than 1100 accelerator professionals gathered in Melbourne, Australia, from 19 to 24 May 2019 for the 10th International Particle Accelerator Conference, IPAC’19. The superb Melbourne Convention and Exhibition Centre could easily cater for the 85 scientific talks, 72 industrial exhibitors and sponsors, 1444 poster presentations and several social functions throughout the week. Record levels of diversity at IPAC’19 saw 42 countries represented from six continents, and a relatively high gender balance for the field, with a quarter of speakers identifying as women.

In the wake of the update of the European Strategy for Particle Physics in Granada in May, accelerator designs that extend the energy and intensity reach of a next-generation discovery machine were discussed, though no clear preference emerged: it will be up to the particle-physics community to decide which capabilities are needed to reach the most interesting physics. Reports on mature hadron facilities such as Japan’s J-PARC and the LHC were balanced by talks on the photon sources and electron accelerators that are becoming an increasingly robust presence at IPAC, and which comprised a fifth of contributions in 2019. Presentations on the most recently commissioned accelerators were a particular highlight, with Japan’s SuperKEKB collider, Korea’s PAL-XFEL free-electron laser and Sweden’s MAX IV light source taking centre stage.

Exciting progress in the field of plasma-wakefield accelerators was also reported. In particular, Europe’s EuPRAXIA collaboration is aiming to create a laser-wakefield accelerator to drive a free-electron laser facility for users in the next few years. The scientific programme was bookended by Australian-grown talent. Suzie Sheehy from the University of Melbourne described the successes of particle accelerators and some of the future challenges, while Henry Chapman, a director of the Center for Free-Electron Laser Science at DESY and the University of Hamburg, gave the closing plenary on how particle accelerators have enabled groundbreaking work in coherent X-ray science.

“In Unity” was chosen as the theme for IPAC’19, and art was commissioned from Torres Strait Islander Kelly Saylor to symbolise this coming together of the particle-accelerator community. The success of IPAC’19 demonstrates the ongoing need for face-to-face meetings to share ideas and collaborate on pressing scientific problems. In a pioneering effort for the IPAC series, the opening and closing sessions were live-streamed to the world, with the aim of broadening the impact of the conference and highlighting the importance of particle accelerators to science, industry and medicine.

Student poster prizes were won by Nazanin Samadi, an Iranian PhD student at the University of Saskatchewan, Canada, and Daniel Bafia of Fermilab and IIT. Among other awards, the Xie Jialin Prize went to Vittorio Vaccaro of the University of Naples, the Nishikawa Tetsuji Prize was won by Vladimir Shiltsev of Fermilab, the Hogil Kim Prize went to Xueqing Yan of Peking University, and the Mark Oliphant Prize was taken by Stanford PhD student James MacArthur.

IPAC takes place annually and alternates between Asia, Europe and the Americas. Next year it will move to Caen in France, and then to Brazil in 2021.

Topological avatars of new physics

Topologically non-trivial solutions of quantum field theory have always been a theoretically “elegant” subject, covering all sorts of interesting and physically relevant field configurations, such as magnetic monopoles, sphalerons and black holes. These objects have played an important role in shaping quantum field theories and have provided important physical insights into cosmology, particle colliders and condensed-matter physics.

In layman’s terms, a field configuration is topologically non-trivial if it exhibits the topology of a “mathematical knot” in some space, real or otherwise. A mathematical knot (or a higher-dimensional generalisation such as a Möbius strip) is not like a regular knot in a piece of string: it has no ends and cannot be continuously deformed into a topologically trivial configuration like a circle or a sphere.

One of the most conceptually simple non-trivial configurations arises in the classification of solitons, which are finite-energy extended configurations of a scalar field behaving like the Higgs field. Among the various finite-energy classical solutions for the Higgs field, there are some that cannot be continuously deformed into the vacuum without an infinite cost in energy, and are therefore “stable”. For finite-energy configurations that are spherically symmetric, the Higgs field must map smoothly onto its vacuum solution at the boundary of space.
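
Schematically – in standard notation that does not appear in the article itself – finiteness of the energy forces the field at spatial infinity to take values in the vacuum manifold, so every such configuration defines a map from the sphere at infinity into that manifold:

\[ \Phi\big|_{r\to\infty}:\; S^2_\infty \;\longrightarrow\; \mathcal{M}_\mathrm{vac} = G/H, \qquad \text{classified by the homotopy group } \pi_2(G/H). \]

A configuration in a non-trivial homotopy class – for example $\pi_2\!\left(SU(2)/U(1)\right)\cong\mathbb{Z}$, the case relevant to the ’t Hooft–Polyakov monopole below – cannot be deformed continuously into the constant vacuum map, which is the precise sense in which such solutions are stable.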

The ’t Hooft–Polyakov monopole, which is predicted to exist in grand unified theories, is one such finite-energy topologically non-trivial solitonic configuration. The black hole is an example from general relativity of a singular space–time configuration with a non-trivial space–time topology. The curvature of space–time blows up at the singularity at the centre, and this cannot be removed either by continuous deformations or by coordinate changes: its nature is topological.

Such configurations constituted the main theme of a recent Royal Society Hooke meeting “Topological avatars of new physics”, which took place in London from 4–5 March. The meeting focused on theoretical modelling and experimental searches for topologically important solutions of relativistic quantum field theories in particle physics, general relativity and cosmology, and quantum gravity. Of particular interest were topological objects that could potentially be detectable at the Large Hadron Collider (LHC), or at future colliders.

Gerard ’t Hooft opened the scientific proceedings with an inspiring talk on formulating a black hole in a way consistent with quantum mechanics and time-reversal symmetry, before Steven Giddings described his equally interesting proposal. Another highlight was Nicholas Manton’s talk on the inevitability of topologically non-trivial unstable configurations of the Higgs field – “sphalerons” – in the Standard Model. Henry Tye argued that sphalerons could in principle be produced at the (upgraded) LHC or at future linear colliders. A contrasting view was taken by Sergei Demidov, who predicted that their production will be strongly suppressed at colliders.

A major part of the workshop was devoted to monopoles. The theoretical framework of light monopoles within the Standard Model, possibly producible at the LHC, was presented by Yong Min Cho. These “electroweak” monopoles have twice the magnetic charge of Dirac monopoles. Like the ’t Hooft–Polyakov monopole, but unlike the Dirac monopole, they are solitonic structures, with the Higgs field playing a crucial role. Arttu Rajantie considered relatively unsuppressed thermal production of generic monopole–antimonopole pairs in the extreme temperatures and strong magnetic fields of heavy-ion collisions at the LHC. David Tong discussed the ambiguities in the gauge group of the Standard Model, and how these could affect which monopoles are admissible solutions of such gauge field theories. Importantly, such solutions give rise to potentially observable phenomena at the LHC and at future colliders. Ana Achúcarro and Tanmay Vachaspati reported on fascinating computer simulations of monopole scattering, as well as numerical studies of cosmic strings and other topologically non-trivial defects of relevance to cosmology.

One of the exemplars of topological physics currently receiving significant experimental attention is the magnetic monopole. The MoEDAL experiment at the LHC has reported world-leading limits on multiply magnetically charged monopoles, and Albert de Roeck gave a wide-ranging report on the search for the monopole and other highly ionising particles, with Laura Patrizii and Adrian Bevan also reporting on these searches and the machine-learning techniques employed in them.

Supersymmetric scenarios can consistently accommodate all the aforementioned topologically non-trivial field-theory configurations. Doubtless, as John Ellis described, the story of the search for this beautiful – but as yet hypothetical – new symmetry of nature is a long way from being over. Last but not least were two inspiring talks by Juan García-Bellido and Marc Kamionkowski on the role of primordial black holes as dark matter, and their potential detection by means of gravitational waves.

The workshop ended with a vivid round-table discussion of the importance of a new ~100 TeV collider. The aim of this machine is to explore beyond the historic watershed represented by the discovery of the Higgs boson, and to move us closer to understanding the origin of elementary particles, and indeed space–time itself. This Hooke workshop clearly demonstrated the importance of topological avatars of new physics to such a project.

Niobium-tin cavities for smaller accelerators

A team at Cornell University in the US has demonstrated that high-frequency superconducting radio-frequency (SRF) cavities made from niobium–tin alloy can be operated more efficiently than conventional niobium designs, representing a step towards smaller and more economical particle accelerators.

SRF cavities are the gold standard for the acceleration of charged-particle beams and are used, for example, in the LHC at CERN and the upcoming LCLS-II free-electron-laser X-ray source at SLAC. Currently, the material of choice for the best accelerating cavities is niobium, which typically has to be operated at a temperature of around 2 K and therefore requires costly cryogenic equipment to cool the cavity in a bath of superfluid liquid helium. Owing to this complexity and cost, the technology is heavily used only at large-scale accelerators, not at smaller institutions or in industry.

Researchers around the world are striving to remove some of the barriers prohibiting broader uptake of SRF technology. Two major obstacles still need to be overcome to make this possible: the temperature of operation, and the size of the cavity.

Earlier this year, a team at Cornell led by Matthias Liepe demonstrated that small, high-frequency triniobium-tin (Nb3Sn) cavities can be operated very efficiently at a temperature of 4.2 K. While seemingly only slightly warmer than the 2 K required by niobium cavities, this small rise in temperature removes the need for superfluid-helium refrigeration.

The size of the cavity is inversely related to the frequency of the oscillating radio-frequency electromagnetic field within it: as the frequency doubles, the necessary transverse size of the cavity is halved. A smaller cavity with a higher frequency also demands a smaller cryomodule; what was once 1 m in diameter, the typical size of an accelerating SRF cryomodule, can now be roughly half that size.
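
As a rough illustration of this scaling – an idealised pillbox-cavity estimate with textbook formulae, not the actual Cornell cavity geometry – the radius of a cylindrical cavity resonating in its fundamental accelerating (TM010) mode scales inversely with frequency:

    import math

    J01 = 2.40483        # first zero of the Bessel function J0
    C   = 2.998e8        # speed of light, m/s

    def pillbox_radius(freq_hz):
        # Ideal cylindrical "pillbox" cavity in its TM010 mode: f = J01*c/(2*pi*R)
        return J01 * C / (2 * math.pi * freq_hz)

    for f in (1.3e9, 2.6e9):
        print(f"{f/1e9:.1f} GHz -> radius ~ {100 * pillbox_radius(f):.1f} cm")
    # 1.3 GHz -> radius ~ 8.8 cm, 2.6 GHz -> radius ~ 4.4 cm

Doubling the frequency halves the transverse size, which is what lets the surrounding cryomodule shrink as described above.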

The vast majority of SRF cavities currently in use operate at frequencies of 1.5 GHz and below – a region favoured because RF power losses in a superconductor rapidly decrease at lower frequency. But this results in large SRF accelerating structures. Cornell graduate student Ryan Porter successfully made and tested a considerably smaller proof-of-principle Nb3Sn cavity at 2.6 GHz with promising results. “Niobium cannot operate efficiently at 2.6 GHz and 4.2 K,” Porter explains. “But the performance of this 2.6 GHz Nb3Sn cavity was just as good as the 1.3 GHz performance. Compared to a niobium cavity at the same temperature and frequency, it was 50 times more efficient.”

“This is really the first step that shows that you can get good 4.2 K performance at high frequency, and it is quite promising,” adds Liepe. “The dream is to have an SRF accelerator that can fit on top of the table.”

Clocking the merger of two white dwarfs

A never-before-seen object with a cataclysmic past has been spotted in the constellation Cassiopeia, about 10,000 light years away. The star-like object has a temperature of 200,000 K, shines 40,000 times brighter than the Sun and is ejecting matter with velocities up to 16,000 km s⁻¹. In combination with the chemical composition of the surrounding nebula, the data indicate that it is the result of the merger of two dead stars.

Astronomers from the universities of Bonn and Moscow detected the unusual object while searching for circumstellar nebulae in data from NASA’s Wide-Field Infrared Survey Explorer satellite. Memorably named J005311, and measuring about five light years across, it barely emits any optical light and radiates almost exclusively in the infrared. Additionally, the matter it ejects consists mostly of oxygen and shows no sign of hydrogen or helium, the two most abundant elements in the universe. All this makes it unlike a normal massive star and more in line with a white dwarf.

White dwarfs are “dead stars” that remain when typical stars have used up all of their hydrogen and helium fuel, at which point the oxygen- and carbon-rich star collapses in on itself to form a high-mass, Earth-sized object. The white dwarf is kept from further collapse into a neutron star only by the electron degeneracy pressure of the elements in its core, and its temperature is too low to enable further fusion. However, if the mass of the white dwarf increases, for example if it accretes matter from a nearby companion star, it can become hot enough to restart the fusion of carbon into heavier elements. This process is so violent that the radiation pressure it produces blows the star apart. Such “type Ia” supernovae are observed frequently and, since they are unleashed when a white dwarf reaches a very specific mass, they have a standard brightness that can be used to measure cosmic distances.

Yet no ordinary white dwarf can burn as brightly as J005311, despite the object’s white-dwarf-like chemical signature. By comparing the characteristics of J005311 with models of what happens when two white dwarfs merge, however, the explanation falls into place. As two white dwarfs, likely produced billions of years ago, orbited one another, they slowly lost orbital energy through the emission of gravitational waves. Over time, the objects came so close to each other that they merged. This would commonly be expected to produce a type Ia supernova, but there are also models in which carbon is ignited in a more subtle way during the merging process, allowing fusion to restart without blowing the newly formed object apart. J005311’s detection appears to indicate that those models are correct, marking the first observation of a white-dwarf merger.

The rejuvenated star is, however, not expected to live for long. Based on the models, it will burn through its remaining fuel within 10,000 years or so, forming a core of iron that is set to collapse into a neutron star in a violent event accompanied by a flash of neutrinos and possibly a gamma-ray burst. Using the speed of the ejected material and the distance it has now reached from the star, it can be calculated that the merger took place about 16,000 years ago, meaning that this final collapse is not far away.

How many protons collided in ATLAS in Run 2?

The large amount of Run-2 data (collected in 2015–2018) allows the LHC experiments to probe previously unexplored rare processes, search for new physics and improve Standard Model measurements. The amount of data collected in Run 2 can be quantified by the integrated luminosity – a number which, when multiplied by the cross section for a given process, yields the expected number of interactions of that type. It is a crucial figure: the uncertainty of several ATLAS Run-1 cross-section measurements, particularly of W and Z production, was dominated by the systematic uncertainty on the integrated luminosity. To minimise this, ATLAS performs precise absolute and relative calibrations of several luminosity-sensitive detector systems in a three-step procedure.

The first step is an absolute calibration of the luminosity using a van der Meer beam-separation scan under specialised beam conditions. By displacing the beams horizontally and vertically and scanning them through each other, it is possible to measure the combined transverse size of the colliding proton bunches. Combined with the total number of protons in each colliding bunch, determined from measurements of the beam currents, this yields the absolute luminosity of each colliding bunch pair. Relating this to the mean number of interactions observed in the LUCID-2 detector – a set of photomultiplier tubes located 17 m in either direction along the beam pipe, which detect the Cherenkov light of particles coming from the interaction point – sets the scale for the absolute luminosity measurement of LUCID-2.
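
A minimal numerical sketch of the underlying van der Meer relation is given below; the bunch intensities and scan widths are illustrative assumptions, not ATLAS values.

    import math

    F_REV   = 11245.0    # LHC revolution frequency, Hz
    N1 = N2 = 0.8e11     # protons per colliding bunch (assumed)
    SIGMA_X = 120e-6     # horizontal convolved beam size from the scan, m (assumed)
    SIGMA_Y = 120e-6     # vertical convolved beam size from the scan, m (assumed)

    # Absolute luminosity of one colliding bunch pair from the beam currents
    # and the scanned overlap widths.
    lumi_bunch = F_REV * N1 * N2 / (2 * math.pi * SIGMA_X * SIGMA_Y)  # m^-2 s^-1
    print(f"per-bunch luminosity ~ {lumi_bunch * 1e-4:.1e} cm^-2 s^-1")

Relating this luminosity to the mean number of interactions recorded by LUCID-2 during the same scan then fixes the detector’s effective (visible) cross section, i.e. its absolute calibration.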

The second step is to extrapolate this calibration to LHC physics conditions, where the number of interactions increases from fewer than one to around 20–50 interactions per crossing, and the pattern of proton bunches changes from isolated bunches to trains of consecutive bunches with 25 ns spacing. The LUCID-2 response is sensitive to these differences. It is corrected with the help of a track counting algorithm, which relates the number of interactions to the number of tracks reconstructed in ATLAS’s inner detector.

The final step is to monitor the stability of the LUCID-2 calibration over time. This is evaluated by comparing the luminosity estimate of LUCID-2 to those from track counting in the inner detector and various ATLAS calorimeters over the course of the data-taking year (figure 1). The agreement between detectors quantifies the stability of the LUCID-2 response.

Using this three-step method and taking into account correlations between years, ATLAS has obtained a preliminary uncertainty on the luminosity estimate for the combined Run-2 data of 1.7%, improving slightly on the Run-1 precisions of 1.8% at 7 TeV and 1.9% at 8 TeV. The full 13 TeV Run-2 data sample corresponds to an integrated luminosity of 139 fb⁻¹ – about 1.1 × 10¹⁶ proton collisions.
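
A back-of-the-envelope check of that collision count, assuming an inelastic proton–proton cross section of roughly 80 mb at 13 TeV (an approximate value not quoted above):

    # expected interactions = integrated luminosity x cross section
    lumi_int   = 139e15   # 139 fb^-1 expressed in inverse barns (1 fb^-1 = 1e15 b^-1)
    sigma_inel = 80e-3    # inelastic pp cross section, ~80 mb in barns (assumed)
    print(f"{lumi_int * sigma_inel:.1e} inelastic pp collisions")  # ~1.1e16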

New constraints on charm–quark hadronisation

One of the most useful ways to understand the properties of the quark–gluon plasma (QGP) formed in relativistic heavy-ion collisions is to study how various probes interact when propagating through it. Heavier quarks, such as charm, can provide unique insights as they are produced early in the collisions, and their interactions with the QGP differ from their lighter cousins. One important input to these studies is a detailed understanding of hadronisation, by which quarks form experimentally detectable mesons and baryons.

The lightest charm baryon and meson are the Λc+ (udc) and the D0 (cū). In proton–proton (pp) collisions, charm hadrons are formed by fragmentation, in which charm quarks and antiquarks move away from each other and combine with newly generated quarks. In heavy-ion collisions, hadron production can also occur via “coalescence”, whereby charm quarks combine with other quarks while traversing the QGP. The contribution of coalescence depends strongly on the transverse momentum (pT) of the hadrons, and is expected to be much more significant for charm baryons than for charm mesons, as they contain more quarks.

The CMS experiment has recently determined the Λc+/D0 yield ratio over a broad range of pT using the Λc+ → pK−π+ and D0 → K−π+ decay channels in both pp and lead–lead (PbPb) collisions, at a nucleon–nucleon centre-of-mass energy of 5.02 TeV. Comparing the behaviour of the Λc+/D0 ratio in different collision systems allows physicists to study the relative contributions of fragmentation and coalescence.

The measured Λc+/D0 production cross-section ratio in pp collisions (figure 1) is found to be significantly larger than that calculated with the standard version of the popular Monte Carlo event generator PYTHIA, while the inclusion of an improved description of the fragmentation (“PYTHIA8+CR”) better describes the CMS data. The data can also be reasonably described by a different model that includes Λc+ baryons produced in the decays of excited charm baryons (dashed line). However, an attempt to incorporate the coalescence process characteristic of hadron production in heavy-ion collisions (solid line) fails to reproduce the pp-collision measurements.

The CMS collaboration also measured Λc+ production in PbPb collisions. The Λc+/D0 production ratio for pT > 10 GeV/c is found to be consistent with that from pp collisions. This similarity suggests that the coalescence process does not contribute significantly to charm-hadron production in this pT range in PbPb collisions. These are the first measurements of the ratios at high pT for both the pp and PbPb systems at a nucleon–nucleon centre-of-mass energy of 5.02 TeV.

In late 2018, CMS collected data corresponding to about 10 times more PbPb collisions than were used in the current measurement. These will shed new light on the interplay between the different processes in charm–quark hadronisation in heavy-ion collisions. In the meantime, the current results highlight the lack of understanding of charm–quark hadronisation in pp collisions, a subject that requires further experimental measurements and theoretical studies.

Studying neutron stars in the laboratory

A report from the ALICE experiment

Neutron stars consist of extremely dense nuclear matter. Their maximum size and mass are determined by their equation of state, which in turn depends on the interaction potentials between nucleons. Due to the high density, not only neutrons but also heavier strange baryons may play a role.

The main experimental information on the interaction potentials between nucleons and strange baryons comes from bubble-chamber scattering experiments with strange-hadron beams undertaken at CERN in the 1960s, and is limited in precision due to the short lifetimes (< 200 ps) of the hadrons. The ALICE collaboration is now using the scattering between particles produced in collisions at the LHC to constrain interaction potentials in a new way. So far, pK, pΛ, pΣ0, pΞ and pΩ interactions have been investigated. Recent data have already yielded the first evidence for a strong attractive interaction between the proton and the Ξ baryon.

Strong final-state interactions between pairs of particles make their momenta more parallel to each other in the case of an attractive interaction, and increase the opening angle between them in the case of a repulsive interaction. The attractive potential of the p-Ξ interaction was observed by measuring the correlation of pairs of protons and Ξ particles as a function of their relative momentum (the correlation function) and comparing it with theoretical calculations based on different interaction potentials. This technique is referred to as “femtoscopy” since it simultaneously measures the size of the region in which particles are produced and the interaction potential between them.
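
Schematically, the measured correlation function combines both ingredients in a single expression – the standard femtoscopy (Koonin–Pratt) relation, written here in notation that does not appear in the article:

\[ C(k^*) \;=\; \int \mathrm{d}^3 r \; S(\vec r\,)\,\bigl|\psi(\vec k^*\!,\vec r\,)\bigr|^2 , \]

where $S(\vec r\,)$ describes the size of the particle-emitting source and the relative wavefunction $\psi$ encodes the Coulomb and strong interactions between the pair, so fitting $C(k^*)$ constrains both at once.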

Data from proton–lead collisions at a centre-of-mass energy per nucleon pair of 5.02 TeV show that p-Ξ pairs are produced at very small distances (~1.4 fm); the measured correlation is therefore sensitive to the short-range strong interaction. The measured p-Ξ correlations were found to be stronger than theoretical correlation functions with only a Coulomb interaction, whereas the prediction obtained by including both the Coulomb and strong interactions (as calculated by the HAL-QCD collaboration) agrees with the data (figure 1).

As a first step towards evaluating the impact of these results on models of neutron-star matter, the HAL-QCD interaction potential was used to compute the single-particle potential of Ξ within neutron-rich matter. A slightly repulsive interaction was inferred (of the order of 6 MeV, compared to the 1322 MeV mass of the Ξ), leading to better constraints on the equation of state for dense hadronic systems that contain Ξ particles. This is an important step towards determining the equation of state for dense and cold nuclear matter with strange hadrons.

Three-body B+ decays violate CP

New sources of CP violation (CPV) are needed to explain the absence of antimatter in our matter-dominated universe. The LHCb collaboration has reported new results describing CPV in B+ → π+K+K− and B+ → π+π+π− decays. Until very recently, all observations of CPV in B mesons were made in two-body and quasi-two-body decays; however, it has long been conjectured that the complex dynamics of multi-body decays could give rise to other manifestations. For CPV to occur in B decays, competing decay amplitudes with different weak phases (which change sign under CP) and strong phases (which do not) are required. The weak-phase differences are tied to fundamental parameters of the Standard Model (SM), while the strong-phase differences can arise from loop-diagram contributions, final-state re-scattering effects and phases associated with intermediate resonant structure.
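
The textbook way to see this requirement – written with generic symbols rather than the specific amplitudes fitted by LHCb – is to let two amplitudes with magnitudes $a_{1,2}$, strong phases $\delta_{1,2}$ and weak phases $\phi_{1,2}$ interfere:

\[ A = a_1 e^{i(\delta_1+\phi_1)} + a_2 e^{i(\delta_2+\phi_2)}, \qquad \bar{A} = a_1 e^{i(\delta_1-\phi_1)} + a_2 e^{i(\delta_2-\phi_2)}, \]

\[ A_{CP} = \frac{|\bar{A}|^2-|A|^2}{|\bar{A}|^2+|A|^2} = \frac{2\,a_1 a_2 \sin(\delta_1-\delta_2)\sin(\phi_1-\phi_2)}{a_1^2+a_2^2+2\,a_1 a_2 \cos(\delta_1-\delta_2)\cos(\phi_1-\phi_2)}, \]

which vanishes unless both the strong-phase and the weak-phase differences are non-zero.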

The three-body B decays under study proceed mainly via various intermediate resonances – effectively, a cascade of two-body decays – but also include contributions from non-resonant three-body interactions. The phase space is two-dimensional (it can be fully described by two kinematic variables) and its size allows a rich tapestry of resonant structures to emerge, bringing quantum-mechanical interference into play. Much as in Young’s double-slit experiment, the total amplitude comprises the sum of all possible decay paths. The interference pattern and its phase variation could contribute to CPV in regions where resonances overlap.

One of the most intriguing LHCb results was the 2014 observation of large CPV effects in certain phase-space regions of B+ → π+K+K− and B+ → π+π+π− decays. In the new analysis, these effects are described with explicit amplitude models for the first time (figure 1). A crucial step in the phenomenological description of these amplitudes is to include unitarity-conserving couplings between final states, most notably ππ and KK. Accounting for these is essential to accurately model the complex S-wave component of the decays, which is the configuration where there is no relative angular momentum between a pair of oppositely charged final-state particles, and which contains broad resonances that are difficult to model. Three complementary approaches were deployed to describe the complicated spin-0 S-wave component of the B+ → π+π+π− decay: the classical isobar model, which explicitly associates a line-shape with a clear physical interpretation to each contribution in the phase space; the K-matrix method, which takes data from scattering experiments as an input; and finally a quasi-model-independent approach, in which the S-wave magnitude and phase are extracted directly from the data.

LHCb’s amplitude analyses of these decays are based on data from Run 1 of the LHC and contain several groundbreaking results, including the largest CP asymmetry in a single component of an amplitude analysis, found in the ππ ↔ KK re-scattering amplitude; the first observation of CPV in the interference between intermediate states, seen in the overlap between the dominant spin-1 ρ(770)0 resonance and the π+π− S-wave; and the first observation of CPV involving a spin-2 resonance of any kind, found in the decay B+ → f2(1270)π+. These results provide significant new insights into how CPV in the SM manifests in practice, and motivate further study, particularly of the strong-phase-generating QCD processes that govern CP violation.

Kilogram joins the ranks of reproducible units

On 20 May, 144 years after the signing of the Metre Convention in 1875, the kilogram was given a new definition based on Planck’s constant, h. Long tied to the International Prototype of the Kilogram (IPK) – a platinum–iridium cylinder kept in Paris – the kilogram was the last SI base unit still defined by a human-made artefact rather than by fundamental constants or atomic properties.

The dimensions of h are m² kg s⁻¹. Since the second and the metre are defined in terms of a hyperfine transition in caesium-133 and the speed of light, knowledge of h allows the kilogram to be set without reference to the IPK.

Measuring h to a suitably high precision of 10 parts per billion required decades of work by international teams across continents. In 1975 British physicist Bryan Kibble proposed a device, then known as a watt balance and now renamed the Kibble balance in his honour, which linked h to the unit of mass. A coil is placed inside a precisely calibrated magnetic field and a current driven through it such that an electromagnetic force on the coil counterbalances the force of gravity. The experiment is then repeated thousands of times over a period of months in multiple locations. The precision required is such that the strength of the gravitational field, which varies across the laboratory, must be measured before each trial.
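
In schematic form – leaving out the alignment and correction terms of a real instrument – the balance combines two modes of operation so that the unmeasurable field-geometry factor $B\ell$ cancels:

\[ \text{weighing mode: } m g = B\ell\, I, \qquad \text{moving mode: } U = B\ell\, v \;\;\Rightarrow\;\; m g v = U I . \]

The voltage $U$ and current $I$ are measured against the Josephson and quantum-Hall effects, whose constants $K_J = 2e/h$ and $R_K = h/e^2$ involve Planck’s constant, which is how the purely mechanical quantity $mgv$ becomes tied to $h$.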

Once the required precision was achieved, the value of h could be fixed and the definitions inverted, removing the kilogram’s dependence on the IPK. Following several years of deliberations, the new definition was formally adopted at the 26th General Conference on Weights and Measures in November last year. The 2019 redefinition of the SI base units came into force in May, and also sees the ampere, kelvin and mole redefined by fixing the numerical values for the elementary electric charge, the Boltzmann constant and the Avogadro constant, respectively.

“The revised SI future-proofs our measurement system so that we are ready for all future technological and scientific advances such as 5G networks, quantum technologies and other innovations that we are yet to imagine,” says Richard Brown, head of metrology at the UK’s National Physical Laboratory.

But the SI changes are controversial in some quarters. While heralding the new definition of the kilogram as “huge progress”, CNRS research director Pierre Fayet warns of possible pitfalls of fixing the value of the elementary charge: the vacuum magnetic permeability (μ0) then becomes an unfixed parameter to be measured experimentally, with the electrical units becoming dependent on the fine-structure constant. “It appears to me as a conceptual weakness of the new definitions of electrical units, even if it does not have consequences for their practical use,” says Fayet.

One way out of this, he suggests, is to embed the new SI system within a larger framework in which c = ħ = μ0 = ε0 = 1, thereby fixing the vacuum magnetic permeability and other characteristics of the vacuum (C. R. Physique 20 33). This would allow all the units to be expressed in terms of the second, with the metre and joule identified as fixed numbers of seconds and reciprocal seconds, respectively. While likely attractive to high-energy physicists, however, Fayet accepts that it may be some time before such a proposal could be accepted.
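
As a rough numerical illustration of that last statement (values rounded):

\[ 1\,\mathrm{m} = \tfrac{1}{299\,792\,458}\,\mathrm{s} \;\;(\text{from } c=1), \qquad 1\,\mathrm{J} = \tfrac{1\,\mathrm{J}}{\hbar} \approx 9.5\times 10^{33}\,\mathrm{s}^{-1} \;\;(\text{from } \hbar=1). \]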
