Around 100 researchers, academics and industry delegates joined a co-innovation workshop in Liverpool, UK, on 22 March to discuss the strategic R&D programme for a Future Circular Collider (FCC) and associated benefits for industry. Motivated by the FCC study, the aim of the event was to identify joint R&D opportunities across accelerator projects and disciplines.
New particle colliders provide industry with powerful test-beds with a high publicity factor. Well-controlled environments allow novel technologies and processes to be piloted, and SMEs are ideal partners to bring these technologies – which include superconducting magnets, cryogenics, civil engineering, detector development, energy efficiency and novel materials and material processing techniques – to maturity.
Short talks about FCC-related areas for innovation, examples of successful technology-transfer projects at CERN, as well as current and future funding opportunities stimulated interesting discussions. Several areas were identified as bases for co-innovation, including resource-efficient tunnelling, the transfer of bespoke machine-learning techniques from particle physics to industry, detector R&D, cooling and data handling. The notes from all the working groups will be used to establish joint funding bids between participants.
The co-innovation workshop was part of a bigger event, “Particle Colliders – Accelerating Innovation”, which was devoted to the benefits of fundamental science to society and industry, co-hosted by the University of Liverpool and CERN together with partners from the FCC and H2020 EuroCirCol projects, and supported by EU-funded MSCA training networks. Almost 1000 researchers and industrialists from across Europe, including university and high-school students, took part. An industry exhibition allowed more than 60 high-tech companies to showcase their latest products, also serving university students as a careers fair, and more than a dozen different outreach activities were available to younger students.
A separate event, held at CERN on 4 and 5 March, reviewed the FCC physics capabilities following the publication of the FCC conceptual design report in January (CERN Courier January/February 2019 p8). The FCC study envisages the construction of a new 100 km-circumference tunnel at CERN hosting an intensity-frontier lepton collider (FCC-ee) as a first step, followed by an energy-frontier hadron machine (FCC-hh). The programme offers substantial and model-independent studies of the Higgs boson by extending the range of measurable Higgs properties to its total width and self-coupling. Moreover, the combination of superior precision and energy reach allows a complementary mix of indirect and direct probes of new physics. For example, FCC-ee would enable the Higgs coupling to the Z boson to be measured with an accuracy better than 0.17%, while FCC-hh would determine the ttH coupling in a model-independent way to better than 1%.
Physics discussions were accompanied by a status report of the overall FCC project, reviewing the technological challenges for both accelerator and detectors, the project implementation strategy, and cost estimates. Construction of the more technologically ready FCC-ee could start by 2028, delivering first physics beams a decade later, right after the end of the HL-LHC programme. Another important aspect of the two-day meeting was the need for further improving theoretical predictions to match the huge step in experimental precision possible at the FCC.
Planning now for a 70-year-long programme may sound a remote goal. However, as Alain Blondel of the University of Geneva remarked in the concluding talk of the conference, the first report on the LHC dates back more than 40 years. “Progress in knowledge has no price,” he said. “The FCC sets ambitious but feasible goals for the global community resembling previous leaps in the long history of our field.”
2018 marked the 100th anniversary of the birth of Richard Feynman. As one of several events worldwide celebrating this remarkable figure in physics, a memorial conference was held at the Institute of Advanced Studies at Nanyang Technological University in Singapore from 22 to 24 October, co-chaired by Lars Brink, KK Phua and Frank Wilczek. The format was one-hour talks followed by 45-minute discussions.
Pierre Ramond began the conference with anecdotes from his time as Feynman’s next-door neighbour at Caltech. He discussed Feynman the MIT undergraduate, his first paper and his work at Princeton as a graduate student. There, Feynman learnt about Dirac’s idea of summing over histories from Herbert Jehle. When Jehle asked him about it a few days later, Feynman said that he had understood it and had derived the Schrödinger equation from it. Feynman’s adviser, John Wheeler, was toying with the idea of a single electron travelling back and forth in time – were you to look at a slice of time you would observe many electrons and positrons. After his spell at Los Alamos, this led Feynman to the idea of the propagator, which considers antiparticles propagating backwards in time as well as particles propagating forwards. These ideas would soon underpin the quantum description of electromagnetism – QED – for which Feynman shared the 1965 Nobel Prize in Physics with Tomonaga and Schwinger.
Revolutionary diagrams
The propagator was the key to the eponymous diagrams Feynman then formulated to compute the Lamb shift and other quantities. At the Singapore conference, Lance Dixon exposed how Feynman diagrams revolutionised the calculation of scattering amplitudes. He offered as an example the calculation of the anomalous magnetic moment of the electron, which has now reached five-loop precision and includes 12,672 diagrams. Dixon also discussed the importance of Feynman’s parton picture for understanding deep-inelastic scattering, and the staggeringly complex calculations required to understand data at the LHC.
George Zweig, the most famous of Feynman’s students, and the inventor of “aces” as the fundamental constituents of matter, gave a vivid talk, recounting that it took a long time to convince a sceptical Feynman about them. He described life in the shadows of the great man as a graduate student at Caltech in the 1960s. At that time Feynman wanted to solve quantum gravity, and was giving a course on the subject of gravitation. He asked the students to suppose that Einstein had never lived: how would particle physicists discuss gravity? He quickly explained that there must be a spin-two particle mediating the force; by the second lecture he had computed the precession of the perihelion of Mercury, a juncture that other courses took months to arrive at. Zweig recounted that Feynman’s failure to invent a renormalisable theory of quantum gravity affected him for many years. Though he did not succeed, his insights continue to resound today. As Ramond earlier explained, Feynman’s contribution to a conference in Chapel Hill in 1957, his first public intervention on the subject, is now seen as the starting point for discussions on how to measure gravitational waves.
Cristiane Morais-Smith spoke on Feynman’s path integrals, comparing Hamiltonian and Lagrangian formulations, and showing their importance in perturbative QED. Michael Creutz, the son of one of Feynman’s colleagues at Princeton and Los Alamos, showed how the path integral is also necessary to be able to work on the inherently non-perturbative theory of quantum chromodynamics. Morais-Smith went on to illustrate how Feynman’s path integrals now have a plethora of applications outside particle physics, from graphene to quantum Brownian motion and dissipative quantum tunnelling. Indeed, the conference did not neglect Feynman’s famous interventions outside particle physics. Frank Wilczek recounted Feynman’s famous insight that there is plenty of room at the bottom, telling of his legendary after-dinner talk in 1959 that foreshadowed many developments in nanotechnology. Wilczek concluded that there is plenty of room left in Hilbert space, describing entanglement, quantum cryptography, quantum computation and quantum simulations. Quantum computing is the last subject that Feynman worked hard on. Artur Ekert described the famous conference at MIT in 1981 when Feynman first talked about the subject. His paper from this occasion “Simulating Physics with Computers” was the first paper on quantum computers and set the ground for the present developments.
Biology hangout
Feynman was also interested in biology for a long time. Curtis Callan painted a picture of Feynman “hanging out” in Max Delbrück’s laboratory at Caltech, even taking a sabbatical at the beginning of the 1960s to work there, exploring the molecular workings of heredity. In 1969 he gave the famous Hughes Aerospace lectures, offering a grand overview of biology and chemistry – but this was also the time of the parton model, and somehow that interest took over.
Robbert Dijkgraaf spoke about the interplay between art and science in Feynman’s life and thinking. He pointed out how important beauty is, not only in nature, but also in mathematics, for instance whether one uses a geometric or algebraic approach. Another moving moment of this wide-ranging celebration of Feynman’s life and physics was Michelle Feynman’s words about growing up with her father. She showed him both as a family man and also as a scientist, sharing his enthusiasm for so many things in life.
Recordings of the presentations are available online.
The 54th Rencontres de Moriond, held in La Thuile, Italy, took place from 16 to 30 March, with the first week devoted to electroweak interactions and unified theories, and the second week to QCD and high-energy interactions. More than 200 physicists took part, presenting new results ranging from precision Standard Model (SM) measurements to new exotic quark states, flavour physics and the dark sector.
A major theme of the electroweak session was flavour physics, and the star of the show was LHCb’s observation of CP violation in charm decays (see LHCb observes CP violation in charm decays). The collaboration showed several other new results concerning charm- and B-meson decays. One much-anticipated result was an update on RK, the ratio of the rates of rare B+ decays to muons and to electrons, using data taken at energies of 7, 8 and 13 TeV. These decays are predicted to occur at the same rate to within 1%; previous measurements were consistent with this prediction but favoured a slightly lower value, and the latest LHCb results continue to support this picture. Together with other measurements, these results paint an intriguing picture of possible new physics (p33) that was explored in several talks by theorists.
Run-2 results
The LHC experiments presented many new results based on data collected during Run 2. ATLAS and CMS have measured most of the Higgs boson’s main production and decay modes with high statistical significance and carried out searches for new, additional Higgs bosons. From a combination of all Higgs-boson measurements, ATLAS obtained new constraints on the important Higgs self-coupling, while CMS presented updated results on the Higgs decay to two Z bosons and its coupling to top quarks.
Precision SM studies continued with first evidence from ATLAS for the simultaneous production of three W or Z bosons, and CMS presented first evidence for the production of two W bosons in two simultaneous interactions between colliding partons. The very large new dataset has also allowed ATLAS and CMS to expand their searches for new physics, setting stronger lower limits on the allowed mass ranges of supersymmetric and other hypothetical particles (see Boosting searches for fourth-generation quarks and Pushing the limits on supersymmetry). These also include new limits from CMS on the parameters describing slowly moving heavy particles, and constraints from both collaborations on the production rate of Z′ bosons. ATLAS, using the results of lead–ion collisions taken in 2018, also reported the observation of light-by-light scattering – a very rare process that is forbidden by classical electrodynamics.
New results and prospects in the neutrino sector were communicated, including Daya Bay and the reactor antineutrino flux anomaly, searches for neutrinoless double-beta decay, and the reach of T2K and NOvA in tackling the neutrino mass hierarchy and leptonic CP violation. Dark matter, axions and cosmology also featured prominently. New results from experiments such as XENON1T, ABRACADABRA, SuperCDMS and ATLAS and CMS illustrate the power of multi-prong dark-matter searches – not just for WIMPs but also very light or exotic candidates. Cosmologist Lisa Randall gave a broad-reaching talk about “post-modern cosmology”, in which she argued that – as in particle physics – the easy times are probably over and that astronomers need to look at more subtle effects to break the impasse.
Moriond electroweak also introduced a new session: “feeble interactions”, which was designed to reflect the growing interest in very weak processes at the LHC and future experiments.
LHCb continued to enjoy the limelight during Moriond’s QCD session, announcing the discovery of a new five-quark hadron, named Pc(4312)+, which decays to a proton and a J/ψ and is a lighter companion of the pentaquark structures revealed by LHCb in 2015 (p15). The result is expected to motivate deeper studies of the structure of these and other exotic hadrons. Another powerful way to delve into the depths of QCD, addressed during the second week of the conference, is via the Bc meson family. Following the observation of the Bc(2S) by ATLAS in 2014, CMS reported the existence of a two-peak feature in data corresponding to the Bc(2S) and the Bc*(2S) – supported by new results from LHCb based on its full 2011–2018 data sample. Independent measurements of CP violation in the Bs system reported by ATLAS and LHCb during the electroweak session were also combined to yield the most precise measurement yet, which is consistent with the small value predicted by the SM.
A charmed life
In the heavy-ion arena, ALICE highlighted its observation that baryons containing charm quarks are produced more often in proton–proton collisions than in electron–positron collisions. Initial measurements in lead–lead collisions suggest an even higher production rate for charmed baryons, similar to what has been observed for strange baryons. These results indicate that the presence of quarks in the colliding beams affects the hadron production rate. The collaboration also presented the first measurement of the triangle-shaped flow of J/ψ particles in lead–lead collisions, showing that even heavy quarks are affected by the quarks and gluons in the quark–gluon plasma and retain some memory of the collisions’ initial geometry.
The SM still stands strong after Moriond 2019, and the observation of CP violation in D mesons represents another victory, concluded Shahram Rahatlou of Sapienza University of Rome in the experimental summary. “But the flavour anomaly is still there to be pursued at low and high mass.”
Supersymmetry (SUSY) introduces a new fermion–boson symmetry that gives rise to supersymmetric “partners” of the Standard Model (SM) particles, and “naturally” leads to a light Higgs boson with mass close to that of the W and Z. SUSY partners that are particularly relevant in these “natural SUSY” scenarios are the top and bottom squarks, as well as the SUSY partners of the weak SM bosons, the neutralinos and charginos.
Despite the theory’s many appealing features, searches for SUSY at the LHC and elsewhere have so far yielded only exclusion limits. With LHC Run 2 completed as of the end of 2018, the ATLAS experiment has recorded 139 fb⁻¹ of physics-quality proton–proton collisions at a centre-of-mass energy of 13 TeV. Three recent ATLAS SUSY searches highlight the significant increase in sensitivity offered by this dataset.
The first search took advantage of refinements in b-tagging to search for light bottom squarks decaying into bottom quarks, Higgs bosons and the lightest SUSY partner, which is assumed to be invisible and stable (a candidate for dark matter). The data agree with the SM and lead to significantly improved constraints, with bottom squark masses now excluded up to 1.5 TeV.
If the accessible SUSY particles can only be produced via electroweak processes, the resulting low-production cross sections present a challenge. The second search focuses on such electroweak SUSY signatures with two charged leptons and a significant amount of missing momentum carried away by a pair of the lightest SUSY partners. The current search places strong constraints on SUSY models with light charginos and more than doubles the sensitivity of the previous analysis (figure 1).
A third recent analysis considered less conventional signatures. Top squarks – the bosonic SUSY partners of the top quark – may evade detection if they have a long lifetime and decay at macroscopic distances from the collision point. This search targeted SUSY particles decaying to a quark and a muon, focusing primarily on long-lived top squarks that decay several millimetres inside the detector volume. The observed results are consistent with the background-only expectation.
These analyses represent just the beginning of a large programme of SUSY searches using the entirety of the Run-2 dataset. With a rich signature space left to explore, there remains plenty of room for discovery in mining the riches from the LHC.
Ever since the 1970s, when the third generation of quarks and leptons began to emerge experimentally, physicists have asked if further generations await discovery. One of the first key results from the Large Electron–Positron Collider 30 years ago provided evidence to the contrary, showing that there are only three generations of neutrinos. The discovery of the Higgs boson in 2012 added a further wrinkle to the story: many theorists believe that the mass of the Higgs boson is unnaturally small if there are additional generations of quarks heavier than the top quark. But a loophole arises if the new heavy quarks do not interact with the Higgs field in the same way as regular quarks. The search for new heavy fourth-generation quarks – denoted T – is therefore the subject of active research at the LHC today.
CMS researchers have recently completed a search for such “vector-like” quarks using a new machine-learning method that exploits special relativity in a novel way. If the new T quarks exist, they are expected to decay to a quark and a W, Z or Higgs boson. As top quarks and W/Z/H bosons decay themselves, production of a T quark–antiquark pair could lead to dozens of different final states. While most previous searches focused on a handful of channels at most, this new analysis is able to search for 126 different possibilities at once.
The key to classifying all the various final states is the ability to identify high-energy top quarks, Higgs bosons, and W and Z bosons that decay into jets of particles recorded by the detector. In the reference frame of the CMS detector, these particles produce wide jets that all look alike, but things look very different in a frame of reference in which the initial particle (a W, Z or H boson, or a top quark) is at rest. For example, in the centre-of-mass frame of a Higgs boson, it would appear as two well-collimated back-to-back jets of particles, whereas in the reference frame of the CMS detector the jets are no longer back-to-back and may indeed be difficult to identify as separate at all. This feature, based on special relativity, tells us how to distinguish “fat” jets originating from different initial particles.
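The frame-change trick described above can be sketched with a plain Lorentz boost. The following is a toy illustration under invented kinematics, not the CMS algorithm: boosting two subjet four-momenta into the rest frame of their combined candidate makes a genuine two-prong decay back-to-back, however collimated it looked in the lab.

```python
import math

# Toy illustration (not CMS code): boost subjet four-momenta
# [E, px, py, pz] into the rest frame of their combined candidate.
# For a genuine two-prong decay such as H -> bb, the two subjets are
# back-to-back in that frame, whatever the lab-frame boost.

def boost_to_rest_frame(p, parent):
    """Lorentz-boost four-vector p into the rest frame of `parent`."""
    E, px, py, pz = parent
    m = math.sqrt(E * E - px * px - py * py - pz * pz)  # invariant mass
    bx, by, bz = px / E, py / E, pz / E                 # boost velocity beta
    gamma = E / m
    bdotp = bx * p[1] + by * p[2] + bz * p[3]
    coef = gamma * gamma / (gamma + 1.0) * bdotp - gamma * p[0]
    return [gamma * (p[0] - bdotp),
            p[1] + coef * bx,
            p[2] + coef * by,
            p[3] + coef * bz]

# Two invented subjets (units of GeV) from a highly boosted candidate:
j1 = [300.0, 280.0, 50.0, 60.0]
j2 = [200.0, 180.0, -40.0, 75.0]
parent = [a + b for a, b in zip(j1, j2)]

b1 = boost_to_rest_frame(j1, parent)
b2 = boost_to_rest_frame(j2, parent)
# In the candidate rest frame the spatial momenta cancel, so b1 and b2
# are back-to-back and their energies sum to the candidate mass.
```

The coefficient γ²/(γ+1) is the standard rewriting of (γ−1)/β², which avoids dividing by a near-zero β² for slow parents.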
Modern machine-learning techniques were used to train a deep neural-network classification algorithm using simulations of the expected particle decays. Several dozen properties of the jets were calculated in different hypothetical reference frames and fed to the network, which classifies each fat jet as coming from a top quark, an H, W or Z boson, a b quark, a light quark or a gluon. Each event is then classified according to how many jets of each type it contains. The number of observed events in each category was then compared to the predicted background yield: an excess could indicate T-quark pair production.
CMS found no evidence for T-quark pair production in the 2016 data, and has excluded T-quark masses up to 1.4 TeV (figure 1). The collaboration is working on new ideas to improve the classification method and extend the search to higher masses using the four-times-larger 2017–2018 dataset.
The study of lead–ion collisions at the LHC is a window into the quark–gluon plasma (QGP), a hot and dense phase of deconfined quarks and gluons. An important effect in heavy-ion collisions is jet quenching – the suppression of particle production at large transverse momenta (pT) due to energy loss in the QGP. This suppression is quantified by the nuclear-modification factor RAA, which is the ratio of particle production rate in Pb–Pb collisions to that in proton–proton collisions, scaled for the number of binary nucleon–nucleon collisions. A measured nuclear modification factor of unity would indicate the absence of final-state effects such as jet quenching.
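As a concrete illustration of the definition above, here is a minimal sketch of how RAA is built from per-event yields and the number of binary collisions. All numbers are invented for illustration; this is not ALICE data or code.

```python
# Toy sketch (invented numbers, not ALICE code) of the nuclear-modification
# factor R_AA = (per-event Pb-Pb yield) / (n_coll * per-event pp yield),
# evaluated per pT bin.

def nuclear_modification_factor(yield_aa, n_events_aa,
                                yield_pp, n_events_pp, n_coll):
    """R_AA per pT bin, with n_coll (the mean number of binary
    nucleon-nucleon collisions) taken from a Glauber-type model."""
    per_event_aa = [y / n_events_aa for y in yield_aa]
    per_event_pp = [y / n_events_pp for y in yield_pp]
    return [aa / (n_coll * pp) for aa, pp in zip(per_event_aa, per_event_pp)]

# Three toy pT bins: R_AA near unity means no modification; R_AA < 1 at
# high pT is the jet-quenching signature discussed in the text.
raa = nuclear_modification_factor(
    yield_aa=[4.0e6, 1.0e5, 800.0], n_events_aa=1.0e6,
    yield_pp=[4.0e4, 1.0e3, 20.0], n_events_pp=1.0e5, n_coll=10.0)
```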
Previous measurements of peripheral collisions revealed less suppression than seen in head-on collisions, but RAA remained significantly below unity. This observation indicates the formation of a dense and strongly interacting system – but it also poses a puzzle. In p–Pb collisions, no suppression has been observed, even though the energy densities are similar to those in peripheral Pb–Pb collisions.
The ALICE collaboration has recently put jet quenching to the test experimentally by performing a rigorous measurement of RAA in narrow centrality bins. The results (figure 1, left) show that the trend of a gradual reduction in the suppression of high-pT particle production as one moves from the most central collisions (corresponding to the 0% centrality percentile) to those with a greater impact parameter does not continue above a centrality of 75%. Instead, the data show a dramatically different behaviour: increasingly strong suppression for the most peripheral collisions. The change at 75% centrality shows that the suppression mechanism for peripheral collisions is fundamentally different from that observed in central collisions, where the suppression can be explained by parton energy loss in the QGP.
In a single Pb–Pb collision several nucleons collide. It has recently been suggested that the alignment of these nucleon collisions plays an important role: if the nucleons are aligned, a single collision produces more particles. This creates a correlation between particle production at low pT, which is used to determine the centrality, and at high pT, where RAA is measured. The suppression in the peripheral events can be modelled with a simple PYTHIA-based model that does not implement jet-quenching effects but incorporates the biases originating from the alignment of the nucleons, yielding qualitative agreement above 75% centrality (figure 1, right).
These results demonstrate that with the correct treatment of biases from the parton–parton interactions the observed suppression in Pb–Pb collisions is consistent with results from p–Pb collisions at similar multiplicities – an important new insight into the nuclear modification factor in small systems.
The LHCb collaboration has discovered a new pentaquark particle, dubbed the Pc(4312)+, decaying to a J/ψ and a proton, with a statistical significance of 7.3 standard deviations. The LHCb data, first presented at Rencontres de Moriond in March, also confirm that the Pc(4450)+ structure previously reported by the collaboration in 2015 has now been resolved into two narrow, overlapping peaks, the Pc(4440)+ and Pc(4457)+, with a statistical significance of 5.4 standard deviations compared to the single-peak hypothesis (figure 1). Together, the results offer rich studies of the strong internal dynamics of exotic hadrons.
In the famous 1964 papers that set out the quark model, Murray Gell-Mann and George Zweig mentioned the possibility of adding a quark–antiquark pair to the minimal meson and baryon states qq̅ and qqq, thereby proposing the new configurations qqq̅q̅ and qqqqq̅. Nearly four decades later, the Belle collaboration discovered the surprisingly narrow X(3872) state with a mass very close to the D0D̅*0 threshold, hinting at a tetraquark structure (cc̅uu̅). A decade after that, Belle discovered narrow Zb0,± states just above the BB̅* and B*B̅* thresholds; this was followed by observations of Zc0,± states just above the equivalent charm thresholds by BES-III and Belle. The existence of charged Zb± and Zc± partners makes the exotic nature of these states clear: they cannot be described as charmonium (cc̅) or bottomonium (bb̅) mesons, which are always neutral, but must instead be a combination such as cc̅ud̅. There is also evidence for broad Zc± states from Belle and LHCb, such as the Zc(4430)±.
A major turning point in exotic baryon spectroscopy was achieved by LHCb in July 2015 when, based on an analysis of Run 1 data, the collaboration reported significant pentaquark structures in the J/ψ−p mass distribution in Λb0→ J/ψpK− decays. A narrow Pc(4450)+ and a broad Pc(4380)+ were reported, both with minimal quark content of cc̅uud (CERN Courier September 2015 p5).
The new results use the data collected at LHCb in Run 1 and Run 2, providing a Λb0 sample nine times larger than that used in the 2015 paper. The new data reproduce the parameters of the Pc(4450)+ and Pc(4380)+ states when analysed the same way as before. However, the much larger dataset makes a more fine-grained analysis possible, revealing additional peaking structures in the J/ψ−p invariant mass spectrum that were not visible before. A new narrow peak, with a width comparable to the mass resolution, is observed near 4312 MeV, right below the Σc+D̅0 threshold. The structure seen before at 4450 MeV has been resolved into two narrower peaks, at 4440 and 4457 MeV. The latter is right at the Σc+D̅*0 threshold.
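A quick back-of-the-envelope check of these threshold statements, using approximate PDG-style masses recalled from memory (illustrative values, not the LHCb analysis numbers):

```python
# Back-of-the-envelope threshold check with approximate PDG-style masses
# in MeV, quoted from memory (illustrative, not LHCb analysis values).

m_sigma_c_plus = 2452.9    # Sigma_c(2455)+
m_d0 = 1864.8              # D0 (the Dbar0 mass is the same)
m_dstar0 = 2006.9          # D*0

threshold_d = m_sigma_c_plus + m_d0          # Sigma_c+ Dbar0 threshold
threshold_dstar = m_sigma_c_plus + m_dstar0  # Sigma_c+ Dbar*0 threshold

# threshold_d comes out near 4318 MeV, a few MeV above the Pc(4312)+ peak,
# and threshold_dstar near 4460 MeV, essentially at the Pc(4457)+ --
# the proximity pattern expected for loosely bound states.
```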
These Pc states join a growing family of narrow exotic hadrons with masses near hadron–hadron thresholds. This is expected in certain models of loosely bound “molecular” states, whose structure resembles the way a proton and a neutron bind to form a deuteron. Other models, such as tightly bound pentaquarks, could also explain the Pc resonances. A more complete understanding will require further experimental and theoretical investigation.
Searching for the decay μ+ → e+γ is like looking for a needle in a haystack the size of the Great Pyramid of Giza. This simile-stretching endeavour is the task of the MEG II experiment at the Paul Scherrer Institute (PSI) in Villigen, Switzerland. MEG II is an upgrade of the previous MEG experiment, which operated from 2008 to 2013. All experimental data so far are consistent with muon decays that conserve lepton flavour by the production of two appropriately flavoured neutrinos. Were MEG II to observe the neutrinoless decay of the muon to a positron and a photon, it would be the first evidence of flavour violation with charged leptons, and unambiguous evidence for new physics.
Lepton-flavour conservation is a mainstay of every introductory particle-physics course, yet it is merely a so-called accidental symmetry of the Standard Model (SM). Unlike gauge symmetries, it arises because only massless left-handed neutrinos are included in the model. The corresponding mass and interaction terms of the Lagrangian can therefore be simultaneously diagonalised, which means that interactions always conserve lepton flavour. This is not the case in the quark sector, and as a result quark flavour is not conserved in weak interactions. Since lepton flavour is not considered to be a fundamental symmetry, most extensions of the SM predict its violation at a level that could be observed by state-of-the-art experiments.
Indeed, an extension of the SM is already required to include the tiny neutrino masses that we infer from neutrino oscillations. In this extension, neutrino oscillations induce charged lepton-flavour-violating processes, but the branching ratio for μ+ → e+γ emerges to be only 10⁻⁵⁴, which cannot be accessed experimentally (see “Charged lepton-flavour violation in the SM” box). A data sample of muons as large as the number of protons in the Earth would not be enough to see such an improbable decay. Charged lepton-flavour violation is therefore a clear signature of new physics with no SM backgrounds.
Finding the needle
The search requires an intense source of muons, and detectors capable of reconstructing the kinematics of the muon’s decay products with high precision. PSI offers the world’s most intense continuous muon beams, delivering up to 10⁸ muons per second. MEG II, like MEG before it, is designed to search for μ+ → e+γ by stopping positive muons on a thin target and looking for positron–photon pairs from muon decays at rest. This method exploits the two-body kinematics of the decay to discriminate signal events from the backgrounds, which are predominantly the radiative muon decay μ+ → e+ νe ν̅μ γ and the accidental time coincidence of a positron and a photon produced by different muon decays.
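The two-body kinematics underlying this discrimination can be checked in a few lines. This is a toy calculation with PDG-style masses quoted from memory, not MEG II software: for a μ+ → e+γ decay at rest, energy–momentum conservation fixes the photon energy at essentially half the muon mass.

```python
# Toy two-body kinematics check (approximate PDG-style masses in MeV,
# from memory -- not MEG II software). For mu+ -> e+ gamma at rest,
# the photon energy is (m_mu^2 - m_e^2) / (2 m_mu).

m_mu = 105.658
m_e = 0.511

e_gamma = (m_mu**2 - m_e**2) / (2 * m_mu)  # photon energy
e_positron = m_mu - e_gamma                # positron total energy

# e_gamma comes out at ~52.8 MeV, the signal energy quoted in the text;
# the positron and photon are back-to-back and in time coincidence.
```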
In the late 1990s, when the first MEG experiment was being designed, theorists argued that the μ+ → e+γ branching ratio could be as high as 10⁻¹² to 10⁻¹⁴, based on supersymmetry arising at the TeV scale. Twenty years later, MEG has excluded branching ratios above 4.2 × 10⁻¹³ (figure 1), and supersymmetric particles remain undiscovered at the LHC. Nevertheless, since charged lepton-flavour-violating processes are sensitive to the virtual exchange of new particles, without requiring their direct production as at the LHC, they can probe new-physics models (supersymmetry, extra dimensions, leptoquarks, multi-Higgs, etc) up to mass scales of thousands of TeV. Such scales are out of reach not only at the LHC but also at near-future accelerators.
The MEG collaboration therefore decided to upgrade the detectors with the goal of improving the sensitivity of the experiment by a factor of 10. The new experiment, which adopts the same measurement principle, is expected to start taking data at the end of 2019 (figure 2). Photons are reconstructed by a liquid xenon (LXe) detector, a technology pioneered by the MEG collaboration, achieving an unprecedented ~2% calorimetric resolution at energies as low as 52.8 MeV – the energy of the photon in a μ+ → e+γ decay. The LXe detector provides a high-resolution measurement of the position and timing of the photon conversion, precise to a few millimetres and approximately 70 ps. The positrons are reconstructed in a magnetic spectrometer instrumented with drift chambers for tracking and scintillator bars for timing. A peculiarity of the MEG spectrometer is a non-uniform magnetic field, diminishing from 1.2 T at the centre of the detector to 0.5 T at the extremities. The graded field prevents positrons from curling too many times, which avoids pileup in the detectors, and makes positrons of the same momentum curl with the same radius independent of their emission angle, thus simplifying the design and operation of the tracking system.
Following a major overhaul begun in 2011, all the detectors have now been upgraded. Silicon photomultipliers custom-modified for sensitivity to the ultraviolet LXe scintillation light have replaced conventional photomultipliers on the inner face of the calorimeter. Small scintillating tiles have replaced the scintillating bars of the positron-timing detector to improve timing and reduce pileup. The main challenge when upgrading the drift chambers was dealing with high positron rates. Here, the need for high granularity had to be balanced against keeping the total amount of material low, which reduces both multiple scattering and the rate of positrons annihilating in the material – a contribution to the coincident-photon background in the calorimeter. The solution was the use of extremely thin 40 and 50 μm silver-plated aluminium wires, 20 μm gold-plated tungsten wires, and innovative assembly techniques. All the detectors’ resolutions were improved by a factor of around two with respect to the MEG experiment. The MEG II design also includes a new detector to veto photons coming from radiative muon decays, improved calibration tools, and new trigger and data-acquisition electronics to cope with the increased number of readout channels. The improved detector performance will allow the muon beam rate to be more than doubled, from 3.3 × 10⁷ to 7 × 10⁷ muons per second.
The detectors were installed and tested in the muon beam in 2018. In 2019 a test of the whole detector will be completed, with the possibility of collecting the first physics data. The experiment is then expected to run for three years, uncovering evidence for the μ⁺ → e⁺γ decay if its branching ratio is around 10⁻¹³, or setting a limit of 6 × 10⁻¹⁴ on the branching ratio.
Charged lepton-flavour violation in the SM – a very small neutrino oscillation experiment
The presence of only massless left-handed neutrinos in the Standard Model (SM) gives rise to the accidental symmetry of lepton-flavour conservation – yet neutrino oscillation experiments have observed neutrinos changing flavour in transit from sources as far away as the Sun and as near as a nuclear reactor. Such neutral lepton-flavour violation implies that neutrinos have tiny masses and that their flavour eigenstates are distinct from their mass eigenstates. Phases develop between the mass eigenstates as a neutrino travels, and the wavefunction becomes a mixture of the flavour eigenstates, rather than retaining the unique original flavour, as it would for truly massless neutrinos.
The effect on charged lepton-flavour violation is subtle and small. In most neutrino oscillation experiments, a neutrino is created in a charged-current interaction and observed in a later interaction via the creation of a charged lepton of the corresponding flavour in the detector.
The decay μ⁺ → e⁺γ may proceed in a similar way, but with the same W boson involved in both the creation and destruction of the neutrino, which oscillates in between (see figure above).
In this process, the neutrino oscillation ν̅μ → ν̅e has to occur at an energy scale E ~ m_W, over an extremely short distance L ~ 1/m_W. Considering only two neutrino species with masses m₁ and m₂, the probability for the oscillation is proportional to sin²[(m₁² − m₂²)L/4E]. Hence, the μ → eγ branching ratio is suppressed by the tiny factor ((m₁² − m₂²)/m_W²)² ≲ 10⁻⁴⁹. The exact calculation, including the most recent estimates of the neutrino mixing matrix elements, gives BR(μ → eγ) ~ 10⁻⁵⁴.
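The size of this suppression factor can be checked with a rough back-of-the-envelope computation. The sketch below assumes the atmospheric mass-squared splitting (≈ 2.5 × 10⁻³ eV², not quoted in the article) as the value of m₁² − m₂², together with the W boson mass of about 80.4 GeV:

```python
# Order-of-magnitude check of the suppression factor ((m1^2 - m2^2)/m_W^2)^2.
# Assumed inputs: atmospheric neutrino mass-squared splitting and W mass,
# both expressed in eV so the ratio is dimensionless.
delta_m2_eV2 = 2.5e-3   # |m1^2 - m2^2| in eV^2 (atmospheric splitting, assumed)
m_W_eV = 80.4e9         # W boson mass, ~80.4 GeV, in eV

suppression = (delta_m2_eV2 / m_W_eV**2) ** 2
print(f"suppression ~ {suppression:.1e}")  # ~ 1.5e-49
```

The result of about 10⁻⁴⁹ confirms the quoted scale; additional mixing-angle and phase-space factors push the full branching-ratio prediction down to ~10⁻⁵⁴.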
New directions
In the meantime, PSI researchers are investigating the possibility of building new beamlines delivering 10⁹ or even 10¹⁰ muons per second to allow experimenters to probe even smaller branching ratios. How could a future experiment cope with such high rates? Preliminary studies are investigating a system in which photons are converted into electron–positron pairs and reconstructed in a tracking device. This solution, already exploited by the MEGA experiment at Los Alamos National Laboratory, could also improve the photon resolution.
At the same time, other experiments are searching for charged lepton-flavour violation in other channels. Mu3e, also at PSI, will search for μ⁺ → e⁺e⁺e⁻ decays. The Mu2e and COMET experiments, at Fermilab and J-PARC, respectively, will search for muon-to-electron conversion in the field of a nucleus. These processes are complementary to μ⁺ → e⁺γ, allowing alternative scenarios to be probed. Meanwhile, collider experiments such as Belle II and LHCb are working on studies of lepton-flavour violation in tau decays. LHCb researchers are also testing lepton universality, which holds that the weak couplings are the same for each lepton flavour (see The flavour of new physics). As theorists often stress, all these analyses are strongly complementary both with each other and with direct searches for new particles at the LHC.
Ever since the pioneering work of Conversi, Pancini and Piccioni, muons have played a crucial role in the development of particle physics. When I I Rabi exclaimed “who ordered that?”, he surely did not imagine that 80 years later the lightest unstable elementary particle would still be a focus of cutting-edge research.
Serbia became the 23rd Member State of CERN on 24 March, following receipt of formal notification from UNESCO. Ever since the early days of CERN (former Yugoslavia was one of the 12 founding Member States in 1954, until its departure in 1961), the Serbian scientific community has made strong contributions to CERN’s projects, including at the Synchrocyclotron, Proton Synchrotron and Super Proton Synchrotron facilities. In the 1980s and 1990s, physicists from Serbia worked on the DELPHI experiment at CERN’s LEP collider. In 2001, CERN and Serbia concluded an International Cooperation Agreement, leading to Serbia’s participation in the ATLAS and CMS experiments at the LHC, in the Worldwide LHC Computing Grid, and in the ACE and NA61 experiments. Serbia’s main involvement with CERN today is in the ATLAS and CMS experiments, in the ISOLDE facility, and in design studies for future particle colliders – FCC and CLIC – both of which are potential new flagship projects at CERN.
Serbia was an Associate Member in the pre-stage to membership from March 2012. As a Member State, Serbia will have voting rights in the CERN Council, while the new status will also enhance the recruitment opportunities for Serbian nationals at CERN and for Serbian industry to bid for CERN contracts. “Investing in scientific research is important for the development of our economy and CERN is one of the most important scientific institutions today,” says Ana Brnabić, Prime Minister of Serbia. “I am immensely proud that Serbia has become a fully-fledged CERN Member State. This will bring new possibilities for our scientists and industry to work in cooperation with CERN and fellow CERN Member States.”
On 8 April, CERN unveiled plans for a major new facility for scientific education and outreach. Aimed at audiences of all ages, the Science Gateway will include exhibition spaces, hands-on scientific experiments for schoolchildren and students, and a large amphitheatre to host science events for experts and non-experts alike. It is intended to satisfy the curiosity of hundreds of thousands of visitors every year and is core to CERN’s mission to educate and engage the public in science.
“We will be able to share with everybody the fascination of exploring and learning how matter and the universe work, the advanced technologies we need to develop in order to build our ambitious instruments and their impact on society, and how science can influence our daily life,” says CERN director-general, Fabiola Gianotti. “I am deeply grateful to the donors for their crucial support in the fulfilment of this beautiful project.”
The overall cost of the Science Gateway, estimated at 79 million Swiss francs, is entirely funded through donations. Almost three quarters of the cost has already been secured, thanks in particular to a contribution of 45 million Swiss francs from Fiat Chrysler Automobiles. Other donors include a private foundation in Geneva and Loterie Romande, which distributes its profits to public-utility projects. CERN is looking for additional donations to cover the full cost of the project.
The Science Gateway will be hosted in iconic buildings with a 7000 m2 footprint, linking CERN’s Meyrin site and the Globe of Science and Innovation. It is being designed by renowned architects Renzo Piano Building Workshop and intends to “celebrate the inventiveness and creativity that characterise the world of research and engineering”. Construction is planned to start in 2020 and be completed in 2022.