The flavour of new physics

In 1971, at a Baskin-Robbins ice-cream store in Pasadena, California, Murray Gell-Mann and his student Harald Fritzsch came up with the term “flavour” to describe the different types of quarks. From the three types known at the time – up, down and strange – the list of quark flavours grew to six. A similar picture evolved for the leptons: the electron and the muon were joined by the unexpected discovery of the tau lepton at SLAC in 1975 and completed with the three corresponding neutrinos. These 12 elementary fermions are grouped into three generations of increasing mass.

The three flavours of charged leptons – electron, muon and tau – are the same in many respects. This “flavour universality” is deeply ingrained in the symmetry structure of the Standard Model (SM) and applies to both the electroweak and strong forces (though the latter is irrelevant for leptons). It directly follows from the assumption that the SM gauge group, SU(3) × SU(2) × U(1), is one and the same for all three generations of fermions. The Higgs field, on the other hand, distinguishes between fermions of different flavours and endows them with different masses – sometimes strikingly so. In other words, the gauge forces, such as the electroweak force, are flavour-universal in the SM, while the exchange of a Higgs particle is not.

Today, flavour physics is a major field of activity. A quick look at the Particle Data Group (PDG) booklet, with its long lists of the decays of B mesons, D mesons, kaons and other hadrons, gives an impression of the breadth and depth of the field. Even in the condensed version of the PDG booklet, such listings run to more than 170 pages. Still, the results can be summarised succinctly: all the measured decays agree with SM predictions, with the exception of measurements that probe lepton flavour universality (LFU) in two quark-level transitions: b → cτν̅τ and b → sμ+μ−.

Oddities in decays to D mesons

In the SM the b → cτν̅τ process is due to a tree-level exchange of a virtual W boson (figure 1, left). The W boson, being much heavier than the amount of energy that is released in the decay of the b quark, is virtual. Rather than materialising as a particle, it leaves its imprint as a very short-range potential that has the property of changing one quark (a b quark) into a different one (a c quark) with the simultaneous emission of a charged lepton and an antineutrino.

Flavour universality is probed by measuring the ratio of branching fractions: RD(*) = Br(B → D(*)τν̅τ)/Br(B → D(*)lν̅l), where l = e, μ. Two ratios can be measured, since the charm quark is either bound inside a D meson or its excited version, the D*, and the two ratios, RD and RD*, have the very welcome property that they can be precisely predicted in the SM. Importantly, since the hadronic inputs that describe the b → c transition do not depend on which lepton flavour is in the final state, the induced uncertainties mostly cancel in the ratios. Currently, the SM prediction is roughly three standard deviations away from the global average of results from the LHCb, BaBar and Belle experiments (figure 2).

A possible explanation for this discrepancy is an additional contribution to the decay rate from the exchange of a new virtual particle. For coupling strengths of order unity – appreciably large, yet small enough to keep the calculations reliable – the mass of such a new particle needs to be about 3 TeV to explain the reported hints of increased b → cτν̅τ rates. This is light enough that the new particle could even be produced directly at the LHC. Even better, the options for what this new particle could be are quite restricted.
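
A back-of-envelope matching estimate (with my own illustrative numbers, not a calculation quoted in the text) shows where the roughly 3 TeV scale comes from: a tree-level mediator of mass M and coupling g* adds a four-fermion amplitude of order g*²/M², which must be roughly 10% of the SM W-exchange amplitude to explain the measured enhancement.

```latex
% Illustrative estimate only. SM amplitude ~ (4 G_F/\sqrt{2}) V_{cb} = g^2 V_{cb}/(2 M_W^2);
% requiring the new contribution g_*^2/M^2 to be ~10% of it gives
\[
  \frac{g_*^2}{M^2} \;\sim\; 0.1\,\frac{g^2 V_{cb}}{2 M_W^2}
  \quad\Longrightarrow\quad
  M \;\sim\; g_*\, M_W \sqrt{\frac{2}{0.1\, g^2 V_{cb}}}
  \;\approx\; 3\ \mathrm{TeV}
  \qquad (g_* \simeq 1,\ g \simeq 0.65,\ V_{cb} \simeq 0.04),
\]
% so an order-one coupling indeed points to a mediator of a few TeV, within LHC reach.
```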

There are two main possibilities. One is a colour singlet that does not feel the strong force, for which candidates include a new charged Higgs boson or a new vector boson commonly denoted W′ (figure 1, middle). However, both of these options are essentially excluded by other measurements that do agree with the SM: the lifetime of the Bc meson; searches at the LHC for anomalous signals with tau leptons in the final state; decays of the W and Z bosons into leptons; Bs mixing; and B → Kνν̅ decays.

The second possible type of new particle is a leptoquark, which couples to one quark and one lepton at each vertex (figure 1, right). Typically, the constraints from other measurements are less severe for leptoquarks than for new colour-singlet bosons, making them the preferred explanation for the b → cτν̅τ anomaly. For instance, leptoquarks contribute to Bs mixing only at the one-loop level, so the resulting effect is smaller than the present uncertainties. Since leptoquarks are charged under the strong force, in the same way as quarks, they can be copiously produced at the LHC via strong interactions. Searches for pair- or singly-produced leptoquarks at the future high-luminosity LHC and at a proposed high-energy LHC will cover most of the available parameter space of current models.

Oddities in decays to kaons

The other decay showing interesting flavour deviations (b → sμ+μ−) is probed via the ratios RK(*) = Br(B → K(*)μ+μ−)/Br(B → K(*)e+e−), which test whether the rate for the b → sμ+μ− quark-level transition equals that for b → se+e−. The SM very precisely predicts RK(*) = 1, up to small corrections due to the very different masses of the muon and the electron. Measurements from LHCb, on the other hand, are consistently below 1, with statistical significances of about 2.5 standard deviations, while less precise measurements from Belle are consistent with both LHCb and the SM (figure 3). Further support for these discrepancies comes from other observables, for which theoretical predictions are more uncertain. These include the branching ratios for decays induced by the b → sμ+μ− quark-level transition, and the distributions of the final-state particles.

In contrast to the tree-level b → cτν̅τ process underlying the semileptonic B decays to D mesons, the b → sμ+μ− decay is induced via quantum corrections at the one-loop level (figure 4, left) and is therefore highly suppressed in the SM. Potential new-physics contributions, on the other hand, can enter either at tree level or at one loop. This means that there is quite a lot of freedom in what kind of new physics could explain the b → sμ+μ− anomaly. The possible tree-level mediators are a Z′ or leptoquarks, with masses of about 30 TeV or lighter if the couplings are smaller. For loop-induced models the new particles are necessarily light, with masses in the TeV range or below. This means that searches for direct production of new particles at the LHC can probe a significant range of explanations for the LHCb anomalies. However, for many of the possibilities the high-energy upgrade of the LHC or a future circular collider with much higher energy would be required for the new particles to be discovered or ruled out.

Taking stock

Could the two anomalies be due to a single new lepton non-universal force? Interestingly, a leptoquark dubbed U1 – a spin-one particle that is a colour triplet, charged under hypercharge but not weak isospin – can explain both anomalies. With some effort it can be embedded in consistent theoretical constructions, albeit those with very non-trivial flavour structures. These models are based on modified versions of grand unified theories (GUTs) from the 1980s. Since GUTs unify the leptons and quarks, some of the force carriers can change quarks to leptons and vice versa, i.e. some of the force carriers are leptoquarks. The U1 leptoquark could be one such force carrier, coupling predominantly to the third generation of fermions. In all cases the U1 leptoquark is accompanied by many other particles with masses not much above the mass of U1.

While intriguing, the two sets of B-physics anomalies are by no means confirmed. None of the measurements has separately reached the five standard deviations needed to claim a discovery and, indeed, most hover around the 1–3 sigma mark. Taken together, however, they form an interesting and consistent picture hinting that something may be going on. Fortunately, new measurements are expected soon – some within months, others within a few years.

First of all, the observables showing the discrepancy with the SM, RD(*) and RK(*), will be measured more precisely at LHCb and at Belle II, which is currently ramping up at KEK in Japan. In addition, many related measurements are planned at Belle II, LHCb, ATLAS and CMS. For instance, measuring the same transitions but with different initial- and final-state hadrons should give further insights into the structure of new-physics contributions. If the anomalies are confirmed, this would set a clear target for a next collider, such as the high-energy LHC or the proposed proton–proton Future Circular Collider, since the new particles cannot be arbitrarily heavy.

If this exciting scenario plays out, it would not be the first time that indirect searches foretold the existence of new physics at the next energy scale. Nuclear beta decay and other weak transitions presaged the electroweak W and Z gauge bosons; the rare kaon decay KL → μ+μ− pointed to the existence of the charm quark, with kaon mixing even predicting its mass; and B-meson mixing and measurements of electroweak corrections accurately predicted the top-quark mass before it was discovered. Finally, the measurement of CP violation in kaons led to the prediction of the third generation of fermions. If the present flavour anomalies stand firm, they will become another important item on this historic list, offering a view of a new energy scale to explore.

Rutherford, transmutation and the proton

In his early days, Ernest Rutherford was the right man in the right place at the right time. After obtaining three degrees from the University of New Zealand, and with two years’ original research at the forefront of the electrical technology of the day, in 1895 he won an Exhibition of 1851 Science Scholarship, which took him to the Cavendish Laboratory at the University of Cambridge in the UK. Just after his arrival, the discoveries of X-rays and radioactivity were announced and J J Thomson discovered the electron. Rutherford was an immediate believer in objects smaller than the atom. His life’s work changed to understanding radioactivity and he named the alpha and beta rays.

In 1898 Rutherford took a chair in physics at McGill University in Canada, where he achieved several seminal results. He discovered radon, demonstrated that radioactivity was just the natural transmutation of certain elements, showed that alpha particles could be deviated in electric and magnetic fields (and hence were likely to be helium atoms minus two electrons), dated minerals and determined the age of the Earth, among other achievements.

In 1901, the McGill Physical Society called a meeting titled “The existence of bodies smaller than an atom”. Its aim was to demolish the chemists. Rutherford spoke to the motion and was opposed by a young Oxford chemist, Frederick Soddy, who was at McGill by chance. Soddy’s address “Chemical evidence for the indivisibility of the atom” attacked physicists, especially Thomson and Rutherford, who “… have been known to give expression to opinions on chemistry in general and the atomic theory in particular which call for strong protest.” Rutherford invited Soddy, who specialised in gas analysis, to join him. It was a short but fruitful collaboration in which the pair determined the first few steps in the natural transmutation of the heavy elements.

Manchester days

For some years Rutherford had wished to be more in the centre of research, which was Europe, and in 1907 moved to the University of Manchester. Here he began to follow up on experiments at McGill in which he had noted that a beam of alpha particles became fuzzy if passed through air or a thin slice of mica. They were scattered by an angle of about two degrees, indicating the presence of electric fields of 100 MV/cm, prompting his statement that “the atoms of matter must be the seat of very intense electrical forces”.

At Manchester he inherited an assistant, Hans Geiger, who was soon put to work making accurate measurements of the number of alpha particles scattered by a gold foil over these small angles. Geiger, who trained the senior undergraduates in radioactive techniques, told Rutherford in 1909 that one, Ernest Marsden, was ready for a subject of his own. Everyone knew that beta particles could be scattered off a block of metal, but no one thought that alpha particles would be. So Rutherford told Marsden to examine this. Marsden quickly found that alpha particles are indeed scattered – even if the block of metal was replaced by Geiger’s gold foils. This was entirely unexpected. It was, as Rutherford later declared, as if you fired a 15 inch naval shell at a piece of tissue paper and it came back and hit you.

One day, a couple of years later, Rutherford exclaimed to Geiger that he knew what the atom looked like: a nuclear structure with most of the mass and all of one type of charge in a tiny nucleus only a thousandth the size of an atom. This is the work for which he is most famous today, eight decades after his death (CERN Courier May 2011 p20).

Around 1913, Rutherford asked Marsden to “play marbles” with alphas and light atoms, especially hydrogen. Classical calculations showed that an alpha colliding head-on with a hydrogen nucleus would cause the hydrogen to recoil with a speed 1.6 times, and a range four times, that of the alpha particle that struck it. The recoil of the less-massive, less-charged hydrogen could be detected as lighter flashes on the scintillation screen at much greater range than the alphas could travel. Marsden indeed observed such long-range “H” particles, as he named them, produced in hydrogen gas and in thin films of materials rich in hydrogen, such as paraffin wax. He also noticed that the long-range H particles were sometimes produced when alpha particles travelled through air, but he did not know where they came from: water vapour in the gas, absorbed water on the apparatus or even emission from the alpha source were suggested.
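
The factor of 1.6 follows directly from non-relativistic elastic-collision kinematics; a short derivation (taking the alpha to be four atomic mass units and the hydrogen nucleus one) is sketched below.

```latex
% Head-on elastic collision with the target initially at rest: momentum and kinetic-energy
% conservation give the recoil speed of the struck nucleus,
\[
  v_{\mathrm{H}} \;=\; \frac{2\,m_{\alpha}}{m_{\alpha} + m_{\mathrm{H}}}\, v_{\alpha}
  \;=\; \frac{2 \times 4}{4 + 1}\, v_{\alpha} \;=\; 1.6\, v_{\alpha},
\]
% so the lighter, singly charged hydrogen nucleus travels much further in matter than the
% alpha that struck it -- the long-range "H" particles Marsden observed.
```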

Mid-1914 brought an end to the collaboration. Marsden wrote up his work before accepting a job in New Zealand. Meanwhile, Rutherford had sailed to Canada and the US to give lectures, spending just a month back at Manchester before heading to Australia for the annual meeting of the British Association for the Advancement of Science. Three days before his arrival, war was declared in Europe.

Splitting the atom

Rutherford arrived back in Manchester in January 1915, via a U-boat-laced North Atlantic. It was a changed world, with the young men off fighting in the war. On behalf of the Admiralty, Rutherford turned his mind to one of the most pressing problems of the war: how to detect submarines when submerged. His directional hydrophone (patented by Bragg and Rutherford) was to be fitted to fleet ships. It was not until 1917 that Rutherford could return to his scientific research, specifically alpha-particle scattering from light atoms. By December of that year, he reported to Bohr that “I am also trying to break up the atom by this method. – Regard this as private.”

He studied the long-range hydrogen-particle recoils in several media (hydrogen gas, solid materials with a lot of hydrogen present and gases such as CO2 and oxygen), and was surprised to find that the number of these “recoil” particles increased when air or nitrogen was present. He deduced that the alpha particle had entered the nucleus of the nitrogen atom and a hydrogen nucleus was emitted. This marked the discovery that the hydrogen nucleus – or the proton, to give it the name coined by Rutherford in 1920 – is a constituent of larger atomic nuclei.

Marsden was again available to help with the experiments for a few months from January 1919, whilst awaiting transport back to New Zealand after the war, and that year Rutherford accepted the position of director of the Cavendish Laboratory. Having delayed publication of the 1917 results until the war ended, Rutherford produced four papers on the light-atom work in 1919. In the fourth, “An anomalous effect in nitrogen”, he wrote “we must conclude that the nitrogen atom disintegrated … and that the hydrogen atom which is liberated formed a constituent part of the nitrogen nucleus.” He also stated: “Considering the enormous intensity of the forces brought into play, it is not so much a matter of surprise that the nitrogen atom should suffer disintegration as that the α particle itself escapes disruption into its constituents”.

In 1920 Rutherford first proposed building up atoms from stable alphas and H ions. He also proposed that a particle of mass one but zero charge – the neutron – had to exist to account for isotopes. With Wilson’s cloud chamber he had observed branched tracks of alpha particles at the end of their range. A Japanese visitor, Takeo Shimizu, built an automated Wilson cloud chamber capable of being expanded several times per second, along with two cameras to photograph the tracks at right angles. Patrick Blackett, after graduating in 1921, took over the project when Shimizu returned to Japan. After modifications, by 1924 he had some 23,000 photographs showing some 400,000 tracks. Eight were forked, confirming Rutherford’s discovery. As Blackett later wrote: “The novel result deduced from these photographs was that the α was itself captured by the nitrogen nucleus with the ejection of a hydrogen atom, so producing a new and then unknown isotope of oxygen, 17O.”

As Blackett’s work confirmed, Rutherford had split the atom, and in doing so had become the world’s first successful alchemist, although this was a term he did not like very much. Indeed, he also preferred the word “disintegration” to “transmutation”. When Rutherford and Soddy realised that radioactivity caused an element to change naturally into another, Soddy later wrote that he yelled “Rutherford, this is transmutation: the thorium is disintegrating and transmuting itself into argon (sic) gas.” Rutherford replied, “For Mike’s sake, Soddy, don’t call it transmutation. They’ll have our heads off as alchemists!”

In 1908 Rutherford had been awarded the Nobel Prize in Chemistry “for his investigations into the disintegration of the elements, and the chemistry of radioactive substances”. There was never a second prize for his detection of individual alpha particles, unearthing the nuclear structure of atoms, or the discovery of the proton. But few would doubt the immense contributions of this giant of physics. 

Neutrino connoisseurs talk stats at CERN

PHYSTAT-nu 2019 was held at CERN from 22 to 25 January. Counted among the 130 participants were LHC physicists and professional statisticians as well as neutrino physicists from across the globe. The inaugural meeting took place at CERN in 2000 and PHYSTAT has gone from strength to strength since, with meetings devoted to specific topics in data analysis in particle physics. The latest PHYSTAT-nu event is the third of the series to focus on statistical issues in neutrino experiments. The workshop focused on the statistical tools used in data analyses, rather than experimental details and results.

Modern neutrino physics is geared towards understanding the nature and mixing of the three neutrinos’ mass and flavour eigenstates. This mixing can be inferred by observing “oscillations” between flavours as neutrinos travel through space. Neutrino experiments come in many different types and scales, but they tend to have one calculation in common: whether the neutrinos are created in an accelerator, a nuclear reactor, or by any number of astrophysical sources, the number of events expected in the detector is the product of the neutrino flux and the interaction cross section. Given the ghostly nature of the neutrino, this calculation presents subtle statistical challenges. To cancel common systematics, many facilities have two or more detectors at different distances from the neutrino source. However, as was shown for the NOvA and T2K experiments, which compete to observe CP violation using accelerator-neutrino beams, it is difficult to correlate the neutrino yields in the near and far detectors. A full cancellation of the systematic uncertainties is complicated by the different detector acceptances, possible variations in the detector technologies, and the compositions of different neutrino interaction modes. In the coming years these two experiments plan to combine their data in a global analysis to increase their discovery power – lessons can be learnt from the LHC experience.
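
As a toy illustration of the two-detector point (a sketch with invented numbers, not code or values from any of the experiments mentioned), a simple rate calculation shows why an overall flux-normalisation error cancels in a far/near ratio while detector-specific effects do not:

```python
# Toy two-detector rate estimate (illustrative only; all numbers below are invented and
# do not correspond to NOvA, T2K or any real beamline).
# Expected events = flux x cross-section x number of targets x detection efficiency.

def expected_events(flux, xsec, n_targets, efficiency):
    """Expected number of neutrino interactions recorded by one detector."""
    return flux * xsec * n_targets * efficiency

flux_scale = 1.10  # a common (unknown) 10% error on the overall beam normalisation

near = expected_events(flux=1e7 * flux_scale, xsec=1e-38, n_targets=1e30, efficiency=0.6)
far = expected_events(flux=1e3 * flux_scale, xsec=1e-38, n_targets=1e32, efficiency=0.4)

# The shared flux_scale cancels in the far/near ratio...
print(far / near)
# ...but differences in acceptance (efficiency), target composition and the assumed
# interaction model do not cancel, which is exactly the complication discussed above.
```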

The problem of modelling the interactions of neutrinos with nuclei – essentially the problem of calculating the cross section in the detector – forces researchers to face the thorny statistical challenge of producing distributions that are unadulterated by detector effects. Such “unfolding” corrects kinematic observables for the effects of detector acceptance and smearing, but correcting for these effects can cause huge uncertainties. To counter this, strong “regularisation” is often applied, biasing the results towards the smooth spectra of Monte Carlo simulations. PHYSTAT-nu attendees agreed on the desirability of publishing unregularised results alongside the regularised unfolded measurements. “Response matrices” may also be released, allowing physicists outside an experimental collaboration to smear their own models and compare them to detector-level data. Another major issue in modelling neutrino–nucleus interactions is the “unknown unknowns”. As Kevin McFarland of the University of Rochester reflected in his summary talk, it is important not to estimate your uncertainty by a survey of theory models. “It’s like trying to measure the width of a valley from the variance of the position of sheep grazing on it. That has an obvious failure mode: sheep read each other’s papers.”
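
The response-matrix approach can be sketched in a few lines: rather than unfolding the data, one folds (smears) a model prediction with the published detector response and compares it with the detector-level measurement. The spectra and matrix below are invented for illustration:

```python
import numpy as np

# Illustrative 3-bin true spectrum predicted by some interaction model (arbitrary units).
truth = np.array([100.0, 80.0, 40.0])

# Hypothetical response matrix R[i, j]: probability that an event generated in true bin j
# is reconstructed in detector-level bin i (columns include acceptance losses, so they
# need not sum to one).
response = np.array([
    [0.70, 0.15, 0.02],
    [0.10, 0.65, 0.12],
    [0.01, 0.10, 0.60],
])

# Forward-folding: smear the model to detector level and compare with the measured counts,
# avoiding the large, regularisation-dependent uncertainties of unfolding the data instead.
folded = response @ truth
measured = np.array([85.0, 68.0, 31.0])           # made-up detector-level data
chi2 = np.sum((measured - folded) ** 2 / folded)  # simple goodness-of-fit comparison
print(folded, chi2)
```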

An important step for current and future neutrino experiments could be to set up a statistics committee, as at the Tevatron, and, more recently, the LHC experiments. This PHYSTAT-nu workshop could be the first real step towards this exciting scenario.

The next PHYSTAT workshop will be held at Stockholm University from 31 July to 2 August on the subject of statistical issues in direct-detection dark-matter experiments.

Cross-fertilisation in detector development

More than 300 experts convened from 18 to 22 February for the 15th Vienna Conference on Instrumentation to discuss ongoing R&D efforts and set future roadmaps for collaboration. “In 1978 we discussed wire chambers as the first electronic detectors, and now we have a large number of very different detector types with performances unimaginable at that time,” said Manfred Krammer, head of CERN’s experimental physics department, recalling the first conference of the triennial series. “In the long history of the field we have seen the importance of cross-fertilisation, as developments for one specific experiment can catalyse progress on many fronts.”

Following this strong tradition, the conference covered fundamental and technological issues associated with the most advanced detector technologies as well as the value of knowledge transfer to other domains. Over five days, participants covered topics ranging from sensor types and fast and efficient electronics to cooling technologies and their mechanical structures.

Contributors highlighted experiments proposed in laboratories around the world, spanning gravitational-wave detectors, colliders, fixed-target experiments, dark-matter searches, and neutrino and astroparticle experiments. A number of talks covered upgrade activities for the LHC experiments ahead of LHC Run 3 and for the high-luminosity LHC. An overview of LIGO called for serious planning to ensure that future ground-based gravitational-wave detectors can be operational in the 2030s. Drawing a comparison between the observation of gravitational waves and the discovery of the Higgs boson, Christian Joram of CERN noted “Progress in experimental physics often relies on breakthroughs in instrumentation that lead to substantial gains in measurement accuracy, efficiency and speed, or even open completely new approaches.”

Beyond innovative ideas and cross-disciplinary collaboration, the development of new detector technologies calls for good planning of resources and timescales. The R&D programme for the current LHC upgrades was set out in 2006, and it is already timely to start preparing for the third long shutdown in 2023 and the high-luminosity LHC. Meanwhile, the CLIC and Future Circular Collider studies are developing clear ideas of the future experimental challenges in tackling the next exploration frontier.

Upping the tempo on wakefield accelerators

Around 50 experts from around the world met at CERN from 26 to 29 March for the second ALEGRO workshop to discuss advanced linear-collider concepts at the energy frontier.

ALEGRO, the Advanced Linear Collider Study Group, was formed as an outcome of an ICFA workshop on adv­anced accelerators held at CERN in 2017 (CERN Courier December 2017 p31). Its purpose is to unite the accelerator community behind a > 10 TeV electron–positron collider based on advanced and novel accelerators (ANAs), which use wakefields driven by intense laser pulses or relativistic particle bunches in plasma, dielectric or metallic structures to reach gradients as high as 1 GeV/m. The proposed Advanced Linear International Collider – ALIC for short – would be shorter than linear colliders based on more conventional acceleration technologies such as CLIC and ILC, and would reach higher energies.
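
For a rough sense of scale (my own back-of-envelope arithmetic, not a figure from the workshop), the active accelerating length is simply the beam energy divided by the average gradient:

```latex
% Active length L for beam energy E at average gradient G, ignoring the overhead of
% staging, drivers and focusing between stages:
\[
  L \;\simeq\; \frac{E}{G}
  \;=\; \frac{10\ \mathrm{TeV}}{1\ \mathrm{GeV/m}}
  \;=\; 10\ \mathrm{km\ per\ beam},
\]
% roughly an order of magnitude shorter than the ~100 km per beam that the same energy
% would require at the ~100 MV/m gradients of conventional normal-conducting linacs.
```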

The main research topics ALEGRO identified are the preservation of beam quality, the development of stable and efficient drivers (in particular laser systems), wall-plug-to-beam-power efficiency, operation at high-repetition rates, tolerance studies, the staging of two structures and the development of suitable numerical tools to allow for the simulation of the accelerator as a whole.

The next ALEGRO workshop will be held in March 2020 in Germany.

Particle colliders: accelerating innovation

Around 100 researchers, academics and industry delegates joined a co-innovation workshop in Liverpool, UK, on 22 March to discuss the strategic R&D programme for a Future Circular Collider (FCC) and associated benefits for industry. Motivated by the FCC study, the aim of the event was to identify joint R&D opportunities across accelerator projects and disciplines.

New particle colliders provide industry with powerful, high-profile test-beds. Well-controlled environments allow novel technologies and processes to be piloted, and small and medium-sized enterprises (SMEs) are ideal partners to bring these technologies – which include superconducting magnets, cryogenics, civil engineering, detector development, energy efficiency, and novel materials and material-processing techniques – to maturity.

Short talks about FCC-related areas for innovation, examples of successful technology-transfer projects at CERN, as well as current and future funding opportunities stimulated interesting discussions. Several areas were identified as bases for co-innovation, including resource-efficient tunnelling, the transfer of bespoke machine-learning techniques from particle physics to industry, detector R&D, cooling and data handling. The notes from all the working groups will be used to establish joint funding bids between participants.

The co-innovation workshop was part of a bigger event, “Particle Colliders – Accelerating Innovation”, devoted to the benefits of fundamental science for society and industry, co-hosted by the University of Liverpool and CERN together with partners from the FCC and H2020 EuroCirCol projects, and supported by EU-funded MSCA training networks. Almost 1000 people from across Europe took part, including researchers, industrialists, and university and high-school students. An industry exhibition allowed more than 60 high-tech companies to showcase their latest products, doubling as a careers fair for university students, and more than a dozen outreach activities were available to younger students.

A separate event, held at CERN on 4 and 5 March, reviewed the FCC physics capabilities following the publication of the FCC conceptual design report in January (CERN Courier January/February 2019 p8). The FCC study envisages the construction of a new 100 km-circumference tunnel at CERN hosting an intensity-frontier lepton collider (FCC-ee) as a first step, followed by an energy-frontier hadron machine (FCC-hh). It offers extensive, model-independent studies of the Higgs boson, extending the range of measurable Higgs properties to include its total width and self-coupling. Moreover, the combination of superior precision and energy reach allows a complementary mix of indirect and direct probes of new physics. For example, FCC-ee would enable the Higgs coupling to the Z boson to be measured with an accuracy better than 0.17%, while FCC-hh would determine the ttH coupling model-independently to better than 1%.

Physics discussions were accompanied by a status report of the overall FCC project, reviewing the technological challenges for both accelerator and detectors, the project implementation strategy, and cost estimates. Construction of the more technologically ready FCC-ee could start by 2028, delivering first physics beams a decade later, right after the end of the HL-LHC programme. Another important aspect of the two-day meeting was the need for further improving theoretical predictions to match the huge step in experimental precision possible at the FCC.

Planning now for a 70-year-long programme may sound like a remote goal. However, as Alain Blondel of the University of Geneva remarked in the concluding talk of the conference, the first report on the LHC dates back more than 40 years. “Progress in knowledge has no price,” he said. “The FCC sets ambitious but feasible goals for the global community, resembling previous leaps in the long history of our field.”

Centennial conference honours Feynman

2018 marked the 100th anniversary of the birth of Richard Feynman. As one of several events worldwide celebrating this remarkable figure in physics, a memorial conference was held at the Institute of Advanced Studies at Nanyang Technological University in Singapore from 22 to 24 October, co-chaired by Lars Brink, KK Phua and Frank Wilczek. The format was one-hour talks followed by 45-minute discussions.

Pierre Ramond began the conference with anecdotes from his time as Feynman’s next-door neighbour at Caltech. He discussed Feynman the MIT undergraduate, his first paper and his work at Princeton as a graduate student. There, Feynman learnt about Dirac’s idea of summing over histories from Herbert Jehle; when Jehle asked him about it a few days later, Feynman said that he had understood it and had derived the Schrödinger equation from it. Feynman’s adviser was John Wheeler, who was toying with the idea of a single electron travelling back and forth in time – were you to look at a slice of time, you would observe many electrons and positrons. After his spell at Los Alamos, this led Feynman to the idea of the propagator, which considers antiparticles propagating backwards in time as well as particles propagating forwards. These ideas would soon underpin the quantum description of electromagnetism – QED – for which Feynman shared the 1965 Nobel Prize in Physics with Tomonaga and Schwinger.

Revolutionary diagrams

The propagator was the key to the eponymous diagrams Feynman then formulated to compute the Lamb shift and other quantities. At the Singapore conference, Lance Dixon explained how Feynman diagrams revolutionised the calculation of scattering amplitudes. He offered as an example the calculation of the anomalous magnetic moment of the electron, which has now reached five-loop precision and involves 12,672 diagrams. Dixon also discussed the importance of Feynman’s parton picture for understanding deep-inelastic scattering, and the staggeringly complex calculations required to understand data at the LHC.

George Zweig, the most famous of Feynman’s students, and the inventor of “aces” as the fundamental constituents of matter, gave a vivid talk, recounting that it took a long time to convince a sceptical Feynman about them. He described life in the shadows of the great man as a graduate student at Caltech in the 1960s. At that time Feynman wanted to solve quantum gravity, and was giving a course on the subject of gravitation. He asked the students to suppose that Einstein had never lived: how would particle physicists discuss gravity? He quickly explained that there must be a spin-two particle mediating the force; by the second lecture he had computed the precession of the perihelion of Mercury, a juncture that other courses took months to arrive at. Zweig recounted that Feynman’s failure to invent a renormalisable theory of quantum gravity affected him for many years. Though he did not succeed, his insights continue to resound today. As Ramond earlier explained, Feynman’s contribution to a conference in Chapel Hill in 1957, his first public intervention on the subject, is now seen as the starting point for discussions on how to measure gravitational waves.

Cristiane Morais-Smith spoke on Feynman’s path integrals, comparing Hamiltonian and Lagrangian formulations and showing their importance in perturbative QED. Michael Creutz, the son of one of Feynman’s colleagues at Princeton and Los Alamos, showed how the path integral is also essential for working with the inherently non-perturbative theory of quantum chromodynamics. Morais-Smith went on to illustrate how Feynman’s path integrals now have a plethora of applications outside particle physics, from graphene to quantum Brownian motion and dissipative quantum tunnelling. Indeed, the conference did not neglect Feynman’s famous interventions outside particle physics. Frank Wilczek recounted Feynman’s famous insight that there is “plenty of room at the bottom”, telling of his legendary after-dinner talk in 1959 that foreshadowed many developments in nanotechnology. Wilczek concluded that there is plenty of room left in Hilbert space, describing entanglement, quantum cryptography, quantum computation and quantum simulations. Quantum computing was the last subject that Feynman worked hard on. Artur Ekert described the famous conference at MIT in 1981 at which Feynman first talked about the subject. His paper from that occasion, “Simulating Physics with Computers”, was the first paper on quantum computers and laid the ground for present developments.

Biology hangout

Feynman also had a long-standing interest in biology. Curtis Callan painted a picture of Feynman “hanging out” in Max Delbrück’s laboratory at Caltech, even taking a sabbatical at the beginning of the 1960s to work there, exploring the molecular workings of heredity. In 1969 he gave the famous Hughes Aerospace lectures, offering a grand overview of biology and chemistry – but this was also the time of the parton model, and somehow that interest took over.

Robbert Dijkgraaf spoke about the interplay between art and science in Feynman’s life and thinking. He pointed out how important beauty is, not only in nature, but also in mathematics, for instance whether one uses a geometric or algebraic approach. Another moving moment of this wide-ranging celebration of Feynman’s life and physics was Michelle Feynman’s words about growing up with her father. She showed him both as a family man and also as a scientist, sharing his enthusiasm for so many things in life.

  • Recordings of the presentations are available online.

Standard Model stands strong at Moriond

The 54th Rencontres de Moriond took place in La Thuile, Italy, from 16 to 30 March, with the first week devoted to electroweak interactions and unified theories, and the second to QCD and high-energy interactions. More than 200 physicists took part, presenting new results ranging from precision Standard Model (SM) measurements to new exotic quark states, flavour physics and the dark sector.

A major theme of the electroweak session was flavour physics, and the star of the show was LHCb’s observation of CP violation in charm decays (see LHCb observes CP violation in charm decays). The collaboration showed several other new results concerning charm- and B-meson decays. One much-anticipated result was an update on RK, the ratio of rare B+ decays to muons and to electrons, using data taken at energies of 7, 8 and 13 TeV. These decays are predicted to occur at the same rate to within 1%; the data collected previously are consistent with this prediction but favour a lower value, and the latest LHCb results continue to support this picture. Together with other measurements, these results paint an intriguing picture of possible new physics (p33) that was explored in several talks by theorists.

Run-2 results

The LHC experiments presented many new results based on data collected during Run 2. ATLAS and CMS have measured most of the Higgs boson’s main production and decay modes with high statistical significance and carried out searches for new, additional Higgs bosons. From a combination of all Higgs-boson measurements, ATLAS obtained new constraints on the important Higgs self-coupling, while CMS presented updated results on the Higgs decay to two Z bosons and its coupling to top quarks.

Precision SM studies continued with first evidence from ATLAS for the simultaneous production of three W or Z bosons, and CMS presented first evidence for the production of two W bosons in two simultaneous interactions between colliding partons. The very large new dataset has also allowed ATLAS and CMS to expand their searches for new physics, setting stronger lower limits on the allowed mass ranges of supersymmetric and other hypothetical particles (see Boosting searches for fourth-generation quarks and Pushing the limits on supersymmetry). These also include new limits from CMS on the parameters describing slowly moving heavy particles, and constraints from both collaborations on the production rate of Z bosons. ATLAS, using the results of lead–ion collisions taken in 2018, also reported the observation of light-by-light scattering – a very rare process that is forbidden by classical electrodynamics.

New results and prospects in the neutrino sector were communicated, including Daya Bay and the reactor antineutrino flux anomaly, searches for neutrinoless double-beta decay, and the reach of T2K and NOvA in tackling the neutrino mass hierarchy and leptonic CP violation. Dark matter, axions and cosmology also featured prominently. New results from experiments such as XENON1T, ABRACADABRA, SuperCDMS and ATLAS and CMS illustrate the power of multi-prong dark-matter searches – not just for WIMPs but also very light or exotic candidates. Cosmologist Lisa Randall gave a broad-reaching talk about “post-modern cosmology”, in which she argued that – as in particle physics – the easy times are probably over and that astronomers need to look at more subtle effects to break the impasse.

Moriond electroweak also introduced a new session: “feeble interactions”, which was designed to reflect the growing interest in very weak processes at the LHC and future experiments.

LHCb continued to enjoy the limelight during Moriond’s QCD session, announcing the discovery of a new five-quark hadron, named Pc(4312)+, which decays to a proton and a J/ψ and is a lighter companion of the pentaquark structures revealed by LHCb in 2015 (p15). The result is expected to motivate deeper studies of the structure of these and other exotic hadrons. Another powerful way to delve into the depths of QCD, addressed during the second week of the conference, is via the Bc meson family. Following the observation of the Bc(2S) by ATLAS in 2014, CMS reported the existence of a two-peak feature in data corresponding to the Bc(2S) and the Bc*(2S) – supported by new results from LHCb based on its full 2011–2018 data sample. Independent measurements of CP violation in the Bs system reported by ATLAS and LHCb during the electroweak session were also combined to yield the most precise measurement yet, which is consistent with the small value predicted by the SM.

A charmed life

In the heavy-ion arena, ALICE highlighted its observation that baryons containing charm quarks are produced more often in proton–proton collisions than in electron–positron collisions. Initial measurements in lead–lead collisions suggest an even higher production rate for charmed baryons, similar to what has been observed for strange baryons. These results indicate that the presence of quarks in the colliding beams affects the hadron production rate. The collaboration also presented the first measurement of the triangular flow of J/ψ particles in lead–lead collisions, showing that even heavy quarks are affected by the quarks and gluons of the quark–gluon plasma and retain some memory of the collisions’ initial geometry.

The SM still stands strong after Moriond 2019, and the observation of CP violation in D mesons represents another victory, concluded Shahram Rahatlou of Sapienza University of Rome in the experimental summary. “But the flavour anomaly is still there to be pursued at low and high mass.”

Pushing the limits on supersymmetry

A report from the ATLAS experiment

Supersymmetry (SUSY) introduces a new fermion–boson symmetry that gives rise to supersymmetric “partners” of the Standard Model (SM) particles, and “naturally” leads to a light Higgs boson with mass close to that of the W and Z. SUSY partners that are particularly relevant in these “natural SUSY” scenarios are the top and bottom squarks, as well as the SUSY partners of the weak SM bosons, the neutralinos and charginos.

Despite the theory’s many appealing features, searches for SUSY at the LHC and elsewhere have so far yielded only exclusion limits. With LHC Run 2 completed as of the end of 2018, the ATLAS experiment has recorded 139 fb⁻¹ of physics-quality proton–proton collisions at a centre-of-mass energy of 13 TeV. Three recent ATLAS SUSY searches highlight the significant increase in sensitivity offered by this dataset.

The first search took advantage of refinements in b-tagging to search for light bottom squarks decaying into bottom quarks, Higgs bosons and the lightest SUSY partner, which is assumed to be invisible and stable (a candidate for dark matter). The data agree with the SM and lead to significantly improved constraints, with bottom squark masses now excluded up to 1.5 TeV. 

If the accessible SUSY particles can only be produced via electroweak processes, the resulting low production cross sections present a challenge. The second search focuses on such electroweak SUSY signatures, with two charged leptons and a significant amount of missing momentum carried away by a pair of the lightest SUSY partners. The current search places strong constraints on SUSY models with light charginos and more than doubles the sensitivity of the previous analysis (figure 1).

A third recent analysis considered less conventional signatures. Top squarks – the bosonic SUSY partners of the top quark – may evade detection if they have a long lifetime and decay at macroscopic distances from the collision point. This search looked for SUSY particles decaying to a quark and a muon, targeting primarily long-lived top squarks that decayed several millimetres inside the detector volume. The observed results are consistent with the background-only expectation.

These analyses represent just the beginning of a large programme of SUSY searches using the entirety of the Run-2 dataset. With a rich signature space left to explore, there remains plenty of room for discovery in mining the riches from the LHC.

Boosting searches for fourth-generation quarks

A report from the CMS experiment

Ever since the 1970s, when the third generation of quarks and leptons began to emerge experimentally, physicists have asked if further generations await discovery. One of the first key results from the Large Electron–Positron Collider 30 years ago provided evidence to the contrary, showing that there are only three generations of neutrinos. The discovery of the Higgs boson in 2012 added a further wrinkle to the story: many theorists believe that the mass of the Higgs boson is unnaturally small if there are additional generations of quarks heavier than the top quark. But a loophole arises if the new heavy quarks do not interact with the Higgs field in the same way as regular quarks. The search for new heavy fourth-generation quarks – denoted T – is therefore the subject of active research at the LHC today.

CMS researchers have recently completed a search for such “vector-like” quarks using a new machine-learning method that exploits special relativity in a novel way. If the new T quarks exist, they are expected to decay to a quark and a W, Z or Higgs boson. As top quarks and W/Z/H bosons decay themselves, production of a T quark–antiquark pair could lead to dozens of different final states. While most previous searches focused on a handful of channels at most, this new analysis is able to search for 126 different possibilities at once.

The key to classifying all the various final states is the ability to identify high-energy top quarks, Higgs bosons, and W and Z bosons that decay into jets of particles recorded by the detector. In the reference frame of the CMS detector, these particles produce wide jets that all look alike, but things look very different in a frame of reference in which the initial particle (a W, Z or H boson, or a top quark) is at rest. For example, in the centre-of-mass frame of a Higgs boson, it would appear as two well-collimated back-to-back jets of particles, whereas in the reference frame of the CMS detector the jets are no longer back-to-back and may indeed be difficult to identify as separate at all. This feature, based on special relativity, tells us how to distinguish “fat” jets originating from different initial particles.
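
The following sketch illustrates the general idea of evaluating a jet’s constituents in a boosted reference frame; it is a minimal toy with invented four-vectors, not the CMS algorithm itself:

```python
import numpy as np

def boost_to_rest_frame(p, frame):
    """Lorentz-boost four-vector p = (E, px, py, pz) into the rest frame of `frame`."""
    E, mom = frame[0], frame[1:]
    m = np.sqrt(max(E**2 - mom @ mom, 1e-12))  # invariant mass of the candidate
    beta = mom / E                             # boost velocity (candidate assumed moving)
    gamma = E / m
    bp = beta @ p[1:]
    e_prime = gamma * (p[0] - bp)
    mom_prime = p[1:] + ((gamma - 1.0) * bp / (beta @ beta) - gamma * p[0]) * beta
    return np.concatenate(([e_prime], mom_prime))

# Two collimated "subjets" inside one wide jet (e.g. a heavy boson decaying to two quarks),
# given as invented lab-frame four-vectors in GeV:
sub1 = np.array([300.0, 20.0, 10.0, 295.0])
sub2 = np.array([250.0, -5.0, 15.0, 248.0])
candidate = sub1 + sub2   # the heavy-particle candidate is the sum of its decay products

s1 = boost_to_rest_frame(sub1, candidate)
s2 = boost_to_rest_frame(sub2, candidate)

# In the candidate's rest frame the decay products emerge back-to-back; angles and momenta
# evaluated in such hypothetical frames are the kind of inputs fed to the classifier.
cos_angle = (s1[1:] @ s2[1:]) / (np.linalg.norm(s1[1:]) * np.linalg.norm(s2[1:]))
print(cos_angle)  # -1 in this two-body toy: exactly back-to-back
```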

Modern machine-learning techniques were used to train a deep neural-network classification algorithm using simulations of the expected particle decays. Several dozen properties of the jets were calculated in different hypothetical reference frames, and fed to the network, which classifies the original fat jets as coming from either top quarks, H, W or Z bosons, b quarks, light quarks, or gluons. Each event is then classified according to how many of each jet type there are in the event. The number of observed events in each category was then compared to the predicted background yield: an excess could indicate T-quark pair production.

CMS found no evidence for T-quark pair production in the 2016 data, and has excluded T-quark masses up to 1.4 TeV (figure 1). The collaboration is working on new ideas to improve the classification method and extend the search to higher masses using the four-times larger 2017 to 2018 dataset.