Alvin Tollestrup: 1924-2020

Machine maestro – Alvin Tollestrup led the pioneering work of designing and testing the superconducting magnets for the Tevatron, the first large-scale application of superconductivity. Credit: Fermilab

Alvin Tollestrup, who passed away on 9 February at the age of 95, was a visionary. When I joined his group at Caltech in the summer of 1960, experiments in particle physics at universities were performed at accelerators located on campus. Alvin had helped build Caltech’s electron synchrotron, the highest energy photon-producing accelerator at the time. But he thought more exciting physics could be performed elsewhere, and managed to get approval to run an experiment at Berkeley Lab’s Bevatron to measure a rare decay mode of the K+ meson. This was the first time an outsider was allowed to access Berkeley’s machine, much to the consternation of Luis Alvarez and other university faculty.

When I joined Alvin’s group he asked a postdoc, Ricardo Gomez, and me to design, build and test a new type of particle detector called a spark chamber. He gave us a paper by two Japanese authors on “A new type of particle detector: the discharge chamber”, not what he wanted, but a place to start. In retrospect it was remarkable that Alvin was willing to risk the success of his experiment on the creation of new technology. Alvin also asked me to design a transport system of magnetic lenses that would capture as many K mesons as possible at the “thin window” of the accelerator and guide them to our “hut” on the accelerator floor where K decays would be observed. I did my calculations on an IBM 709 at UCLA — Alvin checked them by tracing rays at his drafting table. When the beam design was completed and the chain of magnets was in place on the accelerator floor, Alvin threaded a single wire through them from the thin window to our hut.

I had no idea what he was doing, or why. Around Alvin the Zen master, I didn’t say much or ask many questions. When the magnets were turned on and current ran through the wire, it snapped to attention, tracing the path a K would follow from where it left the accelerator to where its decays would be observed. The wire floated through the magnet centres, far from their walls, marking an unobstructed path. Calculation (how much current was required in the wire) followed by testing was Alvin’s modus operandi.
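
The floating-wire trick rests on a simple equivalence, not spelled out in the text: a light wire under tension T carrying current I in a magnetic field bends with the same local curvature as a singly charged particle of momentum p, with p [GeV/c] ≈ 0.3 T[N]/I[A]. A minimal sketch, with illustrative numbers (the beam momentum and wire tension are assumptions, not values from the article):

```python
# Floating-wire analogy: a wire with tension T (newtons) carrying current I
# (amperes) in a magnetic field takes the shape of the trajectory of a singly
# charged particle of momentum p, where p [GeV/c] ~= 0.2998 * T[N] / I[A].

def wire_current(p_gev, tension_n):
    """Current needed so the wire traces the path of a particle of momentum p_gev."""
    return 0.2998 * tension_n / p_gev

# Illustrative values: a ~1 GeV/c kaon beam, wire tensioned by a ~1 kg
# weight (~9.8 N) -> a current of a few amperes.
print(wire_current(1.0, 9.8))
```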

A couple of months later in 1962, run-time arrived. All the equipment for the experiment was built and tested over a two-year period at Caltech, shipped in a moving van to the Bevatron, and assembled in our hut. We had 21 half-days to make our measurements. The proton beam inside the accelerator was steered into a tungsten target behind the thin window through which the Ks would pass. Inside the hut we waited for the scintillation counters to start clicking wildly, but there was hardly a click. In complete silence, Alvin set out to find what happened to the beam, slowly moving a scintillation counter from one magnet to the next until he reached the thin window. Finding that hardly any Ks were coming through it, Alvin asked the operator in the control room to shut the machine down and remove the thin window to expose the target — an unprecedented request that meant losing the vacuum the proton beam required. There was a long silence while the operator mentally processed the request. Several phone calls later the operator complied. With a pair of long tongs Alvin pressed a small square of dental film against the radioactive target. When developed it showed a faintly illuminated edge at the top of the target. The Bevatron surveyors had placed the target one inch below its proper position, a big mistake. But there was no panic or finger pointing, just measurement and appropriate action. That was Alvin’s style, always diplomatic with management, never asking for something without sufficient reason, and persistent. Unfortunately, we were unfairly charged a full day of running time, which Alvin chose not to contest. Not everyone at UC Berkeley was happy with outside users coming in to use “their machine,” and Alvin did not want to antagonize them.

Without his influence, I never would have discovered quarks (aces), whose existence was later definitively confirmed in deep inelastic scattering experiments.

Alvin was my first thesis advisor. When he taught me how to think about my measurements, he also taught me how to analyze and judge the measurements of others. This was essential in understanding which of the many “discoveries” of hadrons in the early 1960s were believable. Without his influence, I never would have discovered quarks (aces), whose existence was later definitively confirmed in deep inelastic scattering experiments.

Fermilab years
More than a dozen years later, true to his belief that users of accelerators should improve them, Alvin left Caltech for Fermilab where he would create the first large-scale application of superconductivity. Physics at Fermilab at that time was limited by the energy of the protons it produced: 200 GeV, which was the design energy of the laboratory’s 6.3 km-circumference Main Ring. If superconducting magnets could be built, the Main Ring’s copper magnets could be replaced, energy costs could be significantly reduced, and the energy of protons could be doubled. Furthermore, protons and antiprotons could eventually be accelerated in the same ring, traveling in opposite directions, colliding at nodes around the ring where experiments could be performed. All this without digging a new tunnel.

The Tevatron, which operated from 1983 until 2011, had more than 1000 superconducting magnets and for 25 years was the world’s most powerful collider. Credit: R Hahn/Fermilab

I went to visit Alvin shortly after he arrived at Fermilab and found him at a drafting table once more tracing rays, this time through superconducting magnets. Looking up, he told me of the magnetostrictive forces trying to tear each magnet apart, and the enormous energy stored within each one (as much energy as a one-tonne vehicle traveling at more than 100 km/h), all within a bath of liquid helium bombarded by stray high-energy protons. If a superconducting magnet “quenched” and returned to its normal state, this energy would suddenly be released and serious damage would occur. There was also the possibility of a domino effect, one magnet quenching after another.
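
The scale of the comparison can be checked directly: the kinetic energy of a one-tonne vehicle at just over 100 km/h is a few hundred kilojoules, so each magnet stored energy of that order. A quick check (standard kinematics, nothing specific to the Tevatron magnets):

```python
# Kinetic energy of a 1-tonne vehicle at 100 km/h: E = (1/2) m v^2.
m = 1000.0            # mass, kg
v = 100.0 / 3.6       # 100 km/h converted to m/s (~27.8 m/s)
E = 0.5 * m * v**2    # kinetic energy, joules
print(E)              # ~3.9e5 J, i.e. roughly 0.4 MJ per magnet
```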

With a number of ingenious inventions, always experimenting but only making one change at a time, and combining the understanding that comes from physics with the practicalities necessary for engineering, Alvin made essential contributions to the design, testing and commissioning of the superconducting magnets. When the “energy doubler”, henceforth the Tevatron, was completed in 1983, Alvin worked on converting it to a proton-antiproton collider. The collider began operation in 1987, and Alvin was the primary spokesperson for the CDF experimental collaboration from 1980 to 1992. The Tevatron was the world’s most powerful particle collider for 25 years until the LHC came along. The top quark and the tau neutrino were both discovered there. Alvin’s critical contributions to the design, construction and initial operation of the Tevatron were recognised in 1989 with a US National Medal of Technology and Innovation.

Deserved recognition
Designing robust superconducting magnets that could be mass-produced was extremely difficult. Physicists at Brookhaven working on their next-generation accelerator, Isabelle, failed, despite receiving substantially more government funding. And, ten days after the LHC was first switched on in 2008, an electrical fault in a connection between adjacent magnets caused a massive magnet quench and significant damage that closed the accelerator for several months.

The virtuosity required to create new accelerators sometimes exceeds what is necessary to run the resulting prizewinning experiments.

Alvin once told me that the Bevatron’s director, Ed Lofgren, never got the recognition he deserved. The Bevatron was designed and built to find the antiproton, and sure enough Segrè and Chamberlain found it as soon as the Bevatron was turned on. They were recognised for their discovery with a Nobel Prize, but the work Lofgren did to create the machine for them was of a higher order than that required to run their experiment. Alvin also didn’t get the recognition he deserved, and his modesty only exacerbated the problem. The virtuosity required to create new accelerators sometimes exceeds what is necessary to run the resulting prizewinning experiments.

Alvin remained a visionary all his life. For many years Richard Feynman kept a question carefully written in the upper left-hand corner of his blackboard: “Why does the muon weigh?” To help answer this question, and create a new frontier in high-energy physics, Alvin began work on a muon collider in the early 1990s, and interest in the collider has increased ever since.

There were things that I was never able to learn from Alvin. His intuition for electronics was beyond my grasp, a gift from the gods. That intuition helped him make one of the most important measurements of the 1950s. Parity violation had been discovered, but how was it violated? There were competing theories, championed by giants. The V−A theory predicted the existence of the decay π → eν̄, but this decay was not seen in two independent experiments, by Jack Steinberger in 1955 and Herb Anderson in 1957. As a testimony to the difficulty of this measurement, both Steinberger and Anderson were outstanding experimentalists and students of Fermi. Steinberger later shared the Nobel Prize for demonstrating that the electron and muon each have their own neutrinos. Alvin, with his knowledge of how photomultipliers worked, discovered a flaw in one of the experiments and, with collaborators at CERN, went on to find the decay at the predicted rate, validating the V−A theory of the weak interactions.

Alvin did not suffer fools gladly, but outside of work he created a community of collaborators, an extended family. He fed and entertained us; his pitchers of martinis and platters of whole hams are memorable. When I was a child, my parents took me to a traveling circus where we saw the tight-rope performer Karl Wallenda, who had an incredible high-wire act. Wallenda is quoted as saying, “Life is on the wire. The rest is waiting.” Alvin showed us how to have fun while waiting, and shared a long and phenomenal life with us, both off, and especially on, the high wire.

MICE demonstrates muon cooling

The MICE facility at the ISIS source.

Particle physicists have long coveted the advantages of a muon collider, which could offer the precision of a LEP-style electron–positron collider without the energy limitations imposed by synchrotron-radiation losses. The clean neutrino beams that could be produced by bright and well-controlled muon beams could also drive a neutrino factory. In a step towards demonstrating the technical feasibility of such machines, the Muon Ionisation Cooling Experiment (MICE) collaboration has published results showing that muon beams can be “cooled” in phase space.

“Muon colliders can in principle reach very high centre-of-mass energies and luminosities, allowing unprecedented direct searches of new heavy particles and high-precision tests of standard phenomena,” says accelerator physicist Lenny Rivkin of the Paul Scherrer Institute in Switzerland, who was not involved in the work. “Production of bright beams of muons is crucial for the feasibility of these colliders and MICE has delivered a detailed characterisation of the ionisation-cooling process – one of the proposed methods to achieve such muon beams. Additional R&D is required to demonstrate the feasibility of such colliders.”

MICE has delivered a detailed characterisation of the ionisation-cooling process

Lenny Rivkin

The potential benefits of a muon collider come at a price, as muons are unstable and much harder to produce than electrons. This imposes major technical challenges and, not least, a 2.2 µs stopwatch on accelerator physicists, who must manipulate the muons quickly and accelerate them into the relativistic regime, where time dilation extends their laboratory lifetime. MICE has demonstrated the essence of a technique called ionisation cooling, which squeezes the watermelon-sized muon bunches created by smashing protons into targets into a form that can be fed into the accelerating structures of a neutrino factory, or into the more advanced subsequent cooling stage required for a muon collider – all on a time frame short compared to the muon lifetime.
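
The “2.2 µs stopwatch” is softened once the muons are relativistic: the laboratory-frame lifetime is γτ₀ and the mean decay length is γβcτ₀. A rough sketch, with illustrative beam energies (the energies are assumptions, not values from the article):

```python
import math

TAU0 = 2.197e-6        # muon proper lifetime, s
M_MU = 0.10566         # muon mass, GeV
C = 299_792_458.0      # speed of light, m/s

def decay_length(e_gev):
    """Mean lab-frame decay length (m) of a muon of total energy e_gev (GeV)."""
    gamma = e_gev / M_MU
    beta = math.sqrt(1.0 - 1.0 / gamma**2)
    return gamma * beta * C * TAU0

# Illustrative: at ~0.2 GeV total energy a muon travels about a kilometre
# on average; at 200 GeV, well over a thousand kilometres.
for e in (0.2, 2.0, 200.0):
    print(e, decay_length(e))
```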

An alternative path to a muon collider or neutrino factory is the recently proposed Low Emittance Muon Accelerator (LEMMA) scheme, whereby a naturally cool muon beam would be obtained by capturing muon–antimuon pairs created in electron–positron annihilations.

Playing it cool

Based at the Rutherford Appleton Laboratory (RAL) in the UK, and two decades in the making, MICE set out to reduce the spatial extent – more precisely, the otherwise approximately conserved phase-space volume – of a muon beam by passing it through a low-Z material while tightly focused, and then restoring the lost longitudinal momentum in such a way that the beam remains bunched and matched. This is only possible in low-Z materials, where multiple scattering is small compared to energy loss via ionisation. The few-metre-long MICE facility, which precisely measured the phase-space coordinates of individual muons upstream and downstream of the absorber (see figure), received muons generated by intercepting the proton beam of the ISIS facility with a cylindrical titanium target. The absorber was either liquid hydrogen in a tank with thin windows or solid lithium hydride, in both cases surrounded by coils to achieve the necessary tight focus and maximise transverse cooling.

MICE Nature figure

A full muon-ionisation cooling channel would work by progressively damping the transverse momentum of muons over multiple cooling cells while restoring lost longitudinal momentum in radio-frequency cavities. However, due to issues with the spectrometer solenoids and the challenges of integrating the four-cavity linac module with the coupling coil, explains spokesperson Ken Long of Imperial College London, MICE adopted a simplified design without cavities. “MICE has demonstrated ionisation cooling,” says Long. The next issues to be addressed, he says, are to demonstrate the engineering integration of a demonstrator in a ring, cooling down to the lower emittances needed at a muon collider, and investigations into the effect of bulk ionisation on absorber materials. “The execution of a 6D cooling experiment is feasible – and is being discussed in the context of the Muon Collider Working Group.”

Twists and turns

The MICE experiment took data during 2017 and the collaboration confirmed muon cooling by observing an increased number of “low-amplitude” muons after the passage of the muon beam through an absorber. In this context, the amplitude is an additive contribution to the overall emittance of the beam, with a lower emittance corresponding to a higher density of muons in transverse phase space. The feat presented some extraordinary challenges, says MICE physics coordinator Chris Rogers of RAL. “We constructed a densely packed 12-coil and three-cryostat magnet assembly, with up to 5 MJ of stored energy, which was capable of withstanding 2 MN inter-coil forces,” he says. “The muons were cooled in a removable 22-litre vessel of potentially explosive liquid hydrogen contained by extremely thin aluminium windows.” The instrumentation developed to measure the correlations between the phase-space coordinates introduced by the solenoidal field is another successful outcome of the MICE programme, says Rogers, making a single-particle analysis possible for the first time in an accelerator-physics experiment.

“We started MICE in 2000 with great enthusiasm and a strong team from all continents,” says MICE founding spokesperson Alain Blondel of the University of Geneva. “It has been a long and difficult road, with many practical novelties to solve, however the collaboration has held together with exceptional resilience and the host institution never failed us. It is a great pride to see the demonstration achieved, just at a time when it becomes evident to many new people that we must include muon machines in the future of particle physics.”

Renewed doubt cast on origin of fast radio bursts

The Canadian Hydrogen Intensity Mapping Experiment (CHIME) is one of several radio telescopes scouring the sky for fast radio bursts.

Fast radio bursts (FRBs) are a relatively new mystery within astrophysics. Around 100 of these intense few-millisecond bursts of radio waves have been spotted since the first detection in 2007, and hardly anything is known about their origin. Thanks to close collaboration between different radio facilities and lessons learned from the study of previous astrophysical mysteries such as quasars, our understanding of these phenomena is evolving rapidly. During the past year or so, several FRBs have been localised in different galaxies, strongly suggesting that they are extragalactic. A newly published FRB measurement, however, casts doubt on their underlying origin.

As recently as one year ago, only a few tens of FRBs had been measured. One of these was of particular interest because, unlike the single-event nature of all other known FRBs, it produced several radio signals within a short time scale – earning it the nickname “the repeater”. This could imply that while all other FRBs were the result of some type of cataclysmic event, the repeater was an altogether different source which just happened to produce a similar signal. Adding to the intrigue, measurements also showed it to lie in a rather peculiar low-metallicity dwarf galaxy, close to the supermassive black hole within this host galaxy.

Much has happened in the field of FRBs since then, mainly thanks to data from new facilities such as ASKAP in Australia, CHIME in Canada (pictured above), and FAST in China. A number of new FRBs have been detected, including nine more repeaters. Additionally, the new range of facilities has allowed for more detailed location measurements, including some for non-repeating FRBs, which are more challenging to localise due to their unpredictable occurrence. Since non-repeating bursts were found to be in more conventional galaxies than that of the repeater, a different origin for the two types of FRBs seemed the more likely explanation.

The new repeating fast radio burst (red circle) was traced to a star-forming region in the arm of a fairly ordinary spiral galaxy, unlike the previous localisation of the first repeater.

The latest localisation measurement of an FRB, using data from CHIME and subsequent triangulation via eight radio telescopes of the European VLBI network, throws this theory into question. Writing in Nature, the international team found that another repeater was not only the closest FRB found to date (at a distance of 500 million light years), but was located in a star-forming region of a galaxy not that different from the Milky Way – an environment very different from that of the other localised repeating FRB. This precise localisation, which allowed astronomers to pinpoint the source within an area just seven light years across, indicates that extreme environments are not required for repeating FRBs. Additionally, some of the repeated signals from this source were so weak that similar bursts could not have been detected from any of the non-repeating FRBs, which all lie at larger distances. This casts doubt on the idea of two distinct classes of FRBs: the non-repeaters could simply be too far away for some of their signals to reach us.
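
The quoted numbers imply a remarkably fine angular resolution: a seven-light-year region at 500 million light years subtends only a few milliarcseconds. A small-angle estimate using only the figures in the text:

```python
import math

size_ly = 7.0          # localisation region, light years (from the text)
dist_ly = 5.0e8        # distance to the source, light years (from the text)

theta_rad = size_ly / dist_ly                                # small-angle approximation
theta_mas = theta_rad * (180.0 / math.pi) * 3600.0 * 1000.0  # radians -> milliarcseconds
print(theta_mas)       # ~2.9 mas, the regime of VLBI astrometry
```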

Although these latest findings give new insights into the quickly evolving field of FRBs, it is clear that more measurements are required. The new radio facilities will soon make population studies possible. Such studies previously answered many questions in the fields of gamma-ray bursts and quasars, which in their early stages showed large similarities with the current state of FRB research. Population studies could show whether one of the two vastly different environments in which the two repeaters were found is simply a peculiarity, or whether FRBs can be produced in a range of different environments. Additionally, studies of burst intensities and the distances of their origins should show whether repeaters and non-repeaters differ only because of their distance.

Japanese scientists identify priorities

Illustration of the proposed International Linear Collider.

The International Linear Collider (ILC), currently under consideration for hosting in the Tohoku region of Japan, has not been selected as a high-priority project in the country’s 2020 “master plan” for large research projects. The master plan, which is compiled every three years, was announced on 30 January by the Science Council of Japan (SCJ). Among the 31 projects which did make it onto the high-priority list were the Super-B factory at KEK, the KAGRA gravitational-wave laboratory and an upgrade of the J-PARC facility.

“Even though the ILC did not go into the final shortlist, it was selected as one of the projects that went to the hearing stage indicating that the scientific merit of the ILC was recognized by the committee,” said ILC director Shin Michizono. “This allows the ILC project to move to the next phase.”

In 2012, physicists in Japan submitted a petition to the Japanese government to host the ILC, an electron–positron collider serving as a Higgs-factory. A technical design report was published the following year and, in 2017, the original ILC design was revised to reduce its centre-of-mass energy by half (to 250 GeV), shortening the machine by around a third. In 2018, the International Committee for Future Accelerators (ICFA) issued a statement of support for the project, but in March last year, Japan’s Ministry of Education, Culture, Sports, Science and Technology (MEXT) announced that it has “not yet reached declaration” for hosting the ILC and that the project “requires further discussion in formal academic decision-making processes such as the SCJ master plan”.

The important thing is that discussions on how to share the burden start soon.

Lyn Evans

At a press conference held on 31 January, the minister for MEXT, Koichi Hagiuda, responded positively to the contents of the SCJ document. “This has been put together from the viewpoint of people representing the academic community, and we believe that it will serve as a reference for future discussions within the government. Being an international project, the ILC project requires broad support from both inside and outside the country. In light of the outcome of the Master Plan 2020, and observing the progress of other discussions such as the European Strategy for Particle Physics, we would like to carefully carry forward the discussions.”

Naokazu Takemoto, minister of state for science and technology policy in the Japanese government’s cabinet office, said: “To put it simply, the project made it through the first round of evaluations, and there were about 60 such projects. In the second round, 31 projects were selected, and the ILC was not among them. However, this is a viewpoint of the Science Council. When considering the possibilities going forward, MEXT will look at high-priority research topics, and I hear that the ILC will be included in the list of these topics.” Responding to a question about the cost of the ILC, Takemoto continued: “The cost is to be shared among many countries, but some say that Japan needs to shoulder most of it. Even if these are the presumptions, I personally think we should strongly ask for realizing the project. It will effectively contribute to regional revitalisation. It will give back hope to people who have suffered greatly by the [damage caused by a tsunami in 2011]. Furthermore, it will give Japan’s technology an advantage to have an important share in the area of the world’s scientific research.”

MEXT representatives are expected to update the community on 20 February during the 85th meeting of ICFA at SLAC National Laboratory in the US.

“It is no surprise that the ILC is not on the SCJ list,” says Lyn Evans, director of the Linear Collider Collaboration.  “It is of a different order of magnitude to any other project the committee considered. It also requires broad international collaboration. The important thing is that discussions on how to share the burden start soon.”

Bad Honnef strategy session concludes

Following a week of discussions, the European Strategy Group has released a statement reporting convergence on recommendations to guide the future of high-energy physics in Europe. The 60-or-so delegates, among them scientific representatives from each of CERN’s member and associate-member states, directors and representatives of major European laboratories and organisations, and invitees from outside Europe, now return home. Their recommendations will be presented to the CERN Council in March and made public at an event in Budapest, Hungary, on 25 May.

Statement from the European Strategy Group after the Bad Honnef drafting meeting, 25 January

The drafting session of the European Strategy Group preparing the next European Particle Physics Strategy Update took place in Bad Honnef, Germany, from 21 to 25 January 2020. After a week of fruitful discussions involving senior figures of European and international particle physics, convergence was achieved on recommendations that will guide the future of the field.

The drafting session marks a key stage of the strategy update process. The attendees of the Bad Honnef drafting session successfully carried out their ambitious task of identifying a set of priorities and recommendations. They built on the impressive progress made since the last update of the European Strategy for Particle Physics, in 2013, and the rich input received from the entire particle physics community in the current update process.

The next step in this process will be to submit the document outlining the recommendations to the CERN Council. It will be discussed by the Council in March and submitted for final approval at an extraordinary Council Session on 25 May, in Budapest, Hungary. Once approved, it can be made public.

The European Strategy Group

50 years of the GIM mechanism

GIM originators 50 years on

In 1969 many weak amplitudes could be accurately calculated with a model of just three quarks, and Fermi’s constant and the Cabibbo angle to couple them. One exception was the remarkable suppression of strangeness-changing neutral currents. John Iliopoulos, Sheldon Lee Glashow and Luciano Maiani boldly solved the mystery using loop diagrams featuring the recently hypothesised charm quark, making its existence a solid prediction in the process. To celebrate the fiftieth anniversary of their insight, the trio were guests of honour at an international symposium at the T. D. Lee Institute at Shanghai Jiao Tong University on 29 October, 2019.

The UV cutoff needed in the three-quark theory became an estimate of the mass of the fourth quark

The Glashow–Iliopoulos–Maiani (GIM) mechanism was conceived in 1969, submitted to Physical Review D on 5 March 1970, and published on 1 October of that year, after several developments had defined a conceptual framework for electroweak unification. These included Yang–Mills theory, the universal V−A weak interaction, Schwinger’s suggestion of electroweak unification, Glashow’s definition of the electroweak group SU(2)L×U(1)Y, Cabibbo’s theory of semileptonic hadron decays and the formulation of the leptonic electroweak gauge theory by Weinberg and Salam, with spontaneous symmetry breaking induced by the vacuum expectation value of new scalar fields. The GIM mechanism then called for a fourth quark, charm, in addition to the three introduced by Gell-Mann, such that the first two blocks of the electroweak theory are each made of one lepton doublet and one quark doublet: [(νe, e), (u, d)] and [(νµ, µ), (c, s)]. Quarks u and c are coupled by the weak interaction to two superpositions of the quarks d and s: u ↔ dC, with dC the Cabibbo combination dC = cosθC d + sinθC s, and c ↔ sC, with sC the orthogonal combination. In subsequent years a third generation, [(ντ, τ), (t, b)], was predicted to describe CP violation. No further generations have been observed yet.

Problem solved

The GIM mechanism was the solution to a problem arising in the simplest weak-interaction theory with one charged vector boson coupled to the Cabibbo currents. As pointed out in 1968, strangeness-changing neutral-current processes, such as KL → µ+µ− and K0–K̄0 mixing, are generated at one loop with amplitudes of order G sinθC cosθC (GΛ²), where G is the Fermi constant, Λ is an ultraviolet cutoff, and the dimensionless combination GΛ² is the first term in a perturbative expansion that could be continued to take higher-order diagrams into account. To comply with the strict limits existing at the time, one had to require a surprisingly small value of the cutoff, Λ, of 2–3 GeV, to be compared with the naturally expected value Λ = G−1/2 ≈ 300 GeV. This problem was taken seriously by the GIM authors, who wrote that “it appears necessary to depart from the original phenomenological model of weak interactions”.
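
The “naturally expected” cutoff follows directly from the Fermi constant: Λ = G^(−1/2) with G ≈ 1.17 × 10⁻⁵ GeV⁻² gives roughly 300 GeV. A one-line check:

```python
# Natural UV cutoff scale implied by the Fermi constant: Lambda = G^(-1/2).
G_F = 1.166e-5           # Fermi constant, GeV^-2
cutoff = G_F ** -0.5     # GeV
print(cutoff)            # ~293 GeV, the ~300 GeV quoted in the text
```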

GIM mechanism Feynman diagrams

To sidestep this problem, Glashow, Iliopoulos and Maiani brought in the fourth “charm” quark, already introduced by Bjorken, Glashow and others, with its typical coupling to the quark combination left alone in the Cabibbo theory: c ↔ sC = −sinθC d + cosθC s. Amplitudes for s → d with u or c on the same fermion line cancel exactly for mc = mu, suggesting a natural means to suppress strangeness-changing neutral-current processes to measured levels. For mc >> mu, a residual neutral-current effect remains which, by inspection and for dimensional reasons, is of order G sinθC cosθC (Gmc²). This was a real surprise: the “small” UV cutoff needed in the simple three-quark theory became an estimate of the mass of the fourth quark, which was indeed sufficiently large to have escaped detection in the unsuccessful searches for charmed mesons conducted in the 1960s. With the two quark doublets included, a detailed study of strangeness-changing neutral-current processes gave mc ∼ 1.5 GeV, a value consistent with more recent data on the masses of charmed mesons and baryons. Another aspect of the GIM cancellation is that the weak charged currents form an SU(2) algebra together with a neutral component that has no strangeness-changing terms. Thus there is no difficulty in including the two quark doublets in the unified electroweak group SU(2)L×U(1)Y of Glashow, Weinberg and Salam. The 1970 GIM paper noted that “in contradistinction to the conventional (three-quark) model, the couplings of the neutral intermediary – now hypercharge conserving – cause no embarrassment.”
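
The cancellation for mc = mu can be made concrete numerically: the s → d amplitude via a u-quark line carries a factor cosθC sinθC, while via a c-quark line it carries −sinθC cosθC, so the two terms cancel because the Cabibbo rotation is orthogonal. A small sketch (the value of the angle is approximate and purely illustrative):

```python
import math

theta_c = 0.227                      # Cabibbo angle in radians (approximate)
c, s = math.cos(theta_c), math.sin(theta_c)

# Cabibbo rotation acting on (d, s):
#   dC =  c*d + s*s_quark   (couples to u)
#   sC = -s*d + c*s_quark   (couples to c)
# Strangeness-changing s -> d amplitude, u-line vs c-line (equal masses):
amp_u = c * s                        # via the u quark
amp_c = -s * c                       # via the c quark

print(amp_u + amp_c)                 # 0.0: exact GIM cancellation
print(c**2 + s**2)                   # 1.0: flavour-conserving couplings unchanged
```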

The GIM mechanism has become a cornerstone of the Standard Model, giving a precise description of the observed flavour-changing neutral-current processes for s and b quarks. For this reason, flavour-changing neutral currents remain an important benchmark and give strong constraints on theories that go beyond the Standard Model in the TeV region.

Astroparticle physicists head down under

Yvonne Wong at TeVPA 2019

Despite the thick haze of bushfire smoke hanging over the skyline, 200 delegates gathered in Sydney from 2 to 6 December for the 14th edition of the TeV Particle-Astrophysics conference (TeVPA), to discuss the status and future of astroparticle physics.

The week began with a varied series of talks on dark matter. Luca Grandi (Chicago) and Tom Thorpe (LNGS) updated delegates on progress towards the next generation of xenon- and argon-based experiments: these massive underground detectors are now approaching total masses at the multi-tonne scale. Experiments like XENON, LZ and DarkSide are poised to be so sensitive to rare signals that they will even be able to detect coherent elastic neutrino–nucleus scattering – the ultimate background to direct dark-matter searches. Meanwhile, Greg Lane (Australian National University) brought news of exciting developments in Australian dark-matter research. The Stawell Underground Laboratory, the first deep-underground site in the southern hemisphere, will host part of the SABRE experiment, which aims to test the annually modulating event rate seen by the DAMA experiment. This highly controversial, dark-matter-like signal has been observed for two decades by DAMA, but remains in irreconcilable tension with null results from many other experiments. Excavation at Stawell has been underway since October last year. The site will form a central component of the Centre of Excellence for Dark Matter Particle Physics, recently awarded by the Australian Research Council.

Galaxies can be used as laboratories for particle physics

Eminent astrophysicist Joe Silk (IAP) reviewed the many ways in which galaxies can be used as laboratories for particle physics. One of the most persistent hints of dark-matter particle interactions in astrophysical data is the notorious excess of GeV gamma rays coming from the galactic centre. Recent analyses of the excess using improved statistical techniques and better models for the Milky Way’s central bulge were detailed by Shunsaku Horiuchi (Virginia Tech). While dark-matter-related explanations remain tempting, there is growing evidence in support of millisecond pulsars being responsible, given the spatial morphology of the excess. Francesca Calore (LAPTh) told us that multi-wavelength probes of the excess will be possible in the near future, and may finally allow us to conclusively determine the origin of the signal.

Probing the cosmos

Delegates enjoyed a stirring series of talks on the ever-increasing number of probes of cosmology. Following a review of the post-Planck status of cosmology by Jan Hamann (UNSW), Xuelei Chen (CAS) explained how the unique 21 cm radio line can be used to map neutral hydrogen throughout the universe and across cosmic time. A host of upcoming ground- and space-based experiments attempting to observe the sky-averaged 21 cm line will hopefully allow us to peer back to the birth of the first stars at “cosmic dawn”. We also heard from Yvonne Wong (UNSW) about how cosmological data can be used to test neutrino physics, and how neutrino physics may in turn be a means to alleviate tensions between cosmological datasets. For example, strong self-interactions between neutrinos could bring the two increasingly divergent measurements of the Hubble constant, from the cosmic microwave background and type-Ia supernovae respectively, into agreement.

The 21 cm radio line can be used to map neutral hydrogen throughout the universe and across cosmic time

Much of the week’s schedule was devoted to cosmic-ray research, gamma rays and indirect searches for dark matter. The antimatter cosmic-ray detector AMS, mounted on the International Space Station, is making measurements of cosmic-ray spectra to within 1% accuracy. Weiwei Xu (Shandong) summarised an impressive array of physics results made over almost a decade by AMS, including the most recent measurement of the positron flux, which has a clear high-energy component with a well-defined cutoff at 810 GeV – just as expected for galactic dark-matter annihilations. As with the GeV gamma-ray excess, however, pulsars represent a possible natural astrophysical explanation. The mystery could be resolved by the fact that, unlike pulsars, dark-matter annihilations are expected to produce antiprotons. While current antiproton data show a tantalisingly similar trend to the positron spectrum, more data is needed to identify the origin of the high-energy positrons. Many ongoing and upcoming observatories in the fields of cosmic-ray and gamma-ray research were also introduced to us, such as DAMPE (Jingjing Zang, CAS), the Cherenkov Telescope Array (Roberta Zanin, CTAO), the Pierre Auger Observatory (Bruce Dawson, U. Adelaide) and LHAASO (Zhen Cao, CAS). We are entering an exciting time when many of the enticing but ambiguous anomalies in cosmic-ray spectra will be definitively tested, potentially identifying a signal of dark matter in the process.

Gamma-ray bursts (GRBs) generated much enthusiasm this year, with Edna Ruiz-Velasco (MPIK) and Elena Moretti (IFAE) talking about brand-new observations of GRBs from the H.E.S.S. and MAGIC collaborations, including the first detection of a GRB afterglow at very high energies (>100 GeV) by H.E.S.S. These observations have helped resolve long-standing mysteries surrounding the complex array of processes that are needed to produce the phenomenal energies of GRB emission. An important contribution is now known to be “synchrotron self-Compton” emission, in which a synchrotron photon generated by an electron spiralling around a magnetic field line is Compton up-scattered by the same electron that produced it.

Many well-motivated theories of modified gravity are now finding little room to hide

Finally, the subject of gravitational waves continues to surge in popularity within this community. We were first given a summary by Susan Scott (Australian National University) of the over 50 confirmed gravitational-wave discoveries made by Advanced LIGO and Advanced Virgo to date, and heard from Tara Murphy (Sydney) about the intense work involved in rapidly following up luminous gravitational-wave events with radio observations. LIGO’s discoveries of neutron-star and black-hole mergers are a window into one of the strongest regimes of gravity we have ever been able to see. With general relativity still holding up as robustly as ever, many well-motivated theories of modified gravity are now finding little room to hide.

The next TeVPA will take place in late October 2020 in Chengdu, China.

Linacs pushed to the limit in Chamonix

This past June in Chamonix, CERN hosted the 12th edition of an international workshop dedicated to the development and application of high-gradient and high-frequency linac technology. These technologies are making accelerators more compact, less expensive and more efficient, and broadening their range of applications. The workshop brought together over 70 scientists and engineers involved in a wide range of accelerator applications, with a common interest in the use and development of normal-conducting radio-frequency cavities with very high accelerating gradients, ranging from around 50 MV/m to above 100 MV/m.

Applications for high-performance linacs such as these include the Compact Linear Collider (CLIC), compact XFELs and inverse-Compton-scattering photon sources, medical accelerators, and specialised devices such as radio-frequency quadrupoles, transverse deflectors and energy-spread linearisers. In recent years the latter two devices have become essential to achieving low emittances and short bunch lengths in high-performance electron linacs of many types, including superconducting linacs. In the coming years, developments from the high-gradient community will increase the energy of beams in existing facilities through retrofit programmes, for example in an energy upgrade of the FERMI free-electron laser. In the medium term, a number of new high-gradient linacs are being proposed, such as the room-scale X-ray source SMART*LIGHT, the linac for EUPRAXIA, a research facility for advanced accelerator concepts, and a linac to inject electrons into CERN’s Super Proton Synchrotron for a dark-matter search. The workshop also covered fundamental studies of the very complex physical effects that limit the achievable gradients, such as vacuum arcing, which is one of the main limitations for future technological advances.

Vacuum arcing is one of the main limitations for future technological advances

Originated by the CLIC study, the workshop series has grown in focus to encompass high-gradient radio-frequency design, precision manufacture, assembly, power sources, high-power operation and prototype testing. It is also notable for its strong industrial participation, and plays an important role in broadening the applications of linac technology by highlighting upcoming hardware to companies. The next workshop in the series will be hosted jointly by SLAC and Los Alamos and will take place on the shore of Lake Tahoe from 8 to 12 June.

Space–time symmetries scrutinised in Indiana

The eighth CPT and Lorentz Symmetry meeting

The space–time symmetries of physics demand that experiments yield identical results under continuous Lorentz transformations – rotations and boosts – and under the discrete CPT transformation (the combination of charge conjugation, parity inversion and time reversal). The Standard-Model Extension (SME) provides a framework for testing these symmetries by including all operators that break them in an effective field theory. The first CPT and Lorentz Symmetry meeting, in Bloomington, Indiana, in 1998, featured the first limits on SME coefficients. Last year’s event, the 8th in the triennial series, brought 100 researchers together from 12 to 16 May 2019 at the Indiana University Center for Spacetime Symmetries, to sample a smorgasbord of ongoing SME studies.

Most physics is described by operators of mass dimension three or four that are quadratic in the conventional fields. For example, the Dirac lagrangian contains an operator ψ̄∂̸ψ (mass dimension 3/2 + 1 + 3/2 = 4) and an operator ψ̄ψ (mass dimension 3/2 + 3/2 = 3), the latter controlled by an additional mass coefficient. The search for fundamental symmetry violations, however, may need to employ operators of higher mass dimension and higher order in the fields. One example is the Lorentz-breaking lagrangian-density term (kVV)μν(ψ̄γμψ)(ψ̄γνψ), which is quartic in the fermion field ψ. The coefficient kVV carries units of GeV−2 and controls the operator, which has mass dimension six. Searches for Lorentz-symmetry breaking seek nonzero values for coefficients like kVV. In the 21 years since the first CPT meeting, theoretical studies have uncovered how to write down the myriad operators that describe hypothetical Lorentz violations in both flat and curved space–times. Meanwhile, experiments in particle physics, atomic physics, astrophysics and gravitational physics continue to place exquisitely tight bounds on the SME coefficients, motivated by the intriguing prospect of finding a crack in the Lorentz symmetry of nature.
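The dimension counting quoted above can be made explicit in natural units (a standard bookkeeping exercise, spelled out here for clarity):

```latex
% A fermion field carries mass dimension [\psi] = 3/2, a derivative [\partial] = 1:
[\bar\psi \slashed{\partial} \psi] = \tfrac{3}{2} + 1 + \tfrac{3}{2} = 4, \qquad
[(\bar\psi\gamma^\mu\psi)(\bar\psi\gamma^\nu\psi)] = 4 \times \tfrac{3}{2} = 6.
% The action S = \int d^4x\, \mathcal{L} is dimensionless and [d^4x] = -4,
% so \mathcal{L} must have dimension 4, and the coefficient of a
% dimension-six operator must have dimension 4 - 6 = -2, i.e. GeV^{-2}.
```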

The SME has revealed uncharted territory that requires theoretical and experimental expertise to navigate

Comparisons between matter and antimatter offer rich prospects for testing Lorentz symmetry, because individual SME coefficients can be isolated. The AEgIS, ALPHA, ASACUSA, ATRAP, BASE and gBAR collaborations at CERN, as well as ones at other institutions, are working to develop the challenging technology for such tests. Several presenters discussed Penning traps – devices that confine charged particles in a static electromagnetic field – for storing and mixing the ingredients for antihydrogen, the production of antihydrogen, spectroscopy for the hyperfine and 1S–2S transitions, and the prospects for interferometric measurements of antimatter acceleration. The commissioning of ELENA, CERN’s 30 m-circumference antiproton deceleration ring, promises larger quantities of relatively slow-moving antiprotons in support of this work.

Lorentz violation can occur independently in each sector of the particle world, and participants discussed existing and future limits on SME coefficients based on the muon g−2 experiment at Fermilab, neutrino oscillations at Daya Bay in China, kaon oscillations in Frascati, and positronium decay using the Jagiellonian PET detector, to name a few. Dozens of Lorentz-symmetry tests have probed the photon sector of the SME with table-top devices such as atomic clocks and resonant cavities, and with astrophysical polarisation measurements of sources such as active galactic nuclei, which leverage vast distances to limit cumulative effects such as the rotation of a polarisation angle. In the gravity sector, SME coefficient bounds were presented from the 2015 gravitational-wave detection by the LIGO collaboration, as well as from observations of pulsars, cosmic rays and other phenomena with signals that are proportional to the travel distance. Symmetry-breaking signals are also sought in matter–gravity interactions with test masses, and here CPT’19 included discussions of short-range spin-dependent gravity and neutron-interferometry physics.

The SME has revealed uncharted territory that requires theoretical and experimental expertise to navigate. CPT’19 showed that there is no shortage of physicists with the adventurous spirit to explore this frontier further.

Hyper-active neutrino physicists visit London

The sixth edition of Prospects in Neutrino Physics (NuPhys19) attracted almost 100 participants to the Cavendish Conference Centre in London from 16 to 18 December. Jointly organised by King’s College London and the Institute for Particle Physics Phenomenology at Durham University, the conference provides a much-needed snapshot of the fast-moving field of neutrino physics.

The neutrino community’s current challenge is to understand the origin of neutrino masses and lepton mixing. This means establishing whether neutrinos are Dirac or Majorana fermions, determining their absolute mass scale, the ordering of the measured mass splittings (the neutrino mass ordering), whether there is leptonic CP violation, and the precise values of the other parameters in the neutrino mixing matrix, and, finally, searching for any indication of physics beyond the standard three-neutrino paradigm, for example through the detection of sterile neutrinos.

Construction of the Hyper-Kamiokande experiment will begin in 2020

2015 Nobel laureate Takaaki Kajita (University of Tokyo) opened the conference by confirming that construction of the Hyper-Kamiokande experiment will begin in 2020, following the allocation by the Japanese government of a supplementary budget on 13 December. Hyper-Kamiokande will be a water-Cherenkov detector with a total mass of 260 kton — almost an order of magnitude larger than its famous predecessor Super-Kamiokande, where atmospheric neutrino oscillations were discovered, and far larger than KamiokaNDE, which observed solar neutrinos and supernova SN1987A. Hyper-Kamiokande will eventually replace Super-Kamiokande as the far detector for the upgraded J-PARC neutrino beam, which is situated on the far side of Japan (essentially a comprehensive upgrade of the T2K experiment), with the aim of measuring CP violation in the leptonic sector. It will also provide high statistics for proton-decay searches, supernova neutrino bursts, atmospheric and solar neutrinos, and indirect searches for dark matter. Hyper-Kamiokande will therefore soon join DUNE in the US as a next-generation long-baseline neutrino-oscillation experiment under construction. Together the detectors will provide a far wider coverage of physics signals than either could manage alone.

Critical mass

News of KATRIN’s record-breaking new upper limit on the electron-antineutrino mass was complemented by a report by Joseph Formaggio (MIT) on the successful “Project 8” demonstration in the US of a new approach to directly measuring neutrino masses, wherein the energies of beta-decay electrons are determined from the frequency of cyclotron radiation as the electrons spiral in a magnetic field. This work will be complemented by the JUNO experiment in China, which in 2021 will begin to constrain the ordering of the neutrino-mass eigenvalues.
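The frequency-to-energy conversion underlying this technique follows from the relativistic cyclotron relation alone. The sketch below illustrates the principle; the function name and the 1 T trap field are illustrative choices, not Project 8’s actual configuration.

```python
import math

# CODATA values (SI units)
E_CHARGE = 1.602176634e-19      # elementary charge, C
M_ELECTRON = 9.1093837015e-31   # electron mass, kg
ME_C2_EV = 510998.95            # electron rest energy, eV

def cyclotron_frequency(kinetic_energy_ev: float, b_field_t: float) -> float:
    """Relativistic cyclotron frequency f = eB / (2*pi*gamma*m_e), in Hz.

    The Lorentz factor gamma encodes the electron's kinetic energy, so a
    precise measurement of f (at known B) determines the energy -- the
    idea behind cyclotron-radiation emission spectroscopy.
    """
    gamma = 1.0 + kinetic_energy_ev / ME_C2_EV
    return E_CHARGE * b_field_t / (2.0 * math.pi * gamma * M_ELECTRON)

# Example: an electron near the tritium beta-decay endpoint (~18.6 keV)
# in an illustrative 1 T field radiates in the microwave band (~27 GHz)
f = cyclotron_frequency(18.6e3, 1.0)
print(f"{f / 1e9:.2f} GHz")
```

Because the frequency falls as the energy rises, a small energy shift near the endpoint maps to a measurable frequency shift, which is why the method can reach the precision needed for a neutrino-mass measurement.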

The search for neutrinoless double-beta decay also has the potential to provide information on neutrino masses. Such a decay would be a potentially unambiguous indication of lepton-number violation and of the postulated Majorana nature of neutrinos, and it is being pursued aggressively as experiments compete to reduce backgrounds and increase detector masses to the tonne scale. Several talks emphasised the complementary progress by the theory community to better estimate nuclear effects, and to reduce the errors arising from the differences between nuclear models and between isotopes. These calculations are equally important for NOvA and T2K, which is now beginning to probe leptonic CP conservation at the 3σ level.

The cosmological upper limit on the sum of neutrino masses could be relaxed upwards

Current and future cosmological constraints on neutrino properties were reviewed by Eleonora Di Valentino (Manchester), whose recent work with Alessandro Melchiorri and Joe Silk reinterprets Planck-satellite data to favour a closed universe at more than 99% significance – an inference which, if accepted by the community, could lead to the current cosmological upper limit on the sum of neutrino masses being relaxed upwards. Conversely, neutrinos are also powerful tools for studying astrophysical objects. One key development in this field is the doping of Super-Kamiokande with gadolinium, currently underway in Japan, which will soon give the detector sensitivity to the diffuse supernova-neutrino background.

The next edition of NuPhys will take place in London from 16 to 18 December 2020.
