

HL-LHC superconducting quadrupole successfully tested

The quadrupole magnet being prepared for a test at Brookhaven National Laboratory. Credit: Brookhaven National Laboratory

A quadrupole magnet for the high-luminosity LHC (HL-LHC) has been tested successfully in the US, attaining a conductor peak field of 11.4 T – a record for a focusing magnet ready for installation in an accelerator. The 4.2-m-long, 150-mm-single-aperture device is based on the superconductor niobium tin (Nb3Sn) and is one of several quadrupoles being built by US labs and CERN for the HL-LHC, where they will squeeze the proton beams more tightly within the ATLAS and CMS experiments to produce a higher luminosity. The result follows successful tests carried out last year at CERN of the first accelerator-ready Nb3Sn dipole magnet, and both milestones are soon to be followed by tests of further 7.2-m and 4.2-m quadrupole magnets at CERN and in the US.

“This copious harvest comes after significant recent R&D on niobium-tin superconducting magnet technology and is the best answer to the question if HL-LHC is on time: it is,” says HL-LHC project leader Lucio Rossi of CERN. “We should also underline that this full-length, accelerator-ready magnet performance record is a real textbook case for international collaboration in the accelerator domain: since the very beginning the three US labs and CERN teamed up and managed to have a common and very synergic R&D, particularly for the quadrupole magnet that is the cornerstone of the upgrade. This has resulted in substantial savings and improved output.”

This is a real textbook case for international collaboration in the accelerator domain

Lucio Rossi

The current LHC magnets, which have been tested to a bore field of 8.3 T and are currently operated at 7.7 T at 1.9 K for 6.5 TeV operation, are made from the superconductor niobium-titanium (Nb-Ti). As the transport properties of Nb-Ti are limited for fields beyond 10–11 T at 1.9 K, the HL-LHC magnets call for a move to Nb3Sn, which remains superconducting at much higher fields. Although Nb3Sn has been studied for decades and is already in widespread use in solenoids for NMR – not to mention underpinning the large coils, presently being manufactured, that will be used to contain and control the plasma in the ITER fusion experiment – it is more challenging than Nb-Ti to work with: once formed, the Nb3Sn compound becomes brittle and strain-sensitive, and is therefore much harder than niobium-titanium alloy to process into cables that can be wound with the accuracy required to achieve the performance and field quality of state-of-the-art accelerator magnets.

Researchers at Fermilab, Brookhaven National Laboratory and Lawrence Berkeley National Laboratory are to provide a total of 16 quadrupole magnets for the interaction regions of the HL-LHC, which is due to operate from 2027. The purpose of a quadrupole magnet is to produce a field gradient in the radial direction with respect to the beam, allowing charged-particle beams to be focused. A test was carried out at Brookhaven in January, when the team operated the 8-tonne quadrupole magnet continuously at a nominal field gradient of around 130 T/m and a temperature of 1.9 K for five hours. Eight longer quadrupole magnets (each providing a “cold mass” equivalent to two US quadrupole magnets) are being produced by CERN.
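As a rough cross-check of these figures (not given in the article, and assuming an ideal quadrupole field that rises linearly with radius, with the radius taken as half the 150-mm aperture quoted above), the field at the edge of the aperture is

\[ B(r) = g\,r \approx 130~\mathrm{T/m} \times 0.075~\mathrm{m} \approx 9.8~\mathrm{T}, \]

which sits below the quoted 11.4 T conductor peak field, as expected, since the peak field in the coil windings exceeds the field at the aperture edge.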

It’s a very cutting-edge magnet

Kathleen Amm

“We’ve demonstrated that this first quadrupole magnet behaves successfully and according to design, based on the multiyear development effort made possible by DOE investments in this new technology,” said Fermilab’s Giorgio Apollinari, head of the US Accelerator Upgrade Project, in a Fermilab press release. “It’s a very cutting-edge magnet,” added Kathleen Amm, who is Brookhaven’s representative for the project.

Dipole tests at CERN

In addition to stronger focusing magnets, the HL-LHC requires new dipole magnets positioned on either side of a collimator to correct off-momentum protons in the high-intensity beam. To gain the required space in the magnetic lattice, Nb3Sn dipole magnets of shorter length and higher field than the current LHC dipole magnets are needed. In July 2019 the CERN magnet group successfully tested a full-length, 5.3-m, 60-mm-twin-aperture dipole magnet – the longest Nb3Sn magnet tested so far – and achieved a nominal bore field of 11.2 T at 1.9 K (corresponding to a conductor peak field of 11.8 T).

“This multi-year effort on Nb3Sn, which we are running together with the US, and our partner laboratories in Europe, is leading to a major breakthrough in accelerator magnet technology, from which CERN, and the whole particle physics community, will profit for the years to come,” says Luca Bottura, head of the CERN magnet group.

The dipole- and quadrupole-magnet milestones also send a positive signal about the viability of future hadron colliders beyond the LHC, which are expected to rely on Nb3Sn magnets with fields of up to 16 T. To this end, CERN and the US labs are achieving impressive results in the performance of Nb3Sn conductor in various demonstrator magnets. In February, the CERN magnet group produced a record field of 16.36 T at 1.9 K (16.5 T conductor peak field) in the centre of a short “enhanced racetrack model coil” demonstrator, with no useful aperture, which was developed in the framework of the Future Circular Collider study. In June 2019, as part of the US Magnet Development Programme, a short “cos-theta” dipole magnet with an aperture of 60 mm reached a bore field of 14.1 T at 4.5 K at Fermilab. Beyond magnets, says Rossi, the HL-LHC is also breaking new ground in superconducting-RF crab cavities, advanced material collimators and 120 kA links based on novel MgB2 superconductors.

Next steps

Before they can constitute fully operational accelerator magnets ready for installation in the HL-LHC, both these quadrupole magnets and the dipole magnets must be connected in pairs (the longer CERN quadrupole magnets are single units). Each magnet in a pair has the same winding, and differs only in its mechanical interfaces and in details of its electrical circuitry. Tests of the remaining halves of the quadrupole- and dipole-magnet pairs were scheduled to take place in the US and at CERN during the coming months, with the dipole-magnet pairs to be installed in the LHC tunnel this year. Given the current global situation, this plan will have to be reviewed, which is now a high-priority discussion within the HL-LHC project.

Plasma polarised by spin-orbit effect

Figure 1

Spin-orbit coupling causes fine structure in atomic physics and shell structure in nuclear physics, and is a key ingredient in the field of spintronics in materials sciences. It is also expected to affect the development of the quickly rotating quark–gluon plasma (QGP) created in non-central collisions of lead nuclei at LHC energies. As such plasmas are created by the collisions of lead nuclei that almost miss each other, they have very high angular momenta of the order of 10⁷ħ – equivalent to the order of 10²¹ revolutions per second. While the extreme magnetic fields generated by spectating nucleons (of the order of 10¹⁴ T, CERN Courier Jan/Feb 2020 p17) quickly decay as the spectator nucleons pass by, the plasma’s angular momentum is sustained throughout the evolution of the system as it is a conserved quantity. These extreme angular momenta are expected to lead to spin-orbit interactions that polarise the quarks in the plasma along the direction of the angular momentum of the plasma’s rotation. This should in turn cause the spins of vector (spin-1) mesons to align if hadronisation proceeds via the recombination of partons or by fragmentation. To study this effect, the ALICE collaboration recently measured the spin alignment of the decay products of neutral K* and φ vector mesons produced in non-central Pb–Pb collisions.

Spin alignment can be studied by measuring the angular distribution of the decay products of the vector mesons. It is quantified by the probability ρ00 of finding a vector meson in the spin state 0 with respect to the direction of the angular momentum of the rotating QGP, which is approximately perpendicular to the plane defined by the beam direction and the impact parameter of the two colliding nuclei. In the absence of spin-alignment effects, the probability of finding a vector meson in any of the three spin states (–1, 0, 1) should be equal, with ρ00 = 1/3.
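For reference, analyses of this kind typically extract ρ00 from the polar-angle distribution of the decay daughters; a standard parametrisation (not spelled out in the article, but widely used in the literature) is

\[ \frac{dN}{d\cos\theta^{*}} \propto (1-\rho_{00}) + (3\rho_{00}-1)\cos^{2}\theta^{*}, \]

where θ* is measured with respect to the quantisation axis in the vector-meson rest frame. For ρ00 = 1/3 the distribution is flat, i.e. there is no spin alignment.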

The ALICE collaboration measured the angular distributions of neutral K* and φ vector mesons via their hadronic decays to Kπ and KK pairs, respectively. ρ00 was found to deviate from 1/3 for low-pT and mid-central collisions at a level of 3σ (figure 1). The corresponding results for φ mesons show a deviation of ρ00 values from 1/3 at a level of 2σ. The observed pT dependence of ρ00 is expected if quark polarisation via spin-orbit coupling is subsequently transferred to the vector mesons by hadronisation, via the recombination of a quark and an anti-quark from the quark–gluon plasma. The data are also consistent with the initial angular momentum of the hot and dense matter being highest for mid-central collisions and decreasing towards zero for central and peripheral collisions.

The results are surprising as studies with Λ hyperons are compatible with zero

The results are surprising, however, as corresponding quark-polarisation values obtained from studies with Λ hyperons are compatible with zero. A number of systematic tests have been carried out to verify these surprising results. K0S mesons do indeed yield ρ00 = 1/3, indicating no spin alignment, as must be true for a spin-zero particle. For proton–proton collisions, the absence of initial angular momentum also leads to ρ00 = 1/3, consistent with the observed neutral K* spin alignment being the result of spin-orbit coupling.

The present measurements are a step towards experimentally establishing possible spin-orbit interactions in the relativistic-QCD matter of the quark–gluon plasma. In the future, higher-statistics measurements in Run 3 will significantly improve the precision, and studies with the charged K*, which has a magnetic moment seven times larger than that of the neutral K*, may even allow a direct observation of the effect of the strong magnetic fields initially experienced by the quark–gluon plasma.

Einstein and Heisenberg: The Controversy over Quantum Physics


This attractive and exciting book gives easy access to the history of the two main pillars of modern physics of the first half of the 20th century: the theory of relativity and quantum mechanics. The history unfolds along the parallel biographies of the two giants in these fields, Albert Einstein and Werner Heisenberg. It is a fascinating read for everybody interested in the science and culture of their time.

At first sight, one might think that the author presents a twin biography of Einstein and Heisenberg, and nothing more. However, one quickly realises that there is much more to this concise and richly illustrated text. Einstein and Heisenberg’s lives are embedded in the context of their time, with emphasis given to explaining the importance and nature of their interactions with the physicists of rank and name around them. The author cites many examples from letters and documents for both men within their respective environments; these are most interesting to read and illustrate well the spirit of the time. Direct interactions between the book’s two heroes were, however, quite sparse.

At several stages throughout the book, the reader becomes familiar with the personal life stories of both protagonists, who were, in spite of some commonalities, very different from each other. Common to both, for instance, were their devotion to music and their early interest and outstanding talent in physics as boys at school in Munich; by contrast, they were very different in their relations with family and partners, as the author discusses in a lively way. Many of these aspects are well known, but there are also new facets presented. I liked the way this is done, and, in particular, the author does not shy away from also documenting the perhaps less commendable human aspects, but without judgement, leaving the reader to come to their own conclusions.

A broad spectrum of topics is commented on in a special chapter called “Social Affinities”. These include religion, music, the importance of family and, in the case of Einstein, his relation to his wives and to women in general, the way he dealt with his immense public reputation as a super scientist, and also his later years, when he could be seen as “scientifically an outsider”. In Heisenberg’s case, one is reminded of his very major contributions to the restoration of scientific research in West Germany and Europe after World War II, not least, of course, his crucial founding role in the establishment of CERN.

Do not expect a systematic, comprehensive introduction to relativity and quantum physics; this is not a textbook. Its great value is the captivating way the author illustrates how these great minds formed their respective theories in relation to the physics and academic world of their time. The reader learns not only about Einstein and Heisenberg, but also about many of their contemporary colleagues. A central part of this is the controversy over the interpretation of quantum mechanics among Heisenberg’s colleagues and mentors, such as Schrödinger, Bohr, Pauli, Born and Dirac, to name just a few.

Another aspect of overriding importance for the history of that time was of course the political environment spanning the time from before World War I to after World War II. Both life trajectories were influenced in a major way by these external political and societal factors. The author gives an impressive account of all these aspects, and sheds light on how the pair dealt with these terrible constraints, including their attitudes and roles in the development of nuclear weapons.

A special feature of the book, which will make it interesting to everybody, is the inclusion of various hints as to where relativity and quantum mechanics play a direct role in our daily lives today, as well as in topical contemporary research, such as the recently opened field of gravitational-wave astronomy.

This is an ambitious book, which tells the story of the birth of modern physics in a well-documented and well-illustrated way. The author has managed brilliantly to do this in a serious, but nevertheless entertaining, way, which will make the book a pleasant read for all.

Protons herald new cardiac treatment

The 80 m-circumference synchrotron at CNAO

In a clinical world-first, a proton beam has been used to treat a patient with ventricular tachycardia, a condition in which unsynchronised electrical impulses prevent the heart from pumping blood effectively. On 13 December, a 150 MeV beam of protons was directed at a portion of tissue in the heart of a 73-year-old male patient at the National Center of Oncological Hadrontherapy (CNAO) in Italy – a facility set up 25 years ago by the TERA Foundation and rooted in accelerator technologies developed in conjunction with CERN via the Proton Ion Medical Machine Study (PIMMS). The successful procedure had a minimal impact on the delicate surrounding tissues, and marks a new path in the rapidly evolving field of hadron therapy.

The use of proton beams in radiation oncology, first proposed in 1946 by founding director of Fermilab Robert Wilson, allows a large dose to be deposited in a small and well-targeted volume, reducing damage to healthy tissue surrounding a tumour and thereby reducing side effects. Upwards of 170,000 cancer patients have benefitted from proton therapy at almost 100 centres worldwide, and demand continues to grow (CERN Courier January/February 2018 p32).

The choice by clinicians in Italy to use protons to treat a cardiac pathology was born out of necessity to fight an aggressive form of ventricular tachycardia that had not responded effectively to traditional treatments. The idea is that the Bragg peak typical of light charged ions (by which a beam can deposit a large amount of energy in a small region) can produce small scars in the heart tissues similar to the ones caused by the standard invasive technique of RF cardiac ablation. “To date, the use of heavy particles (protons, carbon ions) in this area has been documented in the international scientific literature only on animal models,” said Roberto Rordorf, head of arrhythmology at San Matteo Hospital, in a press release on 22 January. “The Pavia procedure appears to be the first in the world to be performed on humans and the first results are truly encouraging. For this reason, together with CNAO we are evaluating the feasibility of an experimental clinical study.”

Hadron therapy for all

CNAO is one of just six next-generation particle-therapy centres in the world capable of generating beams of protons and carbon ions, which are biologically more effective than protons in the treatment of radioresistant tumours. The PIMMS programme, from which the accelerator design emerged, was carried out at CERN from 1996 to 2000 and aimed to design a synchrotron optimised for ion therapy (CERN Courier January/February 2018 p25). The first dual-ion treatment centre in Europe was the Heidelberg Ion-Beam Therapy Centre (HIT) in Germany, designed by GSI, which treated its first patient in 2009. CNAO followed in 2011, and then the Marburg Ion-Beam Therapy Centre in Germany (built by Siemens and operated by Heidelberg University Hospital since 2015). Finally, MedAustron in Austria, based on the PIMMS design, has been operational since 2016. Last year, CERN launched the Next Ion Medical Machine Study (NIMMS) as a continuation of PIMMS to carry out R&D into the superconducting magnets, linacs and gantries needed for advanced hadron therapy. NIMMS will also explore ways to reduce the cost and footprint of hadron-therapy centres, allowing more people in different regions to benefit from the treatment (CERN Courier March 2017 p31).

I think that in 20 years’ time cardiac arrhythmias will be mostly treated with light-ion accelerators

“When I decided to leave the spokesmanship of the DELPHI collaboration to devote my time to cancer therapy with light-ion beams I could not imagine that, 30 years later, I would have witnessed the treatment of a ventricular tachycardia with a proton beam and, moreover, that this event would have taken place at CNAO, a facility that has its roots at CERN,” says TERA founder Ugo Amaldi. “The proton treatment recently announced, proposed to CNAO by cardiologists of the close-by San Matteo Hospital to save the life of a seriously ill patient, is a turning point. Since light-ion ablation is non-invasive and less expensive than the standard catheter ablation, I think that in 20 years’ time cardiac arrhythmias will be mostly treated with light-ion accelerators. For this reason, TERA has secured a patent on the use of ion linacs for heart treatments.”

LHC and RHIC heavy ions dovetail in Wuhan

The 28th International Conference on Ultrarelativistic Nucleus-Nucleus Collisions, also known as “Quark Matter”, took place in Wuhan, China, in November. More than 800 participants discussed the latest results of the heavy-ion programmes at the Large Hadron Collider and at Brookhaven’s Relativistic Heavy-Ion Collider (RHIC), as well as the most recent theoretical developments. The focus of these studies is the fundamental understanding of strongly interacting matter at extremes of temperature and density. In these conditions, which also characterise the early universe, matter is a quark-gluon plasma (QGP), in which quarks and gluons are not confined within hadrons. In the recent editions of Quark Matter, much attention has also been devoted to the study of emergent QCD phenomena in high-multiplicity proton-proton and proton-nucleus collisions, which resemble the collective effects seen in nucleus-nucleus collisions and pose the intriguing question of whether a QGP can also form in “small-system” collisions.

The LHC and RHIC together cover a broad range of quark-gluon-plasma temperatures

The large data sample from the Pb-Pb period of LHC Run 2 in 2018 allowed ALICE, ATLAS, CMS and LHCb to study rare probes of the QGP, such as jets and heavy quarks, with unprecedented precision. New constraints on the energy loss of partons when traversing the high-density medium were presented, pushing the limits of jet measurements to lower transverse momenta and larger radii: jet modifications are now measured in the transverse momentum range from 40 to 1000 GeV/c and in the jet radius (resolution parameter) range 0.2 to 1. The internal structure of jets was studied not only by the LHC experiments, but also by the PHENIX and STAR collaborations at the 25-times lower RHIC collision energy. LHC and RHIC measurements are complementary as they cover a broad range of QGP temperatures and differ in the balance of quark- and gluon-initiated jets, with the former dominating at RHIC and the latter dominating at the LHC.  

New probes

Measurements in the sectors of heavy quarks and rarely-produced light nuclei (such as deuterons, 3He and hypertriton, a pnΛ bound state) also strongly benefitted from the large recent samples recorded at the LHC. In particular, their degree of collective behaviour could be studied in much greater detail. The family of QGP probes in the heavy-quark sector has been extended with new members at the LHC by first observations of the X(3872) exotic hadron and of top-antitop quark production. In the sector of electromagnetic processes, new experimental observations were presented for the first time at the conference, including the photo-production of dileptons in collisions with and without hadronic overlap, and light-by-light scattering. These effects are induced by the interaction of the strong electromagnetic fields of the two Pb nuclei (Z=82) passing close to each other (CERN Courier January/February 2020, p17).  

In nuclear collisions the fluid-dynamical flow of the QGP leaves an imprint on the azimuthal distribution of soft particles, as the initial geometry of the collision is translated into flow through pressure gradients. Its experimental trace is multi-particle angular correlations between low-momentum particles, even at large rapidity separations. In non-central nucleus-nucleus collisions, which have an elliptical initial geometry, the resulting azimuthal modulation of the particles’ momentum distribution is called elliptic flow. New information on collective behaviour and on the dynamics of heavy-quark interactions in the QGP was added by a first measurement of the D-meson momentum distribution down to zero momentum in Pb-Pb collisions at the LHC, and by new measurements of the elliptic flow of D mesons, of muons from charm and beauty decays, and of bound states of heavy quarks (charmonia and bottomonia). These measurements suggest a stronger degree of collective behaviour for light quarks than for heavy quarks, and further constrain estimates of the QGP viscosity. Such estimates also require an understanding of heavy-quark hadronisation, which was discussed in the light of new results at RHIC and the LHC that indicate an increased production of charmed baryons relative to mesons at low momentum, in both pp and nucleus-nucleus collisions, compared with expectations from electron-positron collisions.
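For orientation, the azimuthal modulation discussed here is conventionally quantified through a Fourier expansion of the particle yield (a standard parametrisation, not written out in the article):

\[ \frac{dN}{d\varphi} \propto 1 + 2\sum_{n} v_{n}\cos\!\big[n(\varphi - \Psi_{n})\big], \]

where φ is the particle’s azimuthal angle, Ψn the relevant symmetry-plane angle, and the second harmonic coefficient, v2, is the elliptic flow.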

The situation is much less clear in the collisions of small systems

While there is strong evidence for the production of QGP in nuclear collisions, the situation is much less clear in the collisions of small systems. The momentum correlations and azimuthal modulation that characterise the large nuclear collisions were also observed in smaller collision systems, such as p-Pb at the LHC, p-Au, d-Au and 3He-Au at RHIC, and even pp. The persistence of these correlations in smaller collision systems, down to pp collisions where it is unlikely that an equilibrated system could be created, may offer an inroad to understand how the collective behaviour of the QGP arises from the microscopic interaction of its individual constituents. New measurements on multi-particle correlations were presented and the dynamical origin of the collectivity in small systems was discussed. Small expanding QGP droplets, colour connections of overlapping QCD strings, and final-state rescattering at partonic or hadronic level are among the possible mechanisms that are proposed to describe these observations. While many signs characteristic of the QGP are seen in the small-system collisions, parton energy loss (in the form of jet or large-momentum hadron modifications) remains absent in the measurements carried out to date. 

The future

Beyond Quark Matter 2019, the field is now looking forward to the future programmes at the LHC and at RHIC, which were extensively reviewed at the conference. At the LHC, the heavy-ion injectors and the experiments are currently being upgraded. In particular, the heavy-ion-dedicated ALICE detector is undergoing major improvements, with readout and tracker upgrades that will provide larger samples and better performance for heavy-flavour selection. Run 3 of the LHC, which is scheduled to start in 2021, will provide integrated luminosity increases ranging from one order of magnitude for the data samples based on rare triggers to two orders of magnitude for the minimum-bias (non-triggered) samples. At RHIC, the second beam-energy-scan programme is now providing the STAR experiment with higher precision data to search for the energy evolution of QGP effects, and the new sPHENIX experiment aims at improved measurements of jets and heavy quarks from 2023. Low-energy programmes at the CERN SPS, NICA, FAIR, HIAF and J-PARC, which target a systematic exploration of heavy-ion collisions with high baryon density to search for the onset of deconfinement and the predicted QCD critical point, were also discussed in Wuhan, and the updated plans for the US-based Electron-Ion Collider (EIC), which is foreseen to be constructed at Brookhaven National Laboratory, were presented. With ep and e-nucleus interactions, the EIC will provide unprecedented insights into the structure of the proton and the modification of parton densities in nuclei, which will benefit our understanding of the initial conditions for nucleus-nucleus collisions. 

Success in scientific management

Barry Barish

Your co-Nobelists in the discovery of gravitational waves, Kip Thorne and Rainer Weiss, have both recognised your special skills in the management of the LIGO collaboration. When you landed in LIGO in 1994, what was the first thing you changed?

When I arrived in LIGO, there was a lot of dysfunction and people were going after each other. So, the first difficult problem was to make LIGO smaller, not bigger, by moving people out who weren’t going to be able to contribute constructively in the longer term. Then, I started to address what I felt were the technical and management weaknesses. Along with my colleague, Gary Sanders, who had worked with me on one of the would-be detectors for the Superconducting Super Collider (SSC) before the project was cancelled, we started looking for the kind of people that were missing in technical areas.

For example, LIGO relies on very advanced lasers but I was convinced that the laser that was being planned for, a gas laser, was not the best choice because lasers were, and still are, a very fast-moving technology and solid-state lasers were more forward-looking. Coming from particle physics, I’m used to not seeing a beam with my own eyes. So I wasn’t disturbed that the most promising lasers at that time emitted light in the infrared, instead of green, and that technology had advanced to where they could be built in industry. People who worked with interferometers were used to “little optics” on lab benches where the lasers were all green and the alignment of mirrors etc was straightforward. I asked three of the most advanced groups in the world who worked on lasers of the type we needed (Hannover in Germany, Adelaide in Australia and Stanford in California) if they’d like to work together with us, and we brought these experts into LIGO to form the core of what we still have today as our laser group.

Project management for forefront science experiments is very different, and it is hard for people to do it well

This story is mirrored in many of the different technical areas in LIGO. Physics expertise and expertise in the use of interferometer techniques were in good supply in LIGO, so the main challenge was to find expertise to develop the difficult forefront technologies that we were going to depend on to reach our ambitious sensitivity goals. We also needed to strengthen the engineering and project-management areas, but that just required recruiting very good people. Later, the collaboration grew a lot, but mostly on the data-analysis side, which today makes up much of our collaboration.

According to Gary Sanders of SLAC, “efficient management of large science facilities requires experience and skills not usually found in the repertoire of research scientists”. Are you a rare exception?

Gary Sanders was a student of Sam Ting, then he went to Los Alamos where he got a lot of good experience doing project work. For myself, I learned what was needed kind of organically as my own research grew into larger and larger projects. Maybe my personality matched the problem, but I also studied the subject. I know how engineers go about building a bridge, for example, and I could pass an exam in project management. But, project management for forefront science experiments is very different, and it is hard for people to do it well. If you build a bridge, you have a boss, and he or she has three or four people who do tasks under his/her supervision, so generally the way a large project is structured is a big hierarchical organisation. Doing a physics research project is almost the opposite. For large engineering projects, once you’ve built the bridge, it’s a bridge, and you don’t change it. When you build a physics experiment, it usually doesn’t do what you want it to do. You begin with one plan and then you decide to change to another, or even while you’re building it you develop better approaches and technologies that will improve the instruments. To do research in physics, experience tells us that we need a flat, rather than vertical, organisational style. So, you can’t build a complicated, expensive ever-evolving research project using just what’s taught in the project-management books, and you can’t do what’s needed to succeed in cost, schedule, performance, etc, in the style found in a typical physics-department research group. You have to employ some sort of hybrid. Whether it’s LIGO or an LHC experiment, you need to have enough discipline to make sure things are done on time, yet you also need the flexibility and encouragement to change things for the better. In LIGO, we judiciously adapted various project-management formalities, and used them by not interfering any more than necessary with what we do in a research environment. Then, the only problem – but admittedly a big one – is to get the researchers, who don’t like any structure, to buy into this approach.

How did your SSC experience help?

It helped with the political part, not the technical part, because I came to realise how difficult the politics and things outside of a project are. I think almost anything I worked on before has been very hard, because of what it was or because of some politics in doing it, but I didn’t have enormous problems that were totally outside my control, as we had in the SSC.

How did you convince the US government to keep funding LIGO, which has been described as the most costly project in the history of the NSF?

It’s a miracle, because not only was LIGO costly, but we didn’t have much to show in terms of science for more than 20 years. We were funded in 1994, and we made the first detection more than 20 years later. I think the miracle wasn’t me, rather we were in a unique situation in the US. Our funding agency, the NSF, has a different mission than any other agency I know about. In the US, physical sciences are funded by three big agencies. One is the DOE, which has a division that does research in various areas with national labs that have their own structures and missions. The other big agency that does physical science is NASA, and they have the challenge of safety in space. The NSF gets less money than the other two agencies, but it has a mission that I would characterise by one word: science. LIGO has so far seen five different NSF directors, but all of them were prominent scientists. Having the director of the funding agency be someone who understood the potential importance of gravitational waves, maybe not in detail, helped make NSF decide both to take such a big risk on LIGO and then continue supporting it until it succeeded. The NSF leadership understands that risk-taking is integral to making big advancements in science.

What was your role in LIGO apart from management?

I concentrated more on the technical side in LIGO than on data analysis. In LIGO, the analysis challenges are more theoretical than they are in particle physics. What we have to do is compare general relativity with what happens in a real physical phenomenon that produces gravitational waves. That involves more of a mixed problem of developing numerical relativity, as well as sophisticated data-analysis pipelines. Another challenge is the huge amount of data because, unlike at CERN, there are no triggers. We just take data all the time, so sorting through it is the analysis problem. Nevertheless, I’ve always felt and still feel that the real challenge for LIGO is that we are limited by how sensitive we can make the detector, not by how well we can do the data analysis.

What are you doing now in LIGO?

Now that I can do anything I want, I am focusing on something I am interested in and that we don’t employ very much, which is artificial intelligence and machine learning (ML). In LIGO there are several problems that could adapt themselves very well to ML with recent advances. So we built a small group of people, mostly much younger than me, to do ML in LIGO. I recently started teaching at the University of California Riverside, and have started working with young faculty in the university’s computer-science department on adapting some techniques in ML to problems in physics. In LIGO, we have a problem in the data that we call “glitches”, which appear when something that happens in the apparatus or outside world appears in the data. We need to get rid of glitches, and we use a lot of human manpower to make the data clean. This is a problem that should adapt itself very well to a ML analysis.

Now that gravitational waves have joined the era of multi-messenger astronomy, what’s the most exciting thing that can happen next?

For gravitational waves, knowing what discovery you are going to make is almost impossible because it is really a totally new probe of the universe. Nevertheless, there are some known sources that we should be able to see soon, and maybe even will in the present run. So far we’ve seen two sources of gravitational waves: a collision of two black holes and a collision of two neutron stars, but we haven’t yet seen a black hole with a neutron star going around it. They’re particularly interesting scientifically because they contain information about nuclear physics of very compact objects, and because the two objects are very different in mass and that’s very difficult to calculate using numerical relativity. So it’s not just checking off another source that we found, but new areas of gravitational-wave science. Another attractive possibility is to detect a spinning neutron star, a pulsar. This is a continuous signal that is another interesting source which we hope to detect in a short time. Actually, I’m more interested in seeing unanticipated sources where we have no idea what we’re going to see, perhaps phenomena that uniquely happen in gravity alone.

The NSF leadership understands that risk-taking is integral to making big advancements

Will we ever see gravitons?

That’s a really good question because gravitons don’t exist in Einstein’s equations. But that’s not necessarily nature, that’s Einstein’s equations! The biggest problem we have in physics is that we have two fantastic theories. One describes almost anything you can imagine on a large scale, and that’s Einstein’s equations, and the other, which describes almost too well everything you find here at CERN, is the Standard Model, which is based on quantum field theory. Maybe black holes have the feature that they satisfy Einstein’s equations and at the same time conserve quantum numbers and all the things that happen in quantum physics. What we are missing is the experimental clue, whether it’s gravitons or something else that needs to be explained by both these theories. Because theory alone has not been able to bring them together, I think we need experimental information.

Do particle accelerators still have a role in this?

We never know because we don’t know the future, but our best way of understanding what limits our present understanding has been traditional particle accelerators because we have the most control over the particles we’re studying. The unique feature of particle accelerators is that of being able to measure all the parameters of particles that we want. We’ve found the Higgs boson and that’s wonderful, but now we know that the neutrinos also have mass and the Higgs boson possibly doesn’t describe that. We have three families of particles, and a whole set of other very fundamental questions that we have no handle on at all, despite the fact that we have this nice “standard” model. So is it a good reason to go to higher energy or a different kind of accelerator? Absolutely, though it’s a practical question whether it’s doable and affordable.

What’s the current status of gravitational-wave observatories?

We will continue to improve the sensitivity of LIGO and Virgo in incremental steps over the next few years, and LIGO will add a detector in India to give better global coverage. KAGRA in Japan is also expected to come online. But we can already see that next-generation interferometers will be needed to pursue the science in the future. A good design study, called the Einstein Telescope, has been developed in Europe. In the US we are also looking at next-generation detectors and have different ideas, which is healthy at this point. We are not limited by nature, but by our ability to develop the technologies to make more sensitive interferometers. The next generation of detectors will enable us to reach large red shifts and study gravitational-wave cosmology. We all look forward to exploiting this new area of physics, and I am sure important discoveries will emerge.

David Mark Ritson 1924–2019

David Ritson with Bjørn Wiik

David Mark Ritson, professor emeritus of physics at Stanford University, died peacefully at home on 4 November 2019, just shy of his 95th birthday. He was the last of the leaders of the original seven physics groups formed at SLAC: four of the other leaders were awarded Nobel prizes in physics.

Dave Ritson was born in London and grew up in Hampstead. His ancestors emigrated from Australia, Germany and Lithuania, and his father, a Cambridge alumnus, wrote Helpful Information and Guidance for Every Refugee, distributed in the 1930s and 1940s. Dave won scholarships to Merchant Taylors’ School and to Christ Church, Oxford. His 1948 PhD work included deploying the first high-sensitivity emulsion at the Jungfraujoch research station, and then developing it. Within the data were two particle-physics icons: the whole π → μ → e sequence, and τ-meson decay.

Dave moved to the Dublin IAS, to Rochester and to MIT, doing experiments which helped prove that the s-quark exists. His results were among many that underpinned the “τθ puzzle”, solved by the discovery of parity violation in beta and muon decay. Dave also assisted accelerator physicist Ken Robinson with the proof that stable storage of an electron beam in a synchrotron was possible. In 1961 he and Ferdinando Amman published the equation for disruption caused by colliding e+e− beams. “Low beta” collider interaction regions are based on the Amman–Ritson equation.

Dave edited the book Techniques of High Energy Physics, published in 1961, and then took a faculty position in the Stanford physics department – bringing British acuity and economy to the ambitious SLAC team. Between 1964 and 1969, he and Burt Richter submitted four proposals to the US Atomic Energy Commission (AEC) for an e+e− collider, all of which were rejected. Dave designed the 1.6 GeV spectrometer in End Station A to detect proton recoils, which were used to reconstruct “missing mass” and to measure the photoproduction of hard-to-detect bosons.

After 1969 Dave founded Fermilab E-96, the Single Arm Spectrometer Facility, and obtained contributions from many institutions, including Argonne, CERN, Cornell, INFN Bari, MIT and SLAC. It was unusual for accelerator labs to support the fabrication of experiments at other labs’ facilities. Meanwhile, SLAC found internal funding for the SPEAR e+e− collider, a stripped-down version of the last proposal rejected by the AEC and led by Richter, driving the epic 1974 c-quark discovery.

Dave returned to SLAC and in 1976 led the formation of the MAC collaboration for SLAC’s new PEP e+e− collider. The MAC design of near-hermetic calorimetry with central and toroidal outer spectrometers is now classic. Bill Ford from Colorado used MAC to make the first observation of the long b-quark lifetime. In 1983 Dave led the close-in tracker (vertex detector) project, with the first layer only 4.6 cm from the e+e− beams, and verified the long b-quark lifetime with reduced errors.

He formally retired in 1987 but was active until 2003 in accelerator design at SLAC, CERN, Fermilab and for the SSC. He helped guide the SLC beams through their non-planar path into collision, and wrote several articles for Nature. He also contributed to the United Nations’ Intergovernmental Panel on Climate Change.

Dave was intensely devoted to his wife Edda, from Marsala, Sicily, who died in 2004, and is survived by their five children.

Vladislav Šimák 1934–2019

Vladislav Šimák

Experimental particle physicist and founder of antiproton physics in Czechoslovakia (later the Czech Republic), Vladislav Šimák, passed away on 26 June 2019. Since the early 1960s his vision and organisational skills helped shape experimental particle physics, not only in Prague but across the whole country.

After graduating from Charles University in Prague, he joined the group at the Institute of Physics of the Czechoslovak Academy of Sciences studying cosmic rays using emulsion techniques, earning a PhD in 1963. Though it was difficult to travel abroad at that time, Vlada got a scholarship and went to CERN, where he joined the group led by Bernard French investigating collisions of antiprotons using bubble chambers. It was there and then that his lifelong love affair with antiprotons began. He brought back to Prague film material showing the results of collisions of 5.7 GeV antiprotons with protons in a hydrogen bubble chamber, and formed a group of physicists and technicians, involving many diploma and PhD students, to process it. Vlada also fell in love with the idea of quarks as proposed by Gell-Mann and Zweig, and was the first Czech or Slovak physicist to apply a quark model to pion production in proton–antiproton collisions.

In the early 1970s, when contacts with the West were severely limited, Vlada exploited the experience he had accumulated at CERN and put together a group of Czech and Slovak physicists involved in the processing and analysis of data from proton–antiproton collisions, using the then-highest-energy beam of antiprotons (22.4 GeV) and a hydrogen bubble chamber at the Serpukhov accelerator in Russia. This experiment, which at a later stage provided collisions of antideuterons with protons and deuterons, gave many young physicists the chance to work on unique data for their PhDs and earned Vlada respect in the international community.

After the Velvet Revolution he played a pivotal role in accession to CERN membership

In the late 1980s, when the political atmosphere in Czechoslovakia eased, Vlada, together with his PhD student, joined the UA2 experiment at CERN’s proton–antiproton collider, where he devoted his attention to jet production. After the Velvet Revolution in November 1989 he played a pivotal role in the decision of the Czech and Slovak particle-physics community to focus on accession to CERN membership.

In 1992 Vlada took Czechoslovak particle physicists into the newly formed ATLAS collaboration, and in 1997 he joined the D0 experiment at Fermilab. He was active in ATLAS until very recently, and in 2014, in acknowledgement of his services to physics, the Czech Academy of Sciences awarded Vlada the Ernst Mach Medal for his contributions to the development of physics.

Throughout his life he combined his passion for physics with a love for music, for many years playing the violin in the Academy Chamber Orchestra. For many of us Vlada was a mentor, colleague and friend. We all admired his vitality and enthusiasm for physics, which was contagious. Vlada clearly enjoyed life and we very much enjoyed his company.

He will be sorely missed.

A recipe for sustainable particle physics

The SESAME light source

There has been a marked increase in awareness of climate change in society. Whether due to the recent school strikes initiated by Greta Thunberg or the destructive bushfires gripping Australia, the climate emergency has moved up the public’s list of concerns. Governments around the world have put in place various targets to reduce greenhouse-gas emissions as part of the 2015 Paris agreement on climate change. The scientific community, like others, will increasingly be expected to put in place measures to reduce its greenhouse-gas emissions. It is therefore timely to create structures that will minimise the carbon footprint of current and future experiments, and of their researchers.

The LHC uses 1.25 TWh of electricity annually, the equivalent of powering around 300,000 homes, or roughly 2% of the annual consumption of Switzerland. Fortunately, the electricity supply of the LHC comes from France, where only about 10% of electricity is produced by fossil fuels. CERN is adopting several green initiatives. For example, it recently released plans to use hot water from a cooling plant at Point 8 of the LHC (where the LHCb detector is situated) to heat 8000 homes in the nearby town of Ferney-Voltaire. In 2015, CERN introduced an energy-management panel and the laboratory is about to publish a wide-ranging environmental report. CERN is also involved in the biennial workshop series Energy for Sustainable Science at Research Infrastructures, which started in 2011 and is where useful ideas are shared among research infrastructures. Whether it be related to high-performance computing or the LHC’s cryogenic systems, increased energy efficiency both reduces CERN’s carbon footprint and provides financial savings.
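As a quick sanity check on the figures quoted above (simple arithmetic, not in the original text), dividing the annual consumption by the number of homes gives

\[ \frac{1.25~\mathrm{TWh}}{300\,000~\mathrm{homes}} \approx 4.2~\mathrm{MWh\ per\ home\ per\ year}, \]

which is indeed a plausible annual household electricity consumption.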

It is a moral imperative for the community to look at ways to reduce its carbon footprint

In addition to colliders, particle physics also involves detectors, some of which need particular gases for their operation or cooling. Unfortunately, some of these gases have very high global-warming potential. For example, sulphur hexafluoride, which is commonly used in high-voltage supplies and also in certain detectors such as the resistive plate chambers in the ATLAS muon spectrometer, causes 16,000 times more warming than CO2 over a 20-year period. Though mostly used in closed circuits, some of these gases are occasionally vented to the atmosphere or leak from detectors, and, although the quantities involved are small, it is likely that some of the gases used by current detectors are about to be banned by many countries, making them very hard to procure and their price volatile. A lot is already being done to combat this issue. At CERN, for instance, huge efforts have gone into replacing detector cooling fluids and investigating new gas mixtures.

Strategic approach

The European particle-physics community is currently completing the update of its strategy for the next five years or so, which will guide not only CERN’s activities but also those in all European countries. It is of the utmost importance that sustainability goals be included in this strategy. To this end, my colleagues Cham Ghag and David Waters (University College London) and Francesco Spano (Royal Holloway) and I arrived at three main recommendations on sustainability as input to the strategy process.

Véronique Boisvert

First, as part of their grant-giving process, European laboratories and funding agencies should include criteria evaluating the energy efficiency and carbon footprint of particle-physics proposals, and should expect to see evidence that energy consumption has been properly estimated and minimised. Second, any design of a major experiment should consider plans for reduction of energy consumption, increased energy efficiency, energy recovery and carbon-offset mechanisms. (Similarly, any design for new buildings should consider the highest energy-efficiency standards.) Third, European laboratories should invest in next-generation digital meeting spaces, including virtual-reality tools, to minimise the need for frequent travel. Many environmental groups are calling for a frequent-flyer levy, since roughly 15% of the population take about 70% of all flights. This could potentially have a massive effect on the travel budgets of particle physicists, but it is a moral imperative for the community to look at ways to reduce this carbon footprint. Another area that the Intergovernmental Panel on Climate Change (IPCC) has identified as needing to undergo a massive change is food. Particle physicists could send a very powerful message by choosing to have all of their work-related catering be mostly vegetarian.

Particle physics is flush with ideas for future accelerators and technologies to probe deeper into the structure of matter. CERN and particle physicists are important role models for all the world’s scientific community. Channelling some of our scientific creativity into addressing the sustainability of our own field, or even finding solutions for climate change, will produce ripples across all of society.

Anomalies persist in flavour-changing B decays

The distribution of the angular variable P5′ as a function of the mass squared of the muon pair, q². The LHCb Run 1 results (red), those from the additional 2016 dataset only (blue), and those from both datasets (black) are shown along with the SM predictions (orange). Credit: LHCb

The LHCb collaboration has confirmed previous hints of odd behaviour in the way B mesons decay into a K* and a pair of muons, bringing fresh intrigue to the pattern of flavour anomalies that has emerged during the past few years. At a seminar at CERN on 10 March, Eluned Smith of RWTH Aachen University presented an updated analysis of the angular distributions of B0→K*0μ+μ− decays based on around twice as many events as were used for the collaboration’s previous measurement, reported in 2015. The result reveals a mild increase in the overall tension with the Standard Model (SM) prediction, though, at 3.3σ, more data are needed to determine the source of the effect.

The B0→K*0μ+μ− decay is a promising system with which to explore physics beyond the SM. A flavour-changing neutral-current process, it involves a quark transition (b→s) which is forbidden at the lowest perturbative order in the SM, and therefore occurs only around once for every million B decays. The decay proceeds instead via higher-order penguin and box processes, which are sensitive to the presence of new, heavy particles. Such particles would enter in competing processes and could significantly change the B0→K*0μ+μ− decay rate and the angular distribution of its final-state particles. Measuring angular distributions as a function of the invariant mass squared (q²) of the muon pair is of particular interest because it is possible to construct variables that depend less on hadronic modelling uncertainties.
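For completeness (standard kinematics, not spelled out in the article), the binning variable is simply the square of the summed four-momenta of the two muons,

\[ q^{2} = \left(p_{\mu^{+}} + p_{\mu^{-}}\right)^{2}, \]

i.e. the invariant mass squared of the dimuon system.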

Potentially anomalous behaviour in an angular variable called P5′ came to light in 2013, when LHCb reported a 3.7σ local deviation with respect to the SM in one q² bin, based on 1 fb⁻¹ of data. In 2015, a global fit of different angular distributions of the B0→K*0μ+μ− decays using the total Run 1 data sample of 3 fb⁻¹ reaffirmed the puzzle, showing discrepancies of 3.4σ (later reduced to 3.0σ when using new theory calculations with an updated description of potentially large hadronic effects). In 2016, the Belle experiment at KEK in Japan performed its own angular analysis of B0→K*0μ+μ− using data from electron–positron collisions and found a 2.1σ deviation in the same direction and in the same q² region as the LHCb anomaly.
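For readers unfamiliar with the notation, P5′ belongs to the family of “optimised” angular observables; in the convention commonly used in the literature (assumed here, since the article does not define it), it is built from the angular coefficient S5 and the longitudinal polarisation fraction FL of the K*0 as

\[ P_{5}' = \frac{S_{5}}{\sqrt{F_{L}\,(1-F_{L})}}, \]

a combination constructed to reduce sensitivity to hadronic form-factor uncertainties.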

We as a community have been eagerly waiting for this measurement and LHCb has not disappointed

Jure Zupan

The latest LHCb result includes additional Run 2 data collected during 2016, corresponding to a total integrated luminosity of 4.7 fb⁻¹. It shows that the local tension of P5′ in two q² bins between 4 and 8 GeV²/c⁴ reduces from 2.8 and 3.0σ, as observed in the previous analysis, to 2.5 and 2.9σ. However, a global fit to several angular observables shows that the overall tension with the SM increases from 3.0 to 3.3σ. The results of the fit also find a better overall agreement with predictions of new-physics models that contain additional vector or axial-vector contributions. However, the collaboration also makes it clear that the discrepancy could be explained by an unexpectedly large hadronic effect that is not accounted for in the SM predictions.

“We as a community have been eagerly waiting for this measurement and LHCb has not disappointed,” says theorist Jure Zupan of the University of Cincinnati. “The new measurements have moved closer to the SM predictions in the angular observables so that the combined significance of the excess remained essentially the same. It is thus becoming even more important to understand well and scrutinise the SM predictions and the claimed theory errors.”

Flavour puzzle

The latest result makes LHCb’s continued measurements of lepton-flavour universality even more important, he says. In recent years, LHCb has also found that the ratio of the rates of muonic and electronic B decays departs from the SM prediction, suggesting a violation of the key SM principle of lepton-flavour universality. Though not individually statistically significant, the measurements are theoretically very clean, and the most striking departure – in the variable known as RK – concerns B decays that proceed via the same b→s transition as B0→K*0μ+μ−. This has led physicists to speculate that the two effects could be caused by the same new physics, with models involving leptoquarks or new gauge bosons in principle able to accommodate both sets of anomalies.

An update on RK based on additional Run 2 data is hotly anticipated, and the collaboration is also planning to add data from 2017–18 to the B0→K*0μ+μ− angular analysis, as well as working on further analyses of b-quark transitions in mesons. LHCb has also recently brought the decays of beauty baryons, which likewise depend on b→s transitions, to bear on the subject. Departures from the norm have also been spotted in B decays to D mesons, which involve tree-level b→c quark transitions. Such decays probe lepton-flavour universality via comparisons between tau leptons and muons and electrons but, as with RK, the individual measurements are not highly significant.

“We have not seen evidence of new physics, but neither were the B physics anomalies ruled out,” says Zupan of the LHCb result. “The wait for the clear evidence of new physics continues.”
