
The rise of particle physics in France

Inside a radiofrequency cavity

Does anyone remember the images of Pierre and Marie Curie’s laboratory? “It was nothing but a wooden shack, with an asphalt floor and a glass roof that gave incomplete protection against the rain, devoid of any fittings,” according to Marie Curie. Even her foreign colleagues lamented the meagre means at her disposal. The German chemist Wilhelm Ostwald declared: “This laboratory was a cross between a stable and a potato shed. Had I not seen chemical apparatus there, I would have thought I was being made fun of.” In the 1920s, newspapers bore witness to the poverty of the laboratories. “Some are in attics, others in cellars, others in the open air…”, reported Le Petit Journal in 1921. Increasing the resources for research, to catch up with countries such as Germany, became a national cause.

Between the two wars, Jean Perrin, winner of the 1926 Nobel Prize in Physics for his work on the existence of atoms, campaigned for the development of science with the support of many scientists. Thanks to funding from the Rothschild Foundation, he created the Institut de biologie physico-chimique, where, for the first time, researchers worked full time. In 1935 he secured the creation of the Caisse nationale de la recherche scientifique, which funded university projects and research fellowships. One of its first fellows, in 1937, was the young Lew Kowarski, from Frédéric Joliot-Curie’s laboratory at the Collège de France. In May 1939, together with Hans von Halban, they filed through the Caisse the patents that sketched out the production of nuclear energy and the principle of the atomic bomb.

With the arrival of Léon Blum’s government in 1936, an under-secretary of state for research was appointed for the first time. Another first: three women joined the government at a time when women did not yet have the right to vote in France. Irène Joliot-Curie accepted the post for three months in order to support the cause of women and that of scientific research. During this short period she set out major orientations: increases in research budgets, in salaries and in research fellowships.

Around 32,000 people work at the CNRS today, in collaboration with universities, private laboratories and other organisations

On her resignation, Jean Perrin succeeded her. With his image of a dishevelled scientist, the sexagenarian “immediately deployed the ardour of a young man, the enthusiasm of a beginner, not for the honours, but for the means of action they provided”, noted Jean Zay, the very young minister of national education of the time, in his memoirs. After four years of achievements, including the creation of laboratories such as the Institut d’astrophysique de Paris, the decree founding the CNRS was published in October 1939. Six weeks after the start of the Second World War, Jean Perrin declared: “No science is possible where thought is not free, and thought cannot be free unless conscience is also free. One cannot require chemistry to be Marxist and at the same time foster the development of great chemists; one cannot require physics to be one hundred per cent Aryan and keep the greatest of physicists on one’s territory… Each of us may die, but we want our ideal to live on.”

The mission of the CNRS is still today to “identify, carry out or have carried out, alone or with its partners, all research of interest to science as well as to the technological, social and cultural progress of the country”. Around 32,000 people, including 11,000 researchers, work at the CNRS in collaboration with universities, other research organisations and private laboratories. Most of the CNRS’s 1100 laboratories are run jointly with a partner institution. They host CNRS staff and, in the majority of cases, university teacher-researchers. These joint research units (unités mixtes de recherche), a status dating from 1966, are the building blocks of French research and make it possible to conduct cutting-edge research while staying close to teaching and to students.

The evolution of nuclear and high-energy physics

Jean Perrin

Placed under the authority of the Ministry of Higher Education, Research and Innovation, the CNRS is the largest research organisation in France. With an annual budget of 3.4 billion euros, it covers the full range of scientific research, from the humanities to the sciences of nature and life, of matter and of the Universe, and from fundamental research to applications. Its disciplines are organised into ten thematic institutes, which manage the scientific programmes as well as a substantial share of the investment in research infrastructures, such as the contributions of its laboratories to the experiments at CERN. It plays a coordinating role, in particular through its three national institutes, among them IN2P3 (Institut national de physique nucléaire et de physique des particules), alongside the national institutes for the sciences of the Universe and for mathematics.

When the CNRS was created, French physics was at the highest world level: Irène and Frédéric Joliot-Curie, Jean Perrin, Louis de Broglie and Pierre Auger are among the names that have entered the history of the discipline. Frédéric Joliot-Curie’s laboratory at the Collège de France played an important role thanks to its cyclotron, as did Irène Joliot-Curie’s Institut du radium, Louis Leprince-Ringuet’s laboratory at the École polytechnique and Jean Thibaud’s laboratory in Lyon. The very young CNRS financed equipment, fellowships and technical staff, as well as chairs in nuclear physics at universities and grandes écoles. The war brought a real rupture: researchers went into exile or tried to keep their laboratories running in considerable isolation.

Strengthened by his engagement in the Resistance, Frédéric Joliot-Curie took over the direction of the CNRS in August 1944 and worked to help France make up the ground lost during the war, notably in nuclear physics. After the atomic bombs were dropped on Hiroshima and Nagasaki, General de Gaulle asked Frédéric Joliot-Curie and Raoul Dautry, minister of reconstruction and urban planning, to set up the Commissariat à l’énergie atomique (CEA). In Joliot-Curie’s mind, this organisation was to bring together and coordinate all fundamental research in nuclear physics, including that of the university laboratories. From 1946, the great names joined the CEA: Pierre Auger, Irène Joliot-Curie, Francis Perrin, Lew Kowarski. The CNRS then paid little attention to the field. In 1947 the decision was taken to build a centre at Saclay combining fundamental and applied research on the subject. André Berthelot would head its nuclear-physics department and install several accelerators there.

The creation of CERN

In the 1950s, French physicists played an important role in the creation of CERN: Louis de Broglie, the first scientist of renown to call for the creation of a multinational laboratory, at a conference in Lausanne in 1949; Pierre Auger, who headed the department of exact and natural sciences at UNESCO; Raoul Dautry, administrator-general of the CEA; Francis Perrin, high commissioner; and Lew Kowarski, one of CERN’s first employees, who would later become director of its scientific and technical services. To him we owe the construction of the first bubble chamber at CERN and the introduction of computers. Frédéric Joliot-Curie, removed from his duties at the CEA in 1950 for his political convictions, was for his part deeply affected not to be appointed to the CERN Council, unlike Francis Perrin, who had succeeded him at the CEA. Aware of CERN’s potential, Louis Leprince-Ringuet redirected his teams’ research from cosmic rays towards accelerators. He would become the first French chair of the Scientific Policy Committee (SPC) in 1964, and his laboratory would play an important role in involving French physicists at CERN.

Irène and Frédéric Joliot-Curie in 1935

Another post-war CNRS recruit would also make a name for himself at CERN: Georges Charpak. Admitted to the CNRS as a researcher in 1948, he carried out his thesis under the supervision of Frédéric Joliot-Curie. While the latter wanted to steer him towards nuclear physics, he chose his own subject: detectors. In 1963 he was recruited to CERN by Leon Lederman. The rest is well known: he developed the multi-wire proportional chamber, which replaced bubble chambers and spark chambers by allowing digital processing of the data. The invention earned him the Nobel Prize in Physics in 1992.

On his return to the Collège de France, Frédéric Joliot-Curie joined Irène in working to create the Orsay campus. With new installations at CERN in prospect, infrastructure in France seemed to them necessary so that French physicists could train and prepare their experiments for CERN. “To help create and sustain CERN while letting French fundamental research in nuclear physics die out would be to act against the interests of our country and against those of science,” wrote Irène Joliot-Curie in Le Monde. The government of Pierre Mendès France gave priority to research and, in 1954, allocated funds for the construction of two accelerators: a synchrocyclotron at Irène Joliot-Curie’s Institut du radium, and a linear accelerator for Yves Rocard’s physics laboratory at the École normale supérieure. Irène Joliot-Curie obtained the land needed at Orsay for the construction of the Institut de physique nucléaire (IPNO) and the Laboratoire de l’accélérateur linéaire (LAL). She did not live to see the new laboratory: Frédéric Joliot-Curie became the first director of the IPNO, and Hans Halban, recalled from England, took over the direction of the LAL. These two emblematic institutes still play a major role in the French contributions to CERN.

The blossoming of the laboratories

During the 1950s and 1960s the CNRS grew strongly, and further laboratories for nuclear and high-energy physics were created. A Cockcroft-Walton accelerator built in Strasbourg by the Germans during the war became the seed of the Centre de recherches nucléaires, directed by Serge Gorodetzky. Created in 1947, Maurice Scherer’s chair of nuclear physics in Caen grew into the Laboratoire de physique corpusculaire. One of his first doctoral students, Louis Avan, would found a laboratory of the same name in Clermont-Ferrand in 1959. In Grenoble, Louis Néel laid the foundations of a major research activity in physics, and the CEA set up its Centre d’études nucléaires there in 1956. In 1967 the Franco-German research reactor of the Institut Laue-Langevin was built there. The same year saw the birth of the Institut des sciences nucléaires de Grenoble, which would house a cyclotron used in particular to produce radioisotopes for medicine. Its director, Jean Yoccoz, would later be one of the directors of IN2P3. The Centre d’études nucléaires de Bordeaux-Gradignan moved into a former Bordeaux château in 1969.

The physicists of these laboratories took an active part in experiments at CERN, benefiting in particular from the mobility made easier by the CNRS. Among them, Bernard Gregory, of Leprince-Ringuet’s laboratory, turned, with the imminent commissioning of CERN’s Proton Synchrotron (PS) in view, to the construction at Saclay of a large 81-centimetre liquid-hydrogen bubble chamber. It would produce more than ten million pictures of particle interactions, distributed throughout Europe. In 1965 Bernard Gregory was appointed Director-General of CERN. Five years later he succeeded Louis Leprince-Ringuet at the head of the École polytechnique laboratory, then became director-general of the CNRS. He was elected president of the CERN Council in 1977.

Managing the expansion

In the 1960s, research equipment became so large that the idea emerged within the CNRS of creating national institutes to coordinate the resources and activities of the laboratories. The director of the LAL, André Blanc-Lapierre, campaigned for the creation of a national institute of nuclear physics and particle physics, along the lines of Italy’s INFN, founded in 1951. The aim was to organise the resources allocated to the various laboratories by the CNRS, the universities and the CEA: discussions between the partners began.

The Italian collider AdA

In parallel, another debate stirred French physicists. After the construction in 1958 of the 3 GeV proton accelerator SATURNE at the CEA in Saclay, of the 1.3 GeV electron linear accelerator at the LAL in Orsay in 1962, and of the electron–positron collider ACO, opinions divided over the construction of a national machine that would complement CERN’s experimental capabilities and strengthen the French scientific community. Two proposals were in contention: a proton machine and an electron machine. All the more so as other machines were springing up elsewhere in Europe: in Italy, the electron–positron collider AdA was followed in 1969 by ADONE, while in Hamburg, Germany, the DESY electron synchrotron started up in 1964.

France, however, gave priority to European construction with CERN, and neither of the two proposed projects saw the light of day. Jean Teillac, Frédéric Joliot-Curie’s successor at the head of the IPNO, founded IN2P3 in 1971, bringing together the laboratories of the CNRS and of the universities. It was not until 1975 that the CEA and IN2P3 decided to build a national machine together at Caen, the Grand accélérateur national d’ions lourds (GANIL), specialised in nuclear physics. Although the CEA laboratories concerned are not part of IN2P3, the collaborations between the physicists of the two organisations are extensive.

Thus André Lagarrigue, director of the LAL from 1969, proposed the construction of a new bubble chamber, Gargamelle, on a CERN neutrino beam. He had earlier explored, at the École polytechnique, the feasibility of bubble chambers containing heavy liquids instead of hydrogen, which favour interactions with neutrinos. Built at CEA Saclay, the chamber, filled with liquid freon, was installed at CERN and in 1973 detected neutral currents: a major discovery, and one that would surely have earned a Nobel prize had Lagarrigue not succumbed to a heart attack in 1975.

Since then

André Lagarrigue

IN2P3 today comprises some twenty laboratories and around 3200 people, including 1000 researchers and university teacher-researchers, in the fields of nuclear, particle and astroparticle physics as well as cosmology. The institute contributes to the development of accelerators, detectors and observation instruments and to their applications. Its computing centre in Lyon plays an important role in processing and storing large volumes of data, and also hosts digital infrastructures for other disciplines.

The links with CERN are strong, through numerous projects and experiments: the discovery of the W and Z bosons by UA1 and UA2, LEP with contributions to ALEPH, DELPHI and L3, the discovery of the Higgs boson by ATLAS and CMS at the LHC, LHCb and ALICE, neutrino physics, CP violation, antimatter experiments, as well as nuclear physics. The collaborations involve other CNRS institutes such as the INP (Institut de physique), home to theorists as well as to specialists in quantum physics and lasers and to research on intense magnetic fields.

And what next? CERN’s future projects are under discussion as part of the update of the European strategy for particle physics. They offer the possibility of new collaborations emerging between CERN and the CNRS, in physics but also in engineering, computing, biomedical applications or, why not, the human sciences. Without a doubt, from the synergy between these two organisations, each carrying an exceptional scientific wealth, new and exciting research will emerge!


Last stop for the Higgs Couplings workshop

Higgs-boson measurements are entering the precision regime: couplings to gauge bosons are now measured to better than 10% precision, and decays to third-generation fermions to better than 20%. These and other recent experimental and theoretical results were the focus of discussions at the eighth international Higgs Couplings workshop, held in Oxford from 30 September to 4 October 2019. Making its final appearance under this moniker (next year it will be rebranded as Higgs 2020), the workshop comprised 38 plenary and 46 parallel talks attended by 120 participants.

The first two days of the conference reviewed Higgs measurements, including a new ATLAS measurement of ttH production using Higgs boson decays to leptons, and a differential measurement of Higgs boson production in its decays to W-boson pairs using all of the CMS data from Run 2. These measurements showed continuing progress in coupling measurements, but the highlight of the precision presentations was a new determination of the Higgs boson mass from CMS using its decays to two photons. Combining this result with previous CMS measurements gives a Higgs boson mass of 125.35 ± 0.15 GeV/c², corresponding to an impressive relative precision of 0.12%. From the theory side, the challenges of keeping up with experimental precision were discussed. For example, the Higgs boson production cross section is calculated to the highest order of any observable in perturbative QCD, and yet it must be predicted even more precisely to match the expected experimental precision of the HL-LHC.
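As a quick sanity check of the quoted figure, here is a minimal sketch using only the two numbers given above (not part of the original report):

```python
# Relative precision of the combined CMS Higgs-boson mass quoted above.
mass = 125.35         # combined mass, GeV/c^2 (value from the text)
uncertainty = 0.15    # total uncertainty, GeV/c^2 (value from the text)

relative_precision = uncertainty / mass
print(f"relative precision = {relative_precision:.2%}")  # prints ~0.12%
```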

ATLAS presented an updated self-coupling constraint

One of the highest priority targets of the HL-LHC is the measurement of the self-coupling of the Higgs boson, which is expected to be determined to 50% precision. This determination is based on double-Higgs production, to which the self-coupling contributes when a virtual Higgs boson splits into two Higgs bosons. ATLAS and CMS have performed extensive searches for double-Higgs production using data from 2016, and at the conference ATLAS presented an updated self-coupling constraint using a combination of single- and double-Higgs measurements and searches. Allowing only the self-coupling to be modified by a factor κλ in the loop corrections yields a constraint on the Higgs self-coupling of –2.3 < κλ < 10.3 times the Standard Model prediction at 95% confidence.

The theoretical programme of the conference included an overview of the broader context for Higgs physics, covering the possibility of generating the observed matter-antimatter asymmetry through a first-order electroweak phase transition, as well as possibilities for generating the Yukawa coupling matrices. In the so-called electroweak baryogenesis scenario, the cooling universe developed bubbles of broken electroweak symmetry with asymmetric matter-antimatter interactions at the boundaries, with sphalerons in the electroweak-symmetric phase converting the resulting matter asymmetry into a baryon asymmetry. The matter-asymmetric interactions could have arisen through Higgs boson couplings to fermions or gauge bosons, or through its self-couplings. In the latter case the source could be an additional electroweak singlet or doublet modifying the Higgs potential.

The broader interpretation of Higgs boson measurements and searches was discussed both in the case of specific models and in the Standard Model effective field theory, where new particles appear at significantly higher masses (~1 TeV/c² or more). The calculations in the effective field theory continue to advance, adding higher orders in QCD to more electroweak processes, as well as an analytical determination of the dependence of the Higgs decay width on the theory parameters. Constraints on the number and values of these parameters also continue to improve through an expanded use of input measurements.

The conference wrapped up with a look into the crystal ball of future detectors and colliders, with a sobering yet inspirational account of detector requirements at the next generation of colliders. To solve the daunting challenges, the audience was encouraged to be creative and explore new technologies, which will likely be needed to succeed. Various collider scenarios were also presented in the context of the European Strategy update, which will wrap up early next year.

The newly minted Higgs conference will be held in late October or early November of 2020 in Stony Brook, New York.

Redeeming the role of mathematics

A currently popular sentiment in some quarters is that theoretical physics has dived too deeply into mathematics, and lost contact with the real world. Perhaps, it is surmised, the edifice of quantum gravity and string theory is in fact a contrived Rube Goldberg machine, or a house of cards which is about to collapse – especially given that one of the supporting pillars, namely supersymmetry, has not been discovered at the LHC. Graham Farmelo’s new book sheds light on this issue.

The Universe Speaks in Numbers, reads Farmelo’s title. With hindsight this allows a double interpretation: first, that it is primarily mathematical structure which underlies nature; second, as a caution that the universe speaks to us purely via measured numbers, and theorists should pay attention to that. The majority of physicists would likely support both interpretations, and agree that there is no real tension between them.

The author, who was a theoretical physicist before becoming an award-winning science writer, does not embark on a detailed scientific discussion of these matters, but provides a historical tour de force of the relationship between mathematics and physics, and their tightly correlated evolution. In the time of the ancient Greeks there was no distinction between these fields, and it was only from about the 19th century onwards that they were viewed as separate. Evidently, a major factor was the growing role of experiments, which provided a firmer grounding in the physical world than what had previously been called natural philosophy.

Theoretical physicists should not allow themselves to be distracted by every surprising experimental finding

Paul Dirac

The book follows the mutual fertilisation of mathematics and physics through the last few centuries, as the disciplines gained momentum with Newton, and exploded in the 20th century. Along the way it peeks into the thinking of notable mathematicians and physicists, often with strong opinions. For example, Dirac, a favourite of the author, is quoted as reflecting both that “Einstein failed because his mathematical basis… was not broad enough” and that “theoretical physicists should not allow themselves to be distracted by every surprising experimental finding.” The belief that mathematical structure is at the heart of physics and that experimental results ought to have secondary importance holds sway in this section of the book. Such thinking is perhaps the result of selection bias, however, as only scientists with successful theories are remembered.

The detailed exposition makes the reader vividly aware that the relationship between mathematics and physics is a roller-coaster loaded with mutual admiration, contempt, misunderstandings, split-ups and re-marriages. Which brings us, towards the end of the book, to the current state of affairs in theoretical high-energy physics, which most of us in the profession would agree is characterised by extreme mathematical and intellectual sophistication, paired with a stunning lack of experimental support. After many decades of flourishing interplay, which provided, for example, the group-theoretical underpinning of the quark model, the geometry of gauge theories, the algebraic geometry of supersymmetric theories and finally strings, is there a new divorce ahead? It appears that some not only desire, but relish the lack of supporting experimental evidence. This concern is also expressed by the author, who criticises self-declared experts who “write with a confidence that belies the evident slightness of their understanding of the subject they are attacking”.

The last part of the book is the least readable. Based on personal interactions with physicists, the exposition becomes too detailed to be of use to the casual, or lay reader. While there is nothing wrong with the content, which is exciting, it will only be meaningful to people who are already familiar with the subject. On the positive side, however, it gives a lively and accurate snapshot of today’s sociology in theoretical particle physics, and of influential but less well known characters in the field.

The Universe Speaks in Numbers illuminates the role of mathematics in physics in an easy-to-grasp way, exhibiting in detail their interactive co-evolution until today. A worthwhile read for anybody, the book is best suited for particle physicists who are close to the field.

Jacques Soffer 1940–2019

Jacques Soffer

Jacques Soffer, a prolific theorist and phenomenologist with nearly 300 articles in journals or conference proceedings to his name, was born in 1940 in Marseille. During the war, he and his family were sheltered on a farm in the Alps. Afterwards, Jacques returned to Marseille, studied there, and obtained his doctoral degree under the supervision of A Visconti. He spent most of his career at the Centre de Physique Théorique in Marseille, serving as director from 1986 to 1993. He enjoyed sabbaticals at Maryland, Cambridge, CERN, the Weizmann Institute and Lausanne University, and after his retirement he became an adjunct professor at Temple University in the US.

Jacques played a big part in persuading the elementary particle community of the importance of polarisation-type measurements, which provide a probe of dynamical theories far sharper than tests involving just differential and total cross-sections. He is renowned in the community for predicting, together with Claude Bourrely and Tai Wu in 1984, the dramatic phenomenology of the growth with energy of the proton–proton cross-section. This prediction still holds when compared with experimental data after a 100-fold increase in collision energy – up to and including LHC energies. In 1999 Jacques contributed to a paper showing how to make an absolute measurement of the degree of polarisation of a proton beam – which was essential to the success of the Brookhaven spin programme.

In recent years, Jacques showed how positivity sets bounds on spin observables, with important applications to the extraction and determination of the polarised parton structure functions and to low-energy hadron–hadron scattering. His various achievements culminated in three major reviews in Physics Reports.

Jacques always cooperated closely and fruitfully with experimentalists. Entire programmes, such as the polarised proton–proton collisions at Brookhaven’s Relativistic Heavy-Ion Collider, were inspired by his work and carried out with his guidance. During his career, Jacques organised or co-organised several workshops and conferences on spin physics, and in more recent years often gave the summary talk.

Throughout his pioneering work in particle physics, Jacques always got to the central issues very quickly, guided by an uncanny feeling for the new physics that roused the amazement and admiration of his collaborators. His colleagues and collaborators, and especially his thesis students, benefited from his advice and his broad knowledge of theory tools and experimental facts. They unanimously praised his warm friendship and hospitality, his sense of humour and his widespread interests in the arts, literature and technology.

Jacques is survived by his wife, Danielle, their three children and nine grandchildren.

Anton Oed 1933–2018

Anton Oed

Anton Oed, a passionate inventor and a source of inspiration for many of us today, passed away on 30 September 2018. His introduction of micro-strip gas chambers (MSGCs) at the Institut Laue-Langevin (ILL) in 1988 was a decisive breakthrough in the field of radiation detectors. It demonstrated a significant gain in spatial resolution and counting rate, and the invention immediately stimulated the development of a new class of micro-pattern gas detectors (MPGDs).

Anton was born in 1933 in Ulm, Germany, and studied physics at the University of Tübingen. For his diploma thesis on “The double resonance spectrum of ²³Na”, he received the prize of the Faculty of Mathematics and Natural Sciences of the University of Tübingen. In his doctoral thesis, again in atomic physics, he studied the double-quanta decay of the hydrogen 2S level.

Anton arrived at the ILL in Grenoble in 1979, and set about developing the detector of the “Cosi Fan Tutte” spectrometer to measure the mass, charge and kinetic energy of fission fragments. The results obtained with this detector were so precise that it has served as a reference for several nuclear instruments in other institutes. Anton later started developing the MSGC technique to upgrade detectors of neutron diffractometers. Several ILL instruments are now equipped with MSGCs that have been in operation for more than 10 years.

The development of MSGCs for high-energy physics started at the beginning of the 1990s. Encouraging results were obtained by the RD28 collaboration at CERN but the relative fragility of MSGCs under harsh irradiation conditions motivated the development of new detectors with improved robustness. Among these, Micromegas and gas electron multipliers (GEMs) have become very successful and are currently being implemented in various upgrades to the LHC experiments. MSGC detectors are also used to detect X-rays on ESA’s INTEGRAL telescope.

In 1997 Anton received the R W Pohl medal from the Deutsche Physikalische Gesellschaft for the invention of the MSGC. To honour his memory, the ILL has established a prize promoting his innovative spirit and the ability to solve technical challenges in the field of micro-pattern gas detectors.

Memories of the technology’s development and of Anton’s personality were shared during a special session at the MPGD 2019 conference held in La Rochelle from 5–10 May. He was always a great inspiration to the many collaborators who worked with him. We will remember him as a very friendly and enthusiastic person, as well as for his kindness towards everybody.

Accelerating magnet technology

A Nb3Sn cable

The steady increase in the energy of colliders during the past 40 years was possible thanks to progress in superconducting materials and accelerator magnets. The highest particle energies have been reached by proton–proton colliders, where beams of high rigidity travelling on a piecewise circular trajectory require magnetic fields largely in excess of those that can be produced using resistive electromagnets. Starting from the Tevatron in 1983, through HERA in 1991 (see Constructing HERA: rising to the challenge), RHIC in 2000 and finally the LHC in 2008 (see LHC insertions: the key to CERN’s new accelerator and Superconductivity and the LHC: the early days), all large-scale hadron colliders were built using superconducting magnets.

Large superconducting magnets for detectors are just as important to large high-energy physics experiments as beamline magnets are to particle accelerators. In fact, detector magnets are where superconductivity first took hold, right from the infancy of the technology in the 1960s, with major installations such as the large bubble-chamber solenoid at Argonne National Laboratory, followed by the giant BEBC solenoid at CERN, which held the record for the highest stored energy for many years. A long line of superconducting magnets has provided the field to the detectors of all large-scale high-energy physics colliders (see ALEPH coil hits the road and CMS: a super solenoid is ready for business), with the last and largest realisation being the LHC experiments, CMS and ATLAS.

All past accelerator and detector magnets have one thing in common: they were built using composite Nb-Ti/Cu wires and cables. Nb-Ti is a ductile alloy with a critical field of 14.5 T and critical temperature of 9.2 K, made from almost equal parts of the two constituents and discovered to be superconducting in 1962. Its performance, quality and cost have been optimised over more than half a century of research, development and large-scale industrial production. Indeed, it is unlikely that the performance of the LHC dipole magnets, operated so far at 7.7 T and expected to reach nominal conditions at 8.33 T, can be surpassed using the same superconducting material, or any foreseeable improvement of this alloy.

The HL-LHC springboard

And yet, approved projects and studies for future circular machines are all calling for the development of superconducting magnets that produce fields beyond those produced for the LHC. These include the High-Luminosity LHC (HL-LHC), which is currently under way, and the Future Circular Collider design study (FCC), both at CERN, together with studies and programmes outside Europe, such as the Super proton–proton Collider in China (SppC) or the past studies of a Very Large Hadron Collider at Fermilab and the US–DOE Muon Accelerator Program. This requires that we turn to other superconducting materials and novel magnet technology.

Luca Bottura

To reach its main objective, to increase the levelled LHC luminosity at ATLAS and CMS by a factor of five and the integrated one by a factor of 10, HL-LHC requires very large-aperture quadrupoles, with field levels at the coil in the range of 12 T in the interaction regions. These quadrupoles, currently being produced, are the main fruit of the 10-year US-DOE LHC Accelerator Research Program (US–LARP) – a joint venture between CERN, Brookhaven National Laboratory, Fermilab and Lawrence Berkeley National Laboratory. In addition, the increased beam intensity calls for collimators to be inserted in locations within the LHC “dispersion suppressor”, the portion of the accelerator where the regular magnet lattice is modified to ensure that off-momentum particles are centred in the interaction points. To gain the required space, standard arc dipoles will be substituted by dipoles of shorter length and higher field, approximately 11 T. As described earlier, such fields require the use of new materials. For HL-LHC, the material of choice is the intermetallic compound of niobium and tin, Nb3Sn, which was discovered in 1954. Nb3Sn has a critical field of 30 T and a critical temperature of 18 K, outperforming Nb-Ti by a factor of two. Though discovered before Nb-Ti, and exhibiting better performance, Nb3Sn has not been used for accelerator magnets so far because in its final form it is brittle and cannot withstand large stress and strain without special precautions.

In fact, Nb3Sn was one of the candidate materials considered for the LHC in the late 1980s and mid 1990s. Already at that time it was demonstrated that accelerator magnets could be built with Nb3Sn, but it was also clear that the technology was complex, with a number of critical steps, and not ripe for large-scale production. A good 20 years of progress in basic material performance, cable development, magnet engineering and industrial process control was necessary to reach the present state, during which time the success of the production of Nb3Sn for ITER (see ITER’s massive magnets enter production) has given confidence in the credibility of this material for large-scale applications. As a result, magnet experts are now convinced that Nb3Sn technology is sufficiently mature to satisfy the challenging field levels required by HL-LHC.

The present manufacturing recipe for Nb3Sn accelerator magnets consists of winding the magnet coil with glass-fibre insulated cables made of multi-filamentary wires that contain Nb and Sn precursors in a Cu matrix. In this form the cables can be handled and plastically deformed without breakage. The coils then undergo heat treatment, typically at a temperature of around 600 to 700 °C, during which the precursor elements react chemically and form the desired Nb3Sn superconducting phase. At this stage, the reacted coil is extremely fragile and needs to be protected from any mechanical action. This is done by injecting a polymer, which fills the interstitial spaces among cables, and is subsequently cured to become a matrix of hardened plastic providing cohesion and support to the cables.

The above process, though conceptually simple, has a number of technical difficulties that call for top-of-the-line engineering and production control. To give some examples, the electrical insulation, consisting of a few tenths of a millimetre of glass-fibre, needs to be able to withstand the high-temperature heat-treatment step, but also retain dielectric and mechanical properties at liquid helium temperatures 1000 degrees lower. The superconducting wire also changes its dimensions by a few percent, which is orders of magnitude larger than the dimensional accuracy required for field quality and therefore must be predicted and accommodated by appropriate magnet and tooling design. The finished coil, even if it is made solid by the polymer cast, still remains stress and strain sensitive. The level of stress that can be tolerated without breakage can be up to 150 MPa, to be compared to the electromagnetic stress of optimised magnets operating at 12 T that can reach levels in the range of 100 MPa. This does not leave much headroom for engineering margins and manufacturing tolerances. Finally, protecting high-field magnets from quench, with their large stored energy, requires that the protection system have a very fast reaction – three times faster than at the LHC – and excellent noise rejection to avoid false trips related to flux jumps in the large Nb3Sn filaments.

The CERN magnet group, in collaboration with the US-DOE laboratories participating in the LHC Accelerator Upgrade Project, is in the process of addressing these and other challenges, finding solutions suitable for magnet production on the scale required for HL-LHC. A total of six 11 T dipoles (each about 6 m long) and 20 inner triplet quadrupoles (up to 7.5 m long) are in production. And yet, it is clear that we are not ready to extrapolate such production to a much larger scale, i.e. to the thousands of magnets required for a future hadron collider such as FCC-hh. This is exactly why HL-LHC is so critical to the development of high-field magnets for future accelerators: not only will it be the first demonstration of Nb3Sn magnets in operation, steering and colliding beams, but by building it on a scale that can be managed at the laboratory level we have a unique opportunity to identify all the areas of necessary development, and the open technology issues, to allow the next jump. Beyond its prime physics objective, HL-LHC is the springboard into the future of high-field accelerator magnets.

The climb to higher peak fields

For future circular colliders, the target dipole field has been set at 16 T for FCC-hh, allowing proton-proton collisions at an energy of 100 TeV, while the SppC aims at a 12 T dipole field as a first step, to be followed by a 20 T dipole. Are these field levels realistic? And based on which technology?
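For orientation, a textbook rule of thumb (not taken from the article) links the dipole field to the reachable beam energy through the bending radius; the roughly 100 km circumference used below is the figure quoted in the FCC design study, assumed here purely for illustration:

\[ p\,[\mathrm{GeV}/c] \approx 0.3\,B\,[\mathrm{T}]\,\rho\,[\mathrm{m}] \quad\Rightarrow\quad \rho \approx \frac{50\,000}{0.3 \times 16} \approx 10.4\ \mathrm{km} \]

per 50 TeV proton beam, i.e. about 65 km of bending, consistent with a ring of roughly 100 km circumference once straight sections and the dipole filling factor are taken into account.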

The large-bore Nb3Sn dipole FRESCA2 at CERN

Looking at the dipole fields produced by Nb3Sn development magnets during the past 40 years (figure 1), one sees that fields up to 16 T have been achieved in R&D demonstrators, suggesting that the FCC target can be reached. In 2018 “FRESCA2” – a large-aperture dipole developed over the past decade through a collaboration between CERN and CEA-Saclay in the framework of the European Union project EuCARD – attained a record field of 14.6 T at 1.9 K (13.9 T at 4.5 K). Another very relevant recent result is the successful test at Fermilab of the high-field dipole MDPCT1, which reached a field of 14.1 T at 4.5 K earlier this year.

A field of 16 T seems to be the upper limit that can be reached with Nb3Sn. Indeed, though the conductor performance can still be improved, as demonstrated by recent results obtained at NHMFL, OSU and FNAL within the scope of the US-DOE Magnet Development Program, this is the point at which the material itself will run out of steam: as for any other superconductor, the critical current density drops as the field is increased, requiring an increasing amount of material to carry a given current. This effect becomes dramatic when approaching a significant fraction of the critical field. As with Nb-Ti in the range of 8 T, a further field increase with Nb3Sn beyond 16 T would require an exceedingly large coil and an impractical amount of conductor. Reaching the ultimate performance of Nb3Sn, which will be situated between the present 12 T and the expected maximum of 16 T, still requires much work. The technology issues identified by the ongoing work on the HL-LHC magnets are exacerbated by the increase in field, electromagnetic force and stored energy. Innovative industrial solutions will be needed, and the conductor itself brought to a level of maturity comparable to Nb-Ti in terms of performance, quality and cost. This work is the core of the ongoing FCC magnet development programme that CERN is pursuing in collaboration with laboratories, universities and industries worldwide.

As the limit of Nb3Sn comes into view, we see history repeating itself: the only way to push beyond it to higher fields will be to resort to new materials. Since Nb3Sn is technically the low-temperature superconductor (LTS) with the highest performance, this will require a transition to high-temperature superconductors (HTS).

Brave new world of HTS

High-temperature superconductors, discovered in 1986, are of great relevance in the quest for high fields. When operated at low temperature (the same liquid-helium range as LTS), they have exceedingly large critical fields in the range of 100 T and above. And yet, only recently has material and magnet engineering reached the point where HTS materials can generate magnetic fields in excess of those of LTS. The first user applications coming to fruition are ultra-high-field NMR magnets, as recently delivered by Bruker Biospin, and the intense magnetic fields required by materials science, for example the 32 T all-superconducting user facility built by the US National High Magnetic Field Laboratory.

Diagram of record fields attained with Nb3Sn dipole magnets

As for their application in accelerator magnets, the potential of HTS to make a quantum leap is enormous. But it is also clear that the tough challenges that needed to be solved for Nb3Sn will escalate to a formidable level in HTS accelerator magnets. The magnetic force scales with the square of the field produced by the magnet, and for HTS the problem will no longer be whether the material can carry the super-currents, but rather how to manage stresses approaching structural material limits. Stored energy has the same square dependence on the field, and quench detection and protection in large HTS magnets are still a spectacular challenge. In fact, HTS magnet engineering will probably differ so much from the LTS paradigm that it is fair to say that we do not yet know whether we have identified all the issues that need to be solved. HTS is the most exciting class of material to work with; the new world for brave explorers. But it is still too early to count on practical applications, not least because the production cost for this rather complex class of ceramic materials is about two orders of magnitude higher than that of good old Nb-Ti.
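As a rough illustration of this quadratic scaling (a standard relation, not taken from the article): both the magnetic pressure on the coil and the stored-energy density of the field grow as the square of the field,

\[ p_{\mathrm{mag}} = u = \frac{B^{2}}{2\mu_{0}}, \qquad \frac{(16\ \mathrm{T})^{2}}{2\mu_{0}} \approx 1\times10^{8}\ \mathrm{Pa} \approx 100\ \mathrm{MPa}, \]

so doubling the field quadruples both, which is why stress management and quench protection, rather than current-carrying capacity, dominate the HTS magnet problem.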

It is quite logical to expect the near future to be based mainly on Nb3Sn. With the first demonstration to come imminently, in the LHC, we need to consolidate the technology and bring it to the maturity necessary for large-scale production. This will likely take place in steps – exploring 12 T territory first, while seeking the solutions to the challenges of ultimate Nb3Sn performance towards 16 T – and could take as long as a decade.

Meanwhile, nurtured by novel ideas and innovative solutions, HTS could grow from the present state of a material of great potential to its first applications. The grand challenges posed by HTS will likely require a revolution rather than an evolution of magnet technology, and significant technology advancement leading to large-scale application in accelerators can only be imagined on the 25-year horizon.

Road to the future

There are two important messages to retain from this rather simplified perspective on high-field magnets for accelerators. Firstly, given the long lead times of this technology, and even in times of uncertainty, it is important to maintain a healthy and ambitious programme so that the next step in technology is at hand when critical decisions on the accelerators of the future are due. The second message is that with such long development cycles and very specific technology, it is not realistic to rely on the private sector to advance and sustain the specific demands of HEP. In fact, the business model of high-energy physics is very peculiar, involving long investment times followed by short production bursts, and not sustainable by present industry standards. So, without taking the place of industry, it is crucial to secure critical know-how and infrastructure within the field to meet development needs and ensure the long-term future of our accelerators, present and to come. 

The galaxy that feeds three times per day

All galaxies are thought to contain a super-massive black hole (SMBH) at their centres, one of which was famously pictured for the first time by the Event Horizon Telescope only a few months ago (CERN Courier May/June 2019 p10). Both the size and activity of such SMBHs differ significantly from galaxy to galaxy: some galaxies contain an almost dormant black hole at their centre, while in others the SMBH is accumulating surrounding matter at a vast rate resulting in bright emission with energies ranging from the radio to the X-ray regime.

While solar-mass black holes can show dramatic variations in their emission on the time scale of days or even hours, such time scales increase with size, meaning that for an SMBH one would not expect much change during years or even centuries. However, observations during the past decade have revealed sudden increases. In 2010 the X-ray emission from a galaxy called GSN 069, which has a relatively small SMBH (400,000 solar masses), became 240 times brighter compared to observations in 1989 – turning it into an active galaxy. In such objects the matter falling into the central SMBH releases radiation when it approaches the event horizon (the boundary beyond which nothing can escape the black hole’s gravitational field).
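For orientation (a textbook scaling, not from the paper itself), the characteristic size of a black hole, and with it the shortest variability timescales, grows linearly with its mass:

\[ r_{s} = \frac{2GM}{c^{2}}, \qquad \frac{r_{s}}{c} \approx 10\ \mathrm{s}\times\frac{M}{10^{6}\,M_{\odot}}. \]

All characteristic accretion timescales scale up roughly in proportion, which is why variations that take hours or days around a stellar-mass black hole are expected to take years or longer around a supermassive one.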

The brightness of the emission produced as the SMBH feeds on the surrounding disk of matter typically varies randomly on short time scales, a result of changes in the accretion rate and turbulence in the disk. But subsequent observations with the European Space Agency’s X-ray satellite XMM-Newton in 2018 revealed never-before-seen behaviour. The object emitted strong bursts of X-rays lasting about one hour. Even more surprising was that the bursts appeared to occur at very consistent intervals of nine hours. Follow-up observations in 2019 with both XMM-Newton and NASA’s Chandra X-ray telescope have now confirmed this picture. While simultaneous observations at radio wavelengths showed no variability, the intensity of the bursts at X-ray wavelengths decreased. An extrapolation of this decrease indicates that, by now, the bursts should have fully disappeared, although further observations are needed to confirm this.

XMM-Newton data

The team behind the latest observations, published in Nature, has no clear explanation of what causes such extreme periodic behaviour from such a massive object. One possibility, the paper suggests, is that it is the result of a second SMBH orbiting the main one: each time it crosses the disk of matter a burst would be expected. However, the associated variation would be expected to be smoother than is observed. Furthermore, no such bursts were seen in the 2010 observations, making this explanation implausible. Another possibility is that a semi-destroyed star is currently orbiting the SMBH, disturbing the accretion rate. The last and most probable hypothesis is that the quasi-periodic eruptions are a result of complex oscillations, induced by instabilities, in the disk of hot matter surrounding the SMBH. The authors make it clear, however, that deeper studies are required to fully explain this new phenomenon.

Although this behaviour has so far been observed only in GSN 069, other galaxies could very well exhibit it too. SMBHs with masses many orders of magnitude larger could show the same periodic bursts but on time scales of months or years, explaining why no one has ever noticed them. So while it could be that GSN 069 is simply a strange galaxy, the finding could have large implications for galaxies in general.

Further reading:
G Miniutti et al. 2019 Nature 573 381.

European strategy enters next phase

European Strategy for Particle Physics

Physicists in Europe have published a 250-page “briefing book” to help map out the next major paths in fundamental exploration. Compiled by an expert physics-preparatory group set up by the CERN Council, the document is the result of an intense effort to capture the status and prospects for experiment, theory, accelerators, computing and other vital machinery of high-energy physics.

Last year, the European Strategy Group (ESG) — which includes scientific delegates from CERN’s member and associate-member states, directors and representatives of major European laboratories and organisations and invitees from outside Europe — was tasked with formulating the next update of the European strategy for particle physics. Following a call for input in September 2018, which attracted 160 submissions, an open symposium was held in Granada, Spain, on 13-16 May at which more than 600 delegates discussed the potential merits and challenges of the proposed research programmes. The ESG briefing book distills input from the working groups and the Granada symposium to provide an objective scientific summary.

“This document is the result of months of work by hundreds of people, and every effort has been made to objectively analyse the submitted inputs,” says ESG chair Halina Abramowicz of Tel Aviv University. “It does not take a position on the strategy process itself, or on individual projects, but rather is intended to represent the forward thinking of the community and be the main input to the drafting session in Germany in January.”

Collider considerations
An important element of the European strategy update is to consider which major collider should follow the LHC. The Granada symposium revealed there is clear support for an electron–positron collider to study the Higgs boson in greater detail, but four possible options at different stages of maturity exist: an International Linear Collider (ILC) in Japan, a Compact Linear Collider (CLIC) or Future Circular Collider (FCC-ee) at CERN, and a Circular Electron Positron Collider (CEPC) in China. The briefing book states that, in a global context, CLIC and FCC-ee are competing with the ILC and with CEPC. As Higgs factories, however, the report finds all four to have similar reach, albeit with different time schedules and with differing potentials for the study of physics topics at other energies.

Also considered in depth are design studies in Europe for colliders that push the energy frontier, including a 3 TeV CLIC and a 100 TeV circular hadron collider (FCC-hh). The briefing book details the estimated timescales to develop some of these technologies, observing that the development of 16 T dipole magnets for FCC-hh will take a comparable time (about 20 years) to that projected for novel acceleration technologies such as plasma-wakefield techniques to reach conceptual designs.

“The Granada symposium and the briefing book mention the urgent need for intensifying accelerator R&D, including that for muon colliders,” says Lenny Rivkin of Paul Scherrer Institut, who was co-convener of the chapter on accelerator science and technology. “Another important aspect of the strategy update is to recognize the potential impact of the development of accelerator and associated technology on the progress in other branches of science, such as astroparticle physics, cosmology and nuclear physics.”

The bulk of the briefing book details the current physics landscape and prospects for progress, with chapters devoted to electroweak physics, strong interactions, flavour physics, neutrinos, cosmic messengers, physics beyond the Standard Model, and dark-sector exploration. A preceding chapter about theory emphasises the importance of keeping theoretical research in fundamental physics “free and diverse” and “not only limited to the goals of ongoing experimental projects”. It points to historical success stories such as Peter Higgs’ celebrated 1964 paper, which had the purely theoretical aim to show that Gilbert’s theorem is invalid for gauge theories at a time when applications to electroweak interactions were well beyond the horizon.

“While an amazing amount of progress has been made in the past seven years since the Higgs boson discovery, our knowledge of the couplings of the Higgs boson to the W and Z and to third-generation charged fermions is quite imprecise, and the couplings of the Higgs boson to the other charged fermions and to itself are unmeasured,” says Beate Heinemann of DESY, who co-convened the report’s electroweak chapter. “The imperative to study this unique particle further derives from its special properties and the special role it might play in resolving some of the current puzzles of the universe, for example dark matter, the matter-antimatter asymmetry or the hierarchy problem.”

Readers are reminded that the discovery of neutrino oscillations constitutes a “laboratory” proof of physics beyond the Standard Model. The briefing book also notes the significant role played by Europe, via CERN, in neutrino-experiment R&D since the last strategy update concluded in 2013. Flavour physics too should remain at the forefront of the European strategy, it argues, noting that the search for flavour and CP violation in the quark and lepton sectors at different energy frontiers “has a great potential to lead to new physics at moderate cost”. An independent determination of the proton structure is needed if present and future hadron colliders are to be turned into precision machines, reports the chapter on strong interactions, and a diverse global programme based on fixed-target experiments as well as dedicated electron-proton colliders is in place.

Europe also has the opportunity to play a leading role in the searches for dark matter “by fully exploiting the opportunities offered by the CERN facilities, such as the SPS, the potential Beam Dump Facility, and the LHC itself, and by supporting the programme of searches for axions to be hosted at other European institutions”. The briefing book notes the strong complementarity between accelerator and astrophysical searches for dark matter, and the demand for deeper technology sharing between particle and astroparticle physics.

Scientific diversity
The diversity of the experimental physics programme is a strong feature of the strategy update. The briefing book lists outstanding puzzles that remain unchanged after LHC Run 2 – such as the origin of electroweak symmetry breaking, the nature of the Higgs boson, the pattern of quark and lepton masses and the neutrino’s nature – which can also be investigated by smaller-scale experiments at lower energies, as explored by CERN’s dedicated Physics Beyond Colliders initiative.

Finally, in addressing the vital roles of detector and accelerator development, computing and instrumentation, the report acknowledges both the growing importance of energy efficiency and the risks posed by “the limited amount of success in attracting, developing and retaining instrumentation and computing experts”, urging that such activities be properly recognized as fundamental research. Strong support for computing and infrastructure is also key to the success of the high-luminosity LHC which, the report states, will see “a very dynamic programme occupying a large fraction of the community” during the next two decades – including a determination of the couplings between the Higgs boson and Standard Model particles “at the percent level”.

Following a drafting session to take place in Bad Honnef, Germany, on 20-24 January, the ESG is due to submit its recommendations for the approval of the CERN Council in May 2020 in Budapest, Hungary.

“Now comes the most challenging part of the strategy update process: how to turn the exciting and well-motivated scientific proposals of the community into a viable and coherent strategy which will ensure progress and a bright future for particle physics in Europe,” says Abramowicz. “Its importance cannot be overestimated, coming at a time when the field faces several crossroads and decisions about how best to maintain progress in fundamental exploration, potentially for generations to come.”

Hadron therapy to get heavier in Southeast Europe

Montenegro prime minister Duško Marković marks the start of the SEEIIST design phase on 18 September.

A state-of-the-art facility for hadron therapy in Southeast Europe has moved from its conceptual to its design phase, following financial support from the European Commission. At a kick-off meeting on Wednesday 18 September in Budva, Montenegro, more than 120 participants discussed the future South East European International Institute for Sustainable Technologies (SEEIIST) – a facility for tumour therapy and biomedical research that follows the founding principles of CERN.

“This is a region that has no dilemma regarding its European affiliation, and which, I believe, will be part of a joint European competition for technological progress. Therefore, the International Institute for Sustainable Technologies is an urgent need of our region,” said Montenegro prime minister Duško Marković during the opening address. “I am confident that the political support for this project is obvious and indisputable. The memorandum of understanding was signed by six prime ministers in July this year in Poznan. I believe that other countries in the region will formally join the initiative.”

The idea for SEEIIST germinated three years ago at a meeting of trustees of the World Academy of Art and Science in Dubrovnik, Croatia. It is the brainchild of former CERN Director-General Herwig Schopper, and has benefitted from a political push from Montenegro minister of science Sanja Damjanović, a physicist working at CERN and GSI-FAIR in Darmstadt, Germany. SEEIIST aims to create a platform for internationally competitive research in the spirit of the CERN model “science for peace”, stimulating the education of young scientists, building scientific capacity and fostering greater cooperation and mobility in the region.

In January 2018, at a forum at the International Centre for Theoretical Physics in Italy held under the auspices of UNESCO, the International Atomic Energy Agency and the European Physical Society, two possibilities for a large international institute were presented: a synchrotron X-ray facility and a hadron-therapy centre. Soon afterwards, the 10 participating parties of SEEIIST’s newly formed intergovernmental steering committee chose the latter.

Europe has played a major role in the development of hadron therapy, with numerous centres currently offering proton therapy and four facilities offering proton and more advanced carbon-ion treatment. But no such facility currently exists in Southeast Europe, despite a growing number of tumours being diagnosed there. SEEIIST will follow the idea of the “PIMMS” accelerator design started at CERN two decades ago, profiting from the experience at the dual proton–ion centres CNAO in Italy and MedAustron in Austria, as well as at GSI and in Heidelberg. It will be a unique facility that splits its beam time 50:50 between treating patients and performing research with a wide range of different ions for radiobiology, imaging and treatment planning. The latter will include studies into the feasibility of heavier ions such as oxygen, making SEEIIST distinct in this rapidly growing field.

The next steps are to prepare a definite technical design for the facility, to propose a structure and business plan and to define the conditions for the site selection. To carry out these tasks, several working groups are being established in close collaboration with CERN and GSI-FAIR. “This great event was a culmination of the continuous efforts invested since 2017 into the project,” says Damjanović. “If all goes well, construction is expected to start in 2023, with first patient treatment in 2028.”

KATRIN sets first limit on neutrino mass

Based on just four weeks of running, researchers at the Karlsruhe Tritium Neutrino (KATRIN) experiment in Germany have set a new model-independent bound on the mass of the neutrino. At a colloquium today, the collaboration reported an upper limit of 1.1 eV at 90% confidence, almost halving the previous bound.

Neutrinos are among the least well understood particles in the Standard Model. Their three known mass eigenstates do not match up with the better-known flavour eigenstates, but mix according to the PMNS matrix, resulting in the flavour transmutations seen by neutrino-oscillation experiments. Despite their success in constraining neutrino mixing, such experiments are sensitive only to squared mass differences between the eigenstates, and not to the neutrino masses themselves.
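
As a reminder of the formalism (standard textbook notation, not drawn from the KATRIN result itself), the flavour states are superpositions of the mass eigenstates, and in the two-flavour approximation the vacuum oscillation probability depends only on the squared-mass splitting:

\nu_\alpha = \sum_{i=1}^{3} U^{*}_{\alpha i}\,\nu_i \quad (\alpha = e, \mu, \tau), \qquad
P(\nu_\alpha \to \nu_\beta) = \sin^2 2\theta \, \sin^2\!\left(\frac{\Delta m^2 L}{4E}\right),

where U is the PMNS matrix, L the baseline and E the neutrino energy – which is why oscillation data pin down the splittings Δm² but leave the absolute mass scale open.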

Physicists have pursued direct mass measurements since Reines and Cowan observed electron antineutrinos in inverse beta decays in 1956. The direct mass measurement method hinges on precisely measuring the energy spectrum of beta-decay electrons, and is considered model independent as the extracted neutrino mass depends only on the kinematics of the decay. KATRIN is now the most precise experiment of this kind. It builds on the invention of gaseous molecular tritium sources and spectrometers based on the principle of magnetic adiabatic collimation with electrostatic filtering. The combination of these methods culminated in the previous best limits of 2.3 eV at 95% confidence in 2005, and 2.05 eV at 95% confidence in 2011, by physicists working in Mainz, Germany and Troitsk, Russia, respectively. The KATRIN analysis improves on these experimental results, with systematic uncertainties reduced by a factor of six and statistical uncertainties reduced by a factor of two.
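
Schematically, and in simplified form (the standard kinematic treatment rather than the full KATRIN analysis model), the quantity probed is the effective mass

m_\beta^2 = \sum_{i=1}^{3} |U_{ei}|^2\, m_i^2 ,

and near the endpoint energy E_0 the electron spectrum behaves as

\frac{dN}{dE} \;\propto\; (E_0 - E)\,\sqrt{(E_0 - E)^2 - m_\beta^2} \quad \text{for } E \le E_0 - m_\beta ,

so a non-zero m_β both pulls in the endpoint and distorts the shape of the last few eV of the spectrum.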

“These are exciting times for the collaboration,” said KATRIN co-spokesperson Guido Drexlin. “The first KATRIN result is based on a measurement campaign of only four weeks at reduced source activity, equivalent to five days at nominal activity.” To reach its final sensitivity, KATRIN will collect data for 1000 days, and systematic errors will be reduced. “This will allow us to probe neutrino masses down to 0.2 eV,” continued Drexlin, “as well as many other interesting searches for beyond-the-Standard-Model physics, such as for admixtures of sterile neutrinos from the eV up to the keV scale.”

The KATRIN beamline

Conceived almost two decades ago, KATRIN performs a high-resolution, large-acceptance and low-background measurement of the tritium beta-decay spectrum, ³H → ³He + e⁻ + ν̄ₑ. Electrons are transported to the spectrometer via a beamline that was completed in autumn 2016, allowing experimenters to search for distortions in the tail of the electron energy distribution that depend on the absolute mass of the neutrino. KATRIN collaborators are now looking forward to a two-month measurement campaign, which will start in a few days. It is expected to deliver a signal-to-background ratio about an order of magnitude better than in the initial measurements, thanks to an increase in source activity and a reduction in background from hardware upgrades. The goal is to achieve an activity of 10¹¹ beta-decay electrons per second, while roughly halving the current background level.
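
To make the size of such a distortion concrete, here is a minimal numerical sketch – not KATRIN analysis code; the endpoint value and the phase-space-only spectrum are simplifying assumptions – comparing the last 10 eV of the spectrum for a massless and a hypothetical 1 eV neutrino:

import numpy as np

E0 = 18574.0  # approximate tritium endpoint energy in eV (assumed value)

def dN_dE(E, m_nu):
    """Schematic beta spectrum near the endpoint: phase-space factor only."""
    eps = E0 - E                      # energy below the endpoint
    rate = np.zeros_like(E)
    allowed = eps > m_nu              # decays are kinematically allowed only here
    rate[allowed] = eps[allowed] * np.sqrt(eps[allowed]**2 - m_nu**2)
    return rate

E = np.linspace(E0 - 10.0, E0, 1001)  # last 10 eV of the spectrum
massless = dN_dE(E, 0.0)
massive = dN_dE(E, 1.0)               # hypothetical 1 eV neutrino mass

# Fraction of the near-endpoint rate removed by a 1 eV mass:
print(f"rate deficit in last 10 eV: {1.0 - massive.sum() / massless.sum():.1%}")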

Direct measurements are not the only handle on neutrino masses available to physicists, though they are certainly the most model independent. Experiments searching for neutrinoless double beta-decay offer a complementary limit, but must assume that the neutrino is a Majorana fermion.
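
The quantity constrained in that case differs from m_β: under the Majorana assumption (standard notation, given here for illustration), neutrinoless double beta decay is sensitive to the coherent combination

m_{\beta\beta} = \Bigl|\sum_{i=1}^{3} U_{ei}^{2}\, m_i \Bigr| ,

in which the phases of the mixing-matrix elements can interfere, so the two approaches probe the mass scale in genuinely complementary ways.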

The tightest limit on neutrino masses comes from cosmology. Comparing data from the Planck satellite with simulations of the development of structure in the early universe yields an upper limit on the sum of all three neutrino masses of 0.17 eV at 95% confidence.
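
The cosmological handle exists because relic neutrinos contribute to the energy budget of the universe; in the standard picture their density parameter scales with the mass sum roughly as

\Omega_\nu h^2 \simeq \frac{\sum_i m_{\nu_i}}{93\ \mathrm{eV}} ,

so constraints on Ω_ν from the growth of structure translate directly into limits on Σm_ν.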

“The Planck limit is fairly robust, and one would have to go to great lengths to avoid it – but it’s not impossible to do so,” says CERN theorist Joachim Kopp. For example, it would be invalidated by a scenario where as-yet-undiscovered right-handed neutrinos couple to a new scalar field with a vacuum expectation value that evolves over cosmological timescales. “Planck data tell us what neutrinos were like in the early universe,” says Kopp. “The value of KATRIN lies in testing neutrinos now.”
