
Solving the next mystery in astrophysics


In 2007, while studying archival data from the Parkes radio telescope in Australia, Duncan Lorimer and his student David Narkevic of West Virginia University in the US found a short, bright burst of radio waves. It turned out to be the first observation of a fast radio burst (FRB), and further studies revealed additional events in the Parkes data dating from 2001. The origin of several of these bursts, which were slightly different in nature, was later traced back to the microwave oven in the Parkes Observatory visitors centre. After discarding these events, however, a handful of real FRBs in the 2001 data remained, while more FRBs were being found in data from other radio telescopes.

The cause of FRBs has puzzled astronomers for more than a decade. But dedicated searches under way at the Canadian Hydrogen Intensity Mapping Experiment (CHIME) and the Australian Square Kilometre Array Pathfinder (ASKAP), among other facilities, are intensifying the hunt for their origin. Recently, while still in its pre-commissioning phase, CHIME detected no fewer than 13 new FRBs – one of them classed as a “repeater” on account of its recurring radio output – setting the field up for an exciting period of discovery.

Dispersion

All FRBs have one thing in common: they last just a few milliseconds and have a relatively broad spectrum in which the radio waves with the highest frequencies arrive first, followed by those with lower frequencies. This dispersion is characteristic of radio waves travelling through a plasma, in which free electrons delay lower frequencies more than higher ones. Measuring the amount of dispersion thus indicates the column density of free electrons the pulse has traversed, and therefore the distance it has travelled. In the case of FRBs, the measured delay cannot be explained by propagation within the Milky Way alone, strongly indicating an extragalactic origin.
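As a rough illustration of the effect, the delay between two observing frequencies scales with the dispersion measure (DM), the integrated electron column density along the line of sight. The sketch below uses the standard cold-plasma dispersion constant; the example DM is the published value for the original Lorimer burst, while the frequency band is purely illustrative.

```python
# Dispersion delay of a radio burst through cold plasma (illustrative sketch).
# Lower frequencies lag higher ones by an amount proportional to the
# dispersion measure DM (in pc cm^-3).

K_DM = 4.149e3  # dispersion constant in MHz^2 pc^-1 cm^3 s

def dispersion_delay_s(dm_pc_cm3, f_lo_mhz, f_hi_mhz):
    """Arrival delay (s) of the low-frequency edge of a burst
    relative to the high-frequency edge."""
    return K_DM * dm_pc_cm3 * (f_lo_mhz**-2 - f_hi_mhz**-2)

# The Lorimer burst had DM ~ 375 pc cm^-3; across an illustrative
# 1.2-1.5 GHz band the pulse sweeps over roughly 0.4 s:
print(f"{dispersion_delay_s(375, 1200, 1500):.2f} s")  # ~0.39 s
```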

The size of the emission region responsible for FRBs can be deduced from their duration: light-travel-time arguments limit it to roughly the distance light covers in a few milliseconds, i.e. no more than about a thousand kilometres. The most likely sources are therefore compact, kilometre-sized objects such as neutron stars or black holes. Apart from their extragalactic origin and their compactness, not much more is known about the 70 or so FRBs that have been detected so far. Theories about their origin range from the mundane, such as pulsar or black-hole emission, to the spectacular, such as neutron stars travelling through asteroid belts or FRBs being messages from extraterrestrials.

One particular FRB, however, has been localised precisely, and its position coincides with a faint, previously unknown radio source within a dwarf galaxy – clear proof that the FRB was extragalactic. The reason this FRB could be localised is that it was one of several bursts to come from the same source, allowing more detailed studies and long-term observations. For a while it was the only source known to repeat, earning it the title “The Repeater”, but the recent detection by CHIME has now doubled the number of such sources. Repeating FRBs could be seen as evidence that FRBs are not the result of a cataclysmic event, since the source must survive in order to repeat. Another interpretation, however, is that there are two classes of FRBs: those that repeat and those that come from cataclysmic events.

Until recently, theories on the origin of FRBs outnumbered the detected FRBs themselves, showing how difficult it is to constrain theoretical models with the available data. The experience of a similar field – gamma-ray-burst (GRB) research, which aims to explain the bright flashes of gamma rays discovered during the 1960s – suggests that an increase in the number of detections, together with searches for counterparts at other wavelengths or in gravitational waves, will enable rapid progress. As the number of detected GRBs climbed into the thousands, the number of theories (which initially also included extraterrestrial origins) shrank rapidly to a handful. The start of data taking by ASKAP and the increasing sensitivity of CHIME mean we can look forward to exponential growth in the number of detected FRBs – and a correspondingly rapid decline in the number of theories about their origin.

Reviews

Lost in Math – How beauty leads physics astray
by Sabine Hossenfelder
Basic Books

The eye of the beholder

In Lost in Math, theoretical physicist Sabine Hossenfelder embarks on a soul-searching journey across contemporary theoretical particle physics. She travels to various countries to interview some of the most influential figures of the field (but also some “outcasts”) to challenge them, and be challenged, about the role of beauty in the investigation of nature’s laws.

Colliding head-on with the lore of the field and with practically all popular-science literature, Hossenfelder argues that beauty is overrated. Some leading scientists say that their favourite theories are too beautiful not to be true, or possess such a rich mathematical structure that it would be a pity if nature did not abide by those rules. Hossenfelder retorts that physics is not mathematics, and names examples of extremely beautiful and rich maths that does not describe the world. She reminds us that physics is based on data. So, she wonders, what can be done when an entire field is starved of experimental breakthroughs?

Confirmation bias

Nobel laureate Steven Weinberg, interviewed for this book, argues that experts call “beauty” the experience-based feeling that a theory is on a good track. Hossenfelder is sceptical that this attitude really comes from experience. Maybe most people who chose to work in this field were attracted to it in the first place because they like mathematics and symmetries, and would not have entered it otherwise. We may be victims of confirmation bias: we choose to believe that aesthetic sense leads to correct theories; hence we readily call to mind all of the correct theories that possess some quality of beauty, while paying less attention to the counterexamples. Dirac and Einstein, among many, vocally affirmed beauty as a guiding principle, and achieved striking successes by following its guidance; however, they also had, as Hossenfelder points out, several spectacular failures that are less well known. Moreover, a theoretical sense of beauty is far from universal. Copernicus made a breakthrough because he sought a form of beauty that differed from those of his predecessors, making him think outside the box; and by today’s taste, Kepler’s solar system of Platonic solids feels silly and repulsive.

Hossenfelder devotes attention to a concept that is particularly relevant to contemporary particle physics: the “naturalness principle” (see Understanding naturalness). Take the case of the Higgs mass: the textbook argument is that quantum corrections go wild for the Higgs boson, making any mass value between zero and the Planck mass a priori possible; however, its value happens to be closer to zero than to the Planck mass by a factor of 10¹⁷. Hence, most particle physicists argue that there must be an almost perfect cancellation of corrections, a problem known as the “hierarchy problem”. Hossenfelder points out that implicit in this simple argument is the assumption that all values between zero and the Planck mass are equally likely. “Why,” she asks, “are we assuming a flat probability, instead of a logarithmic (or whatever other function) one?” In general, we say that a new theory is necessary when a parameter value is unlikely, but she argues that we can estimate the likeliness of that value only when we have a prior likelihood function, for which we would need a new theory.
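For readers who want to see the arithmetic behind that factor, a minimal worked version of the textbook estimate runs as follows (a sketch assuming a Planck-scale cutoff; it is not taken from the book):

```latex
% One-loop, cutoff-regularised correction to the Higgs mass-squared,
% dominated by the top-quark loop (y_t is the top Yukawa coupling):
\delta m_H^2 \sim \frac{y_t^2}{16\pi^2}\,\Lambda^2,
\qquad \Lambda \sim M_{\mathrm{Pl}} \approx 1.2\times10^{19}\ \mathrm{GeV}.
% Keeping the observed m_H \approx 125 GeV then requires the bare mass and
% the correction to cancel to about one part in (M_Pl/m_H)^2 ~ 10^{34};
% equivalently m_H/M_Pl ~ 10^{-17}, the factor quoted above.
```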

New angles

Hossenfelder illustrates various popular solutions to this naturalness problem, which in essence all try to make small values of the Higgs mass much more likely than large ones. She also discusses string theory, as well as multiverse hypotheses and anthropic solutions, exposing their shortcomings. Some of her criticisms may recall Lee Smolin’s The Trouble with Physics and Peter Woit’s Not Even Wrong, but Hossenfelder brings new angles to the discussion.

This book comes out at a time when more and more specialists are questioning the validity of naturalness-inspired predictions. Many popular theories inspired by the naturalness problem share an empirical consequence: either they manifest themselves soon in existing experiments, or they definitively fail to solve the problems they were invented for.

Hossenfelder describes in derogatory terms the typical argumentative structure of contemporary theory papers that predict new particles “just around the corner”, while explaining why we have not observed them yet. She finds the same attitude in what she calls the “di-photon diarrhoea”, i.e. the prolific reaction of the same theoretical community to a statistical fluctuation at a mass of around 750 GeV in the earliest data from the LHC’s Run 2.

The author explains complex matters at the cutting edge of theoretical physics research in a clear way, with original metaphors and appropriate illustrations. With this book, Hossenfelder not only reaches out to the public, but also invites it to join a discourse that she is clearly passionate about. The intended readership ranges from fellow scientists to the layperson, also including university administrators and science policy makers, as is made explicit in an appendix devoted to practical suggestions for various categories of readers.

While this book will mostly attract attention for its pars destruens, it also contains a pars construens. Hossenfelder argues for looking away from the lamppost, both theoretically and experimentally. Having painted naturalness arguments as a red herring that draws attention away from the real issues, while acknowledging throughout the book that when data offer no guidance there is no choice but to follow some non-empirical assessment criteria, she advocates criteria that deserve greater prominence, such as the internal consistency of the theoretical foundations of particle physics.

As a non-theorist my opinion carries little weight, but my gut feeling is that this direction of investigation, although undeniably crucial, is not comparably “fertile”. On the other hand, Hossenfelder makes it clear that she sees nothing scientific in this kind of fertility, and even argues that bibliometric obsessions played a big role in creating what she depicts as a gigantic bibliographical bubble. Inspired by that, Hossenfelder also advises learning to recognise and mitigate biases, and building a culture of criticism both in the scientific arena and in response to policies whose short-term incentives discourage the exploration of less conventional ideas. Regardless of what one may think about the merits of naturalness or other non-empirical criteria, I believe that these suggestions are uncontroversially worthy of consideration.

Andrea Giammanco, UCLouvain, Louvain-la-Neuve, Belgium.

Amaldi’s last letter to Fermi: a monologue
Theatre, CERN Globe
11 September 2018

Ideas shaker

On the occasion of the 110th anniversary of the birth of Italian physicist Edoardo Amaldi (1908–1989), CERN hosted a new production titled “Amaldi l’italiano, centodieci e lode!” The title is a play on words concerning the top score at an Italian university (“110 cum laude”) and the production is a well-deserved recognition of a self-confessed “ideas shaker” who was one of the pioneers in the establishment of CERN, the European Space Agency (ESA) and the Italian National Institute for Nuclear Physics (INFN).

The nostalgic monologue opens with Amaldi, played by Corrado Calda, sitting at his desk and writing a letter to his mentor, Enrico Fermi. Set on the last day of Amaldi’s life, the play retraces some of his scientific, personal and historical memories, which pass by while he writes.

It begins in 1938, when Amaldi is part of an enthusiastic group of young scientists led by Fermi and nicknamed the “Via Panisperna boys” (after Panisperna Road, the location of the physics institute of the University of Rome). Their discoveries on slow neutrons led to Fermi’s Nobel Prize in Physics that year.

Then, suddenly, World War II begins and everything falls apart. Amaldi writes about his frustrations to his teacher, who has passed away but remains close to him. “While physicists were looking for physical laws, Europe sank into racial laws,” he despairs. Indeed, most of his colleagues and friends, including Fermi, whose wife was Jewish, moved to the US. Left alone in Italy, Amaldi decided to stop his studies on fission and focus on cosmic rays – research that required fewer resources and had no military applications.

Out of the ruins

After World War II, while in Italy there was barely enough money to buy food, the US was building state-of-the-art particle-physics detectors. Amaldi describes his strong temptation to cross the ocean and rejoin Fermi. However, he decided to stay in war-torn Europe and help European science grow out of the ruins. He worked to achieve his dream of “a laboratory independent from military organisations, where scientists from all over the world could feel at home” – today known as CERN. He was secretary-general of CERN between 1952 and 1954, before its official foundation in September 1954.

This beautiful monologue is interspersed with radio messages from the era announcing salient historical events. These anchor the piece in its time, and the atmosphere becomes less and less tense as alerts about Nazi declarations and bombings give way to news of the first votes for women, the landing of the first person on the Moon and the disarmament movements.

Written and directed by Giusy Cafari Panico and Corrado Calda, the play was composed after consulting Edoardo’s son, Ugo Amaldi, who was present at the inaugural performance. The script is so rich in information that you leave the theatre feeling you now know a lot about the scientific endeavours, mindsets and general zeitgeist of the last century. Moreover, the play touches on topics that are still very relevant today, including brain drain, European identity, women in science and the use of science for military purposes.

The event was made possible thanks to the initiative of Ugo Amaldi, CERN’s Lucio Rossi, the Edoardo Amaldi Association (Fondazione Piacenza e Vigevano, Italy), and several sponsors. The presentation was introduced by former CERN Director-General Luciano Maiani, who was Edoardo Amaldi’s student, and current CERN Director-General Fabiola Gianotti, who expressed her gratitude for Amaldi’s contribution in establishing CERN.

Letizia Diamante, CERN.

Topological and Non-Topological Solitons in Scalar Field Theories
by Yakov M Shnir
Cambridge University Press

In the 19th century, the Scottish engineer John Scott Russell was the first to observe what he called a “wave of translation”, while watching a boat drawn along a channel by a pair of horses. This phenomenon is now referred to as a soliton and is described mathematically as a stable, non-dissipative wave packet that maintains its shape while propagating at a constant velocity.

Solitons emerge in various nonlinear physical systems, from nonlinear optics and condensed matter to nuclear physics, cosmology and supersymmetric theories.

Structured in three parts, this book provides a comprehensive introduction to the description and construction of solitons in various models. In the first two chapters of part one, the author discusses the properties of topological solitons in the completely integrable sine-Gordon model and in non-integrable models with polynomial potentials. Then, in chapter three, he introduces solitary-wave solutions of the Korteweg–de Vries equation, which provide an example of non-topological solitons.

Part two deals with higher-dimensional nonlinear theories. In particular, the properties of scalar soliton configurations are analysed in two (2+1)-dimensional systems: the O(3) nonlinear sigma model and the baby Skyrme model. Part three focuses mainly on solitons in three spatial dimensions. Here, the author covers stationary Q-balls and their properties. Then he discusses soliton configurations in the Skyrme model (called skyrmions) and the knotted solutions of the Faddeev–Skyrme model (hopfions). The properties of related deformed models, such as the Nicole and Aratyn–Ferreira–Zimerman models, are also summarised.

Based on the author’s lecture notes for a graduate-level course, this book is addressed to graduate students in theoretical physics and mathematics, as well as researchers interested in solitons.

Virginia Greco, CERN.

Universal Themes of Bose–Einstein Condensation
by Nick P Proukakis, David W Snoke and Peter B Littlewood
Cambridge University Press

The study of Bose–Einstein condensation (BEC) has undergone an incredible expansion during the last 25 years. Back then, the only experimentally realised Bose condensate was liquid helium-4, whereas today the phenomenon has been observed in a number of diverse atomic, optical and condensed-matter systems. The turning point for BEC came in 1995, when three different US groups reported the observation of BEC in trapped, weakly interacting atomic gases of rubidium-87, lithium-7 and sodium-23 within weeks of one another. These studies led to the 2001 Nobel Prize in Physics being jointly awarded to Eric Cornell, Wolfgang Ketterle and Carl Wieman.

This book is a collection of essays written by leading experts on various aspects and branches of BEC, which is now a broad and interdisciplinary area of modern physics. Composed of five parts, the volume starts with the history of the rapid development of the field and then takes the reader through its most important results.

The second part provides an extensive overview of general themes related to universal features of Bose–Einstein condensates, such as whether BEC involves spontaneous symmetry breaking, how ideal Bose-gas condensation is modified by interactions between the particles, and the concept of universality and scale invariance in cold-atom systems. Part three focuses on active research topics in ultracold environments, including optical-lattice experiments, the study of distinct sound velocities in ultracold atomic gases – which has shaped our current understanding of superfluid helium – and quantum turbulence in atomic condensates.

Part four is dedicated to the study of condensed-matter systems that exhibit various features of BEC, while in part five possible applications of the study of condensed matter and BEC to answer questions on astrophysical scales are discussed.

Virginia Greco, CERN.

Zeros of Polynomials and Solvable Nonlinear Evolution Equations
by Francesco Calogero
Cambridge University Press

This concise book discusses the mathematical tools used to model complex phenomena via systems of nonlinear equations, which can be used to describe many-body problems.

Starting from a well-established approach to identifying solvable dynamical systems, the author proposes a novel algorithm that removes some of the restrictions of that approach and thus identifies more solvable/integrable N-body problems. After presenting this new differential algorithm for evaluating all the zeros of a generic polynomial of arbitrary degree, the book offers many examples of its application and impact. The author first discusses systems of ordinary differential equations (ODEs), including second-order ODEs of Newtonian type, and then moves on to systems of partial differential equations and equations evolving in discrete time-steps.

This book is addressed to both applied mathematicians and theoretical physicists, and can be used as a basic text for a topical course for advanced undergraduates.

Virginia Greco, CERN.

Actinide series shown to end with lawrencium


One hundred and fifty years after Dmitri Mendeleev revolutionised chemistry with the periodic table of the elements, an international team of researchers has resolved a longstanding question about one of its more mysterious regions – the actinide series (or actinoids, as adopted by the International Union of Pure and Applied Chemistry, IUPAC).

The periodic table’s neat arrangement of rows, columns and groups is a consequence of the electronic structures of the chemical elements. The actinide series has long been identified as a group of heavy elements starting with atomic number Z = 89 (actinium) and extending up to Z = 103 (lawrencium), each of which is characterised by a stabilised 7s² outer electron shell. But the electron configurations of the heaviest elements of this sequence, from Z = 100 (fermium) onwards, have been difficult to measure, preventing confirmation of the series. The reason for the difficulty is that elements heavier than fermium can be produced only one atom at a time in nuclear reactions at heavy-ion accelerators.

Confirmation

Now, Tetsuya Sato at the Japan Atomic Energy Agency (JAEA) and colleagues have used a surface ion source and isotope mass-separation technique at the tandem accelerator facility at JAEA in Tokai to show that the actinide series ends with lawrencium. “This result, which would confirm the present representation of the actinide series in the periodic table, is a serious input to the IUPAC working group, which is evaluating if lawrencium is indeed the last actinide,” says team member Thierry Stora of CERN.

Using the same technique, Sato and co-workers measured the first ionisation potential of lawrencium back in 2015. Since this is the energy required to remove the most weakly bound electron from a neutral atom and is a fundamental property of every chemical element, it was a key step towards mapping lawrencium’s electron configuration. The result suggested that lawrencium has the lowest first ionisation potential of all the actinides, as expected owing to its weakly bound electron in the 7p1/2 valence orbital. But with only this value the team couldn’t confirm the expected increase of the ionisation values of the heavy actinides up to nobelium (Z = 102). This increase occurs with the filling of the 5f electron shell, in a manner similar to the filling of the 4f electron shell up to ytterbium in the lanthanides.

In their latest study, Sato and colleagues have determined the successive first ionisation potentials from fermium to lawrencium, which is essential to confirm the filling of the 5f shell in the heavy actinides (see figure). The results agree well with those predicted by state-of-the-art relativistic calculations in the framework of QED and confirm that the ionisation values of the heavy actinides increase up to nobelium, while that of lawrencium is the lowest among the series.

The results demonstrate that the 5f orbital is fully filled at nobelium (with the [Rn]5f¹⁴7s² electron configuration, where [Rn] is the radon configuration) and that lawrencium has a weakly bound electron, confirming that the actinides end with lawrencium. The nobelium measurement also agrees well with laser-spectroscopy measurements made at the GSI Helmholtz Centre for Heavy Ion Research in Darmstadt, Germany.

“The experiments conducted by Sato et al. constitute an outstanding piece of work at the top level of science,” says Andreas Türler, a chemist from the University of Bern, Switzerland. “As the authors state, these measurements provide unequivocal proof that the actinide series ends with lawrencium (Z = 103), as the filling of the 5f orbital proceeds in a very similar way to lanthanides, where the 4f orbital is filled. I am already eagerly looking forward to an experimental determination of the ionisation potential of rutherfordium (Z = 104) using the same experimental approach.”

CMS has high luminosity in sight


The CMS detector has performed better than was thought possible when it was conceived. Combined with advances in analysis techniques, this has allowed the collaboration to make measurements – such as that of the coupling between the Higgs boson and bottom quarks – that were once deemed impossible. Indeed, together with its sister experiment ATLAS, CMS has turned the traditional view of hadron colliders as “hammers” rather than “scalpels” on its head.

In exploiting the LHC and its high-luminosity upgrade (HL-LHC) to maximum effect in the coming years, the CMS collaboration has to battle higher overall particle rates, higher “pileup” of superimposed proton–proton collision events per LHC bunch crossing, and higher instantaneous and integrated radiation doses to the detector elements. In the collaboration’s arsenal to combat this assault are silicon sensors able to withstand the levels of irradiation expected, a new high-rate trigger, and detectors with higher granularity or precision timing capabilities to help disentangle piled-up events.

The majority of CMS detector upgrades for the HL-LHC will be installed and commissioned during long-shutdown three (LS3). However, the planned 30-month duration of LS3 imposes logistical constraints, meaning that a large part of the muon-system upgrade and many ancillary systems (such as cooling, power and environmental control) need to be installed substantially beforehand. This makes the CMS work plan for LS2 extremely complex, dividing it into three classes of activity: the five-yearly maintenance of the existing detectors and services, the completion of the so-called “phase-1” upgrades necessary for CMS to continue to operate until LS3, and the initial upgrades to detectors, infrastructure or ancillary systems necessary for the HL-LHC. “The challenge of LS2 is to prepare CMS for Run 3 while not neglecting the work needed now to prepare for Run 4,” says technical coordinator Austin Ball.

A dedicated CMS upgrade programme has been planned since the LHC switched on in 2008. It is being carried out in two phases: the first, which started in 2014 during LS1, concerns improvements to deal with a factor-of-two increase over the design instantaneous luminosity delivered in Run 2; the second covers the upgrades necessary for the HL-LHC. The phase-1 upgrade is almost complete, thanks to work carried out during LS1 and regular end-of-year technical stops. This included the replacement of the three-layer barrel (two-disk forward) pixel detector with a four-layer barrel (three-disk forward) version, the replacement of photosensors and front-end electronics for some of the hadron calorimeters, and the introduction of a more powerful, FPGA-based, level-1 hardware trigger. LS2 will conclude phase 1 by replacing the photosensors (hybrid photodiodes) in the barrel hadron calorimeter with silicon photomultipliers and replacing the innermost pixel barrel layer.

Phase-2 activities

But LS2 also sees the start of the phase-2 CMS upgrade, the first step of which is a new beampipe. The collaboration already replaced the beampipe during LS1 with a narrower one to allow the phase-1 pixel detector to reach closer to the interaction point. Now, the plan is to extend the cylindrical section of the beampipe further to provide space for the phase-2 pixel detector with enlarged pseudo-rapidity coverage, to be installed in LS3. In addition, for the muon detectors CMS will install a new gas electron multiplier (GEM) layer in the inner ring of the first endcap disk, upgrade the on-detector electronics of the cathode strip chambers, and lay services for a future GEM layer and improved resistive plate chambers. Several other preparations of the detector infrastructure and services will take place in LS2 to be ready for the major installations in LS3.


Work plan

Key elements of the LS2 work plan include: constructing major new surface facilities; modifying the internal structure of the underground cavern to accommodate new detector services (especially CO2 cooling); replacing the beampipe for compatibility with the upgraded tracking system; and improving the powering system of the 3.8 T solenoid to increase its longevity through the HL-LHC era. In addition, the system for opening and closing the magnet yoke for detector access will be modified to accommodate future tolerance requirements and service volumes, and the shielding system protecting detectors from background radiation will be reinforced. Significant upgrades of electrical power, gas distribution and the cooling plant also have to take place during LS2.

The CMS LS2 schedule is now fully established, with a critical path starting with the pixel-detector and beampipe removal and extending through the muon-system upgrade and maintenance, installation of the phase-2 beampipe plus the revised phase-1 innermost pixel layer, and, after closing the magnet yoke, re-commissioning of the magnet with the upgraded powering system. The other LS2 activities, including the barrel hadron-calorimeter work, will take place in the shadow of this critical path.


“The timely completion of the intense LS2 programme, including the construction of the on-site surface infrastructures necessary for the construction, assembly or refurbishment activities of the phase-2 detectors, is critical for a successful CMS phase-2 upgrade,” explains upgrade coordinator Frank Hartmann. “Although still far away, LS3 activities are already being planned in detail.” The future LS3 shutdown will see the CMS tracker completely replaced with a new outer tracker that can provide tracks at 40 MHz to the upgraded level-1 trigger, and with a new inner tracker with extended pseudo-rapidity coverage. The 36 modules of the barrel electromagnetic calorimeter will be removed and their on-detector electronics upgraded to enable the high readout rate, while both current hadron and electromagnetic endcap calorimeters will be replaced with a brand-new system (see “A new era in calorimetry” box). The addition of timing detectors in the barrel and endcaps will allow a 4D reconstruction of collision vertices and, together with the other new and upgraded detectors, reduce the effective event pile-up at the HL-LHC to a level comparable to that already seen.

“The upgraded CMS detector will be even more powerful and able to make even more precise measurements of the properties of the Higgs boson as well as extending the searches for new physics in the unprecedented conditions of the HL-LHC,” says CMS spokesperson Roberto Carlin. 

ATLAS upgrades in LS2

Iron support

To precisely study the Higgs boson and extend our sensitivity to new physics in the coming years of LHC operations, the ATLAS experiment has a clear upgrade plan in place. Ageing of the inner tracker due to radiation exposure, data volumes that would saturate the readout links, obsolescence of electronics, and a collision environment swamped by up to 200 interactions per bunch crossing are some of the headline challenges facing the 3000-strong collaboration. While many installations will take place during long-shutdown three (LS3), beginning in 2024, much activity is taking place during the current LS2 – including major interventions to the giant muon spectrometer at the outermost reaches of the detector.

The main ATLAS upgrade activities during LS2 are aimed at increasing the trigger efficiency for leptonic and hadronic signatures, especially for electrons and muons with a transverse momentum of at least 20 GeV. To improve the selectivity of the electron trigger, the amount of information used for the trigger decision will be drastically increased: until now, the very fine-grained information produced by the electromagnetic calorimeter has been grouped into “trigger towers” to limit the number, and hence cost, of trigger channels, but advances in electronics and the use of optical fibres now allow the transmission of a much larger amount of information at a reasonable cost. By replacing some of the components of the front-end electronics of the electromagnetic calorimeter, the level of segmentation available at the trigger level will be increased fourfold, improving the ability to reject jets and preserve electrons and photons. The ATLAS trigger and data-acquisition systems will also be upgraded during LS2 with new electronics boards that can deal with the more granular trigger information coming from the detector.

New small wheels

Since 2013, ATLAS has been working on a replacement for its “small wheel” forward-muon endcap systems so that they can operate under the much harsher background conditions of the future LHC. The new small wheel (NSW) detectors employ two detector technologies: small-strip thin gap chambers (sTGC) and Micromegas (MM). Both technologies are able to withstand the higher flux of neutrons and photons expected in future LHC interactions, which will produce counting rates as high as 20 kHz cm⁻² in the inner part of the NSW, while delivering information for the first-level trigger and muon measurement. The main aim of the NSW is to reduce the fake muon triggers in the forward region and improve the sharpness of the trigger threshold drastically, allowing the same selection power as the present high-level trigger.


The first NSW started to take shape at CERN last year. The iron shielding disks (see “Iron support” image), which serve as the support for the NSW detectors in addition to shielding the endcap muon chambers from hadrons, have been assembled, while the services team is installing numerous cables and pipes on the disks. Only a few millimetres of space are available between the disk and the chambers for the cables on one side, and between the disk and the calorimeter on the other, and the task is made even more difficult by having to work from an elevated platform. In a nearby building, the sTGC chambers coming from the different construction sites are being integrated into full wedges; later this year, the Micromegas wedges will be integrated and tested at a separate integration site. The construction of the sTGC chambers is taking place in Canada, Chile, China, Israel and Russia, while the Micromegas are being constructed in France, Germany, Greece, Italy and Russia. On a daily basis, cables arrive to be fitted with connectors and tested; piping is cut to length, cleaned and protected until installation; and gas-leak and high-voltage test stations are employed for quality control. In the meantime, several smaller upgrades will be deployed during LS2, including the installation of 16 new muon chambers in the inner layer of the barrel spectrometer.

The organisation of LS2 activities is a complex exercise in which the maintenance needs of the detectors have to be addressed in parallel with installation schedules. After a first period devoted to the opening of the detector and the maintenance of the forward muon spectrometer, the first major non-standard operation (scheduled for January) will be to bring the first small wheel to the surface. Having the detector fully open on one side will also allow very important tests for the installation of the new all-silicon inner tracker, which is scheduled for LS3. The upgrade of the electromagnetic-calorimeter electronics will start in February and continue for about one year, requiring all front-end boards to be dismounted from their crates, modifications to be made to both the boards and the crates, and the modified boards to be reinstalled in their original positions. Maintenance of the ATLAS tile calorimeter and inner detector will take place in parallel, a very important aspect of which will be the search for leaks in the front-end cooling system.


Delicate operation

In August, the first small wheel will be lowered again, allowing the second small wheel to be brought to the surface to make space for the NSW installation foreseen in April 2020. In the same period, all the optical transmission boards of the pixel detector will have to be changed. Following these installations, there will be a long period of commissioning of all the upgraded detectors and the preparation for the installation of the second NSW in the autumn of 2020. At that moment the closing process will start and will last for about three months, including the bake-out of the beam pipe, which is a very delicate and dangerous operation for the pixel detectors of the inner tracker.

A coherent upgrade programme for ATLAS is now fully underway to enable the experiment to fully exploit the physics potential of the LHC in the coming years of high-luminosity operations. Thousands of people around the world in more than 200 institutes are involved, and the technical design reports alone for the upgrade so far number six volumes, each containing several hundred pages. At the end of LS2, ATLAS will be ready to take data in Run 3 with a renewed and better performing detector.

China and Europe bid for post-LHC collider


The discovery of the Higgs boson at the LHC in the summer of 2012 set particle physics on a new course of exploration. While the LHC experiments have determined many of the properties of the Higgs boson with a precision beyond expectations, and will continue to do so until the mid-2030s, physicists have long planned a successor to the LHC that can further explore the Higgs mechanism and other potential sources of new physics. Several proposals are on the table, the most ambitious and scientifically far-reaching involving circular colliders with a circumference of around 100 km.

On 18 December, the Future Circular Collider (FCC) study released its conceptual design report (CDR) for a 100 km collider based at CERN. A month earlier, the Institute of High Energy Physics (IHEP) in China officially presented a CDR for a similar project called the Circular Electron Positron Collider (CEPC). Both studies were launched shortly after the discovery of the Higgs boson (the FCC was a direct response to a request from the 2013 update of the European Strategy for Particle Physics to prepare a post-LHC machine, following preliminary proposals for a circular Higgs factory at CERN in 2011), and both envisage physics programmes extending deep into the 21st century. Documents concerning the FCC and CEPC proposals were also submitted as input to the latest update of the European Strategy for Particle Physics at the end of the year (see Input received for European strategy update).

“If a high-luminosity electron–positron Higgs factory were to drop out of the sky tomorrow, the line of users would be very long, while a very-high-energy hadron collider is a vessel of discovery and will help us study the role of the Higgs boson in taming the high-energy behaviour of longitudinal gauge-boson (WW) scattering,” says theorist Chris Quigg of Fermilab in the US. “It is a very significant validation of the scientific promise opened by a 100 km ring for scientists of different regions to express the same judgment.”

The four-volume FCC report demonstrates the project’s technical feasibility and identifies the physics opportunities offered by the different collider options: a high-luminosity electron–positron collider (FCC-ee) with a centre-of-mass energy ranging from the Z pole (90 GeV) to the tt̅ threshold (365 GeV); a 100 TeV proton–proton collider (FCC-hh); a future lepton–proton collider (FCC-eh); and, finally, a higher-energy hadron collider in the existing tunnel (HE-LHC). The FCC is a global collaboration of more than 140 universities, research institutes and industrial partners. During the past five years, with the support of the European Commission’s Horizon 2020 programme, the FCC collaboration has made significant advances in high-field superconducting magnets, high-efficiency radio-frequency cavities, vacuum systems, large-scale cryogenic refrigeration and other enabling technologies (see A giant leap for physics).

According to the present proposal, an eight-year period for project preparation and administration is required before construction of FCC’s underground areas can begin, potentially allowing the FCC-ee physics programme to start by 2039. The FCC-hh, installed in the same tunnel, could then start operations in the late 2050s. “Though the two machines can be built independently, a combined scenario profits from the extensive reuse of civil engineering and technical systems, and also from the additional time available for high-field magnet breakthroughs,” says deputy leader of the FCC study Frank Zimmermann of CERN. “Timely preparation, early investment and diverse collaborations between researchers and industry are already yielding promising results and confirming the anticipated downward trend in the costs associated with operation.”

Asian ambition

CEPC is a putative 240 GeV circular electron–positron collider, the tunnel for which is foreseen to one day host a super proton–proton collider (SppC) that reaches energies beyond the LHC (CERN Courier June 2018 p21). The two-volume CEPC design report summarises the work accomplished in the past few years by thousands of scientists and engineers in China and abroad. IHEP states that construction of CEPC will begin as soon as 2022 – allowing time to build prototypes of key technical components and establish support for manufacturing – and be completed by 2030. According to the tentative operational plan, CEPC will run for seven years as a Higgs factory, followed by two years as a Z factory and one year at the WW threshold, potentially followed by the installation of the SppC. Although CEPC–SppC is a Chinese-proposed project to be built in China, it has an international advisory committee and more than 20 agreements have been signed with institutes and universities around the world.

“The Beijing Electron Positron Collider will stop running in the 2020s, and China’s government is encouraging Chinese scientists to initiate and work towards large international science projects, so it is possible that CEPC may get a green light soon,” explains deputy leader of the CEPC project Jie Gao of IHEP. “As for the site, many Chinese local governments showed strong interest to host CEPC with the support of the central government.”

Cost is a key factor for both the Chinese and European projects, with the tunnel taking up a large fraction of the expense. CEPC’s price tag is currently $5 billion and FCC-ee is hovering at around twice this value, while, at present, a hadron collider on either continent would cost significantly more due to the cost of the required superconducting wire. Geoffrey Taylor of the University of Melbourne, who chairs the International Committee for Future Accelerators, says that CERN has the major benefit of magnet expertise and experience in high-energy collider development and operation, in addition to already having the multi-billion-dollar accelerator infrastructure required for the project. “The value of this infrastructure at CERN outweighs the cost of the tunnel; on the other hand, the Chinese proposal has a lower cost of tunnelling but lacks the immense infrastructure and expertise necessary for the hadron collider.”

Taylor says that whilst it is essential that CERN maintains its pre-eminent position, having competition from Asia with the potential for major investment would be beneficial for the field as a whole because Western investment in future machines may well remain at current levels. There are also broader cultural factors to be considered, says Quigg: “CERN has earned an exemplary reputation for inclusiveness and openness, which go hand in hand with scientific excellence. Any region, nation, and institution that aims to host a world-leading instrument must strive for a similar environment.”

For theorist Gerard ’t Hooft, who shared the 1999 Nobel Prize in Physics for elucidating the quantum structure of electroweak interactions, the physics target of a 100 km collider is far more important than its location. It is not obvious, in view of our present theoretical understanding, whether or not a 100 km accelerator will be able to enforce a breakthrough, he says. “Most theoreticians were hoping that the LHC might open up a new domain of our science, and this does not seem to be happening. I am just not sure whether things will be any different for a 100 km machine. It would be a shame to give up, but the question of whether spectacular new physical phenomena will be opened up and whether this outweighs the costs, I cannot answer. On the other hand, for us theoretical physicists the new machines will be important even if we can’t impress the public with their results.”

Profound discoveries

Experimentalist Joe Incandela of the University of California, Santa Barbara, who was spokesperson of the CMS experiment at the time of the Higgs-boson discovery, believes that a post-LHC collider is needed for closure – even if it does not yield new discoveries. “While such machines are not guaranteed to yield definitive evidence for new physics, they would nevertheless allow us to largely complete our exploration of the weak scale,” he says. “This is important because it is the scale where our observable universe resides, where we live, and it should be fully charted before the energy frontier is shut down. Completing our study of the weak scale would cap a short but extraordinary 150-year period of profound experimental and theoretical discoveries that would stand for millennia among mankind’s greatest achievements.”

Exploring the spin of top-quark pairs

Fig. 1.

One of the most fascinating particles studied at the LHC is the top quark. The heaviest elementary particle known to date, the top quark lives for less than a trillionth of a trillionth of a second (10⁻²⁴ s) and decays long before it can form hadrons. It is therefore the only quark that can be studied “bare”. This allows physicists to explore its spin, the quark’s intrinsic quantum angular momentum. The spin of the top quark can be inferred from the particles it decays into: a bottom quark and a W boson, which subsequently decays into leptons or quarks.
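For scale, the quoted lifetime follows from the top quark’s large decay width via the uncertainty relation τ = ħ/Γ. The sketch below uses the approximate Standard Model width of about 1.4 GeV, an assumed value for illustration rather than a number from the article:

```python
# Back-of-the-envelope top-quark lifetime from its decay width, tau = hbar/Gamma.
# Gamma_t ~ 1.4 GeV is the approximate Standard Model value (an assumption
# for illustration, not a figure from the article).

HBAR_GEV_S = 6.582e-25   # reduced Planck constant in GeV s
GAMMA_TOP_GEV = 1.4      # approximate top-quark total decay width

tau = HBAR_GEV_S / GAMMA_TOP_GEV
print(f"tau_top ~ {tau:.1e} s")  # ~4.7e-25 s, below the 1e-24 s quoted above
```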

The CMS collaboration has analysed proton–proton collisions in which pairs of top quarks and antiquarks are produced. The Standard Model (SM) makes precise predictions for the frequency at which the spin of the top quark is aligned with (or correlated to) the spin of the top antiquark. A measure of this correlation is thus a highly sensitive test of the SM. If, for example, an exotic heavier Higgs boson were to exist in addition to the one discovered in 2012 at the LHC, it could decay into a pair of top quarks and antiquarks and change their spin correlation significantly. A high-precision measurement of the spin correlation therefore opens a window to explore physics beyond our current knowledge.

The CMS collaboration studied more than one million top-quark–antiquark pairs in dilepton final states recorded in 2016. To probe all the spin and polarisation effects accessible in top-quark–antiquark pair production, nine event quantities sensitive to the spin correlations and three quantities sensitive to the top-quark polarisation were measured. The measured observables were corrected for experimental effects (“unfolded”) and directly compared to precise theoretical predictions.

The observables studied in this analysis show good agreement between data and theory, for example showing no angular dependence for unpolarised top quarks (see figure 1, left). A moderate discrepancy is seen in one of the measured distributions sensitive to spin (the azimuthal opening angle between two leptons), with respect to one of the Monte Carlo simulations (POWHEGv2+PYTHIA). This discrepancy is consistent with an observation made by the ATLAS collaboration last year, although CMS finds that other simulations (“MG5_aMC@NLO”) and calculations that should give similar results agree with the data within the uncertainties.

In summary, good agreement with the SM prediction is observed in the CMS data, except for one particular but commonly used observable, suggesting that further input from theory calculations is probably necessary. The full Run-2 data set already recorded by CMS contains four times more top quarks than were used for this result. This larger sample will allow an even more precise measurement, increasing the chances of a first glimpse of new physics.

Real-time triggering boosts heavy-flavour programme


A report from the LHCb collaboration

Throughout LHC Run 2, LHCb has been flooded by b- and c-hadrons due to the large beauty and charm production cross-sections within the experiment’s acceptance. To cope with this abundant flux of signal particles and to fully exploit them for LHCb’s precision flavour-physics programme, the collaboration has recently implemented a unique real-time analysis strategy to select and classify, with high efficiency, a large number of b- and c-hadron decays. Key components of this strategy are a real-time alignment and calibration of the detector, allowing offline-quality event reconstruction within the software trigger, which runs on a dedicated computing farm. In addition, the collaboration took the novel step of only saving to tape interesting physics objects (for example, tracks, vertices and energy deposits), and discarding the rest of the event. Dubbed “selective persistence”, this substantially reduced the average event size written from the online system without any loss in physics performance, thus permitting a higher trigger rate within the same output data rate (bandwidth). This has allowed the LHCb collaboration to maintain, and even expand, its broad programme throughout Run 2, despite limited computing resources.


The two-stage LHCb software trigger is able to select heavy flavoured hadrons with high purity, leaving event-size reduction as the handle to reduce trigger bandwidth. This is particularly true for the large charm trigger rate, where saving the full raw events would result in a prohibitively high bandwidth. Saving only the physics objects entering the trigger decision reduces the event size by a factor up to 20, allowing larger statistics to be collected at constant bandwidth. Several measurements of charm production and decay properties have been made so far using only this information. The sets of physics objects that must be saved for offline analysis can also be chosen “à la carte”, opening the door for further bandwidth savings on inclusive analyses too.
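In outline, the idea is simply to serialise the trigger-level physics objects instead of the raw detector banks. The sketch below is purely illustrative – the classes, names and sizes are invented and bear no relation to the actual LHCb event model:

```python
# Illustrative-only sketch of "selective persistence": keep the physics
# objects that fired the trigger, drop the rest of the raw event.
# All names and sizes here are invented; this is not LHCb code.
from dataclasses import dataclass, field

@dataclass
class Event:
    raw_banks: bytes                                      # full readout (tens of kB)
    trigger_objects: list = field(default_factory=list)  # tracks, vertices...

def persist_full(event: Event) -> bytes:
    """Traditional persistence: the whole raw event goes to tape."""
    return event.raw_banks

def persist_selective(event: Event) -> list:
    """Selective persistence: only the trigger-level objects go to tape,
    shrinking the event by up to a factor ~20 and allowing a higher
    trigger rate at fixed output bandwidth."""
    return event.trigger_objects
```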

For the LHCb upgrade (see LHCb’s momentous metamorphosis), when the instantaneous luminosity increases by a factor of five, these new techniques will become standard. LHCb expects that more than 70% of the physics programme will use the reduced event format. The full software trigger, combined with real-time alignment and calibration, along with the selective persistence pioneered by LHCb, will likely become the standard for very high-luminosity experiments. The collaboration is therefore working hard to implement these new techniques and ensure that the current quality of physics data can be equalled or surpassed in Run 3.

A giant leap for physics


Particle physics has revolutionised our understanding of the universe. The experimental and theoretical tools developed in the 20th century delivered the Standard Model of particle physics, the particle content of which was completed in 2012 with the discovery of the Higgs boson at the LHC. And, yet, this hugely powerful theory leaves several observations unexplained. In solving mysteries such as the nature of dark matter, the origin of neutrino masses, the dominance of matter over antimatter on cosmological scales, and the low mass of the Higgs boson itself, physicists could open a completely new view of nature. Therefore, it is high time to start planning a new collider that maintains this rich course of exploration throughout the 21st century.

In late 2018 the Future Circular Collider (FCC) collaboration published a conceptual design report (CDR) addressing this need. A similar proposal is also under development in China (CERN Courier June 2018 p21). In more than 1000 pages distributed over four volumes, the FCC CDR covers all aspects of the project, including technologies, detector design, physics goals and civil-engineering considerations. But what changes when we move from the 27 km LHC tunnel to a new 100 km-long one, and what stays the same? The obstacles to new colliders pushing the current energy and intensity frontiers are many, yet the past five years have seen the international FCC study steadily break them down.

Lessons learned

The FCC design report shows that CERN’s existing accelerator chain can serve as the foundation for a 100 km post-LHC machine, while also opening a rich fixed-target programme. The new 100 km infrastructure is indeed enormous, representing a four-fold increase in dimensions compared to the LHC. But, taking history as a guide, it should be possible: this jump in scale is identical to that adopted in the 1980s to move from the Super Proton Synchrotron (SPS) to the Large Electron–Positron collider (LEP), and eventually to the LHC, allowing the completion of the Standard Model. Jumping to larger and more complex machines always comes with new challenges, but these translate precisely into opportunities for young researchers and industry (CERN Courier September 2018 p51).


A 100 km tunnel offers three main collider options. The most straightforward in terms of technological readiness is a luminosity-frontier lepton collider (FCC-ee) that will deliver unprecedented collision rates in a clean environment at specific energies corresponding to the Z pole (91 GeV), the WW threshold (161 GeV), Higgs production (240 GeV) and the top quark–antiquark threshold (350 to 365 GeV). By filling the FCC tunnel with new superconducting magnets twice the strength of the LHC’s (16 T as opposed to 8 T), however, a hadron collider called FCC-hh can be built with a collision energy of 100 TeV – an order of magnitude higher than the LHC. The FCC study, which was formally launched in early 2014, also explores the option of an electron–proton collider (FCC-eh) that could run in parallel with FCC-hh, and a high-energy LHC based on high-field magnets installed in the current LHC tunnel (CERN Courier June 2018 p15).

The cost of future colliders is a major issue, and concerted value-engineering of all aspects from individual components through sustainability to logistics is required. Cost estimates for FCC construction and operation are detailed in the CDR, although the range of collider modes, staging approaches and technology choices make it difficult to place a single figure on each machine. Construction on a site with an existing infrastructure, as offered by CERN, is a major cost advantage in terms of capital investment, sharing of infrastructure and breadth of the overall physics programme. The sequence of FCC-ee and FCC-hh would also resemble the successful staging of LEP and the LHC: a lepton–lepton machine followed by a hadron collider (both for protons and heavy ions). In the case of the FCC, possibly even a future muon collider could then follow as a third stage.

Fig. 1.

FCC-ee is a dream machine for precision measurements, taking the successful LEP scheme into entirely new territory (figure 1). Precise measurements of the properties of the Z, W and Higgs boson and the top quark, together with much improved measurements of other input parameters to the Standard Model such as the electromagnetic and strong coupling constants, would provide sensitivity to new particles with masses in the range 10–70 TeV.

Common lattice

The bulk of FCC-ee will comprise around 8000 normal-conducting low-power and cost-effective twin-aperture dipole magnets, 3000 focusing magnets and between 26 (Z pole) and 161 (tt̅ threshold) four-cavity radio-frequency (RF) cryomodules, to compensate for the energy loss from synchrotron radiation and provide the required accelerating voltage. Currently, two interaction points are planned for high-luminosity FCC-ee operations, though up to four can be accommodated. A common FCC-ee lattice has been designed for all energy stages except for the highest energy tt̅ threshold, where a small rearrangement of the beamline passing through the RF cavities will be needed. The basic cell of the FCC-ee lattice has been chosen for operation at a beam energy of 182.5 GeV and combines four dipole magnets and two main quadrupoles in a 50 m-long section. Moreover, to achieve the required high luminosities, the vertical beta function at the interaction points (called βy*) has to be very small (0.8 mm) at the Z pole, which is 50 times smaller than for LEP but about three times larger than for the SuperKEKB accelerator now being commissioned in Japan. The reduction in βy* is possible because of technological innovations during the past three decades (such as local chromatic correction of the final-quadrupole doublet and use of a crab-waist collision scheme) and thanks to the large size of the ring.

Racetrack coil

Indeed, achieving the unprecedented FCC-ee luminosity of up to 4 × 10³⁶ cm⁻² s⁻¹ (the total for two experiments), while minimising the amount of synchrotron radiation near the detector, called for considerable effort in designing the final-focus system. Combined with a small crossing angle of 30 mrad, the minimum distance from the interaction point to the first quadrupole is 2 m, which is a compromise between beam dynamics and detector constraints. The present optics design has a momentum acceptance of around 2%, which is one of the most critical requirements of the FCC-ee design because it determines the beam lifetime.
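
To get a feel for what such a luminosity means, one can fold it with the peak hadronic cross-section at the Z resonance – roughly 40 nb, a textbook value assumed here for illustration rather than a number from the FCC design report:

```python
# Instantaneous luminosity folded with an assumed Z-peak cross-section.

luminosity = 4e36          # cm^-2 s^-1, total over two experiments (from the text)
sigma_z = 40e-33           # cm^2 (~40 nb for e+e- -> Z -> hadrons; assumed value)

rate_hz = luminosity * sigma_z
live_seconds_per_year = 1e7    # a typical "accelerator year"; assumption

print(f"Z production rate: ~{rate_hz:.1e} Hz")                       # ~1.6e5 Z/s
print(f"Z per year:        ~{rate_hz * live_seconds_per_year:.1e}")  # ~1.6e12
```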

A distinct feature of FCC-ee, in contrast to LEP, is the use of separate beam pipes for the two counter-rotating electron and positron beams, based on energy-efficient dual-aperture main magnets (pictured above). The two separate rings allow operation with a large number of bunches – up to around 16,000 at the Z pole – by avoiding parasitic collisions. This approach also allows for a well-centred orbit all around the ring and a nearly perfect mitigation of the energy “sawtooth” at the highest tt̅ energies. A so-called tapering scheme is foreseen, which will enable the strengths of all the magnets to be scaled according to the local energy of the electron and positron beams, taking into account any differences in the energy loss due to synchrotron radiation. Also distinct from LEP, a top-up injection scheme has been designed for FCC-ee to maximise the integrated luminosity, whereby electrons and positrons are injected into the machine by a full-energy booster to maintain a constant high beam current.

Beating the fourth power

When moving to higher energies, one of the key obstacles for circular lepton colliders is the synchrotron radiation emitted by the accelerated particles: the resulting energy loss per turn grows with the fourth power of a charged particle’s energy and falls only linearly with the bending radius – one reason a much larger ring is needed. Improving energy efficiency is critical for any future big accelerator, and the development of high-efficiency RF power sources, along with robust higher-gradient superconducting cavities, is at the core of the FCC programme. The cavities can be produced, for example, by applying a thin superconducting film to a copper substrate, as is currently being pursued by CERN in collaboration with global partners (CERN Courier May 2018 p26). To achieve low power consumption and guarantee sustainable operation, a high conversion efficiency from wall-plug to RF power is critical. The FCC targets an RF operation efficiency of 65%, profiting from recent innovations in klystron design at CERN.
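
The fourth-power law can be made concrete with the standard expression for the energy radiated per turn by an electron or positron, U₀ ≈ 8.85 × 10⁻⁵ E⁴/ρ (U₀ in GeV, E in GeV, ρ in m). A minimal sketch, with the bending radii taken as illustrative round numbers rather than design values:

```python
# Synchrotron-radiation energy loss per turn for an e+/e- beam:
#   U0 [GeV] ~ 8.85e-5 * E^4 [GeV^4] / rho [m].
# Note the steep fourth-power dependence on beam energy.

C_GAMMA = 8.85e-5  # m GeV^-3, for electrons

def loss_per_turn_gev(energy_gev, rho_m):
    return C_GAMMA * energy_gev**4 / rho_m

# Bending radii below are rough illustrative values, not design figures.
print(f"LEP2   (104.5 GeV, rho ~3.1 km):  {loss_per_turn_gev(104.5, 3100):.1f} GeV/turn")
print(f"FCC-ee (182.5 GeV, rho ~10.7 km): {loss_per_turn_gev(182.5, 10700):.1f} GeV/turn")
```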

For FCC-ee to fulfil its promise of precision electroweak measurements, it is also vital that physicists can accurately determine its centre-of-mass energy, so that the Z mass can be measured with a relative precision of 3 × 10⁻⁵, the total Z width with a precision of 0.1 MeV and the W mass to within 0.5 MeV. A strategy based on the resonant-depolarisation technique, as used at LEP, guarantees precise energy measurements every 15–20 minutes for both the electron and positron beams.
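
Resonant depolarisation works because the spin tune – the number of spin precessions per machine turn – is tied to the beam energy through the electron’s mass and anomalous magnetic moment: ν = E/(440.6486 MeV). The numbers below are an illustrative sketch, not figures from the article:

```python
# Beam-energy calibration by resonant depolarisation: the spin tune nu
# is fixed by fundamental electron properties, nu = E / 0.4406486 GeV,
# so a precise measurement of nu pins down the beam energy.

E_REF_GEV = 0.4406486          # GeV per unit of spin tune

def spin_tune(beam_energy_gev):
    return beam_energy_gev / E_REF_GEV

e_beam = 45.6                  # GeV: half the Z-pole collision energy
print(f"Spin tune at the Z pole: {spin_tune(e_beam):.3f}")    # ~103.5
# An uncertainty of 1e-5 on the spin tune maps to ~4.4 keV on the beam energy:
print(f"dE for d(nu)=1e-5: {1e-5 * E_REF_GEV * 1e6:.1f} keV")
```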

The design of the FCC-ee detectors is also described in the FCC design report. Owing to the beam crossing angle, the detectors’ solenoid field is limited to 2 T, to restrict its impact on the luminosity through the synchrotron radiation emitted within the solenoid field. Two detector concepts have been optimised for the FCC-ee: CLD, a consolidated option based on the detector developed for CLIC, with a silicon tracker and a 3D-imaging highly granular calorimeter; and IDEA, a bolder, possibly more cost-effective design with a short-drift wire chamber and a dual-readout calorimeter. Specific detector-technology choices will, however, be made at a later date.

Following the operation of FCC-ee, the same tunnel could host a 100 TeV proton collider, FCC-hh. A very large circular hadron collider is the only feasible approach to reach significantly higher collision energies than the LHC (13–14 TeV) in the coming decades. A 100 TeV collider would offer access to new particles through direct production in the few-TeV to 30 TeV mass range, far beyond the LHC’s reach. It would also provide much higher rates for phenomena in the sub-TeV mass range and therefore much greater precision on key measurements (CERN Courier May 2017 p34).

Beam screen

Within 25 years of operation, FCC-hh could accumulate an integrated luminosity of around 20 ab⁻¹ in each of the two main experiments. FCC-hh also offers the possibility of colliding heavy ions with protons and heavy ions with heavy ions, adding to its physics opportunities. Reaching the physics goals of such a collider requires a machine availability of about 70%, comparable to what has been routinely achieved with the LHC. Nevertheless, considering the increased machine complexity and the introduction of an additional machine in the injector chain in the FCC baseline scenario, achieving this target availability poses major challenges.

FCC-hh is envisioned to lie adjacent to the LHC and SPS, with two injection insertions so that protons can be injected from either the LHC or the SPS tunnel. In the first case, the beam would be injected from the LHC at an energy of 3.3 TeV (which requires, in addition to new transfer lines and extraction systems, some modifications to allow the LHC to be ramped five times faster than today). In the second case, a new superconducting SPS – from which other experiments would also profit – could provide a beam at 1.3 TeV using fast ramping and cost-effective 6 T superconducting magnets. The FCC design report presents a complete lattice for FCC-hh that is consistent with this layout and the required energy reach. The arc lattice consists of around 500 cells, each 200 m long and made up of two short straight sections and 12 cryo-dipoles, each comprising one 14 m-long dipole and one 0.11 m-long sextupole corrector. Integrated studies of the lattice performance are ongoing and will inform the final choice of magnet design, along with considerations of power efficiency and cost.

Reducing costs

The biggest cost in reaching higher energies is that of the magnets. A primary goal of FCC-hh is to build 16 T superconducting magnets that are a factor of three to five more cost-effective per TeV than those of the LHC. Achieving this goal would impact many accelerator applications outside particle physics, from medical treatments to food-quality monitoring and energy storage and distribution. The FCC study has recently launched a global conductor R&D programme involving collaborators from the US, Russia, Europe, Japan and Korea to improve the performance of niobium–tin conductor and to reduce its cost.

FCC-hh foresees two high-luminosity experiments, for which a key design challenge is to reach the target values of βy* at the collision points while protecting the detectors and the magnets from the collision debris. FCC-hh will produce a pile-up of up to 1000 events per bunch crossing, compared with around 200 at the HL-LHC. Another major challenge for FCC-hh is the beam-dump system that protects the machine components: each of the two rings will have to reliably abort proton beams with stored energies of around 8 GJ, more than an order of magnitude higher than at the HL-LHC. Beam extraction at the FCC has to be fast, and the first prototypes of new kicker-generator and superconducting-septum technologies are now being tested.
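
The 8 GJ figure follows directly from the beam parameters. A back-of-the-envelope check, using bunch numbers in line with published FCC-hh design values (assumed here for illustration; they are not quoted in this article):

```python
# Order-of-magnitude check of the ~8 GJ stored beam energy.
# Bunch parameters below are illustrative assumptions, not from the text.

E_PROTON_TEV = 50          # beam energy per proton (half of 100 TeV collisions)
N_BUNCHES = 10_400         # bunches per beam at 25 ns spacing (assumed)
PROTONS_PER_BUNCH = 1e11   # typical design intensity (assumed)
EV_TO_J = 1.602e-19

stored_j = N_BUNCHES * PROTONS_PER_BUNCH * E_PROTON_TEV * 1e12 * EV_TO_J
print(f"Stored energy per beam: ~{stored_j / 1e9:.1f} GJ")   # ~8 GJ
```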

Synchrotron radiation is also an issue, since FCC-hh will emit about 5 MW of it at 100 TeV, and this calls for a novel beam screen held at a temperature of 50 K (compared with 5–20 K at the LHC). The FCC-hh beam screen, a prototype of which is shown left, enables cost-effective heat removal and maintains the high-quality vacuum while shielding the cold bore from the beam. Finally, cooling the FCC-hh superconducting magnets poses entirely new challenges compared with the LHC. In addition to the higher synchrotron radiation, the cooling system (which, like the LHC’s, will use liquid helium at 1.9 K) will have to cope with more heat dissipated inside the cold magnets as well as from the cold bore itself. About 100 MW of total cooling power will be required to remove the 5 MW of synchrotron-radiation heat (see China and Europe bid for post-LHC collider).
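
The factor of roughly 20 between the heat removed and the cooling power required is essentially thermodynamics. A rough sketch, assuming refrigeration at 50 K rejecting heat at room temperature and a plant running at about 30% of Carnot efficiency (a typical value, assumed here for illustration):

```python
# An ideal (Carnot) refrigerator extracting heat Q at T_cold and rejecting
# it at T_warm needs work W = Q * (T_warm - T_cold) / T_cold; real cryoplants
# achieve only a fraction of this ideal efficiency.

def wall_plug_power_mw(q_mw, t_cold_k, t_warm_k=300.0, frac_of_carnot=0.3):
    carnot_work = q_mw * (t_warm_k - t_cold_k) / t_cold_k
    return carnot_work / frac_of_carnot

# Heat intercepted on the 50 K beam screen:
print(f"~{wall_plug_power_mw(5, 50):.0f} MW")   # ~83 MW, same ballpark as 100 MW
```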

Coordinating the future

For almost 90 years, progress in particle physics has gone hand-in-hand with progress in accelerators. Today, capitalising on the great success of the LHC, the field faces pivotal decisions about what collider to build next. Advancing the enabling technologies for a future circular collider can only be done via a coordinated international effort between universities, research centres and industry. It also calls for smart solutions to ensure reliability and sustainability. The results of these efforts are documented in the four volumes of the FCC conceptual design report, which presents a clear route to a post-LHC machine and also serves as an input to the update of the European Strategy for Particle Physics.

The FCC offers great potential for curiosity-driven research with unimaginable consequences. Discoveries of new particles and forces not only alter our perspective of humankind’s position in the universe, but also, either directly or via the technology that made them possible, lead to radical applications that improve our quality of life. In the present age of political turbulence and rapid change, we are proposing an ambitious future accelerator complex to push the boundaries of knowledge and to optimally prepare future generations for the challenges they are sure to face. 

Report summarises dark-sector exploration

Fig. 1.


In our current understanding of the energy content of the universe, there are two major unknowns: the nature of a non-luminous component of matter (dark matter) and the origin of the accelerating expansion of the universe (dark energy). Both are well supported by astrophysical and cosmological measurements, yet the identity of neither is known. This has motivated a myriad of theoretical models, most of which assume dark matter to be a weakly interacting massive particle (WIMP).

WIMPs may be produced in high-energy proton collisions at the LHC, and are therefore intensively searched for by the LHC experiments. Since dark matter is not expected to interact with the detectors, its production leaves a signature of missing transverse momentum (ETmiss). It can be detected if the dark-matter particles recoil against a visible particle X, which could be a quark or gluon, a photon, or a W, Z or Higgs boson. These are commonly known as X + ETmiss signatures. To interpret these searches, a variety of simplified models are used that describe dark-matter production kinematics with a minimal number of free parameters. These models introduce new spin-0 or spin-1 mediator particles that propagate the interaction between the visible and the dark sectors. Because the mediators must couple to Standard Model (SM) particles in order to be produced in the proton–proton collisions, the mediators can also be directly searched for through their decays to jets, top-quark pairs and potentially even leptons. For certain model parameters, these direct searches can be more sensitive than the X + ETmiss ones.

However, simplified models are not complete theories in the way that, for example, supersymmetry is. Recent theoretical work has therefore focused on developing more complete, renormalisable models of dark matter, such as two-Higgs-doublet models (2HDM) extended with an additional mediator particle. These models introduce a larger number of free parameters, allowing for a richer phenomenology.

Fig. 2.

Similarly, for dark energy, effective field theory implementations may introduce a stable and non-interacting scalar field that universally couples to matter. This also leads to a characteristic ETmiss signature at the LHC.

ATLAS has recently released a summary gathering the results of more than 20 experimental searches for dark matter and a first collider search for dark energy. The wide range of analyses gives good coverage of the different dark-matter models studied. For new models, such as the 2HDM with an additional pseudoscalar mediator, multiple regions of the parameter space are explored to probe the interplay between the masses, mixing angles and vacuum expectation values. For the 2HDM with an additional vector mediator, the resulting exclusion limits are further improved by combining the ETmiss + Higgs analyses in which the Higgs boson decays to a pair of photons or of b-quarks. For the dark-energy models, two operators in the lowest-order effective Lagrangian allow for interactions between SM particles and the new scalar particles. These operators are proportional to the mass or the momenta of the SM particles, making the searches most sensitive in the ETmiss + top–antitop and ETmiss + jet final states.

To date, no significant excess over the SM backgrounds has been observed in any of the ATLAS searches for dark matter or dark energy. Limits on the simplified models are set in the plane of mediator versus dark-matter mass (figure 1), and can be compared with those obtained by direct-detection experiments. For the 2HDM with a pseudoscalar mediator, limits are placed in the plane of the heavy pseudoscalar versus mediator mass, highlighting the complementarity of different channels in different regions of the parameter space (figure 2). Finally, collider limits are also set on the scalar dark-energy model (see Colliders join the hunt for dark energy); for the models studied, these improve on the limits obtained from astronomical observations and laboratory measurements by several orders of magnitude. With the full dataset of LHC collisions collected by ATLAS during Run 2, the sensitivity to these models will continue to improve.
