It is 20 years since the discovery that the expansion of the universe is accelerating, yet physicists still know precious little about the underlying cause. In a classical universe with no quantum effects, the cosmic acceleration can be explained by a constant that appears in Einstein’s equations of general relativity, albeit one with a vanishingly small value. But clearly our universe obeys quantum mechanics, and the ability of particles to fluctuate in and out of existence at all points in space leads to a prediction for Einstein’s cosmological constant that is 120 orders of magnitude larger than observed. “It implies that at least one, and likely both, of general relativity and quantum mechanics must be fundamentally modified,” says Clare Burrage, a theorist at the University of Nottingham in the UK.
With no clear alternative theory available, all attempts to explain the cosmic acceleration introduce a new entity called dark energy (DE) that makes up 70% of the total mass-energy content of the universe. It is not clear whether DE is due to a new scalar particle or a modification of gravity, or whether it is constant or dynamic. It’s not even clear whether it interacts with other fundamental particles or not, says Burrage. Since DE affects the expansion of space–time, however, its effects are imprinted on astronomical observables such as the cosmic microwave background and the growth rate of galaxies, and the main approach to detecting DE involves looking for possible deviations from general relativity on cosmological scales.
Unique environment
Collider experiments offer a unique environment in which to search for the direct production of DE particles, since they are sensitive to a multitude of signatures and therefore to a wider array of possible DE interactions with matter. Like other signals of new physics, DE (if accessible at small scales) could manifest itself in high-energy particle collisions either through direct production or via modifications of electroweak observables induced by virtual DE particles.
Last year, the ATLAS collaboration at the LHC carried out a first collider search for light scalar particles that could contribute to the accelerating expansion of the universe. The results demonstrate the ability of collider experiments to access new regions of parameter space and provide complementary information to cosmological probes.
Unlike dark matter, for which many new-physics models exist to guide searches at collider experiments, few frameworks describe the interaction between DE and Standard Model (SM) particles. Theorists have nevertheless made progress by allowing the properties of the prospective DE particle, and the strength of the force it transmits, to vary with the environment. This effective-field-theory approach integrates out the unknown microscopic dynamics of the DE interactions.
The new ATLAS search was motivated by a 2016 model by Philippe Brax of the Université Paris-Saclay, Burrage, Christoph Englert of the University of Glasgow, and Michael Spannowsky of Durham University. The model provides the most general framework for describing DE theories with a scalar field and contains as subsets many well-known specific DE models – such as quintessence, galileon, chameleon and symmetron. It extends the SM Lagrangian with a set of higher-dimensional operators encoding the different couplings between DE and SM particles. These operators are suppressed by a characteristic energy scale, and the goal of experiments is to pinpoint this energy for the different DE–SM couplings. Two representative operators predict that DE couples preferentially either to very massive particles like the top quark (“conformal” coupling) or to final states with high momentum transfers, such as those involving high-energy jets (“disformal” coupling).
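As an illustration of this structure, the disformal coupling is often written as a dimension-eight operator in which derivatives of the DE scalar φ couple to the SM energy–momentum tensor (the notation below is a generic textbook form, not necessarily the paper’s exact conventions):

$$\mathcal{L} \;\supset\; \frac{\partial_{\mu}\phi\,\partial_{\nu}\phi}{M^{4}}\,T^{\mu\nu},$$

where $M$ is the suppression scale that experiments aim to constrain. Because $T^{\mu\nu}$ grows with the energy of the process, final states with large momentum transfer – such as high-energy jets recoiling against missing energy – probe this operator most effectively.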
Signatures
“In a big class of these operators the DE particle cannot decay inside the detector, therefore leaving a missing-energy signature,” explains Spyridon Argyropoulos of the University of Iowa, a member of the ATLAS team that carried out the analysis. “Two possible signatures for the detection of DE are therefore the production of a pair of top–antitop quarks or the production of high-energy jets, associated with large missing energy. Such signatures are similar to the ones expected from the production of supersymmetric top quarks (“stops”), where the missing energy would be due to the neutralinos from the stop decays, or from the production of SM particles in association with dark-matter particles, which also leave a missing-energy signature in the detector.”
The ATLAS analysis, based on 13 TeV LHC data corresponding to an integrated luminosity of 36.1 fb⁻¹, re-interprets the results of recent ATLAS searches for stop quarks and for dark matter produced in association with jets. No significant excess over the predicted background was observed, yielding the most stringent constraints to date on the suppression scale of conformal and disformal couplings of DE to normal matter in the context of an effective field theory of DE. The results show that the characteristic energy scale must be higher than approximately 300 GeV for the conformal coupling and above 1.2 TeV for the disformal coupling.
The search for DE at colliders is only at the beginning, says Argyropoulos. “The limits on the disformal coupling are several orders of magnitude higher than the limits obtained from other laboratory experiments and cosmological probes, proving that colliders can provide crucial information for understanding the nature of DE. More experimental signatures and more types of coupling between DE and normal matter have to be explored, and more optimal search strategies could be developed.”
With this pioneering interpretation of a collider search in terms of dark-energy models, ATLAS has become the first experiment to probe all forms of matter in the observable universe, opening a new avenue of research at the interface of particle physics and cosmology. A complementary laboratory measurement is also being pursued by CERN’s CAST experiment, which studies a particular incarnation of DE (chameleon) produced via interactions of DE with photons.
But DE is not going to give up its secrets easily, cautions theoretical cosmologist Dragan Huterer at the University of Michigan in the US. “Dark energy is normally considered a very large-scale phenomenon, but you may justifiably ask how the study of small systems in a collider can say anything about DE. Perhaps it can, but in a fairly model-dependent way. If ATLAS finds a signal that departs from the SM prediction it would be very exciting. But linking it firmly to DE would require follow-up work and measurements – all of which would be very exciting to see happen.”
On 1 January a new virtual centre devoted to some of the most precise measurements in science was established by researchers in Germany and Japan. The Centre for Time, Constants and Fundamental Symmetries will offer access to ultra-sensitive equipment to allow experimental groups in atomic and nuclear physics, antimatter research, quantum optics and metrology to collaborate closely on fundamental measurements. Three partners – the Max Planck Institutes for nuclear physics (MPI-K) and for quantum optics (MPQ), the National Metrology Institute of Germany (PTB) and RIKEN in Japan – agreed to fund the centre in equal amounts with a total of around €7.5 million for five years, and scientific activities will be coordinated at MPI-K.
A major physics target of the German–Japanese centre is to investigate whether the fundamental constants really are constant or change in time by tiny amounts. Another goal concerns possible subtle differences between the properties of matter and antimatter – violations of C, P and T invariance – which have not yet shown up, even though such differences must exist at some level, otherwise the universe would consist of almost pure radiation. Closely related to these tests of fundamental symmetries is the search for physics beyond the Standard Model. The broad research portfolio also includes the development of novel optical clocks based on atoms, nuclei and highly charged ions.
“It is fascinating that nowadays manageable laboratory experiments make it possible to investigate such fundamental questions in physics and cosmology by means of their high precision”, says Klaus Blaum of MPI-K.
Stringent tests of fundamental interactions and symmetries using the protons and antiprotons available at the BASE experiment at CERN are another key aspect of the German–Japanese initiative, explains Stefan Ulmer, co-director of the centre, chief scientist at RIKEN, and spokesperson of the BASE experiment: “This centre will strongly promote fundamental physics in general, in addition to the research goals of BASE. Given this support we are developing new equipment to improve both the precision of the proton-to-antiproton charge-to-mass ratio as well as the proton/antiproton magnetic moment comparison by factors of 10 to 100.”
To reach these goals, the researchers intend to develop novel experimental techniques – such as transportable antiproton traps, sympathetic cooling of antiprotons by laser-cooled beryllium ions, and optical clocks based on highly charged ions and thorium nuclei – which will outperform contemporary methods and enable measurements at even shorter time scales and with improved sensitivity. “The combined precision-physics expertise of the individual groups with their complementary approaches and different methods using traps and lasers has the potential for substantial progress,” says Ulmer. “The low-energy, ultra-high-precision investigations for physics beyond the Standard Model will complement studies in particle physics.”
The electromagnetic field of the highly charged lead ions in the LHC beams provides a very intense flux of high-energy quasi-real photons that can be used to probe the structure of the proton in proton–lead collisions. The exclusive photoproduction of a J/ψ vector meson is of special interest because it samples the gluon density in the proton. Previous ALICE measurements have shown that this process can be measured over a wide range of centre-of-mass energies of the photon–proton system (Wγp), enlarging the kinematic reach by more than a factor of two with respect to measurements performed at the former HERA collider.
Recently, the ALICE collaboration measured the exclusive photoproduction of J/ψ mesons off protons in proton–lead collisions at a centre-of-mass energy of 5.02 TeV at the LHC, using two new configurations. In both cases the J/ψ meson is reconstructed from its decay into a lepton pair. In the first, the leptons are measured at mid-rapidity using ALICE’s central-barrel detectors, whose excellent particle-identification capabilities allow both the e⁺e⁻ and μ⁺μ⁻ channels to be measured. The second configuration combines a muon measured with the central-barrel detectors with a second muon measured by the muon spectrometer located at forward rapidity. This use of the detector significantly extends the coverage of the J/ψ measurement.
The energy of the photon–proton collision, Wγp, is determined by the rapidity (a function of the polar angle) of the produced J/ψ with respect to the beam axis. Since the directions of the proton and lead beams were inverted halfway through the data-taking period, ALICE covers both backward and forward rapidities despite having a single-arm muon spectrometer.
These two configurations, plus the one used previously where both muons were measured in the muon spectrometer, allow ALICE to cover – in a continuous way – the range in Wγp from 20 to 700 GeV. The typical momentum at which the structure of the proton is probed is conventionally given as a fraction of the beam momentum, x, and the new measurements extend over three orders of magnitude in x, from 2 × 10⁻² to 2 × 10⁻⁵. The measured cross section for this process as a function of Wγp is shown in figure 1 and compared with previous measurements and with models based on different assumptions, such as the validity of DGLAP evolution (JMRT), the vector-dominance model (STARlight), next-to-leading-order BFKL, the colour-glass condensate (CGC) and the inclusion of fluctuating sub-nucleonic degrees of freedom (CCT). The last two models include the phenomenon of saturation, where nonlinear effects tame the growth of the gluon density in the proton at small x.
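To connect these numbers, the standard kinematics of exclusive photoproduction relate the photon–proton energy to the rapidity y of the J/ψ and to the probed gluon momentum fraction. In a simplified notation that neglects small corrections (and fixes one choice of beam direction), one has

$$W_{\gamma p}^{2} \simeq 2\,E_{p}\,M_{J/\psi}\,e^{-y}, \qquad x \simeq \frac{M_{J/\psi}^{2}}{W_{\gamma p}^{2}},$$

so with $M_{J/\psi} \approx 3.1$ GeV the quoted reach of $W_{\gamma p} = 20$–700 GeV indeed maps onto $x \approx 2 \times 10^{-2}$ down to $2 \times 10^{-5}$.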
The new measurements are compatible with previous HERA data where available, and all models agree reasonably well with the data. Nonetheless, it is seen that at the largest energies, or equivalently the smallest x, some of the models predict a slower growth of the cross section with energy. This is being studied by ALICE with data taken in 2016 in p–Pb collisions at a centre-of-mass energy of 8.16 TeV, allowing exploration of the Wγp energy range up to 1.5 TeV, potentially shedding new light on the question of gluon saturation.
Strategy is a basis for prioritising resources in the pursuit of important goals. No strategy would be needed if enough resources were available – we would simply do whatever appeared necessary.
Elementary particle physics generally requires large and expensive facilities, often on an international scale, which take a long time to develop and are heavy consumers of resources during operations. For this reason, in 2005 the CERN Council initiated a European Strategy for Particle Physics (ESPP), resulting in a document being adopted the following year. The strategy was updated in 2013 and the community is now working towards a second ESPP update (CERN Courier April 2018 p7).
The making of the ESPP has three elements: bottom-up activities driven by the scientific community through document submission and an open symposium (the latter to be held in Spain in May 2019); strategy drafting (to take place in Germany in January 2020) by scientists, who are mostly appointed by CERN member states; and the final discussion and approval by the CERN Council. Therefore, the final product should be an amalgamation of the wishes of the community and the political and financial constraints defined by state authorities. Experience of the previous ESPP update suggests that this is entirely achievable, but not without effort and compromise.
Of the four high-priority items in the current ESPP, adopted in 2013, three are well under way: the full exploitation of the LHC via a luminosity upgrade; R&D and design studies for a future energy-frontier machine at CERN; and the establishment of a platform at CERN for physicists to develop neutrino detectors for experiments around the world. The remaining item, relating to the initiative of the Japanese particle-physics community to host an international linear collider in Japan, has not made much progress.
In physics, discussions about strategy usually start with a principled statement: “science should drive the strategy”. This is of course correct, but unfortunately not always sufficient in real life, since physics considerations alone rarely provide a practical solution. In this context, it is worth recalling the discussion about long-baseline neutrino experiments during the previous strategy exercises.
Optimal outcome
At the time of the first ESPP, almost 15 years ago, so little was known about the neutrino mass and mixing parameters that several ambitious facilities were discussed to cover the necessary parameter space. Some resources were directed into R&D, but they were probably too little and not well prioritised. In the meantime, it became clear that a state-of-the-art neutrino beam based on conventional technology would be sufficient for the next necessary step: measuring the neutrino CP-violation parameter and the mass hierarchy. What should be done was therefore clear from a scientific point of view, but there simply were not enough resources in Europe to construct a long-baseline neutrino experiment with a high-performance beamline while fully exploiting the LHC at the same time. The optimal outcome was found by considering global opportunities, and this was one of the key ingredients that drove the strategy.
The challenge facing the community now in updating the ESPP is to steer the field into the mid-2020s and beyond. As such, discussions about the various ideas for the next big machine at CERN will be an important focus, but numerous other projects, including proposals for non-collider experiments, will be jostling for attention. Many brilliant people are working in our field, with many excellent ideas, each with different strengths and weaknesses. The real issue for the strategy update is how we can optimise resources across time and location, possibly exploiting synergies with other scientific fields.
The intention of the strategy is to achieve a scientific goal. We may already disagree about what this goal is, since research is conducted by people with different visions, tastes and habits. But let us at least agree, for now, that it is “to understand the most fundamental laws of nature”. Depending on the timescales, the relative importance of elements in the decision-making might change, and factors beyond Europe cannot be neglected. A strategy that cannot be implemented is not useful for anyone, and the key is to strike a balance among many elements. Lastly, we should not forget that the most exciting scenario for the ESPP update would be the appearance of an unexpected result – then there would be a real paradigm shift in particle physics.
The features in this first issue of 2019 bring you all the shutdown news from the seven LHC experiments, and what to expect when the souped-up detectors come back online in 2021.
During the next two years of long-shutdown two (LS2), the LHC and its injectors will be tuned up for high-luminosity operations: Linac2 will leave the floor to Linac4 to enable more intense beams; the Proton Synchrotron Booster will be equipped with completely new injection and acceleration systems; and the Super Proton Synchrotron will have new radio-frequency power. The LHC is also being tested for operation at its design energy of 14 TeV, while, in the background, civil-engineering works for the high-luminosity upgrade (HL-LHC), due to enter service in 2026, are proceeding apace.
The past three years of Run 2 at a proton–proton collision energy of 13 TeV have seen the LHC achieve record peak and integrated luminosities, forcing the detectors to operate at their limits. Now, the four main experiments ALICE, ATLAS, CMS and LHCb, and the three smaller experiments LHCf, MoEDAL and TOTEM, are gearing up for the extreme conditions of Run 3 and beyond.
At the limits
Since the beginning of the LHC programme it was clear that, owing to radiation damage, the original detectors would last for approximately a decade. That time has now come. Improvements, repairs and upgrades have been taking place in the LHC detectors throughout the past decade, but the most significant activities will take place during LS2 (and LS3, beginning in 2024), capitalising on technology advances and the ingenuity of thousands of people over a period of several years. Combined, the technical design reports for the LHC experiment upgrades number some 20 volumes, each containing hundreds of pages.
For LHCb, the term “upgrade” hardly does it justice, since large sections of the detector are to be completely replaced and a new trigger system is to be installed (LHCb’s momentous metamorphosis). ALICE too is undergoing major interventions to its inner detectors during LS2 (ALICE revitalised), and both collaborations are installing new data centres to deal with the higher data rate from future LHC runs. ATLAS and CMS are upgrading numerous aspects of their detectors while at the same time preparing for major installations during LS3 for HL-LHC operations (CMS has high luminosity in sight and ATLAS upgrades in LS2). At the HL-LHC, one year of collisions is equivalent to 10 years of LHC operations in terms of radiation damage. Even more challenging, HL-LHC will deliver a mean event pileup of up to 200 interactions per beam crossing – 10 times greater than today – requiring totally new trigger and other capabilities.
Three smaller experiments at the LHC are also taking advantage of LS2. TOTEM, which comprises two detectors located 220 m either side of CMS to measure elastic proton–proton collisions (see “Forging ahead” image), aims to perform total-cross-section measurements at maximal LHC energies. For this, the collaboration is building a new scintillator detector to be integrated in CMS, in addition to service work on its silicon-strip and spectrometer detectors.
Another “forward” experiment called LHCf, made up of two detectors 140 m either side of ATLAS, uses forward particles produced by the LHC collisions to improve our knowledge about how cosmic-ray showers develop in Earth’s atmosphere. Currently, the LHCf detectors are being prepared for 14 TeV proton–proton operations, higher luminosities and also for the possibility of colliding protons with light nuclei such as oxygen, requiring a completely renewed data-acquisition system. Finally, physicists at MoEDAL, a detector deployed around the same intersection region as LHCb to look for magnetic monopoles and other signs of new physics, are preparing a request to take data during Run 3. For this, among other improvements, a new sub-detector called MAPP will be installed to extend MoEDAL’s physics reach to long-lived and fractionally charged particles.
The seven LHC experiments are also using LS2 to extend and deepen their analyses of the Run-2 data. Depending on what lies there, the collaborations could have more than just shiny new detectors on their hands by the time they come back online in the spring of 2021.
In 2007, while studying archival data from the Parkes radio telescope in Australia, Duncan Lorimer and his student David Narkevic of West Virginia University in the US found a short, bright burst of radio waves. It turned out to be the first observation of a fast radio burst (FRB), and further studies revealed additional events in the Parkes data dating from 2001. The origin of several of these bursts, which were slightly different in nature, was later traced back to the microwave oven in the Parkes Observatory visitors centre. After discarding these events, however, a handful of real FRBs in the 2001 data remained, while more FRBs were being found in data from other radio telescopes.
The cause of FRBs has puzzled astronomers for more than a decade. But dedicated searches under way at the Canadian Hydrogen Intensity Mapping Experiment (CHIME) and the Australian Square Kilometre Array Pathfinder (ASKAP), among others, are intensifying the hunt for their origin. Recently, while still in its pre-commissioning phase, CHIME detected no fewer than 13 new FRBs – one of them classed as a “repeater” on account of its repeated bursts – setting the field up for an exciting period of discovery.
Dispersion
All FRBs have one thing in common: they last a few milliseconds and have a relatively broad spectrum in which the radio waves with the highest frequencies arrive first, followed by those with lower frequencies. This dispersion is characteristic of radio waves travelling through a plasma, in which free electrons delay lower frequencies more than higher ones. Measuring the amount of dispersion thus indicates how many free electrons the pulse has traversed, and therefore the distance it has travelled. In the case of FRBs, the measured delay cannot be explained by signals travelling within the Milky Way alone, strongly indicating an extragalactic origin.
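As a rough illustration of how dispersion encodes distance, the sketch below implements the standard cold-plasma delay formula; the code and the CHIME-like 0.4–0.8 GHz band are my own example, not taken from the searches themselves.

```python
# Minimal sketch of the cold-plasma dispersion delay used in FRB searches.
# Standard approximation: dt ≈ 4.15 ms × DM × (f_lo⁻² − f_hi⁻²), with the
# dispersion measure DM in pc cm⁻³ and frequencies in GHz. DM counts the
# free electrons along the line of sight, so a larger delay implies a
# longer path through ionised gas.

def dispersion_delay_ms(dm: float, f_lo_ghz: float, f_hi_ghz: float) -> float:
    """Delay (ms) of the low-frequency edge relative to the high-frequency edge."""
    return 4.15 * dm * (f_lo_ghz ** -2 - f_hi_ghz ** -2)

# Example: DM = 500 pc cm⁻³, far above any plausible Milky Way contribution
# on most sight lines, observed across a 0.4-0.8 GHz band (similar to CHIME):
delay_s = dispersion_delay_ms(500.0, 0.4, 0.8) / 1000.0
print(f"low-frequency edge arrives {delay_s:.1f} s late")  # ≈ 9.7 s
```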
The size of the emission region responsible for FRBs can be deduced from their duration. The most likely sources are compact km-sized objects such as neutron stars or black holes. Apart from their extragalactic origin and their size, not much more is known about the 70 or so FRBs that have been detected so far. Theories about their origin range from the mundane, such as pulsar or black-hole emission, to the spectacular – such as neutron stars travelling through asteroid belts or FRBs being messages from extraterrestrials.
One particular FRB, however, was localised precisely enough to coincide with a faint, previously unknown radio source within a dwarf galaxy, showing clearly that it was extragalactic. This FRB could be localised because it was one of several bursts from the same source, allowing more detailed studies and long-term observations. For a while it was the only FRB known to repeat, earning it the title “The Repeater”, but the recent CHIME detection has now doubled the number of such sources. The detection of repeating FRBs could be seen as evidence that FRBs are not the result of a cataclysmic event, since the source must survive in order to repeat. Another interpretation, however, is that there are two classes of FRBs: those that repeat and those that come from cataclysmic events.
Until recently, theories on the origin of FRBs outnumbered the detected FRBs themselves, showing how difficult it is to constrain models with the available data. The experience of a similar field – gamma-ray-burst (GRB) research, which aims to explain bright flashes of gamma rays discovered during the 1960s – suggests that an increase in the number of detections, together with searches for counterparts at other wavelengths or in gravitational waves, will enable quick progress. As the number of detected GRBs grew into the thousands, the number of theories (which initially also included extraterrestrial origins) decreased rapidly to a handful. The start of data-taking by ASKAP and the increasing sensitivity of CHIME mean we can look forward to an exponential growth in the number of detected FRBs – and an exponential decrease in the number of theories on their origin.
In Lost in Math, theoretical physicist Sabine Hossenfelder embarks on a soul-searching journey across contemporary theoretical particle physics. She travels to various countries to interview some of the most influential figures of the field (but also some “outcasts”) to challenge them, and be challenged, about the role of beauty in the investigation of nature’s laws.
Colliding head-on with the lore of the field and with practically all popular-science literature, Hossenfelder argues that beauty is overrated. Some leading scientists say that their favourite theories are too beautiful not to be true, or possess such a rich mathematical structure that it would be a pity if nature did not abide by those rules. Hossenfelder retorts that physics is not mathematics, and names examples of extremely beautiful and rich maths that does not describe the world. She reminds us that physics is based on data. So, she wonders, what can be done when an entire field is starved of experimental breakthroughs?
Confirmation bias
Nobel laureate Steven Weinberg, interviewed for this book, argues that experts call “beauty” the experience-based feeling that a theory is on a good track. Hossenfelder is sceptical that this attitude really comes from experience. Maybe most of the people who chose to work in this field were attracted to it in the first place because they like mathematics and symmetries, and would not have worked in the field otherwise. We may be victims of confirmation bias: we choose to believe that aesthetic sense leads to correct theories; hence, we readily recall all the correct theories that possess some quality of beauty, while paying less attention to the counterexamples. Dirac and Einstein, among many others, vocally affirmed beauty as a guiding principle and achieved striking successes by following its guidance; however, as Hossenfelder points out, they also had several spectacular failures that are less well known. Moreover, a theoretical sense of beauty is far from universal. Copernicus made a breakthrough because he sought a form of beauty that differed from those of his predecessors, making him think out of the box; and by today’s taste, Kepler’s solar system of Platonic solids feels silly and repulsive.
Hossenfelder devotes attention to a concept particularly relevant to contemporary particle physics: the “naturalness principle” (see Understanding naturalness). Take the case of the Higgs mass: the textbook argument is that quantum corrections go wild for the Higgs boson, making any mass value between zero and the Planck mass a priori possible; yet its value happens to be closer to zero than to the Planck mass by a factor of 10¹⁷. Hence, most particle physicists argue that there must be an almost perfect cancellation of corrections, a puzzle known as the “hierarchy problem”. Hossenfelder points out that implicit in this simple argument is the assumption that all values between zero and the Planck mass are equally likely. “Why,” she asks, “are we assuming a flat probability, instead of a logarithmic (or whatever other function) one?” In general, we say that a new theory is necessary when a parameter value is unlikely, but she argues that we can estimate the likelihood of that value only when we have a prior probability function – for which we would need a new theory.
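Her point about priors is easy to make quantitative. The sketch below is my own illustration (not from the book), and the 1 MeV lower cutoff needed by the logarithmic prior is an arbitrary assumption: the same observed Higgs mass looks wildly unlikely under a flat prior but unremarkable under a log-uniform one.

```python
# How "unlikely" is a 125 GeV Higgs mass between 0 and the Planck mass?
# The answer depends entirely on the assumed prior distribution.
import math

M_PLANCK = 1.22e19  # GeV
M_HIGGS = 125.0     # GeV
M_MIN = 1e-3        # GeV; arbitrary lower cutoff required by the log prior

# Flat prior: probability mass below 125 GeV is just the ratio of ranges.
p_flat = M_HIGGS / M_PLANCK

# Log-uniform prior: every decade of mass is equally likely a priori.
p_log = math.log(M_HIGGS / M_MIN) / math.log(M_PLANCK / M_MIN)

print(f"flat prior:        P(m < 125 GeV) ≈ {p_flat:.1e}")  # ≈ 1e-17
print(f"log-uniform prior: P(m < 125 GeV) ≈ {p_log:.2f}")   # ≈ 0.23
```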
New angles
Hossenfelder illustrates various popular solutions to this naturalness problem, which in essence all try to make small values of the Higgs mass much more likely than large ones. She also discusses string theory, as well as multiverse hypotheses and anthropic solutions, exposing their shortcomings. Some of her criticisms may recall Lee Smolin’s The Trouble with Physics and Peter Woit’s Not Even Wrong, but Hossenfelder brings new angles to the discussion.
This book comes out at a time when more and more specialists are questioning the validity of naturalness-inspired predictions. Many popular theories inspired by the naturalness problem share an empirical consequence: either they manifest themselves soon in existing experiments, or they definitively fail to solve the problems they were invented for.
Hossenfelder describes in derogatory terms the typical argumentative structure of contemporary theory papers that predict new particles “just around the corner” while explaining why we have not observed them yet. She finds the same attitude in what she calls the “di-photon diarrhoea”: the prolific reaction of the same theoretical community to a statistical fluctuation at a mass of around 750 GeV in the earliest data from the LHC’s Run 2.
The author explains complex matters at the cutting edge of theoretical physics research in a clear way, with original metaphors and appropriate illustrations. With this book, Hossenfelder not only reaches out to the public, but also invites it to join a discourse that she is clearly passionate about. The intended readership ranges from fellow scientists to the layperson, also including university administrators and science policy makers, as is made explicit in an appendix devoted to practical suggestions for various categories of readers.
While this book will mostly attract attention for its pars destruens, it also contains a pars construens. Hossenfelder argues for looking away from the lamppost, both theoretically and experimentally. Having painted naturalness arguments as a red herring that draws attention away from the real issues, and acknowledging throughout the book that when data offer no guidance there is no choice but to follow some non-empirical assessment criteria, she advocates criteria that deserve greater prominence, such as the internal consistency of the theoretical foundations of particle physics.
As a non-theorist my opinion carries little weight, but my gut feeling is that this direction of investigation, although undeniably crucial, is not comparably “fertile”. On the other hand, Hossenfelder makes it clear that she sees nothing scientific in this kind of fertility, and even argues that bibliometric obsessions played a big role in creating what she depicts as a gigantic bibliographical bubble. She also advises learning how to recognise and mitigate biases, and building a culture of criticism both in the scientific arena and in response to policies that create short-term incentives, discouraging the exploration of less conventional ideas. Regardless of what one may think about the merits of naturalness or other non-empirical criteria, I believe these suggestions are uncontroversially worthy of consideration.
Andrea Giammanco, UCLouvain, Louvain-la-Neuve, Belgium.
Amaldi’s last letter to Fermi: a monologue Theatre, CERN Globe
11 September 2018
On the occasion of the 110th anniversary of the birth of Italian physicist Edoardo Amaldi (1908–1989), CERN hosted a new production titled “Amaldi l’italiano, centodieci e lode!” The title is a play on words referring to the top score at an Italian university (“110 cum laude”), and the production is a well-deserved recognition of a self-confessed “ideas shaker” who was one of the pioneers in the establishment of CERN, the European Space Agency (ESA) and the Italian National Institute for Nuclear Physics (INFN).
The nostalgic monologue opens with Amaldi, played by Corrado Calda, sitting at his desk and writing a letter to his mentor, Enrico Fermi. Set on the last day of Amaldi’s life, the play retraces some of his scientific, personal and historical memories, which pass by while he writes.
It begins in 1938 when Amaldi is part of an enthusiastic group of young scientists, led by Fermi and nicknamed “Via Panisperna boys” (boys from Panisperna Road, the location of the Physics Institute of the University of Rome). Their discoveries on slow neutrons led to Fermi’s Nobel Prize in Physics that year.
Then, suddenly, World War II begins and everything falls apart. Amaldi writes about his frustrations to his teacher, who had passed away but remains close to him. “While physicists were looking for physical laws, Europe sank into racial laws,” he despairs. Indeed, most of his colleagues and friends, including Fermi, whose wife was Jewish, moved to the US. Left alone in Italy, Amaldi decided to stop his studies on fission and focus on cosmic rays – research that required fewer resources and had no military applications.
Out of the ruins
After World War II, while in Italy there was barely enough money to buy food, the US was building state-of-the-art particle-physics detectors. Amaldi describes his strong temptation to cross the ocean and re-join Fermi. However, he decided to stay in war-torn Europe and help European science grow out of the ruins. He worked to achieve his dream of “a laboratory independent from military organisations, where scientists from all over the world could feel at home” – today known as CERN. He was secretary-general of CERN from 1952 to 1954, before its official foundation in September 1954.
This beautiful monologue is interspersed with radio messages from the epoch, which announce salient historical facts. These create an atmosphere that grows less and less tense as alerts about the Nazis’ declarations and bombs give way to news of the first women’s vote, the landing of the first person on the Moon, and disarmament movements.
Written and directed by Giusy Cafari Panico and Corrado Calda, the play was composed after consulting with Edoardo’s son, Ugo Amaldi, who was present at the inaugural performance. The script is so rich in information that you leave the theatre feeling you now know a lot about scientific endeavours, mindsets and the general zeitgeist of the last century. Moreover, the play touches on some topics that are still very relevant today, including: brain drain, European identity, women in science and the use of science for military purposes.
The event was made possible thanks to the initiative of Ugo Amaldi, CERN’s Lucio Rossi, the Edoardo Amaldi Association (Fondazione Piacenza e Vigevano, Italy), and several sponsors. The presentation was introduced by former CERN Director-General Luciano Maiani, who was Edoardo Amaldi’s student, and current CERN Director-General Fabiola Gianotti, who expressed her gratitude for Amaldi’s contribution in establishing CERN.
Letizia Diamante, CERN.
Topological and Non-Topological Solitons in Scalar Field Theories by Yakov M Shnir Cambridge University Press
In the 19th century, the Scottish engineer John Scott Russell was the first to observe what he called a “wave of translation”, while watching a boat drawn along a channel by a pair of horses. This phenomenon is now referred to as a soliton and described mathematically as a stable, non-dissipative wave packet that maintains its shape while propagating at a constant velocity.
Solitons emerge in various nonlinear physical systems, from nonlinear optics and condensed matter to nuclear physics, cosmology and supersymmetric theories.
Structured in three parts, this book provides a comprehensive introduction to the description and construction of solitons in various models. In the first two chapters of part one, the author discusses the properties of topological solitons in the completely integrable sine-Gordon model and in non-integrable models with polynomial potentials. Then, in chapter three, he introduces solitary-wave solutions of the Korteweg–de Vries equation, which provide an example of non-topological solitons.
Part two deals with higher-dimensional nonlinear theories. In particular, the properties of scalar soliton configurations are analysed in two (2+1)-dimensional systems: the O(3) nonlinear sigma model and the baby Skyrme model. Part three focuses mainly on solitons in three spatial dimensions. Here the author covers stationary Q-balls and their properties, then discusses soliton configurations in the Skyrme model (skyrmions) and the knotted solutions of the Faddeev–Skyrme model (hopfions). The properties of related deformed models, such as the Nicole and the Aratyn–Ferreira–Zimerman models, are also summarised.
Based on the author’s lecture notes for a graduate-level course, this book is addressed to graduate students in theoretical physics and mathematics, as well as to researchers interested in solitons.
Virginia Greco, CERN.
Universal Themes of Bose–Einstein Condensation by Nick P Proukakis, David W Snoke and Peter B Littlewood Cambridge University Press
The study of Bose–Einstein condensation (BEC) has undergone an incredible expansion during the last 25 years. Back then, the only experimentally realised Bose condensate was liquid helium-4, whereas today the phenomenon has been observed in a number of diverse atomic, optical and condensed-matter systems. The turning point for BEC came in 1995, when three different US groups reported the observation of BEC in trapped, weakly interacting atomic gases of rubidium-87, lithium-7 and sodium-23 within weeks of one another. These studies led to the 2001 Nobel Prize in Physics being jointly awarded to Eric Cornell, Wolfgang Ketterle and Carl Wieman.
This book is a collection of essays written by leading experts on various aspects and branches of BEC, which is now a broad and interdisciplinary area of modern physics. Composed of five parts, the volume starts with the history of the rapid development of this field and then takes the reader through the most important results.
The second part provides an extensive overview of various general themes related to universal features of Bose–Einstein condensates, such as the question of whether BEC involves spontaneous symmetry breaking, of how the ideal Bose gas condensation is modified by interactions between the particles, and the concept of universality and scale invariance in cold-atom systems. Part three focuses on active research topics in ultracold environments, including optical lattice experiments, the study of distinct sound velocities in ultracold atomic gases – which has shaped our current understanding of superfluid helium – and quantum turbulence in atomic condensates.
Part four is dedicated to the study of condensed-matter systems that exhibit various features of BEC, while in part five possible applications of the study of condensed matter and BEC to answer questions on astrophysical scales are discussed.
Virginia Greco, CERN.
Zeros of Polynomials and Solvable Nonlinear Evolution Equations by Francesco Calogero Cambridge University Press
This concise book discusses the mathematical tools used to model complex phenomena via systems of nonlinear equations, which can be useful to describe many-body problems.
Starting from a well-established approach to the identification of solvable dynamical systems, the author proposes a novel algorithm that eliminates some of the restrictions of this approach and thus identifies more solvable/integrable N-body problems. After presenting this new differential algorithm for evaluating all the zeros of a generic polynomial of arbitrary degree, the book offers many examples showing its application and impact. The author first discusses systems of ordinary differential equations (ODEs), including second-order ODEs of Newtonian type, and then moves on to systems of partial differential equations and equations evolving in discrete time-steps.
This book is addressed to both applied mathematicians and theoretical physicists, and can be used as a basic text for a topical course for advanced undergraduates.
One hundred and fifty years since Dmitri Mendeleev revolutionised chemistry with the periodic table of the elements, an international team of researchers has resolved a longstanding question about one of its more mysterious regions – the actinide series (or actinoids, as adopted by the International Union of Pure and Applied Chemistry, IUPAC).
The periodic table’s neat arrangement of rows, columns and groups is a consequence of the electronic structures of the chemical elements. The actinide series has long been identified as a group of heavy elements starting at atomic number Z = 89 (actinium) and extending up to Z = 103 (lawrencium), each characterised by a stabilised 7s² outer electron shell. But the electron configurations of the heaviest elements of this sequence, from Z = 100 (fermium) onwards, have been difficult to measure, preventing confirmation of the series. The difficulty is that elements heavier than fermium can be produced only one atom at a time, in nuclear reactions at heavy-ion accelerators.
Confirmation
Now, Tetsuya Sato at the Japan Atomic Energy Agency (JAEA) and colleagues have used a surface ion source and isotope mass-separation technique at the tandem accelerator facility at JAEA in Tokai to show that the actinide series ends with lawrencium. “This result, which would confirm the present representation of the actinide series in the periodic table, is a serious input to the IUPAC working group, which is evaluating if lawrencium is indeed the last actinide,” says team member Thierry Stora of CERN.
Using the same technique, Sato and co-workers measured the first ionisation potential of lawrencium back in 2015. Since this is the energy required to remove the most weakly bound electron from a neutral atom, and is a fundamental property of every chemical element, it was a key step towards mapping lawrencium’s electron configuration. The result suggested that lawrencium has the lowest first ionisation potential of all the actinides, as expected owing to its weakly bound electron in the 7p₁/₂ valence orbital. But this value alone could not confirm the expected increase of the ionisation potentials of the heavy actinides up to nobelium (Z = 102), which occurs as the 5f electron shell fills, in a manner similar to the filling of the 4f shell up to ytterbium in the lanthanides.
In their latest study, Sato and colleagues have determined the successive first ionisation potentials from fermium to lawrencium, which is essential to confirm the filling of the 5f shell in the heavy actinides (see figure). The results agree well with those predicted by state-of-the-art relativistic calculations in the framework of QED and confirm that the ionisation values of the heavy actinides increase up to nobelium, while that of lawrencium is the lowest among the series.
The results demonstrate that the 5f orbital is fully filled at nobelium (with the [Rn] 5f¹⁴ 7s² electron configuration, where [Rn] denotes the radon core) and that lawrencium has a weakly bound electron, confirming that the actinides end with lawrencium. The nobelium measurement also agrees well with laser-spectroscopy measurements made at the GSI Helmholtz Centre for Heavy Ion Research in Darmstadt, Germany.
“The experiments conducted by Sato et al. constitute an outstanding piece of work at the top level of science,” says Andreas Türler, a chemist from the University of Bern, Switzerland. “As the authors state, these measurements provide unequivocal proof that the actinide series ends with lawrencium (Z = 103), as the filling of the 5f orbital proceeds in a very similar way to lanthanides, where the 4f orbital is filled. I am already eagerly looking forward to an experimental determination of the ionisation potential of rutherfordium (Z = 104) using the same experimental approach.”
The CMS detector has performed better than was thought possible when it was conceived. Combined with advances in analysis techniques, this has allowed the collaboration to make measurements – such as the coupling between the Higgs boson and bottom quarks – that were once deemed impossible. Indeed, together with its sister experiment ATLAS, CMS has turned the traditional view of hadron colliders as “hammers” rather than “scalpels” on its head.
In exploiting the LHC and its high-luminosity upgrade (HL-LHC) to maximum effect in the coming years, the CMS collaboration has to battle higher overall particle rates, higher “pileup” of superimposed proton–proton collision events per LHC bunch crossing, and higher instantaneous and integrated radiation doses to the detector elements. In the collaboration’s arsenal to combat this assault are silicon sensors able to withstand the levels of irradiation expected, a new high-rate trigger, and detectors with higher granularity or precision timing capabilities to help disentangle piled-up events.
The majority of CMS detector upgrades for the HL-LHC will be installed and commissioned during long-shutdown three (LS3). However, the planned 30-month duration of LS3 imposes logistical constraints, meaning that a large part of the muon-system upgrade and many ancillary systems (such as cooling, power and environmental control) need to be installed substantially beforehand. This makes the CMS work plan for LS2 extremely complex, dividing it into three classes of activity: the five-yearly maintenance of the existing detectors and services; the completion of the so-called “phase-1” upgrades necessary for CMS to continue to operate until LS3; and the initial upgrades to detectors, infrastructure or ancillary systems necessary for the HL-LHC. “The challenge of LS2 is to prepare CMS for Run 3 while not neglecting the work needed now to prepare for Run 4,” says technical coordinator Austin Ball.
A dedicated CMS upgrade programme has been in place since the LHC switched on in 2008. It is being carried out in two phases: the first, which started in 2014 during LS1, concerns improvements to deal with a factor-of-two increase over the design instantaneous luminosity delivered in Run 2; the second covers the upgrades necessary for the HL-LHC. The phase-1 upgrade is almost complete, thanks to work carried out during LS1 and regular end-of-year technical stops. This included the replacement of the three-layer barrel (two-disk forward) pixel detector with a four-layer barrel (three-disk forward) version, the replacement of photosensors and front-end electronics for some of the hadron calorimeters, and the introduction of a more powerful, FPGA-based level-1 hardware trigger. LS2 will conclude phase 1 by replacing the photosensors (hybrid photodiodes) in the barrel hadron calorimeter with silicon photomultipliers and replacing the innermost pixel barrel layer.
Phase-2 activities
But LS2 also sees the start of the phase-2 CMS upgrade, the first step of which is a new beampipe. The collaboration already replaced the beampipe during LS1 with a narrower one to allow the phase-1 pixel detector to reach closer to the interaction point. Now the plan is to extend the cylindrical section of the beampipe further, to provide space for the phase-2 pixel detector with enlarged pseudo-rapidity coverage, to be installed in LS3. In addition, for the muon detectors CMS will install a new gas-electron-multiplier (GEM) layer in the inner ring of the first endcap disk, upgrade the on-detector electronics of the cathode strip chambers, and lay services for a future GEM layer and improved resistive-plate chambers. Several other preparations of the detector infrastructure and services will take place in LS2 to be ready for the major installations in LS3.
Key elements of the LS2 work plan include: constructing major new surface facilities; modifying the internal structure of the underground cavern to accommodate new detector services (especially CO2 cooling); replacing the beampipe for compatibility with the upgraded tracking system; and improving the powering system of the 3.8 T solenoid to increase its longevity through the HL-LHC era. In addition, the system for opening and closing the magnet yoke for detector access will be modified to accommodate future tolerance requirements and service volumes, and the shielding system protecting detectors from background radiation will be reinforced. Significant upgrades of electrical power, gas distribution and the cooling plant also have to take place during LS2.
The CMS LS2 schedule is now fully established, with a critical path starting with the removal of the pixel detector and beampipe, and extending through the muon-system upgrade and maintenance, the installation of the phase-2 beampipe plus the revised innermost phase-1 pixel layer, and, after closing the magnet yoke, the re-commissioning of the magnet with the upgraded powering system. The other LS2 activities, including the barrel hadron-calorimeter work, will take place in the shadow of this critical path.
“The timely completion of the intense LS2 programme, including the construction of the on-site surface infrastructure necessary for the construction, assembly and refurbishment of the phase-2 detectors, is critical for a successful CMS phase-2 upgrade,” explains upgrade coordinator Frank Hartmann. “Although still far away, LS3 activities are already being planned in detail.” The future LS3 shutdown will see the CMS tracker completely replaced, with a new outer tracker that can provide tracks at 40 MHz to the upgraded level-1 trigger and a new inner tracker with extended pseudo-rapidity coverage. The 36 modules of the barrel electromagnetic calorimeter will be removed and their on-detector electronics upgraded to enable the high readout rate, while the current hadron and electromagnetic endcap calorimeters will both be replaced with a brand-new system (see “A new era in calorimetry” box). The addition of timing detectors in the barrel and endcaps will allow a 4D reconstruction of collision vertices and, together with the other new and upgraded detectors, reduce the effective event pileup at the HL-LHC to a level comparable to that seen today.
“The upgraded CMS detector will be even more powerful and able to make even more precise measurements of the properties of the Higgs boson as well as extending the searches for new physics in the unprecedented conditions of the HL-LHC,” says CMS spokesperson Roberto Carlin.
To precisely study the Higgs boson and extend our sensitivity to new physics in the coming years of LHC operations, the ATLAS experiment has a clear upgrade plan in place. Ageing of the inner tracker due to radiation exposure, data volumes that would saturate the readout links, obsolescence of electronics, and a collision environment swamped by up to 200 interactions per bunch crossing are some of the headline challenges facing the 3000-strong collaboration. While many installations will take place during long-shutdown three (LS3), beginning in 2024, much activity is taking place during the current LS2 – including major interventions to the giant muon spectrometer at the outermost reaches of the detector.
The main ATLAS upgrade activities during LS2 aim to increase the trigger efficiency for leptonic and hadronic signatures, especially for electrons and muons with a transverse momentum of at least 20 GeV. To improve the selectivity of the electron trigger, the amount of information used for the trigger decision will be drastically increased: until now, the very fine-grained information produced by the electromagnetic calorimeter has been grouped into “trigger towers” to limit the number, and hence the cost, of trigger channels, but advances in electronics and the use of optical fibres now allow the transmission of a much larger amount of information at reasonable cost. By replacing some components of the front-end electronics of the electromagnetic calorimeter, the segmentation available at trigger level will be increased fourfold, improving the ability to reject jets and preserve electrons and photons. The ATLAS trigger and data-acquisition systems will also be upgraded during LS2 with new electronics boards that can deal with the more granular trigger information coming from the detector.
New small wheels
Since 2013, ATLAS has been working on a replacement for its “small wheel” forward-muon endcap systems so that they can operate under the much harsher background conditions of the future LHC. The new small wheel (NSW) detectors employ two detector technologies: small-strip thin-gap chambers (sTGC) and Micromegas (MM). Both technologies can withstand the higher flux of neutrons and photons expected in future LHC operations, which will produce counting rates as high as 20 kHz cm⁻² in the inner part of the NSW, while delivering information for the first-level trigger and for the muon measurement. The main aim of the NSW is to reduce fake muon triggers in the forward region and to sharpen the trigger threshold drastically, allowing the same selection power as the present high-level trigger.
The first NSW started to take shape at CERN last year. The iron shielding disks (see “Iron support” image), which serve as the support for the NSW detectors in addition to shielding the endcap muon chambers from hadrons, have been assembled, while the services team is installing numerous cables and pipes on the disks. Only a few millimetres of space is available between the disk and the chambers for the cables on one side, and between the disk and the calorimeter on the other side, and the task is made even more difficult by having to work from an elevated platform. In a nearby building, the sTGC chambers coming from the different construction sites are being integrated in full wedges and, soon this year, the Micromegas wedges will be integrated and tested at a separate integration site. The construction of the sTGC chambers is taking place in Canada, Chile, China, Israel and Russia, while the Micromegas are being constructed in France, Germany, Greece, Italy and Russia. On a daily basis, cables arrive to be assembled with connectors and tested; piping is cut to length, cleaned and protected until installation; and gas-leak and high-voltage test stations are employed for quality control. In the meantime, several smaller upgrades will be deployed during LS2, including the installation of 16 new muon chambers in the inner layer of the barrel spectrometer.
The organisation of LS2 activities is a complex exercise in which the maintenance needs of the detectors have to be addressed in parallel with installation schedules. After a first period devoted to opening the detector and maintaining the forward muon spectrometer, the first major non-standard operation (scheduled for January) will be to bring the first small wheel to the surface. Having the detector fully open on one side will also allow very important tests for the installation of the new all-silicon inner tracker, scheduled for LS3. The upgrade of the electromagnetic-calorimeter electronics will start in February and continue for about one year, requiring all front-end boards to be dismounted from their crates, modifications to be made to both the boards and the crates, and the modified boards to be reinstalled in their original positions. Maintenance of the ATLAS tile calorimeter and inner detector will take place in parallel, a very important aspect of which will be the search for leaks in the front-end cooling system.
In August, the first small wheel will be lowered again, allowing the second small wheel to be brought to the surface to make space for the NSW installation foreseen in April 2020. In the same period, all the optical transmission boards of the pixel detector will have to be changed. Following these installations, there will be a long period of commissioning of all the upgraded detectors and the preparation for the installation of the second NSW in the autumn of 2020. At that moment the closing process will start and will last for about three months, including the bake-out of the beam pipe, which is a very delicate and dangerous operation for the pixel detectors of the inner tracker.
A coherent upgrade programme is now fully under way to enable ATLAS to exploit the full physics potential of the LHC in the coming years of high-luminosity operations. Thousands of people in more than 200 institutes around the world are involved, and the technical design reports for the upgrade so far number six volumes, each containing several hundred pages. At the end of LS2, ATLAS will be ready to take data in Run 3 with a renewed and better performing detector.