Giovanni Muratori received a double degree in naval and mechanical engineering at the University of Genoa in 1949, after which he worked at ENI-AGIP on the construction of instruments for oil exploration. He started at CERN in August 1959 in the PS division, where he worked on the heavy-liquid bubble chamber designed to study neutrino physics. Giovanni oversaw the design of the cameras – not an easy task in view of the strong magnetic field, which precluded the use of electric motors – and, after some initial setbacks, the chamber was ready for data-taking in early 1961. When the event rate proved insufficient, a crash programme was set in motion to improve the beam (using van der Meer’s magnetic horn) and to increase the total mass of the detectors (by adding spark chambers downstream). Giovanni embarked on the design of the mechanics and optics for these spark chambers, which were operational in 1963.
At the end of 1961 he was transferred to the nuclear physics division and in April 1966 was appointed leader of the technical assistance group, which was involved in the design and construction of optical and mechanical equipment. The group developed and constructed a wide variety of detectors and associated equipment. For the R-108 experiment at the ISR it built a set of novel cylindrical drift chambers that allowed track positions along the wires to be measured from the difference in the arrival times of the signal at the two ends of each wire. For NA31 the group built drift chambers installed in a helium-filled tank, as well as a lightweight Kevlar window separating the helium from a vacuum tank.
Early on, the group designed and constructed an automatic machine for winding large wire spark chambers and soon became specialised in the construction of arrays for the new multiwire proportional chambers. Led by Giovanni, the group developed equipment and facilities for Cherenkov detectors, including a dry lab for handling lithium foil and methods of producing precision spherical glass mirrors with highly reflective aluminium coatings. Mirrors made using these techniques were later used in the RICH detector at LEP’s DELPHI experiment.
Towards the end of his CERN career he worked on the initial designs of the time projection chamber (TPC) for another LEP experiment, ALEPH. He also started a collaboration with a group searching for the existence of a “fifth force”, designing and building a rotor that generated a dynamic gravitational field at around 450 Hz, which was used in the first absolute calibration of the gravitational-wave detector EXPLORER at CERN.
Giovanni remained at CERN for several years after his retirement in 1986, during which time he worked on several problems including the initial design of a prototype liquid argon chamber for use in underground experiments at Gran Sasso. He was a superb engineer. His work was highly appreciated and his opinions respected. He participated actively in the design of equipment with innovative and ingenious ideas. He also loved solving machining and manufacturing problems, whether on a large or Swiss-watch scale. With his common-sense attitude and his warm and generous spirit, his advice was often sought on personal matters. Giovanni will be remembered with respect and affection by all.
As the heaviest known particle, the top quark plays a unique role in the Standard Model (SM), making its presence felt in corrections to the masses of the W and Higgs bosons, and also, perhaps, in as-yet unseen physics beyond the SM. During Run 2 of the Large Hadron Collider (LHC), high-luminosity proton beams were collided at a centre-of-mass energy of 13 TeV. This allowed ATLAS to record and study an unprecedented number of collisions producing top–antitop pairs, providing ATLAS physicists with a unique opportunity to gain insights into the top quark’s properties.
ATLAS has measured the top–antitop production cross-section using events where one top quark decays to an electron, a neutrino and a bottom quark, and the other to a muon, a neutrino and a bottom quark. The striking eμ signature gives a clean and almost background-free sample, leading to a result with an uncertainty of only 2.4%, which is the most precise top-quark pair-production measurement to date. The measurement provides information on the top quark’s mass, and can be used to improve our knowledge of the parton distribution functions describing the internal structure of the proton. The kinematic distributions of the leptons produced in top-quark decays have also been precisely measured, providing a benchmark to test programs that model top-quark production and decay at the LHC (figure 1).
The mass of the top quark is a fundamental parameter of the SM, which enters precision calculations of certain quantum corrections. It can be measured kinematically through the reconstruction of the top quark’s decay products. The top quark decays via the weak interaction as a free particle, but the resulting bottom quark interacts with other particles produced in the collision and eventually emerges as a collimated “b-jet” of hadrons. Modelling this process and calibrating the jet measurement in the detector limit the precision of many top-quark mass measurements. However, around 20% of b-jets contain a muon that carries information about the parent bottom quark. By combining this muon with an isolated lepton from a W boson originating from the same top-quark decay, ATLAS has made a new measurement of the top-quark mass with a much-reduced dependence on jet modelling and calibration. The result is ATLAS’s most precise individual top-quark mass measurement to date: 174.48 ± 0.78 GeV.
At the LHC, top and antitop quarks are not produced fully symmetrically with respect to the proton-beam direction: top antiquarks are produced slightly more often at large angles to the beam, while top quarks, which receive more momentum from the colliding proton, emerge closer to the beam axis. Higher-order QCD diagrams translate this imbalance into the so-called charge asymmetry, which the SM predicts to be small (~0.6%), but which could be enhanced, or even suppressed, by new physics processes interfering with the known production modes. Using its full Run-2 data sample, ATLAS finds evidence of charge asymmetry in top-quark pair events with a significance of four standard deviations, confidently showing that the asymmetry is indeed non-zero. The measured charge asymmetry of 0.0060 ± 0.0015 is compatible with the latest SM predictions. ATLAS also measured the charge asymmetry as a function of the mass of the top–antitop system, further probing the SM (figure 2).
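For reference, the standard LHC definition compares the absolute rapidities of the top quark and antiquark in each event,

\[
A_C = \frac{N(\Delta|y| > 0) - N(\Delta|y| < 0)}{N(\Delta|y| > 0) + N(\Delta|y| < 0)}, \qquad \Delta|y| = |y_t| - |y_{\bar{t}}|,
\]

so a positive A_C means that top quarks tend to emerge at larger absolute rapidities (closer to the beam axis) than top antiquarks. The formula is quoted here as background; it is not spelled out in the article itself.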
When two lead nuclei collide in the LHC at an energy of a few TeV per nucleon, an extremely strong magnetic field of the order of 10¹⁴–10¹⁵ T is generated by the spectator protons, which pass by the collision zone without breaking apart in inelastic collisions. This field, the strongest yet probed by scientists, and in particular the rate at which it decays, is interesting to study because it gives access to unexplored properties of the quark–gluon plasma (QGP), such as its electric conductivity. In addition, chiral phenomena such as the chiral magnetic effect are expected to be induced by the strong fields. A left–right asymmetry in the production of negatively and positively charged particles relative to the collision reaction plane is one of the observables that is directly sensitive to electromagnetic fields. This asymmetry, called directed flow (v₁), is sensitive to two main competing effects: the Lorentz force experienced by charged particles (quarks) propagating in the magnetic field, and the Faraday effect – the quark current induced by the rapidly decreasing magnetic field. Charm quarks are produced in the early stages of heavy-ion collisions and are therefore more strongly affected by the electromagnetic fields than lighter quarks.
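As a reminder of the standard definition (background, not part of the original text), the flow coefficients are the Fourier harmonics of the particles’ azimuthal distribution with respect to the reaction-plane angle Ψ,

\[
\frac{dN}{d\varphi} \propto 1 + 2\sum_{n\geq 1} v_n \cos\!\big[n(\varphi - \Psi)\big], \qquad v_1 = \langle \cos(\varphi - \Psi) \rangle,
\]

and the charge-dependent difference Δv₁ discussed below (between positively and negatively charged particles, or between D⁰ and D̄⁰ mesons) is the quantity directly sensitive to the electromagnetic fields.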
The ALICE collaboration has recently probed this effect by measuring the directed flow, v₁, for charged hadrons and D⁰/D̄⁰ mesons as a function of pseudorapidity (η) in mid-central lead–lead collisions at √s_NN = 5.02 TeV. Head-on (most central) collisions were excluded from the analyses because in those collisions there are very few spectator nucleons (almost all nucleons interact inelastically), which leads to a weaker magnetic field.
The top-left panel of the figure shows the η dependence of v₁ for charged hadrons (centrality class 5–40%). The difference Δv₁ between positively and negatively charged hadrons is shown in the bottom-left panel. The η slope is found to be dΔv₁/dη = [1.68 ± 0.49 (stat) ± 0.41 (syst)] × 10⁻⁴, positive with a significance of 2.6σ. The measured slope is of a similar order of magnitude to recent model calculations of the expected effect for charged pions, but has the opposite sign.
The right-hand panels show the same analysis for the neutral charmed mesons D⁰ (cū) and D̄⁰ (c̄u) (centrality class 10–40%). The measured directed flows are found to be about three orders of magnitude larger than for the charged hadrons, reflecting the stronger fields experienced immediately after the collision, when the charm quarks are created. The slopes, positive for D⁰ and negative for D̄⁰, are opposite in sign to, and larger than, those in the model calculations. The slope of the difference in the directed flows is dΔv₁/dη = [4.9 ± 1.7 (stat) ± 0.6 (syst)] × 10⁻¹, positive with a significance of 2.7σ (lower-right panel). In this case too, the sign of the observed slope is opposite to that of the model calculations, suggesting that the relative contributions of the Lorentz and Faraday effects in those calculations are not correct.
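As a quick numerical cross-check (assuming, for simplicity, that the statistical and systematic uncertainties can be combined in quadrature, which may not be exactly how the published analysis treats them), the quoted significances follow directly from the numbers above:

\[
\frac{1.68}{\sqrt{0.49^2 + 0.41^2}} \approx 2.6, \qquad \frac{4.9}{\sqrt{1.7^2 + 0.6^2}} \approx 2.7.
\]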
Together with recent observations at RHIC, these LHC measurements provide an intriguing first sign of the effect of the large magnetic fields experienced in heavy-ion collisions on final-state particles. Measurements with larger data samples in Run 3 will have a precision sufficient to allow the contributions of the Lorentz force and the Faraday effect to be separated.
One of the best strategies for searching for new physics in the TeV regime is to look for the decays of new particles. The CMS collaboration has searched in the dilepton channel for particles with masses above a few hundred GeV since the start of LHC data taking. Thanks to newly developed triggers, the searches are now being extended to the more difficult lower range of masses. A promising possible addition to the Standard Model (SM) that could exist in this mass range is the dark photon (ZD). Its coupling with SM particles and production rate depend on the value of a kinetic mixing coefficient ε, and the resulting strength of the interaction of the ZD with ordinary matter may be several orders of magnitude weaker than the electroweak interaction.
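In one common convention (a schematic reminder rather than the precise Lagrangian used in the CMS analysis), the dark photon A′ mixes kinetically with the photon and thereby acquires an ε-suppressed coupling to the electromagnetic current,

\[
\mathcal{L} \supset -\frac{\varepsilon}{2}\, F_{\mu\nu} F'^{\mu\nu} \;\;\Rightarrow\;\; \mathcal{L}_{\rm int} \simeq \varepsilon\, e\, A'_{\mu} J^{\mu}_{\rm EM},
\]

so its production rate in proton–proton collisions scales as ε², which is why the experimental limits described below are quoted on ε² rather than on ε.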
The CMS collaboration has recently presented results of a search for a narrow resonance decaying to a pair of muons in the mass range from 11.5 to 200 GeV. This search looks for a strikingly sharp peak on top of a smooth dimuon mass spectrum that arises mainly from the Drell–Yan process. At masses below approximately 40 GeV, conventional triggers are the main limitation for this analysis as the thresholds on the muon transverse momenta (pT), which are applied online to reduce the rate of events saved for offline analysis, introduce a significant kinematic acceptance loss, as evident from the red curve in figure 1.
A dedicated set of high-rate dimuon “scouting” triggers, with some additional kinematic constraints on the dimuon system and significantly lower muon pT thresholds, was deployed during Run 2 to overcome this limitation. Only a minimal amount of high-level information from the online reconstruction is stored for the selected events. The reduced event size allows significantly higher trigger rates, up to two orders of magnitude higher than the standard muon triggers. The green curve in figure 1 shows the dimuon invariant-mass distribution obtained from data collected with the scouting triggers; the gain in kinematic acceptance at low masses is clearly visible.
The full data sets collected with the muon scouting and standard dimuon triggers during Run 2 are used to probe masses below 45 GeV, and between 45 and 200 GeV, respectively, excluding the mass range from 75 to 110 GeV where Z-boson production dominates. No significant resonant peaks are observed, and limits are set on ε² at 90% confidence as a function of the ZD mass (figure 2). These are among the world’s most stringent constraints on dark photons in this mass range.
It is impossible to envisage high-energy physics without its foundation of microprocessor technology, software and distributed computing. Almost as soon as CERN was founded the first contract to provide a computer was signed, but it took manufacturer Ferranti more than two years to deliver “Mercury”, our first valve-based behemoth, in 1958. So early was this machine ordered that the venerable FORTRAN language had yet to be invented! A team of about 10 people was required for operations and the I/O system was already a bottleneck. It was not long before faster and more capable machines were available at the lab. By 1963, an IBM 7090 based on transistor technology was available with a FORTRAN compiler and tape storage. This machine could analyse 300,000 frames of spark-chamber data – a big early success. By the 1970s, computers were important enough that CERN hosted its first Computing and Data Handling School. It was clear that computers were here to stay.
By the time of the LEP era in the late 1980s, CERN hosted multiple large mainframes. Workstations, to be used by individuals or small teams, had become feasible. DEC VAX systems were a big step forward in power, reliability and usability and their operating system, VMS, is still talked of warmly by older colleagues in the field. Even more economical machines, personal computers (PCs), were also reaching a threshold of having enough computing power to be useful to physicists. Moore’s law, which predicted the doubling of transistor densities every two years, was well established and PCs were riding this technological wave. More transistors meant more capable computers, and every time transistors got smaller, clock speeds could be ramped up. It was a golden age where more advanced machines, running ever faster, gave us an exponential increase in computing power.
Key to the computing revolution, alongside the hardware, was the growth of open-source software. The GNU project had produced many utilities that hackers and coders could use as the basis for their own software. With the start of the Linux project to provide a kernel, humble PCs became increasingly capable machines for scientific computing. Around the same time, Tim Berners-Lee’s proposal for the World Wide Web, which began as a tool for connecting information for CERN scientists, started to take off. CERN realised the value in releasing the web as an open standard and in doing so enabled a success that today connects almost the entire planet.
LHC computing
This interconnected world was one of the cornerstones of the computing that was envisaged for the Large Hadron Collider (LHC). Mainframes were not enough, nor were local clusters. What the LHC needed was a worldwide system of interconnected computing systems: the Worldwide LHC Computing Grid (WLCG). Not only would information need to be transferred, but huge amounts of data and millions of computer jobs would need to be moved and executed, all with a reliability that would support the LHC’s physics programme. A large investment in brand new grid technologies was undertaken, and software engineers and physicists in the experiments had to develop, deploy and operate a new grid system utterly unlike anything that had gone before. Despite rapid progress in computing power, storage space and networking, it was extremely hard to make a reliable, working distributed system for particle physics out of these pieces. Yet we achieved this incredible task. During the past decade, thousands of physics results from the four LHC experiments, including the Higgs-boson discovery, were enabled by the billions of jobs executed and the petabytes of data shipped around the world.
The software that was developed to support the LHC is equally impressive. The community made a wholesale migration from the FORTRAN of the LEP era to C++, and millions of lines of code were developed. Huge software efforts in every experiment produced frameworks that managed everything from data taking and the reconstruction of raw events through to analysis data. In simulation, the Geant4 toolkit enabled the experiments to begin data-taking at the LHC with a fantastic level of understanding of their extraordinarily complex detectors, allowing commissioning to proceed at a remarkable rate. The common ROOT foundational libraries and analysis environment allowed physicists to process the billions of events that the LHC supplied and to extract the physics from them at previously unheard-of scales.
Changes in the wider world
While physicists were busy preparing for the LHC, the web became a pervasive part of people’s lives. Internet superpowers like Google, Amazon and Facebook grew up as the LHC was being readied and this changed the position of particle physics in the computing landscape. Where particle physics had once been a leading player in software and hardware, enjoying good terms and some generous discounts, we found ourselves increasingly dwarfed by these other players. Our data volumes, while the biggest in science, didn’t look so large next to Google; the processing power we needed, more than we had ever used before, was small beside Amazon; and our data centres, though growing, were easily outstripped by Facebook.
Technology, too, started to shift. Since around 2005, Moore’s law, while still largely holding, has no longer been accompanied by increases in CPU clock speeds. Programs that ran in a serial mode on a single CPU core therefore started to become constrained in their performance. Instead, performance gains would come from concurrent execution on multiple threads or from using vectorised maths, rather than from faster cores. Experiments adapted by executing more tasks in parallel – from simply running more jobs at the same time to adopting multi-process and multi-threaded processing models. This post hoc parallelism was often extremely difficult because the code and frameworks written for the LHC had assumed a serial execution model.
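As a minimal sketch of the kind of change involved (generic C++, not code from any LHC experiment; the Event structure and quantities are invented for illustration), a serial per-event loop can be re-expressed with the C++17 parallel algorithms, letting the standard library distribute the work over threads and vector lanes:

```cpp
#include <execution>   // std::execution::par_unseq (C++17 parallel algorithms)
#include <functional>  // std::plus
#include <numeric>     // std::accumulate, std::transform_reduce
#include <vector>

// Hypothetical event record: just a list of track transverse momenta.
struct Event { std::vector<double> trackPt; };

// Per-event quantity: scalar sum of track pT.
double sumPt(const Event& ev) {
    return std::accumulate(ev.trackPt.begin(), ev.trackPt.end(), 0.0);
}

double totalPt(const std::vector<Event>& events) {
    // Serial form:   for (const auto& ev : events) total += sumPt(ev);
    // Parallel form: the library may split the range across threads and
    // vectorise the work within each chunk.
    return std::transform_reduce(std::execution::par_unseq,
                                 events.begin(), events.end(),
                                 0.0, std::plus<>{}, sumPt);
}
```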
The barriers being discovered for CPUs also caused hardware engineers to rethink how to exploit CMOS technology for processors. The past decade has witnessed the rise of the graphics processing unit (GPU) as an alternative way to exploit transistors on silicon. GPUs run with a different execution model: much more of the silicon is devoted to floating-point calculations, and there are many more processing cores, but each core is smaller and less powerful than a CPU core. To use such devices effectively, algorithms often have to be entirely rethought and data layouts have to be redesigned. Much of the convenient, but slow, abstraction power of C++ has to be given up in favour of more explicit code and simpler layouts. However, this rapid evolution poses other problems for the code in the long term: there is no single way to program a GPU, and vendors’ toolkits are usually quite specific to their hardware.
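The data-layout point can be made concrete with a generic (hypothetical) example: the “array of structures” that is most natural to write in C++ often has to become a “structure of arrays” so that neighbouring GPU threads or SIMD lanes read contiguous memory:

```cpp
#include <cstddef>
#include <vector>

// Array of structures: convenient, but threads reading hit[i].x and
// hit[i+1].x access memory with a stride of the whole struct.
struct Hit { float x, y, z, energy; };
using HitsAoS = std::vector<Hit>;

// Structure of arrays: each field is contiguous, so a group of GPU threads
// (or one vector register) can load x[0..31] in a single coalesced access.
struct HitsSoA {
    std::vector<float> x, y, z, energy;
};

// Same computation in SoA form: simple enough for a compiler to vectorise,
// and it maps trivially onto one GPU thread per index i.
float energyBeyond(const HitsSoA& h, float zCut) {
    float sum = 0.f;
    for (std::size_t i = 0; i < h.z.size(); ++i)
        if (h.z[i] > zCut) sum += h.energy[i];
    return sum;
}
```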
All of this would matter less if the LHC experiments were standing still, but nothing could be further from the truth. For Run 3 of the LHC, scheduled to start in 2021, the ALICE and LHCb collaborations are installing new detectors and preparing to take far more data than ever before. Hardware triggers are being dropped in favour of full software processing systems and continuous data processing. The high-luminosity upgrade of the LHC for Run 4, from 2026, will be accompanied by new detector systems for ATLAS and CMS, much higher trigger rates and greatly increased event complexity. All of this physics needs to be supported by a radical evolution of software and computing systems, and in a more challenging sociological and technological environment. Nor will the LHC be the only big scientific player in the future: facilities such as DUNE, FAIR, SKA and LSST will come online and have to handle as much data as, if not more than, CERN and the WLCG. That is both a challenge and an opportunity to work with new scientific partners in the era of exascale science.
There is one solution that we know will not work: simply scaling up the money spent on software and computing. We will need to live with flat budgets, so if the event rate of an experiment increases by a factor of 10 then we have a budget per event that just shrank by the same amount! Recognising this, the HEP Software Foundation (HSF) was invited by the WLCG in 2016 to produce a roadmap for how to evolve software and computing in the 2020s – resulting in a community white paper supported by hundreds of experts in many institutions worldwide (CERN Courier April 2018 p38). In parallel, CERN open lab – a public–private partnership through which CERN collaborates with leading ICT companies and other research organisations – published a white paper setting out specific challenges that are ripe for tackling through collaborative R&D projects with leading commercial partners.
Facing the data onslaught
Since the white paper was published, the HSF and the LHC-experiment collaborations have worked hard to tackle the challenges it lays out. Understanding how event generators can be best configured to get good physics at minimum cost is a major focus, while efforts to get simulation speed-ups from classical fast techniques, as well as new machine-learning approaches, have intensified. Reconstruction algorithms have been reworked to take advantage of GPUs and accelerators, and are being seriously considered for Run 3 by CMS and LHCb (as ALICE makes even more use of GPUs since their successful deployment in Run 2). In the analysis domain, the core of ROOT is being reworked to be faster and also easier for analysts to work with. Much inspiration is taken from the Python ecosystem, using Jupyter notebooks and services like SWAN.
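A flavour of the reworked analysis interfaces is given by ROOT’s RDataFrame, which lets an analysis be written declaratively and leaves the (possibly multi-threaded) event loop to the library; the file, tree and branch names below are invented for illustration:

```cpp
#include <TROOT.h>                 // ROOT::EnableImplicitMT
#include <ROOT/RDataFrame.hxx>
#include <TCanvas.h>

void leadingMuonPt() {
    ROOT::EnableImplicitMT();                      // run the event loop on all cores
    ROOT::RDataFrame df("Events", "events.root");  // hypothetical tree and file

    // Declarative chain: filters and derived columns are only evaluated
    // when the histogram result is actually requested.
    auto h = df.Filter("nMuon >= 2")
               .Define("leadPt", "Muon_pt[0]")
               .Histo1D({"leadPt", ";leading muon p_{T} [GeV];events", 100, 0., 200.},
                        "leadPt");

    TCanvas c;
    h->Draw();
    c.SaveAs("leadPt.png");
}
```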
These developments are firmly rooted in the new distributed models of software development based on GitHub or GitLab, with worldwide development communities, hackathons and social coding. Open source is also vital, and all of the LHC experiments have now opened up their software. In the computing domain there is intense R&D into improving data management and access, and the ATLAS-developed Rucio data-management system is being adopted by a wide range of other HEP experiments and many astronomy communities. Many of these developments got a shot in the arm from the IRIS-HEP project in the US; other European initiatives, such as IRIS in the UK and the German IDT-UM project, are also helping, though much more remains to be done.
All this sets us on a good path for the future, but still, the problems remain significant, the implementation of solutions is difficult and the level of uncertainty is high. Looking back to the first computers at CERN and then imagining the same stretch of time into the future, predictions are next to impossible. Disruptive technology, like quantum computing, might even entirely revolutionise the field. However, if there is one thing that we can be sure of, it’s that the next decades of software and computing at CERN will very likely be as interesting and surprising as the ones already passed.
For every trillion K⁰S mesons, only five are expected to decay to two muons. Like the better known Bs → μ⁺μ⁻ decay, which was first observed jointly by LHCb and CMS in 2013, the decay rate is very sensitive to possible contributions from yet-to-be discovered particles that are too heavy to be observed directly at the LHC, such as leptoquarks or supersymmetric partners. These particles could significantly enhance the decay rate, up to existing experimental limits, but could also suppress it via quantum interference with the Standard Model (SM) amplitude.
Despite the unprecedented K⁰S production rate at the LHC, searching for K⁰S → μ⁺μ⁻ is challenging because of the low transverse momentum of the two muons, typically a few hundred MeV/c. Though LHCb was primarily designed for the study of heavy-flavour particles, its unique ability to select low transverse-momentum muons in real time makes the search feasible. According to SM predictions, just two signal events are expected in the Run-2 data, potentially making this the rarest decay ever recorded.
The analysis uses two machine-learning tools: one to discriminate muons from pions, and another to discriminate signal candidates from the so-called combinatorial background that arises from coincidental decays. Additionally, a detailed, data-driven map of the detector material around the interaction point helps to reduce the “fixed-target” background caused by particles interacting with the detector material. A background of K⁰S → π⁺π⁻ decays dominates the selection and, in the absence of a compelling signal, an upper limit on the branching fraction of 2.1 × 10⁻¹⁰ has been set at 90% confidence. This is approximately four times more stringent than the previous world-best limit, set by LHCb with Run-1 data. The result has implications for physics models with leptoquarks and for some fine-tuned regions of the Minimal Supersymmetric SM.
The upgraded LHCb detector, scheduled to begin operating in 2021 after the present long shutdown of the LHC, will offer excellent opportunities to improve the precision of this search and eventually find a signal. In addition to the increased luminosity, the LHCb upgrade will have a full software trigger, which is expected to significantly improve the signal efficiency for K⁰S → μ⁺μ⁻ and other decays with very soft final-state particles.
The 3 km-high summit of Cerro Armazones, located in the Atacama desert of northern Chile, is the construction site for one of the most ambitious projects ever mounted by astronomers: the Extremely Large Telescope (ELT). Scheduled for first light in 2025, the ELT is centred around a 39 m-diameter main mirror that will gather 250 times more light than the Hubble Space Telescope and use advanced corrective optics to obtain exceptional image quality. It is the latest major facility of the European Southern Observatory (ESO), which has been surveying the southern skies for almost 60 years.
The science goals of the ELT are vast and diverse. Its sheer size will enable the observation of distant objects that are currently beyond reach, allowing astronomers to better understand the formation of the first stars, galaxies and even black holes. The sharpness of its images will also enable a deeper study of extrasolar planets, possibly even the characterisation of their atmospheres. “One new direction may become possible through very high precision spectroscopy – direct detection of the expansion rate of the universe, which would be an amazing feat,” explains Pat Roche of the University of Oxford and former president of the ESO council. “But almost certainly the most exciting results will be from unexpected discoveries.”
Technical challenges
The ELT was approved in 2006 and civil engineering began in 2014; construction of the 74 m-high, 86 m-diameter dome and the 3400-tonne main structure began in 2019. In January 2018 the first segments of the main mirror were successfully cast, marking the first step of a challenging five-mirror system that goes beyond the traditional two-mirror “Gregorian” design. The introduction of a third powered mirror delivers a focal plane that remains un-aberrated at all field locations, while a fourth and a fifth mirror correct in real time for distortions due to the Earth’s atmosphere and other external factors. This novel arrangement, combined with the sheer size of the ELT, makes almost every aspect of the design particularly challenging.
The main mirror is itself a monumental enterprise: it consists of 798 hexagonal segments, each measuring approximately 1.4 m across and 50 mm thick. To keep the surface unaffected by external factors such as temperature or wind, each segment has edge sensors measuring its location to within a few nanometres – the most accurate ever used in a telescope. The construction and polishing of the segments, as well as of the edge sensors, is a demanding task and only possible thanks to collaboration with industry; at least seven private companies are working on the main mirror alone. The mirror was originally to be 42 m in diameter, but it was later reduced to 39 m, mainly for cost reasons, while still allowing the ELT to fulfil its main scientific goals. “The ELT is ESO’s largest project and we have to ensure that it can be constructed and operated within the available budget,” says Roche. “A great deal of careful planning and design, most of it with input from industry, was undertaken to understand the costs and the cost drivers, and the choice of primary mirror diameter emerged from these analyses.”
The task is not much easier for the other mirrors. The secondary mirror, measuring 4 m across, is highly convex and will be the largest secondary mirror ever employed on a telescope and the largest convex mirror ever produced. The ELT’s tertiary mirror also has a curved surface, contrary to more traditional designs. The fourth mirror will be the largest adaptive mirror ever made, supported by more than 5000 actuators that will deform and adjust its shape in real-time to achieve a factor-500 improvement in resolution.
Currently 28 companies are actively collaborating on different parts of the ELT design. Most of these companies are European, but the contracts also include the Chilean companies ICAFAL, for the road and platform construction, and Abengoa, for the ELT technical facility. Among the European contracts, the construction of the telescope dome and main structure by the Italian ACe consortium of Astaldi and Cimolai is the largest in ESO’s history. The total cost estimate for the baseline design of the ELT is €1.174 billion, while the running cost is estimated to be around €50 million per year. Since the approval of the ELT, ESO has increased its number of member states from 14 to 16, with Poland and Ireland joining in 2015 and 2018, respectively. Chile is a host state and Australia a strategic partner.
European Southern Observatory’s particle-physics roots
The ELT’s success lies in ESO’s vast experience in the construction of innovative telescopes. The idea for ESO, a 16-nation intergovernmental organisation for research in ground-based astronomy, was conceived in 1954 with the aim of creating a European observatory dedicated to observations of the southern sky. At the time, the largest such facilities had an aperture of about 2 m; more than 50 years later, ESO is responsible for a variety of observatories, including its first telescope at La Silla, not far from Cerro Armazones (home of the ELT).
Like CERN, ESO was born in the aftermath of the war to allow European countries to pursue scientific projects that no nation could undertake on its own. The similarities are by no means a mere coincidence. From the beginning, CERN served as a model for important administrative aspects of the organisation, such as the council delegate structure, the finance base and the personnel regulations. A stronger collaboration ensued in 1969, when ESO approached CERN to assist with the powerful and sophisticated instrumentation of its 3.6 m telescope and with other challenges ESO was facing, both administrative and technological. This collaboration saw ESO facilities established at CERN: the Telescope Project Division and, a few years later, ESO’s Sky Atlas Laboratory. A similar collaboration has since been organised with EMBL and, more recently, for a new hadron-therapy facility in Southeast Europe.
Unprecedented view
A telescope of this scale has never been attempted before in astronomy. Not only must the ELT be constructed and operated within the available budget, but it should not impact the operation of ESO’s current flagship facilities (such as the VLT, the VLT interferometer and the ALMA observatory).
The amount of data produced by the ELT is estimated to be around 1–2 TB per night, including both scientific and calibration observations. The data will be analysed automatically, and users will have the option to download the processed data or, if needed, to download the original data and process it in their own research centres. To secure observation time with the facility, ESO issues a call for proposals once or twice a year, in which researchers propose observations in their fields of interest. “A committee of astronomers then evaluates the proposals and ranks them according to their relevance and potential scientific impact; the highest-ranked ones are then chosen to be followed,” explains project scientist Miguel Pereira of the University of Oxford.
In addition to its astronomical goals, the ELT will contribute to the growing confluence of cosmology and fundamental physics. Specifically, it will help elucidate the nature of dark energy by identifying distant type Ia supernovae, which serve as excellent markers of the universe’s expansion history. The ELT will also measure the change in redshift with time of distant objects – a feat that is beyond the capabilities of current telescopes – to track the rate of expansion directly. Possible variations over time of fundamental constants, such as the fine-structure constant and the strong coupling constant, will also be targeted. Such measurements are very challenging because the strength of the constraint on the variability depends critically on the accuracy of the wavelength calibration. The ELT’s ultra-stable high-resolution spectrograph aims to remove the systematic uncertainty currently present in wavelength-calibration measurements, offering the possibility of an unambiguous detection of such variations.
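The redshift-drift idea rests on a simple relation (the Sandage test, quoted here as textbook background rather than from the article): for a source at redshift z, the drift per unit of the observer’s time is set by the difference between today’s expansion rate and the rate at the epoch of emission,

\[
\dot{z} = (1+z)\,H_0 - H(z),
\]

which for realistic cosmologies corresponds to apparent velocity shifts of only of order centimetres per second over decade-long observing campaigns – hence the need for the ELT’s ultra-stable spectrograph.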
ELT construction is on schedule, and first light is expected in 2025. “In the end, projects succeed because of the people who design, build and support them,” says Roche, attributing the success of the ELT to rigorous attention to design and analysis across all aspects of the project. The road ahead is still challenging and full of obstacles but, as the former director of the Paris Observatory, André Danjon, wrote to his counterpart at the Leiden Observatory, Jan Oort, in 1962: “L’astronomie est bien l’école de la patience” – astronomy is indeed the school of patience. No doubt the ELT will bring extraordinary scientific rewards.
The first joint meeting of the European Committee for Future Accelerators (ECFA), the Nuclear Physics European Collaboration Committee (NuPECC) and the Astroparticle Physics European Consortium (APPEC) took place from 14 to 16 October in Orsay, France. Making progress in domains such as dark matter, neutrinos and gravitational waves increasingly requires interdisciplinary approaches to scientific and technological challenges, and the new Joint ECFA-NuPECC-APPEC Seminar (JENAS) events are designed to reinforce links between astroparticle, nuclear and particle physicists.
Jointly organised by LAL-Orsay, IPN-Orsay, CSNSM-Orsay, IRFU-Saclay and LPNHE-Paris, the inaugural JENAS meeting saw 230 junior and senior members of the three communities discuss overlapping interests. Readout electronics, silicon photomultipliers, big-data computing and artificial intelligence were just a handful of the topics discussed. For example, the technological evolution of silicon photomultipliers, which are capable of measuring single-photon light signals and can operate at low voltage and in magnetic fields, will be key both for novel calorimeters and timing detectors at the high-luminosity LHC. They will also be used in the Cherenkov Telescope Array – an observatory of more than 100 telescopes which will be installed at La Palma in the northern hemisphere, and in the Atacama Desert in the southern hemisphere, becoming the world’s most powerful instrument for ground-based gamma-ray astronomy.
Organisational synergies related to education, outreach, open science, open software and careers are also readily identified, and a diversity charter was launched by the three consortia, whereby statistics on relevant parameters will be collected at each conference and workshop in the three subfields. This will allow us to verify how well our communities embrace diversity.
As chairs of the three consortia, we issued a call for novel expressions of interest to tackle common challenges in subjects as diverse as computing and the search for dark matter. Members of the high-energy physics and related communities can submit their ideas, in particular those concerning synergies in technology, physics, organisation and applications. APPEC, ECFA and NuPECC will discuss and propose actions in advance of the next JENAS event in 2021.
Gamma-ray bursts (GRBs) are the brightest electromagnetic events to occur in the universe since the Big Bang. First detected in 1967, GRBs have since been observed about once per day using a range of instruments, allowing astrophysicists to gain a deeper understanding of their origin. As often happens, 14 January 2019 saw the detection of three GRBs. While the first two were not of particular interest, the unprecedented energy of the photons emitted by the third – measured by the MAGIC telescopes – provides new insight into these mysterious phenomena.
The study of GRBs is unique, both because GRBs occur at random locations and times and because each GRB has different time characteristics and energy spectra. GRBs consist of two phases: a prompt phase, lasting from hundreds of milliseconds to hundreds of seconds, which consists of one or several bright bursts of hard X-rays and gamma rays; followed by a significantly weaker “afterglow” phase, which can be observed at lower energies ranging from radio to X-rays and lasts for up to months.
Since the late 1990s, optical observations have confirmed both that GRBs happen in other galaxies and that longer-duration GRBs tend to be associated with supernovae, strongly hinting that they result from the death of massive stars. Shorter GRBs, meanwhile, have recently been shown to be the result of neutron-star mergers, thanks to the first joint observation of a GRB with a gravitational-wave event in 2017. While that event is often regarded as the start of multi-messenger astrophysics, the recent detection of GRB 190114C, which lies 4.5 billion light-years from Earth, adds yet another messenger to the field of GRB astrophysics: TeV photons.
The MAGIC telescopes on the island of La Palma measure the Cherenkov radiation produced when TeV photons induce electromagnetic showers after interacting with the Earth’s atmosphere. During the past 15 years, MAGIC has discovered a range of astrophysical sources via their emission at these extreme energies. Detecting such emission from GRBs, however, remained elusive despite more than 100 attempts and theoretical predictions that it could exist.
On 14 January, following an alert provided by space-based gamma-ray detectors, the MAGIC telescopes started repointing within a few tens of seconds of the onset of the GRB. Within the next half hour, the telescopes had observed around 1000 high-energy photons from the source. This emission, long predicted by theorists, is shown by the collaboration to be the result of the “synchrotron self-Compton” process, whereby high-energy electrons accelerated in the initial violent explosion interact with magnetic fields generated as the ejected material collides with interstellar matter. The synchrotron emission from this interaction produces the afterglow observed at X-ray, optical and radio energies, but some of the synchrotron photons subsequently undergo inverse Compton scattering off the same electrons, boosting them to TeV energies. The MAGIC measurements show for the first time that this mechanism does indeed occur. Given the many past observations in which no such emission was seen, it appears to be yet another feature that differs between GRBs.
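The energy scales involved follow from the basic inverse-Compton relation (a textbook estimate, not a number taken from the MAGIC analysis): in the Thomson regime, an electron with Lorentz factor γ up-scatters a seed photon of energy E_seed to roughly

\[
E_{\rm IC} \sim \gamma^{2} E_{\rm seed},
\]

so the same electron population that produces the X-ray synchrotron afterglow can plausibly boost a fraction of those photons into the GeV–TeV range.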
The MAGIC results were published in an issue of Nature that also reported the discovery of similar emission from a different GRB by another Cherenkov telescope: the High Energy Stereoscopic System (H.E.S.S.) in Namibia. While the measurements are consistent, it is interesting to note that the H.E.S.S. observations were made ten hours after that particular GRB, showing that this type of emission can also occur on much later time scales. With two new large-scale Cherenkov observatories – the Large High Altitude Air Shower Observatory in China and the global Cherenkov Telescope Array – about to commence data taking, the field of GRB astrophysics can now expect a range of new discoveries.
In 1871, James Clerk Maxwell undertook the titanic enterprise of planning a new physics laboratory for the University of Cambridge from scratch. To avoid mistakes, he visited the Clarendon Laboratory in Oxford and the laboratory of William Thomson (Lord Kelvin) in Glasgow – then the best research institutes in the country – to learn all that he could from their experience. Almost 150 years later, Malcolm Longair, a renowned astrophysicist and head of the Cavendish Laboratory from 1997 to 2005, has written a monumental account of the scientific achievements of those who researched, worked and taught at a laboratory that has become an indispensable part of the machinery of modern science.
The 22 chapters of the book are organised in ten parts corresponding to the inspiring figures who led the laboratory through the years, most famous among them Maxwell himself, Thomson, Rutherford, Bragg, Mott and a few others. The numerous Nobel laureates who spent part of their careers at the Cavendish are also nicely characterised, among them Chadwick, Appleton, Kapitsa, Cockcroft and Walton, Blackett, Watson and Crick, Cormack and, last but not least, Didier Queloz, Nobel laureate in 2019 and professor at the universities of Cambridge and Geneva. You may even read about friends and collaborators, as the exposition includes the laboratory’s most recent achievements.
Besides the accuracy of the scientific descriptions and the sharpness of the ideas, this book inaugurates a useful compromise that might inspire future science historians. Until now it has been customary to write either biographies (or collected works) of leading scientists or extensive histories of particular laboratories; here these two complementary approaches are happily married, in a way that may lead to further insights into the genesis of crucial discoveries. Longair elucidates the physics with a competent care that is often difficult to find, and his exciting accounts will stimulate an avalanche of thoughts on the development of modern science. By returning to a time when Rutherford and Thomson managed the finances of the laboratory almost from their personal cheque books, the book will also prompt readers to reflect on the interplay between science, management and technology.
History is often instrumental in understanding where we come from, but it cannot reliably predict directions for the future. Nevertheless the history of the Cavendish shows that lasting progress can come from diversity of opinion, the inclusiveness of practices and mutual respect between fundamental sciences. How can we sum up the secret of the scientific successes described in this book? A tentative recipe might be unity in necessary things, freedom in doubtful ones and respect for every honest scientific endeavour.