The diffuse photon background that fills the universe does not limit itself to the attention-hogging cosmic microwave background, but spans a wide spectrum extending up to TeV energies. The origin of the photon emission at X-ray and gamma-ray wavelengths, first discovered in the 1970s, remains poorly understood. Many possible sources have been proposed, ranging from active galactic nuclei to dark-matter annihilation. Thanks to many years of gamma-ray data from the Fermi Large Area Telescope (Fermi-LAT), a group from Australia and Italy has now produced a model that links part of the diffuse emission to star-forming galaxies (SFGs).
As their name implies, SFGs are galaxies in which stars are actively forming – and therefore also dying in supernova events. Such sources, which include our own Milky Way, have attracted the interest of gamma-ray astronomers during the past decade because several resolvable SFGs have been shown to emit in the 100 MeV to 1 TeV energy range. Given their preponderance, SFGs are thus a prime-suspect source of the diffuse gamma-ray background.
Clear correlation
The source of gamma rays within SFGs is very likely the interaction between cosmic rays and the interstellar medium (ISM). The cosmic rays, in turn, are thought to be accelerated within the shockwaves of supernova remnants, after which they interact with the ISM to produce a hadronic cascade. The cascade includes neutral pions, which decay into gamma rays. This connection between supernova remnants and gamma rays is strengthened by a clear correlation between the star-formation rate of a galaxy and the gamma-ray flux it emits. Additionally, such sources are theorised to be responsible for the neutrino emission detected by the IceCube observatory over the past few years, which also appears to be highly isotropic.
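Schematically, the production chain can be summarised as follows (a standard textbook sketch, added here for illustration rather than taken from the study itself):

\[
p_{\rm CR} + p_{\rm ISM} \rightarrow X + \pi^{0} + \pi^{\pm}, \qquad \pi^{0} \rightarrow \gamma\gamma, \qquad \pi^{\pm} \rightarrow \mu^{\pm} + \nu_{\mu}(\bar{\nu}_{\mu}),
\]

where the neutral pions supply the gamma rays and the charged pions feed the accompanying neutrino flux invoked to explain the IceCube signal.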
Based on additional SFG gamma-ray sources found by Fermi–LAT, which could be used for validation, the Australian/Italian group developed a physical model to study the contribution of SFGs to the cosmic diffuse gamma-ray background. The model used to predict the gamma-ray emission from galaxies starts with the spectra of charged cosmic rays produced in the numerous supernova remnants within a galaxy, and benefits greatly from data collected from several such remnants in the Milky Way. The production and energies of the gamma rays generated when these cosmic rays interact with the ISM are then modelled, followed by the gamma-ray transport to Earth, which includes losses due to interactions with low-energy photons leading to pair production.
The main uncertainty in previous models was the efficiency with which a galaxy transforms the energy of its cosmic rays into gamma rays, since this cannot be measured using our own galaxy. The big breakthrough in the new work is a more thorough theoretical modelling of this efficiency, which was first tested extensively against data from resolved SFG sources. Once these tests proved successful, the model could be applied to predict the gamma-ray emission properties of galaxies spanning the history of the universe. The predictions indicate that the low-energy part of the spectrum can be largely attributed to galaxies from the so-called cosmic noon: the period, about 10 billion years ago, when star formation in large galaxies was at its peak. Nearby galaxies, on the other hand, explain the high-energy part of the spectrum, since the TeV emission of old and distant sources is absorbed in the intergalactic medium through pair production with low-energy photons. Overall, the model reproduces not only the spectral shape but also the overall flux (see “Good fit” figure), leaving little need for other possible sources such as active galactic nuclei or dark matter.
These new results once again indicate the importance of star-forming regions for astrophysics, which were also recently proposed as a possible source of PeV cosmic rays following LHAASO observations (CERN Courier July/August 2021 p11). Furthermore, they show the potential for an expansion to other astrophysical messengers, with the authors stating their ambition to apply the same model to radio emission and high-energy neutrinos.
Twenty-five years. That is the time we have from now to ensure a smooth transition between the LHC and the next major collider at CERN. Twenty-five years to approve a project, find the necessary funding, solve administrative problems and define a governance model; to dig a tunnel, equip it with a cutting-edge accelerator, and design and build experiments at the limits of technology.
One of the most memorable moments of my time as president of the CERN Council came on 19 June 2020, when delegates from CERN’s Member States adopted a resolution updating the European strategy for particle physics. The implementation of the European strategy recommendations is now in full swing, based around two major topics for CERN’s long-term future: the Future Circular Collider (FCC) feasibility study, with an organisational schema in place, and the elaboration of roadmaps for advanced accelerator and detector technologies. At the next strategy update towards the middle of the decade, we should be able to decide if the first phase of the FCC – an electron–positron collider operating at centre-of-mass energies from 91 GeV (the Z mass) to 365 GeV (above the top-quark pair-production threshold) – can be built, paving the way for a hadron collider with an energy of at least 100 TeV in the same tunnel. By then, we should also have a clearer picture of the potential of novel accelerator technologies, such as muon colliders or plasma acceleration.
Besides the purely technical questions, many other challenges lie ahead. It will be indispensable to attract major interregional partners to CERN’s next large project. Together with the scientific impact, the socioeconomic benefits of skills and technologies built through large research infrastructures are increasingly recognised, which makes a new collider an appealing prospect for states to participate in. But what collaboration model can we elaborate together that is fair and efficient? How can we build bridges to other projects currently discussed, such as the ILC? The US recently started its own “Snowmass” strategy process, which may also impact the decisions ahead.
Neither the implementation of the technology roadmaps nor the FCC feasibility study, far less the construction of the collider itself, can be carried out by CERN alone. Without a tight network of collaboration and exchanges it will not be possible to find the brains, the hands and the financial resources to ensure that CERN continues to thrive in the long term. The collaboration and support of laboratories and institutes in CERN’s Member and Associate Member States and beyond are crucial. Can we imagine new ways to enhance and to intensify the collaboration, to spread the quality and to share the savoir faire? Understanding where difficulties may lie merits continued effort.
For projects that reach far into the century, we will need the curiosity, creativity and motivation of young people entering our field. Efforts such as the recent ECFA early-career researcher survey are salutary. But are there other means through which we can broaden the freedom and creativity for young scientists within our highly organised collaborations? If there are silver linings to the pandemic, one is surely the increased accessibility to scientific discourse for a greater range of young and diverse researchers that our adaptation to virtual meetings has demonstrated.
Societal acceptance will also be crucial: local communities must be convinced to accept the impact of a big new project. Developing environmentally friendly technologies is one factor, especially if we can contribute innovative solutions. In this context, the launch in September 2020 of CERN’s first public environment report (with a second report about to be published) is timely. CERN’s new education and outreach centre, the Science Gateway, will also significantly increase the number of people who can visit and be inspired by CERN.
For projects that reach far into the century, we will need the curiosity, creativity and motivation of young people entering our field
The enormous amount of work that has taken place during Long Shutdown 2 lays the foundation for the HL-LHC later this decade. However, beyond ensuring the success of this flagship programme, and that of CERN’s large and diverse portfolio of non-collider experiments, we must clearly and carefully explain the case for continued exploration at the energy frontier. To other scientists: we all benefit from mutual exchange and stimulation. To teachers and educators: we can help make science fascinating and attract young people into STEM subjects. To society: we can help increase scientific literacy, which is crucial for democracies to distinguish sense from, well, nonsense.
Twenty-five years is not long. And no matter our individual roles at CERN, we each have our work cut out. Together, we need to stand behind this unique laboratory, be proud of its past achievements, and embrace the changes necessary to build its – and our – future.
Having led the SKAO for almost a decade, how did it feel to get the green light for construction in June this year?
The project has been a long time in gestation and I have invested much of my professional life in the SKA project. When the day came, I was 95% confident that the SKAO council would give us the green light to proceed, as we were still going through ratification processes in national parliaments. I sent a message to my senior team saying: “This is the most momentous week of my career” because of the collective effort of so many people in the observatory and across the entire partnership over so many years. It was a great feeling, even if we couldn’t celebrate properly because of the pandemic.
What will the SKA telescopes do that previous radio telescopes couldn’t?
The game changer is the sheer size of the facility. Initially, we’re building 131,072 low-frequency antennas in Western Australia (“SKA-Low”) and 197 15 m-class dishes in South Africa (“SKA-Mid”). This will provide us with up to a factor of 10 improvement in our ability to see fainter details in the universe. The long-term SKA vision will increase the sensitivity by a further factor of 10. We’ve got many science areas, but two are going to be unique to us. One is the ability to detect hydrogen all the way back to the epoch of reionisation, also called the “cosmic-dawn”. The frequency range that we cover, combined with the large collecting area and the sensitivity of the two radio telescopes, will allow us to make a “movie” of the universe evolving from a few hundred million years after the Big Bang to the present day. We probably won’t see the first stars but will see the effect of the first stars, and we may see some of the first galaxies and black holes.
We put a lot of effort into conveying the societal impact of the SKA
The second key science goal is the study of pulsars, especially millisecond pulsars, which emit radio pulses extremely regularly, giving astronomers superb natural clocks in the sky. The SKA will be able to detect every pulsar that can be detected on Earth (at least every pulsar that is pointing in our direction and within the ~70% of the sky visible by the SKA). Pulsars will be used as a proxy to detect and study gravitational waves from extreme phenomena. For instance, when there’s a massive galaxy merger that generates gravitational waves, we will be able to detect the passage of the waves through a change in the pulse arrival times. The SKA telescopes will be a natural extension of existing pulsar-timing arrays, and will be working as a network but also individually.
Another goal is to better understand the influence of dark matter on galaxies and how the universe evolves, and we will also be able to address questions regarding the nature of neutrinos through cosmological studies.
How big is the expected SKA dataset, and how will it be managed?
It depends where you look in the data stream, because the digital signal-processing systems will reduce the data volume as much as possible. Raw data coming out of SKA-Low will be 2 Pb per second – dramatically exceeding the entire internet data rate. That data goes from our fibre network into data processing, all on-site, with electronics heavily shielded to protect the telescopes from interference. Coming out from there, about 5 Tb of data per second will be transferred to supercomputing facilities off-site, which is pretty much equivalent to the output generated by SKA-Mid in South Africa. From that point the data will flow into supercomputers for on-the-fly calibration and processing, emerging as “science-ready” data. It all flows into what we call the SKA Regional Centre network – basically supercomputers dotted around the globe, very much like the Worldwide LHC Computing Grid. By piping the data out to the network of regional centres at a rate of 100 Gb per second, we expect to see around 350 Pb per year of science data from each telescope.
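A rough back-of-the-envelope check, not part of the interview, shows that these numbers hang together: a link sustained at 100 Gb per second delivers

\[
100~{\rm Gb\,s^{-1}} \times 3.15\times10^{7}~{\rm s\,yr^{-1}} \approx 3.2\times10^{9}~{\rm Gb\,yr^{-1}} = 4\times10^{8}~{\rm GB\,yr^{-1}} \approx 390~{\rm PB\,yr^{-1}},
\]

so roughly 350 Pb of science data per telescope per year corresponds to running such a link near capacity for most of the year.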
And you’ve been collaborating with CERN on the SKA data challenge?
Very much so. We signed a memorandum of understanding three years ago, essentially to learn how CERN distributes its data and how its processing systems work. There are things we were able to share too, as the SKA will have to process a larger amount of data than even the High-Luminosity LHC will produce. Recently we have entered into a further, broader collaboration with CERN, GÉANT and PRACE [the Partnership for Advanced Computing in Europe] to look at the collaborative use of supercomputer centres in Europe.
SKAO’s organisational model also appears to have much in common with CERN’s?
If you were to look at the text of our treaty you would see its antecedents in those of CERN and ESO (the European Southern Observatory). We are an intergovernmental organisation with a treaty and a convention signed in Rome in March 2019. Right now, we’ve got seven members who have ratified the convention, which was enough for us to kick off the observatory, and we’ve got countries like France, Spain and Switzerland on the road to accession. Other countries like India, Sweden, Canada and Germany are also following their internal processes and we expect them to join the observatory as full members in the months to come; Japan and South Korea are observers on the SKAO council at this stage. Unlike CERN, we don’t link member contributions directly to gross domestic product (GDP) – one reason being the huge disparity in GDP amongst our member states. We looked at a number of models and none of them were satisfactory, so in the end we invented something that we use as a starting point for negotiation and that’s a proxy for the scientific capacity within countries. It’s actually the number of scientists that an individual country has who are members of the International Astronomical Union. For most of our members it correlates pretty well with GDP.
Is there a sufficient volume of contracts for industries across the participating nations?
Absolutely. The SKA antennas, dishes and front-ends are essentially evolutions of existing designs. It’s the digital hardware and especially the software where there are huge innovations with the SKA. We have started a contracting process with every country and they’re guaranteed to get at least 70% of their investment in the construction funds back. The SKAO budget for the first 10 years – which includes the construction of the telescopes, the salaries of observatory staff and the start of first operations – is €2 billion. The actual telescope itself costs around €1.2 billion.
Why did it take 30 years for the SKA project to be approved?
Back in the late 1980s/early 1990s, radio astronomers were looking ahead to the next big questions. The first mention of what we call the SKA was at a conference in Albuquerque, New Mexico, celebrating the 10th anniversary of the Very Large Array, which is still a state-of-the-art radio telescope. A colleague pulled together discussions and wrote a paper proposing the “Hydrogen Array”. It was clear we would need approximately one square kilometre of collecting area, which meant there had to be a lot of innovation in the telescopes to keep things affordable. A lot of the early design work was funded by the European Commission and we formed an international steering committee to coordinate the effort. But it wasn’t until 2011 that the SKA Organisation was formed, allowing us to go out and raise the money, put the organisational structure in place, confirm the locations, formalise the detailed design and then go and build the telescopes. There was a lot of exploration surrounding the details of the intergovernmental organisation – at one point we were discussing joining ESO.
Building the SKA 10 years earlier would have been extremely difficult, however. One reason is that we would have missed out on the big-data technology and innovation revolution. Another relates to the cost of power in these remote regions: SKA’s Western Australia site is 200 km from the nearest power grid, so we are powering things with photovoltaics and batteries, the cost of which has dropped dramatically in the past five years.
What are the key ingredients for the successful management of large science projects?
One has to have a diplomatic manner. We’ve got 16 countries involved, all the way from China to Canada and in both hemispheres, and you have to work closely with colleagues and diverse people all the way up to ministerial level. Making sure the connections with governments are solid, and having the right contacts, is key. We also put a lot of effort into conveying the societal impact of the SKA. Just as CERN invented the web, Wi-Fi came out of radio astronomy, as did a lot of medical imaging technology, and we have been working hard to identify future knowledge-transfer areas.
It also would have been much harder if I did not have a radio-astronomy background, because a lot of what I had to do in the early days was to rely on a network of radio-astronomy contacts around the world to sign up for the SKA and to lobby their governments. While I have no immediate plans to step aside, I think 10 or 12 years is a healthy period for a senior role. When the SKAO council begins the search for my successor, I do hope they recognise the need to have at least an astronomer, if not a radio astronomer.
I look at science as an interlinked ecosystem
Finally, it is critical to have the right team, because projects like this are too large to keep in one person’s head. The team I have is the best I’ve ever worked with. It’s a fantastic effort to make all this a reality.
What are the long-term operational plans for the SKA?
The SKA is expected to operate for around 50 years, and our science case is built around this long-term aspiration. In our first phase, construction of which has started and should end in 2028/2029, we will have just under 200 dishes in South Africa, whereas we’d like eventually to have up to 2500 dishes there. Similarly, in Western Australia we have a goal of up to a million low-frequency antennas, eight times the size of what we’re building now. Fifty years is somewhat arbitrary, and there are not yet any funded plans for such an expansion, but the dishes and antennas themselves will easily last that long. The electronics are a different matter: the Lovell Telescope, which I can see outside my window here at SKAO HQ, is still an active science instrument after 65 years because the electronics inside are state of the art. In terms of its collecting area, it is still the third-largest steerable dish on Earth!
How do you see the future of big science more generally?
If there is a bright side to the COVID-19 pandemic, it is that it has forced governments to recognise how critical science and expert knowledge are to survival, and hopefully that has translated into more realism regarding climate change, for example. I look at science as an interlinked ecosystem: the hard sciences like physics build infrastructures designed to answer fundamental questions and produce technological impact, but they also train science graduates who enter other areas. The SKAO governments recognise the benefits of what South African colleagues call human capital development: scientists and engineers who are inspired by and develop through these big projects will diffuse into industry and impact other areas of society. My experience of the senior civil servants that I have come across tells me that they understand this link.
Describing itself as a big-data graph-analytics start-up, gluoNNet seeks to bring data-analysis techniques from CERN into “real-life” applications. Just two years old, the 12-strong firm based in Geneva and London has already helped clients with decision-making by simplifying publicly available datasets. With studies predicting that in three to four years almost 80% of data and analytics innovations may come from graph technologies, the physicist-led team aims to be the “R&D department” for medium-sized companies and to help them evaluate massive volumes of data in a matter of minutes.
gluoNNet co-founder and president Daniel Dobos, an honorary researcher at Lancaster University, first joined CERN in 2002, focusing on diamond and silicon detectors for the ATLAS experiment. A passion for sharing technology with a wider audience soon led him to collaborate with organisations and institutes outside the field. In 2016 he became head of foresight and futures for the United Nations-hosted Global Humanitarian Lab, which strives to bring up-to-date technology to countries across the world. He and co-founder Karolos Potamianos, a fellow ATLAS collaborator and Ernest Rutherford Fellow at the University of Oxford, have been working together on non-physics projects since 2014. An example is THE Port Association, which organises in-person and online events together with CERN IdeaSquare and other partners, including “humanitarian hackathons”.
CERN’s understanding of big data is different to others’
Daniel Dobos
gluoNNet was a natural next step to bring data analysis from high-energy physics into broader applications. It began as a non-profit, with most work being non-commercial and helping non-governmental organisations (NGOs). Working with UNICEF, for example, gluoNNet tracked countries’ financial transactions on fighting child violence to see if governments were standing by their commitments. “Our analysis even made one country – which was already one of the top donors – double their contribution, after being embarrassed by how little was actually being spent,” says Dobos.
But Dobos was quick to realise that for gluoNNet to become sustainable it had to incorporate, which it did in 2020. “We wanted to take on jobs that were more impactful; however, they were also more expensive.” A second base was then added in the UK, which enabled more ambitious projects to be taken on.
Tracking flights
One project arose from an encounter at CERN IdeaSquare. The former head of security of a major European airline had visited CERN and noticed the particle-tracking technology as well as the international and collaborative environment; he believed something similar was needed in the aviation industry. During the visit a lively discussion emerged about the similarities between data in aviation and particle tracking. This person later joined the Civil Aviation Administration of Kazakhstan, which gluoNNet now works with to create a holistic overview of global air traffic (see image above). “We were looking for regulatory, safety and ecological misbehaviour, and trying to find out why some airplanes are spending more time in the air than they were expected to,” says Kristiane Novotny, a theoretical physicist who wrote her PhD thesis at CERN and is now a lead data scientist at gluoNNet. “If we can find out why, we can help reduce flight times, and therefore also carbon-dioxide emissions.”
Using experience acquired at CERN in processing enormous amounts of data, gluoNNet’s data-mining and machine-learning algorithms benefit from the same attitude as that at CERN, explains Dobos. “CERN’s understanding of big data is different to others’. For some companies, what doesn’t fit in an Excel sheet is considered ‘big data’, whereas at CERN this is minuscule.” Therefore, it is no accident that most in the team are CERN alumni. “We need people who have the CERN spirit,” he states. “If you tell people at CERN that we want to get to Mars by tomorrow, they will get on and think about how to get there, rather than shutting down the idea.”
Though it’s still early days for gluoNNet, the team is undertaking R&D to take things to the next level. Working with CERN openlab and the Middle East Technical University’s Application and Research Center for Space and Accelerator Technologies, for example, gluoNNet is exploring the application of quantum-computing algorithms (namely quantum-graph neural networks) for particle-track reconstruction, as well as industrial applications, such as the analysis of aviation data. Another R&D effort, which originated at the Pan European Quantum Internet Hackathon 2019, aims to make use of quantum key distribution to achieve a secure VPN (virtual private network) connection.
One of gluoNNet’s main future projects is a platform that can provide an interconnected system for analysts and decision makers at companies. The platform would allow large amounts of data to be uploaded and presented clearly, with Dobos explaining: “Companies have meetings with data analysts back and forth for weeks on decisions; this could be a place that shortens these decisions to minutes. Large technology companies are starting to put these platforms in place, but they are out of reach for small and medium-sized companies that can’t develop such frameworks internally.”
The vast amounts of data we have available today hold invaluable insights for governments, companies, NGOs and individuals, says Potamianos. “Most of the time only a fraction of the actual information is considered, missing out on relationships, dynamics and intricacies that data could reveal. With gluoNNet, we aim to help stakeholders that don’t have in-house expertise in advanced data processing and visualisation technologies to get insights from their data, making its complexity irrelevant to decision makers.”
Helmut Weber, CERN director of administration from 1992 to 1994, passed away on 16 July. Born in 1947, he obtained his PhD from the Technical University of Vienna, after which he pursued a rapidly ascending career in the aerospace industry, where he acquired considerable managerial proficiency. Prior to joining CERN, Helmut had been chairman of the board of directors of Skyline Products (US) and a member of the board of directors of the ERC (France).
Helmut played a significant role during CERN’s transition from the LEP era to the LHC project. During his three-year appointment, as successor to Georges Vianès and predecessor to Maurice Robin, he was able to implement many necessary improvements to the CERN administration. Examples include the reorganisation of the finance division (split into procurement and accounting divisions) and the creation of a CERN-wide working group to standardise administrative procedures using a common online database. He also resolved a number of looming issues carried forward from the LEP era, such as the debt to the CERN Pension Fund and the financial claims made by the Euro–LEP consortium.
Furthermore, together with Meinhard Regler and the active support of CERN (including Kurt Hübner and Philip Bryant), Helmut promoted AUSTRON, a project proposal for a pulsed high-flux neutron spallation source as an international research centre for central Europe. Although this project could unfortunately not be realised due to lack of funding, the MedAustron facility for proton/ion therapy and research was eventually built as an alternative in Wiener Neustadt. It is now fully operational, serving as a successful example of technology transfer from elementary particle physics to medical applications.
Helmut Weber’s most important legacy is, however, his straightforward, uncompromising and honest character, which helped to resolve many contentious internal issues at CERN. By the time he left the organisation he had made many friends amongst his former colleagues, who will always remember him and miss him.
Norwegian experimental particle physicist Egil Sigurd Lillestøl passed away in Valence, France, on 27 September. He will be remembered as a passionate colleague with exceptional communication and teaching skills, and a friend with many personal interests. He was able to explain the most complex systems and mechanisms in physics so that even the layperson felt they understood it.
Egil Lillestøl obtained his PhD from the University of Bergen in 1970, by which time he had already spent three years (1964–1967) as a fellow at CERN. He was appointed associate professor at his alma mater the same year, and then left for Paris in 1973 to work as a guest researcher at the Collège de France. In 1984 Lillestøl was appointed full professor in experimental particle physics in Bergen, where he played a central role in the PLUTO collaboration at DESY, and later in DELPHI and ATLAS at CERN.
Over time, CERN became Lillestøl’s main laboratory, first as a paid associate, later as a guest professor and eventually as a staff member, contributing to the management of the experimental programme and significantly improving the conditions for the visiting scientists at the laboratory.
In Norway he acted as national coordinator of CERN activities in preparation for the LHC. He was instrumental in the organisation of the community and discussions of future funding models at the national level, in particular to accommodate the long-term commitments needed for the ATLAS and ALICE construction projects.
Egil Lillestøl played a pivotal role in the CERN Schools of Physics from 1992 until 2009, relaunching the European School of High-Energy Physics as an annual event organised in collaboration with JINR, and establishing a new biennial series of schools in Latin America from 2001. He worked tirelessly on preparations for each event, in collaboration with local organisers in each host country, as well as on-site during the two-week-long events.
The Latin-American schools were an important element in increasing the involvement of scientists and institutes from the region in the CERN experimental programme, for which he deserves much credit. Beyond his official duties, he took great pleasure in interacting with the participants of the schools during their free time, and in the evenings he could often be found playing piano to accompany their singing.
As a founding member of the International Thorium Energy Committee, Lillestøl was a strong proponent for thorium-based nuclear power. He was also one of the main drivers behind the UNESCO-supported travelling exhibition “Science bringing nations together”, organised jointly by JINR and CERN.
As a teacher and a lecturer, Lillestøl was a role model. He always tailored his presentations to match the audience. His coffee-table book The Search for Infinity, co-authored with Gordon Fraser and Inge Sellevåg, became a bestseller and has been published in nine language editions.
Egil Lillestøl was a bon viveur who spread joy around him. He had an impressive repertoire of anecdotes, including topics such as how to cold-smoke salmon. He enjoyed sports and was active in the CERN clubs for cycling, skiing and sailing. He leaves behind his wife and former colleague, Danielle, and two adult children from his first marriage.
Simon Eidelman, a leading researcher at the Budker Institute of Nuclear Physics in Novosibirsk, Russia, and a professor of Novosibirsk State University (NSU), passed away on 28 June.
He was a key member of experimental collaborations at Novosibirsk, CERN and KEK, and a leading author in the Particle Data Group. Eidelman served the high-energy physics community in a variety of ways, including as Novosibirsk’s correspondent for this magazine for more than 20 years.
Simon (Semyon) Eidelman was born in Odessa in 1948. He went to Novosibirsk aged 15 to participate in a national mathematics Olympiad, and ended up staying to attend a special high school for extraordinarily gifted students. He then studied physics at NSU. Even before graduating, in 1968 Simon joined the Budker Institute and remained there his entire professional life. In parallel, he was a faculty member at NSU and held the high-energy physics chair for 10 years. Simon always cared for, helped and supported students and young colleagues.
Meson expert
Eidelman’s scientific activity mostly concerned experiments at e+e– colliders, beginning with participation in the discovery of multi-hadron events at the pioneering VEPP-2 collider.
In 1974 he moved to experiments with the OLYA detector at the upgraded VEPP-2M, where a comprehensive study of e+e– annihilation into hadrons was performed up to an energy of 1.4 GeV. Later, this detector was moved to the VEPP-4 collider, where high-precision measurements of the J/ψ and ψʹ masses were performed. Simon’s work at VEPP-2 and VEPP-4, and the analysis of the so-called box anomaly, made him one of the world’s leading experts on vector mesons. Together with Lery Kurdadze and Arkady Vainshtein, he also performed the first comparison of QCD sum rules with experiment.
Simon became one of the pioneers in the evaluation of the hadronic contribution to the anomalous magnetic moment of the muon
Simon was a key member of several major experimental collaborations: KEDR, CMD-2 and CMD-3 at Novosibirsk, LHCb at CERN and Belle, Belle II and g-2/EDM at J-PARC. Recently he contributed to the KLF proposal at JLab to build a secondary beam of neutral kaons to be used with the GlueX setup for strange-hadron spectroscopy. Just last year he proposed to measure the charged kaon mass with unprecedented precision using the Siddharta X-ray experiment at DAΦNE in Frascati – which would have yielded a dramatic improvement on determinations of the masses of charmonium-like exotic mesons.
Thanks to his deep understanding of hadron-production cross sections, Simon became one of the pioneers in the evaluation of the hadronic contribution to the anomalous magnetic moment of the muon, g-2. He was a founding member of the Muon g-2 Theory Initiative and a key contributor to its first white paper, published last year, which provides the community consensus for the Standard Model prediction. He was also an authority on strongly interacting hadrons and resonances, as well as the τ lepton and two-photon physics.
Simon was a key author in the international Particle Data Group (PDG) for 30 years, leading the PDG subgroup responsible for meson resonances since 2006. In recognition of his contributions, he was chosen to be the first author of the 2004 edition of the Review of Particle Physics. He was also a great source of inspiration for the Quarkonium Working Group (QWG). Attendees of the QWG workshops will remember his lucid presentations, his great enthusiasm for research and his keen scientific insights. Moreover, he was greatly appreciated for his wisdom and calm counsel during intense discussions.
Superb editor
Thanks to his deep knowledge and wide scientific horizons, combined with a wonderful sense of humour and a kind and friendly nature, Simon possessed a unique ability to galvanise colleagues into joint projects within many international collaborations and meetings. He was also deeply engaged in training the next generations of physicists, most recently being the driving force behind the school on muon g-2.
Simon was also a superb scientific editor. He had a rare gift of formulating scientific problems and results clearly and concisely, providing an invaluable contribution to the very large number of papers that he authored, co-authored and refereed. Several international meetings have been dedicated to Simon’s memory, including CHARM 2021 and the 4th Plenary Workshop of the Theory Initiative.
We have lost a remarkable physicist, and a dear and kind person. All who had the privilege of knowing and working with Simon Eidelman will always remember him as an invaluable colleague.
Year after year, particle physicists celebrate the luminosity records established at accelerators around the world. On 15 June 2020, for example, a new world record for the highest luminosity at a particle collider was claimed by SuperKEKB at the KEK laboratory in Tsukuba, Japan. Electron–positron collisions at the 3 km-circumference machine had reached an instantaneous luminosity of 2.22 × 10^34 cm^–2 s^–1 – surpassing the 27 km-circumference LHC’s record of 2.14 × 10^34 cm^–2 s^–1 set with proton–proton collisions in 2018. Within a year, SuperKEKB had celebrated a new record of 3.1 × 10^34 cm^–2 s^–1 (CERN Courier September/October 2021 p8).
Beyond the setting of new records, precise knowledge of the luminosity at particle colliders is vital for physics analyses. Luminosity is our “standard candle” in determining how many particles can be squeezed through a given space (per square centimetre) at a given time (per second); the more particles we can squeeze into a given space, the more likely they are to collide, and the quicker the experiments fill up their tapes with data. Multiplied by the cross section, the luminosity gives the rate at which physicists can expect a given process to happen, which is vital for searches for new phenomena and precision measurements alike. Luminosity milestones therefore mark the dawn of new eras, like the B-hadron or top-quark factories at SuperKEKB and LHC (see “High-energy data” figure). But what ensures we didn’t make an accidental blunder in calculating these luminosity record values?
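In formulae – the standard definitions, spelled out here for clarity rather than quoted from any one experiment – the instantaneous luminosity $\mathcal{L}$ relates the rate of a given process to its cross section $\sigma$, and its time integral fixes the total number of expected events:

\[
\frac{{\rm d}N}{{\rm d}t} = \mathcal{L}\,\sigma, \qquad N = \sigma \int \mathcal{L}\,{\rm d}t,
\]

with $\mathcal{L}$ measured in cm$^{-2}$ s$^{-1}$ and $\sigma$ in cm$^{2}$ (or, more conveniently, in barns, where 1 b = 10$^{-24}$ cm$^{2}$), so that integrated luminosities are usually quoted in inverse femtobarns.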
Physics focus
Physicists working at the precision frontier need to infer, with percent-level accuracy or better, how many collisions were delivered to produce a given event rate. Even though particles are produced at an unprecedented rate at the LHC, the cross sections of the processes that could serve as a reference are either too small (as in the case of Higgs-boson production) or affected by too much theoretical uncertainty (as in the case of Z-boson and top-quark production) to establish the primary event rate with a high level of confidence. The solution comes down to extracting one universal number: the absolute luminosity.
The fundamental difference between quantum electrodynamics (QED) and chromodynamics (QCD) influences how luminosity is measured at different types of colliders. On the one hand, QED provides a straightforward path to high precision because the absolute rate of simple final states is calculable to very high accuracy. On the other, the complexity of QCD calculations shapes the luminosity determination at hadron colliders. In principle, the luminosity can be inferred by measuring the total number of interactions occurring in the experiment (i.e. the inelastic cross section) and normalising to the theoretical QCD prediction. This technique was used at the SppS and Tevatron colliders. A second technique, proposed by Simon van der Meer at the ISR (and generalised by Carlo Rubbia to the proton–antiproton case), could not be applied to such single-ring colliders. However, this van der Meer-scan method is a natural choice at the double-ring RHIC and LHC colliders, and is described below.
The LHC-experiment collaborations perform a precise luminosity inference from data (“absolute calibration”) by relating the collision rate recorded by the subdetectors to the luminosity of the beams. With the advent of multiple collisions per bunch crossing (“pileup”) and of intense collision-induced radiation, which acts as a background source, dedicated luminosity-sensitive detector systems called luminometers also had to be developed (see “Luminometers” figure). To maximise the precision of the absolute calibration, beams with large transverse dimensions and relatively low intensities are delivered by the LHC operators during a dedicated machine preparatory session, usually held once a year and lasting for several hours. During these unconventional sessions, called van der Meer beam-separation scans, the beams are carefully displaced with respect to each other in discrete steps, horizontally and vertically, while the collision rate is observed in the luminometers (see “Closing in” figure). This allows the effective width and height of the two-dimensional interaction region, and thus the beams’ transverse size, to be measured. Sources of systematic uncertainty are either common to all experiments and estimated in situ – for example residual differences between the measured beam positions and those provided by the operational settings of the LHC magnets – or depend on the scatter between luminometers. A major challenge with this technique is therefore to ensure that the absolute calibration extracted under the specialised van der Meer conditions remains valid when the LHC operates at nominal pileup (see “Stability shines” figure).
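For head-on collisions of bunches with roughly Gaussian profiles, the luminosity obtained from such a scan takes the standard form (a textbook expression, given here for orientation rather than quoted from the LHC analyses):

\[
\mathcal{L} = \frac{n_{b}\, f_{\rm rev}\, N_{1} N_{2}}{2\pi\, \Sigma_{x}\, \Sigma_{y}},
\]

where $n_{b}$ is the number of colliding bunch pairs, $f_{\rm rev}$ the revolution frequency, $N_{1}$ and $N_{2}$ the bunch populations, and $\Sigma_{x}$ and $\Sigma_{y}$ the effective widths of the interaction region measured in the horizontal and vertical scans.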
Stepwise approach
Using such a stepwise approach, the CMS collaboration obtained a total systematic uncertainty of 1.2% in the luminosity estimate (36.3 fb–1) of proton–proton collisions in 2016 – one of the most precise luminosity measurements ever made at bunched-beam hadron colliders. Recently, taking into account correlations between the years 2015–2018, CMS further improved on its preliminary estimate for the proton–proton luminosity at higher collision energies of 13 TeV. The full Run-2 data sample corresponds to a cumulative (“integrated”) luminosity of 140 fb–1 with a total uncertainty of 1.6%, which is comparable to the preliminary estimate from the ATLAS experiment.
In the coming years, in particular when the High-Luminosity LHC (HL-LHC) comes online, a similarly precise luminosity calibration will become increasingly important as the LHC pushes the precision frontier further. Under those conditions, which are expected to produce 3000 fb–1 of proton–proton data by the end of LHC operations in the late 2030s (see “Precision frontier” figure), the impact of at least some of the sources of uncertainty will be larger due to the high pileup. However, they can be mitigated using techniques already established in Run 2 or currently being deployed. Overall, the strategy for the HL-LHC should combine three different elements: maintenance and upgrades of existing detectors; development of new detectors; and adding dedicated readouts to other planned subdetectors for luminosity and beam-monitoring data. This will allow us to meet the tight luminosity performance target (≤ 1%) while maintaining a good diversity of luminometers.
Given that accurate knowledge of luminosity is a key ingredient of most physics analyses, experiments also release precision estimates for specialised data sets, for example using either proton–proton collisions at lower centre-of-mass energies or nuclear collisions at different per-nucleon centre-of-mass energies, as needed by ALICE but also by the ATLAS, CMS and LHCb experiments. On top of the van der Meer method, the LHCb collaboration uniquely employs a “beam-gas imaging” technique in which vertices of interactions between beam particles and gas nuclei in the beam vacuum are used to measure the transverse size of the beams without the need to displace them. In all cases, and despite the fact that the experiments are located at different interaction points, their luminosity-related data are used in combination with input from the LHC beam instrumentation. Close collaboration among the experiments and LHC operators is therefore a key prerequisite for precise luminosity determination.
Protons versus electrons
Contrary to the approach at hadron colliders, the operation of the SuperKEKB accelerator with electron–positron collisions allows for an even more precise luminosity determination. Using well-known QED processes, the Belle II experiment recently reported a precision of 0.7% for data collected during April–July 2018. Though electrons and positrons give the SuperKEKB team a conceptually easier task, its new record for the highest luminosity achieved at a collider is thus well established.
SuperKEKB’s record is achieved thanks to a novel “crabbed waist” scheme, originally proposed by accelerator physicist Pantaleo Raimondi. In the coming years this will enable the luminosity of SuperKEKB to be increased by a factor of almost 30, to reach its design target of 8 × 10^35 cm^–2 s^–1. The crabbed-waist scheme, which works by squeezing the vertical height of the beams at the interaction point, is also envisaged for the proposed Future Circular Collider (FCC-ee) at CERN. It differs from the “crab-crossing” technology, based on special radiofrequency cavities, which is now being implemented at CERN for the high-luminosity phase of the LHC. While the LHC has passed the luminosity crown to SuperKEKB, novel techniques and the precise evaluation of their outcome continue to push forward both the accelerator and related physics frontiers.
At the heart of the ITER fusion experiment is an 18 m-tall, 1000-tonne superconducting solenoid – the largest ever built. Its 13 T field will induce a 15 MA plasma current inside the ITER tokamak, initiating a heating process that will ultimately enable self-sustaining fusion reactions. Like all things ITER, the scale and power of the central solenoid are unprecedented. Fabrication of its six niobium–tin modules began nearly 10 years ago at a purpose-built General Atomics facility in California. The first module left the factory on 21 June and, after travelling more than 2400 km by road and then crossing the Atlantic, the 110-tonne component arrived at the ITER construction site in southern France on 9 September. During a small ceremony marking the occasion, the director of engineering and projects for General Atomics described the job as “among the largest, most complex and demanding magnet programmes ever undertaken” and “the most important and significant project of our careers”.
The US is one of seven ITER members, along with China, the European Union, India, Japan, Korea and Russia, who ratified an international agreement in 2007. Each member shares in the cost of project construction, operation and decommissioning, and also in the experimental results and any intellectual property. Europe is responsible for the largest portion of construction costs (45.6%), with the remainder shared equally by the other members. Mirroring the successful model of collider experiments at CERN, the majority (85%) of ITER-member contributions are to be delivered in the form of completed components, systems or buildings – representing untold hours of highly skilled work both in the member states and at the ITER site.
First plasma
Assembly of the tokamak, which got under way in 2020, marks an advance to a crucial new phase for the ITER project. Production of its 18 D-shaped coils that provide the toroidal magnetic field, each 17 m high and weighing 350 tonnes, is in full swing, while its circular poloidal coils are close to completion. The remaining solenoid modules and all other major tokamak components are scheduled to be on site by mid-2023. Despite the impact of the global pandemic, the ITER teams are working towards the baseline target for “first plasma” by the end of 2025, with more than 2000 persons on site each day.
ITER’s purpose is to demonstrate the scientific and technological feasibility of fusion power for peaceful purposes. Key objectives are defined for this demonstration, namely: production of 500 MW of fusion power with a ratio of fusion power to input heating power (Q) of at least 10 for at least 300 seconds, and sustainment of fusion power with Q = 5 consistent with steady-state operation. The key factor in reaching these objectives is the world’s largest tokamak, a concept whose name comes from a Russian acronym roughly translated as “toroidal chamber with magnetic coils”. This could also describe CERN’s Large Hadron Collider (LHC), but as we will see, the two magnetic confinement schemes are significantly different.
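The fusion gain Q mentioned above is simply the ratio of fusion power to externally supplied heating power, so the headline goal translates directly into a heating budget (the numbers below merely restate the targets quoted in the text):

\[
Q = \frac{P_{\rm fusion}}{P_{\rm heating}}, \qquad P_{\rm fusion} = 500~{\rm MW},\ Q \geq 10 \;\Rightarrow\; P_{\rm heating} \leq 50~{\rm MW}.
\]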
Among the largest, most complex and demanding magnet programmes ever undertaken
ITER chose deuterium and tritium (heavier variants of ordinary hydrogen) as its fuel because the D–T cross-section is the highest of all known fusion reactions. However, the energy at which the cross-section is maximal (~65 keV) is equivalent to almost one billion degrees. As a result, the fuel, introduced as a gas, will not remain in gaseous form but will be in the plasma state, broken down into its electrically charged components (ions and electrons). As in the LHC, the electric charge makes it possible to hold the ions and electrons in place using magnetic fields generated by electromagnets – in both cases superconducting magnets held at temperatures near absolute zero to avoid massive electrical consumption.
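The reaction itself – a well-established piece of nuclear physics rather than anything specific to ITER – releases 17.6 MeV per fusion event, shared between an alpha particle, which remains in the plasma and provides the self-heating, and a neutron, which carries most of the energy out to the surrounding structures:

\[
{\rm D} + {\rm T} \rightarrow {}^{4}{\rm He}~(3.5~{\rm MeV}) + n~(14.1~{\rm MeV}).
\]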
A simple picture of how the magnets in ITER work together to confine a plasma with temperatures greater than 100 million degrees begins with the toroidal field coils (see “Trapping a plasma” figure). Eighteen of these are arranged to make a magnetic field whose circular field lines are centred on a vertical axis. Charged particles, to the crudest approximation, follow the magnetic field, so it would seem that the problem of confining them is solved. At the next level of approximation, however, the charged particles actually make small “gyro-orbits”, like beads on a wire. This introduces a difficulty, because the “gyroradius” of these orbits depends on the strength of the magnetic field, and the toroidal field increases in strength closer to the vertical axis defining its centre. The gyroradius is therefore smaller on the inner part of each orbit, which leads to a vertical drift of the charged particles. Since the direction of this drift depends on the sign of the charge, ions and electrons move away from each other. This creates a vertical electric field which, combined with the toroidal field, rapidly expels charged particles radially outward – eliminating confinement! In the 1950s, the Russian physicists Tamm and Sakharov proposed that a current flowing in the plasma in the toroidal direction would generate a net helical field, and that charged particles flowing along the total field would short out the electric field, restoring confinement. This was the invention of the tokamak magnetic-confinement concept.
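The drift argument can be made quantitative with two textbook plasma-physics formulae (standard results, added here for illustration): the gyroradius, which shrinks where the field is stronger, and the resulting grad-B drift velocity, whose direction depends on the sign of the charge,

\[
r_{g} = \frac{m v_{\perp}}{|q| B}, \qquad \mathbf{v}_{\nabla B} = \frac{m v_{\perp}^{2}}{2 q B^{3}}\, \mathbf{B} \times \nabla B,
\]

so in a purely toroidal field ions and electrons drift vertically in opposite directions – exactly the charge separation that the plasma current of a tokamak is designed to short out.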
Magnetic configuration
In ITER, this current is generated by the powerful central solenoid, aligned on the vertical line at the centre of the toroidal field. It acts as the primary winding of a transformer, with the plasma as the secondary. There remains one more issue to address, again with magnets. The pressure and current in the plasma result in a force that tries to push the plasma further from the vertical line at the centre. To counter this force in ITER, six “poloidal field” coils are aligned – again about the vertical centreline – to generate vertical fields that push the plasma back toward the vertical line and also shape the plasma in ways that enhance its performance. A number of correction coils will complete ITER’s complex magnetic configuration, which will demonstrate the deployment of Nb3Sn conductor – the same as is being implemented for high-field accelerator magnets at the High-Luminosity LHC and as proposed for future colliders – on a massive scale. CERN signed a collaboration agreement with ITER in 2008 concerning the design of high-temperature superconducting current leads and other magnet technologies, and acted as one of the “reference” laboratories for testing ITER’s superconducting strands.
Despite the pandemic disrupting production and transport, the first step of ITER’s tokamak assembly sequence – the installation of the base of the cryostat into the tokamak bioshield – was achieved in May 2020. The ITER cryostat, which must be made of non-magnetic stainless steel, will keep the entire (30 m diameter by 30 m high) tokamak assembly at the low temperatures necessary for the magnets to function. It comes in four pieces (base, lower and upper cylinders, and lid) that are welded together in the tokamak building. At 1250 tonnes, the cryostat-base lift was the heaviest of the entire assembly sequence, its successful completion officially starting the assembly sequence (see “Heavy lifting” image). Later in 2020, the lower cylinder was then installed and welded to the base.
Bottle up
With the “bottle” to hold the tokamak placed in position, installation of the electromagnets could begin. The two poloidal field coils at the bottom of the tokamak, PF6 and PF5, had to be installed first. PF6 was placed inside the cryostat earlier this year (see “Poloidal descent” image), while the second was lifted into place this September. The next big milestone is the assembly and installation of the first “sector” of the tokamak. The vacuum vessel in which the fusion plasma is made is divided into nine equal sectors (like the slices of an orange), due to limitations on the lifting capacity and to facilitate parallel fabrication of these large objects. Each sector of the vacuum vessel (see “Monster moves” image) has two toroidal field coils associated with it.
In August, the first vacuum-vessel sector and its associated thermal shields were assembled together with its two toroidal field coils on the sector sub-assembly tool for the first time (see “Shaping up” image). Once joined into a single unit, it will be installed in the cryostat in late 2021. The second vacuum-vessel sector arrived on site in August and will be assembled with the two associated toroidal-field coils already on site, with a target of installing the completed unit in the cryostat early in 2022. Sector components are scheduled to arrive, be put together, and then be installed in the cryostat and welded together in assembly-line fashion, with the closure of the vacuum vessel scheduled for the end of 2023. The six central-solenoid modules are also to be assembled outside the cryostat into a single structure and installed in the cryostat shortly before closure. Following the arrival of the first module this summer, the second is complete and ready for shipping. Of the remaining four niobium–titanium poloidal field magnets, three are being fabricated on-site because they are too large to transport by road, and all four are in advanced stages of production.
Of course, there is more to ITER than its tokamak. In parallel, work on the supporting plant is under way. Four large transformers, which draw the steady-state electrical power from the grid, have been in operation since early 2019, while the medium- and low-voltage load centres that power clients in the plant buildings have been turned over to the operations division. The secondary and tertiary cooling systems, the chilled water and demineralised water plants, and the compressed-air and breathable-air plants are also currently being commissioned. The three large transformers that connect the pulsed power supplies for the magnets and the plasma heating systems have been qualified for operation on the 400 kV grid. The next big steps are the start of functional testing of the cryoplant and the reactive power compensation at the end of this year, and of the magnet power supplies and the first plasma heating system early in 2022.
Perhaps the most common question one encounters when talking about ITER is: when will tokamak operations begin? Following the closure of the vacuum vessel in 2023, the current baseline schedule includes one year of installation work inside the cryostat before its closure, followed by integrated commissioning of the tokamak in 2025, culminating in “first plasma” by the end of 2025. By mandate from ITER’s governing body, the ITER Council, this schedule was put into place in 2016 as the “fastest technically achievable”, meaning no contingency. Clearly the pandemic has affected the ability to meet that schedule, but the actual impact is still not possible to determine accurately. The challenge in this assessment is that 85% of the ITER components are delivered as in-kind contributions from the ITER members, and the pandemic has affected and continues to affect the manufacturing work on items that take years to complete. The components now being installed were substantially complete at the onset of the pandemic, but even these deliveries have encountered difficulties due to the disruption of the global shipping industry. Component installation in the tokamak complex has also been affected by limited availability of components, goods and services. Whether recovery actions will be possible, or further restrictions arise, cannot be predicted with the needed accuracy today. In this light, the ITER Council has challenged us to make the best possible effort to maintain the baseline schedule, while preparing an assessment of the impact for consideration of a revised baseline schedule next year. The ITER Organization, the domestic agencies in the ITER members responsible for supplying in-kind components, and contractors and suppliers around the world are working together to meet this additional challenge.
What the future holds
ITER is expected to operate for 20 years, providing crucial information about both the science and the technology necessary for a fusion power plant. For the science, beyond the obvious interest in meeting ITER’s performance objectives, qualitative frontiers will be crossed in two essential areas of plasma physics. First, ITER will be the first “burning” plasma, in which the dominant heating power sustaining the fusion output comes from the fusion reactions themselves. Aspects of the relevant physics have been studied for many years, but ITER’s operating point places it in a fundamentally different regime from present experiments. The same is true of the second frontier: the handling of heat and particle exhaust. Our best simulation capabilities predict a qualitative difference between the ITER operating point and present experiments. This is also the first touch-point between the physics and the technology: the physics must enable the survival of the wall, while the wall must allow the plasma to reach the conditions needed for the fusion reactions. Other essential technologies, such as the means to make new fusion fuel (tritium), real-time recycling of the fuel in use, and remote handling for maintenance activities, will all be pioneered in ITER.
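As a rough quantitative guide (standard definitions, not figures taken from this article): the fusion gain Q compares the fusion power to the externally supplied heating power, and a plasma “burns” once the alpha particles born in the D–T reactions provide most of the heating.

```latex
% Fusion gain and the burning-plasma condition (schematic, standard
% textbook definitions rather than values quoted in the article).
\begin{align}
  Q &\equiv \frac{P_{\mathrm{fusion}}}{P_{\mathrm{aux}}} , \\
  P_\alpha &\simeq \tfrac{1}{5}\,P_{\mathrm{fusion}}
  \quad \text{(the 3.5 MeV alpha of the 17.6 MeV D--T yield stays in the plasma)} , \\
  \text{burning plasma:} \quad P_\alpha &\gtrsim P_{\mathrm{aux}}
  \;\Longleftrightarrow\; Q \gtrsim 5 .
\end{align}
```

ITER’s widely quoted goal of Q ≥ 10 (500 MW of fusion power from 50 MW of external heating) therefore sits well inside this self-heated regime, which no present experiment has reached.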
ITER will provide crucial information about both the science and technology necessary for a fusion power plant
While ITER will demonstrate the potential for fusion energy to become the dominant source of energy production, harnessing that potential requires demonstrating not just the scientific and technical capabilities but the economic feasibility too. The next steps along that path are true demonstration power plants – “DEMOs” in fusion jargon. ITER members are already exploring DEMO options, but no commitments have yet been made. The continuing advance of ITER is critical not just to motivate these next steps but also as a vision of a future in which the world is powered by an energy source with universally available fuel and no impact on the environment. What a tremendous gift that would be for future generations.
On 23 July, the great US theoretical physicist Steven Weinberg passed away in hospital in Austin, Texas, aged 88. He was a towering figure in the field, and made numerous seminal contributions to particle physics and cosmology that form part of the backbone of our current understanding of the fundamental laws of nature. He belongs to the select rank of scientists who, in the course of history, have radically changed the way we understand the universe and our place in it.
Weinberg was born in New York, the son of Jewish immigrants, Eve and Frederick Weinberg. He attended the Bronx High School of Science, where he met Sheldon Glashow, later to become his Harvard colleague and with whom he would share the 1979 Nobel Prize in Physics. By the end of high school, Weinberg was already set on becoming a theoretical physicist. He obtained his undergraduate degree at Cornell University in 1954, then spent a year doing graduate work at the Niels Bohr Institute in Copenhagen before returning to the US to complete his graduate studies at Princeton. His PhD advisor was Sam Treiman, and his thesis topic was the application of renormalisation theory to the effects of strong interactions in weak processes. Weinberg obtained his degree in 1957 and then spent two years at Columbia University. From 1959 to 1969 he was at Lawrence Berkeley Laboratory and later UC Berkeley, where he gained tenure in 1964. He spent periods on leave at Harvard (1966–1967) and MIT (1967–1969), becoming professor of physics at MIT (1969–1973) before moving to Harvard (1973–1983), where he succeeded Julian Schwinger as Higgins Professor of Physics. Weinberg joined the faculty of the University of Texas at Austin as the Josey Regental Professor of Physics in 1982, and remained there for the rest of his life.
Immense contributions
Perhaps his best-known contribution to physics is his formulation of electroweak unification in the context of gauge theories, using the Brout–Englert–Higgs mechanism of symmetry breaking to give mass to the W and Z bosons while sparing the photon (CERN Courier November 2017 p25). The names Glashow, Weinberg and Salam are forever associated with the spontaneously broken SU(2) × U(1) gauge theory, which unified the electromagnetic and weak interactions and provided a large number of predictions that have been experimentally confirmed. The most concise and elegant presentation of the theory appears in Weinberg’s famous 1967 paper “A Model of Leptons”, one of the most cited papers in the history of physics and a great example of clear science writing (CERN Courier November 2017 p31). At the time, the first family of quarks and leptons was known, but the second was incomplete. After a substantial amount of experimental and theoretical work, we now have the full formulation of the Standard Model (SM), describing our best knowledge of the fundamental laws of nature. It was a collective journey, starting with the discovery of the electron in 1897 and concluding with the discovery of the scalar particle of the SM (the Higgs boson) at CERN in 2012. Weinberg was deeply involved in the building of the SM before and beyond his 1967 paper.
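For readers who want the essence of the unification in formulas, here is a minimal textbook-style summary (standard tree-level relations of the SU(2) × U(1) theory, not Weinberg’s original notation): the photon and the Z boson emerge as orthogonal mixtures of the neutral gauge fields, governed by the weak mixing angle θW, which also ties together the couplings and the W and Z masses.

```latex
% Tree-level relations of the unified electroweak theory (schematic).
\begin{align}
  A_\mu &= \cos\theta_W\, B_\mu + \sin\theta_W\, W^3_\mu  && \text{(photon)} \\
  Z_\mu &= \cos\theta_W\, W^3_\mu - \sin\theta_W\, B_\mu  && \text{(Z boson)} \\
  e &= g\sin\theta_W = g'\cos\theta_W                     && \text{(electric charge)} \\
  m_W &= m_Z \cos\theta_W                                 && \text{(tree-level mass relation)}
\end{align}
```

Measuring θW in neutral-current processes thus predicted the W and Z masses years before their discovery at CERN in 1983 – one of the confirmed predictions mentioned above.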
Normal humans would need to live several lives to accomplish so much
It is impossible to do justice to all the scientific contributions of Weinberg’s career, but we can list a few of them. In the early 1960s he embarked on the study of symmetry breaking and wrote a seminal paper with Goldstone and Salam describing, in detail and in full generality, the mechanism of spontaneous symmetry breaking in the context of quantum field theory, providing sound foundations for the earlier discoveries of Nambu and Goldstone. Around the same time, he worked out the general structure of scattering amplitudes with the emission of arbitrary numbers of photons and gravitons. Remarkably, this work has played a very important role in the recent study of asymptotic symmetries in general relativity and gauge theories (for example, Bondi–Metzner–Sachs symmetries and their generalisations, and the general theory of Feynman amplitudes).
From jets to GUTs
Together with George Sterman, Weinberg started the study of jets in QCD, whose importance in modern high-energy experiments can hardly be exaggerated. He (and independently Frank Wilczek) realised that the Peccei–Quinn mechanism invoked to solve the strong-CP problem leaves a light pseudoscalar particle lurking in the background. This is the infamous axion, also a prime dark-matter candidate, whose experimental search has been actively pursued for decades. Weinberg was one of the pioneers in the formulation of effective field theories, which transformed the traditional approach to quantum field theory. He was the founder of chiral perturbation theory, and one of the initiators of relativistic quantum field theory at finite temperature and of asymptotic safety, which has been used in some approaches to quantum gravity. In 1979 he (and independently Leonard Susskind) introduced the notion of technicolour – an alternative to the Brout–Englert–Higgs mechanism in which the scalar particle of the SM appears as a composite state of new fermions – which some find more appealing, but which so far has little experimental support. Finally, we can mention his work on grand unification with Howard Georgi and Helen Quinn, in which they used the renormalisation group to understand in detail how a single coupling in the ultraviolet evolves in such a way that, in the infrared, it generates the coupling constants of the strong, weak and electromagnetic interactions.
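The last point can be made slightly more concrete with a standard one-loop formula (a schematic summary, not the authors’ own presentation): below the unification scale M_X the inverse couplings of the three gauge-group factors run linearly in the logarithm of the energy, so a single coupling at M_X fans out into the three different values measured at low energies.

```latex
% One-loop running of the three gauge couplings (schematic; the sign
% convention is absorbed into the beta-function coefficients b_i).
\begin{equation}
  \alpha_i^{-1}(\mu) \;=\; \alpha_i^{-1}(M_X) \;+\; \frac{b_i}{2\pi}\,
  \ln\!\frac{M_X}{\mu} , \qquad i = 1, 2, 3 ,
\end{equation}
% where mu is the energy scale, M_X the unification scale, and the b_i
% are the one-loop coefficients of the U(1), SU(2) and SU(3) factors.
```

Requiring the three trajectories to meet at a single point is what allowed Georgi, Quinn and Weinberg to estimate the unification scale, far beyond the reach of any accelerator.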
Astronomical arguments
Steven Weinberg also made profound contributions in his work on the cosmological constant. In 1989 he used astronomical arguments to show that the vacuum energy is many orders of magnitude smaller than would be expected from modern theories of elementary particles. His bound on its possible value, based on anthropic reasoning, is as deep as it is unsettling, and it agrees surprisingly well with the measured value inferred from observations of distant supernovae. It shatters Einstein’s dream of unification, expressed in his question of whether the Almighty had any choice in creating the universe. Anthropic reasoning opens the door to theories of the multiverse, which may also be considered inevitable in some versions of inflationary cosmology and in the theory of the string landscape of possible vacua for our universe. Among all the parameters of the current standard models of cosmology and particle physics, the question of which are environmental and which are fundamental then becomes meaningful. Some of their values may ultimately have only a purely statistical explanation based on anthropism. “It’s a depressing kind of solution to the problem,” remarked Weinberg recently in the Courier: “But as I’ve said: there are many conditions that we impose on the laws of nature, such as logical consistency, but we don’t have the right to impose the condition that the laws should be such that they make us happy!” (CERN Courier March/April 2021 p51). On the one hand, his work led to the unification of the weak and electromagnetic forces; on the other, the landscape of possibilities argues against a unique universe. The tension between the two points of view continues.
Weinberg also mastered the art of writing for non-experts. One of the most influential science books written for the general public is his masterpiece The First Three Minutes (1977), which provides a wonderful exposition of modern cosmology, the expansion of the universe, the cosmic microwave background radiation and Big Bang nucleosynthesis. Towards the end of the epilogue he formulated the famous statement that generated heated discussions with philosophers and theologians: “The more the universe seems comprehensible, the more it seems pointless.” In the next paragraph he tempers the coldness somewhat: “The effort to understand the universe is one of the very few things that lifts human life a little above the level of farce, and gives it some of the grace of tragedy.” But the implication that the laws of nature have no purpose remains as provocative as when it was first made. The debate will linger for a long time.
Controversies and passions
Weinberg’s non-technical books exhibit an extraordinary erudition in numerous subjects. His approach is original, thorough and always illuminating. He did not shy away from delicate and controversial discussions. Weinberg was a declared atheist, with a rather negative opinion of the influence of religion on human history and society, and he showed remarkable courage in being outspoken and engaging in public debates about it. Again in his 1977 book, he wrote: “Anything that we scientists can do to weaken the hold of religion should be done and may in the end be our greatest contribution to civilisation.” Needless to say, such statements raised hackles in some quarters. He was also a champion of scientific reductionism, something that was not very well received in many philosophical communities. He was clearly passionate about science and scientific principles, and about defending the search for truth. In Dreams of a Final Theory (1992) he described his fight to prevent the demise of the Superconducting Super Collider (SSC). His ardent and convincing arguments about the value of basic science, and its importance as a motor of economic and technological growth, were not enough to convince sufficient members of the House of Representatives, and the project was cancelled in 1993. It was a very hard blow to the US and global high-energy physics communities. The discussion had another great scientist on the other side: Phil Anderson, who passed away in 2020. It is not obvious whether Anderson was against particle physics or against big science. What is clear is that, given the size of the budget deficit in the US (now and then), what was saved by not building the SSC did not go to “table-top” science.
In a 2015 interview with Third Way, Weinberg explained his philosophy and strategy when writing for the general public: “When we talk about science as part of the culture of our times, we would better make it part of that culture by explaining what we are doing. I think it is very important not to write down to the public. You have to keep in mind that you are writing for people who are not mathematically trained, but are just as smart as you are.” This empathy and respect for the reader is immediately apparent as soon as you open any of his books and, together with the depth and breadth of his insight, explains their success.
He also excelled in the writing of technical books. In the early 1960s Weinberg became interested in astrophysics and cosmology, leading, among other things, to the landmark Gravitation and Cosmology (1972). The book became an instant classic, and it remains useful for learning about many aspects of general relativity and the early universe. In the 1990s he published a masterful three-volume set, The Quantum Theory of Fields, probably the definitive 20th-century treatment of the subject. In 2008 he published Cosmology, an important update of his earlier work that provides self-contained explanations of the ideas and formulas used and tested in modern cosmological observations. His Lectures on Quantum Mechanics (2015) is among the very best books on the subject, with the depth of his knowledge and insight shining throughout. The man had not lost his grit: only this year he published what he described as an advanced undergraduate textbook, Foundations of Modern Physics, based on a lecture course he was asked to give at Austin. What distinguishes his scientific books from many others is that, in addition to the care and erudition with which the material is presented, they are interspersed with all kinds of golden nuggets. Weinberg never avoids the conceptual difficulties that plague these subjects, and it is a real pleasure to find such deep and inspiring clarifications.
It is not possible to list all his awards and honours, but let’s mention that he was elected to the US National Academy of Sciences in 1972, was awarded the Dannie Heineman Prize for Mathematical Physics in 1977 and received the Nobel Prize in Physics in 1979. He was also a foreign honorary member of the Royal Society of London, received a Special Breakthrough Prize in Fundamental Physics in 2020, and was invited to give many of the most prestigious lectures on the planet. Normal humans would need to live several lives to accomplish so much.
A great general
In recent years, Weinberg was interested in fundamental problems in the foundations and interpretation of quantum mechanics, and in the study of gravitational waves and what they can teach us about the distribution of matter between us and their sources – two subjects of very active current research. In a 2020 preprint, “Models of lepton and quark masses”, he returned to a problem he had last tackled in 1972: the fermion mass hierarchy.
His legacy will continue to inspire physicists for generations to come
He also continued lecturing until almost the very end. Weinberg was an avid reader of military history, as is evident in some of his writings, and, like a great general, he died with his boots on.
The news of his demise spread like a tsunami in our community, and led us into a state of mourning. When such a powerful voice is permanently silenced, we are all inevitably diminished. His legacy will continue to inspire physicists for generations to come.
Steven Weinberg is survived by his wife Louise, professor of law at the University of Texas, whom he married in 1954; his daughter Elizabeth, a medical doctor; and a granddaughter, Gabrielle.