CERN surveyors have performed the first geodetic measurements for a possible Future Circular Collider (FCC), a prerequisite for high-precision alignment of the accelerator’s components. The millimetre-precision measurements are one of the first activities undertaken by the FCC feasibility study, which was launched last year following the recommendation of the 2020 update of the European strategy for particle physics. During the next three years, the study will explore the technical and financial viability of a 100 km collider at CERN, for which the tunnel is a top priority. Geology, topography and surface infrastructure are the key constraints on the FCC tunnel’s position, around which civil engineers will design the optimal route, should the project be approved.
The FCC would cover an area about 10 times larger than the LHC, in which every geographical reference must be pinpointed with unprecedented precision. To provide a reference coordinate system, in May the CERN surveyors, in conjunction with ETH Zürich, the Federal Office of Topography Swisstopo, and the School of Engineering and Management Vaud, performed geodetic levelling measurements along an 8 km profile across the Swiss–French border south of Geneva.
Such measurements have two main purposes. The first is to determine a high-precision surface model, or “geoid”, to map the height above sea level in the FCC region. The second purpose is to improve the present reference system, whose measurements date back to the 1980s when the tunnel housing the LHC was built.
“The results will help to evaluate if an extrapolation of the current LHC geodetic reference systems and infrastructure is precise enough, or if a new design is needed over the whole FCC area,” says Hélène Mainaud Durand, group leader of CERN’s geodetic metrology group.
The FCC feasibility study, which involves more than 140 universities and research institutions from 34 countries, also comprises technological, environmental, engineering, political and economic considerations. It is due to be completed by the time the next strategy update gets under way in the middle of the decade. Should the outcome be positive, and the project receive the approval of CERN’s member states, civil-engineering works could start as early as the 2030s.
Time-stamped files, stated by Tim Berners-Lee to contain the original source code for the web and digitally signed by him, have sold for US$5.4 million at auction. The files were sold as a non-fungible token (NFT), a form of crypto asset that uses blockchain technology to confer uniqueness.
The web was originally conceived at CERN to meet the demand for automated information-sharing between physicists spread across universities and institutes worldwide. Berners-Lee wrote his first project proposal in March 1989, and the first website, which was dedicated to the World Wide Web project itself and hosted on Berners-Lee’s NeXT computer, went live in the summer of 1991. Less than two years later, on 30 April 1993, and after several iterations in development, CERN placed version three of the software in the public domain. It deliberately did so on a royalty-free, “no-strings-attached” basis, addressing the memo simply “To whom it may concern.”
The seed that led CERN to relinquish ownership of the web was planted 70 years ago, in the CERN Convention, which states that results of its work were to be “published or otherwise made generally available” – a culture of openness that continues to this day.
The auction offer describes the NFT as containing approximately 9555 lines of code, including implementations of the three languages and protocols that remain fundamental to the web today: HTML (Hypertext Markup Language), HTTP (Hypertext Transfer Protocol) and URIs (Uniform Resource Identifiers). The lot also includes an animated visualisation of the code, a letter written by Berners-Lee reflecting on the process of creating it, and a Scalable Vector Graphics representation of the full code created from the original files.
Bidding for the NFT, which auction house Sotheby’s claims is its first-ever sale of a digital-born artefact, opened on 23 June and attracted a total of 51 bids. The sale will benefit initiatives that Berners-Lee and his wife Rosemary Leith support, stated a Sotheby’s press release.
In Climate Change and Energy Options for a Sustainable Future, nuclear physicists Dinesh Kumar Srivastava and V S Ramamurthy explore global policies for an eco-friendly future. Facing the world’s increasing demand for energy, the authors argue for the replacement of fossil fuels with a new mixture of green energy sources including wind energy, solar photovoltaics, geothermal energy and nuclear energy. Srivastava is a theoretical physicist and Ramamurthy is an experimental physicist with research interests in heavy-ion physics and the quark–gluon plasma. Together, they analyse solutions offered by science and technology with a clarity that will likely surpass the expectations of non-expert readers. Following a pedagogical approach with vivid illustrations, the book offers an in-depth description of how each green-energy option could be integrated into a global-energy strategy.
In the first part of the book, the authors provide a wealth of evidence demonstrating the pressing reality of climate change and the fragility of the environment. Srivastava and Ramamurthy then examine unequal access to energy across the globe. There should be no doubt that human wellbeing is decided by the rate at which power is consumed, they write, and providing enough energy to everyone on the planet to reach a human-development index of 0.8, which is defined by the UN as high human development, calls for about 30 trillion kWh per year – roughly double the present global capacity.
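For a rough sense of scale (a back-of-the-envelope conversion assuming a world population of about eight billion, a figure not taken from the book), the quoted demand corresponds to

\[
\frac{30\times10^{12}\ \text{kWh/yr}}{8760\ \text{h/yr}} \approx 3.4\ \text{TW of average power},
\qquad
\frac{30\times10^{12}\ \text{kWh/yr}}{8\times10^{9}\ \text{people}} \approx 3750\ \text{kWh per person per year} \approx 430\ \text{W}.
\]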
Human wellbeing is decided by the rate at which power is consumed
Srivastava and Ramamurthy present the basic principles of alternative renewable sources, and offer many examples, including agrivoltaics in Africa, a floating solar-panel station in California and wind turbines in the Netherlands and India. Drawing on their own expertise, they discuss nuclear energy and waste management, accelerator-driven subcritical systems, and the use of high-current electron accelerators for water purification. The book finally turns to sustainability, showing by means of a wealth of scientific data that increasing the supply of renewable energy, and reducing carbon-intensive energy sources, can lead to sustainable power across the globe, both reducing global-warming emissions and stabilising energy prices for a fairer economy. The authors stress that any solution should not compromise quality of life or development opportunities in developing countries.
This book could not be more timely. It is an invaluable resource for scientists, policymakers and educators.
On 1 January, after a long struggle with a serious illness, Anatoly Vasilievich Efremov of the Bogoliubov Laboratory of Theoretical Physics (BLTP) at JINR, Dubna, Russia, passed away. He was an outstanding physicist, and a world expert in quantum field theory and elementary particle physics.
Anatoly Efremov was born in Kerch, Crimea, to the family of a naval officer. From childhood he retained a love of the sea, and he was an excellent yachtsman. After graduating from the Moscow Engineering Physics Institute in 1958, where his teachers included Isaak Pomeranchuk and his master’s-thesis advisor Yakov Smorodinsky, he started working at BLTP JINR. At the time, Dmitrij Blokhintsev was JINR director; Anatoly always considered him his teacher, as he did Dmitry Shirkov, under whose supervision he defended his PhD thesis “Dispersion theory of low-energy scattering of pions” in 1962.
In 1971, Anatoly defended his DSc dissertation “High-energy asymptotics of Feynman diagrams”. The underlying work immediately found application in the factorisation of hard processes in quantum chromodynamics (QCD), which is now the theoretical basis of all hard-hadronic processes. Of particular note are his 1979 articles (written together with his PhD student A V Radyushkin) about the asymptotic behaviour of the pion form factor in QCD, and the evolution equation for hard exclusive processes, which became known as the ERBL (Efremov–Radyushkin–Brodsky–Lepage) equation. Proving the factorisation of hard processes enabled many subtle effects in QCD to be described, in particular parton correlations, which became known as the ETQS (Efremov–Teryaev–Qiu–Sterman) mechanism.
During the past three decades, Efremov, together with his students and colleagues, devoted his attention to several problems: the proton spin; the role of the axial anomaly and spin of gluons in the spin structure of a nucleon; correlations of the spin of partons; and momenta of particles in jets (“handedness”). These effects served as the theoretical basis for polarised particle experiments at RHIC at Brookhaven, the SPS at CERN and the new NICA facility at JINR. Anatoly was a member of the COMPASS collaboration at the SPS, where he helped to measure the effects he had predicted.
In 1976 he suggested the first model for the production of cumulative particles at x > 1 off nuclei. Within QCD, Efremov was the first to develop the concept of the nuclear quark–parton structure function, which entails the presence in the nucleus of a hard collective quark sea. This naturally explains both the EMC nuclear effect and cumulative particle production, and unambiguously indicates the existence of multi-quark density fluctuations (fluctons) – a prediction that was later confirmed and led to the so-called nuclear super-scaling phenomenon. Today, similar effects of fluctons or short-range correlations are investigated in a fixed-target experiment at NICA and in several experiments at JLab in the US.
Throughout his life, Anatoly continued to develop concrete manifestations of his ideas based on fundamental theory
Throughout his life, Anatoly continued to develop concrete manifestations of his ideas based on fundamental theory, becoming a teacher and advisor of many physicists at JINR, in Russia and abroad. In 1991 he initiated and became the permanent chair of the organising committee of the Dubna International Workshops on Spin Physics at High Energies. He was a long-term and authoritative member of the International Spin Physics Committee coordinating work in this area, and a regular visitor to the CERN theory unit since the 1970s.
Anatoly Vasilievich Efremov was the undisputed scientific leader, who initiated studies of quantum chromodynamics and spin physics in Dubna, one of the key BLTP JINR staff, and at the same time a modest and very friendly person, enjoying the highest authority and respect of colleagues. It is this combination of scientific and human qualities that made Anatoly Efremov’s personality unique, and this is how we will remember him.
This short film focuses on mechanic turned physicist Rana Adhikari, who contributed to the 2016 discovery of gravitational waves with the Laser Interferometer Gravitational-wave Observatory (LIGO). A laid-back, confident character, Adhikari takes us through the basics of LIGO, while touching upon the future of the field and the public’s view on fundamental research, all while directors Currimbhoy, McCarthy and Pedri facilitate the conversation, which runs at just over 12 minutes.
Following high school, Adhikari spent time as a car mechanic. Upon reading Einstein’s The Meaning of Relativity during Hurricane Erin, however, he decided that he wanted to “test the speed of light.” Now he is a professor at Caltech and a member of the LIGO collaboration, and was awarded a 2019 New Horizons in Physics Prize for his role in the gravitational-wave discovery.
In the film, recorded in 2018, Adhikari explains how fundamental research can be something everyone can get behind, in a world where it is “easy to think we’re all doomed,” and describes the power of large collaborations to bring people together: “It is a statement of collective willpower.” Through varying shots of him at a blackboard, in and around his experiment, and documentary-style face-to-face discussions, the audience quickly gets to know a positive thinker for whom work is clearly a passion, not a job.
The directors trust Adhikari to take centre stage and explain the world of gravitational waves through accurate metaphors that seem freestyled, yet concise. A sharp cut to a shot of turtles seems unnatural at first, before transforming into an analogy of Adhikari himself – the turtles going underwater and popping their heads up into different streams representing Adhikari’s curiosity, and how he got into the field in the first place.
The film is littered with references to music, most notably comparisons between guitar strings and the vibrations that LIGO physicists are searching for. After playing a short, smooth riff, Adhikari describes his unusual way of analysing data: “It is easier to do maths later – sometimes it’s better to just feel it.” He then plays us the “sound” file of two black holes colliding, a short chirp that is repeated as punchy statements about the long history of gravitational waves are overlaid onto the film.
We should be exploring fundamentals driven by curiosity
Towards the end, the focus shifts to the public’s view of fundamental research. “Lasers weren’t created to scan items in supermarkets,” states Adhikari. “We should be exploring fundamentals driven by curiosity.” The film closes with Adhikari discussing the future of LIGO, tapping a glass to produce a lengthy ring that represents the search for longer-wavelength gravitational waves.
Through Adhikari’s story, LIGO: The Way the Universe Is will, I think, inspire anyone who feels alienated or intimidated by fundamental research.
The need for innovation in machine learning (ML) transcends any single experimental collaboration, and requires more in-depth work than can take place at a workshop. Data challenges, wherein simulated “black box” datasets are made public, and contestants design algorithms to analyse them, have become essential tools to spark interdisciplinary collaboration and innovation. Two have recently concluded. In both cases, contestants were challenged to use ML to figure out “what’s in the box?”
LHC Olympics
The LHC Olympics (LHCO) data challenge was launched in autumn 2019, and the results were presented at the ML4Jets and Anomaly Detection workshops in spring and summer 2020. A final report summarising the challenge was posted to arXiv earlier this year, written by around 50 authors from a variety of backgrounds in theory, the ATLAS and CMS experiments, and beyond. The name of this community effort was inspired by the first LHC Olympics that took place more than a decade ago, before the start of the LHC. In those olympics, researchers were worried about being able to categorise all of the new particles that would be discovered when the machine turned on. Since then, we have learned a great deal about nature at TeV energy scales, with no evidence yet for new particles or forces of nature. The latest LHC Olympics focused on a different challenge – being able to find new physics in the first place. We now know that new physics must be rare and not exactly like what we expected.
In order to prepare for rare and unexpected new physics, organisers Gregor Kasieczka (University of Hamburg), Benjamin Nachman (Lawrence Berkeley National Laboratory) and David Shih (Rutgers University) provided a set of black-box datasets composed mostly of Standard Model (SM) background events. Contestants were charged with identifying any anomalous events that would be a sign of new physics. These datasets focused on resonant anomaly detection, whereby the anomaly is assumed to be localised – a “bump hunt”, in effect. This is a generic feature of new physics produced from massive new particles: the reconstructed parent mass is the resonant feature. By assuming that the signal is localised, one can use regions away from the signal to estimate the background. The LHCO provided one R&D dataset with labels and three black boxes to play with: one with an anomaly decaying into two two-pronged resonances, one without an anomaly, and one with an anomaly featuring two different decay modes (a dijet decay X → qq and a trijet decay X → gY, Y → qq). There are currently no dedicated searches for these signals in LHC data.
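To make the sideband idea concrete, here is a minimal sketch of a bump hunt on a toy invariant-mass spectrum. It is illustrative Python with invented numbers and variable names, not the LHCO organisers' code: the background under a hypothesised resonance is estimated by fitting the sidebands and interpolating into the signal region.

```python
# Toy bump hunt: estimate the background in a signal region from the sidebands.
import numpy as np

rng = np.random.default_rng(0)

# Toy dijet-mass spectrum: a smoothly falling background plus a small bump.
background = rng.exponential(scale=500.0, size=200_000) + 1000.0   # GeV
signal = rng.normal(loc=3500.0, scale=50.0, size=400)              # GeV
masses = np.concatenate([background, signal])

edges = np.linspace(1000.0, 6000.0, 101)
counts, _ = np.histogram(masses, bins=edges)
centres = 0.5 * (edges[:-1] + edges[1:])

# Signal region around the hypothesised resonance mass, and sidebands either side.
sr = (centres > 3300.0) & (centres < 3700.0)
sb = ~sr & (centres > 2800.0) & (centres < 4200.0)

# Fit the sidebands with a simple exponential (a straight line in log-counts)
# and interpolate the fit into the signal region.
coeffs = np.polyfit(centres[sb], np.log(counts[sb] + 1e-9), deg=1)
bkg_pred = np.exp(np.polyval(coeffs, centres[sr]))

observed = counts[sr].sum()
expected = bkg_pred.sum()
significance = (observed - expected) / np.sqrt(expected)
print(f"observed={observed}, expected={expected:.1f}, ~{significance:.1f} sigma excess")
```

In a real analysis the background shape, signal-region width and scan over resonance masses would all be handled far more carefully; the sketch only shows why a localised signal lets the sidebands fix the background.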
No labels
About 20 algorithms were deployed on the LHCO datasets, including supervised learning, unsupervised learning, weakly supervised learning and semi-supervised learning. Supervised learning is the most widely used method across science and industry, whereby each training example has a label: “background” or “signal”. For this challenge, the data do not have labels as we do not know exactly what we are looking for, and so strategies trained with labels from a different dataset often did not work well. By contrast, unsupervised learning generally tries to identify events that are rarely or never produced by the background; weakly supervised methods use some context from data to provide noisy labels; and semi-supervised methods use some simulation information in order to have a partial set of labels. Each method has its strengths and weaknesses, and multiple approaches are usually needed to achieve a broad coverage of possible signals.
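As an illustration of the weakly supervised idea, the sketch below trains a classifier to separate events in a signal region from events in a sideband, using the region itself as a noisy label (a classification-without-labels-style toy with simulated Gaussian features; all names and numbers are invented, and this is not any contestant's actual submission).

```python
# Weakly supervised toy: noisy labels come from "signal region" vs "sideband".
import numpy as np
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Toy features: the sideband is background only, while the signal region is
# background plus a small admixture of anomalous events.
n_bkg, n_anom = 50_000, 500
sideband = rng.normal(0.0, 1.0, size=(n_bkg, 4))
sr_background = rng.normal(0.0, 1.0, size=(n_bkg, 4))
sr_anomalies = rng.normal(1.5, 0.5, size=(n_anom, 4))
signal_region = np.vstack([sr_background, sr_anomalies])

X = np.vstack([sideband, signal_region])
y = np.concatenate([np.zeros(len(sideband)), np.ones(len(signal_region))])  # noisy labels

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0)
clf = HistGradientBoostingClassifier(max_iter=200).fit(X_train, y_train)

# If the classifier separates the two mixtures better than chance, the most
# "signal-region-like" events are anomaly candidates.
scores = clf.predict_proba(X_test)[:, 1]
cut = np.quantile(scores, 0.99)
print(f"selected {np.sum(scores > cut)} candidate events above the 99th-percentile score")
```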
The Dark Machines data challenge focused on developing algorithms broadly sensitive to non-resonant anomalies
The best performance on the first black box in the LHCO challenge, as measured by finding and correctly characterising the anomalous signals, was by a team of cosmologists at Berkeley (George Stein, Uros Seljak and Biwei Dai) who compared the phase-space density between a sliding signal region and sidebands (see “Olympian algorithm” figure). Overall, the algorithms did well on the R&D dataset, and some also did well on the first black box, with methods that made use of likelihood ratios proving particularly effective. But no method was able to detect the anomalies in the third black box, and many teams reported a false signal for the second black box. This “placebo effect” illustrates the need for ML approaches to have an accurate estimation of the background and not just a procedure for identifying signals. The challenge for the third black box, however, required algorithms to identify multiple clusters of anomalous events rather than a single cluster. Future innovation is needed in this department.
Dark Machines
A second data challenge was launched in June 2020 within the Dark Machines initiative. Dark Machines is a research collective of physicists and data scientists who apply ML techniques to understand the nature of dark matter – as we don’t know the nature of dark matter, it is critical to search broadly for its anomalous signatures. The challenge was organised by Sascha Caron (Radboud University), Caterina Doglioni (Lund University) and Maurizio Pierini (CERN), with notable contributions from Bryan Ostdiek (Harvard University) in the development of a common software infrastructure, and Melissa van Beekveld (University of Oxford) for dataset generation. In total, 39 participants arranged in 13 teams explored various unsupervised techniques, with each team submitting multiple algorithms.
By contrast with the LHCO, the Dark Machines data challenge focused on developing algorithms broadly sensitive to non-resonant anomalies. Good examples of non-resonant new physics include many supersymmetric models and models of dark matter – anything where “invisible” particles don’t interact with the detector. In such a situation, resonant peaks become excesses in the tails of the missing-transverse-energy distribution. Two types of dataset were provided: R&D datasets including a concoction of SM processes and many signal samples, on which contestants could develop their approaches; and a black-box dataset mixing SM events with events from unspecified signal processes. The challenge has now formally concluded, and its outcome was posted on arXiv in May, but the black box has not been opened, to allow the community to continue to test ideas on it.
A wide variety of unsupervised methods have been deployed so far. The algorithms use diverse representations of the collider events (for example, lists of particle four-momenta, or physics quantities computed from them), and both implicit and explicit approaches for estimating the probability density of the background (for example, autoencoders and “normalising flows”). While no single method universally achieved the highest sensitivity to new-physics events, methods that mapped the background to a fixed point and looked for events that were not described well by this mapping generally did better than techniques that had a so-called dynamic embedding. A key question exposed by this challenge that will inspire future innovation is how best to tune and combine unsupervised machine-learning algorithms in a way that is model independent with respect to the new physics describing the signal.
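For a flavour of one of the approach families mentioned above, here is a minimal autoencoder sketch (toy Gaussian data, invented architecture and numbers; not any team's actual algorithm). The network is trained to compress and reconstruct background-like events, and the reconstruction error is then used as the anomaly score.

```python
# Unsupervised toy: an autoencoder whose reconstruction error flags outliers.
import torch
from torch import nn

torch.manual_seed(0)

n_features = 16
background = torch.randn(20_000, n_features)            # toy SM-like events
anomalies = torch.randn(200, n_features) * 0.5 + 2.0    # toy out-of-distribution events

model = nn.Sequential(
    nn.Linear(n_features, 8), nn.ReLU(),
    nn.Linear(8, 3),                     # low-dimensional bottleneck
    nn.Linear(3, 8), nn.ReLU(),
    nn.Linear(8, n_features),
)
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Train on (mostly) background: the network learns to reconstruct typical events.
for epoch in range(20):
    for batch in background.split(512):
        optimiser.zero_grad()
        loss = loss_fn(model(batch), batch)
        loss.backward()
        optimiser.step()

# Events the network reconstructs poorly receive a high anomaly score.
with torch.no_grad():
    score_bkg = ((model(background) - background) ** 2).mean(dim=1)
    score_anom = ((model(anomalies) - anomalies) ** 2).mean(dim=1)
print(f"median score: background {score_bkg.median().item():.3f}, "
      f"anomalies {score_anom.median().item():.3f}")
```

How to choose the event representation, the bottleneck size and the score threshold in a model-independent way is exactly the kind of open question the challenge exposed.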
The enthusiastic response to the LHCO and Dark Machines data challenges highlights the important future role of unsupervised ML at the LHC and elsewhere in fundamental physics. So far, just one analysis has been published – a dijet-resonance search by the ATLAS collaboration using weakly supervised ML – but many more are underway, and these techniques are even being considered for use in the level-one triggers of LHC experiments (see Hunting anomalies with an AI trigger). And as the detection of outliers also has a large number of real-world applications, from fraud detection to industrial maintenance, fruitful cross-talk between fundamental research and industry is possible.
The LHCO and Dark Machines data challenges are a stepping stone to an exciting experimental programme that is just beginning.
How might artificial intelligence make an impact on theoretical physics?
John Ellis (JE): To phrase it simply: where do we go next? We have the Standard Model, which describes all the visible matter in the universe successfully, but we know dark matter must be out there. There are also puzzles, such as what is the origin of the matter in the universe? During my lifetime we’ve been playing around with a bunch of ideas for tackling those problems; we have been able to solve some, but not others. Could artificial intelligence (AI) help us find new paths towards attacking these questions? This would be truly stealing theoretical physicists’ lunch.
Anima Anandkumar (AA): I think the first steps are whether you can understand more basic physics and be able to come up with predictions as well. For example, could AI rediscover the Standard Model? One day we can hope to look at what the discrepancies are for the current model, and hopefully come up with better suggestions.
JE: An interesting exercise might be to take some of the puzzles we have at the moment and somehow equip an AI system with a theoretical framework that we physicists are trying to work with, let the AI loose and see whether it comes up with anything. Even over the last few weeks, a couple of experimental puzzles have been reinforced by new results on B-meson decays and the anomalous magnetic moment of the muon. There are many theoretical ideas for solving these puzzles but none of them strike me as being particularly satisfactory in the sense of indicating a clear path towards the next synthesis beyond the Standard Model. Is it imaginable that one could devise an AI system that, if you gave it a set of concepts that we have, and the experimental anomalies that we have, then the AI could point the way?
AA: The devil is in the details. How do we give the right kind of data and knowledge about physics? How do we express those anomalies while at the same time making sure that we don’t bias the model? There are anomalies suggesting that the current model is not complete – if you are giving that prior knowledge then you could be biasing the models away from discovering new aspects. So, I think that delicate balance is the main challenge.
JE: I think that theoretical physicists could propose a framework with boundaries that AI could explore. We could tell you what sort of particles are allowed, what sort of interactions those could have and what would still be a well-behaved theory from the point of view of relativity and quantum mechanics. Then, let’s just release the AI to see whether it can come up with a combination of particles and interactions that could solve our problems. I think that in this sort of problem space, the creativity would come in the testing of the theory. The AI might find a particle and a set of interactions that would deal with the anomalies that I was talking about, but how do we know what’s the right theory? We have to propose some other experiments that might test it – and that’s one place where the creativity of theoretical physicists will come into play.
AA: Absolutely. And many theories are not directly testable. That’s where the deeper knowledge and intuition that theoretical physicists have is so critical.
Is human creativity driven by our consciousness, or can contemporary AI be creative?
AA: Humans are creative in so many ways. We can dream, we can hallucinate, we can create – so how do we build those capabilities into AI? Richard Feynman famously said “What I cannot create, I do not understand.” It appears that our creativity gives us the ability to understand the complex inner workings of the universe. With the current AI paradigm this is very difficult. Current AI is geared towards scenarios where the training and testing distributions are similar; creativity, however, requires extrapolation – being able to imagine entirely new scenarios. So extrapolation is an essential aspect. Can you go from what you have learned and extrapolate new scenarios? For that we need some form of invariance or understanding of the underlying laws. That’s where physics is front and centre. Humans have intuitive notions of physics from early childhood. We slowly pick them up from physical interactions with the world. That understanding is at the heart of getting AI to be creative.
JE: It is often said that a child learns more laws of physics than an adult ever will! As a human being, I think that I think. I think that I understand. How can we introduce those things into AI?
Could AI rediscover the Standard Model?
AA: We need to get AI to create images, and other kinds of data it experiences, and then reason about the likelihood of the samples. Is this data point unlikely versus another one? Similarly to what we see in the brain, we recently built feedback mechanisms into AI systems. When you are watching me, it’s not just a free-flowing system going from the retina into the brain; there’s also a feedback system going from the inferior temporal cortex back into the visual cortex. This kind of feedback is fundamental to us being conscious. Building these kinds of mechanisms into AI is the first step to creating conscious AI.
JE: A lot of the things that you just mentioned sound like they’re going to be incredibly useful going forward in our systems for analysing data. But how is AI going to devise an experiment that we should do? Or how is AI going to devise a theory that we should test?
AA: Those are the challenging aspects for an AI. A data-driven method using a standard neural network would perform really poorly. It will only think of the data that it can see and not about data that it hasn’t seen – what we call “zero-shot generalisation”. To me, the past decade’s impressive progress is due to a trinity of data, neural networks and computing infrastructure, mainly powered by GPUs [graphics processing units], coming together: the next step for AI is a wider generalisation to the ability to extrapolate and predict hitherto unseen scenarios.
Across the many tens of orders of magnitude described by modern physics, new laws and behaviours “emerge” non-trivially in complexity (see Emergence). Could intelligence also be an emergent phenomenon?
JE: As a theoretical physicist, my main field of interest is the fundamental building blocks of matter, and the roles that they play very early in the history of the universe. Emergence is the word that we use when we try to capture what happens when you put many of these fundamental constituents together, and they behave in a way that you could often not anticipate if you just looked at the fundamental laws of physics. One of the interesting developments in physics over the past generation is to recognise that there are some universal patterns that emerge. I’m thinking, for example, of phase transitions that look universal, even though the underlying systems are extremely different. So, I wonder, is there something similar in the field of intelligence? For example, the brain structure of the octopus is very different from that of a human, so to what extent does the octopus think in the same way that we do?
AA: There’s a lot of interest now in studying the octopus. From what I learned, its intelligence is spread out so that it’s not just in its brain but also in its tentacles. Consequently, you have this distributed notion of intelligence that still works very well. It can be extremely camouflaged – imagine being in a wild ocean without a shell to protect yourself. That pressure created the need for intelligence such that it can be extremely aware of its surroundings and able to quickly camouflage itself or manipulate different tools.
JE: If intelligence is the way that a living thing deals with threats and feeds itself, should we apply the same evolutionary pressure to AI systems? We threaten them and only the fittest will survive. We tell them they have to go and find their own electricity or silicon or something like that – I understand that there are some first steps in this direction, computer programs competing with each other at chess, for example, or robots that have to find wall sockets and plug themselves in. Is this something that one could generalise? And then intelligence could emerge in a way that we hadn’t imagined?
Similarly to what we see in the brain, we recently built feedback mechanisms into AI systems
AA: That’s an excellent point. Because what you mentioned broadly is competition – different kinds of pressures that drive towards good, robust objectives. An example is generative adversarial models, which can generate very realistic looking images. Here you have a discriminator that challenges the generator to generate images that look real. These kinds of competitions or games are getting a lot of traction and we have now passed the Turing test when it comes to generating human faces – you can no longer tell very easily whether it is generated by AI or if it is a real person. So, I think those kinds of mechanisms that have competition built into the objective they optimise are fundamental to creating more robust and more intelligent systems.
JE: All this is very impressive – but there are still some elements that I am missing, which seem very important to theoretical physics. Take chess: a very big system but finite nevertheless. In some sense, what I try to do as a theoretical physicist has no boundaries. In some sense, it is infinite. So, is there any hope that AI would eventually be able to deal with problems that have no boundaries?
AA: That’s the difficulty. These are infinite-dimensional spaces… so how do we decide how to move around there? What distinguishes an expert like you from an average human is that you build your knowledge and develop intuition – you can quickly make judgments and find which narrow part of the space you want to work on compared to all the possibilities. That’s the aspect that is so difficult for AI to figure out. The space is enormous. On the other hand, AI does have a lot more memory, a lot more computational capacity. So can we create a hybrid system, with physicists and machine learning in tandem, to help us harness the capabilities of both AI and humans together? We’re currently exploring theorem provers: can we use the theorems that humans have proven, and then add reinforcement learning on top to create very fast theorem solvers? If we can create such fast theorem provers in pure mathematics, I can see them being very useful for understanding the Standard Model and the gaps and discrepancies in it. It is much harder than chess, for example, but there are exciting programming frameworks and data sets available, with efforts to bring together different branches of mathematics. But I don’t think humans will be out of the loop, at least for now.
“Our job is to be part of the scientific community and show that there can be religious people and priests who are scientists,” says Gabriele Gionti, a Roman Catholic priest and theoretical physicist specialising in quantum gravity who is resident at the Vatican Observatory.
“Our mission is to do good science,” agrees Guy Consolmagno, a noted planetary scientist, Jesuit brother and the observatory’s director. “I like to say we are missionaries of science to the believers.”
Not only missionaries of faith, then, but also of science. And there are advantages.
“At the Vatican Observatory, we don’t have to write proposals, we don’t have to worry about tenure and we don’t have to have results in three years to get our money renewed,” says Consolmagno, who is directly appointed by the Pope. “It changes the nature of the research that is available to us.”
“Here I have had time to just study,” says Gionti, who explains that he was able to extend his research to string theory as a result of this extra freedom. “If you are a postdoc or under tenure, you don’t have this opportunity.”
“I remember telling a friend of mine that I don’t have to write grant proposals, and he said, ‘how do I get in on this?’” jokes Consolmagno, a native of Detroit. “I said that he needed to take a vow of celibacy. He replied, ‘it’s worth it!’.”
Cannonball moment
Clad in T-shirts, Gionti and Consolmagno don’t resemble the priests and monks seen in movies. They are connected to monastic tradition, but do not withdraw from the world. As well as being full-time physicists, both are members of the Society of Jesus – a religious order that traces its origin to 1521, when Saint Ignatius of Loyola was struck in the leg by a cannonball at the Battle of Pamplona. Today they help staff an institution that was founded in 1891, though its origins arguably date back to attempts to fix the date for Easter in 1582.
“It was at the end of the 19th century that the myth began that the church was anti-science, and they would use Galileo as the excuse,” says Consolmagno, explaining that the Pope at the time, Pope Leo XIII, wanted to demonstrate that faith and science were fully compatible. “The first thing that the Vatican Observatory did was to take part in the Carte du Ciel programme,” he says, hinting at a secondary motivation. “Every national observatory was given a region of the sky. Italy was given one region and the Vatican was given another. So, de facto, the Vatican became seen as an independent nation state.”
The observatory quickly established itself as a respected scientific organisation. Though it is staffed by priests and brothers, there is an absolute rule that science comes first, says Consolmagno, and the stereotypical work of a priest or monk is actually a temptation to be resisted. “Day-to-day life as a scientist can be tedious, and it can be a long time until you see a reward, but pastoral life can be rewarding immediately,” he explains.
Consolmagno was a planetary scientist for 20 years before becoming a Jesuit. By contrast, Gionti, who hails from Capua in Italy, joined after his first postdoc at UC Irvine in California. Neither reports encountering professional prejudice as a result of their vocation. “I think that’s a generational thing,” says Consolmagno. “Scientists working in the 1970s and 1980s were more likely to be anti-religious, but nowadays it’s not the case. You are looked on as part of the multicultural nature of the field.”
And besides, antagonism between science and religion is largely based on a false dichotomy, says Consolmagno. “The God that many atheists don’t believe in is a God that we also don’t believe in.”
The observatory’s director pushes back hard on the idea that faith is incompatible with physics. “It doesn’t tell me what science to do. It doesn’t tell me what the questions and answers are going to be. It gives me faith that I can understand the universe using reason and logic.”
Surprised by CERN
Due to light pollution in Castel Gandolfo, a new outpost of the Vatican Observatory was established in Tucson, Arizona, in 1980. A little later in the day, when the Sun was rising there, I spoke to Paul Gabor – an astrophysicist, Jesuit priest and deputy director for the Tucson observatory. Born in Košice, Slovakia, Gabor was a summer student at CERN in 1992, working on the development of the electromagnetic calorimeter of the ATLAS experiment, a project he later continued in Grenoble, thanks to winning a scholarship at the university. “We were making prototypes and models and software. We tested the actual physical models in a couple of test-beam runs – that was fun,” he recalls.
Gabor was surprised at how he found the laboratory. “It was an important part of my journey, because I was quite surprised that I found CERN to be full of extremely nice people. I was expecting everyone to be driven, ambitious, competitive and not necessarily collaborative, but people were very open,” he says. “It was a really good human experience for me.”
“When I finally caved in and joined the Jesuit order in 1995, I always thought, well, these scientists definitely are a group that I got to know and love, and I would like to, in one way or another, be a minister to them and be involved with them in some way.”
“Something that I came to realise, in a beginning, burgeoning kind of way at CERN, is the idea of science being a spiritual journey. It forms your personality and your soul in a way that any sustained effort does.”
Scientific athletes
“Experimental science can be a journey to wisdom,” says Gabor. “We are subject to constant frustration, failure and errors. We are confronted with our limitations. This is something that scientists have in common with athletes, for example. These long labours tend to make us grow as human beings. I think this point is quite important. In a way it explains my experience at CERN as a place full of nice, generous people.”
Surprisingly, however, despite being happy with life as a scientific religious and religious scientist, Gabor is not recruiting.
“There is a certain tendency to abandon science to join the priesthood or religious life,” he says. “This is not necessarily the best thing to do, so I urge a little bit of restraint. Religious zeal is a great thing, but if you are in the third year of a doctorate, don’t just pack up your bags and join a seminary. That is not a very prudent thing to do. That is to nobody’s benefit. This is a scenario that is all too common unfortunately.”
Consolmagno also offers words of caution. “50% of Jesuits leave the order,” he notes. “But this is a sign of success. You need to be where you belong.”
But Gionti, Consolmagno and Gabor all agree that, if properly discerned, the life of a scientific religious is a rewarding one in a community like the Vatican Observatory. They describe a close-knit group with a common purpose and little superficiality.
“Faith gives us the belief that the universe is good and worth studying,” says Consolmagno. “If you believe that the universe is good, then you are justified in spending your life studying things like quarks, even if it is not useful. Believing in God gives you a reason to study science for the sake of science.”
Jürgen G Körner, a well-known German theoretical physicist at the Johannes Gutenberg University in Mainz, passed away after a brief illness on 16 July 2021 at the age of 82.
Jürgen was born in Hong Kong in 1939, as the fourth child of a Hamburg merchant’s family. After the family returned to Germany in 1949, he attended the secondary school in Blankenese and studied physics at the Technical University of Berlin and the University of Hamburg. He received his PhD from Northwestern University, Illinois, in 1966 under Richard Capps. He then held research positions at Imperial College London, Columbia University, the University of Heidelberg and DESY. He completed his habilitation at the University of Hamburg in 1976.
In 1982 Jürgen became a professor of theoretical particle physics at Johannes Gutenberg University, where he remained for the rest of his career. His research interests included the phenomenology of elementary particles, heavy-quark physics, spin physics, radiative corrections and exclusive decay processes. He made pioneering contributions to the heavy-quark effective theory with applications to exclusive hadron decays. He also studied mass and spin effects in inclusive and exclusive processes in the Standard Model, and developed the helicity formalism describing angular distributions in exclusive hadron decays. Jürgen’s other notable contributions include the Körner–Pati–Woo theorem providing selection rules for baryon transitions and a relativistic formalism for electromagnetic excitations of nucleon resonances.
Jürgen collaborated with theoretical physicists worldwide and published about 250 papers in leading physics journals, including several influential reviews on the physics of baryons. He also contributed to the development of strong relations between German and Russian particle physicists. Together with colleagues from the Joint Institute for Nuclear Research, Dubna, and leading German and Russian universities he initiated a series of international workshops on problems in heavy-quark physics (Dubna: 1993–2019, Bad Honnef: 1994 and Rostock: 1997).
Jürgen was a cheerful person, attentive to the needs of his colleagues and friends, and always ready to help. He liked to travel and was actively involved in sports, especially football and cycling. Despite various commitments, he always found time for discussions. He cherished good conversations about physics and made a lasting impact on our lives. We will always remember him.
Steven Weinberg, one of the greatest theoretical physicists of all time, passed away on 23 July, aged 88. He revolutionised particle physics, quantum field theory and cosmology with conceptual breakthroughs that still form the foundation of our understanding of physical reality.
Weinberg is well known for the unified theory of weak and electromagnetic forces, which earned him the Nobel Prize in Physics in 1979, jointly awarded with Sheldon Glashow and Abdus Salam, and led to the prediction of the Z and W vector bosons, later discovered at CERN in 1983. His breakthrough was the realisation that some new theoretical ideas, initially believed to play a role in the description of nuclear strong interactions, could instead explain the nature of the weak force. “Then it suddenly occurred to me that this was a perfectly good sort of theory, but I was applying it to the wrong kind of interaction. The right place to apply these ideas was not to the strong interactions, but to the weak and electromagnetic interactions,” as he later recalled. With his work, Weinberg had made the next step in the unification of physical laws, after Newton understood that the motion of apples on Earth and planets in the sky are governed by the same gravitational force, and Maxwell understood that electric and magnetic phenomena are the expression of a single force.
In my life, I have built only one model
Steven Weinberg
In his research, Weinberg always focused on an overarching vision of physics and not on a model description of any single phenomenon. At a lunch among theorists, when a colleague referred to him as a model builder, he jokingly retorted: “I am not a model builder. In my life, I have built only one model.” Indeed, Weinberg’s greatest legacy is his visionary approach to vast areas of physics, in which he starts from complex theoretical concepts, reinterprets them in original ways, and applies them to the description of the physical world. A good example is his construction of effective field theories, which are still today the basic tool to understand the Standard Model of particle interactions. His inimitable way of thinking has been the inspiration and guidance for generations of physicists, and it will certainly continue to serve future generations.
Steven Weinberg is among the very few individuals who, during the course of the history of civilisation, have radically changed the way we look at the universe.