
Becoming T-shaped

Heike Riel

For Heike Riel, IBM fellow and head of science and technology at IBM Research, successful careers in science are built not by choosing between academia and industry, but by moving fluidly between them. With a background in semiconductor physics and a leadership role in one of the world’s top industrial research labs, Riel learnt to harness the skills she picked up in academia, and now uses them to build real-world applications. Today, IBM collaborates with academia and industry partners on projects ranging from quantum computing and cybersecurity to developing semiconductor chips for AI hardware.

“I chose semiconductor physics because I wanted to build devices, use electronics and understand photonics,” says Riel, who spent her academic years training to be an applied physicist. “There’s fundamental science to explore, but also something that can be used as a product to benefit society. That combination was very motivating.”

Hands-on mindset

For experimental physicists, this hands-on mindset is crucial. But experiments also require infrastructure that can be difficult to access in purely academic settings. “To do experiments, you need cleanrooms, fabrication tools and measurement systems,” explains Riel. “These resources are expensive and not always available in university labs.” During her first industry job at Hewlett-Packard in Palo Alto, Riel realised just how much she could achieve given the right resources and support. “I felt like I was the limit then, not the lab,” she recalls.

This experience led Riel to proactively combine academic and industrial research in her PhD with IBM, where cutting-edge experiments are carried out towards a clear, purpose-driven goal within a structured research framework, leaving lots of leeway for creativity. “We explore scientific questions, but always with an application in mind,” says Riel. “Whether we’re improving a product or solving a practical problem, we aim to create knowledge and turn it into impact.”

Shifting gears

According to Riel, once you understand the foundations of fundamental physics, and feel as though you have learnt all the skills you can glean from it, it’s time to consider shifting gears and expanding your skill set into economics or business. In her role, understanding economic value and organisational dynamics is essential. But Riel advises against putting business training first. “Studying economics or an MBA later is very doable,” she says. “In fact, your company might even financially support you. But going the other way – starting with economics and trying to pick up quantum physics later – is much harder.”

Riel sees university as a precious time to master complex subjects like quantum mechanics, relativity and statistical physics – topics that are difficult to revisit later in life. “It’s much easier to learn theoretical physics as a student than to go back to it later,” she says. “It builds something more important than just knowledge: it builds your tolerance for frustration, and your capacity for deep logical thinking. You become extremely analytical and much better at breaking down problems. That’s something every employer values.”

In demand

High-energy physicists are in high demand even in fields like consulting, says Riel. A high-achieving academic has a good chance of being hired, as long as they present their job applications effectively. When screening applications, recruiters look for specific keywords and transferable skills, so regardless of the depth or quality of your academic research, the way you present yourself really counts. Physics, Riel argues, teaches a kind of thinking that is both analytical and resilient. Experimental physicists can tailor their applications towards hands-on experience and tangible solutions to real-world problems; theoretical physicists should demonstrate logical problem-solving and thinking outside the box. “The winning combination is having aspects of both,” says Riel.

On top of that, research in physics increases your “frustration tolerance”. Every physicist has faced failure at some point during their academic career, and their determination to persevere is what makes them resilient. Whether through constantly thinking on your feet or coming up with new solutions to the same problems, this resilience is what can make a physicist’s application stand out from the rest. “In physics, you face problems every day that don’t have easy answers, and you learn how to deal with that,” explains Riel. “That mindset is incredibly useful, whether you’re solving a semiconductor design problem or managing a business unit.”

Academic research is often driven by curiosity and knowledge gain, while industrial research is shaped by application

Riel champions the idea of the “T-shaped person”: someone with deep expertise in one area (the vertical stroke of the T) and broad knowledge across fields (the horizontal bar of the T). “You start by going deep – becoming the go-to person for something,” says Riel. This deep knowledge builds your credibility in your desired field: you become the expert. But after that, you need to broaden your scope and understanding.

That breadth can include moving between fields, working on interdisciplinary projects, or applying physics in new domains. “A T-shaped person brings something unique to every conversation,” adds Riel. “You’re able to connect dots that others might not even see, and that’s where a lot of innovation happens.”

Adding the bar on the T means that you can move fluidly between different fields, including through academia and industry. For this reason, Riel believes that the divide between academia and industry is less rigid than people assume, especially in large research organisations like IBM. “We sit in that middle ground,” she explains. “We publish papers. We work with universities on fundamental problems. But we also push toward real-world solutions, products and economic value.”

The difficult part is making the leap from academia to industry. “You need the confidence to make the decision, to choose between working in academia or industry,” says Riel. “At some point in your PhD, your first postdoc, or maybe even your second, you need to start applying your practical skills to industry.” Companies like IBM offer internships, PhDs, research opportunities and temporary contracts for physicists at every stage, from master’s students to senior postdocs. These are ideal ways to get your foot in the door of a project, get work published, grow your network and gain industry-focused practical skills, whatever the stage of your academic career. “You can learn from your colleagues about economics, business strategy and ethics on the job,” says Riel. “If your team can see you using your practical skills and engaging with the business, they will be eager to help you upskill. This may mean supporting you through further study, whether it’s an online course or, later, an MBA.”

Applied knowledge

Riel notes that academic research is often driven by curiosity and knowledge gain, while industrial research is shaped by application. “US funding is often tied to applications, and they are much stronger at converting research into tangible products, whereas in Europe there is still more of a divide between knowledge creation and the next step to turn this into products,” she says. “But personally, I find it most satisfying when I can apply what I learn to something meaningful.”

That applied focus is also cyclical, she says. “At IBM, projects to develop hardware often last five to seven years. Software development projects have a much faster turnaround. You start with an idea, you prove the concept, you innovate the path to solve the engineering challenges and eventually it becomes a product. And then you start again with something new.” This is different to most projects in academia, where a researcher contributes to a small part of a very long-term project. Regardless of the timeline of the project, the skills gained from academia are invaluable.

For early-career researchers, especially those in high-energy physics, Riel’s message is reassuring: “Your analytical training is more useful than you think. Whether you stay in academia, move to industry, or float between both, your skills are always relevant. Keep learning and embracing new technologies.”

The key, she says, is to stay flexible, curious and grounded in your foundations. “Build your depth, then your breadth. Don’t be afraid of crossing boundaries. That’s where the most exciting work happens.”

The history of heavy ions

Across a career that accompanied the emergence of heavy-ion physics at CERN, Hans Joachim Specht was often a decisive voice in shaping the experimental agenda and the institutional landscape in Europe. Before he passed away last May, he and fellow editors Sanja Damjanovic (GSI), Volker Metag (University of Giessen) and Jürgen Schukraft (Yale University) finalised the manuscript for Scientist and Visionary – a new biographical work that offers both a retrospective on Specht’s wide-ranging scientific contributions and a snapshot of four decades of evolving research at CERN, GSI and beyond.

Precision and rigour

Specht began his career in nuclear physics under the mentorship of Heinz Maier-Leibnitz at the Technische Universität München. His early work was grounded in precision measurements and experimental rigour. Among his most celebrated early achievements were the discoveries of superheavy quasi-molecules and quasi-atoms, in which electrons can be bound for short times to a pair of heavy ions, and of nuclear-shape isomerism, in which nuclei exhibit long-lived prolate or oblate deformations. These milestones significantly advanced the understanding of atomic and nuclear structure. Around 1979 he shifted focus, joining the emerging efforts at CERN to explore the new frontier of ultra-relativistic heavy-ion collisions, which had been opened five years earlier at Berkeley by the GSI–LBL collaboration. It was Bill Willis, one of CERN’s early advocates for high-energy nucleus–nucleus collisions, who helped draw Specht into this developing field. That move proved foundational for both Specht and CERN.

From the early 1980s through to 2010, Specht played leading roles in four CERN nuclear-collision experiments: R807/808 at the Intersecting Storage Rings, and HELIOS, CERES/NA45 and NA60 at the Super Proton Synchrotron (SPS). As the book describes, he was instrumental not only in shaping their scientific goals – to search for the highest temperatures of the newly formed hot, dense QCD matter, exceeding the well-established Hagedorn limiting hadron temperature of roughly 160 MeV, with the overarching aim of establishing that quasi-thermalised gluon matter and even quark–gluon matter can be created at the SPS – but also in the design and execution of the detectors themselves. At the Universität Heidelberg, he built a heavy-ion research group and became a key voice in securing German support for CERN’s heavy-ion programme.

CERES was Specht’s brainchild, and stood out for its bold concept

As spokesperson of the HELIOS experiment from 1984 onwards, Specht gained recognition as a community leader. But it was CERES, his brainchild, that stood out for its bold concept: to look for thermal dileptons using a hadron-blind detector – a novel idea in heavy-ion experiments at the time. Despite considerable scepticism, CERES was approved in 1989 and built in under two years. Its results on sulphur–gold collisions became some of the most cited of the SPS era, offering strong evidence for thermal lepton-pair production, potentially from a quark–gluon plasma – a hot and deconfined state of QCD matter then hypothesised to exist at high temperatures and densities, such as in the early universe. Such high temperatures, above the hadrons’ limiting Hagedorn temperature of 160 MeV, had not yet been demonstrated experimentally at LBNL’s Bevalac or Brookhaven’s Alternating Gradient Synchrotron.

Advising ALICE

In the early 1990s, while CERES was being upgraded for lead–gold runs, Specht co-led a European Committee for Future Accelerators working group that laid the groundwork for ALICE, the LHC’s dedicated heavy-ion experiment. His Heidelberg group formally joined ALICE in 1993. Even after becoming scientific director of GSI in 1992, Specht remained closely involved as an advisor.

Specht’s next major CERN project was NA60, which collided a range of nuclei in a fixed-target experiment at the SPS and pushed dilepton measurements to new levels of precision. NA60 achieved two breakthroughs. The first was a nearly perfect thermal spectrum consistent with blackbody radiation at temperatures of 240 to 270 MeV, some hundred MeV above the Hagedorn limiting hadron temperature of 160 MeV. The second was clear evidence of in-medium modification of the ρ meson, due to meson collisions with nucleons and heavy baryon resonances, showing that the medium is not only hot but also has a high net baryon density. These results were widely seen as strong confirmation of the lattice-QCD-inspired quark–gluon plasma hypothesis. Many chapter authors – some of them direct collaborators, others long-time interpreters of heavy-ion signals – highlight the impact NA60 had on the field. Earlier claims based on competing hadronic signals for deconfinement, such as strong collective hydrodynamic flow, J/ψ melting and quark recombination, could often also be described by hadronic transport theory without assuming deconfinement.

Hans Joachim Specht: Scientist and Visionary

Specht didn’t limit himself to fundamental research. As director of GSI, he oversaw Europe’s first clinical ion-beam cancer-therapy programme using carbon ions. The treatment of the first 450 patients at GSI was a breakthrough moment for medical physics and led to the creation of the Heidelberg Ion-Beam Therapy Centre, the first hospital-based hadron-therapy centre in Europe. Specht later recalled the first successful treatment as one of the happiest moments of his career. In their essays, Jürgen Debus, Hartmut Eickhoff and Thomas Nilsson outline how Specht steered GSI’s mission into applied research without losing its core scientific momentum.

Specht was also deeply engaged in institutional planning, helping to shape the early stages of the Facility for Antiproton and Ion Research, a new facility for studying heavy-ion collisions that is expected to start operations at GSI at the end of the decade. He also initiated plasma-physics programmes and contributed to the development of detector technologies used far beyond CERN or GSI. In parallel, he held key roles in international science policy, including within the Nuclear Physics Collaboration Committee, as a founding board member of the European Centre for Theoretical Studies in Nuclear Physics in Trento, and at CERN as chair of the Proton Synchrotron and Synchro-Cyclotron Committee and as a decade-long member of the Scientific Policy Committee.

The book doesn’t shy away from more unusual chapters either. In later years, Specht developed an interest in the neuroscience of music. Collaborating with Hans Günter Dosch and Peter Schneider, he explored how the brain processes musical structure – an example of his lifelong intellectual curiosity and openness to interdisciplinary thinking.

Importantly, Scientist and Visionary is not a hagiography. It includes a range of perspectives and technical details that will appeal to both physicists who lived through these developments and younger researchers unfamiliar with the history behind today’s infrastructure. At its best, the book serves as a reminder of how much experimental physics depends not just on ideas, but on leadership, timing and institutional navigation.

That being said, it is not a typical scientific biography. It’s more of a curated mosaic, constructed through personal reflections and contextual essays. Readers looking for deep technical analysis will find it in parts, especially in the sections on CERES and NA60, but its real value lies in how it tracks the development of large-scale science across different fields, from high-energy physics to medical applications and beyond.

For those interested in the history of CERN, the rise of heavy-ion physics, or the institutional evolution of European science, this is a valuable read. And for those who knew or worked with Hans Specht, it offers a fitting tribute – not through nostalgia, but through careful documentation of the many ways Hans shaped the physics and the institutions we now take for granted.

Two takes on the economics of big science

At the 2024 G7 conference on research infrastructure in Sardinia, participants were invited to think about the potential socio-economic impact of the Einstein Telescope. Most physicists would have no expectation that a deeper knowledge of gravitational waves will have any practical use in the foreseeable future. What, then, will be the economic impact of building a gravitational-wave detector hundreds of metres underground in abandoned mines? What will be the societal impact of several kilometres of lasers and mirrors?

Such questions are strategically important for the future of fundamental science, which is increasingly big science. Two new books tackle its socio-economic impacts head-on, though with quite different approaches: one more qualitative in its research, the other more quantitative. What are the pros and cons of qualitative versus quantitative analysis in the social sciences? Personally, as an economist, at a certain point I tend to say: show me the figures! But, admittedly, when assessing the socio-economic impact of large-scale research infrastructures, if good statistical data are not available, I would always prefer a fine-grained qualitative analysis to quantitative models based on insufficient data.

Big Science, Innovation & Societal Contributions, edited by Shantha Liyanage (CERN), Markus Nordberg (CERN) and Marilena Streit-Bianchi (vice president of ARSCIENCIA), takes the qualitative route – a journey into mostly uncharted territory, asking difficult questions about the socio-economic impact of large-scale research infrastructures.

Big Science, Innovation & Societal Contributions

Some figures about the book may be helpful: the three editors collected 15 chapters from 34 authors, containing about 100 figures and tables and more than 700 references, and covering a wide range of scientific fields, including particle physics, astrophysics, medicine and computer science. A cursory reading of the list of about 300 acronyms, from AAI (Architecture Adaptive Integrator) to ZEPLIN (ZonEd Proportional scintillation in Liquid Noble gas detector), is a good test of how many research infrastructures and collaborations you already know.

After introducing the LHC, a chapter on new accelerator technologies explores a remarkable array of applications of accelerator physics. To name a few: CERN’s R&D in superconductivity is being applied in nuclear fusion; the CLOUD experiment uses particle beams to model atmospheric processes relevant to climate change (CERN Courier January/February 2025 p5); and the ELISA linac is being used to date Australian rock art, helping determine whether it originates from the Pleistocene or Holocene epochs (CERN Courier March/April 2025 p10).

A wide-ranging exploration of how large-scale research infrastructures generate socio-economic value

The authors go on to explore innovation with a straightforward six-step model: scanning, codification, abstraction, diffusion, absorption and impacting. This is a helpful compass to build a narrative. Other interesting issues discussed in this part of the book include governance mechanisms and leadership of large-scale scientific organisations, including in gravitational-wave astronomy. No chapter better illustrates the impact of science on human wellbeing than the survey of medical applications by Mitra Safavi-Naeini and co-authors, which covers three major domains of applications in medical physics: medical imaging with X-rays and PET; radiotherapy targeting cancer cells internally with radioactive drugs or externally using linacs; and more advanced but expensive particle-therapy treatments with beams of protons, helium ions and carbon ions. Personally, I would expect that some of these applications will be enhanced by artificial intelligence, which in turn will have an impact on science itself in terms of digital data interpretation and forecasting.

Sociological perspectives

The last part of the book takes a more sociological perspective, with discussions about cultural values, the social responsibility to make sure big data is open data, and social entrepreneurship. In his chapter on the social responsibility of big science, Steven Goldfarb stresses the importance of the role of big science for learning processes and cultural enhancement. This topic is particularly dear to me, as my previous work on the cost–benefit analysis of the LHC revealed that the value of human capital accumulation for early-stage researchers is among the biggest contributions to the machine’s return on investment.

I recommend Big Science, Innovation & Societal Contributions as a highly informative, non-technical and updated introduction to the landscape of big science, but I would suggest complementing it with another very recent book, The Economics of Big Science 2.0, edited by Johannes Gutleber and Panagiotis Charitos, both currently working at CERN. Charitos was also the co-editor of the volume’s predecessor, The Economics of Big Science, which focuses more on science policy, as well as public investment in science.

Why a “2.0” book? There is a shift of angle. The Economics of Big Science 2.0 builds upon the prior volume, but offers a more quantitative perspective on big science. Notably, it takes advantage of a larger share of contributions by economists, including myself as co-author of a chapter about the public’s perception of CERN.

The Economics of Big Science 2.0

It is worth clarifying that economics, as one domain within the social sciences more generally, has its own rules of the game and style. The social sciences are an umbrella encompassing sociology, political science, anthropology, history, management and communication studies, linguistics, psychology and more. The distinguishing role of economics among them is to build quantitative models and to test them against statistical evidence, a practice known as econometrics.

Here, the authors excel. The Economics of Big Science 2.0 offers a wide-ranging exploration of how large-scale research infrastructures generate socio-economic value, primarily driven by quantitative analysis. The authors deploy a diverse range of empirical methods, from cost–benefit analysis to econometric modelling, allowing them to assess the tangible effects of big science across multiple fields. Applied economics faces a unique challenge here, as big-science centres by definition do not come in large numbers. However, the authors involve large numbers of stakeholders, allowing for a statistical analysis of impacts and the estimation of expected values, standard errors and confidence intervals.

Societal impact

The Economics of Big Science 2.0 examines the socio-economic impact of ESA’s space programmes, the local economic benefits from large-scale facilities and the efficiency benefits from open science. The book measures public attitudes toward and awareness of science within the context of CERN, offering insights into science’s broader societal impacts. It grounds its analyses in a series of focused case studies, including particle colliders such as the LHC and FCC, synchrotron light sources like the ESRF and ALBA, and radio telescopes such as those operated by SARAO, illustrating the economic impacts of big science through a quantitative lens. In contrast to the more narrative and qualitative approach of Big Science, Innovation & Societal Contributions, The Economics of Big Science 2.0 distinguishes itself through a strong reliance on empirical data.

Ivan Todorov 1933–2025

Ivan Todorov, a theoretical physicist of outstanding academic achievement and a man of remarkable moral integrity, passed away on 14 February in his hometown of Sofia. He is best known for his prominent work on group-theoretical methods and the mathematical foundations of quantum field theory.

Ivan was born on 26 October 1933 into a family of literary scholars who played an active role in Bulgarian academic life. After graduating from the University of Sofia in 1956, he spent several years at JINR in Dubna and at IAS Princeton, before joining INRNE in Sofia. In 1974 he became a full member of the Bulgarian Academy of Sciences.

Ivan contributed substantially to the development of conformal quantum field theories in arbitrary dimensions. The classification and complete description of the unitary representations of the conformal group were collected in two well-known and widely used monographs by him and his collaborators. Ivan’s research on constructive quantum field theories, and the books devoted to the axiomatic approach, have strongly influenced modern developments in this area. His early results on the analytic properties of higher-loop Feynman diagrams have also found important applications in perturbative quantum field theory.

Ivan contributed substantially to the development of conformal quantum field theories in arbitrary dimensions

The scientifically highly successful international conferences and schools organised in Bulgaria during the Cold War under Ivan’s guidance served as meeting grounds for leading Russian and East European theoretical physicists and their West European and American colleagues. They were crucial for the development of theoretical physics in Bulgaria.

Everybody who knew Ivan was impressed by his vast culture and acute intellectual curiosity. His profound knowledge of modern mathematics allowed him to remain constantly in tune with new trends and ideas in theoretical physics. Ivan’s courteous and smiling way of discussing physics, always peppered with penetrating comments and suggestions, was inimitable. His passing is a great loss for theoretical physics, especially in Bulgaria, where he mentored a generation of researchers.

Jonathan L Rosner 1941–2025

Jon Rosner

Jonathan L Rosner, a distinguished theoretical physicist and professor emeritus at the University of Chicago, passed away on 24 May 2025. He made profound contributions to particle physics, particularly in quark dynamics and the Standard Model.

Born in New York City, Rosner grew up in Yonkers, NY. He earned his Bachelor of Arts in Physics from Swarthmore College in 1962 and completed his PhD at Princeton University in 1965 with Sam Treiman as his thesis advisor. His early academic appointments included positions at the University of Washington and Tel Aviv University. In 1969 he joined the faculty at the University of Minnesota, where he served until 1982. That year, he became a professor at the University of Chicago, where he remained a central figure in the Enrico Fermi Institute and the Department of Physics until his retirement in 2011.

Rosner’s research spanned a broad spectrum of topics in particle physics, with a focus on the properties and interactions of quarks and leptons in the Standard Model and beyond.

In a highly influential paper in 1969, he pointed out that the duality between hadronic s-channel scattering and t-channel exchanges could be understood graphically, in terms of quark worldlines. Approximately three months before the “November revolution” – the experimental discovery of charm–anticharm particles – Jon, together with the late Mary K Gaillard and Benjamin W Lee, published a seminal paper predicting the properties of hadronic states containing charm quarks.

He made significant contributions to the study of mesons and baryons, exploring their spectra and decay processes. His work on quarkonium systems, particularly the charmonium and bottomonium states, provided critical insights into the strong force that binds quarks together. He also made masterful use of algebraic methods in predicting and analysing CP-violating observables.

In more recent years, Jon focused on exotic combinations of quarks and antiquarks: tetraquarks and pentaquarks. In 2017 he co-authored a Physical Review Letters paper that provided the first robust prediction of a bbud tetraquark that would be stable under the strong interaction (CERN Courier November/December 2024 p33).

What truly set Jon apart was his rare ability to seamlessly integrate theoretical acumen with practical experimental engagement. While primarily a theoretician, he held a deep appreciation for experimental data and actively participated in the experimental endeavour. A prime example of this was his long-standing involvement with the CLEO collaboration at Cornell University.

He also collaborated on studies related to the detection of cosmic-ray air showers and contributed to the development of prototype systems for detecting radio pulses associated with these high-energy events. His interdisciplinary approach bridged theoretical predictions with experimental observations, enhancing the coherence between theory and practice in high-energy physics.

Unusually for a theorist, Jon was a high-level expert in electronics, rooted in his deep lifelong interest in amateur short-wave radio. As with everything else, he pursued it very thoroughly, from physics analysis to travelling to solar eclipses to take advantage of the increased propagation range of electromagnetic waves caused by changes in the ionosphere. Rosner was also deeply committed to public service within the scientific community. He served as chair of the Division of Particles and Fields of the American Physical Society in 2013, during which time he played a central role in organising the “Snowmass on the Mississippi” conference. This event was an essential part of the long-term strategic planning for the US high-energy physics programme. His leadership and vision were widely recognised and appreciated by his peers.

Throughout his career, Rosner received numerous accolades. He was a fellow of the American Physical Society and was awarded fellowships from the Alfred P. Sloan Foundation and the John Simon Guggenheim Memorial Foundation. His publication record includes more than 500 theoretical papers, reflecting his prolific and highly impactful career in physics. He is survived by his wife, Joy, their two children, Hannah and Benjamin, and a granddaughter, Sadie.

César Gómez 1954–2025

César Gómez, whose deep contributions to gauge theory and quantum gravity were matched by his scientific leadership, passed away on 7 April 2025 after a short illness, leaving his friends and colleagues with a deep sense of loss.

César gained his PhD in 1981 from the Universidad de Salamanca, where he became professor after working at Harvard, the Institute for Advanced Study and CERN. He held an invited professorship at the Université de Genève between 1987 and 1991, moving in the latter year to the Consejo Superior de Investigaciones Científicas (CSIC) in Madrid, where he eventually became a founding member of the Instituto de Física Teórica (IFT) UAM–CSIC. He became emeritus in 2024.

Among the large number of topics he worked on during his scientific career, César was initially fascinated by the dynamics of gauge theories. He dedicated his postdoctoral years to problems concerning the structure of the quantum vacuum in QCD, making some crucial contributions.

Focusing in the 1990s on the physics of two-dimensional conformal field theories, he used his special gifts to squeeze physics out of formal structures, leaving his mark in works ranging from superstrings to integrable models, and co-authoring with Martí Ruiz-Altaba and Germán Sierra the book Quantum Groups in Two-Dimensional Physics (Cambridge University Press, 1996). With the new century and the rise of holography, César returned to the topics of his youth: the renormalisation group and gauge theories, now with a completely different perspective.

Far from settling down, in his last decade we discovered a very daring César, plunging together with Gia Dvali and other collaborators into a radical approach to understanding symmetry breaking in gauge theories, opening new avenues in the study of black holes and the emergence of spacetime in quantum gravity. The magic of von Neumann algebras inspired him to propose an elegant, deep and original understanding of inflationary universes and their quantum properties. This research programme led him to one of his most fertile and productive periods, sadly cut short by his unexpected passing at a time when he was bursting with ideas and projects.

César’s influence went beyond his papers. After his arrival at CSIC as an international leader in string theory, he acted as a pole of attraction. His impact was felt both through the training of graduate students and through the many courses he taught, which left a lasting memory on the new generations.

Contrasting with his abstract scientific style, César also had a pragmatic side, full of vision, momentum and political talent. A major part of his legacy is the creation of the IFT, whose existence would be unthinkable without César among the small group of theoretical physicists from Universidad Autónoma de Madrid and CSIC who made a dream come true. For him, the IFT was more than his research institute: it was the home he helped to build.

Philosophy was a true second career for César, dating back to his PhD in Salamanca and strengthened at Harvard, where he started a lifelong friendship with Hilary Putnam. The philosophy of language was one of his favourite subjects for philosophical musings, and he dedicated to it an inspiring book in Spanish in 2003.

César’s impressive and eclectic knowledge of physics always transformed blackboard discussions into a delightful and fascinating experience, while his extraordinary ability to establish connections between apparently remote notions was extremely motivating at the early stages of a project. A regular presence at seminars and journal clubs, and always conspicuous by his many penetrating and inspiring questions, he was a beloved character among graduate students, who felt the excitement of knowing that he could turn every seminar into a unique event.

César was an excellent scientist with a remarkable personality. He was a wonderful conversationalist on any possible topic, encouraging open discussions free of prejudice, and building bridges with all conversational partners. He cherished his wife Carmen and daughters Ana and Pepa, who survive him.

Farewell, dear friend. May you rest in peace, and may your memory be our blessing.

The battle of the Big Bang

As Arthur Koestler wrote in his seminal 1959 work The Sleepwalkers, “The history of cosmic theories … may without exaggeration be called a history of collective obsessions and controlled schizophrenias; and the manner in which some of the most important individual discoveries were arrived at, reminds one more of a sleepwalker’s performance than an electronic brain’s.” Koestler’s trenchant observation about the state of cosmology in the first half of the 20th century is perhaps even more true of cosmology in the first half of the 21st, and Battle of the Big Bang: The New Tales of Our Cosmic Origins provides an entertaining – and often refreshingly irreverent – update on the state of current collective obsessions and controlled schizophrenias in cosmology’s effort to understand the origin of the universe. The product of a collaboration between a working cosmologist (Afshordi) and a science communicator (Halper), Battle of the Big Bang tells the story of our modern efforts to comprehend the nature of the first moments of time, back to the moment of the Big Bang and even before.

Rogues’ gallery

The story told by the book combines lucid explanations of a rogues’ gallery of modern cosmological theories, some astonishingly successful, others less so, interspersed with anecdotes culled from Halper’s numerous interviews with key players in the game. These stories of the real people behind the theories add humanistic depth to the science, and the balance between Halper’s engaging storytelling and Afshordi’s steady-handed illumination of often esoteric scientific ideas is mostly a winning combination; the book is readable, without sacrificing too much scientific depth. In this respect, Battle of the Big Bang is reminiscent of Dennis Overbye’s 1991 Lonely Hearts of the Cosmos. As with Overbye’s account of the famous conference-banquet fist fight between Rocky Kolb and Gary Steigman, there is no shortage here of renowned scientists behaving like children, and the “mean girls of cosmology” angle makes for an entertaining read. The story of University of North Carolina professor Paul Frampton getting catfished by cocaine smugglers posing as model Denise Milani and ending up in an Argentine prison, for example, is not one you see coming.

Battle of the Big Bang: The New Tales of Our Cosmic Origins

A central conflict propelling the narrative is the longstanding feud between Andrei Linde and Alan Guth, both originators of the theory of cosmological inflation, and Paul Steinhardt, also an originator of the theory who later transformed into an apostate and bitter critic of the theory he helped establish.

Inflation – a hypothesised period of exponential cosmic expansion by more than 26 orders of magnitude that set the initial conditions for the hot Big Bang – is the gorilla in the room, a hugely successful theory that over the past several decades has racked up win after win when confronted by modern precision cosmology. Inflation is rightly considered by most cosmologists to be a central part of the “standard” cosmology, and its status as a leading theory inevitably makes it a target of critics like Steinhardt, who argue that inflation’s inherent flexibility means that it is not a scientific theory at all. Inflation is introduced early in the book, and for the remainder, Afshordi and Halper ably lead the reader through a wild mosaic of alternative theories to inflation: multiverses, bouncing universes, new universes birthed from within black holes, extra dimensions, varying light speed and “mirror” universes with reversed time all make appearances, a dizzying inventory of our most recent collective obsessions and schizophrenias.

In the later chapters, Afshordi describes some of his own efforts to formulate an alternative to inflation, and it is here that the book is at its strongest; the voice of a master of the craft confronting his own unconscious assumptions and biases makes for compelling reading. I have known Niayesh as a friend and colleague for more than 20 years. He is a fearlessly creative theorist with deep technical skill, but he has the heart of a rebel and a poet, and I found myself wishing that the book gave his unique voice more room to shine, instead of burying it beneath too many mundane pop-science tropes; the book could have used more of the science and less of the “science communication”. At times the pop-culture references come so thick that the reader feels as if he is having to shake them off his leg.

Compelling arguments

Anyone who reads science blogs or follows science on social media is aware of the voices, some of them from within mainstream science and many from further out on the fringe, arguing that modern theoretical physics suffers from a rigid orthodoxy that serves to crowd out worthy alternative ideas to understand problems such as dark matter, dark energy and the unification of gravity with quantum mechanics. This has been the subject of several books such as Lee Smolin’s The Trouble with Physics and Peter Woit’s Not Even Wrong. A real value in Battle of the Big Bang is to provide a compelling counterargument to that pessimistic narrative. In reality, ambitious scientists like nothing better than overturning a standard paradigm, and theorists have put the standard model of cosmology in the cross hairs with the gusto of assassins gunning for John Wick. Despite – or perhaps because of – its focus on conflict, this book ultimately paints a picture of a vital and healthy scientific process, a kind of controlled chaos, ripe with wild ideas, full of the clash of egos and littered with the ashes of failed shots at glory.

What the book is not is a reliable scholarly work on the history of science. Not only was the manuscript rather haphazardly copy-edited (the renowned Mount Palomar telescope, for example, is not “two hundred foot”, but in fact 200 inches), but the historical details are sometimes smoothed over to fit a coherent narrative rather than presented in their actual messy accuracy. While I do not doubt the anecdote of David Spergel saying “we’re dead”, referring to cosmic strings when data from the COBE satellite was first released, it was not COBE that killed cosmic strings. The blurry vision of COBE could accommodate either strings or inflation as the source of fluctuations in the cosmic microwave background (CMB), and it took a clearer view to make the distinction. The final nail in the coffin came from BOOMERanG nearly a decade later, with the observation of the second acoustic peak in the CMB. And it was not, as claimed here, BOOMERanG that provided the first evidence for a flat geometry to the cosmos; that happened a few years earlier, with the Saskatoon and CAT experiments.

Afshordi and Halper ably lead the reader through a wild mosaic of alternative theories to inflation

The book makes a point of the premature death of Dave Wilkinson, when in fact he died at age 67, not (as is implied in the text) in his 50s. Wilkinson – who was my freshman physics professor – was a great scientist and a gifted teacher, and it is appropriate to memorialise him, but he had a long and productive career.

Besides these points of detail, there are some more significant omissions. The book relates the story of how the Ukrainian physicist Alex Vilenkin, blacklisted from physics and working as a zookeeper in Kharkiv, escaped the Soviet Union. Vilenkin moved to SUNY Buffalo, where I am currently a professor, because he had mistaken Mendel Sachs, a condensed matter theorist, for Ray Sachs, who originally predicted fluctuations in the CMB. It’s a funny story, and although the authors note that Vilenkin was blacklisted for refusing to be an informant for the KGB, they omit the central context that he was Jewish, one of many Jews banished from academic life by Soviet authorities who escaped the stifling anti-Semitism of the Soviet Union for scientific freedom in the West. This history resonates today in light of efforts by some scientists to boycott Israeli institutes and even blacklist Israeli colleagues. Unlike the minutiae of CMB physics, this matters, and Battle of the Big Bang should have been more careful to tell the whole story.

New frontiers in science in the era of AI

New Frontiers in Science in the Era of AI

At a time when artificial intelligence is more buzzword than substance in many corners of public discourse, New Frontiers in Science in the Era of AI arrives with a clear mission: to contextualise AI within the long arc of scientific thought and current research frontiers. This book is not another breathless ode to ChatGPT or deep learning, nor a dry compilation of technical papers. Instead, it’s a broad and ambitious survey, spanning particle physics, evolutionary biology, neuroscience and AI ethics, that seeks to make sense of how emerging technologies are reshaping not only the sciences but knowledge and society more broadly.

The book’s chapters, written by established researchers from diverse fields, aim to avoid jargon and engage non-specialists without compromising depth. The book offers an insight into how physics remains foundational across scientific domains, and considers the social, ethical and philosophical implications of AI-driven science.

The first section, “New Physics World”, will be the most familiar terrain for physicists. Ugo Moschella’s essay, “What Are Things Made of? The History of Particles from Thales to Higgs”, opens with a sweeping yet grounded narrative of how metaphysical questions have persisted alongside empirical discoveries. He draws a bold parallel between the ancient idea of mass emerging from a cosmic vortex and the Higgs mechanism, a poetic analogy that holds surprising resonance. Thales, who lived roughly from 624 to 545 BCE, proposed that water is the fundamental substance out of which all others are formed. Following his proposal, Pythagoras and Empedocles added three more items to complete the list of the elements: earth, air and fire. Aristotle added a fifth element: the “aether”. The physical foundation of the standard cosmological model of the ancient world is then rooted in the Aristotelian conceptions of movement and gravity, argues Moschella. His essay lays the groundwork for later chapters that explore entanglement, computation and the transition from thought experiments to quantum technology and AI.

A broad and ambitious survey spanning particle physics, evolutionary biology, neuroscience and AI ethics

The second and third sections venture into evolutionary genetics, epigenetics (the study of heritable changes in gene expression) and neuroscience – areas more peripheral to physics, but timely nonetheless. Contributions by Eva Jablonka, evolutionary theorist and geneticist from Tel Aviv University, and Telmo Pievani, a biologist from the University of Padua, explore the biological implications of gene editing, environmental inheritance and self-directed evolution, as well as the ever-blurring boundaries between what is considered “natural” versus “artificial”. The authors propose that the human ability to edit genes is itself an evolutionary agent – a novel and unsettling idea, as this would be an evolution driven by a will and not by chance. Neuroscientist Jason D Runyan reflects compellingly on free will in the age of AI, blending empirical work with philosophical questions. These chapters enrich the central inquiry of what it means to be a “knowing agent”: someone who acts on nature according to their will, influenced by biological, cognitive and social factors. For physicists, the lesson may be less about adopting specific methods and more about recognising how their own field’s assumptions – about determinism, emergence or complexity – are echoed and challenged in the life sciences.

Perspectives on AI

The fourth section, “Artificial Intelligence Perspectives”, most directly addresses the book’s central theme. Quality, scientific depth and rigour are not evenly distributed across these chapters, but all are stimulating nonetheless. Topics range from the role of open-source AI in student-led projects at CERN’s IdeaSquare to real-time astrophysical discovery. Michael Coughlin and colleagues’ chapter on accelerated AI in astrophysics stands out for its technical clarity and relevance, a solid entry point for physicists curious about AI beyond popular discourse. Absent is an in-depth treatment of current AI applications in high-energy physics, such as anomaly detection in LHC triggers or generative models for simulation. Given the book’s CERN affiliations, this omission is surprising and leaves out some of the most active intersections of AI and high-energy physics (HEP) research.

Even as AI expands our modelling capacity, the epistemic limits of human cognition may remain permanent

The final sections address cosmological mysteries and the epistemological limits of human cognition. David H Wolpert’s epilogue, “What Can We Know About That Which We Cannot Even Imagine?”, serves as a reminder that even as AI expands our modelling capacity, the epistemic limits of human cognition – including conceptual blind spots and unprovable truths – may remain permanent. This tension is not a contradiction but a sobering reflection on the intrinsic boundaries of scientific – and more widely human – knowledge.

This eclectic volume is best read as a reflective companion to one’s own work. For advanced students, postdocs and researchers open to thinking beyond disciplinary boundaries, the book is an enriching, if at times uneven, read.

To a professional scientist, the book occasionally romanticises interdisciplinary exchange between specialised fields without fully engaging with the real methodological difficulties of translating complex concepts between disciplines. Topics including the limitations of current large-language models, the reproducibility crisis in AI research, and the ethical risks of data-driven surveillance would have benefited from deeper treatment. Ethical questions in HEP may be less prominent in the public eye, but they still exist. To mention a few: the environmental impact of large-scale facilities, the question of spending substantial public money on such mega-science projects, the potential dual-use concerns of the technologies developed, the governance of massive international collaborations and data transparency. These deserve more attention, and the book could have explored them more thoroughly.

A timely snapshot

Still, the book doesn’t pretend to be exhaustive. Its strength lies in curating diverse voices and offering a timely snapshot of science, as well as shedding light on ethical and philosophical questions associated with science that are less frequently discussed.

There is a vast knowledge gap in today’s society. Researchers often become so absorbed in their specific domains that they lose sight of their work’s broader philosophical and societal context and the need to explain it to the public. Meanwhile, public misunderstanding of science, and the resulting confusion between fact, theory and opinion, is growing. This gulf provides fertile ground for political manipulation and ideological extremism. New Frontiers in Science in the Era of AI has the immense merit of trying to bridge that gap. The editors and contributors deserve credit for producing a work of both scientific and societal relevance.

Quantum culture

Kanta Dihal

How has quantum mechanics influenced culture in the last 100 years?

Quantum physics offers an opportunity to make the impossible seem plausible. For instance, if your superhero dies dramatically but the actor is still on the payroll, you have a few options available. You could pretend the hero miraculously survived the calamity of the previous instalment. You could also pretend the events of the previous instalment never happened. And then there is Star Wars: “Somehow, Palpatine returned.”

These days, however, quantum physics tends to come to the rescue. Because quantum physics offers the wonderful option to maintain that all previous events really happened, and yet your hero is still alive… in a parallel universe. Much is down to the remarkable cultural impact of the many-worlds interpretation of quantum physics, which has been steadily growing in fame (or notoriety) since Hugh Everett introduced it in 1957.

Is quantum physics unique in helping fiction authors make the impossible seem possible?

Not really! Before the “quantum” handwave, there was “nuclear”: think of Dr Manhattan from Watchmen, or Godzilla, as expressions of the utopian and dystopian expectations of that newly discovered branch of science. Before nuclear, there was electricity, with Frankenstein’s monster as perhaps its most important product. We can go all the way back to the invention of hydraulics in the ancient world, which led to an explosion of tales of liquid-operated automata – early forms of artificial intelligence – such as the bronze soldier Talos in ancient Greece. We have always used our latest discoveries to dream of a future in which our ancient tales of wonder could come true.

Is the many-worlds interpretation the most common theory used in science fiction inspired by quantum mechanics?

Many-worlds has become Marvel’s favourite trope. It allows them to expand on an increasingly entangled web of storylines that borrow from a range of remakes and reboots, as well as introducing gender and racial diversity into old stories. Marvel may have mainstreamed this interpretation, but the viewers of the average blockbuster may not realise exactly how niche it is, and how many alternatives there are. With many interpretations vying for acceptance, every once in a while a brave social scientist ventures to survey quantum physicists’ preferences. These studies tend to confirm the dominance of the Copenhagen interpretation, with its collapse of the wavefunction rather than the branching universes characteristic of the Everett interpretation. In a 2016 study, for instance, only 6% of quantum physicists claimed that Everett was their favourite interpretation. In 2018 I looked through a stack of popular quantum-physics books published between 1980 and 2017, and found that more than half of these books endorse the many-worlds interpretation. A non-physicist might be forgiven for thinking that quantum physicists are split between two equal-sized enemy camps of Copenhagenists and Everettians.

What makes the many-worlds interpretation so compelling?

Answering this brings us to a fundamental question that fiction has enjoyed exploring since humans first told each other stories: what if? “What if the Nazis won the Second World War?” is pretty much an entire genre by itself these days. Before that, there were alternate histories of the American Civil War and many other key historical events. This means that the many-worlds interpretation fits smoothly into an existing narrative genre. It suggests that these alternate histories may be real, that they are potentially accessible to us and simply happening in a different dimension. Even the specific idea of branching alternative universes existed in fiction before Hugh Everett applied it to quantum mechanics. One famous example is the 1941 short story The Garden of Forking Paths by the Argentinian writer Jorge Luis Borges, in which a writer tries to create a novel in which everything that could happen, happens. His story anticipated the many-worlds interpretation so closely that Bryce DeWitt used an extract from it as the epigraph to his 1973 edited collection The Many-Worlds Interpretation of Quantum Mechanics. But the most uncanny example is, perhaps, Andre Norton’s science-fiction novel The Crossroads of Time, from 1956 – published when Everett was writing his thesis. In her novel, a group of historians invents a “possibility worlds” theory of history. The protagonist, Blake Walker, discovers that this theory is true when he meets a group of men from a parallel universe who are on the hunt for a universe-travelling criminal. Travelling with them, Blake ends up in a world where Hitler won the Battle of Britain. Of course, in fiction, only worlds in which a significant change has taken place are of any real interest to the reader or viewer. (Blake also visits a world inhabited by metal dinosaurs.) The truly uncountable number of slightly different universes Everett’s theory implies is extremely difficult to get our heads around. Nonetheless, our storytelling mindsets have long primed us for a fascination with the many-worlds interpretation.

Have writers put other interpretations to good use?

For someone who really wants to put their physics degree to use in their spare time, I’d recommend the works of Greg Egan: although his novel Quarantine uses the controversial conscious collapse interpretation, he always ensures that the maths checks out. Egan’s attitude towards the scientific content of his novels is best summed up by a quote on his blog: “A few reviewers complained that they had trouble keeping straight [the science of his novel Incandescence]. This leaves me wondering if they’ve really never encountered a book that benefits from being read with a pad of paper and a pen beside it, or whether they’re just so hung up on the idea that only non-fiction should be accompanied by note-taking and diagram-scribbling that it never even occurred to them to do this.”

What other quantum concepts are widely used and abused?

We have Albert Einstein to thank for the extremely evocative description of quantum entanglement as “spooky action at a distance”. As with most scientific phenomena, a catchy nickname such as this one is extremely effective for getting a concept to stick in the popular imagination. While Einstein himself did not initially believe quantum entanglement could be a real phenomenon, as it would violate local causality, we now have both evidence and applications of entanglement in the real world, most notably in quantum cryptography. But in science fiction, the most common application of quantum entanglement is in faster-than-light communication. In her 1966 novel Rocannon’s World, Ursula K Le Guin describes a device called the “ansible”, which interstellar travellers use to instantaneously communicate with each other across vast distances. Her term was so influential that it now regularly appears in science fiction as a widely accepted name for a faster-than-light communications device, the same way we have adopted the word “robot” from the 1920 play R.U.R. by Karel Čapek.

Fiction may get the science wrong, but that is often because the story it tries to tell existed long before the science

How were cultural interpretations of entanglement influenced by the development of quantum theory?

It wasn’t until the 1970s that no-signalling theorems conclusively proved that entanglement correlations, while instantaneous, cannot be controlled or used to send messages. Explaining why is a lot more complex than communicating the notion that observing a particle here has an effect on a particle there. Once again, quantum physics seemingly provides just enough scientific justification to resolve an issue that has plagued science fiction ever since the speed of light was discovered: how can we travel through space, exploring galaxies, settling on distant planets, if we cannot communicate with each other? This same line of thought has sparked another entanglement-related invention in fiction: what if we can send not just messages but also people, or even entire spaceships, across faster-than-light distances using entanglement? Conveniently, quantum physicists had come up with another extremely evocative term that fit this idea perfectly: quantum teleportation. Real quantum teleportation only transfers information. But the idea of teleportation is so deeply embedded in our storytelling past that we can’t help extrapolating it. From stories of gods that could appear anywhere at will to tales of portals that lead to strange new worlds, we have always felt limited by the speeds of travel we have managed to achieve – and once again, the speed of light seems to be a hard limit that quantum teleportation might be able to get us around. In his 1999 novel Timeline, Michael Crichton sends a group of researchers back in time using quantum teleportation, and the videogame Half-Life 2 contains teleportation devices that similarly seem to work through quantum entanglement.

What quantum concepts have unexplored cultural potential?

Clearly, interpretations other than many worlds have a PR problem, so is anyone willing to write a chart topper based on the relational interpretation or QBism? More generally, I think that any question we do not yet have an answer to, or any theory that remains untestable, is a potential source for an excellent story. Richard Feynman famously said, “I think I can safely say that nobody understands quantum mechanics.” Ironically, it is precisely because of this that quantum physics has become such a widespread building block of science fiction: it is just hard enough to understand, just unresolved and unexplained enough to keep our hopes up that one day we might discover that interstellar communication or inter-universe travel might be possible. Few people would choose the realities of theorising over these ancient dreams. That said, the theorising may never have happened without the dreams. How many of your colleagues are intimately acquainted with the very science fiction they criticise for having unrealistic physics? We are creatures of habit and convenience held together by stories, physicists no less than everyone else. This is why we come up with catchy names for theories, and stories about dead-and-alive cats. Fiction may often get the science wrong, but that is often because the story it tries to tell existed long before the science.

A scientist in sales

Massimiliano Pindo

The boundary between industry and academia can feel like a chasm. Opportunity abounds for those willing to bridge the gap.

Massimiliano Pindo began his career working on silicon pixel detectors at the DELPHI experiment at the Large Electron–Positron Collider. While at CERN, Pindo developed analytical and technical skills that would later become crucial in his career. But despite his passion for research, doubts clouded his hopes for the future.

“I wanted to stay in academia,” he recalls. “But at that time, it was getting really difficult to get a permanent job.” Pindo moved from his childhood home in Milan to Geneva, before eventually moving back in with his parents while applying for his next research grant. “The golden days of academia where people got a fixed position immediately after a postdoc or PhD were over.”

The path forward seemed increasingly unstable, defined by short-term grants, constant travel and an inability to plan long-term. There was a constant stream of new grant applications, but permanent contracts were few and far between. With competition increasing, job stability seemed further and further out of reach. “You could make a decent living,” Pindo says, “but the real problem was you could not plan your life.”

Translatable skills

Faced with the unpredictability of academic work, Pindo transitioned into industry – a leap that eventually led him to his current role as marketing and sales director at Renishaw, France, a global engineering and scientific technology company. Pindo was confident that his technical expertise would provide a strong foundation for a job beyond academia, and indeed he found that “hard” skills such as analytical thinking, problem-solving and a deep understanding of technology, which he had honed at CERN alongside soft skills such as teamwork, languages and communication, translated well to his work in industry.

“When you’re a physicist, especially a particle physicist, you’re used to breaking down complex problems, selecting what is really meaningful amongst all the noise, and addressing these issues directly,” Pindo says. His experience in academia gave him the confidence that industry challenges would pale in comparison. “I was telling myself that in the academic world, you are dealing with things that, at least on paper, are more complex and difficult than what you find in industry.”

Initially, these technical skills helped Pindo become a device engineer for a hardware company, before making the switch to sales. The gradual transition from academia to something more hands-on allowed him to really understand the company’s product on a technical level, which made him a more desirable candidate when transitioning into marketing.

“When you are in B2B [business-to-business] mode and selling technical products, it’s always good to have somebody who has technical experience in the industry,” explains Pindo. “You have to have a technical understanding of what you’re selling, to better understand the problems customers are trying to solve.”

However, this experience also allowed him to recognise gaps in his knowledge. As he began gaining more responsibility in his new, more business-focused role, Pindo decided to go back to university and get an MBA. During the programme, he was able to familiarise himself with the worlds of human resources, business strategy and management – skills that aren’t typically the focus in a physics lab.

Pindo’s journey through industry hasn’t been a one-way ticket out of academia. Today, he still maintains a foothold in the academic world, teaching strategy as an affiliated professor at the Sorbonne. “In the end you never leave the places you love,” he says. “I got out through the door – now I’m getting back in through the window!”

Transitioning between industry and academia was not entirely seamless. Misconceptions loomed on both sides, and it took Pindo a while to find a balance between the two.

“There is a stereotype that scientists are people who can’t adapt to industrial environments – that they are too abstract, too theoretical,” Pindo explains. “People think scientists are always in the clouds, disconnected from reality. But that’s not true. The science we make is not the science of cartoons. Scientists can be people who plan and execute practical solutions.”

The misunderstanding, he says, goes both ways. “When I talk to alumni still in academia, many think that industry is a nightmare – boring, routine, uninteresting. But that’s also false,” Pindo says. “There’s this wall of suspicion. Academics look at industry and think, ‘What do they want? What’s the real goal? Are they just trying to make more money?’ There is no trust.”

Tight labour markets

For Pindo, this divide is frustrating and entirely unnecessary. Now with years of experience navigating both worlds, he envisions a more fluid connection between academia and industry – one that leverages the strengths of both. “Industry is currently facing tight labour markets for highly skilled talent, and academia doesn’t have access to the money and practical opportunities that industry can provide,” says Pindo. “Both sides need to work together.”

To bridge this gap, Pindo advocates a more open dialogue and a revolving door between the two fields – one that allows both academics and industry professionals to move fluidly back and forth, carrying their expertise across boundaries. Both sides have much to gain from shared knowledge and collaboration. One way to achieve this, he suggests, is through active participation in alumni networks and university events, which can nurture lasting relationships and mutual understanding. If more professionals embraced this mindset, it could help alleviate the very instability that once pushed him out of academia, creating a landscape where the boundaries between science and industry blur to the benefit of both.

“Everything depends on active listening. You always have to learn from the person in front of you, so give them the chance to speak. We have a better world to build, and that comes only from open dialogue and communication.”
