
A bridge between popular and textbook science

Most popular science books are written to reach the largest audience possible, which comes with certain sacrifices. The assumption is that many readers might be deterred by technical topics and language, especially by equations that require higher mathematics. In physics one can therefore usually distinguish textbooks from popular physics books by flicking through the pages and checking for symbols.

The Biggest Ideas in the Universe: Space, Time and Motion, the first in a three-part series by Sean Carroll, goes against this trend. Written for “…people who have no more mathematical experience than high-school algebra, but are willing to look at an equation and think about what it means”, the book never lets its arguments become muddied because the maths has grown too advanced.

Concepts and theories

The first part of the book covers nine topics including conservation, space–time, geometry, gravity and black holes. Carroll spends the first few chapters introducing the reader to the thought process of a theoretical physicist: how to develop a sense for symmetries, the conservation of charges and expansions in small parameters. The book also gives readers a fast introduction to calculus, using geometric arguments to define derivatives and integrals. By the end of the third chapter, the concepts of differential equations, phase space and the principle of least action have been introduced.

The central part of the book focusses on geometry. A discussion of the meaning of space and time in physics is followed by the introduction of Minkowski spacetime, with considerable effort given to the philosophical meaning of these concepts. The third part is the most technical: it covers differential geometry and a beautiful derivation of Einstein’s equation of general relativity, and the final chapter uses the Schwarzschild solution to discuss black holes.

The Biggest Ideas in the Universe

It is a welcome development that publishers and authors such as Carroll are confident that books like this will find a sizeable readership (another good, recent example of advanced popular-physics writing is Leonard Susskind’s “The Theoretical Minimum” series). Many topics in physics can only be fully appreciated if the equations are explained and if chapters go beyond the limitations of typical popular science books. Carroll’s writing style and the structure of the book help to make this case: all concepts are carefully introduced and, even though the book is very dense and covers a lot of material, everything is interconnected and readers won’t feel lost. Regular references to the historical steps in discovering theories and concepts loosen up the text. Two examples are the correspondence between Leibniz and Clarke about the nature of space and the interesting discussion of Einstein’s and Hilbert’s different approaches to general relativity. The whole series, the remaining two parts of which will be published soon, is accompanied by recorded lectures that are freely available online and present the topic of every chapter, along with answers to questions on these topics.

It is difficult to find any weaknesses in this book. Figures are often labelled only with symbols, which readers unused to physics notation must look up in the text; more text in the figures would make them even more accessible. Strangely, the section introducing entropy is not supported by equations and, given the technical detail of all other parts of the book, Carroll could have taken advantage of the mathematical groundwork of the previous chapters here.

I want to emphasise that every topic discussed in The Biggest Ideas in the Universe is well-established physics. There are no flashy but speculative theories, and no unbalanced focus on the science-fiction ideas that are often used to attract readers to theoretical physics. The book stands apart from similar titles by offering insights that can only be obtained if the underlying equations are explained and not just mentioned.

Anyone who is interested in fundamental physics is encouraged to read this book, especially young people interested in studying physics because they will get an excellent idea of the type of physical arguments they will encounter at university. Those who think their mathematical background isn’t sufficient will likely learn many new things, even though the later chapters are quite technical. And if you are at the other end of the spectrum, such as a working physicist, you will find the philosophical discussions of familiar concepts and the illuminating arguments included to elicit physical intuition most useful.

Digging deeper into invisible Higgs-boson decays

ATLAS figure 1

Studies of the Higgs boson by ATLAS and CMS have observed and measured a large spectrum of production and decay mechanisms. Its relatively long lifetime and low expected width (4.1 MeV, compared with the GeV-range decay widths of the W and Z bosons) make the Higgs boson a sensitive probe for small couplings to new states that may measurably distort its branching fractions. The search for invisible or yet undetected decay channels is thus highly relevant.
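To give a sense of scale (a back-of-the-envelope sketch, not part of the ATLAS analysis), a decay width Γ translates into a mean lifetime via τ = ħ/Γ; the function name and the rounded Z-boson width below are illustrative:

```python
# Rough illustration: convert a decay width to a mean lifetime via tau = hbar / Gamma.
# The constant is standard; this is a sketch, not ATLAS analysis code.
HBAR_MEV_S = 6.582e-22  # reduced Planck constant in MeV*s

def lifetime_from_width(gamma_mev: float) -> float:
    """Return the mean lifetime in seconds for a total decay width in MeV."""
    return HBAR_MEV_S / gamma_mev

print(lifetime_from_width(4.1))     # predicted Higgs width: ~1.6e-22 s
print(lifetime_from_width(2495.0))  # Z-boson width (~2.5 GeV): ~2.6e-25 s
```

The narrow 4.1 MeV width thus corresponds to a lifetime roughly 600 times longer than that of the Z boson, which is what makes small BSM couplings measurably distort the branching fractions.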

Dark-matter (DM) particles created in LHC collisions would have no measurable interaction with the ATLAS detector and thus would be “invisible”, but could still be detected via the observation of missing transverse momentum in an event, similarly to neutrinos. The Standard Model (SM) predicts the Higgs boson to decay invisibly via H → ZZ*→ 4ν in only 0.1% of cases. However, this value could be significantly enhanced if the Higgs boson decays into a pair of (light enough) DM particles. Thus, by constraining the branching fraction of Higgs-boson decays to invisible particles it is possible to constrain DM scenarios and probe other physics beyond the SM (BSM).

The ATLAS collaboration has performed comprehensive searches for invisible decays of the Higgs boson considering all its major production modes: vector-boson fusion with and without additional final-state photons, gluon fusion in association with a jet from initial-state radiation, and associated production with a leptonically decaying Z boson or a top quark–antiquark pair. The results of these searches have now been combined, including inputs from Run 1 and Run 2 analyses. They yield an upper limit of 10.7% on the branching ratio of the Higgs boson to invisible particles at 95% confidence level, with an unprecedented expected sensitivity of 7.7%. The result is used to extract upper limits on the spin-independent DM–nucleon scattering cross section for DM masses smaller than about 60 GeV in a variety of Higgs-portal models (figure 1). In this range, and for the models considered, searches for invisible Higgs-boson decays are more sensitive than direct-detection DM–nucleon scattering experiments.

ATLAS figure 2

An alternative way to constrain possible undetected decays of the Higgs boson is to measure its total decay width ΓH. Combining the observed value of the width with measurements of the branching fractions to observed decays allows the partial width for decays to new particles to be inferred. Directly measuring ΓH at the LHC is not possible as it is much smaller than the detector resolution. However, ΓH can be constrained by taking advantage of an unusual feature of the H → ZZ(*) decay channel: the rapid increase in available phase space for the H → ZZ(*) decay as mH approaches the 2mZ threshold counteracts the mass dependence of Higgs-boson production. Furthermore, this far “off-shell” production above 2mZ has a negligible ΓH dependence, unlike “on-shell” production near the Higgs-boson mass of 125 GeV. Comparing the Higgs-boson production rates in these two regions therefore allows an indirect measurement of ΓH. Although some assumptions are required (e.g. that the relation between on-shell and off-shell production is not modified by BSM effects), the measurement is sensitive to the value of ΓH expected in the SM. Recently, ATLAS measured the off-shell production cross section using both the four-charged-lepton (4ℓ) and two-charged-lepton plus two-neutrino (2ℓ2ν) final states, finding evidence for off-shell Higgs-boson production with a significance of 3.3σ (figure 2). By combining the previously measured on-shell Higgs-boson production cross section with the off-shell one, ΓH was found to be 4.5 +3.3 −2.5 MeV, which agrees with the SM prediction of 4.1 MeV but leaves plenty of room for possible BSM contributions.
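The logic of this extraction can be sketched in a few lines. Since the on-shell rate scales as (couplings)⁴/ΓH while the off-shell rate is width-independent, the ratio of the off-shell to on-shell signal strengths (each normalised to its SM expectation) directly rescales the SM width. A minimal sketch, with an illustrative function name and made-up signal-strength values:

```python
# Sketch of the indirect width extraction (illustrative, not the ATLAS likelihood).
# On-shell rate ~ couplings^4 / Gamma_H; off-shell rate ~ couplings^4 (width-independent),
# so the ratio of measured signal strengths isolates Gamma_H.
GAMMA_H_SM = 4.1  # MeV, SM prediction for the total Higgs width

def width_from_signal_strengths(mu_on: float, mu_off: float) -> float:
    """Infer Gamma_H (MeV) from on-shell and off-shell signal strengths
    normalised to their SM expectations."""
    return GAMMA_H_SM * mu_off / mu_on

# If both regions match the SM exactly, the SM width is recovered:
print(width_from_signal_strengths(1.0, 1.0))  # 4.1
# A mild off-shell excess (e.g. mu_off = 1.1) would point to a larger width:
print(width_from_signal_strengths(1.0, 1.1))  # ~4.5
```

In this simplified picture, a central value of 4.5 MeV corresponds to a slight off-shell excess over the SM expectation, consistent with the quoted uncertainties.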

This sensitivity will improve thanks to the new data to be collected in Run 3 of the LHC, which should more than triple the size of the Run 2 dataset.

Design principles of theoretical physics

“Now I know what the atom looks like!” Ernest Rutherford’s simple statement belies the scientific power of reductionism. He had recently discovered that atoms have substructure, notably that they comprise a dense positively charged nucleus surrounded by a cloud of negatively charged electrons. Zooming forward in time, that nucleus ultimately gave way further when protons and neutrons were revealed at its core. A few stubborn decades later they too gave way, with our current understanding being that they are composed of quarks and gluons. At each step a new layer of nature is unveiled, sometimes more, sometimes less numerous in “building blocks” than the one prior, but in every case delivering explanations, even derivations, for the properties (in practice, parameters) of the previous layer. This strategy, broadly defined as “build microscopes, find answers”, has been tremendously successful, arguably for millennia.

Natural patterns

While investigating these successively explanatory layers of nature, broad patterns emerge. One of these is known colloquially as “naturalness”. This pattern essentially asserts that, reversing the direction and going from a microscopic theory, “the UV completion”, to its larger-scale shell, “the IR”, the values of parameters measured in the latter are, essentially, “typical”. Typical, in the sense that they reflect the scales, magnitudes and, perhaps most importantly, the symmetries of the underlying UV completion. As Murray Gell-Mann once said: “everything not forbidden is compulsory”.

So, if some symmetry is broken by a large amount by some interaction in the UV theory, the same symmetry, in whatever guise it may have adopted, will also be broken by a large amount in the IR theory. The only exception to this is accidental fine-tuning, where large UV-breakings can in principle conspire and give contributions to IR-breakings that, in practical terms, accidentally cancel to a high degree, giving a much smaller parameter than expected in the IR theory. This is colloquially known as “unnaturalness”.

There are good examples of both instances. There is no symmetry in QCD that could keep a proton light; unsurprisingly, it has a mass of the same order as the dominant mass scale in the theory, the QCD scale, mp ~ ΛQCD. But there is a symmetry in QCD that keeps the pion light. The only parameters in the UV theory that break this symmetry are the light quark masses. Thus the pion mass-squared is expected to be around m²π ~ mqΛQCD. And it turns out that it is.
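This scaling can be checked on the back of an envelope. With illustrative inputs (a light-quark mass of a few MeV and a QCD scale of a few hundred MeV – round numbers assumed for the sketch, not precision values), the estimate lands within an order of magnitude of the measured pion mass:

```python
import math

# Back-of-the-envelope check of the naturalness estimate m_pi^2 ~ m_q * Lambda_QCD.
# Input values are illustrative round numbers, not precision inputs.
m_q = 3.5              # MeV, average up/down quark mass
lambda_qcd = 300.0     # MeV, the QCD scale
m_pi_measured = 135.0  # MeV, the neutral-pion mass

m_pi_estimate = math.sqrt(m_q * lambda_qcd)  # ~32 MeV
ratio = m_pi_measured / m_pi_estimate
print(f"estimate: {m_pi_estimate:.0f} MeV, measured: {m_pi_measured:.0f} MeV")
# the naive estimate agrees with the measured mass to within an order of magnitude
```

A natural parameter need only be of the expected order; it is the proton, with no protecting symmetry, that sits right at ΛQCD, while the pion is parametrically lighter.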

There are also examples of unnatural parameters. If you measure enough different physical observables, observations that are unlikely on their own become probable somewhere in a large ensemble of measurements – a sort of theoretical “look-elsewhere effect”. For example, consider the fact that the Moon almost perfectly obscures the Sun during a total solar eclipse. There is no symmetry which requires that the angular size of the Moon should almost match that of the Sun for an Earth-based observer. Yet, given many planets and many moons, this will of course happen for some planetary systems.

However, if an observation of a parameter returns an apparently unnatural value, can one be sure that it is accidentally small? In other words, can we be confident we have definitively explored all possible phenomena in nature that can give rise to naturally small parameters? 

From 30 January to 3 February, participants of an informal CERN theory institute “Exotic Approaches to Naturalness” sought to answer this question. Drawn from diverse corners of the theorist zoo, more than 130 researchers gathered, both virtually and in person, to discuss questions of naturalness. The invited talks were chosen to expose phenomena in quantum field theory and beyond which challenge the naive naturalness paradigm.

Coincidences and correlations

The first day of the workshop considered how apparent numerical coincidences can lead to unexpectedly small parameters in the IR as the result of selection rules that are not immediately manifest from a symmetry, known as “natural zeros”. A second set of talks considered how, going beyond quantum field theory, the UV and IR can be unexpectedly correlated, especially in theories containing quantum gravity, and how this correlation can lead to cancellations that are not apparent from a purely quantum-field-theory perspective.

The second day was far-ranging, with the first talk unveiling some lower-dimensional theories of the sort one more readily finds in condensed-matter systems, in which “topological” effects lead to constraints on IR parameters. A second discussed how fundamental properties, such as causality, can unexpectedly impose constraints on IR parameters. The last demonstrated how gravitational effective theories, including those describing the gravitational waves emitted in binary black-hole inspirals, have their own naturalness puzzles.

The ultimate goal is to now go forth and find new angles of attack on the biggest naturalness questions in fundamental physics

Midweek, alongside an inspirational theory colloquium by Nathaniel Craig (UC Santa Barbara), the potential role of cosmology in naturalness was interrogated. An early example made famous by Steven Weinberg concerns the role of the “anthropic principle” in the presently measured value of the cosmological constant. However, since then, particularly in recent years, theorists have found many possible connections and mechanisms linking naturalness questions to our universe and beyond.

The fourth day focussed on the emerging world of generalised and higher-form symmetries, which are new tools in the arsenal of the quantum field theorist. It was discussed how naturalness in IR parameters may arise as a consequence of these recently uncovered symmetries while remaining obscured from view within a traditional symmetry perspective. The final day studied connections between string theory, the swampland and naturalness, exploring how the space of theories consistent with string theory leads to restricted values of IR parameters, which potentially links to naturalness. An eloquent summary was delivered by Tim Cohen (CERN).

Grand slam

In some sense the goal of the workshop was to push back the boundaries by equipping model builders with new and more powerful perspectives and theoretical tools linked to questions of naturalness, broadly defined. The workshop was a grand slam in this respect. However, the ultimate goal is to now go forth and use these new tools to find new angles of attack on the biggest naturalness questions in fundamental physics, relating to the cosmological constant and the Higgs mass.

The Standard Model, despite being an eminently marketable logo for mugs and t-shirts, is incomplete. It breaks down at very short distances and is thus the IR of some more complete, more explanatory UV theory. We don’t know what this UV theory is; however, it apparently makes unnatural predictions for the Higgs mass and the cosmological constant. Perhaps nature isn’t unnatural and generalised symmetries are as-yet hidden from our eyes, or perhaps string theory, quantum gravity or cosmology has a hand in things? It’s also possible, of course, that nature has fine-tuned these parameters by accident; however, that would seem – à la Weinberg – to point towards a framework in which such parameters are, in principle, measured in many different universes. All of these possibilities, and more, were discussed and explored to varying degrees.

Perhaps the most radical possibility, the most “exotic approach to naturalness” of all, would be to give up on naturalness altogether. Perhaps, in whatever framework UV completes the Standard Model, parameters such as the Higgs mass are simply incalculable, unpredictable in terms of more fundamental parameters, at any length scale. Shortly before the advent of relativity, quantum mechanics, and all that have followed from them, Lord Kelvin (attribution contested) once declared: “There is nothing new to be discovered in physics now. All that remains is more and more precise measurement”. The breadth of original ideas presented at the “Exotic Approaches to Naturalness” workshop, and the new connections constantly being made between formal theory, cosmology and particle phenomenology, suggest it would be similarly unwise now, as it was then, to make such a wager.

We can’t wait for a future collider

Imagine a world without a high-energy collider. Without our most powerful instrument for directly exploring the smallest scales, we would be incapable of addressing many open questions in particle physics. With the US particle-physics community currently debating which machines should succeed the LHC and how we should fit into the global landscape, this possibility is a serious concern. 

The good news is that physicists generally agree on the science case for future colliders. Questions surrounding the Standard Model itself, in particular the microscopic nature of the Higgs boson and the origin of electroweak symmetry breaking, can only be addressed at high-energy colliders. We also know the Standard Model is not the complete picture of the universe. Experimental observations and theoretical concerns strongly suggest the existence of new particles at the multi-TeV scale. 

The latest US Snowmass exercise and the European strategy update both advocate for the fast construction of an e+e− Higgs factory followed by a multi-TeV collider. The former will enable us to measure the Higgs boson’s couplings to other particles with an order of magnitude better precision than the High-Luminosity LHC. The latter is crucial to unambiguously surpass exclusions from the LHC, and would be the only experiment where we could discover or exclude minimal dark-matter scenarios all the way up to their thermal targets. Most importantly, precise measurements of the Brout–Englert–Higgs potential at a 10 TeV-scale collider are essential to understand what role the Higgs plays in the origin and evolution of the universe.

We haven’t yet agreed on what to build, where and when. We face an unprecedented choice between scaling up existing collider technologies or pursuing new, compact and power-efficient options. We must also choose between centering the energy frontier at a single lab or restoring global balance to the field by hosting colliders at different sites. Our choices in the next few years could determine the next century of particle physics. 

Snowmass community workshop

The Future Circular Collider programme – beginning with a large circular e+e− collider (FCC-ee) with energies ranging from 90 to 365 GeV, followed by a pp collider with energies up to 100 TeV (FCC-hh) – would build on the infrastructure and skills currently present at CERN. A circular e+e− machine could support multiple interaction points, produce higher luminosity than a linear machine at the energies of interest, and its tunnel could be re-used for a pp collider. While this staged approach has driven success in our field for decades, scaling up to a circumference of 100 km raises serious questions about feasibility, cost and power consumption. As a new assistant professor, I am also deeply concerned about gaps in data-taking and timescales. Even if there are no delays, I will likely retire during the FCC-ee run and die before the FCC-hh produces collisions.

In contrast, there is a growing contingent of physicists who think that a paradigm shift is essential to reach the 10 TeV scale and beyond. The International Muon Collider collaboration has determined that, with targeted R&D to address engineering challenges and make design progress, a few-TeV μ+μ− collider could be realised on a 20-year technically limited timeline, and would set the stage for an eventual 10 TeV machine. The latter could enable a mass reach equivalent to a 50–200 TeV hadron collider, in addition to precision electroweak measurements, with a lower price tag and a significantly smaller footprint. A muon collider also opens the possibility of hosting different machines at different sites, easing the transition between projects and fostering a healthier, more global workforce. Assuming the technical challenges can be overcome, a muon collider would therefore be the most attractive way forward.

Assuming the technical challenges can be overcome, a muon collider would be the most attractive way forward

We are not yet ready to decide which path is optimal, but we are already time-constrained. It is increasingly likely that the next machine will not turn on until after the High-Luminosity LHC. The most senior person today who could reasonably participate is roughly only 10 years into a permanent job. Early-career faculty, who would use this machine, are experienced enough to have well-informed opinions, but are not senior enough to be appointed to decision-making panels. While we value the wisdom of our senior colleagues, future colliders are inherently “early-career colliders”, and our perspectives must be incorporated.

The US must urgently invest in future collider R&D. If other areas of physics progress faster than the energy frontier, our colleagues will disengage, move elsewhere and might not come back. If the size of the field and expertise atrophy before the next machine, we risk imperilling future colliders altogether. We agree on the physics case. We want the opportunity to access higher energies in our lifetimes. Let’s work together to choose the right path forward.

Stanisław Jadach 1947–2023

Stanisław Jadach, an outstanding theoretical physicist, died on 26 February at the age of 75. His foundational contributions to the physics programmes at LEP and the LHC, and to the proposed Future Circular Collider at CERN, have significantly helped to advance the field of elementary particle physics and its future aspirations.

Born in Czerteż, Poland, Jadach graduated in 1970 with a master’s in physics from Jagiellonian University. There, he also defended his doctorate, received his habilitation degree and worked until 1992. During this period, partly under martial law in Poland, Jadach took trips to Leiden, Paris, London, Stanford and Knoxville, and formed collaborations on precision theory calculations based on Monte Carlo event-generator methods. In 1992 he moved to the Institute of Nuclear Physics of the Polish Academy of Sciences (PAS) where, receiving the title of professor in 1994, he worked until his death.

Prior to LEP, all calculations of radiative corrections were based on first- and, later, partially second-order results. This limited the theoretical precision to the 1% level, which was unacceptable for experiment. In 1987 Jadach solved that problem in a single-author report, inspired by the classic work of Yennie, Frautschi and Suura, featuring a new calculational method for any number of photons. It was widely believed that soft-photon approximations were restricted to many photons with very low energies and that it was impossible to relate, consistently, the distributions of one or two energetic photons to those of any number of soft photons. Jadach and his colleagues solved this problem in their papers in 1989 for differential cross sections, and later in 1999 at the level of spin amplitudes. A long series of publications and computer programmes for re-summed perturbative Standard Model calculations ensued. 

Much of the analysis of LEP data was based exclusively on the novel calculations provided by Jadach and his colleagues. The most important concerned the LEP luminosity measurement via Bhabha scattering, the production of lepton and quark pairs, and the production and decay of W and Z boson pairs. For the W-pair results at LEP2, Jadach and co-workers intelligently combined separate first-order calculations for the production and decay processes to achieve the necessary 0.5% theoretical accuracy, bypassing the need for full first-order calculations for the four-fermion process, which were unfeasible at the time. Contrary to what was deemed possible, Jadach and his colleagues achieved calculations that simultaneously take into account QED radiative corrections and the complete spin–spin correlation effects in the production and decay of two tau leptons. He also had success in the 1970s with novel simulations of strong-interaction processes.

After LEP, Jadach turned to LHC physics. Among other novel results, he and his collaborators developed a new constrained Markovian algorithm for parton cascades, with no need to use backward evolution and predefined parton distributions, and proposed a new method, using a “physical” factorisation scheme, for combining a hard process at next-to-leading order with a parton cascade; this method is much simpler and more efficient than the alternatives.

Jadach was already updating his LEP-era calculations and software towards the increased precision of FCC-ee, and was co-editor and co-author of a major paper delineating the need for new theoretical calculations to meet the proposed collider’s physics needs. He co-organised and participated in many physics workshops at CERN and in the preparation of comprehensive reports, starting with the famous 1989 LEP Yellow Reports.

Jadach, a member of the Polish Academy of Arts and Sciences (PAAS), received the most prestigious awards in physics in Poland: the Marie Skłodowska-Curie Prize (PAS), the Marian Mięsowicz Prize (PAAS), and the prize of the Minister of Science and Higher Education for lifetime scientific achievements. He was also a co-initiator and permanent member of the international advisory board of the RADCOR conference.

Stanisław (Staszek) was a wonderful man and mentor. Modest, gentle and sensitive, he did not judge or impose. He never refused requests and always had time for others. His professional knowledge was impressive. He knew almost everything about QED, and there were few other topics in which he was not at least knowledgeable. His erudition beyond physics was equally extensive. He is already profoundly and dearly missed.

Vittorio Giorgio Vaccaro 1941–2023

Accelerator physicist Vittorio Giorgio Vaccaro passed away after a short illness on 11 February 2023 in his hometown of Naples, Italy. 

Vittorio graduated in 1965 from the University of Naples Federico II. He soon moved to CERN as a fellow, where he remained from 1966 to 1969, contributing to the design and commissioning of the first high-intensity hadron collider, the Intersecting Storage Rings. At CERN, Vittorio introduced the concept of beam-coupling impedance to model the instabilities that were experienced above transition energy, writing a seminal report (Longitudinal instability of a coasting beam above transition, due to the action of lumped discontinuities), in which he described for the first time the action of discontinuities in the transverse section of a beam pipe as an impedance. His theory, which, following his initial intuition, he developed together with Andy Sessler, Alessandro G Ruggiero and many other colleagues, has become a fundamental tool in the design of particle accelerators.

In 1969 he returned to his alma mater in Naples as professor of electromagnetic fields in the faculty of engineering, and continued teaching until he retired. He created an accelerator-physics team in association with INFN within the faculty of physics, and throughout his career remained closely connected to CERN, which he visited regularly and to which he sent many of his students.

Vittorio collaborated on practically all the accelerator studies and projects in Europe, from the CERN machines to DAFNE, the European Spallation Source and HERA-B at DESY. The group in Naples became, thanks to him, a world reference for the development of the theory of beam-coupling impedance of accelerator components and the associated bench measurements. From the mid-1990s he became increasingly interested in the development of linear accelerators for proton therapy, participating in a large collaboration with the TERA foundation, CERN and INFN. In 2003 he led a new collaboration between the University of Naples and several sections of INFN, which produced the first linac module at 3 GHz capable of accelerating protons from a 30 MeV cyclotron.

In 2019 Vittorio was awarded the IPAC Xie Jialin Award for outstanding work in the accelerator field “For his pioneering studies on instabilities in particle-beam physics, the introduction of the impedance concept in storage rings and, in the course of his academic career, for disseminating knowledge in accelerator physics throughout many generations of young scientists”.

It is difficult to find the words to recall Vittorio’s immense human qualities, his deep culture and his profound humanity. Several of his students are now scattered around the world, continuing his efforts to propose technical solutions to accelerator-physics problems based on a deep understanding of the phenomena of beam instability. Vittorio was moved by a sincere passion for science, and an irresistible curiosity for everything and everyone around him, which always brought him to approach anyone with an open and friendly spirit. 

We will deeply miss a passionate mentor and colleague, his wide knowledge, energy, friendship and humanity. 

A game changer for CERN

Patrick Geeraert

How did the idea for Science Gateway come about?

I was on detachment at the European Southern Observatory (ESO) in Garching when I was called back to CERN in 2017. The idea for a flagship education and outreach project was already quite advanced, and since I had triggered the construction of ESO’s Supernova planetarium and visitor centre during my mandate as director of administration, the CERN Director-General (DG) thought I could build on this for CERN. There had been various projects for buildings based around the Globe in the past, but they never quite took off. However, the then-new directorate wanted to create a new space for education and outreach targeting the general public of all ages. The DG also made it clear that a large auditorium for CERN events should be part of any plan, and that the entire construction should be financed by donations. I started to work on the concept.

The Italian architect Renzo Piano had visited CERN independently and fell in love with our values. When he left, he said: “If one day I can do something for you, don’t hesitate.” A few months later he proposed to design the building. In June 2018 he showed us his first mockup, the “space station” design you see today. It crossed the Route de Meyrin and encroached on land designated for agricultural use on the north side and on the CERN kindergarten on the south side. The design complicated matters, but on the other hand it was really inspiring. My first thought was that the budget I had would not be sufficient, because what is expensive when you do construction is the facades, and here we had five buildings, complicated ones, with some parts suspended. But it was so original, so much in the DNA of CERN, that we thought, okay, let it be five.

What will be in the buildings? 

There are three “pavilions” and two “tubes”. On the north side of the Science Gateway, we have a 900-seat auditorium where we can host large CERN meetings such as collaboration weeks, as well as hiring the venue out. It’s modular, so we can split it into up to three different rooms and host independent events if needed. This element of the building caused most of the headaches. The second pavilion will house the reception, shop and restaurant. On the upper floor we have two large lab spaces, where we will host two school groups at a time. Between the restaurant and the auditorium we have a natural amphitheatre where we can also hold events.

Science Gateway

Then we enter the two tubes straddling the Route de Meyrin, which are exhibition areas. The first is about CERN – engaging visitors with accelerators, detectors, data acquisition, IT and more. In the second tube, one half is a journey back to the Big Bang and the other is about open questions such as dark matter, dark energy and extra dimensions, where we will have art pieces to engage visitors. The third pavilion is an exhibition about the quantum world. The bridge linking the buildings is 220 m long and you can walk from one side to the other unimpeded.

How was the construction managed, and when will the building be open to the public? 

The first problem was that the north side of the Science Gateway, previously a temporary car park, was on agricultural land. We had to reclassify that piece of land for it to be authorised to build on, which is extremely complicated in Geneva. The process usually takes at least 10 years, if it succeeds at all, and we got it done in one. We had a very constructive process with our host authorities, whom I would like to thank warmly for their support, and the Renzo Piano team had made a case with drawings and models to help communicate our vision. We got the building permit in September 2019 and launched a procurement process for the construction and for the scenographers regarding the exhibitions. In November 2020 we signed the contract with the construction companies and they started to erect the site barracks at the end of 2020. The construction is due to be completed this summer. It was an extremely aggressive schedule, made more difficult by the pandemic and factors relating to Russia’s invasion of Ukraine. The inauguration will very likely be in the first week of October, with the first visitors arriving the next day. I would like to thank all of CERN’s departments and services for the competent and dedicated work that has contributed to the success of this project.

Who is the Science Gateway for? 

The main objective is to inspire the next generation to engage in STEM (science, technology, engineering, mathematics) studies and careers. To do that, first you need to have a programme for different age ranges. Whereas traditionally we target 16 years and above, Science Gateway will start with workshops for visitors as young as five. The exhibitions are suited to all ages above eight. Ideally, we want to engage visitors before they reach high school, because that’s typically when girls start to think that STEM subjects are not for them. Another important audience is parents, so Science Gateway is also geared towards families, showing adults what it means to be a scientist and presenting diverse role models. The exhibits and installations are developed by a mix of in-house and outside expertise. For the labs, we rely on our education team, which has the experience of S’Cool LAB, but now that we have extended the age range of our audiences, we will also work closely with, for instance, the LEGO Foundation, one of our donors, who are very strong in education programmes for children aged 5 to 12. Finally, Science Gateway is an opportunity for us to engage with VIPs and decision makers, to build support for fundamental research and explain its impact on society.

How many visitors do you expect?

A lot! Currently we have more than 300,000 requests for guided tours per year and we can only satisfy about half of them. Of those 300,000, more than 70% come from more than 800 km away. The Science Gateway will allow us to welcome up to 500,000 people per year, which is more than 1000 per day on average. We will continue to attract schools and visitors from all CERN member states and beyond, that’s for sure, and increase capacity for hands-on lab activities in particular. We also expect many more local visitors. Entry will be free, and we will be open to visitors all year, every day except Mondays. The Science Gateway will only be closed on 24, 25 and 31 December, and 1 January. Groups of 12 or more have to book in advance, but individuals and families can just show up on the day and access the auditorium, exhibition tubes, restaurant and the quantum-world pavilion. On the campus, they will also find temporary exhibitions in the Globe, and Ideasquare will also offer activities. Visitors can book a guided tour in the morning for that same day. Guided tours will remain at the same level as today, and we are trying to reduce pressure on existing restaurants on the Meyrin site with the new Science Gateway restaurant.

How is the Science Gateway funded?

The construction, landscaping, exhibitions and everything you will see in the building on day one are all funded from donations, with the main ones coming from the Stellantis Foundation and a private foundation in Geneva. CERN is very grateful to all donors for their generosity. It’s about CHF 90 million in total, with some donors sponsoring particular exhibits or spaces. For the operations, the cost is estimated at around CHF 4 million per year. This will be funded from a mix of income from the infrastructure (for example, the shop, restaurant, parking and auditorium) and some limited CERN budget. The operational costs cover staffing in addition to maintenance of the equipment, cleaning and maintaining the forest that surrounds the building.

What is the operational model?

A Science Gateway operations group has been created from the former visits service. With the exception of a small increase in industrial services contracts and two fellows, there are basically no recruitments. We will heavily rely on volunteers, from members of the personnel to users and other people linked with CERN. We already have a pool of guides who provide on average 16,000 hours per year on guided tours and we need to double that amount to ensure the Science Gateway operates as required. We will encourage more people to become guides and start training in July. We want to emphasise that, in addition to the rewards of engaging visitors with CERN’s science, this experience will be useful to their professional lives. We are also considering giving certificates and possibly accreditations. Ideally we should have about 650 guides each giving 48 hours per year. 

What is the environmental philosophy behind Science Gateway?

We want to pass on the message that we’re sustainable. We’ll be carbon neutral when we are in the operations phase, and solar panels on the roof of the three pavilions will produce much more energy than we need, with 40% going back into the CERN grid. The use of geothermal probes was explored but had to be abandoned due to local geology. Heating and cooling will be provided by heat exchangers powered by our solar panels. In the restaurant we will avoid single-use plastics, and lights will be dimmed in the evening and switched off at night. There will also be a charge for parking to encourage visitors to come by public transport. We wanted to show the link between science and nature, and that’s why we have the forest, with 400 trees and 13,000 shrubs.

How does it feel to see the project coming to completion?

When we started discussions six or so years ago, I thought I had less than a 10% chance of success because the project was so ambitious and had to be completely funded by donations. However, it was strongly supported by the directorate, which was also very active in raising funds. The fact that it was to be built on agricultural land was another factor. There were more reasons for it to fail than to succeed. But the challenge was worth it. The phase during which we were doing the design of the construction with the architects was really interesting. I think we had 50 different versions, trying to define a design that would fit both the architects’ vision and our programme. With the construction, things started to become less fun. But we are almost there now and the Science Gateway will be a game changer for CERN, so I’m pretty proud of it. I had planned to retire at the end of the construction, but now I’ve decided to stay a bit longer and see the first steps of CERN’s new big baby.

Event celebrates 50 years of Kobayashi–Maskawa theory

Quarks change their flavour through the weak interaction, and the strength of the flavour mixing is parametrised by the Cabibbo–Kobayashi–Maskawa (CKM) matrix, which is an essential part of the Standard Model. This year marks the 60th anniversary of Nicola Cabibbo’s paper describing the mixing between down and strange quarks. It also marks the 50th anniversary of the paper by Makoto Kobayashi and Toshihide Maskawa, published in February 1973, which explained the origin of CP violation by generalising the quark mixing to three generations. To celebrate the magnificent accomplishments of quark-flavour physics during the past 50 years and to discuss the future of this important topic, a symposium was held at KEK in Tsukuba, Japan on 11 February, attracting about 150 participants from around the globe, including Makoto Kobayashi himself.

Opening the event, Masanori Yamauchi, director-general of KEK, summarised the early history of Kobayashi–Maskawa (KM) theory and the ideas to test it as a theory of CP violation. He recalled his time as a member of the Belle collaboration at the KEKB accelerator, including the memorable competition with the BaBar experiment at SLAC during the late 1990s and early 2000s, which finally led to the conclusion that KM theory explains the observed CP violation. Kobayashi and Maskawa shared one half of the 2008 Nobel Prize in Physics “for the discovery of the origin of the broken symmetry which predicts the existence of at least three families of quarks in nature”.

The scientific sessions were initiated by Amarjit Soni (BNL), who summarised various ideas to measure CP violation from cascade decays of B mesons including the celebrated papers by A I Sanda and co-workers in 1980–1981, which gave a strong motivation to build B factories. Stephen Olsen (Chung Ang University), who was one of the leaders of the Belle collaboration, looked back at the situation in the early 1980s when B-meson mixing was first observed, and emphasised the role of the accelerator physicists who achieved the 100-fold increase in luminosity that was necessary to measure CP angles. Adrian Bevan (Queen Mary University of London) added a perspective from the BaBar experiment, while the more recent impressive development by the LHCb experiment was summarised by Patrick Koppenburg (Nikhef).

Theoretical developments remain an integral part of quark-flavour physics. Matthias Neubert (University of Mainz) gave an overview of the theoretical tools developed to understand B-meson decays, which include heavy-quark symmetry, heavy-quark effective field theory, heavy-quark expansion and QCD factorisation, and Zoltan Ligeti (LBNL) summarised concurrent developments of theory and experiment to determine the sides of the CKM triangle. Lattice QCD also played a central role in the determination of the CKM matrix elements by providing precision computation of non-perturbative parameters, as discussed by Aida El-Khadra (University of Illinois).

There are valuable lessons from the KM paper when applied to the search beyond the Standard Model

The B sector is not the only place where CP violation is observed. Indeed, it was first observed in kaon mixing, and important pieces of information have been obtained since then. A number of theoretical ideas dedicated to the study of kaon CP violation were discussed by Andrzej Buras (Technical University of Munich), and experimental projects were overviewed by Taku Yamanaka (Osaka University).

There are still unsolved mysteries around quark-flavour physics. The most notable is the origin of the fermion generations, which may only be understood by accumulating more data to find any discrepancy with the Standard Model. SuperKEKB/Belle II, the successor of KEKB/Belle, plans to accumulate 50 times more data in the coming decades, while LHCb will continue to improve the precision of measurements in hadronic collisions. Nanae Taniguchi (KEK) reported the current status of SuperKEKB/Belle II, which has been in physics operation since 2019 and has already broken peak-luminosity records in e+e− collisions. Gino Isidori (University of Zurich) gave his view on the possible shape of physics to come. “There are valuable lessons from the KM paper, which are still valuable today, when applied to the search beyond the Standard Model,” he concluded.

As a closing remark, Makoto Kobayashi reminisced about the time when he built the theory as well as the time when the KEKB/Belle experiment was running. “I was able to watch the development of the B factory so closely from the very beginning,” he said. “I am grateful to the colleagues who gave me such a great opportunity.”

Majorana neutrinos remain at large

Majorana Demonstrator cryostat

Neutrinoless double-beta decay (0νββ) remains as elusive as ever, following publication of the final results from the Majorana Demonstrator experiment at SURF, South Dakota, in February. Based on six years’ monitoring of ultrapure 76Ge crystals, corresponding to an exposure of 64.5 kg × yr, the collaboration has confirmed that the half-life of 0νββ in this isotope is greater than 8.3 × 1025 years. This translates to an upper limit on the effective neutrino mass mββ of 113–269 meV, and complements a number of other 0νββ experiments that have recently concluded data-taking.
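The quoted range in mββ comes from a single half-life limit because the conversion depends on nuclear matrix elements, which differ between published calculations. A minimal sketch of the standard conversion, (T1/2)−1 = G gA4 |M|2 (mββ/me)2, is shown below; the phase-space factor, axial coupling and matrix-element span are illustrative assumptions, not the collaboration’s own inputs:

```python
import math

# Illustrative conversion from a 0vbb half-life lower limit to an upper
# limit on the effective Majorana mass m_bb. The phase-space factor G,
# axial coupling g_A and the span of matrix elements |M| are assumed
# example values, not the Majorana collaboration's published inputs.
T_HALF_LIMIT = 8.3e25       # yr, 76Ge half-life lower limit (from the text)
G_PHASE_SPACE = 2.363e-15   # 1/yr, assumed phase-space factor for 76Ge
G_A = 1.27                  # axial-vector coupling (assumed)
M_E_MEV = 0.51099895e9      # electron mass in meV

def m_bb_upper_limit(matrix_element: float) -> float:
    """Upper limit on m_bb (meV) for a given nuclear matrix element |M|."""
    return M_E_MEV / (G_A**2 * matrix_element
                      * math.sqrt(T_HALF_LIMIT * G_PHASE_SPACE))

# A spread of matrix-element calculations turns one half-life limit
# into a range of mass limits:
for M in (6.34, 2.66):  # assumed span of |M| values
    print(f"|M| = {M}: m_bb < {m_bb_upper_limit(M):.0f} meV")
```

With these assumed parameters the two ends of the matrix-element span reproduce roughly the 113–269 meV range quoted above, showing that the width of the mass limit is dominated by nuclear-structure uncertainty rather than by the measurement itself.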

Whereas double-beta decay is known to occur in several nuclides, its neutrinoless counterpart is forbidden by the Standard Model. That’s because it involves the simultaneous decay of two neutrons into two protons with the emission of two electrons and no neutrinos, which is only possible if neutrinos and antineutrinos are identical “Majorana” particles such that the two neutrinos from the decay cancel each other out. Such a process would violate lepton-number conservation, possibly playing a role in the matter–antimatter asymmetry in the universe, and be a direct sign of new physics. The discovery that neutrinos have mass, which is a necessary condition for them to be Majorana particles, motivated experiments worldwide to search for 0νββ in a variety of candidate nuclei.

Germanium-based detectors have an excellent energy resolution, which is key to resolving the energy of the electrons emitted in potential 0νββ decays. The Majorana Demonstrator is also located 1.5 km underground, with low-noise electronics and ultrapure in-house-grown electroformed copper surrounding the detectors to shield them from background events. Despite a lower exposure, the collaboration was able to achieve similar limits to the GERDA experiment at Gran Sasso National Laboratory, which set a lower limit on the 76Ge 0νββ half-life of 1.8 × 1026 yr. Also among the projects of the collaboration is an ongoing search for the influence of dark-matter particles in the decay of metastable 180mTa – nature’s rarest isotope. Although no hints have been found so far, the search has already improved the sensitivity of dark-matter searches in nuclei significantly.

The search has already improved the sensitivity of dark-matter searches in nuclei significantly

Other experiments, such as KamLAND-ZEN and EXO-200, use 136Xe to search for 0νββ. While the former recently set the most stringent limit of 2.3 × 1026 yr and is ongoing, the latter arrived at a value of 3.5 × 1025 yr with a total 136Xe exposure of 234.1 kg × yr based on its full dataset. Searches at Gran Sasso with CUORE, using a 1 t × yr exposure of 130Te, led to a half-life limit of 2.2 × 1025 yr, and at CUORE’s successor, CUPID-0, which used 82Se with a total exposure of 8.82 kg × yr, to limits of the order 1023 yr.

Having demonstrated the required sensitivity for 0νββ detection in 76Ge, the designs of the Majorana Demonstrator and GERDA have been incorporated into the next-generation experiment LEGEND-200, which uses high-purity germanium detectors surrounded by liquid argon. The experiment, based at Gran Sasso, started operations last spring and could have initial results later this year, says co-spokesperson Steven Elliott (LANL): “Once all the detectors are installed, we plan to run for five years, while the next stage, LEGEND-1000, is proceeding through the DOE Critical Decision process. We hope to begin construction in summer 2026, with first data available early next decade.”

Neutrino pheno week back at CERN

Supernova 1987A

Since its inception in 2013, the CERN Neutrino Platform has evolved into a worldwide hub for both experimental and theoretical neutrino physics. Besides its multifaceted activities in hardware development – including most notably the ProtoDUNE detectors for the international long-baseline neutrino programme in the US – the platform also hosts a vibrant group of theorists.

From 13 to 17 March this group once again hosted the CERN Neutrino Platform Pheno Week, after a COVID-related hiatus of more than three years. With about 100 in-person participants and 200 more on Zoom, the meeting has become one of the largest in the field – a testament to the ever-growing popularity of neutrinos among particle physicists, even though neutrinos are the most elusive among all known elementary particles.

Talks at the March event reflected the full breadth of the subject, with the first days devoted to novel theoretical models explaining the peculiar relations observed among neutrino masses and mixing angles, and to understanding the way in which neutrinos interact with nuclei. The latter topic is particularly complex, given the vast range of energies in which neutrinos are studied – from non-relativistic cosmic background neutrinos with sub-meV energies to PeV-scale neutrinos observed in neutrino telescopes. An especially popular topic has also been the possibility of discovering physics beyond the Standard Model in the neutrino sector. In fact, because of their ability to mix with hypothetical “dark sector” fermions – that is, fermions potentially related to the physics of dark matter, or even dark matter itself – neutrinos offer a unique window to new physics.

The second part of the workshop was devoted to the neutrino’s role in astrophysics and cosmology. “There’s actually a two-way relationship between neutrinos and the cosmos,” explained invited speaker John Beacom (Ohio State University). “On the one hand, astrophysical and cosmological observations can teach us a lot about neutrino properties. On the other, neutrinos are unique cosmic messengers, and from observations at neutrino telescopes we can learn fascinating things about stars, galaxies and the evolution of the universe.” In recent years, for instance, neutrinos have allowed physicists to shed new light on the century-old problem of where ultra-high-energy cosmic rays come from. And the next galactic supernova – an event that happens on average every 30 to 100 years – will be a treasure trove of new information, given that we expect to observe tens of thousands of neutrinos from such an event. At the same time, cosmology sets the strongest upper limits on the absolute scale of neutrino masses, and with the next generation of cosmological surveys we have every expectation of achieving an actual measurement of this quantity. This is interesting because neutrino oscillations, while establishing that neutrinos have non-zero mass, are only sensitive to differences of squared masses, not to the absolute mass scale.

The programme of the Neutrino Platform Pheno Week closed with a tour of the ProtoDUNE experiments, giving the mostly theory-oriented audience an impression of how the magnificent machines testing our theories of the neutrino sector are being developed and assembled.
