Five sigma revisited

The standard criterion for claiming a discovery in particle physics is that the observed effect should have the equivalent of a five standard-deviation (5σ) discrepancy with already known physics, i.e. the Standard Model (SM). This means that the chance of observing such an effect or larger should be at most 3 × 10⁻⁷, assuming it is merely a statistical fluctuation, which corresponds to the probability of correctly guessing whether a coin will fall heads or tails for each of 22 tosses. Statisticians claim that it is crazy to believe probability distributions so far into their tails, especially when systematic uncertainties are involved; particle physicists still hope that they provide some measure of the level of (dis)agreement between data and theory. But what is the origin of this convention, and does it remain a relevant marker for claiming the discovery of new physics?
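As a rough numerical check (a minimal sketch for illustration, not how experiments actually compute significances), the one-sided Gaussian tail probability at 5σ and the coin-toss analogy can be verified in a few lines of Python:

```python
import math

# One-sided probability of a Gaussian fluctuation of at least 5 sigma
p_5sigma = 0.5 * math.erfc(5 / math.sqrt(2))
print(f"p(>= 5 sigma)        = {p_5sigma:.2e}")   # ~2.9e-07

# Probability of correctly calling 22 tosses of a fair coin in a row
p_coins = 0.5 ** 22
print(f"p(22 correct tosses) = {p_coins:.2e}")    # ~2.4e-07
```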

There are several reasons why the stringent 5σ rule is used in particle physics. The first is that it provides some degree of protection against falsely claiming the observation of a discrepancy with the SM. There have been numerous 3σ and 4σ effects in the past that have gone away when more data were collected. A relatively recent example was an excess of diphoton events at a mass of 750 GeV seen in both the ATLAS and CMS data of 2015, but which was absent in the larger data samples of 2016.

Systematic errors provide another reason, since such effects are more difficult to assess than statistical uncertainties and may be underestimated. Thus in a systematics-dominated scenario, if our estimate is a factor of two too small, a more mundane 3σ fluctuation could incorrectly be inflated to an apparently exciting 6σ effect. A potentially more serious problem is a source of systematics that has not even been considered by the analysts, the so-called “unknown unknowns”. 

Know your p-values 

Another reason underlying the 5σ criterion is the look-elsewhere effect, which involves the “p-values” for the observed effect. These are defined as the probability of a statistical fluctuation causing a result to be as extreme as the one observed, or more so, assuming some null hypothesis. For example, if we bet on heads for each of 10 tosses of an unbiased coin and observe eight tails, the p-value is the probability of being wrong eight, nine or 10 times (5.5%). A small p-value indicates a tension between the theory and the observation.
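For the coin example, the p-value is simply a binomial tail sum; a minimal sketch in Python, just to make the 5.5% explicit:

```python
from math import comb

# Probability of losing the bet on 8, 9 or 10 of 10 fair coin tosses
n = 10
p_value = sum(comb(n, k) for k in (8, 9, 10)) / 2 ** n
print(f"p = {p_value:.3f}")   # 0.055, i.e. about 5.5%
```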

Higgs signals

Particle-physics analyses often look for peaks in mass spectra, which could be the sign of a new particle. An example is shown in the “Higgs signals” figure, which contains data from CMS used to discover the Higgs boson (ATLAS has similar data). Whereas the local p-value of an observed effect is the chance of a statistical fluctuation being at least as large as the observed one at its specific location, more relevant is a global p-value corresponding to a fluctuation anywhere in the analysis, which has a higher probability and hence reduces the significance. The local p-values corresponding to the data in “Higgs signals” are shown in the figure “p-values”. 

A non-physics example highlighting the difference between local and global p-values was provided by an archaeologist who noticed that a direction defined by two of the large stones at the Stonehenge monument pointed at a specific ancient monument in France. He calculated that the probability of this was very small, assuming that the placement of the stones was random (local p-value), and hence that this favoured the hypothesis that Stonehenge was designed to point in that way. However, the chance that one of the directions, defined by any pair of stones, was pointing at an ancient monument anywhere in the world (global p-value) is above 50%. 
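A common back-of-the-envelope way of relating the two, assuming the effect could have appeared in any of N independent places (real analyses use more careful corrections, and the choice of N is itself a judgement call), is sketched below:

```python
def global_p(local_p, n_trials):
    """Approximate trials-factor correction for the look-elsewhere effect,
    assuming n_trials independent places a fluctuation could have appeared."""
    return 1.0 - (1.0 - local_p) ** n_trials

# A 3-sigma local effect (one-sided p ~ 1.35e-3) searched for in ~100 independent mass bins
print(f"{global_p(1.35e-3, 100):.2f}")   # ~0.13, i.e. no longer very significant
```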

Current practice for model-dependent searches in particle physics, however, is to apply the 5σ criterion to the local p-value, as was done in the search for the Higgs boson. One reason for this is that there is no unique definition of “elsewhere”; if you are a graduate student, it may be just your own analysis, while for CERN’s Director-General, “anywhere in any analysis carried out with data from CERN” may be more appropriate. Another is that model-independent searches involving machine-learning techniques are capable of being sensitive to a wide variety of possible new effects, and it is hard to estimate what their look-elsewhere factor should be. Clearly, in quoting global p-values it is essential to specify your interpretation of elsewhere. 

Local p-values

A fourth factor behind the 5σ rule is plausibility. The likelihood of an observation is the probability of the data, given the model. To convert this to the more interesting probability of the model, given the data, requires the Bayesian prior probability of the model. This is an example of the probability of an event A, assuming that B is true, not in general being the same as the probability of B, given A. Thus the probability of a murderer eating toast for breakfast may be 60%, but the probability of someone who eats toast for breakfast being a murderer is thankfully much smaller (about one in a million). In general, our belief in the plausibility of a model for a particular version of new physics is much smaller than for the SM, an example of the old adage that “extraordinary claims require extraordinary evidence”. Since these factors vary from one analysis to another, one can argue that it is unreasonable to use the same discovery criterion everywhere.
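The toast-and-murderer example is just Bayes’ theorem in action; a minimal sketch, with base rates invented purely for illustration (chosen to be roughly consistent with the numbers quoted above):

```python
# Hypothetical numbers for illustration only
p_toast_given_murderer = 0.6   # P(eats toast | murderer), as quoted in the text
p_murderer = 1e-6              # assumed prior: roughly one murderer per million people
p_toast = 0.5                  # assumed fraction of the population that eats toast

# Bayes' theorem: P(murderer | eats toast)
p_murderer_given_toast = p_toast_given_murderer * p_murderer / p_toast
print(f"{p_murderer_given_toast:.1e}")   # ~1e-06: about one in a million
```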

There are other relevant aspects of the discovery procedure. Searches for new physics can be just tests for consistency with the SM; or they can see which of two competing hypotheses (“just SM” or “SM plus new physics”) provides a better fit to the data. The former are known as goodness-of-fit tests and may involve χ², Kolmogorov–Smirnov or similar tests; the latter are hypothesis tests, often using the likelihood ratio. They are sometimes referred to as model-independent and model-dependent, respectively, each having its own advantages and limitations. However, the degree of model dependence is a continuous spectrum rather than a binary choice.
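As an illustration of the hypothesis-test side, the significance of an excess in a simple Poisson counting experiment is often estimated with an asymptotic likelihood-ratio formula, Z = √(2[n ln(n/b) − (n − b)]), where n is the observed count and b the expected background; the sketch below ignores systematic uncertainties, which in practice dilute the significance:

```python
import math

def z_excess(n_obs, b_exp):
    """Approximate significance of an excess over an expected background,
    using the asymptotic likelihood-ratio result for a counting experiment
    (no systematic uncertainties included)."""
    if n_obs <= b_exp:
        return 0.0
    return math.sqrt(2.0 * (n_obs * math.log(n_obs / b_exp) - (n_obs - b_exp)))

print(f"{z_excess(130, 100):.1f} sigma")   # ~2.9 sigma for 130 events over an expected 100
```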

It is unreasonable to regard 5.1σ as a discovery, but 4.9σ as not. Also, when two experiments measure the same quantity, should the one with the better observed accuracy or the better expected accuracy be regarded as providing the preferred result? Blind analyses are recommended, since they remove the possibility of the analyser adjusting selections to influence the significance of the observed effect. Some non-blind searches have such a large and indeterminate look-elsewhere effect that they can only be regarded as hints of new physics, to be confirmed by future independent data. Theory calculations also have uncertainties, due for example to parameters in the model or difficulties with numerical predictions.

Discoveries in progress 

A useful exercise is to review a few examples that might be (or might have been) discoveries. A recent example involves the ATLAS and CMS observations of events containing four top quarks. Apart from the similarly heroic work of the physicists involved, these analyses offer interesting contrasts with the Higgs-boson discovery. First, the Higgs discovery involved clear mass peaks, while the four-top events simply caused an enhancement of events in the relevant region of phase space (see “Four tops” figure). Second, four-top production is just a verification of an SM prediction, and indeed it would have been more of a surprise if the measured rate had been zero. So this is an observation of an expected process, rather than a new discovery. Indeed, both preprints use the word “observation” rather than “discovery”. Finally, although 5σ was the required criterion for discovering the Higgs boson, surely a lower level of significance would have been sufficient for the observation of four-top events.

The output from a graph neural network

Going back further in time, an experiment in 1979 claimed to observe free quarks by measuring the electrical charge of small spheres levitated in an oscillating electric field; several gave multiples of 1/3, which was regarded as a signature of single quarks. Luis Alvarez noted that the raw results required sizeable corrections and suggested that a blind analysis should be performed on future data. The net result was that no further papers were published on this work. This demonstrates the value of blind analyses.

A second historical example is precision measurements at the Large Electron Positron collider (LEP). Compared with the predictions of the SM, including the then-known particles, deviations were observed in the many measurements made by the four LEP experiments. A much better fit to the data was achieved by including corrections from the (at that time hypothesised) top quark and Higgs boson, which enabled approximate mass ranges to be derived for them. However, it is now accepted that the discoveries of the top quark and the Higgs boson were subsequently made by their direct observations at the Tevatron and at the LHC, rather than by their virtual effects at LEP.

The muon magnetic moment is a more contemporary case. This quantity has been measured and also predicted to incredible precision, but a discrepancy between the two values exists at around the 4σ level, which could be an indication of contributions from virtual new particles. The experiment essentially measures just this one quantity, so there is no look-elsewhere effect. However, even if this discrepancy persists in new data, it will be difficult to tell if it is due to the theory or experiment being wrong, or whether it requires the existence of new, virtual particles. Also, the nature of such virtual particles could remain obscure. Furthermore, a recent calculation using lattice gauge theory of the “hadronic vacuum polarisation” contribution to the predicted value of the magnetic moment brings it closer to the observed value (see “Measurement of the moment” figure). Clearly it will be worth watching how this develops.

The so-called flavour anomalies are another topical example. The LHCb experiment has observed several anomalous results in the decays of B mesons, especially those involving transitions of a b quark to an s quark and a lepton pair. It is not yet clear whether these could be evidence for some real discrepancies with the SM prediction (i.e. evidence for new physics), or simply and more mundanely an underestimate of the systematics. The magnitude of the look-elsewhere effect is hard to estimate, so independent confirmation of the observed effects would be helpful. Indeed, the most recent result from LHCb for the R(K) parameter, published in December 2022, is much more consistent with the SM. It appears that the original result was affected by an overlooked background source. Repeated measurements by other experiments are eagerly awaited. 

A surprise last year was a new measurement of the mass of the W boson (mW) by the CDF collaboration, based on data from the former Tevatron collider at Fermilab, which finished collecting data many years ago; it disagrees with the SM prediction by 7σ. It is of course more reasonable to use the weighted average of all mW measurements, but this reduces the discrepancy only slightly. A subsequent measurement by ATLAS disagreed with the CDF result; the CMS determination of mW is awaited with interest.
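As a simple illustration of why averaging helps only a little when one precise measurement dominates, a minimal inverse-variance weighted average, with purely illustrative numbers rather than the real mW results:

```python
def weighted_average(values, errors):
    """Inverse-variance weighted average and its uncertainty."""
    weights = [1.0 / e ** 2 for e in errors]
    mean = sum(w * v for w, v in zip(weights, values)) / sum(weights)
    error = (1.0 / sum(weights)) ** 0.5
    return mean, error

# Purely illustrative numbers (offsets from a reference prediction, in MeV),
# not the real mW measurements: one precise, discrepant value dominates the average.
values = [80.0, 10.0, 20.0]
errors = [10.0, 15.0, 20.0]
mean, err = weighted_average(values, errors)
print(f"{mean:.1f} +- {err:.1f}")   # ~52.8 +- 7.7: still ~7 sigma from zero
```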

Nuanced approach

It is worth noting that the muon g-2, flavour and mW discrepancies concern tests of the SM predictions, rather than direct observation of a new particle or its interactions. Independent confirmations of the observations and the theoretical calculations would be desirable.

Measurement of the moment

One of the big hopes for further running of the LHC is that it will result in the “discovery” of Higgs pair production. But surely there is no reason to require a 5σ discrepancy with the SM in order to make such a claim? After all, the Higgs boson is known to exist, its mass is known and there is no big surprise in observing its pair-production rate being consistent with the SM prediction. “Confirmation” would be a better word than “discovery” for this process. In fact, it would be a real discovery if the di-Higgs production rate were found to be significantly above or below the SM prediction. A similar argument could be applied to the searches for single top-quark production at hadron colliders, and to decays such as H → μμ or Bs → μμ. This should not be taken to imply that LHC running can be stopped once a suitable lower level of significance is reached. Clearly there will be interest in using more data to study di-Higgs production in greater detail.

Our hope for the future is that the current 5σ criterion will be replaced by a more nuanced approach for what qualifies as a discovery. This would include just quoting the observed and expected p-values; whether the analysis is dominated by systematic uncertainties or statistical ones; the look-elsewhere effect; whether the analysis is robust; the degree of surprise; etc. This may mean leaving it for future measurements to determine who deserves the credit for a discovery. It may need a group of respected physicists (e.g. the directors of large labs) to make decisions as to whether a given result merits being considered a discovery or needs further verification. Hopefully we will have several of these interesting decisions to make in the not-too-distant future. 

Future colliders are particle observatories

In no other field of science is the promise of revolutionary discovery the only standard by which future proposals are judged. Yet in particle physics a narrative persists that the current lack of new physics beyond the Standard Model (SM) is putting the future of the field in doubt. This pessimism is misguided.

Take cosmology and astrophysics. These are fundamental sciences whose aim is nothing more than to better understand the objects within their remit. Telescopes and other instruments point at the universe at large, observing to ever higher precision, farther than ever before, in new, previously inaccessible regimes. The Gaia, JWST and LIGO instruments, which cost $1–10 billion each, had clear scientific cases: to simply do better science.

Not once in ESA’s list of Gaia science objectives is dark matter or dark energy mentioned. Gaia’s scientific potential is fulfilled not by the promise of new physics discoveries but by improving precision astrometry, uncovering more of the known astrophysical objects and testing further the standard cosmological model. JWST is a success if it makes sharper observations and peers out farther than ever, regardless of whether it discovers new types of exotic phenomena or sees the same objects as before but better. LIGO was not considered a failure for having discovered gravitational-wave signals in agreement with Einstein’s general theory of relativity; nor is the future of gravitational-wave observatories in doubt as a consequence. 

Particle physics is pushing the boundaries of our understanding in the other direction – looking inwards rather than outwards. The discovery of the Higgs boson, like that of gravitational waves, opens an entirely new window for probing our universe. Its agreement with the SM until now does nothing to diminish the need for a future Higgs observatory. Higgs aside, new elementary particle processes are continually being unveiled, from the long-predicted quantum scattering of light by light to complex interactions involving multiple bosons or fermions, most recently in the spectacular observation of four top quarks by ATLAS and CMS.

Moreover, unlike cosmology and astrophysics, particle physics can do more than observe. It is an experimental science in the truest sense: set up the initial conditions, repeat the experiment, then analyse what comes out. The ability to directly manipulate the elementary building blocks of our world both complements and works symbiotically with astrophysical and cosmological observations. We need all eyes open on the universe to make progress; blinding one eye will not make the other sharper.

A better name can help

In this spirit, the CERN Future Circular Collider (FCC) is a bold and ambitious proposal for ensuring another thriving century of particle physics. As a multi-generational project, it would be our era’s cathedral to knowledge and wonder about the universe. However, the FCC cannot always remain a future collider if it ever becomes reality. When it comes to be renamed, the CERN International Particle Observatory would be more apt. This better reflects the role of colliders as general-purpose tools to do good science.

Tevong You

The International Particle Observatory will cost around $10 billion for a high-precision observatory, starting in the 2040s. A high-energy observatory would then follow in the 2070s. Is it worth it? Should we not be more concerned with climate change? Both questions must be put in the context of other areas of government spending and the value of fundamental physics. For example, an Olympic Games funded by a single nation, for a month’s worth of entertainment, costs about $10 billion. The same price tag shared across multiple countries over decades, to uncover fundamental knowledge that stands for all time, is a pittance by comparison. Furthermore, studies have shown that the economic return on investment in CERN outweighs the cost. We get back more than we put in.

The value of the enterprise itself benefits society in myriad indirect ways, which does not place it at odds with practical issues such as climate change. On the contrary, a new generation of particle-physics experiments stimulating cutting-edge engineering, technology, computing and data analysis, while fostering international collaboration and inspiring popular culture, creates the right conditions for tackling other problems. Particle physics helps humanity prosper in the long run, and has already played an indispensable role in creating our modern world.

Building an International Particle Observatory is a win–win proposition. It pays for itself, contributes to a better society, improves our understanding of the universe by orders of magnitude, and advances our voyage of exploration into the unknown. We just need to shift our narrative to one that emphasises the tremendous range of fundamental science to be done. A better name can help. 

EPS announces 2023 awards

The High Energy and Particle Physics Division of the European Physical Society (HEPP-EPS) has announced the winners of this year’s awards.

Cecilia Jarlskog

The EPS High Energy and Particle Physics Prize is awarded for an outstanding experimental, theoretical or technological contribution. This year, the recipients are Cecilia Jarlskog, for the discovery of an invariant measure of CP violation in both quark and lepton sectors; and the Daya Bay and RENO collaborations, for the observation of short-baseline reactor electron-antineutrino disappearance, providing the first determination of the neutrino mixing angle θ13, which paves the way for the detection of CP violation in the lepton sector.

The 2023 Giuseppe and Vanna Cocconi Prize (honouring contributions in particle astrophysics and cosmology in the past 15 years) is awarded to the SDSS/BOSS/eBOSS collaborations for their outstanding contributions to observational cosmology, including the development of the baryon-acoustic oscillation measurement into a prime cosmological tool, using it to robustly probe the history of the expansion rate of the universe back to one-fifth of its age, providing crucial information on dark energy, the Hubble constant and neutrino masses.

The 2023 Gribov Medal is awarded to Netta Engelhardt for her groundbreaking contributions to the understanding of quantum information in gravity and black-hole physics. This medal goes to early-career researchers working in theoretical physics or field theory.

Valentina Cairo

The 2023 Young Experimental Physicist Prize of the High Energy and Particle Physics Division of the EPS – for early-career experimental physicists – is awarded to Valentina Cairo for her outstanding contributions to the ATLAS experiment: from the construction of the inner tracker, to the development of novel track and vertex reconstruction algorithms and to searches for di-Higgs boson production.

Honouring achievements in outreach, education and the promotion of diversity, the 2023 Outreach Prize of the High Energy and Particle Physics Division of the EPS is awarded to Jácome (Jay) Armas. It recognises his outstanding science-communication activities, most notably the “Science & Cocktails” event series, which revolves around science lectures incorporating elements of nightlife, such as music and art performances and cocktail craftsmanship, and has reached hundreds of thousands of people in five cities worldwide.

All awards will be presented at the EPS Conference on High Energy Physics, which will take place in Hamburg from 21 to 25 August.

Full citations can be found here: http://eps-hepp.web.cern.ch/eps-hepp/prizes.php

Giorgio Brianti 1930–2023

Giorgio Brianti, a pillar of CERN throughout his 40-year career, passed away on 6 April at the age of 92. He played a major role in the success of CERN and in particular the LEP project, and his legacy lives on across the whole of the accelerator complex.

Giorgio began his engineering studies at the University of Parma and continued them for three years in Bologna, where he obtained his laurea degree in May 1954. Driven by a taste for research, he learned, thanks to his thesis advisor, that Edoardo Amaldi was setting up an international organization in Geneva called CERN and was invited to meet him in Rome in June 1954. In his autobiography – written for his family and friends – Giorgio describes this meeting as follows: “Edoardo Amaldi received me very warmly and, after various discussions, he said to me: ‘you can go home: you will receive a letter of appointment from Geneva soon’. I thus had the privilege of participating in one of the most important intellectual adventures in Europe, and perhaps the world, which in half a century has made CERN ‘the’ world laboratory for particle physics.”

Giorgio had boundless admiration for John Adams, who had been recruited by Amaldi a year earlier, recounting: “John was only 34 years old, but had a very natural authority. To say that we had a conversation would be an exaggeration, due to my still very hesitant English, but I understood that I was assigned to the magnet group”. After participating in the design of the main bending magnets for the Proton Synchrotron, Giorgio was sent by Adams to Genoa for three years to supervise the construction of 100 magnets made by the leading Italian company in the sector, Ansaldo. Upon his return, he was entrusted with the control group and in 1964 he was appointed head of the synchro-cyclotron (SC) division. After only four years he was asked to create a new division to build a very innovative synchrotron – the Booster – capable of injecting protons into the PS and significantly increasing the intensity of the accelerated current. He described this period as perhaps his happiest from a technical point of view. Adams – who had been appointed Director General of the new CERN-Lab II to construct the 400 GeV Super Proton Synchrotron (SPS) – also entrusted Giorgio with designing and building the experimental areas and their beam lines. The 40th anniversary of their inauguration was celebrated with him in 2018 and the current fixed-target experimental programme profits to this day from his foresight.

In January 1979 Giorgio was made head of the SPS division, but only two years later he was called to a more important role, that of technical director, by the newly appointed Director General Herwig Schopper. As Giorgio writes: “The main objectives of the mandate were to build the LEP… which was to be installed in a 27 km circumference tunnel over 100 m deep, and to complete the SPS proton-antiproton program, a very risky enterprise, but whose success in 1982 and 1983 was decisive for the future of CERN”. The enormous technical work required to transform the SPS into a proton-antiproton collider that went on to discover the W and Z bosons took place in parallel with the construction of LEP and the launch of the Large Hadron Collider (LHC) project, which Giorgio personally devoted himself to starting in 1982.

The LHC occupied Giorgio for nearly 15 years, starting from almost nothing. As he writes: “It was initially a quasi-clandestine activity to avoid possible reactions from the delegates of the Member States, who would not have understood an initiative parallel to that of the LEP. The first public appearance of the potential project, which already bore the name Large Hadron Collider, took place at a workshop held in Lausanne and at CERN in the spring of 1984.”

The LHC project received a significant boost from Carlo Rubbia, who became Director General in 1989 and appointed Giorgio as director of future accelerators. While LEP was operating at full capacity during these years, under his leadership new technologies were developed and the first prototypes of high-field superconducting magnets were created. The construction programme for the LHC was preliminarily approved in 1994, under the leadership of Chris Llewellyn Smith. In 1996, one year after Giorgio’s retirement, the final approval was granted. Giorgio continued to work, of course! In particular, in 1996 he agreed to chair the advisory committee of the Proton Ion Medical Machine Study, a working group established within CERN aimed at designing and developing a new synchrotron for medical purposes for the treatment of radio-resistant tumours with carbon ion beams. The first centre was built in Italy, in Pavia, by the Italian Foundation National Centre for Oncological Hadrontherapy (CNAO). He was also an active member of the editorial board of the book “Technology meets Research,” which celebrated 60 years of interaction at CERN between technology and fundamental science.

Giorgio has left us not only an intellectual but also a spiritual legacy. He was a man of great moral rigour, with a strong and contemplative Christian faith, determined to achieve his goals but mindful not to hurt others. He was very attached to his family and friends. His intelligence, kindness, and generosity shone through his eyes and – despite his reserved character – touched the lives of everyone he met.

Altarelli awards honour young scientists

On 27 March, during the 30th edition of the Deep-Inelastic Scattering and Related Subjects workshop (DIS2023) held in Michigan, Adinda de Wit and Yong Zhao received the 2023 Guido Altarelli Awards for experiment and theory. The prizes, named after CERN’s Guido Altarelli, who made seminal contributions to QCD, recognise exceptional achievements from young scientists in deep-inelastic scattering and related subjects.

CMS collaborator Adinda de Wit (University of Zurich) was awarded the experimental prize for her achievements in understanding the nature of the Higgs boson, including precision studies of its couplings and decay channels. She received her PhD from Imperial College London, then held postdoc positions at DESY and the University of Zurich, and is presently at LLR. Co-convener of the CMS Higgs physics analysis group and past co-convener of the CMS Higgs combination and properties group, de Wit has also received the Hertha Sponer Prize from the German Physical Society.

Yong Zhao (Argonne National Laboratory) was awarded the theory prize for fundamental contributions to ab initio calculations of parton distributions in lattice QCD. He received his PhD from the University of Maryland, and then held postdoc positions at Brookhaven and MIT before joining Argonne as an assistant physicist. Yong also received the 2022 Kenneth G. Wilson Award for Excellence in Lattice Field Theory for fundamental contributions to calculations of parton physics on the lattice.

During the award ceremony, Nobel laureate Giorgio Parisi joined in via Zoom to reminisce about his collaboration with Altarelli. Together they contributed to QCD evolution equations for parton densities, known as the Altarelli-Parisi or DGLAP equations.

The DIS series covers a large spectrum of topics in high-energy physics. One part of the conference is devoted to the most recent results from large experiments at Brookhaven, CERN, DESY, Fermilab, JLab and KEK, as well as corresponding theoretical advances. The workshop demonstrated how DIS and related subjects permeate a broad range of physics topics, from hadron colliders to spin physics, neutrino physics and more. The next workshop will be held in Grenoble, France, from 8 to 12 April 2024.

Sharing experience, building connections

Like many physicists, Valeria Pettorino’s fascination with science started when she was a child. Her uncle, a physicist himself, played a major role by sharing his passion for science fiction, strings and extra dimensions. She studied physics and obtained her PhD from the University of Naples in 2005, followed by a postdoc at the University of Torino and then SISSA in Italy. In 2012 her path took her to the University of Geneva and a Marie Curie Fellowship, where she worked with theorist Martin Kunz from UNIGE/CERN – a mentor and role model ever since. 

Visiting CERN was an invaluable experience that led to lifelong connections. “Meeting people who worked on particle-physics missions always piqued my interest, as they had such interesting stories and experiences to share,” Valeria explains. “I collaborated and worked alongside people from different areas in cosmology and particle physics, and I got the opportunity to connect with scientists working in different experiments.”

After the fellowship, Valeria went to the University of Heidelberg as a research group leader, and during this time she was selected for the “Science to Data Science” programme by the AI software company Pivigo. The programme, in which she worked on artificial intelligence and unsupervised learning to analyse healthcare data for a London start-up, gave her the opportunity to widen her skillset.

Valeria’s career trajectory turned towards space science in 2007, when she began working for the Euclid mission of the European Space Agency (ESA), due to launch this year with the aim of measuring the geometry of the universe to study dark matter and dark energy. Currently co-lead of the Euclid theory science working group, Valeria has held a number of roles in the mission, including deputy manager of the communication group. In 2018 she became the CEA representative for Euclid–France communication and is currently director of research for the CEA astrophysics department/CosmoStat lab. She also worked on data analysis for ESA’s Planck mission from 2009 to 2018.

Mentoring and networking 

In both research collaborations, Valeria worked on numerous projects that she coordinated from start to finish. While leading teams, she studied management with the goal of enabling everyone to reach their full potential. She also completed training in science diplomacy, which helped her gain valuable transferable skills. “I decided to be proactive in developing my knowledge and started attending webinars, and then training on science diplomacy. I wanted to deepen my understanding on how science can have an impact on the world and society.” In 2022 Valeria was selected to participate in the first Science Diplomacy Immersion Programme organised by the Geneva Science and Diplomacy Anticipator (GESDA), which aims to take advantage of the ecosystem of international organisations in Geneva to anticipate, accelerate and translate emerging scientific themes into concrete actions.

Sharing experience and building connections between people have been recurring themes in Valeria’s career. Nowhere is this better illustrated than by her role, since 2015, as a mentor for the Supernova Foundation – a worldwide mentoring and networking programme for women in physics. “Networking is very important in any career path and having the opportunity to encounter people from a diverse range of backgrounds allows you to grow your network both personally and professionally. The mentoring programme is open to all career levels. There are no barriers. It is a global network of people from 53 countries and there are approximately 300 women in the programme. I am convinced that it is a growing community that will continue to thrive.” Valeria has also acted as a mentor for Femmes & Science (a French initiative by Paris-Saclay University) in 2021–2022, and was recently appointed as one of 100 mentors worldwide for #space4women, an initiative of the United Nations Office for Outer Space Affairs to support women pursuing studies in space science.

A member of the CERN Alumni Network, Valeria thoroughly enjoys staying connected with CERN. “Not only is the CERN Alumni Network excellent for CERN as it brings together a wide range of people from many career paths, but it also provides an opportunity for its members to understand and learn how science can be used outside of academia.”

New physics in b decays

There are compelling reasons to believe that the Standard Model (SM) of particle physics, while being the most successful theory of the fundamental structure of the universe, does not offer the complete picture of reality. However, until now, no new physics beyond the SM has been firmly established through direct searches at different energy scales. This motivates indirect searches, performed by precision examination of phenomena sensitive to contributions from possible new particles, and comparing their properties with the SM expectations. This is conceptually similar to how, decades ago, our understanding of radioactive beta decay allowed the existence and properties of the W boson to be predicted.

New Physics in b decays, by Marina Artuso, Gino Isidori and the late Sheldon Stone, is dedicated to precision measurements in decays of hadrons containing a b quark. Due to their high mass, these hadrons can decay into dozens of different final states, providing numerous ways to challenge our understanding of particle physics. As is usual for indirect searches, the crucial task is to understand and control all SM contributions to these decays. For b-hadron decays, the challenge is to control the effects of the strong interaction, which are difficult to calculate.

Both sides of the coin

The authors committed to a challenging task: providing a snapshot of a field that has developed considerably during the past decade. They highlight key measurements that generated interest in the community, often due to hints of deviations from the SM expectations. Some of the reported anomalies have diminished since the book was published, after larger datasets were analysed. Others continue to intrigue researchers. This natural scientific progress leads to a better understanding of both the theoretical and experimental sides of the coin. The authors exercise reasonable caution over the significance of the anomalies they present, warning the reader of the look-elsewhere effect, and carefully define the relevant observables. When discussing specific decay modes, they explain their choice compared to other processes. This pedagogical approach makes the book very useful for early-career researchers diving into the topic. 

The book starts with a theoretical introduction to heavy-quark physics within the SM, plotting avenues for searches for possible new-physics effects. Key theoretical concepts are introduced, along with the experiments that contributed most significantly to the field. The authors continue with an overview of “traditional” new-physics searches, strongly interleaving them with precision measurements of the free parameters of the SM, such as the couplings between quarks and the W boson. By determining these parameters precisely with several alternative experimental approaches, one hopes to observe discrepancies. An in-depth review of the experimental measurements, also featuring their complications, is confronted with theoretical interpretations. While some of the discrepancies stand out, it is difficult to attribute them to new physics as long as alternative interpretations are not excluded.

New Physics in b Decays

The second half of the book dives into recent anomalies in decays with leptons, and the theoretical models attempting to address them. The authors reflect on theoretical and experimental work of the past decade and outline a number of pathways to follow. They also give a short overview of searches for processes that are forbidden or extremely suppressed in the SM, such as lepton-flavour violation. These transitions, if observed, would represent an undeniable signature of new physics, although they only arise in a subset of new-physics scenarios. Such searches therefore allow strong limits to be placed on specific hypotheses. The book concludes with the authors’ view of the near future, which is already becoming reality. They expect the ongoing LHCb and Belle II experiments to have a decisive word on the current flavour anomalies, but also to deliver new, unexpected surprises. They rightly conclude that “It is difficult to make predictions, especially about the future.”

The remarkable feature of this book is that it is written by physicists who actively contributed to the development of numerous theoretical concepts and key experimental measurements in heavy-quark physics over the past decades. Unfortunately, one of the authors, Sheldon Stone, could not see his last book published. Sheldon was the editor of the book B decays, which served as the handbook on heavy-quark physics for decades. One can contemplate the impressive progress in the field by comparing the first edition of B decays in 1992 with New Physics in b decays. In the 1990s, heavy-quark decays were only starting to be probed. Now, they offer a well-oiled tool that can be used for precision tests of the SM and searches for minuscule effects of possible new physics, using decays that happen as rarely as once per billion b-hadrons.

The key message of this book is that theory and experiment must go hand in hand. Some parameters are difficult to calculate precisely and they need to be measured. The observables that are theoretically clean are often challenging experimentally. Therefore, the searches for new physics in b decays focus on processes that are accessible both from the theoretical and experimental points of view. The reach of such searches is constantly being broadened by painstakingly refining calculations and developing clever experimental techniques, with progress achieved through the routine work of hundreds of researchers in several experiments worldwide.

Exploring the origins of matter–antimatter asymmetry

The first edition of the International Workshop on the Origin of Matter–Antimatter Asymmetry (CP2023), hosted by École de Physique des Houches, took place from 12 to 17 February. Around 50 physicists gathered to discuss the central problem connecting particle physics and cosmology: CP violation. Since one of the very first schools dedicated to time-reversal symmetry in the summer of 1952, chaired by Wolfgang Pauli, research has progressed significantly, especially with the formulation by Sakharov of the conditions necessary to produce the observed matter–antimatter asymmetry in the universe.

The workshop programme covered current and future experimental projects to probe the Sakharov conditions: collider measurements of CP violation (LHCb, Belle II, FCC-ee), searches for electric dipole moments (PSI, FNAL), long-baseline neutrino experiments (NOvA, DUNE, T2K, Hyper-Kamiokande, ESSnuSB) and searches for baryon- and lepton-number violating processes such as neutrinoless double beta decay (GERDA, CUORE, CUPID-Mo, KamLAND-Zen, EXO-200) and neutron–antineutron oscillations (ESS). These were put in context with the different theoretical approaches to baryogenesis and leptogenesis.

With the workshop’s aim to provide a discussion forum for junior and senior scientists from various backgrounds, and following the tradition of the École des Houches, a six-hour mini-school took place in parallel with more specialised talks. A first lecture by Julia Harz (University of Mainz) introduced the hypotheses related to baryogenesis, and another by Adam Falkowski (IJCLab) described how CP violation is treated in effective field theory. Each lecture provided both a common theoretical background and an opportunity to discuss the fundamental motivation driving experimental searches for new sources of CP violation in particle physics.

In his summary talk, Mikhail Shaposhnikov (EPFL Lausanne) explained that it is impossible to identify which mechanism leads to the existing baryon asymmetry in the universe. He added that we live in exciting times and reviewed the vast number of opportunities in experiment and theory lying ahead.

A bridge between popular and textbook science

Most popular science books are written to reach the largest audience possible, which comes with certain sacrifices. The assumption is that many readers might be deterred by technical topics and language, especially by equations that require higher mathematics. In physics one can therefore usually distinguish textbooks from popular physics books by flicking through the pages and checking for symbols.

The Biggest Ideas in the Universe: space, time, and motion, the first in a three-part series by Sean Carroll, goes against this trend. Written for “…people who have no more mathematical experience than high-school algebra, but are willing to look at an equation and think about what it means”, the book never reaches a point at which things are muddied because the maths becomes too advanced.

Concepts and theories

The book covers nine topics, including conservation, space–time, geometry, gravity and black holes. Carroll spends the first few chapters introducing the reader to the thought process of a theoretical physicist: how to develop a sense for symmetries, the conservation of charges and expansions in small parameters. It also gives readers a fast introduction to calculus, using geometric arguments to define derivatives and integrals. By the end of the third chapter, the concepts of differential equations, phase space and the principle of least action have been introduced.

The centre part of the book focusses on geometry. A discussion of the meaning of space and time in physics is followed by the introduction of Minkowski spacetime, with considerable effort given to the philosophical meaning of these concepts. The third part is the most technical. It covers differential geometry, a beautiful derivation of Einstein’s equation of general relativity and the final chapter uses the Schwarzschild solution to discuss black holes.

The Biggest Ideas in the Universe

It is a welcome development that publishers and authors such as Carroll are confident that books like this will find a sizeable readership (another good, recent example of advanced popular physics texts is Leonard Susskind’s “The Theoretical Minimum” series). Many topics in physics can only be fully appreciated if the equations are explained and if chapters go beyond the limitations of typical popular science books. Carroll’s writing style and the structure of the book help to make this case: all concepts are carefully introduced and, even though the book is very dense and covers a lot of material, everything is interconnected and readers won’t feel lost while reading. Regular references to the historical steps in discovering theories and concepts loosen up the text. Two examples are the correspondence between Leibniz and Clarke about the nature of space and the interesting discussion of Einstein and Hilbert’s different approaches to general relativity. The whole series of books, the remaining two parts of which will be published soon, is accompanied by recorded lectures that are freely available online and present the topic of every chapter, along with answers to questions on these topics.

It is difficult to find any weaknesses in this book. Figures are often labelled only with symbols, whose meaning readers unfamiliar with physics notation have to find in the text, so more text in the figures would make them even more accessible. Strangely, the section introducing entropy is not supported by equations and, given the technical detail of all other parts of the book, Carroll could have taken advantage of the mathematical groundwork of the previous chapters here.

I want to emphasise that every topic discussed in The Biggest Ideas in the Universe is well-established physics. There are no flashy but speculative theories, nor an unbalanced focus on the science-fiction ideas that are often used to attract readers to theoretical physics. It stands apart from similar titles by offering insights that can only be obtained if the underlying equations are explained and not just mentioned.

Anyone who is interested in fundamental physics is encouraged to read this book, especially young people interested in studying physics because they will get an excellent idea of the type of physical arguments they will encounter at university. Those who think their mathematical background isn’t sufficient will likely learn many new things, even though the later chapters are quite technical. And if you are at the other end of the spectrum, such as a working physicist, you will find the philosophical discussions of familiar concepts and the illuminating arguments included to elicit physical intuition most useful.

Design principles of theoretical physics

“Now I know what the atom looks like!” Ernest Rutherford’s simple statement belies the scientific power of reductionism. He had recently discovered that atoms have substructure, notably that they comprise a dense positively charged nucleus surrounded by a cloud of negatively charged electrons. Zooming forward in time, that nucleus ultimately gave way further when protons and neutrons were revealed at its core. A few stubborn decades later they too gave way, with our current understanding being that they are composed of quarks and gluons. At each step a new layer of nature is unveiled, sometimes more, sometimes less numerous in “building blocks” than the one prior, but in every case delivering explanations, even derivations, for the properties (in practice, parameters) of the previous layer. This strategy, broadly defined as “build microscopes, find answers”, has been tremendously successful, arguably for millennia.

Natural patterns

While investigating these successively explanatory layers of nature, broad patterns emerge. One of these is known colloquially as “naturalness”. This pattern asserts that in reversing the direction and going from one microscopic theory, “the UV completion”, to its larger-scale shell, “the IR”, the values of parameters measured in the latter are, essentially, “typical”. Typical, in the sense that they reflect the scales, magnitudes and, perhaps most importantly, the symmetries of the underlying UV completion. As Murray Gell-Mann once said: “everything not forbidden is compulsory”.

So, if some symmetry is broken by a large amount by some interaction in the UV theory, the same symmetry, in whatever guise it may have adopted, will also be broken by a large amount in the IR theory. The only exception to this is accidental fine-tuning, where large UV-breakings can in principle conspire and give contributions to IR-breakings that, in practical terms, accidentally cancel to a high degree, giving a much smaller parameter than expected in the IR theory. This is colloquially known as “unnaturalness”.

There are good examples of both instances. There is no symmetry in QCD that could keep the proton light; unsurprisingly, it has a mass of the same order as the dominant mass scale in the theory, the QCD scale: m_p ~ Λ_QCD. But there is a symmetry in QCD that keeps the pion light. The only parameters in the UV theory that break this symmetry are the light-quark masses, so the pion mass-squared is expected to be around m_π² ~ m_q Λ_QCD. It turns out that it is.
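As a rough numerical check (a back-of-the-envelope sketch, assuming representative values of a few MeV for the light-quark masses and a few hundred MeV for the QCD scale):

$$
m_\pi \sim \sqrt{m_q\,\Lambda_{\rm QCD}} \approx \sqrt{(5\,\mathrm{MeV})\times(300\,\mathrm{MeV})} \approx 40\,\mathrm{MeV},
$$

which is the right order of magnitude compared with the observed 140 MeV, whereas the proton, with no symmetry to protect it, sits at about 940 MeV, of the same order as Λ_QCD.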

There are also examples of unnatural parameters. If you measure enough different physical observables, observations that are unlikely on their own become likely somewhere in a large ensemble of measurements – a sort of theoretical “look-elsewhere effect”. For example, consider the fact that the Moon almost perfectly obscures the Sun during a solar eclipse. There is no symmetry which requires that the angular size of the Moon should almost match that of the Sun to an Earth-based observer. Yet, given many planets and many moons, this will of course happen for some planetary systems.

However, if an observation of a parameter returns an apparently unnatural value, can one be sure that it is accidentally small? In other words, can we be confident we have definitively explored all possible phenomena in nature that can give rise to naturally small parameters? 

From 30 January to 3 February, participants of an informal CERN theory institute “Exotic Approaches to Naturalness” sought to answer this question. Drawn from diverse corners of the theorist zoo, more than 130 researchers gathered, both virtually and in person, to discuss questions of naturalness. The invited talks were chosen to expose phenomena in quantum field theory and beyond which challenge the naive naturalness paradigm.

Coincidences and correlations

The first day of the workshop considered how apparent numerical coincidences can lead to unexpectedly small parameters in the IR as the result of selection rules that do not follow in an obvious way from a symmetry, known as “natural zeros”. A second set of talks considered how, going beyond quantum field theory, the UV and IR can potentially be unexpectedly correlated, especially in theories containing quantum gravity, and how this correlation can lead to cancellations that are not apparent from a purely quantum-field-theory perspective.

The second day was far-ranging, with the first talk unveiling some lower dimensional theories of the sort one more readily finds in condensed matter systems, in which “topological” effects lead to constraints on IR parameters. A second discussed how fundamental properties, such as causality, can impose constraints on IR parameters unexpectedly. The last demonstrated how gravitational effective theories, including those describing the gravitational waves emitted in binary black hole inspirals, have their own naturalness puzzles.

Midweek, alongside an inspirational theory colloquium by Nathaniel Craig (UC Santa Barbara), the potential role of cosmology in naturalness was interrogated. An early example made famous by Steven Weinberg concerns the role of the “anthropic principle” in the presently measured value of the cosmological constant. However, since then, particularly in recent years, theorists have found many possible connections and mechanisms linking naturalness questions to our universe and beyond.

The fourth day focussed on the emerging world of generalised and higher-form symmetries, which are new tools in the arsenal of the quantum field theorist. It was discussed how the naturalness of IR parameters may arise as a consequence of these recently uncovered symmetries, even when it would otherwise be obscured from view within a traditional symmetry perspective. The final day studied connections between string theory, the swampland and naturalness, exploring how the space of theories consistent with string theory leads to restricted values of IR parameters, which potentially links to naturalness. An eloquent summary was delivered by Tim Cohen (CERN).

Grand slam

In some sense the goal of the workshop was to push back the boundaries by equipping model builders with new and more powerful perspectives and theoretical tools linked to questions of naturalness, broadly defined. The workshop was a grand slam in this respect. However, the ultimate goal is to now go forth and use these new tools to find new angles of attack on the biggest naturalness questions in fundamental physics, relating to the cosmological constant and the Higgs mass.

The Standard Model, despite being an eminently marketable logo for mugs and t-shirts, is incomplete. It breaks down at very short distances and thus it is the IR of some more complete, more explanatory UV theory. We don’t know what this UV theory is; however, it apparently makes unnatural predictions for the Higgs mass and cosmological constant. Perhaps nature isn’t unnatural and generalised symmetries are as-yet hidden from our eyes, or perhaps string theory, quantum gravity or cosmology has a hand in things? It’s also possible, of course, that nature has fine-tuned these parameters by accident; however, that would seem – à la Weinberg – to point towards a framework in which such parameters are, in principle, measured in many different universes. All of these possibilities, and more, were discussed and explored to varying degrees.

Perhaps the most radical possibility, the most “exotic approach to naturalness” of all, would be to give up on naturalness altogether. Perhaps, in whatever framework UV-completes the Standard Model, parameters such as the Higgs mass are simply incalculable, unpredictable in terms of more fundamental parameters, at any length scale. Shortly before the advent of relativity, quantum mechanics and all that has followed from them, Lord Kelvin (attribution contested) once declared: “There is nothing new to be discovered in physics now. All that remains is more and more precise measurement”. The breadth of original ideas presented at the “Exotic Approaches to Naturalness” workshop, and the new connections constantly being made between formal theory, cosmology and particle phenomenology, suggest it would be similarly unwise now, as it was then, to make such a wager.
