Highly energetic cosmic rays reach Earth from all directions and at all times, yet it has proved challenging to identify their sources conclusively. Being charged, cosmic rays are easily deflected by interstellar magnetic fields during their propagation and thereby lose any information about where they originated. Highly energetic photons and neutrinos, on the other hand, travel undeflected. Observations of high-energy photons and neutrinos therefore provide crucial clues for unravelling the mystery of cosmic-ray sources and accelerators.
Four years ago, the IceCube collaboration announced the identification of the blazar TXS 0506+056 as a source of high-energy cosmic neutrinos, the first of its kind (CERN Courier September 2018 p7). This was one of the early examples of multi-messenger astronomy: a high-energy neutrino event detected by IceCube, coincident in direction and time with a gamma-ray flare from the blazar, prompted an investigation into this object as a potential astrophysical neutrino source.
Point source
In the following years, IceCube performed a full-sky scan for point-like neutrino sources and, in 2020, found an excess coincident with the Seyfert II galaxy NGC1068 that was inconsistent with a background-only hypothesis. With a statistical significance of only 2.9σ, however, it was insufficient to claim a detection. In November 2022, after a more detailed analysis with a longer livetime and improved methodologies, the collaboration confirmed NGC1068 to be a point source of high-energy neutrinos at a significance of 4.2σ.
IceCube’s measurements usher in a new era of neutrino astronomy
Messier 77, also known as the Squid Galaxy or NGC1068, lies 47 million light-years away in the constellation Cetus and was discovered in 1780. Today we know it to be an active galaxy: at its centre lies an active galactic nucleus (AGN), a luminous and compact region powered by a supermassive black hole surrounded by an accretion disk. Specifically, it is a Seyfert II galaxy: an active galaxy viewed edge-on, with the line of sight passing through the material around the accretion region that obscures the centre.
The latest search used data from the fully completed IceCube detector. Several calibrations and alignments of the data-acquisition system were also carried out, and an advanced event-reconstruction algorithm was deployed. The search was conducted in the northern hemisphere of the sky, i.e. by detecting neutrinos arriving from “below”, so that the Earth screens out the background of atmospheric muons.
Three different searches were carried out to locate possible point-like neutrino sources. The first involved scanning the sky for a statistically significant excess over background, while the other two used a catalogue of 110 sources developed in the 2020 study, the difference between them being the statistical methods used. The results showed an excess of 79 (+22/−20) muon–neutrino events, with the main contribution coming from neutrinos in the energy range 1.5 to 15 TeV; the all-flavour flux is expected to be a factor of three higher. All the events contributing to the excess were well reconstructed within the detector, with no signs of anomalies, and the results were not dominated by just one or a few individual events. The results were also in line with phenomenological models that predict the production of neutrinos and gamma rays in sources such as NGC1068.
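For readers curious about the statistical machinery, point-source searches of this kind are typically based on an unbinned likelihood-ratio test: each event is weighted by a signal probability, peaked at the candidate source direction, and a background probability, roughly isotropic, and the number of signal events is fitted by maximising the likelihood. The Python sketch below is a deliberately simplified toy of that idea, not IceCube’s analysis code; the event sample, point-spread width and solid angle are invented for illustration.

# Toy unbinned point-source likelihood-ratio test (illustration only,
# not the IceCube analysis). Events are characterised by their angular
# distance from the candidate source direction.
import numpy as np
from scipy.optimize import minimize_scalar

def signal_pdf(ang_dist, sigma):
    # 2D Gaussian point-spread function around the source (per steradian)
    return np.exp(-ang_dist**2 / (2 * sigma**2)) / (2 * np.pi * sigma**2)

def test_statistic(ang_dist, sigma, solid_angle):
    # Fit the number of signal events n_s by maximising ln L(n_s)/L(0)
    n_tot = len(ang_dist)
    sig = signal_pdf(ang_dist, sigma)
    bkg = 1.0 / solid_angle          # isotropic background (per steradian)

    def neg_log_lr(n_s):
        return -np.sum(np.log(n_s / n_tot * sig + (1 - n_s / n_tot) * bkg)
                       - np.log(bkg))

    fit = minimize_scalar(neg_log_lr, bounds=(0.0, n_tot), method="bounded")
    return -fit.fun, fit.x           # test statistic and best-fit n_s

# Toy sky patch: mostly background plus a handful of clustered events
rng = np.random.default_rng(42)
events = np.concatenate([rng.uniform(0.0, 0.2, 2000),           # background
                         np.abs(rng.normal(0.0, 0.005, 25))])   # "source"
ts, n_s = test_statistic(events, sigma=0.005, solid_angle=0.13)
print(f"TS = {ts:.1f}, fitted signal events = {n_s:.1f}")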
IceCube’s measurements usher in a new era of neutrino astronomy and take researchers a step closer to understanding not only the origin of high-energy cosmic rays but also the immense power of massive black holes, such as the one residing inside NGC1068.
From 1 to 4 November, the first International Conference on Quantum Technologies for High-Energy Physics (QT4HEP) was held at CERN. With 224 people attending in person and many more following online, the event brought together researchers from academia and industry to discuss recent developments and, in particular, to identify activities within particle physics that can benefit most from the application of quantum technologies.
Opening the event, Joachim Mnich, CERN director for research and computing, noted that CERN is widely recognised, including by its member states, as an important platform for promoting applications of quantum technologies for both particle physics and beyond. “The journey has just begun, and the road is still long,” he said, “but it is certain that deep collaboration between physicists and computing experts will be key in capitalising on the full potential of quantum technologies.”
The conference was organised by the CERN Quantum Technology Initiative (CERN QTI), which was established in 2020, and followed a successful workshop on quantum computing in 2018 that marked the beginning of a range of new investigations into quantum technologies at CERN. CERN QTI covers four main research areas: quantum theory and simulation; quantum sensing, metrology and materials; quantum computing and algorithms; and quantum communication and networks. The first day’s sessions focused on the first two: quantum theory and simulation, as well as quantum sensing, metrology and materials. Topics covered included the quantum simulation of neutrino oscillations, scaling up atomic interferometers for the detection of dark matter, and the application of quantum traps and clocks to new-physics searches.
Building partnerships
Participants showed an interest in broadening collaborations related to particle physics. Members of the quantum theory and quantum sensing communities discussed ways to identify and promote areas of promise relevant to CERN’s scientific programme. It is clear that many detectors in particle physics can be enhanced – or even made possible – through targeted R&D in quantum technologies. This fits well with ongoing efforts to implement a chapter on quantum technologies in the European Committee for Future Accelerators’ R&D roadmap for detectors, noted Michael Doser, who coordinates the branch of CERN QTI focused on sensing, metrology and materials.
For the theory and simulation branch of CERN QTI, the speakers provided a useful overview of quantum machine learning, quantum simulations of high-energy collider events and neutrino processes, and efforts to make quantum-information studies of wormholes testable on a quantum processor. Elina Fuchs, who coordinates this branch of CERN QTI, explained how quantum advantages have been found for toy models of increasing physical relevance. Furthermore, she said, developing a dictionary that relates interactions at high energies to those at lower energies will enhance the knowledge about new-physics models that can be gained from quantum-sensing experiments.
The conference demonstrated the clear potential of different quantum technologies to impact upon particle-physics research
The second day’s sessions focused on the remaining two areas, with talks on quantum machine learning, noise gates for quantum computing, the journey towards a quantum internet, and much more. These talks clearly demonstrated the importance of working in interdisciplinary, heterogeneous teams when approaching particle-physics research with quantum-computing techniques. The technical talks also showed how studies of the algorithms are becoming more robust, with a focus on addressing problems that are as realistic as possible.
A keynote talk from Yasser Omar, president of the Portuguese Quantum Institute, presented the “fleet” of programmes on quantum technologies that has been launched since the EU Quantum Flagship was announced in 2018. In particular, he highlighted QuantERA, a network of 39 funding organisations from 31 countries; QuIC, the European Quantum Industry Consortium; EuroQCI, the European Quantum Communication Infrastructure; EuroQCS, the European Quantum Computing and Simulation Infrastructure; and the many large national quantum initiatives being launched across Europe. The goal, he said, is to make Europe autonomous in quantum technologies, while remaining open to international collaboration. He also highlighted the role of World Quantum Day – founded in 2021 and celebrated each year on 14 April – in raising awareness around the world of quantum science.
Jay Gambetta, vice president of IBM Quantum, gave a fascinating talk on the path to quantum computers that exceed the capabilities of classical computers. “Particle physics is a promising area for looking for near-term quantum advantage,” he said. “Achieving this is going to take both partnership with experts in quantum information science and particle physics, as well as access to tools that will make this possible.”
Industry and impact
The third day’s sessions – organised in collaboration with CERN’s knowledge transfer group – were primarily dedicated to industrial co-development. Many of the extreme requirements faced by quantum technologies, such as superconducting materials, ultra-high vacuum and precise timing, are shared with particle physics. For this reason, CERN has built up a wealth of expertise and specific technologies that can directly address challenges in the quantum industry. CERN works in many ways to maximise the impact of its technologies and know-how and to ease their transfer to industry and society; one focus is to identify which technologies might help to build robust quantum-computing devices. Already, CERN’s White Rabbit technology, which provides sub-nanosecond accuracy and picosecond precision of synchronisation for the LHC accelerator chain, has found its way to the quantum community, noted Han Dols, business development and entrepreneurship section leader.
Several of the day’s talks focused on challenges around trapped ions and control systems. Other topics covered included the potential of quantum computing for drug development, measuring brain function using quantum sensors, and developing specialised instrumentation for quantum computers. Representatives of several start-up companies, as well as from established technology leaders, including Intel, Atos and Roche, spoke during the day. The end of the third day was dedicated to crucial education, training and outreach initiatives. Google provided financial support for 11 students to attend the conference, and many students and researchers presented posters.
Marieke Hood, executive director for corporate affairs at the Geneva Science and Diplomacy Anticipator (GESDA) foundation, also gave a timely presentation about the recently announced Open Quantum Institute (OQI). CERN is part of a coalition of science and industry partners proposing the creation of this institute, which will work to ensure that emerging quantum technologies tackle key societal challenges. It was launched at the 2022 GESDA Summit in October, during which CERN Director-General Fabiola Gianotti highlighted the potential of quantum technologies to help achieve key UN Sustainable Development Goals. “The OQI acts at the interface of science and diplomacy,” said Hood. “We’re proud to count CERN as a key partner for OQI; its experience of multinational collaboration will be most useful to help us achieve these ambitions.”
The final day of the conference was dedicated to hands-on workshops with three different quantum-computing providers. In parallel, a two-day meeting of the “Quantum Computing 4HEP” working group, organised by CERN, DESY and the IBM Quantum Network, took place.
Qubit by qubit
Overall, the QT4HEP conference demonstrated the clear potential of different quantum technologies to impact upon particle-physics research. Some of these technologies are here today, while others are still a long way off. Targeted collaboration across disciplines and across the academia–industry interface will help ensure that CERN’s research community is ready to capitalise on the potential of these technologies.
“Widespread quantum computing may not be here yet, but events like this one provide a vital platform for assessing the opportunities this breakthrough technology could deliver for science,” said Enrica Porcari, head of the CERN IT department. “Through this event and the CERN QTI, we are building on CERN’s tradition of bringing communities together for open discussion, exploration, co-design and co-development of new technologies.”
When did you first know you had a passion for pure mathematics?
I have had a passion for mathematics since my first year in school. At that time I did not realise what “pure mathematics” was, but maths was my favourite subject from a very early age.
What is number theory, in terms that a humble particle physicist can understand?
In fact, “number theory” is not well defined: any interesting question about numbers, geometric shapes or functions can be seen as a question for a number theorist.
What motivated you to work on sphere-packing?
I think it is a beautiful problem, something that can be easily explained. Physicists know what a Euclidean space and a sphere are, and everybody knows the problem from stacking oranges or apples. What is a bit harder to explain is that mathematicians are not trying to model a particular physical situation. Mathematicians are not bound to phenomena in nature to justify their work, they just do it. We do not need to model any physical situation, which is a luxury. The work could have an accidental application, but this is not the primary goal. Physicists, especially theorists, are used to working in multi-dimensional spaces. At the same time, these dimensions have a special interpretation in physics.
What fascinates you most about working on theoretical rather than applied mathematics?
My motivation often comes out of curiosity and my belief that the solutions to the problems will become useful at some point in the future. But it is not my job to judge or to define the usefulness. My belief is that the fundamental questions must be answered, so that other people can use this knowledge later. It is important to understand the phenomena in mathematics and in science in general, and there is the possibility of discovering something that other people have not yet found. Maybe it is even possible to come up with other ideas for detectors, which would be interesting. When I look at physics detectors, for example, it fascinates me how complex these machines are and how many tiny technical solutions must be invented to make it all work.
How did you go about cracking the sphere-stacking problem?
I think there was an element of luck that I could find the correct idea to solve this problem because many people worked on it before. I was fortunate to find the right solution. The initial problem came from geometry, but the final solution came from Fourier analysis, via a method called linear programming.
I think a mathematical reality exists on its own and sometimes it does describe actual physical phenomena
In 2003, mathematicians Henry Cohn and Noam Elkies applied the linear programming method to the sphere-packing problem and numerically obtained a nearly optimal upper bound in dimensions 8 and 24. Their method relied on constructing an auxiliary, “magic”, function. They computed this function numerically but could not find an explicit formula for it. My contribution was to find the explicit formula for the magic function.
What applications does your work have, for example in quantum gravity?
After I solved the sphere-packing problem in dimension 8 in 2016, CERN physicists worked on the relation between two-dimensional conformal field theory and quantum gravity. From what I understand, conformal field theories are mathematically totally different from sphere-packing problems. However, if one wants to optimise certain parameters in the conformal field theory, physicists use a method called “bootstrap”, which is similar to the linear programming that I used. The magic functions I used to solve the sphere-packing problem were independently rediscovered by Thomas Hartman, Dalimil Mazác and Leonardo Rastelli.
Are there applications beyond physics?
One of the founders of modern computer science, Claude Shannon, realised that sphere-packing problems are not only interesting geometric problems that pure mathematicians like me can play with, but also a good model for error-correcting codes, which is why higher-dimensional sphere-packing problems became interesting for mathematicians. A very simplified version of the original model could be the following. An error is introduced during the transmission of a message. Assuming the error is under control, the corrupted message is still close to the original message. The remedy is to select special versions of the messages, called codewords, which are close to the possible original messages but at the same time far away from each other, so that they do not get mixed up. In geometric language, this situation is an exact analogy of sphere packing: each codeword represents the centre of a sphere, and the sphere around the centre represents the cloud of possible errors. The spheres will not intersect if their centres are far enough away from each other, which allows us to decode the corrupted message.
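To make the analogy concrete, here is a small Python toy, an illustration of the geometric picture rather than a real code construction: a handful of randomly chosen 8-dimensional codewords play the role of sphere centres, and a noisy message is decoded by snapping it to the nearest codeword, which works as long as the noise stays within a sphere of radius less than half the minimum distance between codewords.

# Toy nearest-codeword decoding: codewords act as sphere centres and the
# error "cloud" around a transmitted word is the sphere (illustration only).
import numpy as np

rng = np.random.default_rng(0)
codewords = rng.integers(0, 4, size=(16, 8)).astype(float)   # 16 words in R^8

# The minimum distance between distinct codewords sets the decodable radius.
pairwise = np.linalg.norm(codewords[:, None, :] - codewords[None, :, :], axis=-1)
d_min = pairwise[pairwise > 0].min()

def decode(received):
    # Return the codeword closest to the received (noisy) vector
    return codewords[np.argmin(np.linalg.norm(codewords - received, axis=1))]

sent = codewords[3]
noise = rng.normal(size=8)
noise *= 0.45 * d_min / np.linalg.norm(noise)   # keep the error inside half the gap
print("decoded correctly:", np.array_equal(decode(sent + noise), sent))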
Do you view mathematics as a tool, or a deeper property of reality?
Maybe it is a bit idealistic, but I think a mathematical reality exists on its own. Sometimes it does describe actual physical phenomena, but even when it does not, it still deserves our attention. In our mathematical world, we have chances to realise that something from this abstract world is connected to other fields, such as physics, biology or computer science. The laws of this abstract world often provide us with useful gadgets, which can be used later to describe the other realities. This whole process is a kind of “spiral of knowledge” and we are in one of its turns.
In the Standard Model (SM) of particle physics, the only way the Higgs boson can decay without leaving any traces in the LHC detectors is through the four-neutrino decay, H → ZZ → 4ν, which has an expected branching fraction of only 0.1%. This very small value can be seen as a difficulty but is also an exciting opportunity. Indeed, several theories of physics beyond the SM predict considerably enhanced values for the branching fraction of invisible Higgs-boson decays. In one of the most interesting scenarios, the Higgs boson acts as a portal to the dark sector by decaying to a pair of dark matter (DM) particles. Measurements of the “Higgs to invisible” branching fraction are clearly among the most important tools available to the LHC experiments in their searches for direct evidence of DM particles.
The CMS collaboration recently reported the combined results of different searches for invisible Higgs-boson decays, using data collected at 7, 8 and 13 TeV centre-of-mass energies. To find such a rare signal among the overwhelming background produced by SM processes, the study considers events in most Higgs-boson production modes: via vector-boson (W or Z) fusion, via gluon fusion, and in association with a top quark–antiquark pair or a vector boson. In particular, the analysis looked at hadronically decaying vector bosons or top quark–antiquark pairs. A typical signature of invisible Higgs-boson decays is large missing energy in the detector, so the missing transverse energy plays a crucial role in the analysis. No significant signal has been seen, so a new, stricter upper limit is set on the probability that the Higgs boson decays to invisible particles: 15% at 95% confidence level.
This result has been interpreted in the context of Higgs-portal models, which introduce a dark Higgs sector and consider several dark Higgs-boson masses. The extracted upper limits on the spin-independent DM–nucleon scattering cross section, shown in figure 1 for a range of DM mass points, have better sensitivities than those of direct searches over the 1–100 GeV range of DM masses. Once the Run 3 data are added to the analysis, much stricter limits will be reached or, if we are lucky, evidence for DM production at the LHC will be seen.
Lepton number is a quantum number that counts the difference between the number of leptons and antileptons participating in a process, while lepton flavour is the corresponding quantity for each lepton generation (e, μ or τ) separately. No violation of lepton number has ever been observed, but lepton flavour violation (LFV) is known to exist in nature, as it has been observed in neutrino oscillations – the transition of a neutral lepton of a given flavour to one with a different flavour. This observation motivates searches for additional manifestations of LFV that may result from beyond-the-Standard Model (SM) physics, key among which is the search for LFV decays of the Higgs boson.
The ATLAS collaboration has recently announced the results of searches for H → eτ and H → μτ decays based on the full Run 2 data set, which was collected at a centre-of-mass energy of 13 TeV. The unstable τ lepton decays to an electron or a muon and two neutrinos, or to one or more hadrons and one neutrino. Most of the background events in these searches arise from SM processes such as Z → ττ, the production of top–antitop and weak-boson pairs, as well as from events containing misidentified or non-prompt leptons (fake leptons). These fake leptons originate from secondary decays, for example of charged pions. Several multivariate analysis techniques were used for each final state to provide the maximum separation between signal and background events.
To ensure the robustness of the measurement, two background estimation methods were employed: a Monte Carlo (MC) template method in which the background shapes were extracted from MC and normalised to data, and a “symmetry method”, which used only the data and relied on an approximate symmetry between prompt electrons and prompt muons. Any difference between the branching fractions B(H → eτμ) and B(H → μτe), where the subscripts μ and e represent the decay modes of the τ lepton, would break this symmetry. In both cases, contributions from events containing fake leptons were estimated directly from the data.
The MC-template method enables the measurement of the branching ratios of the LFV decay modes. Searches based on the MC-template method for background estimation involve both leptonic and hadronic decays of τ leptons. A simultaneous measurement of the H → eτ and H → μτ decay modes was performed. For the H → μτ (H → eτ) search, a 2.5 (1.6) standard deviation upward fluctuation above the SM background prediction is observed. The observed (expected) upper limits on the branching fractions B(H → eτ) and B(H → μτ) at 95% confidence level are slightly below 0.2% (0.1%), which are the most stringent limits obtained by the ATLAS experiment on these quantities. The result of the simultaneous measurement of the H → eτ and H → μτ branching fractions is compatible with the SM prediction within 2.2 standard deviations (see figure 1).
The observed upper limits on the branching fractions are the most stringent limits obtained by the ATLAS experiment
The symmetry method is particularly sensitive to the difference in the two LFV decay branching ratios. For this measurement, only the fully leptonic final states were used. Special attention was paid to correctly account for asymmetries induced by the different detector response to electrons and muons, especially regarding the trigger and offline efficiency values for lepton reconstruction, identification and isolation, as well as regarding contributions from fake leptons. The measurement of the branching ratio difference indicates a small but not significant upward deviation for H → μτ compared to H → eτ. The best-fit value for the difference between B(H → μτe) and B(H → eτμ) is (0.25 ± 0.10)%.
The LHC Run 3 dataset, expected to be twice as large and collected at the higher centre-of-mass energy of 13.6 TeV, will shed further light on these results.
The Cabibbo–Kobayashi–Maskawa (CKM) matrix describes the couplings between the quarks and the weak charged current, and contains within it a phase γ that changes sign when quarks are replaced by antiquarks. In the Standard Model (SM), this phase is the only known source of differences between the interactions of matter and antimatter, a consequence of the breaking of charge–parity (CP) symmetry. While the differences within the SM are known to be far too small to explain the matter-dominated universe, it is still of paramount importance to determine this phase precisely, to provide a benchmark against which any contribution from new physics can be compared.
A new measurement recently presented by the LHCb collaboration uses a novel method to determine γ in decays of the type B± → D[K∓π±π±π∓]h± (h = π, K). CP violation in such decays arises from the interference between two tree-level processes whose weak phases differ by γ, and thus provides a theoretically clean probe of the SM. The new aspect of this measurement compared to previous ones lies in the partitioning of the five-dimensional phase space of the D decay into a series of independent regions, or bins. In these bins, the asymmetries between the B+ and B– meson decay rates can receive large enhancements from the hadronic interactions in the D-meson decay. The enhancement for one such bin can be seen in figure 1, which shows the invariant-mass spectrum of the B+ and B– meson candidates, where the correctly reconstructed decays peak at around 5.3 GeV. The observed asymmetry in this region is around 85%, the largest difference in the behaviour of matter and antimatter ever measured. Observables from the different bins are combined with information on the hadronic interactions in the D-meson decay from charm-threshold experiments to obtain γ = 55 ± 9°, which is compatible with previous determinations and is the second most precise single measurement.
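For orientation, the quantity measured in each phase-space bin is, schematically, the charge asymmetry between the yields of the two B charges (before efficiency corrections, and with the sign depending on the bin):

\[
A \;\simeq\; \frac{N(B^-) - N(B^+)}{N(B^-) + N(B^+)},
\qquad
|A| = 0.85 \;\Rightarrow\; \frac{N_{\rm max}}{N_{\rm min}} = \frac{1 + 0.85}{1 - 0.85} \approx 12,
\]

so an 85% asymmetry means that one charge is observed roughly twelve times more often than the other in that bin.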
The matter–antimatter asymmetry reaches 85% in a certain region, the largest ever observed
The LHCb average value of γ is then determined by combining this analysis with measurements in many other B and D decays, in all of which the SM contribution is expected to be dominant. Measurements of charm decays are also included, both to better constrain the parameters of charm mixing, which play an important role in the measurements of B-meson decays at the current level of precision, and to help constrain the hadronic interactions in some of the D decays. In particular, included for the first time in this combination is a measurement of yCP, a quantity proportional to the difference in lifetimes of the two neutral charm mesons, determined using two-body decays of the D meson and the entire LHCb data set collected so far.
The overall impact of these additional analyses reduces the uncertainty on γ by more than 10%, corresponding to adding around a year of data taking across all decay modes.
The improvement in the knowledge of yCP is also dramatic, with the uncertainty reduced by around 40%. While the value of γ is found to be compatible with determinations that would be more susceptible to new physics, the precision of the comparison is starting to approach the level of a few degrees, at which discrepancies may become observable.
Given that the current uncertainties on many of the key input analyses to the combination are predominantly statistical in nature, measurements of these fundamental flavour-physics parameters with the upgraded LHCb detector, and beyond, are an intriguing prospect for new-physics searches.
For almost 40 years, charmonium, a bound state of a heavy charm quark and antiquark (hence also called hidden charm), has provided a unique probe of the properties of the quark–gluon plasma (QGP), the state of matter composed of deconfined quarks and gluons that was present in the first instants of the universe and is produced experimentally in ultrarelativistic heavy-ion collisions. Charmonia come in a rich variety of states. In a new analysis investigating how these different bound charmonium states are affected by the QGP, the ALICE collaboration has opened a novel way to study the strong interaction at extreme temperatures and densities.
In the QGP, the production of charmonium is suppressed due to “colour screening” by the large number of quarks and gluons present. The screening, and thus the suppression, increases with the temperature of the QGP and is expected to affect different charmonium states to different degrees. The production of the ψ(2S) state, for example, which is 10 times more weakly bound and two times larger in size than the most tightly bound state, the J/ψ, is expected to be more suppressed.
This hierarchical suppression is not the only fate of charmonia in the quark–gluon plasma. The large number of charm quarks and antiquarks in the plasma – up to about 100 in head-on lead–lead collisions – also gives rise to a mechanism, called recombination, that forms new charmonia and counters the suppression to a certain extent. This process is expected to depend on the type and momentum of the charmonia, with the more weakly bound charmonia being produced through recombination later in the evolution of the plasma and charmonia with the lowest (transverse) momentum having the highest recombination rate.
Previous studies, using data first from the Super Proton Synchrotron and later from the LHC, have shown that the production of the ψ(2S) state is indeed more suppressed than that of the J/ψ, and ALICE has previously provided evidence of the recombination mechanism in J/ψ production. Until now, however, no studies of ψ(2S) production at low transverse momentum had been precise enough to provide conclusive results in this regime, preventing a complete picture of ψ(2S) production from being obtained.
The ALICE collaboration has now reported the first measurements of ψ(2S) production down to zero transverse momentum, based on lead–lead collision data from the LHC collected in 2015 and 2018. The results indicate that the ψ(2S) yield is largely suppressed with respect to a proton–proton baseline, almost a factor of two more suppressed than the J/ψ. The suppression, shown in the figure as a function of the collision centrality (quantified by the number of participating nucleons, Npart), is expressed through the nuclear modification factor (RAA), which compares particle production in lead–lead collisions with the expectation based on proton–proton collisions.
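Schematically, the nuclear modification factor compares the transverse-momentum spectrum measured in lead–lead collisions with the proton–proton cross section scaled by the average nuclear overlap function of the chosen centrality class:

\[
R_{AA}(p_{\rm T}) \;=\; \frac{1}{\langle T_{AA} \rangle}\,
\frac{\mathrm{d}N_{AA}/\mathrm{d}p_{\rm T}}{\mathrm{d}\sigma_{pp}/\mathrm{d}p_{\rm T}},
\]

so that R_AA = 1 corresponds to no nuclear modification and R_AA < 1 signals suppression.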
Theoretical predictions based on a transport approach that includes suppression and recombination of charmonia in the QGP (TAMU), or on the Statistical Hadronisation Model (SHMc), which assumes charmonia to be formed only at hadronisation, describe the J/ψ data, while the ψ(2S) production in central events is underestimated by the SHMc. This observation represents one of the first indications that dynamical effects in the QGP, as taken into account in the transport models, are needed to reproduce the yields of the various charmonium states. It also shows that precision studies of these and other charmonia, foreseen for Run 3 of the LHC, may lead to a final understanding of the modification of the force binding these states in the extreme environment of the QGP.
This carefully crafted edition highlights the scientific life of 2004 Nobel laureate Frank Anthony Wilczek, and the developments of theoretical physics related to his research. Frank Wilczek: 50 Years of Theoretical Physics is a collection of essays, original research papers and the reminiscences of Wilczek’s friends, students and followers. Wilczek is an exceptional physicist with an extraordinary mathematical talent. The 23 articles represent his vivid research journey from pure particle physics to cosmology, quantum black holes, gravitation, dark matter, applications of field theory to condensed matter physics, quantum mechanics, quantum computing and beyond.
In 1973 Wilczek, together with his doctoral advisor David Gross, discovered asymptotic freedom, through which the field theory of the strong interaction, quantum chromodynamics (QCD), was firmly established. The same discovery was made independently that year by David Politzer, and all three shared the Nobel prize in 2004. Wilczek’s other major work includes the proposed solution of the strong-CP problem through the hypothetical axion, a consequence of the spontaneously broken Peccei–Quinn symmetry. In 1982 he predicted the quasiparticle “anyon”, for which evidence was found in a 2D electronic system in 2020. Anyons fill the need for new types of particle statistics in two dimensions, where exchanged particles need not behave as either fermions or bosons.
Original research papers included in this book were written by pioneering scientists, such as Roman Jackiw and Edward Witten, who are either co-inventors or followers of Wilczek’s work. The articles cover recent developments of QCD, quantum-Hall liquids, gravitational waves, dark energy, superfluidity, the Standard Model, symmetry breaking, quantum time-crystals, quantum gravity and more. Many colour photographs, musical tributes to anyons, memories of quantum-connection workshops and his contribution to the Tsung-Dao Lee Institute in Shanghai complement the volume. The book ends with Wilczek’s publication list, which documents the most significant developments in theoretical particle physics during the past 50 years.
Wilczek is an exceptional physicist with an extraordinary mathematical talent
Though this book is not an easy read in places, and the connections between articles are not always clear, a patient and careful reader will be rewarded. The collection combines rigorous scientific discussions with an admixture of Wilczek’s life, wit, scientific thoughts and teaching – a precious and timely tribute to an exceptional physicist.
Special Topics in Accelerator Physics by Alexander Wu Chao introduces the global picture of accelerator physics, clearly establishing the scope of the book from the first page. The derivations and solutions of concepts and equations are didactic throughout the chapters. Chao takes readers by the hand and guides them step by step through important formulae and their limitations, such that the reader does not miss the important parts – an extremely useful tactic for advanced masters or doctoral students whose topic of interest is among the eight special topics described.
In the first chapter, I particularly liked the way the author transitions from the Vlasov equation, a very powerful technique for studying beam–beam effects, to the Fokker–Planck equation describing the statistical dynamics of charged particles inside an accelerator. Chao pedagogically introduces the potential-well distortion, complemented by illustrations. The discussion of wakefield acceleration, taking readers deeper into the subject and extending it to both proton and electron beams, is timely. The extension of the Fokker–Planck equation to 2D and 3D systems is particularly advanced but at the same time important. The author discusses the practical applications of the transient beam distribution in simple steps and introduces the higher-order moments later. The proposed exercises, for some of which solutions are provided, are practical as well.
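For readers who want the flavour of that transition, the one-dimensional Fokker–Planck equation for the beam distribution ψ(x, t) has the generic form (a schematic version, not Chao’s exact formula; notation and factors vary between texts):

\[
\frac{\partial \psi}{\partial t}
= -\frac{\partial}{\partial x}\big[A(x)\,\psi\big]
+ \frac{1}{2}\,\frac{\partial^2}{\partial x^2}\big[D(x)\,\psi\big],
\]

where A describes the deterministic drift (including damping) and D the diffusion; dropping the damping and diffusion terms recovers the collisionless Vlasov description.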
In chapter two, the concept of symplecticity, the conservation of phase space (a subject that causes much confusion), is discussed with concrete examples. Naming issues are meticulously explained, such as the use of the term short-magnet rather than thin-lens approximation in formula 2.6. Symplectic models for quadrupole magnets are introduced, and the discussion that follows is extremely useful for students and accelerator physicists who will use symplectic codes such as MAD-X and who would like to understand the mathematical framework behind their operation. This connects nicely with the next chapter, and the book offers useful insights into how these codes operate. In the discussion of third-order integration, Chao makes occasional mental leaps, which could have been mitigated with an additional sentence. Although the discussion of higher-order and canonical integrators is rather specialised, it is still very useful.
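The defining condition behind that discussion can be stated compactly: a transfer map with Jacobian matrix M is symplectic if

\[
M^{\mathsf T} J\, M = J,
\qquad
J = \begin{pmatrix} 0 & I \\ -I & 0 \end{pmatrix},
\]

where I is the identity in the coordinate (or momentum) block; this guarantees that the map preserves phase-space areas, which is the property that symplectic tracking codes such as those mentioned above are built to respect.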
The author introduces the extremely convenient and broadly used truncated power series algebra (TPSA) technique, used to obtain maps, in chapter three. Chao explains in a simple manner the transition from the pre-TPSA algorithms (such as TRANSPORT or COSY) to symplectic algorithms such as MAD-X or PTC, as well as the reason behind this evolution. The clear “drawbacks” discussion is very useful in this regard.
The transition to Lie algebra in chapter four is masterful and pedagogical. Lie algebras, an advanced topic that comes with many formulas, are the main focus of this section of the book. In particular, the non-linearity of the drift space, a region free of fields, should catch the reader’s attention. This is followed by specialised applications for expert readers only. One of this chapter’s highlights is the derivation of the sextupole pairing, complemented by that of Taylor maps up to second order and their Lie-algebra representation, although it would have been better if the “Our plan” section had been placed at the beginning of the chapter.
Chapter five covers proton-spin dynamics. Spinor formulas and the Froissart–Stora equation for the change in polarisation are developed and explained. The Siberian-snake technique, which the author discusses in detail, remains one of the best-known ways to preserve beam polarisation. This links elegantly to chapter six, which introduces the reader to electron-spin dynamics, where synchrotron radiation is the dominant effect and which therefore constitutes a completely different research area. Chao focuses on the differences between the quantum and classical approaches to synchrotron radiation, a phenomenon that cannot be ignored in high-brightness machines. Analogies between protons and electrons are then very well summarised in the recap figure 6.3. Section 6.5 is important for storage rings and leads smoothly to the Derbenev–Kondratenko formula and its applications.
Echoes
Chapter seven looks at echoes, a key technique for measuring diffusion in an accelerator. The author introduces the reader to the generality of the term and to the concept of echoes in accelerator physics. Transverse echoes (with and without diffusion) are treated quite analytically and the figures are didactic.
The book concludes with a very complete, concise and detailed chapter about beam–beam effects, which acts as an introduction to collider–accelerator physics for coherent- and incoherent-effects studies. Although synchro-betatron couplings causing resonant instabilities are advanced topics, they are often seen in practice when operating the machines, and the book offers the theoretical background for a deeper understanding of these effects.
Special Topics in Accelerator Physics is well written and develops the advanced subjects in a comprehensive, complete and pedagogical way.
High-energy physics spans a wide range of energies, from a few MeV to several TeV, all of which are relevant. It is therefore often difficult to take all phenomena into account at the same time. Effective field theories (EFTs) are designed to break this range of scales down into smaller segments so that physicists can work within the relevant range. Theorists “cut” their theory at an energy scale of the order of the mass of the lightest particle omitted from the theory, such as the proton mass. Multi-scale problems thereby reduce to separate, single-scale problems (see “Scales” image). EFTs are today also understood to be “bottom-up” theories. Built only out of the general field content and symmetries at the relevant scales, they allow us to test hypotheses efficiently and to select the most promising ones without needing to know the underlying theories in full detail. Thanks to their applicability to generic classical and quantum field theories, the sheer variety of EFT applications is striking.
In hindsight, particle physicists have been working with EFTs since as early as Fermi’s phenomenological picture of beta decay, in which a four-fermion vertex replaces the W-boson propagator because the momentum transfer is much smaller than the W-boson mass (see “Fermi theory” image). Like so many profound concepts in theoretical physics, EFT was first considered in a narrow phenomenological context. One of the earliest instances was in the 1960s, when ad-hoc current-algebra methods were used to study the weak interactions of hadrons. These required detailed calculations, and a simpler approach was needed to derive useful results. The heuristic idea of describing hadron dynamics with the most general Lagrangian density consistent with the symmetries, the relevant energy scale and the relevant particles, written in terms of operators multiplied by Wilson coefficients, was yet to come. With this approach, local symmetries could be encoded in the current algebra through their association with conserved currents.
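In modern language, Fermi’s interaction is the leading operator obtained by integrating out the heavy W boson, with its Wilson coefficient fixed by matching to the full electroweak theory (shown here schematically for the charged-current four-fermion operator):

\[
\mathcal{L}_{\rm eff} \supset -\frac{4 G_F}{\sqrt{2}}
\big(\bar{\psi}_1 \gamma^\mu P_L \psi_2\big)\big(\bar{\psi}_3 \gamma_\mu P_L \psi_4\big),
\qquad
\frac{G_F}{\sqrt{2}} = \frac{g^2}{8\, M_W^2},
\]

valid for momentum transfers well below the W-boson mass.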
For the strong interaction, physicists described the interactions between pions with chiral perturbation theory, an effective Lagrangian that simplified current-algebra calculations and enabled the low-energy theory to be investigated systematically. This “mother” of modern EFTs describes the physics of hadrons and remains valid up to an energy scale of the order of the proton mass. Heavy-quark effective theory (HQET), introduced by Howard Georgi in 1990, complements chiral perturbation theory by describing the interactions of charm and bottom quarks. HQET allowed predictions to be made for B-meson decay rates, since the corrections could now be classified systematically. The more powers of energy are allowed, the more infinities appear; these are cancelled by the available counter-terms.
Similarly, it is possible to regard the Standard Model as the truncation of a much more general theory that includes non-renormalisable interactions, which yield corrections of higher order in energy. This perception of the whole Standard Model as an effective field theory began to form in the late 1970s with Weinberg and others (see “All things EFT: a lecture series hosted at CERN” panel). Among the known corrections to the Standard Model that do not respect its accidental symmetries are neutrino masses, postulated in the 1960s and established via the observation of neutrino oscillations in the late 1990s. While the scope of EFTs was initially unclear, today we understand that all the successful field theories with which we work across theoretical physics are nothing but effective field theories. EFTs provide the theoretical framework to probe new physics and to establish precision programmes at experiments; the former is crucial for making accurate theoretical predictions, while the latter is central to the physics programme of CERN in general.
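Written out, this viewpoint amounts to supplementing the SM Lagrangian with a tower of higher-dimensional operators suppressed by powers of a heavy new-physics scale Λ; schematically, the single dimension-five (Weinberg) operator is precisely the term that generates Majorana neutrino masses:

\[
\mathcal{L}_{\rm EFT} = \mathcal{L}_{\rm SM}
+ \frac{c_5}{\Lambda}\,\mathcal{O}^{(5)}
+ \sum_i \frac{c^{(6)}_i}{\Lambda^2}\,\mathcal{O}^{(6)}_i + \dots,
\qquad
m_\nu \sim \frac{c_5\, v^2}{\Lambda},
\]

where v is the Higgs vacuum expectation value and the c_i are dimensionless Wilson coefficients.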
EFTs in particle physics
More than a decade has passed since the first run of the LHC, in which the Higgs boson was discovered and the mechanism for electroweak symmetry breaking established. So far there are no signals of new physics beyond the SM. EFTs are well suited to exploring LHC physics in depth. A typical example of a process involving two scales is Higgs-boson production, where there can be a factor of 10–100 between the boson’s mass and its transverse momentum. The calculation of each Higgs-boson production process leads to large logarithms of this scale ratio, which can invalidate perturbation theory. This is just one of many examples of the two-scale problems that arise when the full quantum-field-theory approach is applied at high-energy colliders. Traditionally, such two-scale problems have been treated in the framework of QCD factorisation and resummation.
Over the past two decades, it has become possible to recast two-scale problems at high-energy colliders using soft-collinear effective theory (SCET). SCET is nowadays a popular framework used to describe Higgs physics, jets and their substructure, as well as more formal problems such as power corrections, with the eventual aim of reconstructing full amplitudes. The difference between HQET and SCET is that SCET considers long-distance interactions between quarks and both soft and collinear particles, whereas HQET takes into account only soft interactions between a heavy quark and a parton. SCET is just one example where the EFT methodology has been indispensable even though the underlying theory at much higher energies is known. Other examples of EFT applications include precision measurements of rare decays that can be described by QCD with its approximate chiral symmetry, or heavy quarks at finite temperature and density. EFT is also central to a deeper understanding of the so-called flavour anomalies, enabling comparisons between theory and experiment in terms of particular Wilson coefficients.
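Schematically, for Higgs production at transverse momentum p_T well below the Higgs mass m_H, each power of the strong coupling α_s comes accompanied by up to two powers of a large logarithm, so the naive perturbative expansion

\[
\frac{\mathrm{d}\sigma}{\mathrm{d}p_{\rm T}}
\;\sim\; \sum_{n} \alpha_s^{\,n} \sum_{m \le 2n} c_{nm}\,
\ln^{m}\!\frac{m_H^2}{p_{\rm T}^2}
\]

converges poorly, and the logarithmically enhanced terms must be resummed to all orders.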
A novel global lecture series titled “All things EFT” was launched at CERN in autumn 2020 as a cross-cutting online series focused on the universal concept of EFT, and its application to the many areas where it is now used as a core tool in theoretical physics. Inaugurated in a formidable historical lecture by the late Steven Weinberg, who reviewed the emergence and development of the idea of EFT through to its perception nowadays as encompassing all of quantum field theory and beyond, the lecture series has amassed a large following that is still growing. The series featured outstanding speakers, world-leading experts from cosmology to fluid dynamics, condensed-matter physics, classical and quantum gravity, string theory, and of course particle physics – the birthing bed of the powerful EFT framework. The second year of the series was kicked off in a lecture dedicated to the memory of Weinberg by Howard Georgi, who looked back on the development of heavy-quark effective theory and its immediate aftermath.
Moreover, precision measurements of Higgs and electroweak observables at the LHC and future colliders will provide opportunities to detect signals of new physics, whether as resonances in invariant-mass distributions or as small deviations from the SM seen in the tails of distributions, for instance at the HL-LHC. Such measurements test the perception of the SM as a low-energy incarnation of a more fundamental theory probed at the electroweak scale. The corresponding framework is dubbed SMEFT (SM EFT) or HEFT (Higgs EFT), depending on whether the Higgs field is expressed in terms of the Higgs doublet or the physical Higgs boson. This EFT framework has recently been implemented in LHC data-analysis tools, enabling analyses to be combined across different channels and even different experiments (see “LHC physics” image). At the same time, the study of SMEFT and HEFT has sparked a plethora of theoretical investigations that have uncovered remarkable underlying features, for example showing how the EFT can be extended or placing constraints on the EFT coefficients from Lorentz invariance, causality and analyticity.
EFTs in gravity
Since the inception of EFT, it had been believed that the framework applies only to quantum field theories capturing the physics of elementary particles at high energy scales, or equivalently at very small length scales. EFT therefore seemed largely irrelevant to gravitation, for which we still lack a full theory valid at quantum scales. The only way in which EFT seemed pertinent to gravitation was to think of general relativity as the first approximation to an EFT description of quantum gravity, which indeed provided a new EFT perspective at the time. In the past decade, however, it has become widely acknowledged that EFT provides a powerful framework to capture gravitational dynamics occurring entirely at large, classical length scales, as long as those scales display a clear hierarchy.
The most notable application to such classical gravitational systems came with the realisation that the EFT framework is ideal for handling the gravitational radiation emitted during the inspiral phase of a binary of compact objects, such as black holes. At this stage in the evolution of the binary, the compact objects move at non-relativistic velocities. Using the small velocity as the expansion parameter exposes the separation between the various characteristic length scales of the system, so the physics can be treated perturbatively. It was found, for example, that couplings change across the characteristic scales even in classical systems, something previously believed to be unique to quantum field theories. The application of EFT to the binary-inspiral problem has been so successful that the precision frontier has been pushed beyond the previous state of the art, quickly surpassing the reach of work that had focused on the two-body problem for decades via traditional methods in general relativity.
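Concretely (and schematically, taking c = 1), the three characteristic length scales of the inspiral are the size r_s of the compact objects, the orbital separation r and the wavelength λ of the emitted radiation, and the orbital velocity v ties them together:

\[
r_s \;\ll\; r \;\ll\; \lambda \sim \frac{r}{v},
\qquad
\frac{r_s}{r} \sim v^2 \;\;\text{(virial relation for a bound orbit)},
\]

so each scale can be treated in its own effective description and matched to the next, order by order in v.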
This theoretical progress has had an even broader impact since the breakthrough direct discovery of gravitational waves (GWs) was announced in 2016: an inspiralling binary of black holes merged into a single black hole in a split second, releasing an enormous amount of energy in the form of GWs. The discovery has spurred even more intensive use of EFTs for the generation of theoretical GW data. In the coming years and decades, a continuous increase in the quantity and quality of real-world GW data is expected from the rapidly growing worldwide network of ground-based GW detectors, and from future space-based interferometers covering a wide range of target frequencies (see “Next generation” image).
EFTs in cosmology
Cosmology is an inherently cross-cutting domain, spanning roughly 60 orders of magnitude in scale, from the Planck scale to the size of the observable universe. As such, cosmology cannot generally be tackled directly by any single fundamental theory of particle physics or gravity. Its correct description relies heavily on work in many disparate areas of theoretical and experimental physics, including particle physics and general relativity among many others.
The development of EFT applications in cosmology – including EFTs of inflation, dark matter, dark energy and even of large-scale structure – has become essential for making observable predictions. The discovery of the accelerated expansion of the universe in 1998 exposed our difficulty in understanding gravity in both the quantum and the classical regimes. The cosmological-constant problem and the dark-matter paradigm might be hints of alternative theories of gravity at very large scales; indeed, the problems with gravity at very high and very low energies may well be tied together. The science programmes of next-generation large surveys, such as ESA’s Euclid satellite (see “Expanding horizons” image), rely heavily on all these EFT applications to exploit the enormous volumes of data that will be collected to constrain unknown cosmological parameters, thus helping to pinpoint viable theories.
The future of EFTs in physics
The EFT framework plays a key role at the exciting and rich interface between theory and experiment in particle physics, gravity and cosmology, as well as in other domains, such as condensed-matter physics, not covered here. The technology for precision measurements in these domains is constantly being upgraded, and in the coming years and decades we are heading towards a growing influx of real-world data of higher quality. Future particle-collider projects, such as the Future Circular Collider at CERN or China’s Circular Electron Positron Collider, are being planned and developed. Precision cosmology is also thriving, with an upcoming generation of very large surveys, such as the ground-based LSST or the space-based Euclid. GW detectors keep improving and multiplying: besides those currently operating, many more are planned, targeting various frequency ranges, which will enable a richer array of sources and events to be found.
EFTs provide the theoretical framework to probe new physics and to establish precision programmes at experiments across all domains of physics
Half a century after the concept formally emerged, effective field theory is still full of surprises. Recently, the space of EFTs has been studied as a fundamental entity in its own right. These studies, by numerous groups worldwide, have exposed a hidden “totally positive” geometric structure, dubbed the EFT-hedron, that constrains the EFT expansion of any quantum field theory, and even string theory, from first principles such as causality, unitarity and analyticity, which must be satisfied by the amplitudes of these theories. This recent formal progress reflects the ultimate leap in the perception of EFT as the most fundamental and most generic theoretical framework for capturing the physics of nature at all scales. Clearly, among the vast array of formidable open questions in physics that still lie ahead, effective field theory is here to stay – for good.