What are the essential requirements for the formation of a quark–gluon plasma (QGP)? Do only the most violent, head-on lead–lead (Pb–Pb) interactions at the LHC provide such conditions? The answers to these questions will provide key insights into the mechanisms driving the QGP towards equilibration, converting kinetic collision energy into a hot and strongly interacting medium.
Recent measurements of proton–lead (p–Pb) and proton–proton (pp) collisions at the LHC have shown intriguing hints of QGP-like behaviour in such systems, which were initially thought to be too small for QGP formation. Experimentalists classify p–Pb collisions by a parameter called the event activity (EA), which is characterised by particle or energy production in the forward Pb-going direction; the most violent p–Pb collisions, with the largest EA, exhibit correlations that are characteristic of the collective flow of the QGP. Verification of this picture requires measurements of other QGP signals, notably the “quenching” of energetic quark and gluon jets as they propagate through the dense QCD medium.
Jets arise from the scattering of quarks and gluons in the incoming projectiles, and are produced predominantly in azimuthally back-to-back pairs. The first jet-quenching measurements in p–Pb collisions looked for suppression of the inclusive production rate of high-momentum hadrons and jets, counting all such objects and comparing them to a reference rate from pp collisions. Some inclusive suppression measurements indicate significant jet suppression in the highest-EA p–Pb collisions. Quantitative comparison to the pp reference spectrum requires the assumption that high EA corresponds to central p–Pb collisions, in which the proton ploughs through the centre of the Pb nucleus. However, the relation between the forward particle and energy production used to measure EA and the geometry of a p–Pb collision may be modified in events containing jets, complicating the interpretation. An approach to jet quenching in p–Pb collisions that does not invoke this assumption is therefore needed.
For this purpose, the ALICE collaboration has reported measurements of the semi-inclusive distribution of jets recoiling from a high-momentum hadron trigger (h+jet) in p–Pb collisions, as a function of EA. The h+jet distribution is self-normalising, owing to the back-to-back nature of jet-pair production: jet quenching would manifest as a reduction in the jet rate per trigger, without comparison to a pp reference spectrum or the assumption that high EA corresponds to central p–Pb collisions. The analysis applies a data-driven statistical approach to correct for the large uncorrelated background, enabling the accurate measurement of recoil jets over a broad phase space in the complex LHC environment.
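The resulting observable, denoted Δrecoil, can be written schematically (following the construction used in ALICE’s earlier Pb–Pb hadron+jet analysis; the notation here is illustrative) as the difference of per-trigger recoil-jet yields for two exclusive trigger-hadron momentum classes, which removes the uncorrelated background without event-by-event subtraction:

\[
\Delta_{\mathrm{recoil}}(p_{\mathrm{T,jet}}) \;=\; \left.\frac{1}{N_{\mathrm{trig}}}\frac{\mathrm{d}N_{\mathrm{jet}}}{\mathrm{d}p_{\mathrm{T,jet}}}\right|_{\mathrm{TT_{sig}}} \;-\; c_{\mathrm{ref}}\,\left.\frac{1}{N_{\mathrm{trig}}}\frac{\mathrm{d}N_{\mathrm{jet}}}{\mathrm{d}p_{\mathrm{T,jet}}}\right|_{\mathrm{TT_{ref}}},
\]

where TT_sig and TT_ref denote higher and lower trigger-momentum intervals and c_ref is a scale factor close to unity.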
The upper panel of the figure shows distributions of this observable, Δrecoil, for p–Pb collisions with high and low EA. Jet quenching corresponds to the transport of energy out of the jet cone, which would suppress Δrecoil at high EA. The ratio of the high-EA to low-EA distributions is, however, consistent with unity at all jet energies, indicating negligible jet-quenching effects within the uncertainties.
These data provide a limit on the magnitude of medium-induced energy transport to large angles due to jet quenching: for events with high EA, medium-induced charged-energy transport out of the jet cone is less than 0.4 GeV/c (90% confidence level). This limit is a factor of 20 smaller than the magnitude of jet quenching measured with this observable in Pb–Pb collisions, and stands in contrast to some current inclusive measurements that report significant jet suppression in p–Pb collisions. The result challenges theoretical models that predicted strong jet quenching in p–Pb collisions. Comparison of these data with the surviving models promises new insight into QGP formation in small systems, and into the fundamental processes of equilibration in QCD.
The ATLAS collaboration has released a comprehensive set of results that illuminate the properties of the Higgs boson with improved precision, using its decay into two photons in LHC collisions recorded at a centre-of-mass energy of 13 TeV.
The Higgs-to-two-photons decay played a crucial role in the discovery of the Higgs boson in 2012, owing to the excellent mass resolution and well-modelled backgrounds in this channel. Since the discovery, the large 13 TeV dataset has allowed the properties of the Higgs boson to be probed more precisely.
One major result of the new study is the measurement of the signal strength μ, defined as the ratio of the number of observed to expected Higgs boson events. The signal strength is measured to be μ = 0.99 +0.15 −0.14, in good agreement with the Standard Model expectation. The precision is improved by a factor of two with respect to the previous measurements at energies of 7 and 8 TeV. The precision of the signal-strength measurements for individual Higgs boson production modes is also improved significantly, thanks to a better understanding of the ATLAS detector, the increased rate of Higgs production at 13 TeV and the extended use of machine-learning techniques to identify specific production processes.
Another key result of the present study is the measurement of nine simplified template cross sections (STXS), which are the cross sections of specific Higgs production channels measured in different kinematic regions. The STXS measurements are corrected for the impact of the Higgs-boson decay and incorporate the acceptance of the experiment, so that they can be combined across Higgs boson channels and experiments (see figure, left).
The properties of the Higgs boson are further investigated by measuring 20 differential and two double-differential cross sections. The measured quantities include the Higgs boson transverse momentum (figure, right) and rapidity, the number and properties of jets produced in association with the Higgs boson, and several angular variables that probe its spin and CP quantum numbers. Five of these distributions are used to search for new CP-even and CP-odd couplings between the Higgs boson and vector bosons or gluons. No significant deviations from the Standard Model predictions are observed.
Collectively, this new set of results at the highest LHC energies sheds light on the fundamental properties of the Higgs boson and extends our knowledge obtained from the first running period of the LHC.
Anomalies in decays of B mesons, in which a bottom quark changes flavour to become a charm quark, reported by the LHCb, Belle and BaBar collaborations, have triggered considerable excitement in the particle-physics community (see “Beauty quarks test lepton universality”). The combined results of these experiments suggest that the rates of the decays B → D τ ν and B → D* τ ν differ by more than four standard deviations from the Standard Model (SM) predictions.
Several phenomenological studies have suggested that these differences could be explained by the existence of hypothetical new particles called leptoquarks (LQs), which couple to both leptons and quarks. Such particles appear naturally in several scenarios of new physics, including models inspired by grand unified theories or Higgs-compositeness models. Leptoquarks that couple to the third generation of SM fermions (top and bottom quarks, and the tau lepton and its associated neutrino) are considered to be of particular interest to explain these flavour anomalies.
Leptoquarks coupling to fermions of the first and second generations of the SM have been the target of many searches by collider experiments at the successive energy frontiers (SPS, LEP, HERA, Tevatron). The most sensitive searches have been performed at the LHC, resulting in the exclusion of LQs with masses below 1.1 TeV. Searches for third-generation LQs were first performed at the Tevatron, and the baton has now been passed to the LHC.
The first investigation by the CMS collaboration used events recorded at an energy of 8 TeV during LHC Run 1, and targeted LQ pair production via the strong interaction, with each LQ decaying to a top quark and a tau lepton. The result of this search, reported by CMS in 2015, excluded third-generation LQs with masses below 0.685 TeV. These early results have now been extended using the 2016 dataset at 13 TeV, employing more sophisticated analysis methods. The new search investigates final states containing an electron or a muon, one or two hadronically decaying tau leptons, and additional jets. To achieve sensitivity to the largest possible range of LQ masses, the analysis uses several event categories in which deviations from the SM predictions are searched for. The SM backgrounds consist mainly of top-quark pair production and W+jets events, whose contributions are derived from the data rather than from simulation.
No significant indication of the existence of third-generation LQs has yet been found in any of the categories studied (see left-hand figure). The collaboration was therefore able to place exclusion limits on the product of the production cross section and branching fraction as small as 0.01 pb, which translate into lower limits on LQ masses extending above 1 TeV.
Combining this result with a search for the pair production of supersymmetric bottom squarks, reinterpreted as a search for LQs decaying to a bottom quark and a tau neutrino, yields limits that probe the TeV mass range over all possible LQ branching ratios (see figure, right). Another recent search targets different LQs that decay into a bottom quark and a tau lepton. Using a smaller dataset at 13 TeV, this search excludes masses below 0.85 TeV for a branching fraction of unity.
This is the first time that searches at the LHC have achieved sufficient sensitivity to explore the mass range favoured by phenomenological analyses of LQs and the current flavour anomalies. No hints of these states have been found, but analyses are under way using larger datasets and including additional signatures.
Three decades since astronomers first detected planets outside our solar system, exoplanets are now being discovered at a rate of hundreds per year. Although it is reasonable to assume that galaxies other than our own contain planets, no direct detections of such objects have been made, owing to their small size and their large distances from Earth.
Now, however, radiation emitted around a distant black hole has revealed the existence of extragalactic planets in a galaxy 3.8 billion light years away, located between the black hole and us. The planets, which cannot be detected directly with any existing telescope, reveal themselves through the small gravitational distortions they imprint on X-rays emanating from the more distant black hole.
The discovery was made by Xinyu Dai and Eduardo Guerras from the University of Oklahoma in the US using data from the Chandra X-ray Observatory. The distant black hole in question, which forms the supermassive centre of the quasar RX J1131-1231, is surrounded by an accretion disk that heats up as it orbits and emits radiation at X-ray wavelengths. Thanks to a fortunate cosmic alignment, this radiation is amplified by gravitational lensing and can therefore be studied accurately. The lensing galaxy positioned between Earth and the quasar causes light from RX J1131-1231 to bend around it, appearing to us not as a point source but as a ring with four bright spots (see figure). The spots are the result of radiation coming from the same location in the quasar that initially followed different paths but ended up being directed towards Earth.
Dai and Guerras focused on a strong iron emission line, a spectral feature that reveals details of the accretion disk, and found that this line is not just shifted in energy but that the amount of the shift varies with time. Although a shift in the frequency of such a line is common, for example due to the relative velocity between source and observer, its position is generally very stable with time when a specific object is studied. Based on the 38 occasions on which RX J1131-1231 had been observed by the Chandra satellite during the past decade, the Oklahoma duo found that the line energy varied significantly between observations in all four bright spots of the ring.
This feature can be explained by microlensing. The intermediate lensing galaxy is not a uniform mass but consists of many small point masses, mainly stars and planets. As these relatively small objects within the lensing galaxy move, the light from the quasar passing through it is deflected in slightly different ways, causing different parts of the accretion disk to be magnified by different amounts over time. Because different parts of the disk appear to emit at different energies, the measured variations in the energy of the emission line can be explained by the movement of objects within the lensing galaxy. The question is: what objects could cause such changes over timescales of several years?
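To see why low-mass objects can produce such rapid variations, recall the standard microlensing scalings (quoted here as a rough guide, not taken from the paper itself). The angular Einstein radius of a point lens of mass M grows only as the square root of its mass,

\[
\theta_{\mathrm{E}} \;=\; \sqrt{\frac{4GM}{c^{2}}\,\frac{D_{\mathrm{LS}}}{D_{\mathrm{L}}D_{\mathrm{S}}}},
\]

where the D factors are the lens, source and lens–source distances. A Jupiter-mass body (about 10⁻³ solar masses) therefore has an Einstein radius roughly 30 times smaller than that of a star, and the characteristic timescale of the magnification it produces, set by the time taken to cross that radius, is correspondingly shorter.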
Stars, being numerous and massive, are one good candidate explanation, but Dai and Guerras calculated that the chance of a star causing such short-term variations is very small. A better candidate, according to fits to analytical models, is unbound planets, which do not orbit a star. The Chandra data were best described by a model in which, for each star, there are more than 2000 unbound planets with masses ranging from that of the Moon to that of Jupiter. Although the exact population of such planets is not well known even for our own galaxy, their number is well within the existing constraints. These observations thus form the best evidence yet for the existence of extragalactic planets and, by also providing the number of such planets in that galaxy, teach us something about the number of unbound planets we can expect in our own.
Of all the puzzling features of the Standard Model of particle physics (SM), one of the most vexing is the arrangement of the elementary particles into families or generations. Each pair of fermions comes in three, and apparently only three, copies: the electron, muon and tau leptons and their associated neutrinos, and three pairs of quarks. The only known difference between the generations is the different strengths of their interactions with the Higgs field, known as the Yukawa couplings. This results in different masses for each particle, giving a wide range of experimental signatures.
In the case of the charged leptons (electrons, muons and taus), this pattern also results in one simple post-diction, known as lepton universality (LU): other than effects related to their different masses, all the SM interactions treat the three charged leptons identically. During the past couple of decades, LU has been tested to sub-percent precision in interactions of photons and weak bosons, and in transitions between light quarks. These measurements were made, for example, at the Large Electron–Positron (LEP) collider at CERN in decays of W and Z bosons, by the PIENU and NA62 fixed-target experiments in decays of pions and kaons, and in J/ψ decays by the BES-III, CLEO and KEDR collaborations. However, LU has never been established to such a degree of precision in decays of heavy quarks.
Measurements of beauty-hadron decays at the LHCb experiment during Run 1, in addition to earlier results from the B-factories Belle at KEKB and BaBar at PEP-II, have hinted at potential deviations from LU. None is statistically significant on its own but, taken together, the results have led to speculation that non-SM forces or phenomena that treat leptons differently depending on their flavour may be at play. If a deviation from LU were to be confirmed, it would be clear evidence for physics processes beyond the SM and perhaps a sign that we are finally moving towards an understanding of the structure of the fermions.
Two classes
The results so far concern two classes of transitions in b-quark hadron decays, exemplified in figure 1. Measurements of highly suppressed flavour-changing neutral-current (FCNC) decays, b → sℓ+ℓ−, hint at a difference involving muons and electrons, while measurements of the more frequent leading-order or tree-level decays, b → cℓ+νℓ, hint at a difference between muons and taus. These two classes of decays present very different challenges, both experimentally and theoretically. The latter, semi-leptonic, decays of b-quark hadrons proceed through tree-level diagrams in which a virtual W boson decays into a lepton–neutrino pair. Measurements of decays involving electrons and muons show no deviations with respect to the SM within the current level of precision. In contrast, measurements of decays involving τ leptons are only marginally in agreement with the SM expectation. The quantity that is experimentally measured is the ratio of branching fractions RD(*) = BF(B → D(*)τ+ντ)/BF(B → D(*)ℓ+νℓ), with ℓ = e or μ. This ratio is precisely predicted in the SM owing to the cancellation of the leading uncertainty that stems from the knowledge of the decay form-factors.
Interest in these decay modes was heightened in 2012 when the BaBar collaboration found values for RD and RD* above the SM prediction. This was followed in 2015 by results from the Belle collaboration that were also consistently high. Experimentally, such semi-tauonic beauty decays are extremely difficult to measure because taus are not reconstructed directly and at least two undetected neutrinos are present in the final state. To get around this, the BaBar and Belle experiments used both B mesons produced in Υ(4S) decays. By reconstructing the decay of one B meson in the event, the teams were able to infer the recoil of the other, “signal”, B decay. This tagging technique, based on the known momentum of the initial-state electron–positron pair and therefore that of the Υ(4S), allows the determination of the momentum of the signal B, the reconstruction of its decay under the assumption that only neutrinos escape detection, and the separation of signal and background.
The study of beauty-hadron decays to final states involving τ leptons was deemed not to be feasible at hadron colliders such as the LHC. This is a result of the unknown momentum of the colliding partons and the significantly more complex environment with respect to electron–positron B-factories in terms of particle densities, detector occupancy, trigger and detection efficiencies. However, due to the significant Lorentz boost and the excellent performance of the LHCb vertex locator, the decay vertices of the b-hadrons produced at the LHC are well separated from the proton–proton interaction point. This enables the collaboration to approximate the b-hadron momentum and its decay kinematics with sufficient resolution to preserve the discrimination between signal and background.
Exploiting the tau
The first measurement of RD* at a hadron collider was performed by LHCb researchers in 2015 using the decays of the τ lepton into a muon and two neutrinos. This measurement again came out higher than the SM prediction, thus strengthening the tension between theory and experiment raised by Belle and BaBar.
In 2017, LHCb reported another RD* measurement by exploiting the decay of the τ lepton into three charged pions and a neutrino. This measurement was considered to be even more difficult than the previous one due to the large backgrounds from B decays and the apparent lack of discriminating variables. Nevertheless, the presence of a τ decay vertex significantly detached from the b-hadron decay vertex allows the most abundant backgrounds to be suppressed. The residual background, due to b-hadrons decaying to a D* and another charm meson that subsequently gives three pions in a detached vertex topology, is reduced by exploiting the different resonant structure of the three-pion system. The resulting measurement of RD* is larger than, although compatible with, the SM prediction, and consistent with previous determinations.
The combined world average of the RD* and RD measurements, known to precisions of 5% and 10%, respectively, remains in tension with the SM prediction at the level of four standard deviations (figure 2). This provides solid motivation for further LU tests in semi-tauonic decays of B hadrons. In the coming years, the LHCb collaboration will therefore extend the RD* measurement to the datasets collected in Run 2 and continue to study semi-tauonic decays of other b-quark hadrons.
In early 2018 the first measurement of RJ/ψ was performed, probing LU in the Bc sector. While the result was higher than the SM expectation, the current uncertainty is large and the SM prediction is not yet firm; it nevertheless offers an interesting test for the future. An important extension of this already rich physics programme, already being explored by Belle, will consider observables other than branching fractions, such as polarisation and angular distributions of the final-state particles. This will provide crucial insight when interpreting the current anomalies in terms of new-physics models.
The plot thickens
The results described above concern tree-level semi-leptonic decays. In contrast, the other relevant class of transitions for testing LU, b → sℓ+ℓ−, are highly suppressed because there are no tree-level FCNCs in the SM. This increases the sensitivity to the possible existence of new physics. The presence of new particles contributing to these processes could lead to a sizeable increase or decrease in the rate of particular decays, or change the angular distribution of the final-state particles. Tests of LU in these decays involve measurements of the ratio of branching fractions between muon and electron decay modes RK(*) = BF(B → K(*)μ+μ−)/BF(B → K(*)e+e−).
These modes represent a considerable challenge because the highly energetic LHC environment causes electrons to emit a large amount of bremsstrahlung radiation as they traverse the material of the LHCb detector. This effect complicates the analysis procedure, for example making it more difficult to separate the signal and backgrounds where one or more particles have not been reconstructed. Fortunately, there are several control samples in the data that can be used to study electron reconstruction effects, such as the resonant decays B → K(*)(J/ψ→ e+e−), and ultimately the precision is dominated by the statistical uncertainty of the decays involving electrons. Despite this, the LHCb measurements dominate the world precision.
Three measurements of RK(*) have been performed by the LHCb experiment with the Run 1 data: two in the B0→ K*0ℓ+ℓ− decay mode (RK*) and one in the B+→ K+ℓ+ℓ− decay mode (RK). The results are more precise than those from previous experiments, and all tend to sit below the SM predictions (figure 3). The BaBar and Belle experiments have also measured these LU ratios and found them to be consistent with the SM, albeit with larger uncertainties.
Assuming that these deviations arise from new physics rather than being statistical fluctuations, one can ask: what is driving the RK and RK* anomalies? Is the electron decay rate being enhanced or the muon rate suppressed, or both? One can seek an answer by looking at the differential branching fractions of the decays B+→ K+μ+μ−, B0→ K*0μ+μ− and Bs0→ φμ+μ−. Although with small statistical significance, all these branching fractions consistently sit below the SM predictions, indicating that something could be destructively interfering with the muonic decay amplitude. If a new particle were really contributing to the B decay amplitude, one would naturally expect it to also influence the angular distribution of the decay products. Intriguingly, the angular distribution of B0→ K*0μ+μ− decays shows discrepancies that can be interpreted as being compatible with the expectation based on the central values of RK and RK*.
Can we conclude that this is due to new physics? Unfortunately not. Observables such as branching fractions and angular distributions are affected by non-perturbative QCD effects. In principle these can be controlled, but there is an open question about whether the interference of fully hadronic decays such as B0→ K*0J/ψ could mimic some of the discrepancies seen. This contribution is very hard to calculate and will most likely need to be controlled directly with data.
All the results probing LU at LHCb so far are based on LHC Run 1 data recorded at centre-of-mass energies of 7 and 8 TeV. Measurements of the RK and RK* ratios can be improved significantly in the coming years with the analysis of the full Run 2 data at an energy of 13 TeV. LHCb will also broaden its search for LU violation to other types of FCNC decays, such as Bs→ φμ+μ−. Another interesting avenue, recently taken up by Belle, is to compare the angular distributions of the decays B0→ K*0μ+μ− and B0→ K*0e+e−. If LU were indeed violated, one would expect to see differences between muons and electrons in the angular distributions as well as in the decay rates.
Potential explanations
It is possible that the anomalies seen in tree-level and FCNC decays are related. The tree-level decays are sensitive to new physics at the TeV scale, whereas the FCNC decays are sensitive to scales of the order of 10 TeV on account of the SM suppression of loop-level decays. If one would like to explain both anomalies with a single model, the new physics must also be suppressed in its contribution to b → sℓ+ℓ− decays relative to b → cτ+ντ decays. This can be achieved either by forbidding FCNC processes at tree level, as in the SM, or by having a hierarchical flavour structure in which the coupling to third-generation leptons is enhanced with respect to muons. Amongst several speculations, the most promising model in this regard introduces the well-known concept of leptoquarks, particles that carry both lepton and quark quantum numbers (figure 4). The mass scale for such a leptoquark could be around 1 TeV, which is clearly very interesting for direct searches at the LHC.
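The different scales quoted here follow from a simple dimensional argument (schematic, and not taken from the measurements themselves). A generic new-physics contact interaction contributes an amplitude of order 1/Λ², which must compete with the SM amplitude it perturbs:

\[
b \to c\,\tau^{+}\nu_{\tau}: \quad \mathcal{A}_{\mathrm{SM}} \sim \frac{g^{2}\,V_{cb}}{m_{W}^{2}}, \qquad
b \to s\,\ell^{+}\ell^{-}: \quad \mathcal{A}_{\mathrm{SM}} \sim \frac{g^{2}}{16\pi^{2}}\,\frac{g^{2}\,V_{tb}V_{ts}^{*}}{m_{W}^{2}}.
\]

Because the FCNC amplitude carries an extra loop factor and small CKM elements, a new-physics contribution of the same relative size can originate at a much higher scale: a deviation of a few tens of per cent points to Λ of order a TeV for the tree-level transition, but of order 10 TeV for the loop-suppressed one.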
The theoretical options widen if one seeks to explain only one set of anomalies. For example, the loop-level anomalies can be explained with a Z′ boson of a few TeV in mass, although the allowed parameter space for such a model is constrained by Bs matter–antimatter oscillations. Overall, many models have been proposed that can explain one or both of these anomalies, and differentiating between them would become an exciting challenge if the anomalies were to be confirmed.
In any case, the amount of data analysed for the measurements described here corresponds to just one-third of what will be available by the end of 2018 at LHCb. Meanwhile, following a major overhaul of the KEK accelerator, the Belle-II experiment is about to start operations in Japan and is expected to collect data until 2025 (CERN Courier September 2016 p32). The two experiments are designed for the study of heavy-flavour physics, and their complementary characteristics will allow researchers to perform ultra-precise measurements of decays of b-quark hadrons. Hence, the prospects for continuing to test lepton universality in the next decade and beyond are excellent.
The demanding and creative environment of fundamental science is a fertile breeding ground for new technologies, especially unexpected ones. Many significant technological advances, from X-rays to nuclear magnetic resonance and the Web, were not themselves a direct objective of the underlying research, and particle accelerators exemplify this transfer from the fundamental to the practical. Beyond isotope separation, X-ray radiotherapy and, more recently, hadron therapy, there are now many categories of accelerator dedicated to diverse user communities across the sciences, academia and industry. These include synchrotron light sources, X-ray free-electron lasers (XFELs) and neutron spallation sources, and they enable research that often has direct societal and economic implications.
During the past decade or so, high-gradient linear accelerator technology developed for fundamental exploration has matured to the point where it is being transferred to applications beyond high-energy physics. Specifically, the unique requirements for the Compact Linear Collider (CLIC) project at CERN have led to a new high-gradient “X-band” accelerator technology that is attracting the interest of light-source and medical communities, and which would have been difficult for those communities to advance themselves due to their diverse nature.
Set to operate until the mid-2030s, the Large Hadron Collider (LHC) collides protons at an energy of 13 TeV. One possible path forward for particle physics in the post-LHC, “beyond the Standard Model”, era is a high-energy linear electron–positron collider. CLIC envisions an initial facility with a centre-of-mass energy of 380 GeV, focused on precision measurements of the Higgs boson and the top quark, which are promising targets in the search for deviations from the Standard Model (CERN Courier November 2016 p20). The machine could then, guided by results from the LHC and the initial-stage linear collider, be lengthened to reach energies up to 3 TeV for detailed studies of this high-energy regime. CLIC is overseen by the Linear Collider Collaboration along with the International Linear Collider (ILC), a lower-energy electron–positron machine envisaged to operate initially at 250 GeV (CERN Courier January/February 2018 p7).
The accelerator technology required by CLIC has been under development for around 30 years, and the project’s current goals are to provide a robust and detailed design for the update of the European Strategy for Particle Physics, with a technical design report by 2026 if resources permit. One of the main challenges in making CLIC’s 380 GeV initial-energy stage cost-effective, while guaranteeing its reach to 3 TeV, is generating very high accelerating gradients. The gradient needed for the high-energy stage of CLIC is 100 MV/m, which equates to 30 km of active acceleration. For this reason, the CLIC project has made a major investment in developing high-gradient radio-frequency (RF) technology that is feasible, reliable and cheap.
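The arithmetic behind that 30 km figure is straightforward (a back-of-the-envelope check that ignores the fill factor of the real machine): a 3 TeV collider needs two main linacs, each taking its beam to 1.5 TeV, so

\[
L_{\mathrm{active}} \;\approx\; 2 \times \frac{1.5\ \mathrm{TeV}}{100\ \mathrm{MV/m}} \;=\; 2 \times 15\ \mathrm{km} \;=\; 30\ \mathrm{km}.
\]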
Evading obstacles
Maximising the accelerating gradient leads to a shorter linac and thus a less expensive facility. But there are two main limiting factors: the increasing need for peak RF power and the limited ability of accelerating-structure surfaces to withstand increasingly strong electromagnetic fields. Circumventing these obstacles has been the focus of CLIC activities for several years.
One way to mitigate the increasing demand for peak power is to use higher frequency accelerating structures (figure 1), since the power needed for a fixed beam energy goes up linearly with gradient but goes down approximately with the inverse square root of the RF frequency. The latest XFELs, SACLA in Japan and SwissFEL in Switzerland, operate at “C-band” frequencies of 5.7 GHz, which enables a gradient of around 30 MV/m and a peak power requirement of around 12 MW/m in the case of SwissFEL. This increase in frequency required a significant technological investment, but CLIC’s demand for 3 TeV energies and high beam current requires a peak power of 200 MW per metre! This challenge has been under study since the late 1980s, with CLIC first focusing on 30 GHz structures and the Next Linear Collider/Joint Linear Collider community developing 11.4 GHz “X-band” technology. The twists and turns of these projects are many, but the NLC/JLC project ceased in 2005 and CLIC shifted to X-band technology in 2007. CLIC also generates high peak power using a two-beam scheme in which RF power is produced locally by transferring energy from a low-energy, high-current beam to a high-energy, low-current beam. In contrast to the ILC, CLIC adopts normal-conducting RF technology to go beyond the approximately 50 MV/m theoretical limit of existing superconducting cavity geometries.
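The scaling quoted above can be made slightly more explicit with a commonly used rule of thumb for normal-conducting travelling-wave linacs (a generic scaling, not a CLIC-specific formula): the peak RF power needed per unit length grows as the square of the gradient G and falls roughly as the square root of the frequency f, while the active length for a fixed beam energy E shrinks as 1/G,

\[
\frac{P}{L} \;\propto\; \frac{G^{2}}{\sqrt{f}}, \qquad L = \frac{E}{G} \quad\Longrightarrow\quad P_{\mathrm{total}} \;\propto\; \frac{E\,G}{\sqrt{f}}.
\]

Pushing the gradient up at fixed frequency therefore drives the total peak-power demand up linearly, which is why CLIC and the newest light sources have moved from S-band towards C-band and X-band frequencies.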
The second main challenge when generating high gradients is more fundamental than the practical peak-power requirements. A number of phenomena come into play when the metal surfaces of accelerating structures are subjected to very high electromagnetic fields, the most prominent being vacuum arcing, or breakdown, which induces kicks to the beam that result in a loss of luminosity. A CLIC accelerating structure operating at 100 MV/m will have surface electric fields in excess of 200 MV/m, sometimes leading to the formation of a highly conductive plasma directly above the surface of the metal. Significant progress has been made in understanding how to maximise gradient despite this effect, and a key insight has been the identification of the role of local power flow. Pulsed surface heating is another troubling high-field phenomenon faced by CLIC, in which ohmic losses associated with surface currents result in fatigue damage to the outer cavity wall and reduced performance. Understanding these phenomena has been essential in guiding the development of an effective design and technology methodology for achieving gradients in excess of 100 MV/m.
Test-stand physics
Critical to CLIC’s development of high-gradient X-band technology has been an investment in four test stands, which allowed investigations of the complex, multi-physics effects that affect high-power behaviour in operational structures (figure 2). The test stands provided the RF klystron power, dedicated instrumentation and diagnostics to operate, measure and optimise prototype RF components. In addition, to investigate beam-related effects, one of the stands was fed by a beam of electrons from the former “CTF3” facility. This has since been replaced by the CLEAR test facility, at which experiments will come on line again next year (CERN Courier November 2017 p8).
While the initial motivation for the CLIC test stands was to test prototype components, high-gradient accelerating structures and high-power waveguides, the stands are themselves prototype RF units for linacs – the basic repeatable unit that contains all the equipment necessary to accelerate the beam. A full linac, of course, needs many other subsystems such as focusing magnets and beam monitors, but the existence of four operating units that can be easily visited at CERN has made high-gradient and X-band technology serious options for a number of linac applications in the broader accelerator community. An X-band test stand at KEK has also been operational for many years and the group there has built and tested many CLIC prototype structures.
With CLIC’s primary objective being to provide practical technology for a particle-physics facility in the multi-TeV range, it is rather astonishing that an application requiring a mere 45 MeV beam finds itself benefiting from the same technology. This small-scale project, called Smart*Light, is developing a compact X-ray source for a wide range of applications including cultural heritage, metallurgy, geology and medicine, providing a practical local alternative to a beamline at a large synchrotron light source. Led by the University of Eindhoven in the Netherlands, Smart*Light produces monochromatic X-rays via inverse Compton scattering, in which X-rays are produced by “bouncing” a laser pulse off an electron beam. The project team aims to make the equipment small and inexpensive enough to be integrated into a museum or university setting, and is addressing this objective with a 50 MV/m-range linac powered by one of the two standard CLIC test-stand configurations (a 6 MW Toshiba klystron). Funding has been awarded to construct the first prototype system and, once operational, Smart*Light will pursue commercial production.
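The reason such a modest beam energy suffices is the large boost factor in inverse Compton scattering. As a rough, generic illustration (the numbers are not Smart*Light design values), a head-on collision between an electron of energy E_e and a laser photon of energy E_L backscatters photons up to an energy of roughly

\[
E_{X} \;\approx\; 4\gamma^{2}E_{L}, \qquad \gamma = \frac{E_{e}}{m_{e}c^{2}} \approx \frac{45\ \mathrm{MeV}}{0.511\ \mathrm{MeV}} \approx 90,
\]

so an infrared laser photon of about 1 eV is upshifted to a hard X-ray of a few tens of keV.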
Another Compton-source application is the TTX facility at Tsinghua University in China, which is based on a 45 MeV beam. The Tsinghua group plans to increase the energy of the X-rays by upgrading the energy of its electron linac, which must be done by increasing the accelerating gradient because the facility is housed in an existing radiation-shielded building. The energy increase will occur in two steps: the first will raise the accelerating gradient by upgrading parts of the existing S-band 3 GHz RF system, and the second will replace sections with an X-band system to increase the gradient up to 70 MV/m. The Tsinghua X-band power source will also implement a novel “corrector cavity” system to flatten the compressed power pulse, a scheme that is now also part of the 380 GeV CLIC baseline design. Tsinghua has successfully tested a standard CLIC structure to more than 100 MV/m at KEK, demonstrating that high-gradient technology can be transferred, and has taken delivery of a 50 MW X-band klystron for use in a test stand.
Perhaps the most significant X-band application is XFELs, which produce intense and short X-ray bursts by passing a very low-emittance electron beam through an undulator magnet. The electron linac represents a substantial fraction of the total facility cost and the number of XFELs is presently quite limited, with demand for facilities exceeding the available beam time. Operational facilities include LCLS at SLAC, FERMI at Trieste and SACLA at RIKEN, while the European XFEL in Germany, the PAL-XFEL in Korea and SwissFEL are being commissioned (CERN Courier July/August 2017 p18), and it is expected that further facilities will be built in the coming years.
XFEL applications
CLIC technology, both the high-frequency and high-gradient aspects, has the potential to significantly reduce the cost of such X-ray facilities, allowing them to be funded at the regional and possibly even university scale. In combination with other recent advances in injectors and undulators, the European Union project CompactLight has recently received a design study grant to examine the benefits of CLIC technology and to prepare a complete technical design report for a small-scale facility (CERN Courier December 2017 p8).
A similar type of electron linac, in the 0.5–1 GeV range, is being proposed by Frascati Laboratory in Italy for XFEL development, in addition to the study of advanced plasma-acceleration techniques. To fit the accelerator into a building on the Frascati campus, the group has decided to use a high-gradient X-band linac and has joined forces with CLIC to develop it. The cooperation includes Frascati staff visiting CERN to help run the high-gradient test facilities and the construction of their own test stand at Frascati, an important step towards establishing their capability to use CLIC technology.
In addition to providing a high-performance technology for acceleration, high-gradient X-band technology is the basis for two important devices that manipulate the beam in low-emittance and short-bunch electron linacs, as used in XFELs and advanced development linacs. The first is the energy-spread lineariser, which uses a harmonic of the accelerating frequency to correct the energy spread along the bunch and enable shorter bunches. A few years ago a collaboration between Trieste, PSI and CERN made a joint order for the first European X-band frequency (11.994 GHz) 50 MW klystrons from SLAC, and jointly designed and built the lineariser structures, which have significantly improved the performance of the Elettra light source in Trieste and become an essential element of SwissFEL.
Following the CLIC test-stand and lineariser developments, a new commercial X-band klystron has become available, this time at the lower power of 6 MW and supplied by Canon (formerly Toshiba). This new klystron is ideally suited to lineariser systems, and one such system has recently been constructed at the soft X-ray XFEL at SINAP in Shanghai, which has a long-standing collaboration with CLIC on high-gradient and X-band technology. Back in Europe, Daresbury Laboratory has decided to invest in a lineariser system to provide the exceptional control of the electron-bunch characteristics needed for its XFEL programme, which is being developed at its CLARA test facility. Daresbury has been working with CLIC to define the system, and is now procuring an RF power system based on the 6 MW Toshiba klystron and a pulse compressor. This will certainly be a major step in easing the adoption of X-band technology.
The second major high-gradient X-band beam manipulation application is the RF deflector, which is used at the end of an XFEL to measure the bunch characteristics as a function of position along the bunch. High-gradient X-band technology is well suited to this application and there is now widespread interest to implement such systems. Teams at FLASH2, FLASH-Forward and SINBAD at DESY, SwissFEL and CLIC are collaborating to define common hardware, including a variable polarisation deflector to allow a full 6D characterisation of the electron bunch. SINAP is also active in this domain. The facility is awaiting delivery of three 50 MW CPI klystrons to power the deflectors and will build a standard CLIC test structure for tests at CERN in addition to a prototype X-band XFEL structure in the context of CompactLight.
The rich exchange between different projects in the high-gradient community is typified by PSI and in particular the SwissFEL. Many essential features of the SwissFEL have a linear-collider heritage, such as the micron-precision diamond machining of the accelerating structures, and SwissFEL is now returning the favour. For example, a pair of CLIC X-band test accelerating structures are being tested at CERN to examine the high-gradient potential of PSI’s fabrication technology, showing excellent results: both structures can operate at more than 115 MV/m and demonstrate potential cost savings for CLIC. In addition, the SwissFEL structures have been successfully manufactured to micron precision in a large production series – a level of tolerance that has always been an important concern for CLIC. Now that the PSI fabrication technology is established, the laboratory is building high-gradient structures for other projects such as Elettra, which wishes to increase its X-ray energy and flux but has performance limitations with its 3 GHz linac.
Beyond light sources
High-gradient technology is now working its way beyond electron linacs, particularly in the treatment of cancer. The most common accelerator-based cancer treatment uses X-rays, but protons and heavy ions offer many potential advantages. One drawback of hadron therapy is the high cost of the accelerators, which are currently circular machines. A new generation of linacs offers the potential for smaller, lower-cost facilities with additional flexibility.
The TERA foundation has studied such linac-based solutions and a firm called ADAM is now commercialising a version with a view to building a compact hadron-therapy centre (CERN Courier January/February 2018 p25). To demonstrate the potential of high gradients in this domain, members of CLIC received support from the CERN knowledge transfer fund to adapt CLIC technology to accelerate protons in the relevant energy range, and the first of two structures is now under test. The predicted gradient was 50 MV/m, but the structure has exceeded 55 MV/m and behaves consistently with the almost 20 CLIC structures tested to date. We now know that it is possible to reach high accelerating gradients even for protons, and projects based on compact linacs can move forward with confidence.
Collaboration has driven the wider adoption of CLIC’s high-gradient technology. A key event took place in 2005 when CERN management gave CLIC a clear directive that, with LHC construction limiting available resources, the study must find outside collaborators. This was achieved thanks to a strong effort by CLIC researchers, also accompanied by a great deal of activity in electron linacs in the accelerator community.
We should not forget that the wider adoption of X-band and high-gradient technology is extremely important for CLIC itself. First, it enlarges the commercial base, driving costs down and reliability up, and making firms more likely to invest. Another benefit is the improved understanding of the technology and its operability by accelerator experts, with a broadened user base bringing new ideas. Harnessing the creative energy of a larger group has already yielded returns to the CLIC study, for instance addressing important industrialisation and cost-reduction issues.
The role of high-gradient and X-band technology is expanding steadily, with applications at a surprisingly wide range of scales. Although the technology originated in large linear-collider projects, its use is now increasingly dominated by a proliferation of small-scale applications. Few of these were envisaged when CLIC was formulated in the late 1980s – XFELs were in their infancy at the time. As the technology is applied further, its performance will rise even more, perhaps even feeding back into the construction of a higher-energy collider. The interplay of different communities can produce advances beyond what any of them could achieve alone, and it is an exciting time to be part of this field.
It would be impossible for anyone to conceive of carrying out a particle-physics experiment today without the use of computers and software. Since the 1960s, high-energy physicists have pioneered the use of computers for data acquisition, simulation and analysis. This hasn’t just accelerated progress in the field, but driven computing technology generally – from the development of the World Wide Web at CERN to the massive distributed resources of the Worldwide LHC Computing Grid (WLCG) that supports the LHC experiments. For many years these developments and the increasing complexity of data analysis rode a wave of hardware improvements that saw computers get faster every year. However, those blissful days of relying on Moore’s law are now well behind us (see “CPU scaling comes to the end of an era”), and this has major ramifications for our field.
The high-luminosity upgrade of the LHC (HL-LHC), due to enter operation in the mid-2020s, will push the frontiers of accelerator and detector technology, bringing enormous challenges to software and computing (CERN Courier October 2017 p5). The scale of the HL-LHC data challenge is staggering: the machine will collect almost 25 times more data than the LHC has produced up to now, and the total LHC dataset (which already stands at almost 1 exabyte) will grow many times larger. If the LHC’s ATLAS and CMS experiments project their current computing models to Run 4 of the LHC in 2026, the CPU and disk space required will jump by a factor of between 20 and 40 (figures 1 and 2).
Even with optimistic projections of technological improvements there would be a huge shortfall in computing resources. The WLCG hardware budget is already around 100 million Swiss francs per year and, given the changing nature of computing hardware and slowing technological gains, it is out of the question to simply throw more resources at the problem and hope things will work out. A more radical approach for improvements is needed. Fortunately, this comes at a time when other fields have started to tackle data-mining problems of a comparable scale to those in high-energy physics – today’s commercial data centres crunch data at prodigious rates and exceed the size of our biggest Tier-1 WLCG centres by a large margin. Our efforts in software and computing therefore naturally fit into and can benefit from the emerging field of data science.
A new way to approach the high-energy physics (HEP) computing problem began in 2014, when the HEP Software Foundation (HSF) was founded. Its aim was to bring the HEP software community together and find common solutions to the challenges ahead, beginning with a number of workshops organised by a dedicated startup team. In the summer of 2016 the fledgling HSF body was charged by WLCG leaders with producing a roadmap for HEP software and computing. With help from a planning grant from the US National Science Foundation, the HSF brought community and non-HEP experts together at a meeting in San Diego in January 2017 to gather ideas in a world much changed from the time when the first LHC software was created. The outcome of this process was summarised in a 90-page community white paper released in December last year.
The report doesn’t just look at the LHC but considers common problems across HEP, including neutrino and other “intensity-frontier” experiments, Belle II at KEK, and future linear and circular colliders. In addition to improving the performance of our software and optimising the computing infrastructure itself, the report also explores new approaches that would extend our physics reach as well as ways to improve the sustainability of our software to match the multi-decade lifespan of the experiments.
Almost every aspect of HEP software and computing is presented in the white paper, detailing the R&D programmes necessary to deliver the improvements the community needs. HSF members looked at all steps from event generation and data taking up to final analysis, each of which presents specific challenges and opportunities.
Souped-up simulation
Every experiment needs to be grounded in our current knowledge of physics, which means that generating simulated physics events is essential. For much of the current HEP experiment programme it is sufficient to generate events based on leading-order calculations – a relatively modest task in terms of computing requirements. However, already at Run 2 of the LHC there is an increasing demand for next-to-leading order, or even next-to-next-to-leading order, event generators to allow more precise comparisons between experiments and the Standard Model predictions (CERN Courier April 2017 p18). These calculations are particularly challenging both in terms of the software (e.g. handling difficult integrations) and the mathematical technicalities (e.g. minimising negative event weights), which greatly increase the computational burden. Some physics analyses based on Run-2 data are limited by theoretical uncertainties and, by Run 4 in the mid-2020s, this problem will be even more widespread. Investment in technical improvements of the computation is therefore vital, in addition to progress in our underlying theoretical understanding.
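The cost of negative weights mentioned above can be quantified with a standard statistical argument (a generic estimate, not tied to any particular generator): if a fraction f of the events in a sample of size N carries weight −1 and the rest weight +1, the sample has the statistical power of only

\[
N_{\mathrm{eff}} \;\approx\; N\,(1-2f)^{2}
\]

uniformly weighted events, so the number of events that must be generated, simulated and reconstructed to reach a given precision grows as 1/(1−2f)²; a negative-weight fraction of 25%, for example, already requires four times more events.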
Increasingly large and sophisticated detectors, and the search for rarer processes hidden amongst large backgrounds, means that particle physicists need ever-better detector simulation. The models describing the passage of particles through the detector need to be improved in many areas for high precision work at the LHC and for the neutrino programme. With simulation being such a huge consumer of resources for current experiments (often representing more than half of all computing done), it is a key area to adapt to new computing architectures.
Vectorisation, whereby processors can execute identical arithmetic instructions on multiple pieces of data, would force us to give up the simplicity of simulating each particle individually. How best to do this is one of the most important R&D topics identified by the white paper. Another is to find ways to reduce the long simulation times required by large and complex detectors, which exacerbates the problem of creating simulated data sets with sufficiently high statistics. This requires research into generic toolkits for faster simulation. In principle, mixing and digitising the detector hits at high pile-up is a problem that is particularly suited for parallel processing on new concurrent computing architectures – but only if the rate at which data is read can be managed.
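As a toy illustration of the trade-off (a schematic Python/NumPy sketch, not code from Geant4 or any experiment framework), consider advancing a large set of particles by one small time step: the batched version applies identical arithmetic to whole arrays at once, which is exactly the pattern that SIMD units and GPUs accelerate, at the price of no longer treating each particle as an independent object whose history can branch arbitrarily.

```python
import numpy as np

def step_one_by_one(positions, velocities, dt):
    """Scalar version: propagate each particle individually (simple,
    but hard for the hardware to vectorise)."""
    new_positions = []
    for r, v in zip(positions, velocities):
        new_positions.append(r + v * dt)
    return np.array(new_positions)

def step_batched(positions, velocities, dt):
    """Vectorised version: identical arithmetic applied to all
    particles at once through array operations."""
    return positions + velocities * dt

rng = np.random.default_rng(1)
pos = rng.normal(size=(100_000, 3))   # particle positions (x, y, z)
vel = rng.normal(size=(100_000, 3))   # particle velocities

# Both give the same result; the batched form is what maps efficiently
# onto SIMD instructions and accelerators.
assert np.allclose(step_one_by_one(pos, vel, 1e-3),
                   step_batched(pos, vel, 1e-3))
```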
This shift to newer architectures is equally important for our software triggers and event-reconstruction code. Investing more effort in software triggers, such as those already being developed by the ALICE and LHCb experiments for LHC Run 3, will help control the data volumes and enable analyses to be undertaken directly from initial reconstruction by avoiding an independent reprocessing step. For ATLAS and CMS, the increased pile-up at high luminosity makes charged-particle tracking within a reasonable computing budget a critical challenge (figure 3). Here, as well as the considerable effort required to make our current code ready for concurrent use, research is needed into the use of new, more “parallelisable” algorithms, which maintain physics accuracy. Only these would allow us to take advantage of the parallel capabilities of modern processors, including GPUs (just like the gaming industry has done, although without the need there to treat the underlying physics with such care). The use of updated detector technology such as track triggers and timing detectors will require software developments to exploit this additional detector information.
For final data analysis, a key metric for physicists is “time to insight”, i.e. how quickly new ideas can be tested against data. Maintaining that agility will be a huge challenge given the number of events physicists have to process and the need to keep the overall data volume under control. Currently a number of data-reduction steps are used, aiming at a final dataset that can fit on a laptop but bloating the storage requirements by creating many intermediate data products. In the future, access to dedicated analysis facilities that are designed for a fast turnaround without tedious data reduction cycles may serve the community’s needs better.
This draws on trends in the data-analytics industry, where a number of products, such as Apache Spark, already offer such a data-analysis model. However, HEP data is usually more complex and highly structured, and integration between the ROOT analysis framework and new systems will require significant work. This may also lend itself better to approaches where analysts concentrate on describing what they want to achieve and a back-end engine takes care of optimising the task for the underlying hardware resource. These approaches also integrate better with data-preservation requirements, which are increasingly important for our field. Over and above preserving the underlying bits of data, a fundamental challenge is to preserve knowledge about how to use this data. Preserved knowledge can help new analysts to start their work more quickly, so there would be quite tangible immediate benefits to this approach.
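One concrete example of this declarative style that already exists in the HEP ecosystem is ROOT’s RDataFrame interface. The sketch below is illustrative only: it assumes a hypothetical file events.root containing a tree named Events with NanoAOD-style branches nMuon and Muon_pt. The analyst declares selections and histograms; the back end decides how to schedule a single event loop, including implicit multi-threading.

```python
import ROOT

# Let the back end use all available cores for the event loop.
ROOT.EnableImplicitMT()

# Declare the data source: tree name and input file (assumed to exist).
df = ROOT.RDataFrame("Events", "events.root")

# Declare what we want; nothing is executed yet (lazy evaluation).
dimuons = df.Filter("nMuon == 2", "exactly two muons") \
            .Define("pt_lead", "Muon_pt[0]")  # leading muon, assuming pt-ordered branches

hist = dimuons.Histo1D(
    ("pt_lead", "Leading muon p_{T};p_{T} [GeV];Events", 100, 0.0, 200.0),
    "pt_lead")

# Accessing the result triggers one optimised pass over the data.
hist.Draw()
```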
A very promising general technique for adapting our current models to new hardware is machine learning, for which there exist many excellent toolkits. Machine learning has the potential to further improve the physics reach of data analysis and may also speed up and improve the accuracy of physics simulation, triggering and reconstruction. Applying machine learning is very much in vogue, and many examples of successful applications of these data-science techniques exist, but real insight is required to know where best to invest for the HEP community. For example, a deeper understanding of the impact of such black boxes and how they relate to underlying physics, with good control of systematics, is needed. It is expected that such techniques will be successful in a number of areas, but there remains much research to be done.
New challenges
Supporting the computational training phase necessary for machine learning brings a new challenge to our field. With millions of free parameters being optimised across large GPU clusters, this task is quite unlike those currently undertaken on the WLCG grid infrastructure and represents another dimension to the HL-LHC data problem. There is a need to restructure resources at facilities and to incorporate commercial and scientific clouds into the pool available for HEP computing. In some regions high-performance computing facilities will also play a major role, but these facilities are usually not suitable for current HEP workflows and will need more consistent interfaces as well as the evolution of computing systems and the software itself. Optimising storage resources into “data lakes”, where a small number of sites act as data silos that stream data to compute resources, could be more effective than our current approaches. This will require enhanced delivery of data over the network to which our computing and software systems will need to adapt. A new generation of managed networks, where dedicated connections between sites can be controlled dynamically, will play a major role.
The many challenges faced by the HEP software and computing community over the coming decade are wide ranging and hard. They require new investment in critical areas and a commitment to solving problems in common between us, and demand that a new generation of physicists is trained with updated computing skills. We cannot afford a “business as usual” approach to solving these problems, nor will hardware improvements come to our rescue, so software upgrades need urgent attention.
The recently completed roadmap for software and computing R&D is unique in addressing the problems faced by our whole community in a way that has not been done before. Progress in other fields gives us a chance to learn from, and collaborate with, other scientific communities and even commercial partners. The strengthening of links across experiments and regions that the HEP Software Foundation has helped to produce puts us in a good position to move forward with a common R&D programme, which will be essential for the continued success of high-energy physics.
It is just over five years since the discovery of the Higgs boson was announced, to great fanfare in the world’s media, as a crowning success of CERN’s Large Hadron Collider (LHC). The excitement of those days now seems a distant memory, replaced by a growing sense of disappointment at the lack of any major discovery since.
While there are valid reasons to feel less than delighted by the null results of searches for physics beyond the Standard Model (SM), this does not justify a mood of despondency. A particular concern is that, in today’s hyper-connected world, apparently harmless academic discussions risk evolving into a negative outlook for the field in broader society. For example, a recent news article in Nature led on the LHC’s “failure to detect new particles beyond the Higgs”, while The Economist reported that “Fundamental physics is frustrating physicists”. Equally worryingly, the situation in particle physics is sometimes negatively contrasted with that for gravitational waves: while the latter is, quite rightly, heralded as the start of a new era of exploration, the discovery of the Higgs is often described as the end of a long effort to complete the SM.
Let’s look at things more positively. The Higgs boson is a totally new type of fundamental particle that allows unprecedented tests of electroweak symmetry breaking. It thus provides us with a novel microscope with which to probe the universe at the smallest scales, in analogy with the prospects for new gravitational-wave telescopes that will study the largest scales. There is a clear need to measure its couplings to other particles – especially its coupling with itself – and to explore potential connections between the Higgs and hidden or dark sectors. These arguments alone provide ample motivation for the next generation of colliders including and beyond the high-luminosity LHC upgrade.
So far the Higgs boson indeed looks SM-like, but some perspective is necessary. It took more than 40 years from the discovery of the neutrino to the realisation that it is not massless and therefore not SM-like; addressing this mystery is now a key component of the global particle-physics programme. In my own main research area, the beauty quark – which reached its 40th birthday last year – is another example of a long-established particle that is now providing exciting hints of new phenomena (see “Beauty quarks test lepton universality”). One thrilling scenario, if these deviations from the SM are confirmed, is that the new physics landscape can be explored through both the b and Higgs microscopes. Let’s call it “multi-messenger particle physics”.
How the results of our research are communicated to the public has never been more important. We must be honest about the lack of new physics that we all hoped would be found in early LHC data, yet to characterise this as a “failure” is absurd. If anything, the LHC has been more successful than expected, leaving its experiments struggling to keep up with the astonishing rates of delivered data. Particle physics is, after all, about exploring the unknown; the analysis of LHC data has led to thousands of publications and a wealth of new knowledge, and there is every possibility that there are big discoveries waiting to be made with further data and more innovative analyses. We also should not overlook the returns to society that the LHC has brought, from technology developments with associated spin-offs to the training of thousands of highly skilled young researchers.
The level of expectation that has been heaped on the LHC seems unprecedented in the history of physics. Has any other facility been considered to have produced disappointing results because only one Nobel-prize winning discovery was made in its first few years of operation? Perhaps this reflects that the LHC is simply the right machine at the right time, but that time is not over: our new microscope is set to run for the next two decades and bring physics at the TeV scale into clear focus. The more we talk about that, the better our long-term chances of success.
Aharon (Rony) Casher was born in Haifa, Israel, and graduated from the Technion, where he carried out his thesis work on condensed bosonic systems under Micha Revzen. He then went to Yeshiva University in New York, where he wrote a well-known paper with Joel Lebowitz on heat flow in random harmonic chains. This is also where his longstanding collaborations with Yakir Aharonov and Lenny Susskind began.
The Aharonov–Casher effect, which is dual to the Aharonov–Bohm effect, is textbook material and also led to a beautiful result on the number of zero modes in 2D magnetic fields. With Lev Vaidman, Casher and Aharonov developed the mathematics underpinning weak measurements; and in separate work with Shimon Yankielowicz they introduced the mechanism of magnetic vacuum condensation for confinement in QCD. The early suggestion by Aharon, Susskind and John Kogut that a vacuum-polarisation mechanism can account for quark confinement was extremely influential. Additional important joint papers on strong interactions, partons and spontaneous chiral symmetry breaking appeared in the early 1970s. The collaboration with Susskind also led to Aharon’s familiarity with string theories and to an early paper with Aharonov on a dual string model for spinning particles.
In the high-energy physics community, Aharon is best known for his work on spontaneous chiral symmetry breaking in QCD. In a single-author paper he provided a beautiful insight into this subject, followed by a famous paper with Tom Banks that related such breaking to the enhanced density of the low eigenvalues of the Dirac operator. These topics dominated Aharon’s interest throughout the 1970s and early 1980s. His deep knowledge of topological field theory and understanding of non-perturbative effects enabled him to make key and long-lasting contributions.
Aharon often visited Brussels, where he worked with François Englert and others on supergravity, quantum gravity and studies of the early universe. Englert, in turn, became a frequent visitor at Tel Aviv University, and non-perturbative effects in quantum gravity and possible connections to the physics of black holes became a shared passion of both. Although Aharon gave a series of influential lectures on string theory at Tel Aviv shortly after the 1984 “string revolution”, and published with Englert, Nicolai and Taormina a paper showing that all superstring theories are contained in the bosonic string, he was critical of strings as the ultimate theory of nature. He was an independent thinker, uncompromisingly honest when analysing novel ideas in theoretical physics.
Aharon stayed at Tel Aviv for almost 50 years, his knowledge and remarkable talents enabling him to teach any subject in theoretical physics from memory alone. He was accessible to students and attracted many who subsequently had independent academic careers, including Neuberger, Nissan Itzhaki and Yigal Shamir. Aharon was an avid reader, interested in literature, history, science fiction, sports and politics. One could have an interesting conversation with him on any topic.
Aharon was highly negligent as a self-promoter and was in science for the sheer pleasure of doing it. He rarely gave talks about his work, preferring to think and calculate at his desk, and his collaborators and many others had the deepest respect for him. His ability to keep challenging us and to relentlessly pursue the subtleties that could harbour fatal flaws helped maintain our own scientific integrity. Aharon will be deeply missed.
This new textbook of nuclear physics aims to review the foundations of this branch of physics as well as to present more modern topics, including the important developments of the past 20 years. Even though well-established textbooks exist in this field, the authors offer a more comprehensive treatment for students who want to go deeper, both in understanding the basic principles of nuclear physics and in learning about the problems that researchers are currently addressing. Indeed, a renewed interest has lately revitalised this field, following the availability of new experimental facilities and increased computational resources.
Another objective of this book, which is based on the lectures and teaching experience of the authors, is to clarify, at each step, the relationship between theoretical equations and experimental observables, as well as to highlight useful methods and algorithms from computational physics.
The last few chapters cover topics not normally included in standard courses of nuclear physics, and reflect the scientific interests – and occasionally the point of view – of the authors. Many problems are also provided at the end of each chapter, and some of them are fully solved.