Jean-Pierre Blaser, a former director of the Swiss Institute of Nuclear Research (SIN), passed away in his home in Switzerland on 29 August 2019 at the age of 96.
In 1948 Blaser finished his physics studies at ETH Zurich, going on to participate in the development of the cyclotron that Paul Scherrer had built at ETH during the Second World War. From 1952 to 1955 he carried out experiments with mesons at the synchrocyclotron in Pittsburgh, before serving as director of the observatory in Neuchâtel from 1955 to 1959. In 1959 he was appointed as Scherrer’s successor and inherited from him the planning group for a new cyclotron. Originally, Scherrer wanted to copy the 88-inch cyclotron at Berkeley and use it for research in nuclear physics, but Blaser wanted something more ambitious. After receiving advice from accelerator experts at CERN, among them Pierre Lapostolle, he proposed a 500 MeV cyclotron for the production of mesons.
The key to such a meson factory was to extract the high-intensity proton beam with very low losses. The leader of Blaser’s cyclotron group, Hans Willax, realised that a conventional cyclotron would have high losses at extraction and in 1962 had the brilliant idea to break up the cyclotron magnets into separate sectors to leave space for high-voltage cavities. Blaser immediately supported the idea and pushed to get this expensive project approved by the Swiss government. Against all odds and against some strong opposition, he finally succeeded. In 1968 he founded SIN in Villigen and was its director for the next 20 years.
In a last-minute decision, based on results of CERN experiments which showed that the production of pions would strongly increase with energy, the energy of the SIN cyclotron was increased from 500 to 590 MeV. Even top accelerator specialists like the late Henry Blosser had doubts that the SIN crew would reach the ambitious design goal of a 100 μA beam current. But Blaser and Willax were convinced, anticipating that the original 72 MeV injector cyclotron would be the limiting factor and eventually would have to be replaced. In January 1974 the first protons were extracted from the ring, and at the end of 1976 the design current of 100 μA was reached. More highlights followed, right up to 2009, when 2.4 mA of protons at 590 MeV set a new world record of 1.4 MW in average beam power – a record that still holds today. These results gave Blaser great satisfaction, even after his retirement in 1990. Shortly before, in 1988, he had initiated the creation of the new Paul Scherrer Institute (PSI), a merger of SIN and the neighbouring reactor institute EIR.
From the start of the accelerator project, Blaser saw the potential of particle beams to irradiate tumours. The first step, for which a superconducting solenoid was constructed, was to use pions for the treatment of deep-seated tumours. In 1984 the irradiation of eye tumours started, using protons, and to date more than 7000 patients have been treated. Later, a new superconducting cyclotron was acquired and two more gantries are now in operation. Blaser strongly supported all activities in the medical application of cyclotrons, and gave his advice to a new cyclotron project in South Africa, for which he was elected a foreign associate of the Royal Society of South Africa.
Jean-Pierre Blaser was blessed with great intuition based on a thorough knowledge of the basic laws of physics. He was open to new and unconventional ideas and he motivated young scientists with his trust in their abilities. In his free time he enjoyed exploring the landscapes of Switzerland, either on foot or as the pilot of a light aeroplane. But his top priority was his family. He enjoyed enormously the company of his wife Frauke, their two daughters Claudine and Nicole, and their four grandchildren. In Jean-Pierre, his family and the accelerator community lose a great personality.
Badanaval Venkatasubba Sreekantan, a pioneering cosmic-ray physicist and a member of Homi Bhabha’s team of scientists, who played an important role in the development of post-colonial Indian science, died at his home in Bangalore on 27 October 2019.
Sreekantan was born on 30 June 1925 near Mysore in South India. After receiving his master’s degree in physics in 1947 with a specialisation in wireless technology from Mysore University, he joined the Indian Institute of Science at Bengaluru as a research scholar, where he heard about Bhabha and his newly formed Tata Institute of Fundamental Research (TIFR) in Mumbai. Attracted by Bhabha’s charisma, he joined TIFR in July 1948 and began a long, illustrious scientific career of almost 44 years.
In 1951 Bhabha sent Sreekantan down the deep Champion Reef Gold Mine at Kolar Gold Fields (KGF) near Bengaluru to measure the flux of cosmic-ray muons at varying depths. This pioneering initiative not only earned Sreekantan a PhD but also paved the way for setting up a deep underground laboratory. A series of follow-up experiments at KGF, carried out during the early 1960s, extended his previous measurements of muon intensity to the deepest levels available; at the deepest, 2700 m, no muons were recorded after two months of exposure. Sreekantan and his collaborators realised that such a deep underground site with minimal cosmic-ray muon background would be an ideal place to detect atmospheric neutrinos. A series of seven neutrino telescopes was quickly set up at a depth of 2300 m and in early 1965 they recorded the first atmospheric-neutrino event, contemporaneously with a detection by another underground neutrino experiment set up by Fred Reines in a South African mine. It was an important milestone, given how important the study of neutrinos underground would later become.
During the early 1980s Sreekantan and his collaborators built two detectors, one at a depth of 2300 m and the other at 2000 m, to study the stability of the proton. These two experiments ran for more than a decade and put strong limits on the proton lifetime. He was also instrumental in starting a high-altitude cosmic-ray laboratory at Udhagamandalam (Ooty) in the State of Tamil Nadu to study the hadronic components of cosmic-ray showers.
Sreekantan quickly recognised the importance of the emerging field of X-ray astronomy for probing high-energy processes in the universe. In 1967 he started balloon-borne experiments to study cosmic X-ray sources and built a strong group that went on to develop expertise in the fabrication of highly sophisticated X-ray detectors for space-borne astronomy missions. The multi-wavelength astronomy observatory Astrosat, launched by the Indian Space Research Organisation in September 2015, is a testimony to the strength of the group. A very high-energy gamma-ray observation programme using the atmospheric Cherenkov technique, which was started by Sreekantan and his collaborators in Ooty in the 1970s, is being continued in Ladakh with a low-energy threshold.
Sreekantan became director of TIFR in 1975, and over the next 12 years steered the institute with distinction, leaving a rich legacy of high-quality research programmes as well as several new TIFR centres and field stations. In 1992, after a long and eventful scientific career at TIFR, Sreekantan moved to Bengaluru and was offered a chair at the newly created National Institute of Advanced Studies. His research interests shifted from the physical sciences to the philosophical aspects of science, and in particular to the abstract topic of consciousness and its scientific and philosophical basis. He remained an alert and active researcher, engaged in his academic activities with great eagerness until the very end. His death marks the end of a glorious chapter of experimental cosmic-ray research in India.
On 24 November 1959, CERN’s Proton Synchrotron (PS) first accelerated beams to an energy of 24 GeV. 60 years later, it is still at the heart of CERN’s accelerator complex, delivering beams to the fixed-target physics programme and the LHC with intensities exceeding the initial specifications by orders of magnitude. To celebrate the anniversary a colloquium was held at CERN on 25 November 2019, with PS alumni presenting important phases in the life of the accelerator.
The PS and its sister machine, Brookhaven’s Alternating Gradient Synchrotron, are the world’s oldest operating accelerators
The PS is CERN’s oldest operating accelerator, and, together with its sister machine, the Alternating Gradient Synchrotron (AGS) at Brookhaven National Laboratory in the US, one of the two oldest still-operating accelerators in the world. Both designs are based on the innovative concept of the alternating-gradient, or strong-focusing, principle developed by Ernest Courant, Milton Stanley Livingston, Hartland Snyder and John Blewett. This technique allowed a significant reduction in the size of the vacuum chambers and magnets, and unprecedented beam energies. In 1952 the CERN Council endorsed a study for a synchrotron based on the alternating-gradient principle, and construction of a machine with a design-energy range from 20 to 30 GeV was approved in October 1953. Its design, manufacture and construction took place from 1954 to 1959. Protons made their first turns on 16 September 1959, and on 24 November the beam was accelerated beyond transition to an energy of 24 GeV. On 8 December the design energy of 28.3 GeV was reached and the design intensity exceeded, at 3 × 1010 protons per pulse.
The PS has proven to be a flexible design, with huge built-in potential. Though the first experiments were performed with internal targets, extractions to external targets were soon added to the design, and further innovative extraction schemes were added through the years. On the accelerator side, the intensity was progressively ramped up, with the commissioning of the PS Booster in 1972, repeated increases of the injection energy, and many improvements in the PS itself. Through the years the PS served more and more users, for example the East Area, antiproton physics and a neutron time-of-flight facility.
With the commissioning of the ISR, SPS, LEP and LHC machines, the PS took on a new role as an injector of protons, antiprotons, leptons and ions, while continuing its own physics programme. A new challenge was the delivery of beams for the LHC: these beams need to be transversely very dense (“bright”), and have a longitudinal structure that is generated using the different radio-frequency systems of the PS, with the PS thereby contributing its fair share to the success of the LHC. And there are more challenges ahead. The LHC’s high-luminosity upgrade programme demands beam parameters out of reach for today’s injector complex, motivating the ambitious LHC Injectors Upgrade Project. Installations are now in full swing, and Run 3 will take CERN’s PS into a new parameter regime and into another interesting chapter in its life.
The LHC will restart in May 2021, marking the beginning of Run 3, announced the CERN management on 13 December. Beginning two months after the initially planned date, Run 3 will be extended by one year, until the end of 2024, to maximise physics data taking. Then, during the third long shutdown, between 2025 and mid-2027, all of the equipment needed for the high-luminosity configuration of the LHC (HL-LHC) and its experiments will be installed. The HL-LHC is scheduled to come into operation at the end of 2027 and to run for up to a decade. Its factor of five or more increase in levelled luminosity is driving ambitious detector-upgrade programmes among the LHC experiments. The experiments are replacing numerous components, even entire subdetectors, often working at the limits of current technology, to increase their physics reach. The extra time incorporated into the new schedule will enable the collaborations to ready themselves for Run 3 and beyond.
Since the start of long shutdown two in December 2018, extensive upgrades of CERN’s accelerator complex and experiments have been taking place. The pre-accelerator chain is being entirely renovated as part of the LHC Injectors Upgrade project, and new equipment is being installed in the LHC, while development of the HL-LHC’s Nb3Sn magnets continues above ground. On the morning of 13 December, civil engineers made the junction between the underground facilities at Points 1 and 5 of the accelerator – linking the HL-LHC to the LHC, and marking the latest project milestone (see image).
“The HL-LHC is in full swing and the machine and civil engineering is on track,” says Lucio Rossi, HL-LHC project leader. “The schedule is drawn up in a global way, taking into account every aspect of the machine, experiment and infrastructure readiness, entirely with the aim to maximise the physics. The overall HL-LHC timetable is flexible in the sense that it will depend on actual results.”
Hard-scattering processes in hadronic collisions generate parton showers – highly collimated collections of quarks and gluons that subsequently fragment into hadrons, producing jets. In ultra-relativistic nuclear collisions, the parton shower evolves in a hot and dense quark–gluon plasma (QGP) created by the collision. Interactions of the partons with the plasma lead to reduced parton and jet energies, and modified jet properties. This phenomenon, known as jet quenching, results in the suppression of jet yields – a suppression that is hypothesised to depend on the structure of the jet. High-momentum shower components with a large angular separation are resolved by the medium; however, the plasma is thought to have a characteristic angular scale below which such components are not resolved and instead interact with the medium as a single partonic fragment.
Using 5.02 TeV lead–lead collision data taken at the LHC in 2018 and corresponding pp data collected in 2017, ATLAS has measured large-radius jets by clustering smaller-radius jets with transverse momenta pT > 35 GeV. (This procedure suppresses contributions from the underlying event and excludes soft radiation, so that the focus remains on hard partonic splittings.) The sub-jets are further re-clustered in order to obtain the splitting scale, √d12, which represents the transverse momentum scale for the hardest splitting in the jet – a measure of the angular separation between the high-momentum components.
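For orientation, a common convention for a kT-reclustering splitting scale (an assumption here, not a quotation of the analysis’s exact definition) uses the two proto-jets combined in the final clustering step, with transverse momenta pT,1 and pT,2 and angular separation ΔR12:

\[
\sqrt{d_{12}} \;=\; \min\!\left(p_{\mathrm{T},1},\, p_{\mathrm{T},2}\right)\,\Delta R_{12},
\]

so a large value of √d12 indicates a hard, wide-angle splitting within the large-radius jet.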
ATLAS has investigated the effect of the splitting scale on jet quenching using the nuclear modification factor (RAA), which is the ratio between the jet yields measured in lead–lead and pp collisions, scaled by the estimated average number of binary nucleon–nucleon collisions. An RAA value of unity indicates no suppression in the QGP, whereas a value below one indicates a suppressed jet yield. The measurement is corrected for background fluctuations and instrumental resolution via an unfolding procedure.
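Schematically, writing the pT-differential jet yields in lead–lead and pp collisions and denoting by ⟨Ncoll⟩ the estimated average number of binary nucleon–nucleon collisions, the definition described above amounts to (a sketch, omitting per-event normalisation details):

\[
R_{AA} \;=\; \frac{1}{\langle N_{\mathrm{coll}}\rangle}\;
\frac{\mathrm{d}N_{\mathrm{jet}}^{\mathrm{Pb+Pb}}/\mathrm{d}p_{\mathrm{T}}}
     {\mathrm{d}N_{\mathrm{jet}}^{pp}/\mathrm{d}p_{\mathrm{T}}}.
\]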
The figure shows RAA for large-radius jets as a function of the average number of participating nucleons – a measure of the centrality of the collision, as glancing collisions involve only a handful of nucleons, whereas head-on collisions involve a large fraction of the 208 nucleons in each lead nucleus. RAA is presented separately for large-radius jets with a single isolated high-momentum sub-jet and for those with multiple sub-jets, in three intervals of the splitting scale √d12. As expected, jets are increasingly suppressed for more head-on collisions (figure 1). More pertinent to this analysis, and for all centralities, yields of large-radius jets that consist of several sub-jets are found to be significantly more suppressed than those that consist of a single small-radius jet. This observation is qualitatively consistent with the hypothesis that jets with hard internal splittings lose more energy, and provides a new perspective on the role of jet structure in jet suppression. Further progress will require comparison with theoretical models.
There is a longstanding puzzle concerning the value of the Cabibbo–Kobayashi–Maskawa matrix element |Vcb|, which describes the coupling between charm and beauty quarks in W± interactions. This fundamental parameter of the Standard Model has been measured with two complementary methods. One uses the inclusive rate of b-hadron decays into final states containing a c hadron and a charged lepton; the other measures the rate of a specific (exclusive) semileptonic B decay, e.g. B0 → D*–μ+νμ. The world average of results using the inclusive approach, |Vcb|incl = (42.19 ± 0.78) × 10–3, differs from the average of results using the exclusive approach, |Vcb|excl = (39.25 ± 0.56) × 10–3, by approximately three standard deviations.
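Treating the two world averages as independent Gaussian results (a naive check that ignores any correlations between them), the quoted tension follows directly:

\[
\frac{|V_{cb}|_{\mathrm{incl}} - |V_{cb}|_{\mathrm{excl}}}{\sqrt{\sigma_{\mathrm{incl}}^{2} + \sigma_{\mathrm{excl}}^{2}}}
\;=\; \frac{(42.19 - 39.25)\times 10^{-3}}{\sqrt{0.78^{2} + 0.56^{2}}\times 10^{-3}}
\;\approx\; \frac{2.94}{0.96} \;\approx\; 3.1.
\]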
So far, exclusive determinations have been carried out only at e+e– colliders, using B0 and B+ decays. At the ϒ(4S) resonance the full decay kinematics can be determined, despite the undetected neutrino, and the total number of B mesons produced, which is needed to measure |Vcb|, is known precisely. The situation is more challenging at a hadron collider – but the LHCb collaboration has just completed an exclusive measurement of |Vcb| based, for the first time, on Bs0 decays.
The exclusive determination of |Vcb| relies on the description of strong-interaction effects for the b and c quarks bound in mesons, the so-called form factors (FF). These are functions of the recoil momentum of the c meson in the b-meson rest frame, and are calculated using non-perturbative QCD techniques, such as lattice QCD or QCD sum rules. A key advantage of semileptonic Bs0 decays, compared to B0/+ decays, is that their FF can be more precisely computed. Recently, the FF parametrisation used in the exclusive determination has been considered to be a possible origin of the inclusive–exclusive discrepancy, and comparisons between the results for |Vcb| obtained using different parametrisations, such as that by Caprini, Lellouch and Neubert (CLN) and that by Boyd, Grinstein and Lebed (BGL), are considered a key check.
Both parametrisations are employed by LHCb in a new analysis of Bs0 → Ds(*)–μ+νμ decays, using a novel method that does not require the momentum of particles other than Ds– and μ+ to be estimated. The analysis also uses B0 → D(*)–μ+νμ as a normalisation mode, which has the key advantage that many systematic effects cancel in the ratio. With the form factors and relative efficiency-corrected yields in hand, obtaining |Vcb| requires only a few more inputs: branching fractions that were well measured at the B-factories, and the ratio of Bs0 and B0 production fractions measured at LHCb.
The values of |Vcb| obtained are (41.4 ± 1.6) × 10–3 and (42.3 ± 1.7) × 10–3 in the CLN and BGL parametrisations, respectively. These results are compatible with each other and agree with previous measurements with exclusive decays, as well as the inclusive determination (figure 1). This new technique can also be applied to B0 decays, giving excellent prospects for new |Vcb| measurements at LHCb. They will also benefit from expected improvements at Belle II to a key external input, the B0 → D(*)–μ+νμ branching fraction. Belle II’s own measurement of |Vcb| is also expected to have reduced systematic uncertainties. In addition, new lattice QCD calculations for the full range of the D*– recoil momentum are expected soon and should give valuable constraints on the form factors. This synergy between theoretical advances, Belle II and LHCb (and its upgrade, due to start in 2021) will very likely say the final word on the |Vcb| puzzle.
Though a free parameter in the Standard Model, the mass of the Higgs boson is important for both theoretical and experimental reasons. Most peculiarly from a theoretical standpoint, our current knowledge of the masses of the Higgs boson and the top quark implies that the quartic coupling of the Higgs vanishes and becomes negative tantalisingly close to, but just before, the Planck scale. There is no established reason for the Standard Model to perch near this boundary. The implication is that the vacuum is almost but not quite stable, and that on a timescale substantially longer than the age of the universe, some point in space will tunnel to a lower energy state and a bubble of true vacuum will expand to fill the universe. Meanwhile, from an experimental perspective, it is important to continually improve measurements so that the uncertainty on the mass of the Higgs boson eventually rivals the value of its width. At that point, measuring the Higgs-boson mass can provide an independent method to determine the Higgs-boson width, which is sensitive to the existence of possible undiscovered particles and is expected to be a few MeV according to the Standard Model.
The CMS collaboration recently announced the most precise measurement of the Higgs-boson mass achieved thus far, at 125.35 ± 0.15 GeV – a precision of roughly 0.1%. This very high precision was achieved thanks to an enormous amount of work over many years to carefully calibrate and model the CMS detector when it measures the energy and momenta of the electrons, muons and photons necessary for the measurement.
The most recent contribution to this work was a measurement of the mass in the di-photon channel using data collected at the LHC by the CMS collaboration in 2016 (figure 1). This measurement was made using the lead–tungstate crystal calorimeter, which uses approximately 76,000 crystals, each weighing about 1.1 kg, to measure the energy of the photons. A critical step of this analysis was a precise calibration of each crystal’s response using electrons from Z-boson decay, and accounting for the tiny difference between the electron and photon showers in the crystals.
This new result was combined with earlier results obtained with data collected between 2011 and 2016. One measurement was in the decay channel to two Z bosons, which subsequently decay into electron or muon pairs, and another was a measurement in the di-photon channel made with earlier data. The 2011 and 2012 data combined yield 125.06 ± 0.29 GeV. The 2016 data yield 125.46 ± 0.17 GeV. Combining these yields CMS’s current best precision of 125.35 ± 0.15 GeV (figure 2). This new precise measurement of the Higgs-boson mass will not, at least not on its own, lead us in a new direction of physics, but it is an indispensable piece of the puzzle of the Standard Model – and one fruit of the increasing technical mastery of the LHC detectors.
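As a rough numerical cross-check (not the collaboration’s procedure, which accounts for correlated systematic uncertainties between channels and datasets), a naive inverse-variance weighting of the two inputs quoted above reproduces the headline combination to within rounding:

# Naive inverse-variance weighted average of the two CMS inputs quoted above.
# Sanity check only: it ignores correlated systematics, so it is not the
# collaboration's actual combination procedure.
measurements = [(125.06, 0.29), (125.46, 0.17)]  # (mass, uncertainty) in GeV

weights = [1.0 / sigma**2 for _, sigma in measurements]
mean = sum(w * m for (m, _), w in zip(measurements, weights)) / sum(weights)
uncertainty = (1.0 / sum(weights)) ** 0.5

print(f"combined mass = {mean:.2f} +/- {uncertainty:.2f} GeV")  # ~125.36 +/- 0.15 GeV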
What would you say were the best and the worst of times in your half-century-long career as a theorist?
The two best times, in chronological order, were the 1979 discovery of the gluon in three-jet events at DESY, which Mary Gaillard, Graham Ross and I had proposed three years earlier, and the discovery of the Higgs boson at CERN in 2012, in particular because one of the most distinctive signatures for the Higgs, its decay to two photons, was something Gaillard, Dimitri Nanopoulos and I had calculated in 1975. There was a big build up to the Higgs and it was a really emotional moment. The first of the two worst times was in 2000 with the closure of LEP, because maybe there was a glimpse of the Higgs boson. In fact, in retrospect the decision was correct because the Higgs wasn’t there. The other time was in September 2008 when there was the electrical accident in the LHC soon after it started up. No theoretical missing factor-of-two could be so tragic.
Your 1975 work on the phenomenology of the Higgs boson was the starting point for the Higgs hunt. When did you realise that the particle was more likely than not to exist?
Our paper, published in 1976, helped people think about how to look for the Higgs boson, but it didn’t move to the top of the physics agenda until after the discovery of the W and Z bosons in 1983. When we wrote the paper, things like spontaneous symmetry breaking were regarded as speculative hypotheses by the distinguished grey-haired scientists of the day. Then, in the early 1990s, precision measurements at LEP enabled us to look at the radiative corrections induced by the Higgs and they painted a consistent picture that suggested the Higgs would be relatively light (less than about 300 GeV). I was sort of morally convinced beforehand that the Higgs had to exist, but by the early 1990s it was clear that, indirectly, we had seen it. Before that there were alternative models of electroweak symmetry breaking but LEP killed most of them off.
To what extent does the Higgs boson represent a “portal” to new physics?
The Higgs boson is often presented as completing the Standard Model (SM) and solving lots of problems. Actually, it opens up a whole bunch of new ones. We know now that there is at least one particle that looks like an effective elementary scalar field. It’s an entirely new type of object that we’ve never encountered before, and every single aspect of the Higgs is problematic from a theoretical point of view. Its mass: we know that in the SM it is subject to quadratic corrections that make the hierarchy of mass scales unstable.
Every single aspect of the Higgs is problematic from a theoretical point of view
Its couplings to fermions: those are what produce the mixing of quarks, which is a complete mystery. The quartic term of the Higgs potential in the SM goes negative if you extrapolate it to high energies, the theory becomes unstable and the universe is doomed. And, in principle, you can add a constant term to the Higgs potential, which is the infamous cosmological constant that we know exists in the universe today but that is much, much smaller than would seem natural from the point of view of Higgs theory. Presumably some new physics comes in to fix these problems, and that makes the Higgs sector of the SM Lagrangian look like the obvious portal to that new physics.
In what sense do you feel an emotional connection to theory?
The Higgs discovery is testament to the power of mathematics to describe nature. People often talk about beauty as being a guide to theory, but I am always a bit sceptical about that because it depends on how you define beauty. For me, a piece of engineering can be beautiful even if it looks ugly. The LHC is a beautiful machine from that point of view, and the SM is a beautiful theoretical machine that is driven by mathematics. At the end of the day, mathematics is nothing but logic taken as far as you can.
Do you recall the moment you first encountered supersymmetry (SUSY), and what convinced you of its potential?
I guess it must have been around 1980. Of course I knew that Julius Wess and Bruno Zumino had discovered SUSY as a theoretical framework, but their motivations didn’t convince me. Then people like Luciano Maiani, Ed Witten and others pointed out that SUSY could help stabilise the hierarchy of mass scales that we find in physics, such as the electroweak, Planck and grand unification scales. For me, the first phenomenological indication that SUSY could be related to reality was our realisation in 1983 that SUSY offered a great candidate for dark matter in the form of the lightest supersymmetric particle. The second was a few years later when LEP provided very precise measurements of the electroweak mixing angle, which were in perfect agreement with supersymmetric (but not non-supersymmetric) grand unified theories. The third indication was around 1991 when we calculated the mass of the lightest supersymmetric Higgs boson and got a mass up to about 130 GeV, which LEP was indicating as a very plausible value and which agrees with the experimental value.
There was great excitement about SUSY ahead of the LHC start-up. In hindsight, does the non-discovery so far make the idea less likely?
Certainly it’s disappointing. And I have to face the possibility that even if SUSY is there, I might not live to meet her. But I don’t think it’s necessarily a problem for the underlying theory. There are certainly scenarios that can provide the dark matter even if the supersymmetric particles are rather heavier than we originally thought, and such models are still consistent with the mass of the Higgs boson. The information you get from unification of the couplings at high energies also doesn’t exclude SUSY particles weighing 10 TeV or so. Clearly, as the masses of the sparticles increase, you have to do more fine tuning to solve the electroweak hierarchy problem. On the other hand, the amount of fine tuning is still many, many orders of magnitude less than what you’d have to postulate without it! It’s a question of how much resistance to pain you have. That said, to my mind the LHC has actually provided three additional reasons for loving SUSY. One is the correct prediction for the Higgs mass. Another is that SUSY stabilises the electroweak vacuum (without it, SM calculations show that the vacuum is metastable). The third is that in a SUSY model, the Higgs couplings to other particles, while not exactly the same as in the SM, should be pretty close – and of course that’s consistent with what has been measured so far.
To what extent is SUSY driving considerations for the next collider?
I still think it’s a relatively clear-cut and well-motivated scenario for physics at the multi-TeV scale. But obviously its importance is less than it was in the early 1990s when we were proposing the LHC. That said, if you want a specific benchmark scenario for new physics at a future collider, SUSY would still be my go-to model, because you can calculate accurate predictions. As for new physics beyond the Higgs and more generally the precision measurements that you can make in the electroweak sector, the next topic that comes to my mind is dark matter. If dark matter is made of weakly-interacting massive particles (WIMPs), a high-energy Future Circular Collider should be able to discover it. You can look at SUSY at various different levels. One is that you just add in these new particles and make sure they have the right couplings to fix the hierarchy problem. But at a more fundamental level you can write down a Lagrangian, postulate this boson-fermion symmetry and follow the mathematics through. Then there is a deeper picture, which is to talk about additional fermionic (or quantum) dimensions of space–time. If SUSY were to be discovered, that would be one of the most profound insights into the nature of reality that we could get.
If SUSY is not a symmetry of nature, what would be the implications for attempts to go beyond the SM, e.g. quantum gravity?
We are never going to know that SUSY is not there. String theorists could probably live with very heavy SUSY particles. When I first started thinking about SUSY in the 1980s there was this motivation related to fine tuning, but there weren’t many other reasons why SUSY should show up at low energies. More arguments came later, for example dark matter, which are nice but a matter of taste. Long after I and my grandchildren have passed on, humans could still be exploring physics way below the Planck scale, and string theorists could still be cool with that.
How high do the masses of the super-partners need to go before SUSY ceases to offer a compelling solution for the hierarchy problem and dark matter?
Beyond about 10 TeV it is difficult to see how it can provide the dark matter unless you change the early expansion history of the universe – which of course is quite possible, because we have no idea what the universe was doing when the temperature was above an MeV. Indeed, many of my string colleagues have been arguing that the expansion history could be rather different from the conventional adiabatic smooth expansion that people tend to use as the default. In this case supersymmetric particles could weigh 10 or even 30 TeV and still provide the dark matter. As for the hierarchy problem, obviously things get tougher to bear.
What can we infer about SUSY as a theory of fundamental particles from its recent “avatars” in lasers and condensed-matter systems?
I don’t know. It’s not really clear to me that the word “SUSY” is being used in the same sense that I would use it. Supersymmetric quantum mechanics was taken as a motivation for the laser setup (CERN Courier March/April 2019 p10), but whether the deeper mathematics of SUSY has much to do with the way this setup works I’m not sure. The case of topological condensed-matter systems is potentially a more interesting place to explore what this particular face of SUSY actually looks like, as you can study more of its properties under controlled conditions. The danger is that, when people bandy around the idea of SUSY, often they just have in mind this fermion–boson partnership. The real essence of SUSY goes beyond that and includes the couplings of these particles, and it’s not clear to me that in these effective-SUSY systems one can talk in a meaningful way about what the couplings look like.
Has the LHC new-physics no-show so far impacted what theorists work on?
In general, I think that members of the theoretical community have diversified their interests and are thinking about alternative dark-matter scenarios, and about alternative ways to stabilise the hierarchy problem. People are certainly exploring new theoretical avenues, which is very healthy and, in a way, there is much more freedom for young theorists today than there might have been in the past. Personally, I would be rather reluctant at this time to propose to a PhD student a thesis that was based solely on SUSY – the people who are hiring are quite likely to want them to be not just working on SUSY and maybe even not working on SUSY at all. I would regard that as a bit unfair, but there are always fashions in theoretical physics.
Following a long and highly successful period of theory-led research, culminating in the completion of the SM, what signposts does theory offer experimentalists from here?
I would broaden your question. In particle physics, yes, we have the SM, which over the past 50 years has been the dominant paradigm. But there is also a paradigm in cosmology and gravitation – general relativity and the idea of a big bang – initiated a century ago by Einstein. The discovery of gravitational waves almost four years ago was the “Higgs moment” for gravity, and that community now finds itself in the same fix that we do, in that they have this theory-led paradigm that doesn’t indicate where to go next.
The discovery of gravitational waves almost four years ago was the “Higgs moment” for gravity
Gravitational waves are going to tell us a lot about astrophysics, but whether they will tell us about quantum gravity is not so obvious. The Higgs boson, meanwhile, tells us that we have a theory that works fantastically well but leaves many mysteries – such as dark matter, the origin of matter, neutrino masses, cosmological inflation, etc – still standing. These are a mixture of theoretical, phenomenological and experimental problems suggesting life beyond the SM. But we don’t have any clear signposts today. The theoretical cats are wandering off in all directions, and that’s good because maybe one of the cats will find something interesting. But there is still a dialogue going on between theory and experiment, and it’s a dialogue that is maybe less of a monologue than it was during the rise of the SM and general relativity. The problems we face in going beyond the current paradigms in fundamental physics are the hardest we’ve faced yet, and we are going to need all the dialogue we can muster between theorists, experimentalists, astrophysicists and cosmologists.
Anomalies, which I take to mean data that disagree with the scientific paradigm of the day, are the bread and butter of phenomenologists working on physics beyond the Standard Model (SM). Are they a mere blip or the first sign of new physics? A keen understanding of statistics is necessary to help decide which “bumps” to work on.
Take the excess in the rate of di-photon production at a mass of around 750 GeV spotted in 2015 by the ATLAS and CMS experiments. ATLAS had a 4σ peak with respect to background, which CMS seemed to confirm, although its signal was less clear. Theorists produced an avalanche of papers speculating on what the signal might mean but, in the end, the signal was not confirmed in new data. In fact, as is so often the case, the putative signal stimulated some very fruitful work. For example, it was realised that ultra-peripheral collisions between lead ions could produce photon–photon resonances, leading to an innovative and unexpected search programme in heavy-ion physics. Other authors proposed using such collisions to measure the anomalous magnetic moment of the tau lepton, which is expected to be especially sensitive to new physics, and in 2018 ATLAS and CMS found the first evidence for (non-anomalous) high-energy light-by-light scattering in lead–lead ultra-peripheral collisions.
Some anomalies have disappeared during the past decade not primarily because they were statistical fluctuations, but because of an improved understanding of theory. One example is the forward–backward asymmetry (AFB) of top–antitop production at the Tevatron. At large transverse momentum, AFB was measured to be much too large compared to SM predictions, which were at next-to-leading order in QCD with some partial next-to-next-to-leading-order (NNLO) corrections. The complete NNLO corrections, calculated in a Herculean effort, proved to contribute much more than was previously thought, faithfully describing top–antitop production both at the Tevatron and at the LHC.
Other anomalies are still alive and kicking. Arguably, chief among them is the long-standing oddity in the measurement of the anomalous magnetic moment of the muon, which is about 4σ discrepant with the SM prediction. Spotted 20 years ago, the discrepancy has prompted many papers attempting to explain it, with contributions ranging from supersymmetric particles to leptoquarks. A similarly long-standing anomaly is a 3.8σ excess in the number of electron antineutrinos emerging from a muon–antineutrino beam observed by the LSND experiment and backed up more recently by MiniBooNE. Again, numerous papers attempting to explain the excess, e.g. in terms of the existence of a fourth “sterile” neutrino, have been written, but the jury is still out.
Some anomalies are more recent, and unexpected. The so-called “X17” anomaly reported at a nuclear physics experiment in Hungary, for instance, shows a significant excess in the rate of certain nuclear decays of 8Be and 4He nuclei (see Rekindled Atomki anomaly merits closer scrutiny) which has been interpreted as being due to the creation of a new particle of mass 17 MeV. Though possible theoretically, one needs to work hard to make this new particle not fall afoul of other experimental constraints; confirmation from an independent experiment is also needed. Personally, I am not pursuing this: I think that the best new-physics ideas have already been had by other authors.
When working on an anomaly, beyond-the-SM phenomenologists hypothesise a new particle and/or interaction to explain it, check to see if it works quantitatively, check to see if any other measurements rule the explanation out, then provide new ways in which the idea can be tested. After this, they usually check where the new physics might fit into a larger theoretical structure, which might explain some other mysteries. For example, there are currently many anomalies in measurements of B-meson decays, each of which isn’t particularly statistically significant (typically 2–3σ away from the SM) but which taken together form a coherent picture with a higher significance. The exchange of hypothesised Z′ or leptoquark quanta provides working explanations, with the larger structure also shedding light on the pattern of masses of SM fermions, and most of my research time is currently devoted to studying them.
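As a toy illustration of how several modest deviations can add up (the numbers below are hypothetical, and the real B-anomaly combinations are full likelihood fits over many observables rather than this simple recipe), a Stouffer-type combination of N independent, same-direction deviations grows roughly as the individual significance times √N:

import math

# Toy Stouffer-type combination of independent, same-direction deviations.
# The z-scores are hypothetical, chosen only to mimic "several 2-3 sigma
# anomalies"; real combinations rely on full likelihood fits.
z_scores = [2.5, 2.8, 2.3, 3.0]

z_combined = sum(z_scores) / math.sqrt(len(z_scores))
print(f"combined significance ~ {z_combined:.1f} sigma")  # ~5.3 sigma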
The coming decade will presumably sort several current anomalies into discoveries and those that “went away”. Belle II and future LHCb measurements should settle the B anomalies, while the anomalous muon magnetic moment may even be settled this year by the g-2 experiment at Fermilab. Of course, we hope that new anomalies will appear and stick. One anomaly from the late 1990s – that type Ia supernovae show an anomalous acceleration at large redshifts – turned out to reveal the existence of dark energy and to produce the dominant paradigm of cosmology today. This reminds us that all surprising discoveries were anomalies at some stage.
We have many fantastic achievements in our wonderful field of research, most recently the completion of the Large Hadron Collider (LHC) and its discovery of a new form of matter, the Higgs boson. The field is now preparing to face the next set of challenges, in whatever direction the European Strategy for Particle Physics recommends. With ambitious goals, this strategy update is the right time to ask: “How do we make ourselves as good as we need to be to succeed?”
Big science has brought more than fundamental knowledge: it has taught us that we can achieve more when we collaborate, and to do this we need to communicate both within and beyond the community. We need to communicate to our funders and, most importantly of all, we need to communicate with wider society to give everyone an opportunity to engage in or become a part of the scientific process. Yet some of the audiences we could and should be reaching are below the radar.
Reaching high-science capital people – those who will attend a laboratory open day, watch a new documentary on dark energy or read a newspaper article about medical accelerators – is a vital part of our work, and we do it well. But many audiences have barriers to traditional modes of outreach and engagement. For example, groups or families with an inherently low science background, perhaps linked to socio-economic grouping, will not read articles in the science-literate mainstream press as they feel, incorrectly, that science is not for them. Large potential audiences with physical or mental disabilities will be put off coming to events for practical reasons such as accessibility or perhaps being unable to read or understand printed or visual media. In the UK alone, millions of people are registered as visually impaired (VI) to some degree. To reach these and other “invisible” audiences, we need to enter their space.
When it comes to science engagement, which is a predominantly visual interaction, the VI audience is underserved. Tactile Collider is a communication project aimed at addressing this gap. The idea came in 2014 when a major LHC exhibition came to Manchester, UK. During panel discussions at a launch party held at the Museum of Science and Industry, it became clear that the accessibility of the exhibition could be improved. We were also spurred into action by a request from a local VI couple for an adapted tour. I gathered together some pieces of the ATLAS forward detector and some radio-frequency cavity models, both of which had a pleasing weight and plenty of features to feel with the fingers, and gave the couple a bespoke tour of the exhibition. The feedback was fantastic, and making this tactile, interactive and bespoke form of engagement available to more people, whether sighted or VI, was a challenge we accepted.
With the help of the museum staff, we developed the idea further and were soon put in touch with Kirin Saeed, an independent consultant on accessibility and visually impaired herself. Together, we formulated a potential project to a stage where we could approach funders. The UK’s Science and Technology Facilities Council (STFC) recognised and supported our vision for a UK-wide project and funded the nascent Tactile Collider for two years through a £100,000 public-engagement grant.
We wanted to design a project without preconceptions about the techniques and methods of communication and delivery. With co-leaders Chris Edmonds of the University of Liverpool and Robyn Watson, a teacher of VI students, our team spent one year listening to and talking with audiences before we even considered producing materials or defining an approach. We spent time in focus groups, in classrooms across the north of England, visiting museums with VI people, and looking at the varied ways of learning and accessing information for VI groups of all ages. Training in skills such as audio description and tactile-map production was crucial, as were the PhD students who got involved to design materials and deliver Tactile Collider events.
Early on, we focused on a science message based around four key themes: the universe is made of particles; we accelerate these particles using electric fields in cavities; we control particle beams using magnets; and we collide particle beams to make the Higgs boson. The first significant event took place in Liverpool in 2017 and since then the exhibition has toured UK schools, science festivals and, in 2019, joined the CERN Open Days for our first Geneva-region event.
Content development
A key aspect of Tactile Collider is content developed specifically for a VI audience, along with training the delivery team in how to sight-guide and educating them about the large range of visual impairments. As an example, take the magnetic field of a dipole – the first step to understanding how magnets are used to control and manipulate charged particle beams. The idea of a bar magnet having a north pole and a south pole, and magnetic field lines connecting the two, is simple enough to convey using pencil and paper. To communicate with VI audiences, by contrast, the magnet station of Tactile Collider contains a 3D model of a bar magnet with tactile bumps for north and south poles, partnered with tactile diagrams. In some areas of Tactile Collider, 3D sound is employed to give students a choice in how to interact.
The lessons learned during the project’s development and delivery led us to a set of principles for engagement, which work for all audiences regardless of any particular needs. We found that all science engagement should strive to be authentic, with no dumbing down of the science message, and should be delivered by practising scientists striving to involve the audience as equals. Alongside this authentic message, VI learners require close interaction with a scientist-presenter in groups of no more than four. The scientist should also be trained in VI-audience awareness, sighted guiding, audio description and the presentation of a tactile narrative linked to the learning outcomes. Coupled with this is the need to train presenters to use the differing materials with diverse audience groups.
Tactile Collider toured the UK in 2017 and 2018, visiting many mainstream and specialist schools, and meeting many motivated and enthusiastic students. We have also spent time at music festivals, with a focus on raising awareness of VI issues and giving people a chance to learn about the LHC using senses other than their eyes. One legacy of Tactile Collider is educating our community, and we are planning “VI in science” training events in 2020 in addition to a third community meeting bringing together scientists and communication professionals.
There is now a real interest and understanding in particle physics about the importance of reaching underrepresented audiences. Tactile Collider is a step towards this, and we are working to share the skills and insights we have gained in our journey so far. The idea has also appeared in astronomy: Tactile Universe, based at the Institute of Cosmology and Gravitation at the University of Portsmouth, engages the VI community with astrophysics research, for example by creating 3D printed tactile images of galaxies for use in schools and at public events. The first joint Tactile Collider/Universe event will take place in London in 2020 and we have already jointly hosted two community workshops. The Tactile Collider team is happy to discuss bringing the exhibition to any event, lab or venue.
Fundamental science is a humbling and levelling endeavour. When we consider the Higgs boson or supernovae, none of us can directly engage with the very small, the very distant or the very massive. Using all of our senses shows us science in a new and fascinating way.