First low-mass dielectron results ahead of LHC Run 3

Dielectron cross section

A report from the ALICE experiment

One of the main objectives of the ALICE physics programme for future LHC runs is the precise measurement of the e⁺e⁻ (dielectron) invariant-mass continuum produced in heavy-ion collisions. In contrast to strongly interacting hadronic probes, dielectrons provide an unperturbed view into the quark–gluon plasma (QGP), a phase of deconfined quarks and gluons that is produced in such collisions. For example, they will allow physicists to determine the initial temperature of the QGP and to study the effects of the predicted restoration of chiral symmetry. In order to perform these measurements, important upgrades to the ALICE detector system are underway, most notably a new inner tracking system and a new readout system for the time projection chamber.
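At the heart of these measurements is the dielectron invariant mass, built from the reconstructed momenta of the two electrons. The following sketch is illustrative Python, not ALICE analysis code; the input momenta (in GeV/c) are assumed values:

```python
import math

def invariant_mass(p1, p2, m_e=0.000511):
    """Invariant mass (GeV/c^2) of an e+e- pair, given the momentum
    vectors (px, py, pz) in GeV/c of the two electrons."""
    def energy(p):
        # Relativistic energy E = sqrt(|p|^2 + m^2), natural units (c = 1)
        return math.sqrt(sum(c * c for c in p) + m_e ** 2)

    e_tot = energy(p1) + energy(p2)
    px, py, pz = (a + b for a, b in zip(p1, p2))
    return math.sqrt(max(e_tot ** 2 - (px ** 2 + py ** 2 + pz ** 2), 0.0))

# Two back-to-back 1.55 GeV/c electrons: the pair mass comes out near
# 3.1 GeV/c^2, the J/psi mass
m = invariant_mass((0.0, 0.0, 1.55), (0.0, 0.0, -1.55))
```

Summing over many such pairs, and subtracting combinatorial background, yields the invariant-mass spectrum discussed in the text.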

Meanwhile, the ALICE collaboration has also analysed the proton–proton (pp) and lead–lead (Pb–Pb) collision data recorded so far during LHC Runs 1 and 2. The results, which have recently been submitted for publication, provide new physics insights, in particular into the production of heavy quarks (charm and beauty) in pp collisions at centre-of-mass energies of 7 and 13 TeV. The measured invariant-mass spectrum of dielectrons (see figure) has been found to be in good agreement with the expected distribution of dielectrons from decays of light mesons and J/ψ, as well as semileptonic decays of correlated heavy-flavour pairs. The Pb–Pb results, recorded at a centre-of-mass energy of 2.76 TeV per nucleon–nucleon pair, are not yet sensitive enough to quantify the presence of thermal radiation and signs of chiral symmetry restoration on top of the vacuum expectation.

The results obtained in pp collisions at 13 TeV provide the first measurements of charm and beauty production cross sections at mid-rapidity integrated over all transverse momenta at the current highest LHC energy. Fitting the data with two different models of heavy-flavour production (PYTHIA 6.4 and POWHEG), ALICE observes significant differences in the obtained charm cross sections at both investigated collision energies. The difference arises from different rapidity correlations between charm and anti-charm quarks in the two calculations. Hence, the data provide crucial input to improve models of charm production that is complementary to single charmed-hadron measurements.

In addition, the distance of closest approach (dca) of the electrons to the collision vertex has been successfully used in the analysis of pp collisions at 7 TeV to distinguish displaced dielectrons, originating from open heavy-flavour decays, from prompt decays of light hadrons. This is an important test, as the dca will be a crucial tool to isolate a thermal signal in the mass region 1–3 GeV/c² in the data that will be collected in LHC Runs 3 and 4 (starting in 2021). Part of the data will be recorded with the magnetic field of the central barrel solenoid reduced from 0.5 to 0.2 T in order to further increase the acceptance of dielectrons with low mass and transverse momentum.

A decade of advances in jet substructure

A report from the ATLAS experiment

Ten years ago, the first in a series of annual meetings devoted to the theoretical and experimental understanding of massive hadronically decaying particles with high transverse momenta took place at SLAC. These “BOOST” workshops coincided with influential publications on the subject of reconstructing such Lorentz-boosted decays as single jets with large radius parameters [1], which kick-started the field of jet substructure. Such techniques have become a critical aspect of the ATLAS and CMS experimental programmes searching for new physics at the highest scales accessible with the LHC.

The understanding of large-radius jets and their substructure has progressed considerably. Analytical calculations have recently been published that predict the distribution of jet-substructure observables at high accuracy, and these have been compared to data by both ATLAS [2] and CMS [3] in proton–proton collisions. Measurements of substructure observables have also recently been made by the ALICE, CMS and ATLAS collaborations in heavy-ion collisions [4,5,6]. Such results were among the many topics discussed at the 10th BOOST workshop in Paris this July, where the ATLAS collaboration presented new results accentuating the advances in jet substructure.

Left: the data-to-simulation ratio of the average large-radius jet transverse momentum response as a function of the large-radius jet transverse momentum, with the combined result based on three in situ techniques. The total uncertainty is shown as the green band, reaching percent level for jets with low-to-intermediate transverse momenta. Right: the background rejection versus signal efficiency for various algorithms, which identify hadronically decaying top quarks with large transverse momenta. The application of machine learning (DNN top, BDT top, TopoDNN) leads to large improvements over traditional approaches.

Recent focus on measuring Standard Model properties using jet substructure has motivated ATLAS to measure the energy and mass response of large-radius jets with the highest possible precision [7]. A new in situ (that is, data-driven) calibration for large-radius jets provides percent-level uncertainties by combining several measurements of the jet energy scale in events where the jet is balanced by a well-measured reference object such as a leptonically decaying Z boson, a photon or a system of well-calibrated jets with lower momenta (figure, left). The mass scale of these jets is also measured using fits to the jet mass distribution obtained from hadronically decaying W bosons and top quarks in data, and by combining information from the ATLAS inner tracking detector and calorimeters. The precision for the jet mass scale in certain regions of parameter space also reaches the percent level, which is unprecedented for substructure observables.

Meanwhile, the ongoing revolution in machine learning has directly intersected with jet-substructure studies. Techniques such as boosted decision trees and deep neural networks have been studied by ATLAS to identify W bosons and top quarks with high transverse momenta [8]. These approaches allow several high-level substructure observables such as the mass, or low-level information such as measured energy depositions from the calorimeter, to be utilised simultaneously using their complex correlation pattern to gain information. Such techniques achieve improvements of more than 100% in terms of background rejection for top quark identification over previous results (figure, right).
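The background rejection quoted in the figure is simply the inverse of the background efficiency at a chosen signal efficiency. A toy sketch with hypothetical Gaussian tagger-score distributions (not the actual ATLAS taggers or their performance figures) illustrates the metric:

```python
import random

random.seed(1)

# Toy tagger scores: signal (boosted tops) peaks high, background (QCD jets) low.
# The means and widths here are arbitrary illustrative choices.
signal = [random.gauss(0.7, 0.15) for _ in range(10000)]
background = [random.gauss(0.3, 0.15) for _ in range(10000)]

def rejection_at_efficiency(sig, bkg, target_eff=0.5):
    """Background rejection 1/eps_bkg at a chosen signal efficiency:
    place the score cut so that target_eff of signal passes, then count
    how much background survives."""
    cut = sorted(sig, reverse=True)[int(target_eff * len(sig)) - 1]
    eps_bkg = sum(s > cut for s in bkg) / len(bkg)
    return 1.0 / eps_bkg if eps_bkg > 0 else float("inf")

rej = rejection_at_efficiency(signal, background)
```

A "more than 100%" improvement in the text then means a better-than-doubled value of this rejection at the same signal efficiency.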

In situ measurements of the tagging and background efficiencies of these algorithms robustly demonstrate that they are well understood in terms of the QCD-based models implemented in Monte Carlo generators, and are stable in the face of the challenging high-pile-up environment of Run 2 at the LHC.

In two ATLAS publications, the early Run-2 data have allowed for rapid progress by enabling powerful in situ techniques. The collaboration is now looking forward to the possibilities offered by the larger full Run-2 dataset, where such data-driven calibrations will bring precision to an increasing number of observables. This will improve the quality of both searches and measurements exploring the energy frontier.

Fixed-target physics in collider mode at LHCb

J/ψ cross-section

A report from the LHCb experiment

This year, the LHCb collaboration reached an important milestone in its fixed-target physics programme, publishing two key results on the production rates of particles in proton–ion collisions: measurements of the cross section of antiprotons that constrain models of cosmic rays, and of charmonium and open-charm cross sections (see further reading).

The LHCb fixed-target system, known as SMOG (System for Measuring Overlap with Gas), injects a small amount of noble gas inside the LHC beam pipe, at a pressure of the order of 10⁻⁷ mbar, within the LHCb vertex detector region (CERN Courier January/February 2016 p10). This system was initially designed to improve the determination of the luminosity via beam-profile measurements, and can produce hundreds of millions of beam–gas collisions per hour. This provides a unique opportunity to exploit the LHC proton and ion beams in a fixed-target mode, opening many physics opportunities such as a precise study of the quark–gluon plasma (QGP) in the as-yet-unexplored energy regime between existing fixed-target and collider measurements.

LHCb has just taken the first step towards the use of charmonium and open-charm hadrons as probes of the QGP by measuring their cross-sections in proton–nucleus collisions, where no QGP is expected to be formed. The data for these measurements come from two SMOG data-taking campaigns with proton beams – one carried out over a period of 18 hours in 2015 with a beam of energy 6.5 TeV and an argon gas target (meaning a centre-of-mass energy per colliding nucleon–nucleon pair, √sNN, of 110.4 GeV), and the other over a period of 87 hours in 2016 with a 4 TeV beam and a helium target (√sNN = 86.6 GeV).
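The quoted energies follow from standard fixed-target kinematics: for a beam of energy E striking a nucleon at rest, √sNN ≈ √(2 E mN) in the high-energy limit. A quick numerical check:

```python
import math

M_N = 0.938  # nucleon mass in GeV/c^2

def sqrt_s_nn(beam_energy_gev):
    """Centre-of-mass energy per nucleon-nucleon pair (GeV) for a beam
    of energy E on a nucleon at rest: sqrt(s) ~ sqrt(2 * E * m_N) when
    E is much larger than the nucleon mass."""
    return math.sqrt(2.0 * beam_energy_gev * M_N)

e65 = sqrt_s_nn(6500.0)  # 6.5 TeV beam -> ~110.4 GeV, as in the text
e40 = sqrt_s_nn(4000.0)  # 4 TeV beam  -> ~86.6 GeV
```

The square-root dependence is why multi-TeV beams in fixed-target mode probe the otherwise unexplored region between lower-energy fixed-target experiments and collider measurements.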

Thanks to the high-precision tracking and advanced particle-identification capabilities of the LHCb detector, the production rates of J/ψ and D⁰ mesons were measured with very good precision (see figure). Taking advantage of the forward geometry of the detector and the boost induced by the multi-TeV proton beam, the detector also measures very backward particles in the centre-of-mass frame of the collision, giving access to the large Bjorken-x region in the target nucleon. In this kinematic region, no significant contribution from an intrinsic cc̅ component within the nucleon structure was observed.

Building on the success of these analyses of the 2015 and 2016 data, LHCb plans to carry out studies of charmonium suppression with the large sample of proton–neon collisions collected in 2017, and with samples of lead–neon collisions that will be taken in the upcoming LHC heavy-ion run in November 2018.

Observation of Higgs-boson decay to bottom quarks

CMS candidate event

A report from the CMS experiment

The observation of the Higgs-boson decay to a bottom quark–antiquark pair (bb̅) by the CMS experiment is a seminal achievement that sheds light on one of the key missing pieces of the Higgs sector of the Standard Model (SM).

Processes that include the Higgs boson’s favoured decay mode to b-quarks (with about 58% probability) have until now remained elusive because of the overwhelming background of b-quark events produced via strong interactions. While the recent CMS observation of Higgs-boson production in association with top quarks (ttH) constitutes the first confirmation of the tree-level coupling of the Higgs boson to quarks (CERN Courier June 2018 p10), the Higgs-boson decay to bb̅ tests directly its coupling to down-type quarks. Moreover, this decay is crucial for constraining, under fairly general assumptions, the overall Higgs-boson decay width and thus reducing the uncertainty on the measurement of absolute couplings. This observation effectively narrows down the remaining window available for exotic or undetected decays.

At the LHC, the most effective strategy to observe the Higgs bb̅ decay is to exploit the associated production mechanism with an electroweak vector boson VH, where V corresponds to a W or Z boson. The leptons and neutrinos arising from the V decay provide large suppression of the multijet background, and further background reduction is achieved by requiring the Higgs-boson candidates to have large transverse momentum.

Advanced machine-learning techniques (deep neural networks, DNN) are used in different steps of the analysis including: the b-jet identification, the measurement of the b-jet energy, the classification of different backgrounds in control regions, and the final signal extraction.

This result uses LHC data collected in 2016 and 2017 at an energy of 13 TeV and has benefited from the recent CMS pixel-tracker upgrade, which further improved the b-quark identification performance.

A signal region enriched in VH events is selected together with several dedicated control regions to monitor the different background processes. Then, a simultaneous binned-likelihood fit of the signal and control regions is performed to extract the Higgs-boson signal.

The score of the DNN separating signal from background is used in the final signal-extraction fit. Several observables are combined; the most discriminating are the angular separation between the two b-quarks and the b-tagging properties of the Higgs-candidate jets. A candidate event for the production of a Z boson in association with a Higgs boson is shown in the left figure.

A clear excess of events is observed in the combined 2016 and 2017 data, in comparison with the expectation in the absence of an H → bb̅ signal. The significance of this excess is 4.4σ, where the expectation from SM Higgs-boson production is 4.2σ. The signal strength corresponding to this excess, in relation to the SM expectation, is 1.06 ± 0.26. When combined with the measurement from LHC Run 1 at 7 and 8 TeV, the signal significance increases to 4.8σ, while 4.9σ is expected. The corresponding signal strength is 1.01 ± 0.22.
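To get a feel for how such combinations behave, an inverse-variance-weighted average can be used as a rough stand-in for the full profile-likelihood combination CMS actually performs. The Run-1-like input below (0.9 ± 0.4) is a hypothetical number chosen purely for illustration:

```python
def combine(measurements):
    """Inverse-variance-weighted average of (value, uncertainty) pairs --
    a simplified stand-in for a full profile-likelihood combination."""
    weights = [1.0 / (sigma * sigma) for _, sigma in measurements]
    value = sum(w * v for (v, _), w in zip(measurements, weights)) / sum(weights)
    combined_sigma = (1.0 / sum(weights)) ** 0.5
    return value, combined_sigma

# Run-2 value quoted in the text (1.06 +/- 0.26) combined with a
# hypothetical Run-1-like measurement (0.9 +/- 0.4)
mu, err = combine([(1.06, 0.26), (0.9, 0.4)])
```

The more precise input dominates the average, and the combined uncertainty shrinks below either individual one, which is why adding the Run-1 data tightens the result even though that dataset is less sensitive.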

The dijet invariant mass distribution (figure, right) allows for a more direct visualisation of the Higgs-boson signal. The contributions of the VH and VZ processes are separately visible, after all other background processes have been subtracted.

The VH results are combined with CMS measurements in other production processes, including gluon-fusion, vector-boson fusion, and associated production with top quarks, with data collected at 7, 8 and 13 TeV, depending on the process. The observed combined significance is raised to 5.6σ, where the expectation from SM Higgs-boson production is 5.5σ. The signal strength corresponding to this excess, relative to the SM expectation, is 1.04 ± 0.20, in perfect agreement with the latter.

With the direct observation of the Higgs-boson couplings to bottom quarks complementing those involving tau leptons and top quarks (see further reading), the Yukawa couplings to all accessible third-generation fermions have now been firmly established. This opens a new era of precision studies in the Higgs sector that will fully benefit from the larger dataset that will be available by the end of Run 2.

Solving the mystery of a historic stellar blast

Eta Carinae

Some 180 years ago, a relatively normal star called Eta Carinae suddenly brightened to become the second brightest star in the sky, before almost disappearing at the end of the 19th century. The sudden brightening and subsequent disappearance, recorded by astronomer John Herschel, suggested that the star had undergone a supernova explosion, leaving behind a black hole. More recent observations have shown, however, that the star still exists – ruling out the supernova hypothesis. Even more remarkably, what remains is a binary system of two stars, the more massive of which is surrounded by a large nebula.

Although supernova impostors such as Eta Carinae are now known to occur in other galaxies, this event – known as the Great Eruption – appeared relatively close to Earth at a distance of around 7500 light-years. It is therefore a perfect laboratory in which to study what exactly happens when stars appear to survive a supernova.

The fate of Eta Carinae has remained mysterious, but since the turn of the millennium clues have emerged in echoes of the light emitted during the Great Eruption. While the light observed in the 19th century travelled directly from the system towards Earth, other light initially travelled towards distant clouds surrounding the stars before being reflected in our direction. In 2003, the light echoes from this event were bright enough to be observed using the moderate-sized telescopes at the Cerro Tololo Inter-American Observatory in Chile, while the different gas clouds reflecting the light were observed more recently using the larger scale Magellan Observatory and the Gemini South Observatory, also located in Chile. By comparing historical records of the variability observed in the 19th century with the variability of the light reflected from a gas cloud, it can be determined how far in the past astronomers are observing the explosion.
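The timing argument follows from simple geometry: light scattered toward Earth by a cloud at distance r from the star travels an extra path of roughly r(1 − cos θ), where θ is the angle at the star between the cloud direction and the line of sight. The sketch below uses purely hypothetical numbers for the cloud geometry:

```python
import math

def echo_delay_years(r_ly, theta_deg):
    """Extra light-travel time (years) for light scattered toward Earth by
    a cloud at distance r (light-years) from the star, with theta the angle
    between the star->cloud and star->Earth directions (Earth far away).
    With r in light-years, the delay comes out directly in years."""
    return r_ly * (1.0 - math.cos(math.radians(theta_deg)))

# Hypothetical cloud 100 light-years from the star, 120 degrees off the
# line of sight: the echo arrives 150 years after the direct light
delay = echo_delay_years(100.0, 120.0)
```

Delays of this order explain how light from an 1840s eruption can still be arriving, via reflection, more than a century and a half later.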

Now, a team led by Nathan Smith of the University of Arizona in Tucson has studied the spectra of the light echo in more detail using the 6.5 m Magellan telescopes and found that they match observations from the 1840s and 1850s, when the Great Eruption was at its peak. Spectral analysis of the reflected light indicates that matter was initially ejected at relatively low velocities of 150–200 km s⁻¹, while during the 1850s some matter was travelling at speeds of 10,000–20,000 km s⁻¹. The data are compatible with a system that first ejects material as one star brightens, followed by a more violent ejection from an explosion.

Smith and collaborators claim that the scenario which best matches the data, including information about the age and mass of the two remaining stars, is that the system originally consisted of three stars. The two closest stars initially interacted to form one massive star, while the donor star moved further away, losing mass and thereby increasing the radius of its orbit around the massive star. The gravitational field of the far-away donor star would have caused the orbiting third star to dramatically change orbit, forcing it to spiral into the massive central star. In doing so, its gravitational interactions with the massive star caused it to shed large amounts of matter as it started to burn brighter. Finally, the binary system merged, causing a violent explosion in which large amounts of stellar material were ejected at high velocities towards the earlier ejected material. As the fast ejecta smashed into the slower-moving ejecta, a bright object formed in the night sky that was visible for many years during the 1850s. The remaining binary system still lights up every few years as the old donor star moves through the nebula left over from the merger.

The new details about the evolution of this complex and relatively nearby system not only teach us more about what was observed by Herschel almost two centuries ago, but also provide valuable information about the evolution of massive stars, binary and triple systems, and the nature of supernova impostors.

Electron–ion collider on the horizon

Protons and neutrons, the building blocks of nuclear matter, constitute about 99.9% of the mass of all visible matter in the universe. In contrast to more familiar atomic and molecular matter, nuclear matter is also inherently complex because the interactions and structures in nuclear matter are inextricably mixed up: its constituent quarks are bound by gluons that also bind themselves. Consequently, the observed properties of nucleons and nuclei, such as their mass and spin, emerge from a complex, dynamical system governed by quantum chromodynamics (QCD). The quark masses, generated via the Higgs mechanism, only account for a tiny fraction of the mass of a proton, leaving fundamental questions about the role of gluons in nucleons and nuclei unanswered.

The underlying nonlinear dynamics of the gluon’s self-interaction is key to understanding QCD and fundamental features of the strong interactions such as dynamical chiral symmetry breaking and confinement. Despite the central role of gluons, and the many successes in our understanding of QCD, the properties and dynamics of gluons remain largely unexplored.

Positive evaluation

To address these outstanding puzzles in modern nuclear physics, researchers in the US have proposed a new machine called the Electron Ion Collider (EIC). In July this year, a report by the National Academies of Sciences, Engineering, and Medicine commissioned by the US Department of Energy (DOE) positively endorsed the EIC proposal. “In summary, the committee finds a compelling scientific case for such a facility. The science questions (see “EIC’s scientific goals: in brief”) that an EIC will answer are central to completing an understanding of atoms as well as being integral to the agenda of nuclear physics today. In addition, the development of an EIC would advance accelerator science and technology in nuclear science; it would also benefit other fields of accelerator-based science and society, from medicine through materials science to elementary particle physics.”

From a broader perspective, the versatile EIC will, for the first time, be able to systematically explore and map out the dynamical system that is the ordinary QCD bound state, triggering a new area of study. Just as the advent of X-ray diffraction a century ago triggered tremendous progress in visualising and understanding the atomic and molecular structure of matter, and as the introduction of large-scale terrestrial and space-based probes in the last two to three decades led to precision observational cosmology with noteworthy findings, the EIC is foreseen to play a similarly transformative role in our understanding of the rich variety of structures at the subatomic scale.

Two pre-conceptual designs for a future high-energy and high-luminosity polarised EIC have evolved in the US using existing infrastructure and facilities (figure 1). One proposes to add an electron storage ring to the existing Relativistic Heavy-Ion Collider (RHIC) complex at Brookhaven National Laboratory (BNL) to enable electron–ion collisions. The other pre-conceptual design proposes a new electron and ion collider ring at Jefferson Laboratory (JLab), utilising the 12 GeV upgraded CEBAF facility (CERN Courier March 2018 p19) as the electron injector. The requirement that the EIC has a high luminosity (approximately 10³⁴ cm⁻² s⁻¹) demands new ways to “cool” the hadrons, beyond the capabilities of current technology. A novel, coherent electron-cooling technique is under development at BNL, while JLab is focussing on the extension of conventional electron-cooling techniques to significantly higher energy and to use bunched electron beams for the first time. The luminosity, polarisation and cooling requirements are coupled to the existence and further development of high-brilliance (polarised) electron and ion sources, benefitting from the existing experience at JLab, BNL and collaborating institutions.
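To put the luminosity target in perspective, the expected interaction rate is simply the product of the instantaneous luminosity and the cross section of the process of interest. A back-of-the-envelope sketch (the 1 μb cross section is an arbitrary illustrative value, not an EIC physics number):

```python
def event_rate(luminosity_cm2_s, cross_section_cm2):
    """Interaction rate (events/s) = instantaneous luminosity x cross section."""
    return luminosity_cm2_s * cross_section_cm2

MICROBARN = 1e-30  # 1 microbarn in cm^2

# At the design luminosity of 1e34 cm^-2 s^-1, a hypothetical 1-microbarn
# process would yield of order 1e4 events per second
rate = event_rate(1e34, 1.0 * MICROBARN)
```

Rates of this order are what make precision, multi-dimensional imaging of nucleon structure statistically feasible.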

Fig. 1.

The EIC is foreseen to have at least two interaction regions and thus two large detectors. The physics-driven requirements on the EIC accelerator parameters, and the extreme demands on kinematic coverage, make it particularly challenging to integrate the main detector and the dedicated detectors along the beamline into the interaction regions so that all particles can be registered down to the smallest angles. The detectors would be fully integrated in the accelerator over a region of about 100 m, with a secondary focus to detect even those particles with angles and rigidities near the main ion beams. To quickly separate both beams into their respective beam lines while providing the space and geometry required by the physics programme, both the BNL and JLab pre-conceptual designs incorporate a large crossing angle of 20–50 mrad. This achieves a hermetic acceptance and also has the advantage of avoiding the introduction of separator dipoles in the detector vicinity that would generate huge amounts of synchrotron radiation. The detrimental effects of this crossing angle on the luminosity and beam dynamics would be compensated by a crab-crossing radio-frequency scheme, which has many synergies with the LHC high-luminosity upgrade (CERN Courier May 2018 p18).

Modern particle-detector and readout systems will be at the heart of the EIC, driven by the demand for high precision in the detection and identification of final-state particles. A multipurpose EIC detector needs excellent hadron–lepton–photon separation and characterisation, full acceptance, and to go beyond the requirements of most particle-physics detectors when it comes to identifying pions, kaons and protons. This means that different particle-identification technologies have to be integrated over a wide rapidity range in the detector to cover particle momenta from a few hundred MeV to several tens of GeV. To address the demands on detector requirements, an active detector R&D programme is ongoing, with key technology developments including large, low-mass, high-resolution tracking detectors and compact, high-resolution calorimetry and particle identification.

The path ahead

A high-energy and high-luminosity electron–ion collider capable of a versatile range of beam energies, polarisations and ion species is the only tool to precisely image the quarks and gluons, and their interactions, and to explore the new QCD frontier of strong colour fields in nuclei – to understand how matter at its most fundamental level is made. In recognition of this, in 2015 the Nuclear Science Advisory Committee (NSAC), advising the DOE, and the National Science Foundation (NSF) recommended an EIC in its long-range plan as the highest priority for new facility construction. Subsequently, a National Academy of Sciences (NAS) panel was charged to review both the scientific opportunities enabled by an EIC and the benefits to other fields of science and society, leading to the report published in July.

Fig. 2.

The NAS report strongly articulates the merit of an EIC, also citing its role in maintaining US leadership in accelerator science. This could be the basis for what is called a Critical Decision-0 or Mission Need approval for the DOE Office of Science, setting in motion the process towards formal project R&D, engineering and design, and construction. The DOE Office of Nuclear Physics is already supporting increased efforts towards the most critical generic EIC-related accelerator research and design.

But the EIC is by no means a US-only facility (figure 2). A large international physics community, comprising more than 800 members from 150 institutions in 30 countries and six continents, is now energised and working on the scientific and technical challenges of the machine. An EIC users group (www.eicug.org) was formed in late 2015 and has held meetings at the University of California at Berkeley, Argonne National Laboratory, and Trieste, Italy, with the most recent taking place at the Catholic University of America in Washington, DC in July. The EIC user group meetings in Trieste and Washington included presentations of US and international funding agency perspectives, further endorsing the strong international interest in the EIC. Such a facility would have capabilities beyond all previous electron-scattering machines in the US, Europe and Asia, and would be the most sophisticated and challenging accelerator currently proposed for construction in the US.

Empowering Africa’s youth to shape its future

AIMS South Africa graduates of 2017–2018

It was 2001 and Neil Turok, a cosmologist at the University of Cambridge at the time, was on sabbatical in his home town of Cape Town, South Africa. At dinner one evening, his father, who himself was a member of the first South African congress following the end of apartheid in 1994, posed the question: what will you do for Africa?

Two years later, Turok founded the African Institute for Mathematical Sciences (AIMS), with the mission to improve mathematics and science education throughout the African continent. The first centre, opened in 2003 in a derelict resort hotel in the small surfer town of Muizenberg, just south of Cape Town, saw 12 students graduate. Since that time, AIMS has grown to span the whole continent, with five more centres founded in Tanzania, Ghana, Senegal, Cameroon and, most recently, Rwanda. The centres have produced almost 2000 graduates and form part of a pan-African and global network of mathematicians and physicists called the Next Einstein Initiative (NEI), a number of whom work in high-energy physics experiments at CERN and elsewhere.

Africa is a continent filled with potential, rich in natural resources and with a population that is projected to comprise nearly 50% of the world population before the end of the century. But it is also a continent plagued by problems that hinder development and success, particularly in mathematics and science (figure 1). More people there die each year from AIDS and civil war than anywhere else in the world, and access to quality education at all levels is tenuous, at best. The mission of AIMS and NEI is to address these issues by empowering Africa’s brightest students and propelling them towards scientific, educational and economic self-sufficiency.

A tutorial at the South Africa centre

The AIMS curriculum

The primary way that AIMS contributes to this transformation is an intensive one-year master’s programme that runs from August to June each year. Preparations begin well in advance, starting in January when lecturers from around the world submit proposals to teach three-week courses at one of the six centres. In parallel, students from all over Africa apply and are selected through a very competitive process, with upwards of 2000 students vying for around 50 spots at each centre. The goal of both sets of applications, for lecturers as well as students, is to ensure that the very best people are brought together.

The course is highly structured. For the first 10 weeks, students attend a series of skills courses with an emphasis on problem solving and computing. The curriculum then enters a review phase, where students elect to follow two courses for every three-week block. The courses are dynamic, selected by the academic director of each centre every year and then taught by (mostly foreign) lecturers, who have complete freedom to write the course as they so choose. The beauty of such a curriculum is the diverse set of topics that can be taught side-by-side, which allows students to sample new topics – a day that begins with a course on financial mathematics could end with students writing a simulation for computational neuroscience. Three weeks later, the courses change and students can find themselves immersed in knot theory or Monte Carlo methods in particle physics.

Fig. 1.

This cadence continues for 18 weeks, during which time the students are able to build connections with academics from around the world and find the course that suits them best. This builds to the final portion of the programme, called the essay phase, in which students identify a mentor and a project from a list of proposed topics. The student then works independently for a period of 10 weeks under the supervision of the mentor, culminating in a thesis essay and an oral examination. As if this fast-paced academic programme were not enough, it is taught entirely in English, which for many students is not their first language. Adding to their workload, students also take courses in English and writing throughout.

Measurement discussion

Strong support

Unlike many institutions, success at AIMS is limited only by a student’s will to achieve. All fees are paid by AIMS, as well as the costs of relocation, accommodation and food. Each student is provided with a personal computer (for many, the first computer they have ever owned), and a team of five to 10 academic tutors is hired to support the students in their studies and to augment the lectures when necessary. This all ensures that the complete focus of the student can be on their studies and development as an academic.

The result is a nearly 100% success rate, with more than 30% of graduates being female and AIMS graduates representing 43 out of 54 African nations. These students most often go on to enter research master’s and PhD programmes in Africa and elsewhere, their university education having been in some way validated through the standards set by AIMS and the international institutions that support it. However, nearly all AIMS graduates eventually desire to return to Africa, whether it be in industry or research, thus contributing to their home nation. Some alumni even return to the school as lecturers themselves. Ultimately, the goal of AIMS and NEI is to establish 15 centres throughout Africa by 2030 and to establish a sustainable pan-African academic culture.

Class of 2015

A lecturer’s perspective

To offer a first-hand account of a typical day as a lecturer: it’s 19:00 and you have just sent the last e-mail of the day. Dusk is welcome, since it promises to relieve some of the heat. If you’re in Biriwa, Ghana, you make sure to close the window and put on some mosquito repellent. A student in your class excused himself yesterday for not quite finishing his homework: he has malaria. He is working hard anyhow and will be better soon, but you would be completely knocked out if you caught it. As you are about to close the laptop, you hear someone at the door: it is your students, waiting for their ad hoc evening tutorial. Teaching at AIMS is a full-day immersion. Finding students discussing your lecture, assignments or books that you showed them is not uncommon, even after midnight.

Your average AIMS student is inquisitive, hard-working and passionate, and the vastly different academic backgrounds of students in your class will force you to answer questions from very basic to very advanced levels. One day, a student might be “angry” because you told them that morning that light is both a wave and a particle. After the first days of shyness (many students have never been encouraged to state their own opinion on a scientific matter), they’ll question what you say: clearly it is not possible that a thing is a wave but also a particle, is it?!

AIMS final party

Don’t expect to spend your evening not doing physics unless you really need a break, in which case take a walk on the beach or, if you’re at the Muizenberg centre in South Africa, grab a surfboard. For those of you familiar with CERN, the parallel that might best explain the AIMS atmosphere is the “Bermuda triangle” of Restaurant 1, the hostel and your office: you can manage to spend weeks there before breaking out and spending some time exploring Geneva, the Jura and the world around you. AIMS students are ambitious and grateful for the opportunity to work and learn, so they can easily spend their days between the lecture room, the canteen and the computer lab without leaving the building once. As an AIMS lecturer, it is thus good to come prepared with a few extra-curricular activities. These could range from showing students how to swim in the shallow waters of the Indian Ocean in Bagamoyo, to taking them on an all-day hike up Table Mountain in Cape Town, to giving an extra tutorial on how to write a good application or give a talk, not to mention a discussion about how to shape Africa’s future. Topics such as how a woman can be a president in some countries (or a physics lecturer, for that matter!) are sure to attract the attention of all students, even those not directly in your class.

The last few days of your three-week lecture block are the most special. Students give presentations on topics that go beyond what your lecture contains, having spent every free minute preparing. Building confidence in the students’ minds is your most important mission at AIMS, and the students have every reason to be confident: most of them had to fight for a good education that is taken for granted in many countries, and they all want to make an impact in building Africa’s future. After the students’ talks, the ceremony and party start. Lecturers are bid farewell, and you may well be handed a traditional African costume so that you are dressed properly for the party. Then, with some exceptionally gifted dancers taking the lead, you’ll not be let go before at least attempting to move gracefully to the latest African pop music, all without a single drop of alcohol in sight.

Back at your workplace, AIMS stays with you. Many students will keep you updated on their careers, seek a reference letter from you, or eventually join you as a researcher.

Becoming involved with AIMS is for anyone who is interested in working with some of the best students in the world, most of whom have had to fight hard to get there. There are a variety of options. For those with master’s degrees in mathematics and science, it is possible to serve as an academic tutor at an AIMS institute for a period of one year, during which time you will work closely with students as a mentor and act as a bridge between the shorter-term lecturers. For those with PhD degrees, it is possible to act as either an essay supervisor or a lecturer. In both instances, the topic of instruction is designed by you, giving you the control and flexibility to tailor the course to your interests and expertise. In whatever capacity you decide to become involved, it is an opportunity you will not regret.

Defeating the background in the search for dark matter

Inspecting photomultiplier tubes

Compelling cosmological and astrophysical evidence for the existence of dark matter suggests that there is a new world beyond the Standard Model of particle physics still to be discovered and explored. Yet, despite decades of effort, direct searches for dark matter at particle accelerators and underground laboratories alike have so far come up empty-handed. This calls for new and improved methods to spot the mysterious substance thought to make up most of the matter in the universe.

Dark-matter searches using detectors based on liquefied noble gases such as xenon and argon have long demonstrated great discovery potential and continue to play a major role in the field. Such experiments use a large volume of material in which nuclei struck by a dark-matter particle would create a tiny burst of scintillation light, and the very low expected event rate requires that backgrounds are kept to a minimum. Searches employing argon detectors have a particular advantage because they can significantly reduce events from background sources such as the abundant radioactive decays of detector materials and electron scattering by solar neutrinos. This leaves the low-rate nuclear recoils induced by coherent scattering of atmospheric neutrinos as the sole residual background – the so-called “neutrino floor”.

Enter the Global Argon Dark Matter Collaboration (GADMC), which was formed in September 2017. Comprising more than 300 scientists from 15 countries and 60 institutions involved in four first-generation dark-matter experiments – ArDM at Laboratorio Subterráneo de Canfranc in Spain, DarkSide-50 at INFN’s Laboratori Nazionali del Gran Sasso (LNGS) in Italy, DEAP-3600 and MiniCLEAN at SNOLAB in Canada – GADMC is working towards the immediate deployment of a dark-matter detector called DarkSide-20k. The experiment would accumulate an exposure of 100 tonne × year and be followed by a much larger detector to collect more than 1000 tonne × year, both potentially with no instrumental background. These experiments promise the most complete exploration of the mass/parameter range of the present dark-matter paradigm.

Direct detection with liquid argon

One well-motivated form of dark matter that matches astronomical measurements is the weakly interacting massive particle (WIMP), which would populate our galaxy with a well-defined number density and velocity distribution. In a dark-matter experiment employing a liquid-argon detector, such particles would collide with argon nuclei, causing them to recoil. These nuclear recoils produce ionised and excited argon atoms which, after a series of reactions, form short-lived argon dimers (weakly bound molecules) that decay and emit scintillation light. The time profile of this scintillation light is significantly different from that of the electron-recoil events associated with radioactivity in the detector material, and has been shown to enable strong rejection of background sources through a technique known as pulse-shape discrimination.
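The idea behind pulse-shape discrimination can be sketched numerically. A common discriminant is the fraction of detected light arriving in a short prompt window, since recoiling nuclei populate mostly the fast decay component of the argon dimer while electron recoils populate mostly the slow one. The photon arrival times and the 90 ns window below are illustrative toy numbers, not values from any of the experiments discussed here:

```python
# Toy sketch of pulse-shape discrimination in liquid argon.
# All numbers are illustrative assumptions, chosen only to show the principle.

def prompt_fraction(photon_times_ns, window_ns=90.0):
    """Fraction of detected photons arriving within window_ns of the pulse start."""
    if not photon_times_ns:
        return 0.0
    t0 = min(photon_times_ns)
    prompt = sum(1 for t in photon_times_ns if t - t0 <= window_ns)
    return prompt / len(photon_times_ns)

# A recoil-like pulse (mostly prompt light) vs an electron-like pulse (mostly late light):
nuclear_like = [0, 2, 5, 7, 10, 300, 900]          # ns
electron_like = [0, 3, 200, 600, 900, 1400, 2000]  # ns
print(prompt_fraction(nuclear_like))   # high prompt fraction -> recoil-like
print(prompt_fraction(electron_like))  # low prompt fraction -> background-like
```

Cutting on such a prompt fraction, event by event, is what lets single-phase argon detectors reject electron-recoil backgrounds at very high levels.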

Fig. 1.

Located at LNGS, DarkSide-50 is the first physics detector of the DarkSide programme for dark-matter detection, with a fiducial mass of 50 kg. The experiment produced its first WIMP search results in December 2014 using argon harvested from the atmosphere and, in October the following year, reported the first ever WIMP search results using lower-radioactivity underground argon.

DarkSide-50 uses a detection scheme based on a dual-phase time projection chamber (TPC), which contains a small region of gaseous argon above a larger region of liquid argon (figure 1, left). In this configuration, secondary scintillation light, generated by ionisation electrons that drift up through the liquid region and are accelerated into the gaseous one, is used together with the primary scintillation light to look for a signal. Compared to single-phase detectors using only the pulse-shape discrimination technique, this search method requires even greater care in restricting the radioactive background through detector design and fabrication, but provides excellent position resolution. For low-mass (<10 GeV/c²) WIMPs, the primary scintillation light is nearly absent, but the detectors remain sensitive to dark matter through the observation of the secondary scintillation light.

Fig. 2.

Argon-based dark-matter searches have had a number of successes in the past two years (figure 2). DarkSide-50 established the availability of an underground source of argon strongly depleted in the radioactive isotope 39Ar, while DEAP-3600 (figure 3), the largest running single-phase liquid-argon experiment (3.3 tonnes), demonstrated pulse-shape discrimination of scintillation light at better than 1 part in 10⁹, the best value to date. In terms of measurements, DarkSide-50 released results from a 500-day detector exposure completely free of instrumental background and set the best exclusion limit yet for interactions of WIMPs with masses between 1.8 and 6 GeV/c². Results similar to those from DarkSide-50 for the mass region above 40 GeV/c² were reported in the first paper from DEAP-3600, and results from a one-year exposure of DEAP-3600 with a fiducial mass of about 1000 kg are expected to be released in the near future.

High-sensitivity searches for WIMPs using noble-gas dual-phase TPC detectors are complementary to those conducted at the Large Hadron Collider (LHC), which at the current LHC energy of 13 TeV are limited to masses of a few TeV/c²; direct searches can reach masses of 100 TeV/c² and beyond with very good sensitivity.

Leading limits

The best limits to date on high-mass WIMPs have been provided by xenon-based dual-phase TPCs, the leading result coming from the recently released XENON1T exposure of 1 tonne × year (figure 2). In spite of a small residual background, the collaboration was able to exclude WIMP–nucleon spin-independent elastic-scattering cross sections above 4.1 × 10⁻⁴⁷ cm² at 30 GeV/c² at 90% confidence level (CERN Courier July/August 2018 p9). Larger xenon detectors (XENONnT and DARWIN) are also planned by the same collaboration (CERN Courier March 2017 p35).

Fig. 3.

The next generation of xenon and argon detectors has the potential to extend the present sensitivity by about a factor of 10. But a further factor of 10 remains before one reaches the neutrino floor – the ultimate level at which interactions of solar and atmospheric neutrinos with the detector material become the limiting background. This is where the GADMC liquid-argon detectors, designed with pulse-shape discrimination capable of eliminating the background from electron scatters of solar neutrinos and from internal radioactive decays, can provide an advantage.

GADMC envisages a two-step programme to explore high-mass dark matter. The first step, DarkSide-20k, has been approved for construction at LNGS by Italy’s National Institute for Nuclear Physics (INFN) and by the US National Science Foundation, with present and potentially future funding from Canada. Also recognised as a CERN experiment (RE-37), DarkSide-20k is designed to collect an exposure of 100 tonne × year over a period of five years (possibly extended to 200 tonne × year over 10 years), completely free of any instrumental background. The start of data taking is foreseen for 2022–2023. The second step of the programme will involve building an argon detector able to collect an exposure of more than 1000 tonne × year. SNOLAB in Canada is a strong candidate to host this second-stage experiment.

Argon can deliver the ultimate background-free search for dark matter, but this comes at the cost of extensive technological development. First and foremost, researchers need to extract and distil large volumes of the gas from underground deposits, as argon in the Earth’s atmosphere is unsuitable owing to its high content of the radioactive isotope 39Ar. Second, the scintillation light has to be detected efficiently, requiring innovative photodetector R&D.

Sourcing pure argon

Focusing on the first need: atmospheric argon has a radioactivity of about 1 Bq/kg, caused almost entirely by 39Ar, which is produced by cosmic-ray activation of 40Ar. Given that the drift time of ionisation electrons over a length of 1 m is about 1 ms, a dual-phase TPC detector reaches a complete pile-up condition (i.e. when the event rate exceeds the detector’s ability to read out the information) at a mass of about 1 tonne. Scintillation-only detectors do not fare much better: given a scintillation lifetime of about 10 μs, they are limited to fiducial masses of a few tonnes. The argon road to dark matter has thus required an early focus on solving the problem of procuring large batches of argon that are much more depleted in 39Ar than atmospheric argon. The solution came through an unlikely path: the discovery that underground sources of CO2 originating from Earth’s mantle carry sizable quantities of noble gases, in reservoirs where secondary production of 39Ar is significantly suppressed.
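The pile-up limit quoted above follows from a one-line estimate. A minimal sketch, assuming only the round numbers given in the text (1 Bq/kg of 39Ar activity and a 1 ms drift time over 1 m):

```python
# Back-of-envelope pile-up estimate for a dual-phase TPC filled with
# atmospheric argon. Round numbers from the text; purely illustrative.

ACTIVITY_PER_KG = 1.0  # Bq/kg, 39Ar activity of atmospheric argon
DRIFT_TIME_S = 1e-3    # s, ionisation-electron drift time over ~1 m

def occupancy(mass_kg):
    """Fraction of time the TPC is occupied by a drifting event."""
    event_rate = ACTIVITY_PER_KG * mass_kg  # decays per second
    return event_rate * DRIFT_TIME_S

# Occupancy reaches ~1 (complete pile-up) at about one tonne:
print(occupancy(1000))  # -> 1.0
print(occupancy(50))    # a DarkSide-50-sized target is comfortably below pile-up
```

The same estimate with a 10 μs scintillation lifetime in place of the drift time gives the few-tonne limit quoted for scintillation-only detectors.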

As part of a project called Urania, funded by INFN, GADMC will soon deploy a plant that is able to extract underground argon at a rate of 250 kg per day from the same site in Colorado, US, where argon for DarkSide-50 was extracted. Argon from this underground source is more depleted in 39Ar than atmospheric argon by a factor of at least 1400, making detectors of hundreds of tonnes possible for high-mass WIMP searches.

Not content with this gift of nature, GADMC is also developing, through a project called ARIA – funded by INFN, by the Italian Ministry of University and Research (MIUR) and by the local government of the Sardinia region – an innovative plant to actively increase the depletion in 39Ar. The plant will consist of a 350 m-tall cryogenic-distillation tower called Seruci-I, under construction in the Monte Sinni coal mine in Sardinia operated by the Carbosulcis mining company. Seruci-I will study the active depletion of 39Ar by cryogenic distillation, which exploits the tiny dependence of the vapour pressure on the isotopic mass. Seruci-I is expected to reach a production capacity of 10 kg of argon per day with a 39Ar depletion factor of 10 per pass. This is more than sufficient to deliver – starting from the gas extracted from the Urania underground source – a one-tonne ultra-depleted-argon target that could enable a leading programme of searches for low-mass dark matter. Seruci-I is also expected to perform strong chemical purification at a rate of several tonnes per day, and will be used for the final stage of purification of the 50-tonne underground-argon batch for DarkSide-20k, as well as for GADMC’s final detector.
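To see how the source depletion and the per-pass distillation factor combine, consider a small illustrative calculation. The factor of 1400 and the factor of 10 per pass are the figures quoted above; the target depletion chosen in the example is hypothetical:

```python
# How many Seruci-I distillation passes to reach a given total 39Ar depletion?
# Starting point: underground argon, already >= 1400x depleted vs atmospheric.
# Each pass is assumed to add another factor of 10 (figure from the text).

SOURCE_DEPLETION = 1400
PER_PASS = 10

def passes_needed(target_depletion):
    """Distillation passes required to reach target_depletion (vs atmospheric)."""
    passes, depletion = 0, SOURCE_DEPLETION
    while depletion < target_depletion:
        depletion *= PER_PASS
        passes += 1
    return passes

print(passes_needed(1400))   # -> 0, underground source alone already suffices
print(passes_needed(1.4e6))  # -> 3, three passes buy another factor of 1000
```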

Fig. 4.

CERN plays an important role in DarkSide-20k by carrying out vacuum tests of the 30 modules for the Seruci-I column (figure 4) and by hosting the construction of the cryogenics for DarkSide-20k. At the time of its approval in 2017, DarkSide-20k was set to be deployed within a very efficient system of neutron and cosmic-ray rejection, based on that used for DarkSide-50 and featuring a large organic liquid scintillator detector hosted within a tank of ultrapure deionised water. But with the deployment of new organic scintillator detectors now discouraged at LNGS due to tightening environmental regulations, GADMC is completing the design of a large, and more environmentally friendly, liquid-argon detector for neutron and cosmic-ray rejection based on the cryostat technology developed at CERN to support prototype detector modules for the future Deep Underground Neutrino Experiment (DUNE) in the US.

Turning now to the second need of a background-free search for dark matter – the efficient detection of the scintillation light – researchers are focusing on perfecting existing technology to make low-radioactivity silicon photomultipliers (SiPMs) and using them to build large-area photosensors capable of replacing the traditional 3-inch cryogenic photomultipliers. Plans for DarkSide-20k settled on the use of so-called NUV-HD-TripleDose SiPMs, designed by Fondazione Bruno Kessler of Trento, Italy, and produced by LFoundry of Avezzano, also in Italy. In the meantime, researchers at LNGS and other institutions succeeded in overcoming the challenge posed by the huge capacitance per unit area (50 pF/mm²), building photosensors that have an area of 25 cm² and deliver a signal-to-noise ratio of 15 or larger. A new INFN facility, the Nuova Officina Assergi, was designed to enable the high-throughput production of SiPMs to make such photosensors for DarkSide-20k and future detectors, and is now under construction.

GADMC’s programme is complemented by a world-class effort to calibrate noble-liquid detectors for the low-energy nuclear recoils created by low-mass dark matter. On the heels of the SCENE programme that took place at the University of Notre Dame Tandem accelerator in 2013–2015, an R&D programme developed at the University of Naples Federico II, and now installed at the INFN Laboratori Nazionali del Sud, plans to improve the characterisation of the argon response to nuclear recoils. Of special interest are the extension of measurements down to 1 keV, in support of searches for low-mass dark matter, and the verification of a possible dependence of the nuclear-recoil signals on the direction of the initial recoil momentum relative to the drift electric field, which would enable measurements below the neutrino floor. Directionality in argon has already been established for alpha particles, protons and deuterons, and its presence for nuclear recoils was hinted at by the latest results of the SCENE experiment.

Although only recently established, GADMC is enthusiastically pursuing this long-term, staged approach to dark-matter detection in a background-free mode, which has great discovery potential extending all the way to the neutrino floor and perhaps beyond.

Francis Farley 1920–2018

Francis Farley

Francis Farley, who played a pivotal role in experiments to measure the anomalous magnetic moment of the muon, passed away on 16 July at his home in the south of France at the age of 97.

The son of a British Army engineering officer, Francis was born in India and educated in England. Before he could complete his education, he transferred to military research and worked on radar, developing his knowledge of electronics and demonstrating his abilities in innovation. Following a secondment to Chalk River Laboratories in Ontario, Canada, he resumed his formal education with a PhD in 1950 from the University of Cambridge, before starting his academic career at Auckland University in New Zealand. During his time at Auckland, he studied cosmic rays; represented New Zealand at a United Nations conference on atomic energy for peaceful purposes; measured neutron yields from plutonium fission (whilst on secondment to Harwell, UK); and wrote his first book Elements of Pulse Circuits.

In 1957 Francis joined CERN, where he started his long and remarkable journey on experiments to measure the anomalous magnetic moment of the muon (muon g-2). This endeavour would span nearly five decades and four major experiments, three at CERN and one at Brookhaven National Laboratory (BNL) in the US. The initial result from the first experiment had an accuracy of just 2%, whereas the final result from the last experiment reached 0.5 parts per million. Each experiment was at the time seen as a tour de force, and the measurement added an important restraint on the imaginations of theorists. It was also striking that each new measurement was within the error limits of the previous ones.

Many other people, including various highly renowned physicists, contributed to this long effort, but Francis is the sole common author, having made seminal contributions to all of the experiments. The first experiment was performed on the initiative of Leon Lederman, a CERN visitor at the time, at CERN’s first accelerator, the 600 MeV Synchrocyclotron. The other members of the noteworthy team on this experiment were Georges Charpak, Richard Garwin, Theo Muller, Hans Sens and Antonino Zichichi. By the time of the second experiment, CERN’s Proton Synchrotron was operating, and the second and third experiments were performed there, taking advantage of the higher-energy muons that the accelerator provided. Francis alone continued on to these experiments, but among others joining them was Emilio Picasso. Later, Francis, again alone, continued as a member of the most recently completed g-2 experiment at BNL. In the spirit of always looking for major improvements, it is noteworthy that in his review paper “The 47 years of muon g-2”, written with Yannis Semertzidis, a totally new structure for a muon storage ring is suggested, should greater accuracy be justified for a future experiment.

The first experiment showed that the muon was a “heavy electron”, the second validated electron loops in the photon propagator, and the third showed the contribution from virtual hadron loops. Each measurement has spurred theoretical physicists to include more and more effects in their calculations of the muon magnetic moment: higher-order corrections in quantum electrodynamics, first-order and then higher-order hadronic and electroweak contributions. These advances in the theoretical prediction in turn justified the next generation of experiment, to give an even more stringent test of theory. The muon storage rings also allowed tests of relativistic time dilation, with the third experiment achieving an accuracy of 0.1% for a “muon clock” moving at a speed of 0.9994c and the most accurate test of the “twin paradox”.

During the 1970s, when he was again based in the UK and Dean of The Royal Military College of Science, Francis also started to do research in wave energy. This work continued through his retirement, in parallel to the work on g-2. In this area too, he established a formidable reputation, with many papers written and patents produced over a period of 40 years. Indeed, his most recent paper on wave energy was published just a few days after his death.

Early in his retirement, he designed the beam-transport system for a proton-therapy facility at a cancer hospital, which was still being used more than 20 years later. He also published a special-relativistic, single-parameter analysis of data on redshifts of type Ia supernovae that showed no evidence for acceleration or deceleration effects. More recently still, he worked on other tests of relativity based on analysis of data from the muon g-2 experiments.

He received many honours, including election as a Fellow of the Royal Society and the Hughes Medal for his work at CERN on g-2.

Outside of work, Francis had a passion for flying gliders, was a keen skier and windsurfer, a regular swimmer, and liked large American cars. All of this befitted a hardworking but somewhat playboy image that, years later, formed much of the basis of his novel Catalysed Fusion.

Francis was a wonderful source of new ideas and insights, with a prodigious output. He was always enthusiastic, and he could be charming but forceful, and a stickler for precision.

He will be much missed.

Preserving European unity in physics

The European Physical Society

The year 1968 marked a turning point in the history of post-war Europe that remains engraved in our collective memory. Global politics were marked by massive student unrest, the Cold War and East–West confrontation. On 21 August the Soviet Union and other Warsaw Pact states invaded Czechoslovakia to crush the movement of liberalisation, democratisation and civil rights, which had become known as the Prague Spring.

Against this background, it seems a miracle that the European Physical Society (EPS) was established only a few weeks later, on 26 September, with representatives of the Czechoslovak Physical Society and the USSR Academy of Sciences sitting at the same table. The EPS was probably the first learned society in Europe involving physicists from both sides of the Iron Curtain. Ever since, building scientific bridges across political divides has been core to the society’s mission.

It was no accident that the EPS was founded in Geneva. Although CERN did not play a formal role, the CERN model of European cooperation had a substantial impact on the genesis of the new society. CERN was at that time principally an organisation of Western European states, but it had started early to develop scientific collaboration with the Soviet Union and other Eastern countries, notably through the Joint Institute for Nuclear Research in Dubna. Leading CERN physicists – including Director-General Bernard Gregory – were instrumental in setting up the new society; Gilberto Bernardini, who had been CERN’s first director of research in 1960–1961 and was a strong advocate of international collaboration in science, became the first EPS president. From the 20 national physical societies and similar organisations that participated in the 1968 foundation, membership has now grown to 42 such societies, covering almost all of Europe plus Israel and representing more than 130,000 members. In addition, there are about 42 associate members – mostly major research institutions, including CERN – and, last but not least, around 3500 individual members.

Today, the EPS serves the European physics community in a twofold way: by promoting collaboration across borders and disciplines, through activities such as conferences, publications and prizes; and by reaching out to political decision makers, media and the public to promote awareness of the importance of physics education and research.

Rüdiger Voss

The Iron Curtain is history, but the EPS celebrates its 50th anniversary at a time when new, more complex and subtle political divides are opening up in Europe: the UK’s departure from the European Union (EU) is only the most prominent example. While respecting the result of democratic votes, a continued erosion of European unity will undermine fundamental values and best practices that many of us take for granted: free cross-border collaboration, unrestricted mobility of researchers and students, and access to European funding and infrastructures. For almost 30 years now, in Europe, we have taken such freedoms in science as self-evident. Today, prestigious universities in the heart of Europe are threatened with closure on political grounds, while in other countries physicists are jailed for claiming the right to freely exercise their academic profession. These concerns are not unique to physics and must be addressed by the scientific community at large. The EPS, representing a science with a long tradition and highly developed culture of international collaboration, has a special responsibility to uphold these values.

Against this challenging background, the EPS is undertaking efforts to make the voice of the physics community more clearly heard in European science-policy making, principally through a point of presence in Brussels to facilitate communication with the European Commission and with partner organisations defending similar interests. In an environment where funding opportunities are increasingly organised around societal rather than scientific challenges, the EPS must advocate a healthy and sustained balance between basic and applied research. The next European Framework Programme Horizon Europe must not only provide fair access to funds, research opportunities and infrastructure for researchers from EU countries, but should remain equally open to participation from third countries, following the example of the successful association of countries like Norway and Switzerland with Horizon 2020. Building scientific bridges across political divides remains as vital as ever, in the best interest of a strong cohesion of the European physics community.
