
Electron–ion collider on the horizon

Protons and neutrons, the building blocks of nuclear matter, constitute about 99.9% of the mass of all visible matter in the universe. In contrast to more familiar atomic and molecular matter, nuclear matter is inherently complex because its interactions and structures are inextricably intertwined: its constituent quarks are bound by gluons that also bind themselves. Consequently, the observed properties of nucleons and nuclei, such as their mass and spin, emerge from a complex, dynamical system governed by quantum chromodynamics (QCD). The quark masses, generated via the Higgs mechanism, account for only a tiny fraction of the mass of a proton, leaving fundamental questions about the role of gluons in nucleons and nuclei unanswered.

The underlying nonlinear dynamics of the gluon’s self-interaction is key to understanding QCD and fundamental features of the strong interactions such as dynamical chiral symmetry breaking and confinement. Despite the central role of gluons, and the many successes in our understanding of QCD, the properties and dynamics of gluons remain largely unexplored.

Positive evaluation

To address these outstanding puzzles in modern nuclear physics, researchers in the US have proposed a new machine called the Electron Ion Collider (EIC). In July this year, a report by the National Academies of Sciences, Engineering, and Medicine, commissioned by the US Department of Energy (DOE), positively endorsed the EIC proposal: “In summary, the committee finds a compelling scientific case for such a facility. The science questions (see “EIC’s scientific goals: in brief”) that an EIC will answer are central to completing an understanding of atoms as well as being integral to the agenda of nuclear physics today. In addition, the development of an EIC would advance accelerator science and technology in nuclear science; it would also benefit other fields of accelerator-based science and society, from medicine through materials science to elementary particle physics.”

From a broader perspective, the versatile EIC will, for the first time, be able to systematically explore and map out the dynamical system that is the ordinary QCD bound state, triggering a new area of study. Just as the advent of X-ray diffraction a century ago triggered tremendous progress in visualising and understanding the atomic and molecular structure of matter, and as the introduction of large-scale terrestrial and space-based probes in the last two to three decades led to precision observational cosmology with noteworthy findings, the EIC is foreseen to play a similarly transformative role in our understanding of the rich variety of structures at the subatomic scale.

Two pre-conceptual designs for a future high-energy and high-luminosity polarised EIC have evolved in the US using existing infrastructure and facilities (figure 1). One proposes to add an electron storage ring to the existing Relativistic Heavy-Ion Collider (RHIC) complex at Brookhaven National Laboratory (BNL) to enable electron–ion collisions. The other pre-conceptual design proposes a new electron and ion collider ring at Jefferson Laboratory (JLab), utilising the 12 GeV upgraded CEBAF facility (CERN Courier March 2018 p19) as the electron injector. The requirement that the EIC has a high luminosity (approximately 10³⁴ cm⁻² s⁻¹) demands new ways to “cool” the hadrons, beyond the capabilities of current technology. A novel, coherent electron-cooling technique is under development at BNL, while JLab is focussing on extending conventional electron cooling to significantly higher energy and on using bunched electron beams for the first time. The luminosity, polarisation and cooling requirements are coupled to the existence and further development of high-brilliance (polarised) electron and ion sources, benefitting from the existing experience at JLab, BNL and collaborating institutions.
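
To put that luminosity figure in perspective, here is a back-of-envelope conversion in Python: integrated over a canonical 10⁷ seconds of running per year (a common rule-of-thumb assumption, not an EIC specification), 10³⁴ cm⁻² s⁻¹ corresponds to roughly 100 fb⁻¹ annually.

    # Integrated luminosity implied by the EIC design value.
    inst_lumi_cm2_s = 1e34          # design luminosity from the text
    seconds_per_year = 1e7          # rule-of-thumb accelerator year (assumption)
    CM2_PER_INVERSE_FB = 1e39       # 1 fb^-1 = 10^39 cm^-2

    integrated_fb = inst_lumi_cm2_s * seconds_per_year / CM2_PER_INVERSE_FB
    print(integrated_fb)            # ~100 fb^-1 per year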

Fig. 1.

The EIC is foreseen to have at least two interaction regions and thus two large detectors. The physics-driven requirements on the EIC accelerator parameters, and the extreme demands on kinematic coverage, make it particularly challenging to integrate the main detector, and dedicated detectors along the beamline, into the interaction regions so that all particles are registered down to the smallest angles. The detectors would be fully integrated in the accelerator over a region of about 100 m, with a secondary focus to detect even particles with angles and rigidities close to those of the main ion beams. To quickly separate the two beams into their respective beam lines while providing the space and geometry required by the physics programme, both the BNL and JLab pre-conceptual designs incorporate a large crossing angle of 20–50 mrad. This achieves a hermetic acceptance and also avoids introducing separator dipoles in the detector vicinity, which would generate huge amounts of synchrotron radiation. The detrimental effects of this crossing angle on the luminosity and beam dynamics would be compensated by a crab-crossing radio-frequency scheme, which has many synergies with the LHC high-luminosity upgrade (CERN Courier May 2018 p18).

Modern particle-detector and readout systems will be at the heart of the EIC, driven by the demand for high-precision detection and identification of final-state particles. A multipurpose EIC detector needs excellent hadron–lepton–photon separation and characterisation, full acceptance, and must go beyond the requirements of most particle-physics detectors when it comes to identifying pions, kaons and protons. This means that different particle-identification technologies have to be integrated over a wide rapidity range in the detector to cover particle momenta from a few hundred MeV to several tens of GeV. To meet these demands, an active detector R&D programme is ongoing, with key technology developments including large, low-mass, high-resolution tracking detectors and compact, high-resolution calorimetry and particle identification.

The path ahead

A high-energy and high-luminosity electron–ion collider capable of a versatile range of beam energies, polarisations and ion species is the only tool to precisely image the quarks and gluons, and their interactions, and to explore the new QCD frontier of strong colour fields in nuclei – to understand how matter at its most fundamental level is made. In recognition of this, in 2015 the Nuclear Science Advisory Committee (NSAC), advising the DOE, and the National Science Foundation (NSF) recommended an EIC in its long-range plan as the highest priority for new facility construction. Subsequently, a National Academy of Sciences (NAS) panel was charged to review both the scientific opportunities enabled by an EIC and the benefits to other fields of science and society, leading to the report published in July.

Fig. 2.

The NAS report strongly articulates the merit of an EIC, also citing its role in maintaining US leadership in accelerator science. This could be the basis for what is called a Critical Decision-0 or Mission Need approval for the DOE Office of Science, setting in motion the process towards formal project R&D, engineering and design, and construction. The DOE Office of Nuclear Physics is already supporting increased efforts towards the most critical generic EIC-related accelerator research and design.

But the EIC is by no means a US-only facility (figure 2). A large international physics community, comprising more than 800 members from 150 institutions in 30 countries and six continents, is now energised and working on the scientific and technical challenges of the machine. An EIC users group (www.eicug.org) was formed in late 2015 and has held meetings at the University of California at Berkeley, Argonne National Laboratory, and Trieste, Italy, with the most recent taking place at the Catholic University of America in Washington, DC in July. The EIC user group meetings in Trieste and Washington included presentations of US and international funding agency perspectives, further endorsing the strong international interest in the EIC. Such a facility would have capabilities beyond all previous electron-scattering machines in the US, Europe and Asia, and would be the most sophisticated and challenging accelerator currently proposed for construction in the US.

Empowering Africa’s youth to shape its future

AIMS South Africa graduates of 2017–2018

It was 2001 and Neil Turok, a cosmologist at the University of Cambridge at the time, was on sabbatical in his home town of Cape Town, South Africa. At dinner one evening his father, himself a member of the first South African congress following the end of apartheid in 1994, posed the question: what will you do for Africa?

Two years later, Turok founded the African Institute for Mathematical Sciences (AIMS), with the mission to improve mathematics and science education throughout the African continent. The first centre, opened in 2003 in a derelict resort hotel in the small surfing town of Muizenberg, just south of Cape Town, saw 12 students graduate. Since then, AIMS has grown to span the whole continent, with five more centres founded in Tanzania, Ghana, Senegal, Cameroon and, most recently, Rwanda. The centres have produced almost 2000 graduates and form part of a pan-African and global network of mathematicians and physicists called the Next Einstein Initiative (NEI), a number of whom work in high-energy physics experiments at CERN and elsewhere.

Africa is a continent filled with potential, rich in natural resources and with a population that is projected to comprise nearly 50% of the world population before the end of the century. But it is also a continent plagued by problems that hinder development and success, particularly in mathematics and science (figure 1). More people there die each year from AIDS and civil war than anywhere else in the world, and access to quality education at all levels is tenuous, at best. The mission of AIMS and NEI is to address these issues by empowering Africa’s brightest students and propelling them towards scientific, educational and economic self-sufficiency.

A tutorial at the South Africa centre

The AIMS curriculum

The primary way that AIMS contributes to this transformation is an intensive one-year master’s programme that runs from August to June each year. Preparations begin well in advance, starting in January when lecturers from around the world submit proposals to teach three-week courses at one of the six centres. In parallel, students from all over Africa apply and are selected through a very competitive process, with upwards of 2000 students vying for around 50 spots at each centre. The goal of both sets of applications, for lecturers as well as students, is to ensure that the very best people are brought together.

The course is highly structured. For the first 10 weeks, students attend a series of skills courses with an emphasis on problem solving and computing. The curriculum then enters a review phase, in which students elect to follow two courses in every three-week block. The courses are dynamic, selected each year by the academic director of each centre and taught by (mostly foreign) lecturers, who have complete freedom to design the course as they choose. The beauty of such a curriculum is the diverse set of topics that can be taught side by side, which allows students to sample new subjects – a day that begins with a course on financial mathematics could end with students writing a simulation for computational neuroscience. Three weeks later, the courses change and students can find themselves immersed in knot theory or Monte Carlo methods in particle physics.

Fig. 1.

This cadence continues for 18 weeks, during which time the students are able to build connections with academics from around the world and find the subject that suits them best. This builds to the final portion of the course, called the essay phase, in which students identify a mentor and a project from a list of proposed topics. The student then works independently for a period of 10 weeks under the supervision of the mentor, culminating in a thesis essay and oral examination. As if this fast-paced academic schedule were not enough, the entire course is taught in English, which for many students is not their first language; adding to their workload, students also attend courses in English and writing throughout.

Strong support

Unlike at many institutions, success at AIMS is limited only by a student’s will to achieve. All fees are paid by AIMS, as are the costs of relocation, accommodation and food. Each student is provided with a personal computer (for many, the first computer they have ever owned), and a team of five to 10 academic tutors is hired to support the students in their studies and augment the lectures when necessary. This all ensures that students can focus completely on their studies and their development as academics.

The result is a nearly 100% success rate, with more than 30% of graduates being female and AIMS graduates representing 43 of the 54 African nations. These students most often go on to enter research master’s and PhD programmes in Africa and elsewhere, their university education having been validated through the standards set by AIMS and the international institutions that support it. However, nearly all AIMS graduates eventually wish to return to Africa, whether to industry or research, thus contributing to their home nations. Some alumni even return to the school as lecturers themselves. Ultimately, the goal of AIMS and NEI is to establish 15 centres throughout Africa by 2030 and to build a sustainable pan-African academic culture.

Class of 2015

A lecturer’s perspective

To offer a first-hand account of a typical day as a lecturer: it’s 19:00 and you have just sent the last e-mail of the day. Dusk is welcome, since it promises to relieve some of the heat. If you’re in Biriwa, Ghana, you make sure to close the window and put on some mosquito repellent. There was a student in your class who excused himself yesterday for not completely finishing his homework. He has malaria. He’s working a lot anyhow and he’ll be better soon, but you would be completely knocked out if you caught it. As you are about to close the laptop, you hear someone at the door: it is your students, waiting for their ad-hoc evening tutorial. Teaching at AIMS is a full-day immersion. Finding students discussing your lecture, assignments or books that you showed them is not uncommon, even after midnight.

Your average AIMS student is inquisitive, hard-working and passionate, and the vastly different academic backgrounds of students in your class will force you to answer questions from very basic to very advanced levels. One day, a student might be “angry” because you told them that morning how light is both a wave and a particle. After the first days of shyness (many students have never been encouraged to voice their own opinion on a scientific matter), they’ll question what you say – clearly it is not possible that a thing is a wave but also a particle, is it?!

AIMS final party

Don’t expect to spend your evening not doing physics unless you really need a break, in which case take a walk on the beach or, if you’re at the Muizenberg centre in South Africa, grab a surfboard. For those familiar with CERN, the parallel that might best explain the AIMS atmosphere is the “Bermuda triangle” of Restaurant 1, the hostel and your office: you can manage to spend weeks there before breaking out to explore Geneva, the Jura and the world around you. AIMS students are ambitious and grateful for the opportunity to work and learn, so they can easily spend their days between the lecture room, the canteen and the computer lab without leaving the building once. As an AIMS lecturer, it is thus good to come prepared for a few extra-curricular activities. These could range from showing students how to swim in the shallow waters of the Indian Ocean in Bagamoyo, to taking them on an all-day hike up Table Mountain in Cape Town, to giving an extra tutorial on how to write a good application or give a talk, not to mention a discussion about how to shape Africa’s future. Topics such as how a woman can be a president in some countries (or a physics lecturer, for that matter!) are sure to attract the attention of all students, even those not directly in your class.

The last few days of your three-week lecture block are the most special. Students give presentations on topics that go beyond what your lecture contains, having spent every free minute preparing. Building confidence in the students’ minds is your most important mission at AIMS, and the students have every reason to be confident: most of them had to fight for a good education that is taken for granted in many countries, and they all want to make an impact in building Africa’s future. After the students’ talks, the ceremony and party start. Lecturers are bid farewell, and you may well be handed a traditional African costume so that you are dressed properly for the party. Then, with some exceptionally gifted dancers taking the lead, you’ll not be let go before at least attempting to move gracefully to the latest African pop music, all without a single drop of alcohol in sight.

Back at your workplace, AIMS stays with you. Many students will keep you updated on their careers, seek a reference letter from you, or eventually join you as a researcher.

Becoming involved with AIMS is for anyone interested in working with some of the best students in the world, most of whom have had to fight hard to get there. There are a variety of options. For those with master’s degrees in mathematics and science, it is possible to serve as an academic tutor at an AIMS institute for a period of one year, during which time you will work closely with students as a mentor and act as a bridge between the shorter-term lecturers. For those with PhD degrees, it is possible to act as either an essay supervisor or a lecturer. In both instances, the topic of instruction is designed by you, giving you the control and flexibility to tailor the course to your interests and expertise. In whatever capacity you decide to become involved, it is an opportunity you will not regret.

Defeating the background in the search for dark matter

Inspecting photomultiplier tubes

Compelling cosmological and astrophysical evidence for the existence of dark matter suggests that there is a new world beyond the Standard Model of particle physics still to be discovered and explored. Yet, despite decades of effort, direct searches for dark matter at particle accelerators and underground laboratories alike have so far come up empty handed. This calls for new and improved methods to spot the mysterious substance thought to make up most of the matter in the universe.

Dark-matter searches using detectors based on liquefied noble gases such as xenon and argon have long demonstrated great discovery potential and continue to play a major role in the field. Such experiments use a large volume of material in which nuclei struck by a dark-matter particle would create a tiny burst of scintillation light, and the very low expected event rate requires that backgrounds are kept to a minimum. Searches employing argon detectors have a particular advantage because they can significantly reduce events from background sources such as the abundant radioactive decays of detector materials and electron scattering by solar neutrinos. That leaves the low-rate nuclear recoils induced by coherent scattering of atmospheric neutrinos as the sole residual background – the so-called “neutrino floor”.

Enter the Global Argon Dark Matter Collaboration (GADMC), which was formed in September 2017. Comprising more than 300 scientists from 15 countries and 60 institutions involved in four first-generation dark-matter experiments – ArDM at Laboratorio Subterráneo de Canfranc in Spain, DarkSide-50 at INFN’s Laboratori Nazionali del Gran Sasso (LNGS) in Italy, DEAP-3600 and MiniCLEAN at SNOLAB in Canada – GADMC is working towards the immediate deployment of a dark-matter detector called DarkSide-20k. The experiment would accumulate an exposure of 100 tonne × year and be followed by a much larger detector to collect more than 1000 tonne × year, both potentially with no instrumental background. These experiments promise the most complete exploration of the mass/parameter range of the present dark-matter paradigm.

Direct detection with liquid argon

One well-motivated form of dark matter that matches astronomical measurements is weakly interacting massive particles (WIMPs), which would populate our galaxy with a well-defined number density and velocity distribution. In a dark-matter experiment employing a liquid-argon detector, such particles would collide with argon nuclei, causing them to recoil. These nuclear recoils produce ionised and excited argon atoms which, after a series of reactions, form short-lived argon dimers (weakly bonded molecules) that decay and emit scintillation light. The time profile of this scintillation light differs significantly from that created by argon-ionising events associated with radioactivity in the detector material, and has been shown to enable strong rejection of background sources through a technique known as pulse-shape discrimination.
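
A minimal sketch, in Python, of how pulse-shape discrimination can work in liquid argon: the fraction of scintillation light arriving promptly (often called “fprompt”) is large for nuclear recoils, which are dominated by the fast singlet-dimer decay (~7 ns), and small for electron recoils dominated by the slow triplet decay (~1.5 μs). The time constants, singlet fractions and prompt window below are illustrative assumptions, not the collaborations’ calibrated values.

    import numpy as np

    def fprompt(waveform, dt_ns, prompt_window_ns=90.0):
        """Fraction of total scintillation charge in the prompt window."""
        n_prompt = int(prompt_window_ns / dt_ns)
        total = waveform.sum()
        return waveform[:n_prompt].sum() / total if total > 0 else 0.0

    # Toy waveforms: two-exponential pulse shapes with assumed
    # singlet/triplet fractions for each event class.
    t = np.arange(0.0, 6000.0, 2.0)  # time samples in ns

    def toy_pulse(singlet_frac, tau_singlet=7.0, tau_triplet=1500.0):
        fast = np.exp(-t / tau_singlet) / tau_singlet
        slow = np.exp(-t / tau_triplet) / tau_triplet
        return singlet_frac * fast + (1.0 - singlet_frac) * slow

    nuclear_recoil = toy_pulse(0.7)   # WIMP-like event shape
    electron_recoil = toy_pulse(0.3)  # background-like event shape
    print(fprompt(nuclear_recoil, 2.0), fprompt(electron_recoil, 2.0))
    # ~0.75 vs ~0.37: a simple cut on fprompt separates the two classes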

Fig. 1.

Located at LNGS, DarkSide-50 is the first physics detector of the DarkSide programme for dark-matter detection, with a fiducial mass of 50 kg. The experiment produced its first WIMP search results in December 2014 using argon harvested from the atmosphere and, in October the following year, reported the first ever WIMP search results using lower-radioactivity underground argon.

DarkSide-50 uses a detection scheme based on a dual-phase time projection chamber (TPC), which contains a small region of gaseous argon above a larger region of liquid argon (figure 1, left). In this configuration, the secondary scintillation light, generated by ionisation electrons that drift up through the liquid region and are accelerated into the gaseous one, is used together with the primary scintillation light to look for a signal. Compared to single-phase detectors using only the pulse-shape discrimination technique, this search method requires even greater care in restricting the radioactive background through detector design and fabrication, but provides excellent position resolution. For low-mass (<10 GeV/c²) WIMPs, the primary scintillation light is nearly absent, but the detectors remain sensitive to dark matter through the observation of the secondary scintillation light.
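
A sketch of the position information this scheme provides: the depth of an interaction follows from the delay between the primary scintillation (S1) and the secondary scintillation (S2), multiplied by the electron drift speed. The drift speed and event times below are illustrative assumptions.

    # Depth reconstruction in a dual-phase TPC (illustrative values).
    DRIFT_SPEED_MM_PER_US = 0.9   # assumed electron drift speed in liquid argon

    def event_depth_mm(t_s1_us: float, t_s2_us: float) -> float:
        """Depth below the liquid surface from the S1 -> S2 drift time."""
        drift_time_us = t_s2_us - t_s1_us
        return DRIFT_SPEED_MM_PER_US * drift_time_us

    print(event_depth_mm(t_s1_us=0.0, t_s2_us=220.0))  # ~198 mm below surface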

Fig. 2.

Argon-based dark-matter searches have had a number of successes in the past two years (figure 2). DarkSide-50 established the availability of an underground source of argon strongly depleted in the radioactive isotope ³⁹Ar, while DEAP-3600 (figure 3), the largest (3.3 tonnes) single-phase liquid-argon experiment in operation, provided the best value to date for the power of pulse-shape discrimination of scintillation light, better than 1 part in 10⁹. In terms of measurements, DarkSide-50 released results from a 500-day detector exposure completely free of instrumental background and set the best exclusion limit yet for interactions of WIMPs with masses between 1.8 and 6 GeV/c². Similar results to those from DarkSide-50 for the mass region above 40 GeV/c² were reported in the first paper from DEAP-3600, and results from a one-year exposure of DEAP-3600 with a fiducial mass of about 1000 kg are expected to be released in the near future.

High-sensitivity searches for WIMPs using noble-gas dual-phase TPC detectors are complementary to searches conducted at the Large Hadron Collider (LHC), which at the current LHC energy of 13 TeV are limited to masses of a few TeV/c²; the TPC searches can reach masses of 100 TeV/c² and beyond with very good sensitivity.

Leading limits

The best limits to date on high-mass WIMPs have been provided by xenon-based dual-phase TPCs, the leading result coming from the recently released XENON1T exposure of 1 tonne × year (figure 2). In spite of a small residual background, the collaboration was able to exclude WIMP–nucleon spin-independent elastic-scattering cross-sections above 4.1 × 10⁻⁴⁷ cm² at 30 GeV/c² at 90% confidence level (CERN Courier July/August 2018 p9). Larger xenon detectors (XENONnT and DARWIN) are also planned by the same collaboration (CERN Courier March 2017 p35).

Fig. 3.

The next generation of xenon and argon detectors has the potential to extend the present sensitivity by about a factor of 10. But a further factor of 10 remains before one reaches the neutrino floor – the ultimate level at which interactions of solar and atmospheric neutrinos with the detector material become the limiting background. This is where the GADMC liquid-argon detectors, which are designed to have pulse-shape discrimination capable of eliminating the background from electron scatters of solar neutrinos and from internal radioactive decays, can provide an advantage.

GADMC envisages a two-step programme to explore high-mass dark matter. The first step, DarkSide-20k, has been approved for construction at LNGS by Italy’s National Institute for Nuclear Physics (INFN) and by the US National Science Foundation, with present and potentially future funding from Canada. Also a recognised experiment at CERN (RE-37), DarkSide-20k is designed to collect an exposure of 100 tonne × year in a period of five years (possibly extended to 200 tonne × year in 10 years), completely free of any instrumental background. The start of data taking is foreseen for 2022–2023. The second step of the programme will involve building an argon detector able to collect an exposure of more than 1000 tonne × year. SNOLAB in Canada is a strong candidate to host this second-stage experiment.

Argon can deliver the ultimate background-free search for dark matter, but that comes with extensive technological development. First and foremost, researchers need to extract and distill large volumes of the gas from underground deposits, as argon in the Earth’s atmosphere is unsuitable owing to its high content of the radioactive isotope ³⁹Ar. Second, the scintillation light has to be efficiently detected, requiring innovative photodetector R&D.

Sourcing pure argon

Focusing on the first need: atmospheric argon has a radioactivity of 1 Bq/kg, caused entirely by the activation of ⁴⁰Ar by cosmic rays. Given that the drift time of ionisation electrons over a length of 1 m is 1 ms, a dual-phase TPC detector reaches a complete pile-up condition (i.e. when the event rate exceeds the detector’s ability to read out the information) at a mass of 1 tonne. Scintillation-only detectors do not fare much better: given that the scintillation lifetime is 10 μs, they are limited to fiducial masses of a few tonnes. The argon road to dark matter has thus required early concentration on procuring large batches of argon much more depleted in ³⁹Ar than atmospheric argon. The solution came through an unlikely path: the discovery that underground sources of CO2 originating from Earth’s mantle carry sizable quantities of noble gases, in reservoirs where secondary production of ³⁹Ar is significantly suppressed.
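
A back-of-envelope check, in Python, of the pile-up argument using only the numbers quoted above: at 1 Bq/kg, a 1 tonne target sees ~1000 decays per second, so with a ~1 ms drift window the expected number of ³⁹Ar decays per event window approaches one, i.e. continuous pile-up.

    # Pile-up estimate for an atmospheric-argon dual-phase TPC.
    activity_bq_per_kg = 1.0      # 39Ar activity of atmospheric argon
    target_mass_kg = 1000.0       # 1 tonne
    drift_window_s = 1e-3         # ~1 ms electron drift time over 1 m

    rate_hz = activity_bq_per_kg * target_mass_kg    # 1000 decays/s
    decays_per_window = rate_hz * drift_window_s     # expected overlaps
    print(decays_per_window)  # ~1.0 -> events overlap: complete pile-up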

As part of a project called Urania, funded by INFN, GADMC will soon deploy a plant that is able to extract underground argon at a rate of 250 kg per day from the same site in Colorado, US, where argon for DarkSide-50 was extracted. Argon from this underground source is more depleted in ³⁹Ar than atmospheric argon by a factor of at least 1400, making detectors of hundreds of tonnes possible for high-mass WIMP searches.

Not content with this gift of nature, GADMC is also developing an innovative plant to actively increase the depletion in ³⁹Ar, through a project called ARIA, funded by INFN, by the Italian Ministry of University and Research (MIUR) and by the local government of the Sardinia region. The plant will consist of a 350 m-tall cryogenic-distillation tower called Seruci-I, which is under construction in the Monte Sinni coal mine in Sardinia, operated by the Carbosulcis mining company. Seruci-I will study the active depletion of ³⁹Ar by cryogenic distillation, which exploits the tiny dependence of the vapour pressure on the isotopic mass. Seruci-I is expected to reach a production capacity of 10 kg of argon per day with a tenfold ³⁹Ar depletion per pass. This is more than sufficient to deliver – starting from the gas extracted from the Urania underground source – a one-tonne ultra-depleted-argon target that could enable a leading programme of searches for low-mass dark matter. Seruci-I is also expected to perform strong chemical purification at a rate of several tonnes per day, and will be used to perform the final stage of purification of the 50 tonne underground-argon batch for DarkSide-20k as well as for GADMC’s final detector.
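
A quick sketch of what those quoted numbers imply, under the simplifying assumption that each pass reprocesses the full batch serially at the same throughput: at 10 kg/day, one pass over a one-tonne target takes ~100 days, and n passes multiply the tenfold per-pass depletion to 10ⁿ on top of the factor of at least 1400 already present in the underground argon.

    # ARIA-style distillation throughput and cumulative depletion (illustrative).
    throughput_kg_per_day = 10.0
    depletion_per_pass = 10.0
    underground_depletion = 1400.0    # relative to atmospheric argon

    def days_for_batch(mass_kg: float, passes: int) -> float:
        """Processing time, assuming each pass handles the full batch serially."""
        return passes * mass_kg / throughput_kg_per_day

    def total_depletion(passes: int) -> float:
        return underground_depletion * depletion_per_pass**passes

    print(days_for_batch(1000.0, passes=1))   # ~100 days for one tonne, one pass
    print(total_depletion(1))                 # 14000x below atmospheric 39Ar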

Fig. 4.

CERN plays an important role in DarkSide-20k by carrying out vacuum tests of the 30 modules for the Seruci-I column (figure 4) and by hosting the construction of the cryogenics for DarkSide-20k. At the time of its approval in 2017, DarkSide-20k was set to be deployed within a very efficient system of neutron and cosmic-ray rejection, based on that used for DarkSide-50 and featuring a large organic liquid scintillator detector hosted within a tank of ultrapure deionised water. But with the deployment of new organic scintillator detectors now discouraged at LNGS due to tightening environmental regulations, GADMC is completing the design of a large, and more environmentally friendly, liquid-argon detector for neutron and cosmic-ray rejection based on the cryostat technology developed at CERN to support prototype detector modules for the future Deep Underground Neutrino Experiment (DUNE) in the US.

Turning now to the second need of a background-free search for dark matter – the efficient detection of the scintillation light – researchers are focusing on perfecting existing technology to make low-radioactivity silicon photomultipliers (SiPMs) and using them to build large-area photosensors capable of replacing the traditional 3-inch cryogenic photomultipliers. Plans for DarkSide-20k settled on the use of so-called NUV-HD-TripleDose SiPMs, designed by Fondazione Bruno Kessler of Trento, Italy, and produced by LFoundry of Avezzano, also in Italy. In the meantime, researchers at LNGS and other institutions succeeded in overcoming the huge capacitance per unit area of SiPMs (50 pF/mm²) to build photosensors that have an area of 25 cm² and deliver a signal-to-noise ratio of 15 or larger. A new INFN facility, the Nuova Officina Assergi, was designed to enable the high-throughput production of SiPMs for such photosensors for DarkSide-20k and future detectors, and is now under construction.
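
To see why that capacitance is challenging, a simple check using the figures in the text: a 25 cm² sensor at 50 pF/mm² presents ~125 nF to its front-end amplifier, orders of magnitude above the ~10 pF of a typical photomultiplier anode (the PMT figure is my assumption, added only for comparison).

    # Total capacitance of a large-area SiPM photosensor.
    cap_per_mm2_pF = 50.0
    area_cm2 = 25.0
    area_mm2 = area_cm2 * 100.0          # 1 cm^2 = 100 mm^2
    total_capacitance_nF = cap_per_mm2_pF * area_mm2 / 1000.0
    print(total_capacitance_nF)          # 125 nF loading the cryogenic preamp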

GADMC’s programme is complemented by a world-class effort to calibrate noble-liquid detectors for the low-energy nuclear recoils created by low-mass dark matter. On the heels of the SCENE programme, which took place at the University of Notre Dame Tandem accelerator in 2013–2015, an R&D programme developed at the University of Naples Federico II and now installed at the INFN Laboratori Nazionali del Sud plans to improve the characterisation of the argon response to nuclear recoils. Of special interest are the extension of measurements down to 1 keV, in support of searches for low-mass dark matter, and the verification of a possible dependence of the nuclear-recoil signals on the direction of the initial recoil momentum relative to the drift electric field, which would enable measurements below the neutrino floor. Directionality in argon has already been established for alpha particles, protons and deuterons, and its presence for nuclear recoils was hinted at by the latest results of the SCENE experiment.

Although only recently established, GADMC is enthusiastically pursuing this long-term, staged approach to dark-matter detection in a background-free mode, which has great discovery potential extending all the way to the neutrino floor and perhaps beyond.

Francis Farley 1920–2018

Francis Farley

Francis Farley, who played a pivotal role in experiments to measure the anomalous magnetic moment of the muon, passed away on 16 July at his home in the south of France at the age of 97.

The son of a British Army engineering officer, Francis was born in India and educated in England. Before he could complete his education, he transferred to military research and worked on radar, developing his knowledge of electronics and demonstrating his abilities in innovation. Following a secondment to Chalk River Laboratories in Ontario, Canada, he resumed his formal education with a PhD in 1950 from the University of Cambridge, before starting his academic career at Auckland University in New Zealand. During his time at Auckland, he studied cosmic rays; represented New Zealand at a United Nations conference on atomic energy for peaceful purposes; measured neutron yields from plutonium fission (whilst on secondment to Harwell, UK); and wrote his first book Elements of Pulse Circuits.

In 1957 Francis joined CERN, where he started his long and remarkable journey on experiments to measure the anomalous magnetic moment of the muon (muon g-2). This endeavour would span nearly five decades and four major experiments, three at CERN and one at Brookhaven National Laboratory (BNL) in the US. The initial result from the first experiment had an accuracy of just 2%, whereas the final result from the last experiment reached 0.5 parts per million. Each experiment was at the time seen as a tour de force, and the measurement added an important restraint on the imaginations of theorists. It was also striking that each new measurement was within the error limits of the previous ones.

Many other people, including various highly renowned physicists, contributed to this long effort, but Francis is the sole common author, having made seminal contributions to all of the experiments. The first experiment was performed on the initiative of Leon Lederman, a CERN visitor at the time, at CERN’s first accelerator, the 600 MeV Synchrocyclotron. The other members of the noteworthy team on this experiment were Georges Charpak, Richard Garwin, Theo Muller, Hans Sens and Antonino Zichichi. By the time of the second experiment, CERN’s Proton Synchrotron was operating, and the second and third experiments were performed there – taking advantage of the higher-energy muons that the accelerator provided. Francis alone continued on to these experiments, but among others joining them was Emilio Picasso. Later Francis, again alone, continued as a member of the most recently completed g-2 experiment at BNL. In the spirit of always looking for major improvements, it is noteworthy that his review paper “The 47 years of muon g-2”, written with Yannis Semertzidis, suggests a totally new structure for a muon storage ring, should greater accuracy be justified for a future experiment.

The first experiment showed that the muon was a “heavy electron”, the second validated electron loops in the photon propagator, and the third showed the contribution from virtual hadron loops. Each measurement has spurred theoretical physicists to include more and more effects in their calculations of the muon magnetic moment: higher-order corrections in quantum electrodynamics, first-order and then higher-order hadronic and electroweak contributions. These advances in the theoretical prediction in turn justified the next generation of experiment, to give an even more stringent test of theory. The muon storage rings also allowed tests of relativistic time dilation, with the third experiment achieving an accuracy of 0.1% for a “muon clock” moving at a speed of 0.9994c and the most accurate test of the “twin paradox”.
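
A quick check, in Python, of what a “muon clock” at 0.9994c implies: the Lorentz factor is γ = 1/√(1 − β²) ≈ 29, so stored muons live roughly 29 times longer than at rest – the dilation the third experiment verified to 0.1%. This is just a consistency check on the quoted speed, not the experiment’s own analysis.

    import math

    beta = 0.9994
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    muon_lifetime_at_rest_us = 2.197         # standard rest-frame value
    dilated_lifetime_us = gamma * muon_lifetime_at_rest_us
    print(gamma, dilated_lifetime_us)        # ~28.9, ~63 microseconds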

During the 1970s, when he was again based in the UK and Dean of The Royal Military College of Science, Francis also started to do research in wave energy. This work continued through his retirement, in parallel to the work on g-2. In this area too, he established a formidable reputation, with many papers written and patents produced over a period of 40 years. Indeed, his most recent paper on wave energy was published just a few days after his death.

Early in his retirement, he designed the beam-transport system for a proton-therapy facility at a cancer hospital, which was still being used more than 20 years later. He also published a special-relativistic, single-parameter analysis of data on redshifts of type Ia supernovae that showed no evidence for acceleration or deceleration effects. Even more recently, he worked on other tests of relativity based on analysis of data from the muon g-2 experiments.

He received many honours, including election as a Fellow of the Royal Society and the Hughes Medal for his work at CERN on g-2.

Outside of work, Francis had a passion for flying gliders, was a keen skier and windsurfer and a regular swimmer, and liked large American cars. All of this befitted a hardworking but somewhat playboy image, which years later formed much of the basis of his novel Catalysed Fusion.

Francis was a wonderful source of new ideas and insights, with a prodigious output. He was always enthusiastic, and he could be charming but forceful, and a stickler for precision.

He will be much missed.

Preserving European unity in physics

The European Physical Society

The year 1968 marked a turning point in the history of post-war Europe that remains engraved in our collective memory. Global politics were marked by massive student unrest, the Cold War and East–West confrontation. On 21 August the Soviet Union and other Warsaw Pact states invaded Czechoslovakia to crush the movement of liberalisation, democratisation and civil rights, which had become known as the Prague Spring.

Against this background, it seems a miracle that the European Physical Society (EPS) was established only a few weeks later, on 26 September, with representatives of the Czechoslovak Physical Society and the USSR Academy of Sciences sitting at the same table. The EPS was probably the first learned society in Europe involving physicists from both sides of the Iron Curtain. Ever since, building scientific bridges across political divides has been core to the society’s mission.

It was no accident that the EPS was founded in Geneva. While CERN did not play a formal role, the CERN model of European cooperation made a substantial impact on the genesis of the new society. CERN was at that time principally an organisation of Western European states, but it had started early to develop scientific collaboration with the Soviet Union and other Eastern countries, notably through the Joint Institute for Nuclear Research in Dubna. Leading CERN physicists – including Director-General Bernard Gregory – were instrumental in setting up the new society; Gilberto Bernardini, who had been CERN’s first director of research in 1960–1961 and was a strong advocate of international collaboration in science, became the first EPS president. The 20 national physical societies and similar organisations that participated in the 1968 foundation have now grown to 42, covering almost all of Europe plus Israel and representing more than 130,000 members. In addition, there are about 42 associate members – mostly major research institutions, including CERN – and, last but not least, around 3500 individual members.

Today, the EPS serves the European physics community in a twofold way: by promoting collaboration across borders and disciplines, through activities such as conferences, publications and prizes; and by reaching out to political decision makers, media and the public to promote awareness of the importance of physics education and research.

Rüdiger Voss

The Iron Curtain is history, but the EPS celebrates its 50th anniversary at a time when new, more complex and subtle political divides are opening up in Europe: the UK’s departure from the European Union (EU) is only the most prominent example. While the results of democratic votes must be respected, a continued erosion of European unity will undermine fundamental values and best practices that many of us take for granted: free cross-border collaboration, unrestricted mobility of researchers and students, and access to European funding and infrastructures. For almost 30 years now, in Europe, we have taken such freedoms in science as self-evident. Today, prestigious universities in the heart of Europe are threatened with closure on political grounds, while in other countries physicists are jailed for claiming the right to exercise their academic profession freely. These concerns are not unique to physics and must be addressed by the scientific community at large. The EPS, representing a science with a long tradition and highly developed culture of international collaboration, has a special responsibility to uphold these values.

Against this challenging background, the EPS is undertaking efforts to make the voice of the physics community more clearly heard in European science-policy making, principally through a point of presence in Brussels to facilitate communication with the European Commission and with partner organisations defending similar interests. In an environment where funding opportunities are increasingly organised around societal rather than scientific challenges, the EPS must advocate a healthy and sustained balance between basic and applied research. The next European Framework Programme Horizon Europe must not only provide fair access to funds, research opportunities and infrastructure for researchers from EU countries, but should remain equally open to participation from third countries, following the example of the successful association of countries like Norway and Switzerland with Horizon 2020. Building scientific bridges across political divides remains as vital as ever, in the best interest of a strong cohesion of the European physics community.

Classical Field Theory

By Joel Franklin
Cambridge University Press

This book provides a comprehensive introduction to classical field theory, which concerns the generation and interaction of fields and is the logical precursor of quantum field theory. But, while in most university physics programmes students are taught classical mechanics first and then quantum mechanics, quantum field theory is normally not preceded by dedicated classical field theory classes. The author, though, argues that it would be worth giving more room to classical field theory, since it offers a good way to think about modern physical model building.

The focus is on the relativistic structural elements of field theories, which enable a deeper understanding of Maxwell’s equations and of electromagnetic field theory. The same holds for other areas of physics, such as gravity.

The book comprises four chapters and is completed by three appendices. The first chapter provides a review of special relativity, with some in-depth discussion of transformations and invariants. Chapter two focuses on Green’s functions and their role as integral building blocks, offering as examples static problems in electricity and the full wave equation of electromagnetism. In chapter three, Lagrangian mechanics is introduced, together with the notions of a field Lagrangian and of action. The last chapter is dedicated to gravity, another classic field theory. The appendices include mathematical and numerical methods useful for field theories and a short essay on how one can take a compact action and from it develop all the physics known from EM.

Written for advanced-undergraduate and graduate students, this book is meant for dedicated courses on classical field theory, but could also be used in combination with other texts for advanced classes on EM or a course on quantum field theory. It could also be used as a reference text for self-study.

From Photon to Neuron: Light, Imaging, Vision

By Philip Nelson
Princeton University Press 2017

This book is as elegant as it is deep: a masterful tour of the science of light and vision. It goes beyond artificial boundaries between disciplines and presents all aspects of light as it appears in physics, chemistry, biology and the neural sciences.

The text is addressed to undergraduate students – an added challenge for the author, which is met brilliantly. Since many of the biological phenomena involved in our perception of light (in photosynthesis, image formation and image interpretation) ultimately happen at the molecular level, the reader is introduced rather early to the quantum treatment of the particles that form light: photons. When this is complemented with the wave–particle duality characteristic of quantum mechanics, it becomes much easier to understand a large palette of natural phenomena without relying on the classical theory of light, embodied by Maxwell’s equations, whose mathematical structure is far more advanced than what is required. The classical approach has the problem that one eventually needs the quantisation of the electromagnetic field to bring photons into the picture; this would make the text rather unwieldy, and not accessible to a majority of undergraduates or biologists working in the field.

In the same way that the author instructs non-physics students in some basic physics concepts and tools, he also provides physicists with accessible and very clear presentations of many biological phenomena involving light. This is a textbook, not an encyclopaedia, hence a selection of such phenomena is necessary to illustrate the concepts and methods needed to develop the material. There are sections at the end of most chapters containing more advanced topics, and also suggestions for further reading to gain additional insight, or to follow some of the threads left open in the main text of the chapter.

A cursory perusal of the table of contents at the beginning will give the reader an idea of the breadth and depth of material covered. There is a very accessible presentation of the theory of colour, from a physical and biological point of view, and its psychophysical effects. The evolution of the eye and of vision at different stages of animal complexity, imaging, the mechanism of visual transduction and many more topics are elegantly covered in this remarkable book.

The final chapters contain some advanced topics in physics, namely, the treatment of light in the theory of quantum electrodynamics. This is our bread and butter in particle physics, but the presentation is more demanding on the reader than any of the previous chapters.

Unlike chapter zero, which explains the rudiments of probability theory in the standard frequentist and Bayesian approaches that can be understood basically by anyone familiar with high-school mathematics, chapters 12 and 13 require a more substantial background in advanced physics and mathematics.

The gestalt approach advocated by this book makes it one of the most insightful, cross-disciplinary texts I have read in many years. It is mesmerising and highly recommended, and will become a landmark in rigorous but highly accessible interdisciplinary literature.

Applied Computational Physics

By Joseph Boudreau and Eric Swanson
Oxford University Press

This book aims to provide physical sciences students with the computational skills that they will need in their careers and expose them to applications of programming to problems relevant to their field of study. The authors, who are professors of physics at the University of Pittsburgh, decided to write this text to fill a gap in the current scientific literature that they noticed while teaching and training young researchers. Often, graduate students have only basic knowledge of coding, so they have to learn on the fly when asked to solve “real world” problems, like those involved in physics research. Since this way of learning is not optimal and sometimes slow, the authors propose this guide for a more structured study.

Over almost 900 pages, this book introduces readers to modern computational environments, starting from the foundation of object-oriented computing. Parallel computation concepts, protocols and methods are also discussed early in the text, as they are considered essential tools.

The book covers various important topics, including Monte Carlo methods, simulations, graphics for physicists and data modelling, and devotes considerable space to algorithmic techniques. Many chapters are also dedicated to specific physics applications, such as Hamiltonian systems, chaotic systems, percolation, critical phenomena, few-body and many-body quantum systems, and quantum field theory. Nearly 400 exercises of varying difficulty complete the text.

Even though most of the examples come from experimental and theoretical physics, this book could also be very useful for students in chemistry, biology, atmospheric science and engineering. Since the numerical methods and applications are sometimes technical, it is particularly appropriate for graduate students.

Quantum Field Theory Approach to Condensed Matter Physics

By Eduardo C Marino
Cambridge University Press

This book provides an excellent overview of the state of the art of quantum field theory (QFT) applications to condensed-matter physics (CMP). Nevertheless, it is probably not the best choice for a first approach to this wonderful discipline.

QFT is used to describe particles in the relativistic (high-energy) regime but, as is well known, its methods can also be applied to problems involving many interacting particles – typically electrons. The conventional way of studying solid-state physics and, in particular, silicon devices does not make use of QFT methods, owing to the success of models in which independent electrons move in a crystalline substrate. Today, though, we deal with various condensed-matter systems that are impervious to that simple model and could instead profit from QFT tools, among them superconductivity beyond the Bardeen–Cooper–Schrieffer approach (high-temperature superconducting cuprates and iron-based superconductors), the quantum Hall effect, conducting polymers, graphene and silicene.

The author, as he himself states, aims to offer a unified picture of condensed-matter theory and QFT. Thus, he highlights the interplay between the two in many examples to show how similar mechanisms operate in systems separated by several orders of magnitude in energy. He discusses, for example, the comparison between the Landau–Ginzburg field of a superconductor and the Anderson–Higgs field in the Standard Model. He also explains the not-so-well-known relation between the Yukawa mechanism for the mass generation of leptons and quarks and the Peierls mechanism of gap generation in polyacetylene: the same trilinear interaction between a Dirac field, its conjugate and a scalar field that explains why polyacetylene is an insulator is responsible for the mass of elementary particles.

The book is structured into three parts. The first covers conventional CMP (at advanced undergraduate level). The second provides a brief review of QFT, with emphasis on the mathematical analysis and methods appropriate for non-trivial many-body systems (in particular in chapters eight and nine, where classical and quantum descriptions of topological excitations are given). I found the pages devoted to renormalisation remarkable: the author clearly shows that the renormalisation procedure is a necessity arising from the presence of interactions in any QFT, not from the divergences of a perturbative approach. The heart of the book is part three, composed of 18 chapters in which the author discusses the state of the art of condensed-matter systems, such as topological insulators, and even quantum computation.

The last chapter is a clear example of the unconventional approach proposed by the author: going straight to the point, he does not explain the basics of quantum computation, but rather discusses how to preserve the coherence of the quantum states storing information, in order to maintain the unitary evolution of quantum data-processing algorithms. In his words, “the main method of coherence protection involves excitations having the so-called non-abelian statistics”, which, going back to CMP, takes us to the realm of anyons and Majorana qubits. In my opinion, this book is not suitable for undergraduate or first-year graduate students (for whom the classic Condensed Matter Field Theory by Altland and Simons is more appropriate). Instead, I would keenly recommend it to advanced graduate students and researchers in the field, who will find in part three plenty of hot topics that are very well explained and accompanied by complete references.

IceCube neutrino points to origin of cosmic rays

Event display

Since the discovery of high-energy extragalactic neutrinos by the IceCube collaboration in 2013, the hunt for the sources of such extreme cosmic events has been a major focus of neutrino astronomy. Now, in a multi-messenger measurement campaign involving more than 1000 scientists, IceCube and 18 independent partner observatories have identified such a cosmic particle accelerator – providing a first answer to the 100-year-old question concerning the origin of cosmic rays.

On 22 September 2017, IceCube – a cubic-kilometre neutrino detector installed in the 2.8 km-thick ice at the South Pole – registered a neutrino of likely astrophysical origin with a reconstructed energy of about 300 TeV. Within less than a minute from detection, IceCube’s automatic alert system sent a notice to the astronomical community, triggering worldwide follow-up observations. The notice was the 10th alert of this type sent by IceCube to the international astronomy community so far.

The neutrino event pointed to a 0.15 square-degree area in the sky, consistent with the position of a blazar called TXS 0506+056, an active galaxy whose jet points precisely towards Earth. The Fermi gamma-ray satellite found the blazar to be in a flaring state with a rare seven-fold increase in activity around the time of the neutrino event, making it one of the brightest objects in the gamma-ray sky at that moment. The MAGIC gamma-ray telescope in La Palma, Spain, then also recorded gamma rays with energies exceeding hundreds of GeV from the same region.

The convergence of observations convincingly implicates the blazar as the most likely source. A worldwide team from the various observatories involved conducted a statistical analysis to determine whether the correlation between the neutrino and the gamma-ray observations was perhaps just a coincidence, and found the chance for this to be around one in 1000.
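
A small illustration, in Python, of what “one in 1000” corresponds to in the significance language particle physicists often use: a chance probability p ≈ 10⁻³ maps to roughly a 3σ one-sided Gaussian fluctuation. The conversion below is the generic one, not the collaborations’ specific statistical analysis.

    from scipy.stats import norm

    p_value = 1.0 / 1000.0
    significance_sigma = norm.isf(p_value)   # one-sided tail probability -> sigma
    print(significance_sigma)                # ~3.1 sigma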

Following the 22 September detection, the IceCube team searched the detector’s archival data and discovered a flare of more than a dozen lower-energy neutrinos detected in late 2014 and early 2015, which were also coincident with the blazar position. This independent observation greatly strengthened the initial detection of a single high-energy neutrino and was the start of a growing body of evidence for TXS 0506+056 being the first identified source of high-energy cosmic neutrinos. Furthermore, the distance to the blazar was determined to be about 4 billion light years (redshift z = 0.34) in the course of the follow-up observations, allowing the first luminosity determination for both gamma rays and neutrinos.
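
For readers who want to reproduce the quoted distance, a sketch using the astropy cosmology package (the package choice and Planck15 parameter set are my assumptions): the light-travel distance to z = 0.34 is the lookback time multiplied by c, about 4 billion light years.

    from astropy.cosmology import Planck15

    z = 0.34
    lookback = Planck15.lookback_time(z)     # time the light has been travelling
    print(lookback)                          # ~3.9 Gyr -> ~3.9 billion light years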

“It came as no surprise that on 12 July, the date of the release of the new observations, and on the following days, several studies were published devoted to modelling the source,” says IceCube member Marek Kowalski from DESY. “It will be exciting to watch the high-energy astronomy community develop a coherent picture of the source over the next few months, as well as new strategies to identify similar events more frequently in the future.”

The concerted observational campaign uses instruments located all over the globe and in space, spanning an energy range from radio waves, through visible light to X-rays and gamma rays, as well as neutrinos. It is thus a significant achievement for the nascent field of multi-messenger astronomy. Since neutrinos are produced through the collisions of charged cosmic rays, the new observation implies that active galaxies are also accelerators of charged cosmic-ray particles. “More than a century after the discovery of cosmic rays by Victor Hess in 1912, the IceCube findings have therefore for the first time pinpointed a compelling candidate for an extragalactic source of these high-energy particles,” says IceCube principal investigator Francis Halzen.
