
Hyperfine structure: from hydrogen to antihydrogen

Since the discovery of the positron in 1932 and the antiproton in 1955, physicists have striven to compare the properties of leptonic and baryonic matter and antimatter. A major advance came in 1995, when the first antihydrogen atoms were observed at CERN’s LEAR facility. Then, in 2002, the ATHENA and ATRAP collaborations produced cold (trappable) antihydrogen at CERN’s Antiproton Decelerator (AD), paving the way to the first measurement of antihydrogen’s atomic transitions. An intense research programme at the AD has followed, aiming to compare the atomic states of antimatter with the best-known atomic transitions in matter.

The physical properties of antimatter particles are tightly constrained within the Standard Model of particle physics (SM). In any local, Lorentz-invariant quantum-field theory of point-like particles, such as the SM, the combination of the discrete symmetries charge conjugation, parity and time reversal (CPT) is conserved. One implication of the CPT theorem is that the properties of matter and antimatter are equal in absolute value. In this respect, the absence of primordial antimatter in the universe is tantalising, hinting that the universe has a preference for matter over antimatter despite their perfect symmetry on the microscopic scale imposed by the SM. Although violations of CP symmetry, from which an imbalance between matter and antimatter can arise, have been observed in several systems, the effect is many orders of magnitude too small to account for the observed cosmological mismatch.

In the quest for a quantitative explanation of the baryon asymmetry in the universe, one could question the validity of our formulation of the laws of physics in terms of quantum field theory. This doubt is additionally motivated by the notable absence of the gravitational force in the SM, and would suggest that CPT symmetry (or Lorentz invariance) need not be conserved. A framework called the Standard Model Extension (SME), an effective field theory that contains the SM and general relativity as well as possible CPT- and Lorentz-violating terms, allows researchers to interpret the results of experiments designed to search for such effects.

Any measurement with antihydrogen atoms constitutes a model-independent test of CPT invariance. Given the precision with which they have been measured in hydrogen, two atomic transitions in antihydrogen are of particular interest: the 1S–2S transition and the ground-state hyperfine splitting (which corresponds to the 21 cm microwave-emission line between parallel and antiparallel antiproton and positron spins). These were determined over the past few decades in hydrogen with an absolute (relative) precision of 10 Hz (4 × 10⁻¹⁵) and 2 mHz (1.4 × 10⁻¹²), respectively. Reaching similar precision in antihydrogen, hydrogen’s CPT conjugate, would provide one of the most sensitive CPT tests in an atomic domain that until recently was entirely unprobed. But this is a daunting challenge.
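As a quick consistency check, the relative precisions quoted above follow directly from the absolute uncertainties and the well-known hydrogen transition frequencies (the 1S–2S interval is about 2.466 × 10¹⁵ Hz, the hyperfine splitting about 1.420 × 10⁹ Hz). A minimal sketch:

```python
# Known hydrogen transition frequencies (Hz); standard reference values.
f_1s2s = 2.466e15   # 1S–2S two-photon transition
f_hfs  = 1.4204e9   # ground-state hyperfine splitting (21 cm line)

# Absolute experimental uncertainties quoted in the text.
df_1s2s = 10.0      # Hz
df_hfs  = 2e-3      # Hz (2 mHz)

rel_1s2s = df_1s2s / f_1s2s
rel_hfs  = df_hfs / f_hfs
print(f"1S–2S relative precision: {rel_1s2s:.1e}")  # ~4e-15
print(f"HFS relative precision:   {rel_hfs:.1e}")   # ~1.4e-12
```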

Status and prospects

Measurements of the hyperfine splitting of hydrogen reached their apogee in the 1970s. Only recently has interest in such measurements revived, motivated by the possibility of further developing methods that can be applied to antihydrogen. Hydrogen’s hyperfine splitting was originally measured using a maser to interrogate atoms held in a Teflon-coated storage bulb, but this technique is not transferable to antihydrogen because unavoidable interactions between the antiatoms and the walls would lead to annihilations.

A precision of a few Hz can, however, be envisioned using the “beam-resonance” method of Rabi. This technique involves a polarised beam, microwave fields to drive spin flips, magnetic-field gradients to select a spin state, and a detector to measure the flux of atoms as a function of the microwave frequency. While less precise than the maser technique, the in-beam method can be directly applied to antihydrogen, with a foreseen initial precision of a few kHz (10⁻⁶ relative precision). The leading order of the hyperfine splitting can be calculated from the known properties of the antiproton and positron, but a measurement at the 10⁻⁶ level would be sensitive to the antiproton magnetic and electric form factors, which are so far unknown.

Earlier this year, the ALPHA experiment at CERN’s AD measured the hyperfine splitting of trapped antihydrogen. Following a long campaign that saw ALPHA determine antihydrogen’s 1S–2S transition in 2016 (CERN Courier January/February 2017 p8), the collaboration achieved a precision of 4 × 10⁻⁴ (0.5 MHz) on the hyperfine measurement. Ultimately, however, the precision of in-trap measurements will be limited by the presence of strong magnetic-field gradients. The in-beam technique, by contrast, probes the hyperfine transition far away from the strong inhomogeneous magnetic trapping fields. In the 1950s this technique enabled hydrogen’s hyperfine structure to be determined to a precision of 50 Hz. The recent measurement of this transition by the ASACUSA experiment, using a similar technique, has now improved on this precision by more than an order of magnitude.

The ASACUSA collaboration was formed in 1997 to investigate antiprotonic atoms and collisions involving slow antiprotons. Its antihydrogen programme started in 2005 at the AD and in recent years the collaboration has focused on two topics. One is laser spectroscopy of antiprotonic helium, which allows the determination of the antiproton mass (CERN Courier September 2011 p7) and the antiproton magnetic moment. The latter value was recently measured to higher precision in Penning traps first by the ATRAP experiment (CERN Courier May 2013 p6) and, as announced in October, further improved by more than three orders of magnitude by the BASE experiment, both also located at the AD.

The second focus of ASACUSA, led by the CUSP group, is to measure the hyperfine structure of antihydrogen in a polarised beam. ASACUSA employs a multi-trap set-up to produce an antihydrogen beam (CERN Courier March 2014 p5) for Rabi-type spectroscopy on the hyperfine transition. The spectroscopy apparatus was designed to match the expected properties of an antihydrogen beam and called for a test of the apparatus with a hydrogen beam of similar characteristics.

Hydrogen first

The spectroscopy technique relies on the dependence of the atomic energy levels on a magnetic field, known as the Zeeman effect (figure 1). In the presence of a magnetic field, the degeneracy of the hyperfine triplet states is lifted. Two of the states, called low-field seekers (lfs), have energies that rise with increasing magnetic field, while the third triplet state and the singlet state have energies that decrease with increasing magnetic field (these are called high-field seekers, hfs). These distinguishing properties are used first to polarise the beam by means of a magnetic-field gradient (figure 2), which exerts opposite forces on lfs and hfs. As a result, only lfs arrive at the interaction region, where a microwave cavity provides an oscillating magnetic field. If tuned to the right frequency, this field can induce conversions from lfs to hfs states. Atoms in hfs states are subsequently removed from the beam by a second section of magnetic-field gradients, leading to a reduced count rate at the detector when the transition is induced.
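The level behaviour described above follows from the standard Breit–Rabi formula for the 1S ground state. The sketch below, which neglects the small (anti)proton magnetic-moment term and uses standard constant values, classifies the four hyperfine states by the sign of dE/dB at small field:

```python
import math

# Breit–Rabi energies for the hydrogen (or antihydrogen) 1S ground state,
# neglecting the small nuclear magnetic-moment term. This is a simplified
# sketch to show which states are low-field seekers (lfs) vs high-field
# seekers (hfs); constants are standard values.
H   = 6.62607015e-34    # Planck constant, J s
MUB = 9.2740100783e-24  # Bohr magneton, J/T
GJ  = 2.0023            # electron g-factor (magnitude)
DW  = H * 1.4204e9      # zero-field hyperfine splitting, J

def levels(B):
    """Energies (J) of the four 1S hyperfine states at field B (T)."""
    x = GJ * MUB * B / DW
    return {
        "(F=1, mF=+1)":  DW/4 + 0.5*GJ*MUB*B,        # unmixed, linear
        "(F=1, mF=-1)":  DW/4 - 0.5*GJ*MUB*B,        # unmixed, linear
        "(F=1, mF=0)":  -DW/4 + 0.5*DW*math.sqrt(1 + x*x),  # mixed
        "(F=0, mF=0)":  -DW/4 - 0.5*DW*math.sqrt(1 + x*x),  # mixed
    }

# Classify by the sign of dE/dB at small field: rising energy -> lfs.
e1, e2 = levels(1e-3), levels(2e-3)
kind = {s: ("lfs" if e2[s] > e1[s] else "hfs") for s in e1}
print(kind)
```

The two lfs states are the ones the first field-gradient section keeps in the beam; the hfs states are deflected away.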

In the chosen apparatus design, large geometrical openings compensate for the low antihydrogen flux, and a superconducting magnet generates sufficiently selective magnetic-field gradients over such a large area. The oscillating microwave field needed to drive the hyperfine transition must be homogeneous over the large geometrical opening, which dictated the design of the cavity and led to a particular resonance spectrum (figure 3). The functionality of the spectroscopy apparatus and other technical developments were tested by coupling a cold, polarised hydrogen source and a quadrupole mass spectrometer (as hydrogen detector) to the spectroscopy apparatus envisioned for the antihydrogen experiment (figure 2).

The measurement led to the determination of hydrogen’s so-called σ1 hyperfine transition (figure 1), whose frequency was measured as a function of an externally applied magnetic field. From a set of frequency determinations, the zero-field value could be extracted, and such measurements were repeated under 10 distinct conditions to investigate systematic effects. In total, more than 500 resonances (an example is shown in figure 3) were acquired to extract the zero-field hydrogen ground-state hyperfine splitting. Numerical methods developed to assist the analysis of the transition line shape contributed to the improvement by more than an order of magnitude, leading to a precision of 3.8 Hz and a value consistent with the more precise maser result.

A measurement of hydrogen’s hyperfine splitting at the Hz level implies an absolute precision of 10⁻¹⁵ eV. Given the scarcity of antihydrogen and the as-yet-unprobed properties (namely velocity and atomic states) of the antihydrogen beam, a measurement at this level of precision on antihydrogen is not possible in the short term. However, the analysis of ASACUSA data collected with hydrogen enabled the collaboration to assess the number of antiatoms needed to reach a 10⁻⁶ sensitivity, assuming plausible beam properties. The conclusion is that a measurement at the peV level (kHz precision) should be possible if 8000 antiatoms can be detected after the spectrometer. That would require at least an order-of-magnitude increase in the antihydrogen flux.
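The frequency-to-energy conversions quoted here come straight from E = hΔf; a minimal sketch using the standard value of Planck’s constant:

```python
# Convert a frequency precision to an energy precision via E = h * df.
H_EV = 4.135667696e-15  # Planck constant in eV s (CODATA value)

def freq_to_ev(df_hz):
    return H_EV * df_hz

e_hz  = freq_to_ev(1.0)   # Hz-level precision
e_khz = freq_to_ev(1e3)   # kHz-level precision
print(f"Hz-level  -> {e_hz:.1e} eV")   # ~4e-15 eV, as quoted in the text
print(f"kHz-level -> {e_khz:.1e} eV")  # ~4e-12 eV, i.e. a few peV
```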

The Rabi-type spectroscopy approach chosen by ASACUSA has the capability to test individual transitions in hydrogen and antihydrogen under well-controlled external conditions and, if successful, will immediately result in a precision of 10⁻⁶ or better. At this level, the hyperfine transitions would provide as-yet-unknown information on the internal structure of the antiproton. However, much work remains to be done for the ASACUSA experiment to gather the needed number of antihydrogen atoms in a reasonable time.

Until then, more measurements can be performed with the hydrogen set-up. The apparatus has recently been modified to allow the simultaneous measurement of the σ1 and π1 transitions (figure 1). Within the SME, the latter transition could reveal CPT and Lorentz violations, while the σ1 transition is insensitive to these effects and would serve as a monitor of potential systematic errors. This would give access to a number of so-far-unconstrained SME parameters that can be probed by hydrogen alone. While the antihydrogen experiment focuses on increasing the cold, ground-state antihydrogen flux, the hydrogen experiment is about to start a new measurement campaign, with results expected in the next 18–24 months. The hydrogen atom has been a source of profound theoretical developments for some time, and history has shown that it is well worth the effort to study it ever more closely.

Reaching out from the European school

Training and education have been among CERN’s core activities since the laboratory was founded. The CERN Convention of 1954 stated that these activities might include “promotion of contacts between, and interchange of, scientists…and the provision of advanced training for research workers”. It was in this spirit that the first residential schools of physics were organised by CERN in the early 1960s. Initially held in Switzerland, with a duration of one week, the schools soon evolved into two-week events that took place annually and rotated among CERN Member States.

Following discussions between the Directors-General of CERN and the Joint Institute for Nuclear Research (JINR) in Russia, it was agreed that CERN should organise the 1970 school in collaboration with JINR. The event was held in Finland, which at that time was not a Member State of either institution, and the CERN–JINR collaboration evolved into today’s annual CERN–JINR European Schools of High-Energy Physics (HEP). The European schools that began in 1993 (CERN Courier June 2013 p27) are held in a CERN Member State three years out of four, and in a JINR Member State one year out of four.

The target audience of the European schools is advanced PhD students in experimental HEP, preparing them for a career as research physicists. Around 100 students attend each event following a rigorous selection process. Those attending the 2017 school – the 25th in the series, held from 6 to 19 September in Évora, Portugal – were selected from more than 230 candidates, taking into account their potential to pursue a research career in experimental particle physics. The 100 successful students represented 33 different nationalities and, reflecting an increasing trend over the past quarter century of the European schools, about a third were women.

The core programme of the schools continues to be particle-physics theory and phenomenology, including general topics such as the Standard Model, quantum chromodynamics and flavour physics, complemented by more specialised aspects such as heavy-ion physics, Higgs physics, neutrino physics and physics beyond the Standard Model. A course on practical statistics reflects the importance of this topic in modern HEP data analysis. The school also includes classes on cosmology, in light of the strong link between particle physics and astrophysical dark-matter research. Students are taught about the latest developments and prospects at CERN’s Large Hadron Collider (LHC). They also hear from the Director-General of CERN and the director of JINR about the programmes and plans of the two organisations, which have links going back more than half a century. Thus, in addition to studying a wide spectrum of physics topics, the students are given a broad overview and outlook on particle-physics facilities and related issues.

The two-week residential programme includes a total of more than 30 plenary lectures of 90 minutes each, complemented by parallel discussion sessions involving six groups of about 17 students. Each group remains with the same discussion leader for the duration of the school, providing an environment where the students are comfortable to ask questions about the lectures and explore topics of interest in greater depth. The students are encouraged to discuss their own research work with each other and with the staff of the school during an after-dinner poster session. The lecturers are highly experienced experts in their fields, coming from many different countries in Europe and beyond, while the discussion leaders are highly active, though sometimes less senior, physicists.

New ingredient

A new ingredient in the school’s programme since 2014 is training in outreach to the general public. Making use of two 90-minute teaching slots, the students learn about communicating science to a general audience from two professional trainers with backgrounds in journalism at the BBC. The compulsory training sessions are complemented by optional one-on-one exercises that are very popular with the students. The exercises involve acting out a radio interview about a discovery of new physics at the LHC, based on a fictitious scenario.

Building on what they have learnt in the science-communication training, the students from each discussion group collaborate in their “free time” to prepare an eight-minute talk on a particle-physics topic at a level understandable to the public. This is an exercise in teamwork as well as in outreach. The group needs to identify the specific aspects of the topic that they are going to address, develop a plan to make it interesting and relevant to a general audience, share the work of preparing the presentation between the team members, and agree who will give the talk on their behalf. The results of the collaborative group projects are presented in an after-dinner session that is video recorded. A jury made up of experienced science communicators judges the projects and gives feedback to each group. The topics addressed in the projects at the 2017 school in Portugal included the Standard Model, neutrinos, extra dimensions, and cosmology, with the prize for the best team effort going to a presentation on the Higgs boson illustrated with a “cookie-eating grandmother” field.

Equipping young researchers with good science-communication skills is considered important by the management of both CERN and JINR, and outreach training is greatly appreciated by most of the European school’s students. As a follow up, students are encouraged to make contact with the people responsible for outreach in their experimental collaborations or home institutes, with a view to participating in science-communication activities.

In addition to the outreach training, important public events are often held in the host country at the time of the school – benefitting from the presence of the leading scientists who are lecturing. This is well illustrated by the 2017 edition, at which a public event at Évora University coincided with visits to the school by CERN Director-General Fabiola Gianotti, who gave a talk entitled “The Higgs particle and our life”, and JINR director Victor Matveev. The event was attended by numerous high-level representatives of Portuguese scientific institutes and universities, and also by the Portuguese minister of science, technology and higher education, Manuel Heitor. There was an audience of about 300, including high-school teachers, pupils and university students, with more following a live webcast.

Branching out

In addition to the annual schools that take place in Europe, CERN is involved in organising schools of HEP in Latin America (in odd-numbered years since 2001) and in the Asia-Pacific region (in even-numbered years since 2012). These schools have a similar core programme to the European ones, but with more emphasis on instrumentation and experimental techniques. This reflects the fact that there are fewer opportunities in some of the countries concerned for advanced training in these areas.

Although there is so far no specific teaching at the schools in Latin America and the Asia-Pacific region on communicating science to a general audience, education and outreach activities are often arranged in the host country around the time of the schools. For example, an important education and outreach programme was organised to coincide with the 2017 CERN–Latin-American School held from 8 to 21 March in Querétaro, Mexico. Here, several teachers from the CERN school gave short lecture courses or seminars to undergraduate students from Universidad Autónoma de Querétaro and the Juriquilla campus of Universidad Nacional Autónoma de México.

A highlight of the outreach programme in Mexico was a large public event on 8 March, the arrivals day for students at the CERN school and, by coincidence, International Women’s Day. This included introductory talks by Fabiola Gianotti (recorded in advance and subtitled in Spanish) and by Julia Tagüeña Parga (in person), deputy director for scientific development in the Mexican national science and technology agency, CONACyT. These were followed by a lecture entitled “Einstein, black holes and gravitational waves” by Gabriela Gonzalez, spokesperson of the LIGO collaboration, attracting a capacity audience of about 400 people.

As is evident, the European schools of HEP have a long history and continue their primary mission of teaching HEP and related topics to young researchers. However, the programme continues to evolve, and it now includes some training in science communication that is becoming increasingly important in the CERN and JINR Member States. The success of the schools can be judged by an anonymous evaluation questionnaire in which the overall assessment is overwhelmingly positive, with about 60% of students in 2014–2017 giving the highest ranking of “excellent”.

In total, more than 3000 students have attended the schools, including the Latin-American schools since 2001 and the Asia–Europe–Pacific schools since 2012, as well as the European schools since 1993. All these schools are important ingredients in delivering CERN’s mission in education and outreach, and in supporting its policies of international co-operation and being open to geographical enlargement within and beyond Europe. They bring together participants and teachers of many different nationalities, and each school requires close collaboration between CERN, co-organisers such as JINR for the European schools, and colleagues from the host country. The schools may also link in with other aspects of CERN’s international relations. For example, the 2015 Latin-American school in Ecuador helped to pave the way for formal membership of Ecuadorian universities in the CMS experiment. Similarly, the 2011 European school and associated outreach activities in Bucharest marked steps towards Romania becoming a Member State of CERN.

The next European school will be held in Maratea, Italy, from 20 June to 3 July 2018, followed by an Asia–Europe–Pacific school in Quy Nhon, Vietnam, from 12 to 25 September 2018.

Charting a course for advanced accelerators

Progress in experimental particle physics is driven by advances in accelerators. The conversion of storage rings into colliders in the 1970s is one example; another is the use of superconducting magnets and RF structures that allow higher energies to be reached. CERN’s Large Hadron Collider (LHC) is halfway through its second run at an energy of 13 TeV, and its high-luminosity upgrade is expected to operate until the mid-2030s. Several machines are under consideration for the post-LHC era, and many will be weighed up during the European Strategy for Particle Physics beginning in 2019. All are large facilities based on advanced but essentially existing accelerator technologies.

A completely different breed of accelerator based on novel accelerating technologies is also under intense study. Capable of operating with an accelerating gradient larger than 1 GV/m, advanced and novel accelerators (ANAs) could reach energies in the 1–10 TeV range in much more compact and efficient ways. The technological challenge is huge and the timescales are long, but the eventual goal is to have a linear electron–positron or an electron–proton collider at the energy frontier. Such a machine would have a smaller footprint than conventional collider designs and promises energies that otherwise are technologically extremely difficult and expensive to reach.

The first Advanced and Novel Accelerators for High Energy Physics Roadmap (ANAR) workshop took place at CERN in April, focusing on the application of ANAs to high-energy physics (CERN Courier June 2017 p7). The workshop was organised under the umbrella of the International Committee for Future Accelerators as a step towards an international ANA scientific roadmap for an advanced linear collider, with the aim of delivering a technical design report by 2035. The first task towards this goal is to take stock of the scientific landscape by outlining global priorities and identifying necessary facilities and existing programmes.

The ANA landscape

The idea of accelerating particles in a plasma dates back to 1979, with a seminal publication by Tajima and Dawson. It involved the use of wakefields – accelerating longitudinal electric fields generated in a plasma in the wake of a driving laser pulse or particle bunch – to accelerate and focus a relativistic bunch of particles. In ANAs using plasma as a medium, the wakefields are sustained by a charge separation in the plasma driven by a laser pulse or a particle beam. Large energy gains over short distances can also be reached in ANAs using dielectric structures, which can sustain maximum accelerating fields larger than is possible in metallic structures. These ANAs can accelerate electrons as well as positrons and can likewise be driven by laser pulses or particle bunches.

Initial experiments took place with electrons at SLAC and elsewhere in the 1990s, demonstrating the principles of the technique, but the advent of high-power lasers as wakefield drivers led to increased activity. After the first demonstration of peaked electron spectra in millimetre-scale plasmas in 2004, GeV electron beams were obtained with 40 TW laser pulses in 2006, and electron beams with multi-GeV energies have since been reported with PW-class laser systems and few-centimetre-long plasmas. Advanced and novel technologies for accelerators have made remarkable progress over the past two decades: they are now capable of bringing electrons to energies of a few GeV over a distance of a few centimetres, compared with 0.1 MeV per centimetre for the Large Electron–Positron (LEP) collider. Reaching such energies with ANAs has therefore sparked interest in high-energy-physics applications, in addition to their potential in the industry, security and health sectors.

Several challenges must be addressed before proposing a technical design for an advanced linear collider (ALC), requiring the sustained efforts of a diverse community that currently includes more than 62 laboratories in more than 20 countries. The key challenges are either related to fundamental components of ANAs – such as the injectors, accelerating structures, staging of components and their reliability – or to beam dynamics at high energy and the preservation of energy spread, emittance and efficiency.

A major component necessary for the application of an ANA to high-energy physics is a wakefield driver. In practice, this could be an efficient and reliable laser pulse with a peak power topping 100 TW, or a particle bunch with an energy higher than 1 GeV. In both cases, however, the duration of the pulse must be shorter than 100 fs.

The plasma medium, separated into successive stages, is another key component. Assuming accelerating gradients in the region 10–50 GeV/m and energy gains of 10–20 GeV per stage, plasma media 20–200 cm long are required. The main challenges for the plasma medium are the reproducibility, density uniformity, density ramps at their entrance and exit, and the high repetition rate required for collider operation. Tailoring the density ramps is important to mitigate the usually large mismatch between the small transverse size of the accelerated beam inside the plasma and the relatively large beam size that inter-stage optics must handle between plasma modules.
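The quoted 20–200 cm range is simple arithmetic on the stage parameters (length = energy gain / gradient); a quick sketch:

```python
# Required plasma-stage length for the parameter ranges quoted in the text:
# accelerating gradients of 10–50 GeV/m, energy gains of 10–20 GeV per stage.
def stage_length_cm(gain_gev, gradient_gev_per_m):
    return gain_gev / gradient_gev_per_m * 100.0  # metres -> cm

shortest = stage_length_cm(10, 50)  # smallest gain, highest gradient
longest  = stage_length_cm(20, 10)  # largest gain, lowest gradient
print(shortest, longest)  # 20.0 cm to 200.0 cm
```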

Staging successive accelerator modules is a further challenge in itself. Staging is necessary because the energy carried by most drivers is much smaller than the final energy desired for the accelerated bunch, e.g. 1.6 kJ for 2 × 10¹⁰ electrons or positrons at an energy of 500 GeV. Since state-of-the-art femtosecond laser pulses and relativistic electron bunches carry less than 100 J, multiple drivers and multiple stages are needed. Staging must, in a compact way, couple the accelerated bunch out of one plasma module and into the next while preserving all bunch properties, evacuate the exhausted driver, and bring in a fresh driver before the next stage. Staging has been demonstrated, although with low-energy beams (< 200 MeV), in a number of schemes, most recently at the BELLA Center at LBNL. Injection of electrons from a laser-plasma injector into a plasma module providing acceleration to 5–10 GeV is one of the goals of the French APOLLON CILEX laser facility, which starts operation in 2018, and of the baseline explored in the design study EuPRAXIA (see panel on right). The AWAKE experiment at CERN, meanwhile, aims to use protons to drive a plasma wakefield in a single plasma section, with the long-term goal of accelerating electrons to TeV energies.
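The 1.6 kJ figure, and why many stages are unavoidable, can be checked in a few lines (using only the particle count, final energy and driver energy quoted above):

```python
# Total energy that must end up in the accelerated bunch:
# 2e10 particles at 500 GeV each.
EV_TO_J = 1.602176634e-19  # joules per eV

n_particles     = 2e10
final_energy_ev = 500e9
bunch_energy_j  = n_particles * final_energy_ev * EV_TO_J
print(f"bunch energy: {bunch_energy_j:.0f} J")  # ~1600 J = 1.6 kJ

# Each driver carries < 100 J, so even at 100% transfer efficiency
# at least ~16 driver/stage combinations would be needed.
min_stages = bunch_energy_j / 100.0
print(f"minimum stages at 100 J per driver: {min_stages:.0f}")
```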

Stability, reproducibility and reliability are trademarks of accelerators used for particle physics. Results obtained with ANAs often appear less stable and reproducible than those obtained with conventional accelerators. However, it is important to note that these ANAs are mostly run as experiments and research tools, with limited resources devoted to feedback and control systems – one of the major features of conventional accelerators. A strong effort therefore has to be put into developing proper tools and devices, for instance by exploiting synergies with the RF-accelerator community to develop more reliable technologies.

Testing the components for an eventual ALC requires major facilities, most likely located at national or international laboratories. ANA technology might be more compact than that of conventional accelerators, but the environment for producing even 10–100 GeV range prototypes is beyond the capability of university labs, requiring multiple engineering skills to demonstrate reliable operation in a safe environment. The size and cost of these facilities are better justified in a collaborative environment, in line with the development of accelerators relevant for high-energy physics.

Four-phase roadmap

Co-ordination of the advanced accelerators field is at different levels of advancement around the world. In the US, roadmaps were drawn up in 2016 for plasma- and structure-based ANAs with application to high-energy physics and the construction of a linear collider in the 2040s. One outcome of the ANAR workshop this year was a first attempt at an international scientific roadmap. Arranged into four distinct phases, the roadmap describes the stages deemed scientifically necessary to elaborate a design for a multi-TeV linear collider.

The first is a five-year-long period in which to develop injectors and accelerating structures with controlled parameters, such as an injector–accelerator unit producing GeV-range electron and positron beams with high-quality bunches, low emittance and low relative energy spread. A second five-year phase will lead to improved bunch quality at higher energy, with the staging of two accelerating structures and first proposals of conceptual ALC designs. The third phase, also lasting five years, will focus on the reliability of the acceleration process, while the fourth phase will be dedicated to technical design reports for an ALC by 2035, following selection of the most promising options.

Community effort

Many very important challenges remain, such as improving the quality, stability and efficiency of the accelerated beams with ANAs, but no show-stopper has been identified to date. However, the proposed time frame is achievable only if there is an intensive and co-ordinated R&D effort supported by sufficient funding for ANA technology with particle-physics applications. The preparation of an eventual technical design report for an ALC at the energy frontier should therefore be undertaken by the ANA community with significant contributions from the whole accelerator community.

From the current state of wakefield acceleration in plasmas and dielectrics, it is clear that advanced concepts offer several promising options for energy-frontier electron–positron and electron–proton colliders. In view of the significant cost of intense R&D for an ALC, an international programme, with some level of international co-ordination, is more suitable than a regional approach. Following the April ANAR workshop, a study group towards advanced linear colliders, named ALEGRO for Advanced LinEar collider study GROup, has been set up to co-ordinate the preparation of a proposal for an ALC in the multi-TeV energy range. ALEGRO consists of scientists with expertise in advanced accelerator concepts or accelerator physics and technology, drawn from national institutions or universities in Asia, Europe and the US. The group will organise a series of workshops on relevant topics to engage the scientific community. Its first objective is to prepare and deliver, by the end of 2018, a document detailing the international roadmap and strategy of ANAs with clear priorities as input for the European Strategy Group. Another objective for ALEGRO is to provide a framework to amplify international co-ordination on this topic at the scientific level, to foster worldwide collaboration towards an ALC, and possibly to broaden the community. After all, ANA technology represents the next generation of colliders and could potentially define particle physics into the 22nd century.

EAAC workshop showcases advanced accelerator progress

The 3rd European Advanced Accelerator Concept (EAAC) workshop, held every two years, took place from 24 to 30 September on the Island of Elba, Italy. Around 300 scientists attended, with advanced linear colliders at the centre of discussions. Specialists from accelerator physics, RF technology, plasma physics, instrumentation and the laser field discussed ideas and directions towards a new generation of ultra-compact and cost-effective accelerators with novel applications in science, medicine and industry.

Among the many outstanding presentations at EAAC 2017, at which 70 PhD students presented their work, were reports on: laser-driven kHz generation of MeV beams at LOA/TU Vienna; dielectric acceleration results from PSI/DESY/Cockcroft; first results from the AWAKE experiment at CERN; 7 GeV electrons in laser plasma acceleration from LBNL; 0.5 nC electron bunches from HZDR; new R&D directions towards high-power lasers at LLNL; controllable electron beams from Osaka and LLNL; undulator X-ray generation after laser plasma accelerators from DESY/University of Hamburg/SOLEIL/LOA; important progress in hadron beams from plasma accelerators from Belfast/HZDR/GSI; and future collider plans from CERN.

A special session was devoted to the Horizon2020 design study EuPRAXIA (European Plasma Research Accelerator with eXcellence In Applications). EuPRAXIA is a consortium of 38 institutes, co-ordinated by DESY, which aims to design a European plasma accelerator facility. This future research infrastructure will deliver high-brightness electron beams of up to 5 GeV for pilot users interested in free-electron laser applications, tabletop test beams for high-energy physics, medical imaging and other applications. This study, conceived at the EAAC meeting in 2013, is strongly supported by the European laser industry.

The EAAC was founded by the European Network for Novel Accelerators in 2013 and has grown in its third edition into a meeting with worldwide visibility, rapidly catching up with the long tradition of the Advanced Accelerator Concepts workshop (AAC) in the US. The EAAC2017 workshop was supported by the EuroNNAc3 network through the EU project ARIES, INFN as the host organisation, DESY and the Helmholtz association, CERN and the industrial sponsors Amplitude, Vacuum FAB and Laser Optronic.

Ralph Assmann, DESY, Massimo Ferrario, INFN and Edda Gschwendtner, CERN.

So you want to communicate science?


I returned to the Netherlands as a professor of experimental physics at Radboud University Nijmegen in 1998. After having enjoyed more than 10 years almost exclusively doing research work at CERN and elsewhere, I found (as I had strongly suspected) that I very much enjoyed teaching. Teaching first-year undergraduate physics courses, I came into contact with high-school teachers who were assisting students with the transition between secondary school and university. While this transition was successful for a broad group of students, many realised during their first year of university that studying physics was rather different from what they had imagined when they were still in school. As a result, there was a significant drop-out rate.

An opportunity to remedy this situation came when I read about a cosmic-ray high-school project in Canada led by experimental particle-physicist Jim Pinfold. Soon thereafter, and independently, a Nijmegen colleague, Charles Timmermans, came to me with a similar proposal for our university, and in 2000 we initiated the Nijmegen Area High School Array. Two years later, together with others, we launched the Dutch national High-School Project on Astrophysics Research with Cosmics (HiSPARC), which involved placing scintillator detectors on the roofs of high schools to form detector arrays. This is an excellent mixture of real science and educating high-school pupils in research methods. It has been a lot of fun to build the detectors with pupils, to legally walk on school roofs, and to analyse the data that arrive. Of course reality is unruly and it is sometimes hard to keep the objectives in focus: the schools can tend to be rather casual, if not careless, about the proper function of their set-up, whereas for the physics harvest it is essential to have a reliable network.

HiSPARC had an interesting side effect. While working with my group on the DØ experiment at the Tevatron, focusing on finding the Higgs boson, I was, more or less adiabatically, pulled towards the Pierre Auger Observatory (PAO) – the international cosmic-ray observatory in Argentina. The highest-energy particles in the universe are very mysterious: we don’t yet know precisely where they come from, although the latest PAO results suggest we’re getting close (Extreme cosmic rays reveal clues to origin). Nor do we know how they are accelerated to energies up to 100 million TeV. My involvement as a university scientist in a high-school project has completely redirected my research career, and for the past five years I have spent all of my research time on the PAO.

Prompted by my teacher network, around 10 years ago I organised a joint effort between six nearby high schools concerning a new exam subject introduced by the Dutch ministry – “nature, life and technology”, which integrates science, technology, engineering and maths (STEM) subjects. Every Friday afternoon, 350 pupils come to our faculty of science, which itself is an organisational and logistical challenge. The groups are organised during the course of the afternoon depending on the activity: a lecture for all, tutorials, and labs in biology, chemistry, physics, computer science and other subjects. Around 10 different locations in the building (and sometimes outside) are involved, and for every 20–25 pupils there is one teacher available. Following this project, in 2011 I initiated a two-year-long pre-university programme for gifted fifth and sixth graders in high school, which also takes place at the university and involves about 20 teachers and 14 university faculty members. The first cohort of pupils arrived in 2013, and one of the first graduates in the programme recently completed an internship at CERN.

Admittedly it is a lot of work. But it has been worth the effort. By thinking about how to teach particle physics to pupils with different backgrounds and experiences, I have gained more insight into the fundamentals of particle physics. Even the sometimes tedious experience of bringing school managements together and getting them to carry out projects outside of their comfort zones has prepared me well for some aspects of my present duty as president of CERN Council. Working with pupils and teachers has enriched my life, without having to compromise on research or management duties. And if I can combine such things with a research career, there seems little excuse for most scientists not to help educate and inspire the next generation.

Foundations of Nuclear and Particle Physics

By T W Donnelly, J A Formaggio, B R Holstein, R G Milner and B Surrow
Cambridge University Press


This textbook aims to present the foundations of both nuclear and particle physics in a single volume in a balanced way, and to highlight the interconnections between them. The material is organised from a “bottom-up” point of view, moving from the fundamental particles of the Standard Model to hadrons and finally to few- and many-body nuclei built from these hadronic constituents.

The first group of chapters introduces the symmetries of the Standard Model. The structure of the proton, neutron and nuclei in terms of fundamental quarks and gluons is then presented. A lot of space is devoted to the processes used experimentally to unravel the structure of hadrons and to probe quantum chromodynamics, with particular focus on lepton scattering. Following the treatment of two-nucleon systems and few-body nuclei, which have mass numbers below five, the authors discuss the properties of many-body nuclei, and also extend the treatment of lepton scattering to include the weak interactions of leptons with nucleons and nuclei. The last group of chapters is dedicated to relativistic heavy-ion physics and nuclear and particle astrophysics. A brief perspective on physics beyond the Standard Model is also provided.

The volume includes approximately 120 exercises and is completed by two appendices collecting values of important constants, useful equations and a brief summary of quantum theory.

The Grant Writer’s Handbook: How to Write a Research Proposal and Succeed

By Gerard M Crawley and Eoin O’Sullivan
Imperial College Press


This book is designed as a “how to” guide to writing grant proposals for competitive peer review. Nowadays researchers are often required to apply to funding agencies to secure a budget for their work, but being a good researcher does not necessarily imply being able to write a successful grant proposal. Typically, the additional skills and insights needed are learnt through experience.

This timely book aims to guide researchers through the whole process, from conceiving the initial research idea, defining a project and drafting a proposal, through to the review process and responding to reviewers’ comments. Drawing on their own experience as reviewers in a number of different countries, the authors provide many important tips to help researchers communicate both the quality of their research and their ability to carry it out and manage a grant. The authors illustrate their guidelines with the help of many examples of both successful and unsuccessful grant applications, and emphasise key messages with quotes from reviewers.

The book also contains valuable advice for principal investigators on how to set up their research budget, manage people and lead their project. Two appendices at the end of the volume provide website addresses and references, as well as an outline of how to organise a grant competition.

Aimed primarily at early career researchers applying for their first grant, the book will also be beneficial to more experienced scientists, to the administrators of universities and institutions that support their researchers during the submission process, and to the staff of recently established funding organisations, who may have little experience in organising peer-review competitions.

ITER Physics

By C Wendell Horton Jr and Sadruddin Benkadda
World Scientific


This 235-page book is dedicated to the ITER tokamak, the deuterium–tritium fusion reactor under construction in France, which aims to investigate the feasibility of fusion power. The book provides a concise overview of the state-of-the-art plasma physics involved in nuclear-fusion processes. Definitely not an introductory book – not even for a plasma-physics graduate student – it would be useful as a reference text for experts. Across 10 chapters, the authors describe the physics learned from previous tokamak projects around the world and the application of that experience to ITER.

After an introduction to the ITER project, the conventional magnetohydrodynamic description of plasma physics is discussed, with strong emphasis on the geometry of the divertor (located at the bottom of the vacuum vessel to extract heat and reduce contamination of the plasma from impurities). Chapter 3 deals with the problem of alpha-particle distribution, which is a source of Alfvén and cyclotron instabilities. Edge localised mode (ELM) instabilities associated with the divertor’s magnetic separatrix are also discussed. Conditions of turbulent transport are assumed throughout, so chapter 4 provides a general review of our (mainly experimental) knowledge of the topic. Chapters 5 and 6 are specific to the ITER design because they describe the ELM instabilities in the ITER tokamak and the solutions adopted for their control. Concluding the part dedicated to the fusion-reactor transient phase, steady-state operations and plasma diagnostics techniques are described in chapters 7 and 8, respectively.

The tokamak’s complex magnetic field is able to confine charged particles in the fusion plasma but not neutral particles. Neutron bombardment of surfaces can be viewed as an inconvenience, making it necessary to ensure the walls are radiation hard, or an advantage, turning the surfaces into a breeding blanket to generate further tritium fuel. Radiation hardness of the tokamak walls is discussed in chapter 9, while chapter 10 explains how ITER will transmute a lithium blanket into tritium via bombardment with fusion neutrons. The IFMIF (International Fusion Materials Irradiation Facility) project, conceived for fusion-material tests and still in its final design phase, is also briefly presented. The book closes with predictions about whether ITER will fulfil expectations, before looking ahead to the design of DEMO – a future tokamak for electrical-energy production.

In summary, ITER Physics is a book for expert scientists who are looking for a compact overview of the latest advances in tokamak physics. I appreciated the exhaustive set of references at the end of each chapter, since it provides a way to go deeper into concepts not exhaustively explained in the book. Plasma-fusion physics is complex, not only because it is a many-body problem but also because our knowledge in this field is limited, as the authors stress. I would have appreciated more graphic material in some parts: in order to fully understand how a fusion reactor works, one has to think in 3D, so schematics are always helpful.

Relativistic Kinetic Theory, with Applications in Astrophysics and Cosmology

By Gregory V Vereshchagin and Alexey G Aksenov
Cambridge University Press


This book provides an overview of relativistic kinetic theory, from its theoretical foundations to its applications, passing through the various numerical methods used when analytical solutions of complex equations cannot be obtained.

Kinetic theory (KT) was born in the 19th century and aims to derive the properties of macroscopic matter from the properties of its constituent microscopic particles. The formulation of KT within special relativity was completed in the 1960s.

Relativistic KT has traditional applications in astrophysics and cosmology, two fields that tend to rely on observations rather than experiments. But it is now becoming more accessible to direct tests due to recent progress in ultra-intense lasers and inertial fusion, generating growing interest in KT in recent years.

The book has three parts. The first deals with the fundamental equations and methods of the theory, starting with the evolution of the basic concept of KT from nonrelativistic to special and general relativistic frameworks. The second part gives an introduction to computational physics and describes the main numerical methods used in relativistic KT. In the third part, a range of applications of relativistic KT are presented, including wave dispersion and thermalisation of relativistic plasma, kinetics of self-gravitating systems, cosmological structure formation, and neutrino emission during gravitational collapse.

Written by two experts in the field, the book is intended for students who are already familiar with both special and general relativity and with quantum electrodynamics.

Radioactivity and Radiation: What They Are, What They Do, and How to Harness Them

By Claus Grupen and Mark Rodgers
Springer International Publishing


Have you ever thought that batteries capable of providing energy over very long periods could be made with radioisotopes? Did you know that the bacterium Deinococcus radiodurans can survive enormous radiation doses and, thanks to its ability to chemically alter highly radioactive waste, could potentially be employed to clean up radioactively contaminated areas? And do you believe that cockroaches have an extremely high radiation tolerance? Apparently, the latter is a myth. These are a few of the curiosities contained in this “all that you always wanted to know about radioactivity” book from Grupen and Rodgers. It gives a comprehensive overview of the world of radioactivity and radiation, from its history to its risks for humans.

The book begins by laying the groundwork with essential, but quite detailed (similar to a school textbook), information about the structure of matter, how radiation is generated, how it interacts with matter and how it can be measured. In the following chapters, the book explores the substantial benefits of radioactivity through its many applications (not only positive, but also negative and sometimes questionable) and the possible risks associated with its use. The authors deal mainly with ionising radiation; however, in view of the public debate about other kinds of radiation (such as mobile-phone and microwave signals), they include a brief chapter on non-ionising radiation. Also interesting are the final sections, provided as appendices, which summarise the main technologies of radiation detectors as well as the fundamental principles of radiation protection. In the latter, the rationale behind current international rules and regulations, put in place to avoid excessive radiation exposure for radiation workers and the general public, is clearly explained.

This extensive topic is covered using easily understood terms and only elementary mathematics is employed to describe the essentials of complex nuclear-physics phenomena. This makes for pleasant reading intended for the general public interested in radioactivity and radiation, but also for science enthusiasts and inquisitive minds. As a bonus, the book is illustrated with eye-catching cartoons, most of them drawn by one of the authors.

The book emphasises that radiation is everywhere and that almost everything around us is radioactive to some degree: there is natural radioactivity in our homes, in the food that we eat and the air that we breathe. Radiation from the natural environment does not present a hazard; however, radiation levels higher than the naturally occurring background can be harmful to both people and the environment. These artificially increased radiation levels are mainly due to the nuclear industry and have therefore risen substantially since the beginning of the civil-nuclear age in the 1950s. Presenting radiation in this everyday context helps readers to put things in perspective and allows them to compare the numbers and specific measurement quantities that are used in the radiation-protection arena. These quantities are the same ones used by the media, for instance, to address the general public when a radiation incident occurs.

Not only will this book enrich the reader’s knowledge about radioactivity and radiation, it will also provide them with tools to better understand many of the related scientific issues. Such comprehension is crucial for anyone who is willing to develop their own point of view and be active in public debates on the topic.

Maryam Mirzakhani 1977–2017

Maryam Mirzakhani, mathematics professor at Stanford University and Fields Medalist in 2014, passed away on 14 July aged just 40. She was the first woman and first Iranian citizen to win a Fields Medal.

Born in Tehran, at high-school age Maryam participated in two International Mathematics Olympiads, winning gold medals both times – once with a perfect score. After undergraduate studies at Sharif University, she moved to the US to enrol in a PhD programme at Harvard University, under the supervision of Fields Medalist Curtis McMullen. Before joining Stanford in 2008 she was a fellow of the Clay Mathematics Institute in Cambridge (MA) and a professor at Princeton University.

From early in her career as a mathematician, Maryam obtained fundamental results on moduli spaces of Riemann surfaces and homogeneous-space dynamics – topics at the intersection of mathematics and physics. One of her first major results was a counting theorem on closed geodesics that unexpectedly led to a new proof of Witten’s conjecture, related to the partition function of two-dimensional quantum gravity.

As Harvard string theorist Cumrun Vafa recalled in his speech at a memorial event held in August, the results of Maryam’s work and the techniques she applied in her proofs might be applied to solve problems in string theory. Riemann surfaces are natural ingredients in string theory, where they appear both as the 2D world-sheets of strings dynamically evolving in space–time and as the 2D internal manifolds on which the theory is compactified to reduce its original 10 or 11 dimensions to a more familiar 4D scenario.

Both applications of Riemann surfaces are of great interest to theoretical physicists. Ongoing research in CERN’s theory department directly investigates string world-sheets and scattering amplitudes, as well as supersymmetric field theories, which are constructed through geometric engineering of branes wrapping Riemann surfaces. Maryam’s approach to moduli spaces provided powerful tools that, in the future, could lead to major advances in theoretical physics.

The premature departure of Maryam Mirzakhani represents a huge loss for the scientific community, and not only because of her scientific excellence. Winning a Fields Medal not only highlights the academic achievement of the recipient but, as Terence Tao (Fields Medalist, UCLA) wrote in a note about her, it also makes the recipient a role model. In Maryam’s case this was definitely true: as the first woman to win a Fields Medal, she will remain a reference figure for future generations of female scientists.

Alongside an extraordinary scientific career, she was particularly noted for her generosity and humble personality.
