Open science: A vision for collaborative, reproducible and reusable research

The goal of practising science in such a way that others can collaborate, question and contribute – known as “open science” – long predates the web. One could even argue that it began with the first academic journal 350 years ago, which enabled scientists to share knowledge and resources to foster progress. But the web offered opportunities far beyond anything before it, quickly transforming academic publishing and giving rise to greater sharing in areas such as software. Alongside the open-source (see Inspired by software), open-access (see A turning point for open-access publishing) and open-data (see Preserving the legacy of particle physics) movements grew the era of open science, which aims to encompass the scientific process as a whole.

Today, numerous research communities, political circles and funding bodies view open science and reproducible research as vital to accelerate future discoveries. Yet, to fully reap the benefits of open and reproducible research, it is necessary to start implementing tools to power a more profound change in the way we conduct and perceive research. This poses both sociological and technological challenges, starting from the conceptualisation of research projects, through conducting research, to how we ensure peer review and assess the results of projects and grants. New technologies have brought open science within our reach, and it is now up to scientific communities to agree on the extent to which they want to embrace this vision.

Particle physicists were among the first to embrace the open-science movement, sharing preprints and building a deep culture of using and sharing open-source software. The cost and complexity of experimental particle physics, which make complete replication of measurements unfeasible, present unique challenges in terms of open data and scientific reproducibility. It may even be argued that openness itself, in the sense of having unfettered access to data from its inception, is not particularly advantageous.

Take the existing data-management policies of the LHC collaborations: while physicists generally strive to be open in their research, the complexity of the data and analysis procedures means that data become publicly open only after a certain embargo period that is used to assess their correctness. The science is thus born “closed”. Instead of thinking about “open data” from its inception, it is more useful to speak about FAIR (findable, accessible, interoperable and reusable) data, a term coined by the FORCE11 community. The data should be FAIR throughout the scientific process, from being initially closed to being made meaningfully open later to those outside the experimental collaborations.

True open science demands more than simply making data available: it needs to concern itself with providing information on how to repeat or verify an analysis performed on given datasets, producing results that can be reused by others for comparison, confirmation or simply for deeper understanding and inspiration. This requires worked examples of how the research was performed, accompanied by software, documentation, runnable scripts, notebooks, workflows and compute environments. It is often too late to try to document research in such detail once it has been published.

Two FAIR data repositories for particle physics, the “closed” CERN Analysis Preservation portal and the “open” CERN Open Data portal, emerged five years ago to address the community’s open-science needs. These digital repositories enable physicists to preserve, document, organise and share datasets, code and tools used during analyses. A flexible metadata structure helps researchers to define everything from experimental configurations to data samples, from analysis code to software libraries and environments used to analyse the data, accompanied by documentation and links to presentations and publications. The result is a standard way to describe and document an analysis for the purposes of discoverability and reproducibility.

Recent advances in the IT industry allow us to encapsulate the compute environments in which an analysis was conducted. Capturing information about how the analysis was carried out can be achieved via a set of runnable scripts, notebooks, structured workflows and “containerised” pipelines. Complementary to the data repositories, a third service named REANA (reusable analyses) allows researchers to submit parameterised computational workflows to run on remote compute clouds. It can be used to reinterpret preserved analyses but also to run “active” analyses before they are published and preserved, with the underlying philosophy that physics analyses should be automated from inception so that they can be executed without manual intervention. Future reuse and reinterpretation starts with the first commit of the analysis code; altering an already-finished analysis to facilitate its eventual reuse after publication often comes too late.
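
To make the idea of a parameterised, containerised pipeline concrete, the sketch below shows how such an analysis recipe might be described and resolved. It is purely illustrative: the step names, file names, container images and field layout are hypothetical, and services such as REANA use their own declarative specification formats rather than this Python structure.

```python
# A minimal, hypothetical sketch of a parameterised, containerised
# analysis pipeline: each step declares a pinned compute environment
# and a command whose placeholders are filled from shared parameters.

workflow = {
    "parameters": {"dataset": "data/dimuons.csv", "nbins": 50},
    "steps": [
        {
            "name": "select-events",
            "container": "python:3.8",
            "command": "python select.py --input {dataset} --output selected.csv",
        },
        {
            "name": "make-histogram",
            "container": "python:3.8",
            "command": "python plot.py --input selected.csv --nbins {nbins}",
        },
    ],
}


def resolve(spec):
    """Substitute the declared parameters into each step's command string."""
    params = spec["parameters"]
    return [
        (step["name"], step["container"], step["command"].format(**params))
        for step in spec["steps"]
    ]


if __name__ == "__main__":
    # In a real service the resolved commands would be dispatched to
    # containers on a remote compute cloud; here we simply print them.
    for name, container, command in resolve(workflow):
        print(f"[{container}] {name}: {command}")
```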

Full control

The key guiding principle of the analysis preservation and reuse framework is to leave the decision as to when a dataset or a complete analysis is shared, privately or publicly, in the hands of the researchers. This gives the experiment collaborations full control over the release procedures, and thus fully supports internal processing and review protocols before the results are published on community services, such as arXiv, HEPData and INSPIRE.

The CERN Open Data portal was launched in 2014 amid a discussion as to whether primary particle-physics data would find any use outside of the LHC collaborations. Within a few years, the first paper based on open data from the CMS experiment was published (see Preserving the legacy of particle physics).

Three decades after the web was born, science is being shared more openly than ever and particle physics is at the forefront of this movement. As we have seen, however, simple compliance with data and software openness is not enough: we also need to capture, from the start of the research process, runnable recipes, software environments, computational workflows and notebooks. The increasing demand from funding agencies and policymakers for open data-management plans, coupled with progress in information technology, leads us to believe that the time is ripe for this change.

Sharing research in an easily reproducible and reusable manner will facilitate knowledge transfer within and between research teams, accelerating the scientific process. This fills us with hope that three decades from now, even if future generations may not be able to run our current code on their futuristic hardware platforms, they will at least be well equipped to understand the processes behind today’s published research in sufficient detail to be able to check our results and potentially reveal something new.

A turning point for open-access publishing

High-energy physics (HEP) has been at the forefront of open-access publishing, the long-sought ideal to make scientific literature freely available. An early precursor to the open-access movement in the late 1960s was the database management system SPIRES (Stanford Physics Information Retrieval System), which aggregated all available (paper-copy) preprints that were sent between different institutions. SPIRES grew to become the first database accessible through the web in 1991 and later evolved into INSPIRE-HEP, hosted and managed by CERN in collaboration with other research laboratories.

The electronic era

The birth of the web in 1989 changed the publishing scene irreversibly. Vast sums were invested to take the industry from paper to online and to digitise old content, resulting in a migration from the sale of printed copies of journals to electronic subscriptions. From 1991, helped by the early adoption by particle physicists, the self-archiving repository arXiv.org allowed rapid distribution of electronic preprints in physics and, later, mathematics, astronomy and other sciences. The first open-access journals then began to sprout up and in the early 2000s three major international initiatives – the Budapest Open Access Initiative, the Bethesda Statement on Open Access Publishing and the Berlin Declaration on Open Access to Knowledge in the Sciences and Humanities – set about leveraging the new technology to grant universal free access to the results of scientific research.

Today, roughly one quarter of all scholarly literature in sciences and humanities is open access. In HEP, the figure is almost 90%. The Sponsoring Consortium for Open Access Publishing in Particle Physics (SCOAP3), a global partnership between libraries, national funding agencies and publishers of HEP journals, has played an important role in HEP’s success. Designed at CERN, SCOAP3 started operation in 2014 and removes subscription fees for journals and any expenses scientists might incur to publish their articles open access by paying publishers directly. Some 3000 institutions from 43 countries (figure 1) contribute financially according to their scientific output in the field, re-using funds previously spent on subscription fees for journals that are now open access.

“SCOAP3 has demonstrated how open access can increase the visibility of research and ease the dissemination of scientific results for the benefit of everyone,” says SCOAP3 operations manager Alex Kohls of CERN. “This initiative was made possible by a strong collaboration of the worldwide library community, researchers, as well as commercial and society publishers, and it can certainly serve as an inspiration for open access in other fields.”

Plan S

On 4 September 2018, a group of national funding agencies, the European Commission (EC) and the European Research Council – under the name “cOAlition S” – launched a radical initiative called Plan S. Its aim is to ensure that, by 2020, all scientific publications resulting from research funded by public grants are published in compliant open-access journals or platforms. Robert-Jan Smits, the EC’s open-access envoy and one of the architects of Plan S, cites SCOAP3 as an inspiration for the project and says that momentum for Plan S has been building for two decades. “During those years many declarations, such as the Budapest and Berlin ones, were adopted, calling for a rapid transition to full and immediate open access. Even the 28 science ministers of the European Union issued a joint statement in 2016 that open access to scientific publications should be a reality by 2020,” says Smits. “The current situation shows, however, that there is still a long way to go.”

Recently, China released position papers supporting the efforts of Plan S, which could mark a key moment for the project. But the reaction of scientists around the world has been mixed. An open letter published in September by biochemist Lynn Kamerlin of Uppsala University in Sweden, attracting more than 1600 signatures at the time of writing, argues that Plan S would severely limit researchers’ options to publish in suitable high-quality journals, possibly splitting the global scientific community into two separate systems. Another open letter, published in November by biologist Michael Eisen at the University of California, Berkeley, with around 2000 signatures, backs the principles of Plan S and supports its commitment “to continue working with funders, universities, research institutions and other stakeholders until we have created a stable, fair, effective and open system of scholarly communication.”

Challenges ahead

High-energy physics is already aligned to the Plan S vision thanks to SCOAP3, says Salvatore Mele of CERN, who is one of SCOAP3’s architects. But for other disciplines “the road ahead is likely to be bumpy”. “Funders, libraries and publishers have cooperated through CERN to make SCOAP3 possible. As most of the tens of thousands of scholarly journals today operate on a different model, with access mostly limited to readers paying subscription fees, this vision implies systemic challenges for all players: funders, libraries, publishers and, crucially, the wider research community,” he says.

It is publishers who are likely to face the biggest impact from Plan S. However, the Open Access Scholarly Publishers Association (OASPA) – which includes, among others, the American Physical Society, IOP Publishing (which publishes CERN Courier) and The Royal Society – recently published a statement of support, claiming OASPA “would welcome the opportunity to provide guidance and recommendations for how the funding of open-access publications should be implemented within Plan S”, while emphasising that smaller publishers, scholarly societies and new publishing platforms need to be included in the decision-making process.

Responding to an EC request for Plan S feedback that was open until 8 February, however, publishers have expressed major concerns about the pace of implementation and about the consequences of Plan S for hybrid journals. In a statement on 12 February, the European Physical Society, while supportive of the Plan S rationale, wrote that “several of the governing principles proposed for its implementation are not conducive to a transition to open access that preserves the important assets of today’s scientific publication system”. In another statement, the world’s largest open-access publisher, Springer Nature, released a list of six recommendations for funding bodies worldwide to adopt in order for full open access to become a reality, highlighting the differences between “geographic, funder and disciplinary needs”. In parallel, a group of learned societies in mathematics and science in Germany has reacted with a statement citing a “precipitous process” that infringes the freedom of science, and urged cOAlition S to “slow down and consider all stakeholders”.

Global growth

Smits thinks traditional publishers, which are a critical element in quality control and rigorous peer review in scholarly literature, should take a fresh look, for example by implementing more transparent metrics. “It is obvious that the big publishers that run the subscription journals and make enormous profits prefer to keep the current publishing model. Furthermore, the dream of each scientist is to publish in a so-called prestigious high-impact journal, which shows that the journal impact factor is still very present in the academic world,” says Smits. “To arrive at the necessary change in academic culture, new metrics need to be developed to assess scientific output. The big challenge for cOAlition S is to grow globally, by having more funders signing up.”

Undoubtedly we are at a turning point between the old and new publishing worlds. The EC already requires that all publications from projects receiving its funding be made open access. But Plan S goes further, proposing an outright shift in scholarly publication. It is therefore crucial to ensure a smooth shift that takes into account all the actors, says Mele. “Thanks to SCOAP3, which has so far supported the publication of more than 26,000 articles, the high-energy physics community is fortunate to meet the vision of Plan S, while retaining researcher choice of the most appropriate place to publish their results.” 

Preserving the legacy of particle physics

In the 17th century, Galileo Galilei looked at the moons of Jupiter through a telescope and recorded his observations in his now-famous notebooks. Galileo’s notes – his data – survive to this day and can be reviewed by anyone around the world. Students, amateurs and professionals can replicate Galileo’s data and results – a tenet of the scientific method.

In particle physics, with its unique and expensive experiments, it is practically impossible for others to attempt to reproduce the original work. When it is impractical to gather fresh data to replicate an analysis, we settle for reproducing the analysis with the originally obtained data. However, a 2013 study by researchers at the University of British Columbia, Canada, estimates that the odds of scientific data existing in an analysable form decline by about 17% each year.

Indeed, just a few years down the line it might not even be possible for researchers to revisit their own data due to changes in formats, software or operating systems. This has led to growing calls for scientists to release and archive their data openly. One motivation is moral: society funds research and so should have access to all of its outputs. Another is practical: a fresh look at data could enable novel research and lead to discoveries that may have eluded earlier searches.

As with open-access publishing (see A turning point for open-access publishing), governments have started to impose demands on scientists regarding the availability and long-term preservation of research data. The European Commission, for example, has piloted the mandatory release of open data as part of its Horizon 2020 programme and plans to invest heavily in open data in the future. An increasing number of data repositories have been established for life and medical sciences as well as for social sciences and meteorology, and the idea is gaining traction across disciplines. Only days after they announced the first observation of gravitational waves, the LIGO and Virgo collaborations made their data public. NASA also releases data from many of its missions via open databases, such as exoplanet catalogues. The Natural History Museum in London makes data from millions of specimens available via a website and, in the world of art, the Rijksmuseum in Amsterdam provides an interface for developers to build apps featuring historic artworks.

Data levels

The open-data movement is of special interest to particle physics, owing to the uniqueness and large volume of the datasets involved in discoveries such as that of the Higgs boson at the Large Hadron Collider (LHC). The four main LHC experiments have started to release their data periodically in an open manner, and these data can be classified into four levels. The first consists of the data shown in final publications, such as plots and tables, while the second concerns datasets in a simplified format that are suitable for “lightweight” analyses in educational or similar contexts. The third level comprises the data used for analysis by the researchers themselves, requiring specialised code and dedicated computing resources, and the fourth and most complex level is the raw data generated by the detectors, which requires petabytes of storage and, being uncalibrated, is of little use until it has been processed into the third level.

In late 2014 CERN launched an open-data portal and released research data from the LHC for the first time. The data, collected by the CMS experiment, represented half the level-three data recorded in 2010. The ALICE experiment has also released level-three data from proton–proton as well as lead–lead collisions, while all four collaborations – including ATLAS and LHCb – have released subsets of level-two data for education and outreach purposes.

Proactive policy

The story of open data at CMS goes back to 2011. “We started drafting an open-data policy, not because of pressure from funding agencies but because defining our own policy proactively meant we did not have an external body defining it for us,” explains Kati Lassila-Perini, who leads the collaboration’s data-preservation project. CMS aims to release half of each year’s level-three data three years after data taking, and 100% of the data within a ten-year window. By guaranteeing that people outside CMS can use these data, says Lassila-Perini, the collaboration can ensure that the knowledge of how to analyse them is not lost, while allowing external researchers to look for things the collaboration might not have time for. To allow external re-use of the data, CMS released appropriate metadata as well as analysis examples. The datasets soon found takers and, in 2017, a group of theoretical physicists not affiliated with the collaboration published two papers using them. CMS has since released half its 2011 data (corresponding to around 200 TB) and half its 2012 data (1 PB), with the first releases of level-three data from the LHC’s Run 2 in the pipeline.

The LHC collaborations have been releasing simpler datasets for educational activities from as early as 2011, for example for the International Physics Masterclasses that involve thousands of high-school students around the globe each year. In addition, CMS has made available several Jupyter notebooks – a browser-based analysis platform named with a nod to Galileo – in assorted languages (programming and human) that allow anyone with an internet connection to perform a basic analysis. “The real impact of open data in terms of numbers of users is in schools,” says Lassila-Perini. “It makes it possible for young people with no previous contact with coding to learn about data analysis and maybe discover how fascinating it can be.” Also available from CMS are more complex examples aimed at university-level students.

Open-data endeavours by ATLAS are very much focused on education, and the collaboration has provided curated datasets for teaching in places that may not have substantial computing resources or internet access. “Not even the documentation can rely on online content, so everything we produce needs to be self-contained,” remarks Arturo Sánchez Pineda, who coordinates ATLAS’s open-data programme. ATLAS datasets and analysis tools, which also rely on Jupyter notebooks, have been optimised to fit on a USB memory stick and allow simplified ATLAS analyses to be conducted just about anywhere in the world. In 2016, ATLAS released simplified open data corresponding to 1 fb–1 at 8 TeV, with the aim of giving university students a feel for what a real particle-physics analysis involves.

ATLAS open data have already found their way into university theses and have been used by people outside the collaboration to develop their own educational tools. Indeed, within ATLAS, new members can now choose to work on preparing open data as their qualification task to become an ATLAS co-author, says Sánchez Pineda. This summer, ATLAS will release 10 fb–1 of level-two data from Run 2, with more than 100 simulated physics processes and related resources. ATLAS does not provide level-three data openly and researchers interested in analysing these can do so through a tailored association programme, which 80 people have taken advantage of so far. “This allows external scientists to rely on ATLAS software, computing and analysis expertise for their project,” says Sánchez Pineda.

Fundamental motivation

CERN’s open-data portal hosts and serves data from the four big LHC experiments, also providing many of the software tools, including virtual machines, needed to run the analysis code. The OPERA collaboration recently started sharing its research data via the portal, and other particle-physics collaborations are interested in joining the project.

Although high-energy physics has made great strides in providing open access to research publications, we are still in the very early days of open data. Theorist Jesse Thaler of MIT, who led the first independent analysis using CMS open data, acknowledges that it is possible for people to get their hands on coveted data by joining an experimental collaboration, but sees a much brighter future with open data. “What about more exploratory studies where the theory hasn’t yet been invented? What about engaging undergraduate students? What about examining old data for signs of new physics?” he asks. These provocative questions serve as fundamental motivations for making all data in high-energy physics as open as possible. 

CERN’s ultimate act of openness

At a mere 30 years old, the World Wide Web already ranks as one of humankind’s most disruptive inventions. Developed at CERN in the early 1990s, it has touched practically every facet of life, impacting industry, penetrating our personal lives and transforming the way we transact. At the same time, the web is shrinking continents and erasing borders, bringing with it an array of benefits and challenges as humanity adjusts to this new technology.

This reality is apparent to all. What is less well known, but deserves recognition, is the legal dimension of the web’s history. On 30 April 1993, CERN released a memo (see image) that placed into the public domain all of the web’s underlying software: the basic client, basic server and library of common code. The document was addressed “To whom it may concern” – which would suggest the authors were not entirely sure who the target audience was. Yet, with hindsight, this line can equally be interpreted as an unintended address to humanity at large.

The legal implication was that CERN relinquished all intellectual property rights in this software. It was a deliberate decision, the intention being that a no-strings-attached release of the software would “further compatibility, common practices, and standards in networking and computer supported collaboration” – arguably modest ambitions for what turned out to be such a seismic technological step. To understand what seeded this development you need to go back to the 1950s, at a time when “software” would have been better understood as referring to clothing rather than computing.

European project

CERN was born out of the wreckage of World War II, playing a role, on the one hand, as a mechanism for reconciliation between former belligerents, while, on the other, offering European nuclear physicists the opportunity to conduct their research locally. The hope was that this would stem the “brain drain” to the US, from a Europe still recovering from the devastating effects of war.

In 1953, CERN’s future Member States agreed on the text of the organisation’s founding Convention, defining its mission as providing “for collaboration among European States in nuclear research of a pure scientific and fundamental character”. With the public acutely aware of the role that destructive nuclear technology had played during the war, the Convention additionally stipulated that CERN was to have “no concern with work for military requirements” and that the results of its work were to be “published or otherwise made generally available”.

In the early years of CERN’s existence, the openness resulting from this requirement for transparency was essentially delivered through traditional channels, in particular through publication in scientific journals. Over time, this became the cultural norm at CERN, permeating all aspects of its work both internally and with its collaborating partners and society at large. CERN’s release of the WWW software into the public domain, arguably in itself a consequence of the openness requirement of the Convention, could be seen as a precursor to today’s web-based tools that represent further manifestations of CERN’s openness: the SCOAP3 publishing model, open-source software and hardware, and open data.

Perhaps the best measure of how ingrained openness is in CERN’s ethos as a laboratory is to ask the question: “if CERN had known then what it knows now about the impact of the World Wide Web, would it still have made the web software available, just as it did in 1993?” We would like to suggest that, yes, our culture of openness would provoke the same response now as it did then, though no doubt a modern, open-source licensing regime would be applied.

A culture of openness

This, in turn, can be viewed as testament and credit to the wisdom of CERN’s founders, and to the CERN Convention, which remains the cornerstone of our work to this day.

Bottomonium suppression in lead–lead collisions

A report from the ALICE experiment

The study of the production of quarkonia, the bound states of heavy quark–antiquark pairs, is an important goal of the ALICE physics programme. The quarkonium yield is suppressed in heavy-ion collisions when compared with proton–proton collisions because the binding force is screened by the hot and dense medium. This suppression is expected to be greatest for events with high “centrality”, when the heavy ions collide head-on.

The ALICE collaboration has recently analysed the suppression of inclusive bottomonium (bb̅) production in lead–lead collisions relative to proton–proton collisions. This reduction is quantified in terms of the nuclear modification factor RAA, which is the ratio of the measured yield in lead–lead to proton–proton collisions corrected by the number of binary nucleon–nucleon collisions. An RAA value of unity would indicate no suppression whereas zero indicates full suppression. The bottomonium states ϒ(1S) and ϒ(2S) were measured via their decays to muon pairs at a centre-of-mass energy per nucleon–nucleon pair of 5.02 TeV, in the rapidity range 2.5 < y < 4, with a maximum transverse momentum of 15 GeV/c. No significant variation of RAA is observed as a function of transverse momentum or rapidity; however, production is suppressed more strongly with increasing centrality (figure 1). RAA decreases from 0.60±0.10(stat)±0.04(syst) for the peripheral 50–90% centrality class to 0.34±0.03(stat)±0.02(syst) for the 0–10% most central collisions.
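
Schematically, and in the notation commonly used for such measurements (the exact normalisation adopted in the analysis may differ), the nuclear modification factor for a given centrality class can be written as

$$ R_{\mathrm{AA}} = \frac{\mathrm{d}N_{\mathrm{PbPb}}/\mathrm{d}p_{\mathrm{T}}}{\langle N_{\mathrm{coll}}\rangle \, \mathrm{d}N_{pp}/\mathrm{d}p_{\mathrm{T}}}, $$

where ⟨N_coll⟩ is the average number of binary nucleon–nucleon collisions in that class, so RAA = 1 corresponds to lead–lead production behaving as a simple superposition of independent nucleon–nucleon collisions.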

Theoretical models must deal with the competing effects of melting and (re)generation of the ϒ within the quark–gluon plasma, the shadowing of parton densities in the initial state and “feed-down” from higher resonance states. Due to uncertainties on the parton densities, it is not yet known whether the direct production of ϒ(1S) is suppressed, or merely the feed-down from ϒ(2S) and other higher-mass states. Nevertheless, the precision of these measurements imposes significant new constraints on the modelling of ϒ production in lead–lead collisions.

Yong Ho Chin 1958–2019

Yong Ho Chin, a leading theoretical accelerator physicist at the High Energy Accelerator Research Organization (KEK) in Japan and chair of the beam dynamics panel of the International Committee for Future Accelerators (ICFA) since November 2016, unexpectedly passed away on 8 January.

In 1984, Yong Ho received his PhD in accelerator physics from the University of Tokyo for studies performed at KEK under the supervision of Masatoshi Koshiba, who won the Nobel Prize in Physics jointly with Raymond Davis Jr and Riccardo Giacconi in 2002. Yong Ho participated in the design and commissioning of the TRISTAN accelerator, and later in the designs of the KEKB and J-PARC accelerators, along with major contributions to JLC (the Japan Linear Collider) and ILC (the International Linear Collider). In the 1980s and 1990s he spent several years abroad, at DESY and CERN in Europe, and at LBL (now LBNL) in the US.

In his long and distinguished career, Yong Ho made numerous essential contributions in the fields of beam-coupling impedances, coherent beam instabilities, radio-frequency klystron development, space–charge and beam–beam collective effects. He considered his “renormalisation theory for the beam–beam interaction”, developed during his last six months at DESY in the 1980s, as his greatest achievement. However, in the accelerator community, Yong Ho Chin’s name is linked, in particular, to two computer codes he wrote and maintained, and which have been widely used over the past decades.

The first of these codes, developed by Yong Ho in the 1980s, is MOSES (MOde-coupling Single bunch instabilities in an Electron Storage ring), which computes the complex transverse coherent betatron tune shifts as a function of the beam current for a bunch interacting with a resonator impedance. The second well-known code, written by Yong Ho in the 1990s, is ABCI (Azimuthal Beam Cavity Interaction), used for impedance and wakefield calculations. It is a time-domain solver of the electromagnetic fields generated when a bunched beam with an arbitrary charge distribution passes through an axisymmetric structure, on or off axis.

In the mid-1990s, Yong Ho’s work expanded to two-stream beam instabilities. He rightly foresaw that such instabilities could potentially limit the performance of KEKB and organised and co-organised several international workshops to address this issue early on. Subsequently, he was put in charge of the development and modelling of the X-band klystron for the JLC. He also greatly contributed to the development of the multi-beam klystron now in use for large superconducting linacs, and to the optimisation of the J-PARC accelerators.

Yong Ho returned to the field of collective effects more than 10 years ago and remained extremely active there. Over the past few years, together with two other renowned accelerator physicists, Alexander W Chao and Michael Blaskiewicz, he developed a two-particle model to study the effects of the space–charge force on transverse coherent beam instabilities. The purpose of this model was to capture, in a simple picture, some of the essential physics of this intricate subject and at the same time provide a good starting point for newcomers joining the effort to solve this long-standing issue.

As illustrated by his role as chair of an ICFA panel, and by his co-organisation of a large number of international workshops and conferences (including PAC and LINAC), Yong Ho was devoted to serving the international physics community. He was a productive author, diligent referee and esteemed editor for several journals. In 2015 he was recognised with an Outstanding Referee Award by the American Physical Society, and just a few months ago, in the summer of 2018, Yong Ho was appointed associate editor of Physical Review Accelerators and Beams.

Yong Ho was a very good lecturer, teaching at different accelerator schools, including the CERN Accelerator School. He was also in charge of a collaboration programme in which young accelerator scientists were invited to spend a few weeks at KEK.

Yong Ho was a wonderful person and an outstanding scientist. We are very proud to have had the chance to work and collaborate with him. His passing is a great loss to the community and he will be sorely missed.

Reviews

The Soviet Atomic Project: How the Soviet Union Obtained the Atomic Bomb
by Lee G Pondrom
World Scientific

“Leave them in peace. We can always shoot them later.” Thus spoke Soviet Union leader Josef Stalin, in response to a query by Soviet security and secret police chief Lavrentiy Beria about whether research in quantum mechanics and relativity (considered by Marxists to be incompatible with the principles of dialectical materialism) should be allowed. With these words, a generation of Soviet physical scientists were spared a disaster like the one perpetrated on Soviet agriculture by Trofim Lysenko’s politically correct, pseudoscientific theories of genetics. The reason behind this judgement was the successful development of nuclear weapons by Soviet physical scientists and the recognition by Stalin and Beria of the essential role that these “bourgeois” sciences played in that development.

Political intrigue, the arms race, early developments of nuclear science, espionage and more are all present in this gripping book, which provides a comprehensive account of the intensive programme the Soviets embarked on in 1945, immediately after Hiroshima, to catch up with the US in the area of nuclear weapons. A great deal is known about the Manhattan project, from the key scientists involved to the many Los Alamos incidents – such as Fermi’s determination of the Alamogordo test-blast energy using scraps of paper and Feynman’s ability to crack his Los Alamos colleagues’ safes – that are intrinsic parts of the US nuclear/particle-physics community’s culture. By contrast, little is known, at least in the West, about the huge effort made by the war-ravaged Soviet Union in less than five years to reach strategic parity with the US.

Pondrom, a prominent experimental particle physicist with a life-long interest in Russia and its language, provides an intriguing narrative. It is based on a thorough study of available literature plus a number of original documents – many of which he translated himself – that gives a fascinating insight into this history-changing enterprise and into the personalities of the exceptional people behind it.

The success of the Soviet programme was primarily due to Igor Kurchatov, a gifted experimental physicist and outstanding scientific administrator, who was equally at ease with laboratory workers, prominent theoretical physicists and the highest leaders in government, including Beria and Stalin himself. Saddled with developing several huge and remotely located laboratories from scratch, he remained closely involved in many important nitty-gritty scientific and engineering problems. For example, Kurchatov participated hands-on and full-time in the difficult commissioning of Reactor A, the first full-scale reactor for plutonium-239 production at the sprawling Combine #817 laboratory, receiving, along the way, a radiation dose that was 100 times the safe limit that he had established for laboratory staff members.

Beria was the overall project controller and ultimate decision-maker. Although best known for his role as Stalin’s ruthless enforcer – Pondrom describes him as “supreme evil,” Sakharov as a “dangerous man” – he was also an extraordinary organiser and a practical manager. When asked in the 1970s, long after Beria’s demise, how best to develop a Soviet equivalent of Silicon Valley, Soviet Academy of Sciences president A P Alexandrov answered “Dig up Beria.” Beria promised project scientists improved living conditions and freedom from persecution if they performed well (and that they would “be sent far away” if they didn’t). His daily access to Stalin was critical for keeping the project on track. Most of the project’s manual construction work used slave labour from Beria’s gulag.

Both the US and Soviet projects were monumental in scope; Pondrom estimates the Manhattan project’s scale to be about 2% of the US economy. The Soviet project was of a similar scale, but in an economy one-tenth the size. The Soviets had some advantage from the information gathered by espionage (and the simple fact that they knew the Manhattan project had succeeded). Also, German scientists interned in Russia for the project played important support roles, especially in the large-scale purification of reactor-grade natural uranium. In addition, there was a nearly unlimited supply of unpaid labourers, as well as German prisoners of war with scientific and engineering backgrounds whose participation in the project was rewarded with better living conditions.

The book is crisply written and well worth the read. The text includes a number of translated segments of official documents plus extracts from memoirs of some of the people involved. So, although Pondrom sprinkles his opinions throughout, there is sufficient material to permit readers to make their own judgements. He doesn’t shirk from explaining some of the complex technical issues, which he (usually) addresses clearly and concisely. The appendices expand on technical issues, some on an elementary level for non-physicists, and others, including isotope extraction techniques, nuclear reaction issues and encryption, in more detail, much of which was new to me.

On the other hand, the confusing assortment of laboratories, their locations, leaders and primary tasks begged for some kind of summary or graphics. The simple chart describing the Soviets’ complex espionage network in the US was useful for keeping track of the roles of the persons involved; a similar chart for the laboratories and their roles would have been equally valuable. The book would also have benefited from a final edit that might have eliminated some of the repetition and caught some obvious errors. But these are minor faults in an engaging, informative book.

Stephen L Olsen, University of Chinese Academy of Sciences.

Advances in Particle Therapy: A multidisciplinary approach
by Manjit Dosanjh and Jacques Bernier (eds)
CRC Press, Taylor and Francis Group

A new volume in the CRC Press series on Medical Physics and Biomedical Engineering, this interesting book on particle therapy is structured in 19 chapters, each written by one or more co-authors out of a team of 49 experts (including the two editors). Most are medical physicists, radiation oncologists and radiobiologists who are well renowned in the field.

The opening chapter provides a brief and useful summary of the evolution of modern radiation oncology, starting from the discovery of X rays up to the latest generation of proton and carbon-ion accelerators. The second and third chapters are devoted to the radiobiological aspects of particle therapy. After an introductory part where the concepts of relative biological effectiveness (RBE) and oxygen-enhancement ratio are defined, this section of the book goes on to review the most recent knowledge gained in the field, from DNA structure to the production of radiation-induced damage, to secondary cancer risk. The conclusion is that, as biological effects and clinical response are functions of a broad range of parameters, we are still far from a complete understanding of all radiobiological aspects underlying particle therapy, as well as from a universally accepted RBE model providing the optimum RBE value to be used for any given treatment.

Chapter 4 and, later, chapter 18 are dedicated to particle-therapy technologies. The first provides a simple explanation of the operating principles of particle accelerators and then goes into the details of beam delivery systems and dose conformation devices. Chapter 18 recalls the historical development of particle therapy in Europe, first with the European Light Ion Medical Accelerator (EULIMA) study and Proton-Ion Medical Machine Study (PIMMS), and then with the design and construction of the HIT, CNAO and MedAustron clinical facilities (CERN Courier January/February 2018 p25). It then provides an outlook on ongoing and expected future technological developments in accelerator design.

Chapter 5 discusses the general requirements for setting up a particle therapy centre, while the following chapter provides an extensive review of imaging techniques for both patient positioning and treatment verification. These are made necessary by the rapid spread of active beam delivery technologies (scanning) and robotic patient positioning systems, which have strongly improved dose delivery. Chapter 7 reviews therapeutic indications for particle therapy and explains the necessity to integrate it with all other treatment modalities so that oncologists can decide on the best combination of therapies for each individual patient. Chapter 8 reports on the history of the European Network of Light Ion Hadron Therapy (ENLIGHT) and its role in boosting collaborative efforts in particle therapy and in training specialists.

The central part of the book (chapters 9 to 15) reviews worldwide clinical results and indications for particle therapy from different angles, pointing out the inherent difficulties in comparing conventional radiation therapy and particle therapy. It analyses the two perspectives under which the dosimetric properties of particles can translate into clinical benefit: decreasing the dose to normal tissue to reduce complications, or scaling the dose to the tumour to improve tumour control without increasing the dose to normal tissue.

Chapter 16 discusses the economic aspects of particle therapy, such as cost-effectiveness and budget impact, while the following chapter describes the benefits of a “rapid learning health care” system. The last chapter discusses global challenges in radiation therapy, such as how to bring medical electron linac technology to low- and middle-income countries (CERN Courier March 2017 p31). I found this last chapter slightly confusing, as I did not understand what is meant by “radiation rotary” and I could not fully grasp the mixing of different topics, such as particle therapy and nuclear-detonation terrorism. This part also seemed too US-focussed when discussing the various initiatives, and I did not agree with some of the statements (e.g. that particle therapy has undergone a cost reduction of an order of magnitude or more in the past 10 years).

Overall, this book provides a useful compendium of state-of-the-art particle therapy, and each chapter is supported by an extensive bibliography, meeting the expectations of both experts and readers interested in gaining an overview of the field. The book is well structured and enables readers to go through only selected chapters, in whatever order they prefer. Some knowledge of radiobiology, clinical oncology and accelerator technology is assumed. It is disappointing that clinical dosimetry and treatment planning are not addressed other than in a brief mention in chapter 5, but perhaps this is something to consider for a second edition.

Marco Silari, CERN.

Mad maths
Theatre, CERN Globe
24 January 2019

Do you remember your high-school maths teachers? Were they strict? Funny? Extraordinary? Boring? The theatre comedy “Mad maths” presents the two most unusual teachers you can imagine. Armed with chalk and measuring tapes, Mademoiselle X and Mademoiselle Y aim to heal all those with maths phobia, and teach the audience more about their favourite subject.

On 24 January CERN’s fully booked Globe of Science and Innovation turned into a bizarre classroom. Marching along well-defined 90° angles, and meticulously measuring everything around them, the comedians Sophie Leclercq and Garance Legrou play with numbers and fight at the blackboard to make maths entertaining. The dialogues are juiced up with rap and music, spiced by friendly maths jargon, and seasoned with a hint of craziness. As the pair romp through trigonometry, philosophise about the number zero and invent new counting systems of dubious benefit, the rhythm grows exponentially. For example, did you know that some people’s mood goes up and down like a sine function? That you can make music with fractions? And that some bureaucratic steps are noncommutative?

This comedy show originated from an idea by Olivier Faliez and Kevin Lapin of the French theatre company Sous un autre angle. Having first studied maths at university and then attended theatre school, Faliez combined his two passions in 2003 to create an entertaining programme based on maths-driven jokes and unexpected turns of events.

Perfect for families with children, this French play has already been performed more than 500 times, especially at science festivals and schools. The topics are customised depending on the level of the students. Future showings are scheduled in Castanet (15 March), Les Mureaux (22 March) and in several schools in France and other countries. Teachers and event organisers who are interested in the show are advised to contact Sophie Leclercq.

At times foolish, at times witty, it is worth watching if and only if you want to unwind and rediscover maths from a different perspective.

Letizia Diamante, CERN.

The Life, Science and Times of Lev Vasilevich Shubnikov, Pioneer of Soviet Cryogenics
by L J Reinders
Springer

This book is a biography of Russian physicist Lev Vasilevich Shubnikov, whose work is scarcely known despite its importance and broad reach. It is also a portrayal of the political and ideological environment existing in the Soviet Union in the late 1930s under Stalin’s repressive regime.

While at Leiden University in the Netherlands, which at the time had the most advanced laboratory for low-temperature physics in the world, Shubnikov co-discovered the Shubnikov–De Haas effect: the first observation of quantum-mechanical oscillations of a physical quantity (in this case the resistance of bismuth) at low temperatures and high magnetic fields.

In 1930 Shubnikov went to Kharkov (as it is called in Russian) in Ukraine, where he built up the first low-temperature laboratory in the Soviet Union. There he led an impressive scientific programme and, together with his team, he discovered what is now known as type-II superconductivity (or the Shubnikov phase) and nuclear paramagnetism. In addition, independently of and almost simultaneously with Meissner and Ochsenfeld, they observed the complete diamagnetism of superconductors (today known as the Meissner effect).

In 1937, aged just 36, Shubnikov was arrested, processed by Stalin’s regime and executed “for no other reason than that he had shown evidence of independent thought”, as the author states.

Based on thorough document research and a collection of memories from people who knew Shubnikov, this book will appeal not only to those curious about this physicist, but also to readers interested in the history of Soviet science, especially the development of Soviet physics in the 1930s and the impact that Stalin’s regime had on it.

Virginia Greco, CERN.

The Workshop and the World: What ten thinkers can teach us about science and authority
by Robert P Crease
W. W. Norton & Company

In this book, science historian Robert Crease discusses the concept of scientific authority, how it has changed along the centuries, and how society and politicians interact with scientists and the scientific process – which he refers to as the “workshop”.

Crease begins with an introduction about current anti-science rhetoric and science denial – the most evident manifestation of which is probably the claim that “global warming is a hoax perpetrated by scientists with hidden agendas”.

Four sections follow. In part one, the author introduces the first articulation of scientific authority through the stories of three renowned scientists and philosophers: Francis Bacon, Galileo Galilei and René Descartes. Here, some vulnerabilities of the authority of the scientific workshop emerge, but they are discussed further in the second section of the book through the stories of thinkers like Giambattista Vico, Mary Shelley and Auguste Comte.

Part three attempts to understand the deeply complicated relationship between the workshop and the world, described through the stories of Max Weber, Kemal Atatürk and his precursors, and Edmund Husserl. The final section is all about reinventing authority, discussed through the work of Hannah Arendt, a thinker who barely escaped the Holocaust and who provided a deep analysis of authority as well as clues as to how it might be restored.

With this brilliantly written essay, Crease aims to explore what practising science for the common good means, and to understand what creates a social and political atmosphere in which science denial can flourish. Finally, Crease suggests what can be done to ensure that science and scientists regain the trust of the people.

Virginia Greco, CERN.

First light for supersymmetry

schematic representation of a supersymmetric laser array

Ideas from supersymmetry have been used to address a longstanding challenge in optics – how to suppress unwanted spatial modes that limit the beam quality of high-power lasers. Mercedeh Khajavikhan at the University of Central Florida in the US and colleagues have created the first supersymmetric laser array, paving the way towards new schemes for scaling up the radiance of integrated semiconductor lasers.

Supersymmetry (SUSY) is a possible additional symmetry of space–time that would enable bosonic and fermionic degrees of freedom to be “rotated” between one another. Devised in the 1970s in the context of particle physics, it suggests the existence of a mirror-world of supersymmetric particles and promises a unified description of all fundamental interactions. “Even though the full ramification of SUSY in high-energy physics is still a matter of debate that awaits experimental validation, supersymmetric techniques have already found their way into low-energy physics, condensed matter, statistical mechanics, nonlinear dynamics and soliton theory as well as in stochastic processes and BCS-type theories, to mention a few,” write Khajavikhan and collaborators in Science.

The team applied the SUSY formalism first proposed by Ed Witten of the Institute for Advanced Study in Princeton to force a semiconductor laser array to operate exclusively in its fundamental transverse mode. In contrast to previous schemes developed to achieve this, such as common antenna-feedback methods, SUSY introduces a global and systematic method that applies to any type of integrated laser array, explains Khajavikhan. “Now that the proof of concept has been demonstrated, we are poised to develop high-power electrically pumped laser arrays based on a SUSY design. This can be applicable to various wavelengths, ranging from visible to mid-infrared lasers.”

To demonstrate the concept, the Florida-based team paired the unwanted modes of the main laser array (comprising five coupled ridge-waveguide cavities etched from quantum wells on an InP wafer) with a lossy superpartner (an array of four waveguides left unpumped). Optical strategies were used to build a superpartner index profile with propagation constants matching those of the four higher-order modes associated with the main array, and the performance of the SUSY laser was assessed using a custom-made optical setup. The results indicated that the existence of an unbroken SUSY phase (in conjunction with a judicious pumping of the laser array) can promote the in-phase fundamental mode and produce high-radiance emission.
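
The mode-pairing exploits a textbook construction from supersymmetric quantum mechanics; the lines below are a schematic reminder of that construction rather than the specific optical formalism used by the authors. Factorising a Hamiltonian as

$$ H_1 = A^{\dagger}A, \qquad H_2 = A A^{\dagger}, \qquad A = \frac{\mathrm{d}}{\mathrm{d}x} + W(x), $$

gives a superpartner H2 whose spectrum coincides with that of H1 except for the ground state of H1, which has no counterpart when SUSY is unbroken. Translated to coupled waveguides, a superpartner array can therefore be matched to every higher-order mode of the main array but not to its fundamental mode, which is why loss applied to the partner selectively drains the unwanted modes.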

“This is a remarkable example of how a fundamental idea such as SUSY may have a practical application, here increasing the power of lasers,” says SUSY pioneer John Ellis of King’s College London. “The discovery of fundamental SUSY still eludes us, but SUSY engineering has now arrived.”

MINOS squeezes sterile neutrino’s hiding ground

Newly published results from the MINOS+ experiment at Fermilab in the US cast fresh doubts on the existence of the sterile neutrino – a hypothetical fourth neutrino flavour that would constitute physics beyond the Standard Model. MINOS+ studies how muon neutrinos oscillate into other neutrino flavours as a function of distance travelled, using magnetised-iron detectors located 1 and 735 km downstream from a neutrino beam produced at Fermilab.

Neutrino oscillations, predicted more than 60 years ago and finally confirmed in 1998, explain the observed transmutation of neutrinos from one flavour to another as they travel. Tantalising hints of new-physics effects in short-baseline accelerator-neutrino experiments have persisted since 1995, when the Liquid Scintillator Neutrino Detector (LSND) at Los Alamos National Laboratory reported an excess of 88±23 electron-antineutrino events emerging from a muon–antineutrino beam. This suggested that muon antineutrinos were oscillating into electron antineutrinos along the way, but not in the way expected if there are only three neutrino flavours.

The plot thickened in 2007 when another Fermilab experiment, MiniBooNE, an 818 tonne mineral-oil Cherenkov detector located 541 m downstream from Fermilab’s Booster neutrino beamline, began to see a similar effect. The excess grew, and last November the MiniBooNE collaboration reported a 4.5σ deviation from the predicted event rate for the appearance of electron neutrinos in a muon neutrino beam. In the meantime, theoretical revisions in 2011 meant that measurements of neutrinos from nuclear reactors also show deviations suggestive of sterile-neutrino interference: the so-called “reactor anomaly”.

Tensions have been running high. The latest results from MINOS+, first reported in 2017 and recently accepted for publication in Physical Review Letters, fail to confirm the MiniBooNE signal. The MINOS+ results are also consistent with those from a comparable analysis of atmospheric neutrinos in 2016 by the IceCube detector at the South Pole. “LSND, MiniBooNE and the reactor data are fairly compatible when interpreted in terms of sterile neutrinos, but they are in stark conflict with the null results from MINOS+ and IceCube,” says theorist Joachim Kopp of CERN. “It might be possible to come up with a model that allows compatibility, but the simplest sterile neutrino models do not allow this.” In late February, the long-baseline T2K experiment in Japan joined the chorus of negative searches for the sterile neutrino, although excluding a different region of parameter space.

Whereas MiniBooNE and LSND sought to observe a second-order flavour transition (in which a muon neutrino morphs into a sterile and then electron neutrino), MINOS+ and IceCube are sensitive to a first-order muon-to-sterile transition that would reduce the expected flux of muon neutrinos. Such “disappearance” experiments are potentially more sensitive to sterile neutrinos, provided systematic errors are carefully modelled.
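
In the simplest “3+1” scenario, and in the usual two-flavour approximation (a sketch of the standard formula rather than the full fits performed by the experiments), the muon-neutrino survival probability measured by a disappearance experiment is

$$ P(\nu_\mu \to \nu_\mu) \simeq 1 - \sin^2 2\theta_{\mu\mu}\,\sin^2\!\left(\frac{1.27\,\Delta m^2_{41}\,[\mathrm{eV}^2]\; L\,[\mathrm{km}]}{E\,[\mathrm{GeV}]}\right), $$

where the effective mixing angle θμμ encodes the muon-flavour content of the fourth, mostly sterile, mass state. A deficit of muon neutrinos at the far detector, beyond that expected from standard three-flavour oscillations, would therefore signal mixing with a sterile neutrino; no such deficit is seen.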

“The MiniBooNE observations interpreted as a pure sterile neutrino oscillation signal are incompatible with the muon-neutrino disappearance data,” says MINOS+ spokesperson Jenny Thomas of University College London. “In the event that the most likely MiniBooNE signal were due to a sterile neutrino, the signal would be unmissable in the MINOS/MINOS+ neutral-current and charged-current data sets.” Taking into account simple unitarity arguments, adds Thomas, the latest MINOS+ analysis is incompatible with the MiniBooNE result at the 2σ level, and at 3σ below a “mass-splitting” of 1 eV2 (see figure 1).

plot showing the coupling and mass splitting of sterile neutrinos with the established neutrinos

The sterile-neutrino hypothesis is also in tension with cosmological data, says theorist Silvia Pascoli of Durham University. “Sterile neutrinos with these masses and mixing angles would be copiously produced in the early universe and would make up a significant fraction of hot dark matter. This is somewhat at odds with cosmological observations.”

One possibility for the surplus electron–neutrino-like events in MiniBooNE is insufficient accuracy in the way neutrino–nucleus interactions in the detector are modelled – a challenge for neutrino-oscillation experiments generally. According to MiniBooNE collaborator Teppei Katori, one effect proposed to account for the MiniBooNE anomaly is neutral-current single-gamma production. “This rare process has many theoretical interests, both within and beyond the Standard Model, but the calculations are not yet tractable at low energies (around 1 GeV) as they are in the non-perturbative QCD region,” he says.

MINOS+ is now analysing its final dataset and working on a direct comparison with MiniBooNE to look for electron-neutrino appearance as well as the present study on muon-neutrino disappearance. Clarification could also come from other short-baseline experiments at Fermilab, in particular MicroBooNE, which has been operating since 2015, and two liquid-argon detectors ICARUS and SBND (CERN Courier June 2017 p25). The most exciting possibility is that new physics is at play. “One viable explanation requires a new neutral-current interaction mediated by a new GeV-scale vector boson and sterile neutrinos with masses in the hundreds of MeV,” explains Pascoli. “So far this has not been excluded. And it is theoretically consistent. We have to wait and see.”

CMS beam pipe to be mined for monopoles

The original CMS beampipe

On 18 February the CMS and MoEDAL collaborations at CERN signed an agreement that will see a 6 m-long section of the CMS beam pipe cut into pieces and fed into a SQUID in the name of fundamental research. The 4 cm diameter beryllium tube – which was in place (right) from 2008 until its replacement by a new beampipe for LHC Run 2 in 2013 – is now under the proud ownership of MoEDAL spokesperson Jim Pinfold and colleagues, who will use it to search for the existence of magnetic monopoles.

Magnetic monopoles with multiple magnetic charge, if produced in high-energy particle collisions at the LHC, are so highly ionising that they could stop in the material surrounding the collision points and bind there with the beryllium nuclei of the beam pipe. To detect the trapped monopoles, Pinfold and coworkers will pass the beam-pipe material through superconducting loops and look for a non-decaying current using highly precise SQUID-based magnetometers.
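
The principle of the measurement can be stated in one line. In Gaussian units, a monopole carrying a single Dirac charge gD = ħc/2e that passes through the superconducting loop changes the flux threading it by

$$ \Delta\Phi = 4\pi g_{\mathrm{D}} = \frac{hc}{e} = 2\Phi_0, \qquad \Phi_0 = \frac{hc}{2e}, $$

i.e. twice the superconducting flux quantum, and it is this flux step that sustains the persistent current. The relation quoted here is the standard textbook one, given only to illustrate why a calibrated SQUID magnetometer can count trapped magnetic charge in units of the Dirac charge.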

Materials from the CDF and D0 detectors at the Tevatron and from the H1 detector at HERA were subjected to such searches during the 1990s, and the first pieces of beam pipe from the LHC experiments, taken from the CMS region, were tested in 2012. But these were from regions far from the collision point, whereas the new study will use material surrounding the CMS central-interaction region. “It’s the most directly exposed piece of material of the experiment that the monopoles encounter when produced and moving away from the collision point,” says Albert De Roeck of CMS and MoEDAL, who was involved in the previous LHC and HERA studies. “Although no signs of monopoles have shown up in data so far, this new study pushes the search for monopoles with magnetic charge well beyond the five Dirac charges currently achievable with the MoEDAL detector.”

MoEDAL technical coordinator Richard Soluk and a small team of technicians will first cut the beampipe into bite-sized pieces at a special facility constructed at the Centre for Particle Physics at the University of Alberta, Canada, where they have to be especially careful because beryllium is highly toxic. The resulting pieces, carefully enshrined in plastic, will then be shipped back to Europe to the SQUID Magnetometer Laboratory at ETH Zurich, where the freshly sliced beam pipe will undergo a short measurement campaign planned for early summer. “On the analysis front we have to estimate how many monopoles would have been trapped in the beam pipe during its deployment at CMS as a function of monopole mass, spin, magnetic charge, kinetic energy and production mechanism,” says Pinfold.

The latest search is complementary to general monopole searches that have already been carried out by the ATLAS and MoEDAL collaborations. Deployed at LHC Point 8, MoEDAL contains more than 100 m2 of nuclear-track detectors that are sensitive only to new physics and has a dedicated trapping detector consisting of around one tonne of aluminum.

“Most modern theories such as GUTs and string theory require the existence of monopoles,” says Pinfold. “The monopole is the most important particle not yet found.”
