Since the advent of the Large Hadron Collider (LHC), CERN has been recognised as the world’s leading laboratory for experimental particle physics. More than 10,000 people work at CERN on a daily basis. The majority are members of universities and other institutions worldwide, and many are young students and postdocs. The experience of working at CERN therefore plays an important role in their careers, be it in high-energy physics or a different domain.
The value of education
In 2016 the CERN management appointed a study group to collect information about the careers of students who have completed their thesis studies in one of the four LHC experiments. Similar studies were carried out in the past, also including people working on the former LEP experiments, and were mainly based on questionnaires sent to the team leaders of the various collaborating institutes. The latest study collected a larger and more complete sample of up-to-date information from all the experiments, with the aim of reaching young physicists who have left the field. This allows a quantitative measure of the value of the education and skills acquired at CERN when seeking jobs in other domains, which is of prime importance for evaluating the impact and role of CERN’s culture.
Following an initial online questionnaire with 282 respondents, the results were presented to the CERN Council in December 2016. The experience demonstrated the potential for collecting information from a wider population and for deepening and customising the questions. Consequently, it was decided to enlarge the study to all persons who have been or are still involved with CERN, without any particular restrictions. Two distinct communities were polled with separate questionnaires: past and current CERN users (mainly experimentalists at any stage of their career), and theorists who had collaborated with the CERN theory department. The questionnaires were open for a period of about four months and attracted 2692 and 167 participants from the experimental and theoretical communities, respectively. A total of 84 nationalities were represented, with German, Italian and US nationals making up around half, and the distribution of participants by experiment was: ATLAS (994); CMS (977); LHCb (268); ALICE (102); and “other” (87), which mainly included members of the NA62 collaboration.
The questionnaires addressed various professional and sociological aspects: age, nationality, education, domicile and working place, time spent at CERN, acquired expertise, current position, and satisfaction with the CERN environment. Additional points were specific to those who are no longer CERN users, in relation to their current situation and type of activity. The analysis revealed some interesting trends.
For experimentalists, the CERN environment and working experience is considered satisfactory or very satisfactory by 82% of participants, a figure that is evenly distributed across nationalities. Of those who left high-energy physics, 70% did so mainly because of the long and uncertain path to a permanent position. Other reasons for leaving the field, although quoted by a lower percentage of participants, were: interest in other domains; lack of satisfaction at work; and family reasons. The majority of participants (63%) who left high-energy physics are currently working in the private sector, often in the information technology, advanced technologies and finance domains, where they occupy a wide range of positions and responsibilities. Those in the public sector are mainly involved in academia or education.
For persons who left the field, several skills developed during their experience at CERN are considered important in their current work. The overall satisfaction of participants with their current position was high or very high for 78% of respondents, while 70% of respondents considered CERN’s impact on finding a job outside high-energy physics to be positive or very positive. CERN’s services and networks, however, are not found to be very effective in helping people find a new job – a situation that is being addressed, for example, by the recently launched CERN alumni programme.
Theorists participating in the second questionnaire mainly have permanent or tenure-track positions. A large majority of them spent time at CERN’s theory department on short- or medium-term contracts, and this experience appears to benefit their careers when they leave CERN for a national institution. On average, about 35% of a theorist’s scientific publications originate from collaborations started at CERN, and a large fraction of theorists (96%) declared that they are satisfied or highly satisfied with their experience at CERN.
Conclusions
As with all such surveys, there is an inherent risk of bias due to the formulation of the questions and the number and type of participants. In practice, only between 20 and 30% of the targeted populations responded, depending on the community addressed, which means the results of the poll cannot be considered representative of the whole CERN population. Nevertheless, it is clear that the impact of CERN on people’s careers is considered by a large majority of those polled to be mostly positive, with some areas for improvement, such as training and supporting the careers of those who choose to leave CERN and high-energy physics.
In the future this study could be made more significant by collecting similar information on larger samples of people, especially former CERN users. In this respect, the CERN alumni programme could help build a continuously updated database of current and former CERN users and also provide more support for people who decide to leave high-energy physics.
The final results of the survey, mostly in terms of statistical plots, together with a detailed description of the methods used to collect and analyse all the data, have been documented in a CERN Yellow Report, and will also be made available through a dedicated web page.
As generations of particle colliders have come and gone, CERN’s fixed-target experiments have remained a backbone of the lab’s physics activities. Notable among them are those fed by the Super Proton Synchrotron (SPS). Throughout its long service to CERN’s accelerator complex, the 7 km-circumference SPS has provided a steady stream of high-energy proton beams to the North Area at the Prévessin site, feeding a wide variety of experiments. Sequentially named, they range from the pioneering NA1, which measured the photoproduction of vector and scalar bosons, to today’s NA64, which studies the dark sector. As the North Area marks 40 years since its first physics result, this hub of experiments large and small is as lively and productive as ever. Its users continue to drive developments in detector design, while reaping a rich harvest of fundamental physics results.
Specialised and precise
In fixed-target experiments, a particle beam collides with a target that is stationary in the laboratory frame, in most cases producing secondary particles for specific studies. High-energy machines like the SPS, which produces proton beams with momenta up to 450 GeV/c, give the secondary products a large forward boost, providing intense sources of secondary and tertiary particles such as electrons, muons and hadrons. Compared with collider experiments, fixed-target experiments tend to be more specialised and focus on precision measurements that demand very high statistics, such as those involving ultra-rare decays.
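The trade-off can be made concrete with a back-of-the-envelope comparison of the centre-of-mass energy available in the two configurations; the estimate below is a simple sketch, taking a 450 GeV proton beam incident on a single proton at rest and neglecting nuclear effects:

\[
\sqrt{s}_{\mathrm{fixed\ target}} = \sqrt{2E_{\mathrm{beam}}\,m_{p}c^{2} + 2m_{p}^{2}c^{4}} \approx 29\ \mathrm{GeV},
\qquad
\sqrt{s}_{\mathrm{collider}} = 2E_{\mathrm{beam}} = 900\ \mathrm{GeV}.
\]

The same kinematics means that the centre of mass moves rapidly along the beam direction in the laboratory, which is what throws the reaction products forward into intense, well-collimated secondary beams.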
Fixed-target experiments have a long history at CERN, forming essential building blocks of the physics landscape in parallel to collider facilities: among their achievements are the first studies of the quark–gluon plasma, the first evidence of direct CP violation and a detailed understanding of how nucleon spin arises from quarks and gluons. The first muons in CERN’s North Area were reported at the start of the commissioning run in March 1978, and the first physics publication – a measurement of the production rate of muon pairs by quark–antiquark annihilation, as predicted by Drell and Yan – appeared in 1979 from the NA3 experiment. Today, the North Area’s physics programme is as vibrant as ever.
The longevity of the North Area programme is explained by the unique complex of proton accelerators at CERN, where each machine is not only used to inject the protons into the next one but also serves its own research programme (for example, the Proton Synchrotron Booster serves the ISOLDE facility, while the Proton Synchrotron serves the Antiproton Decelerator and the n_TOF experiment). Fixed-target experiments using protons from the SPS started taking data while the ISR collider was already in operation in the late 1970s, continued during SPS operation as a proton–antiproton collider in the early 1980s, and again during the LEP and now LHC eras. As has been the case with collider experiments, physics puzzles and unexpected results were often at the origin of unique collaborations and experiments, pushing limits in several technology areas such as the first use of silicon-microstrip detectors.
The initial experimental programme in the North Area involved two large experimental halls: EHN1 for hadronic studies and EHN2 for muon experiments. The first round of experiments in EHN1 concerned studies of: meson photoproduction (NA1); electromagnetic form factors of pions and kaons (NA7); hadronic production of particles with large transverse momentum (NA3); inelastic hadron scattering (NA5); and neutron scattering (NA6). In EHN2 there were experiments devoted to studies with high-intensity muon beams (NA2 and NA4). A third, underground, area called ECN3 was added in 1980 to host experiments requiring primary proton beams and secondary beams of the highest intensity (up to 10¹⁰ particles per cycle).
Experiments in the North Area began a little later than those in CERN’s West Area, which started operation in 1971 with 28 GeV/c protons supplied by the PS. Built to serve the last stage of the PS neutrino programme and the Omega spectrometer, the West Area was transformed into an SPS area in 1975 and is best known for seminal neutrino experiments (by the CDHS and CHARM collaborations, later CHORUS and NOMAD) and hadron-spectroscopy experiments with Omega. We are now used to identifying experimental collaborations by fancy acronyms such as ATLAS or ALICE, to mention two of the large LHC collaborations. But in the 1970s and 1980s one could distinguish between the experiments (identified by a sequential number) and the collaborations (identified by the list of cities hosting the collaborating institutes). For instance, CDHS stood for the CERN–Dortmund–Heidelberg–Saclay collaboration that operated the WA1 experiment in the West Area.
Los Alamos, SLAC, Fermilab and Brookhaven National Laboratory in the US, JINR and the Institute for High Energy Physics in Russia, and KEK in Japan, for example, also all had fixed-target programmes, some of which date back to the 1960s. As fixed-target programmes got into their stride, however, colliders were commanding the energy frontier. In 1980 the CERN North Area experimental programme was reviewed at a special meeting held in Cogne, Italy, and it was not completely obvious that there was a compelling physics case ahead. The review nevertheless led to highly optimised installations, thanks to strong collaborations and continuous support from the CERN management. Advances in detectors and innovations such as silicon detectors and aerogel Cherenkov counters, plus the hybrid integration of bubble chambers with electronic detectors, led to a revival in the study of hadron interactions at fixed-target experiments, especially for charmed mesons.
Physics landscape
Experiments at CERN’s North Area began shortly after the Standard Model had been established, when the scale of experiments was smaller than it is today. According to the 1979 CERN annual report, there were 34 active experiments at the SPS (West and North areas combined) and 14 were completed in 1978. This article cannot do justice to all of them, not even to those in the North Area. But over the past 40 years the experimental programme has clearly evolved into at least four main themes: probing nucleon structure with high-energy muons; hadroproduction and photoproduction at high energy; CP violation in very rare decays; and heavy-ion experiments (see “Forty years of fixed-target physics at CERN’s North Area”).
Aside from seminal physics results, fixed-target experiments at the North Area have driven numerous detector innovations. This is largely a result of their simple geometry and ease of access, which allows more adventurous technical solutions than might be possible with collider experiments. Examples of detector technologies perfected at the North Area include: silicon microstrips and active targets (NA11, NA14); rapid-cycling bubble chambers (NA27); holographic bubble chambers (NA25); Cherenkov detectors (CEDAR, RICH); liquid-krypton calorimeters (NA48); micromegas gas detectors (COMPASS); silicon pixels with 100 ps time resolution (NA62); time-projection chambers with dE/dx measurement (ISIS, NA49); and many more. The sheer amount of data to be recorded in these experiments also led to the very early adoption of PC farms for the online systems of the NA48 and COMPASS experiments.
Another key function of the North Area has been to test and calibrate detectors. These range from the fixed-target experiments themselves to experiments at colliders (such as LHC, ILC and CLIC), space and balloon experiments, and bent-crystal applications (such as UA9 and NA63). New detector concepts such as dual-readout calorimetry (DREAM) and particle-flow calorimetry (CALICE) have also been developed and optimised. Recently the huge EHN1 hall was extended by 60 m to house two very large liquid-argon prototype detectors to be tested for the Deep Underground Neutrino Experiment under construction in the US.
If there is an overall theme concerning the development of the fixed-target programme in the North Area, one could say that it was to be able to quickly evolve and adapt to address the compelling questions of the day. This looks set to remain true, with many proposals for new experiments appearing on the horizon, ranging from the study of very rare decays and light dark matter to the study of QCD with hadron and heavy-ion beams. There is even a study under way to possibly extend the North Area with an additional very-high-intensity proton beam serving a beam dump facility. These initiatives are being investigated by the Physics Beyond Collider study (see p20), and many of the proposals explore the high-intensity frontier complementary to the high-energy frontier at large colliders. Here’s to the next 40 years of North Area physics!
Forty years of fixed-target physics at CERN's North Area
High-energy muons are excellent probes with which to investigate the structure of the nucleon. The North Area’s EHN2 hall was built to house two sets of muon experiments: the sequential NA2/NA9/NA28 (also known as the European Muon Collaboration, EMC), which observed that nucleons bound in nuclei are different from free nucleons; and NA4 (pictured), which confirmed electroweak interference between the weak and electromagnetic interactions. A particular success of the North Area’s muon experiments concerned the famous “proton spin crisis”. In the late 1980s, contrary to the expectation of the otherwise successful quark–parton model, data showed that the proton’s spin is not carried by the quark spins. This puzzle interested the community for decades, compelling CERN to investigate further by building the NA47 Spin Muon Collaboration experiment in the early 1990s (which established the same result for the neutron) and, subsequently, the COMPASS experiment (which studied the contribution of the gluon spins to the nucleon spin). A second phase of COMPASS, still ongoing today, is devoted to nucleon tomography using deeply virtual Compton scattering and, for the first time, polarised Drell–Yan reactions. Hadron spectroscopy is another area of research at the North Area, and among recent important results from COMPASS is the measurement of pion polarisability, an important test of low-energy QCD.
Hadroproduction and photoproduction at high energy
Following the first experiment to publish data in the North Area (NA3), which concerned the production of μ⁺μ⁻ pairs in hadron collisions, the ingenuity to combine bubble chambers and electronic detectors led to a series of experiments. The European Hybrid Spectrometer facility housed NA13, NA16, NA22, NA23 and NA27, and studied charm production and many aspects of hadronic physics, while photoproduction of heavy bosons was the primary aim of NA1. A measurement of the charm lifetime using the first ever silicon microstrip detectors was pioneered by the ACCMOR collaboration (NA11/NA32; see image of Robert Klanner next to the ACCMOR spectrometer in 1977), and hadron spectroscopy with neutral final states – in particular a search for glueballs – was studied by NA12 (GAMS), which employed a large array of lead-glass counters. To study μ⁺μ⁻ pairs from pion interactions at the highest possible intensities, the toroidal spectrometer NA10 was housed in the ECN3 underground cavern. Nearby in the same cavern, NA14 used a silicon active target and the first big silicon microstrip detectors (10,000 channels) to study charm photoproduction at high intensity. Later, experiment NA30 enabled a direct measurement of the π⁰ lifetime by employing thin gold foils to convert the photons from the π⁰ decays. Today, electron beams are used by NA64 to look for dark photons, while hadron spectroscopy is still actively pursued, in particular at COMPASS.
The discovery of CP violation in the decay of the long-lived neutral kaon to two pions at Brookhaven National Laboratory in 1964 was unexpected. To understand its origin, physicists needed to make a subtle comparison (in the form of a double ratio) between the rates at which long- and short-lived neutral kaons decay into pairs of neutral and of charged pions. In 1987 an ambitious experiment (NA31) showed a deviation of the double ratio from one, providing the first evidence of direct CP violation (that is, CP violation occurring in the decay of the neutral mesons, not only in the mixing between neutral kaons). A second-generation experiment (NA48, pictured in 1996), located in ECN3 to accept a much higher primary-proton intensity, was able to measure the four decay modes concurrently thanks to the deflection of a tiny fraction of the primary proton beam onto a downstream target via channelling in a “bent” crystal. NA48 was approved in 1991, when it became evident that more precision was needed to confirm the original observation (a competing experiment at Fermilab, E731, did not find a significant deviation of the double ratio from unity). Both KTeV (the follow-up Fermilab experiment) and NA48 confirmed NA31’s result, firmly establishing direct CP violation. Continuations of the NA48 experiment studied rare decays of the short-lived neutral kaon and searched for direct CP violation in charged kaons. Nowadays the kaon programme continues with NA62, which is dedicated to the study of the very rare K⁺ → π⁺νν̄ decay and is complementary to the B-meson studies performed by the LHCb experiment.
In the mid-1980s, with a view to reproducing in the laboratory the plasma of free quarks and gluons predicted by QCD and believed to have existed in the early universe, the SPS was modified to accelerate beams of heavy ions and collide them with nuclei. The lack of a single striking signature of the formation of the plasma demands that researchers look for as many final states as possible, exploiting the evolution of standard observables (such as the yield of muon pairs from the Drell–Yan process or the production rate of strange quarks) as a function of the degree of overlap of the nuclei that participate in the collision (centrality). By 2000 several experiments had, according to CERN Courier in March that year, found “tantalising glimpses of mechanisms that shaped our universe”. The experiments included NA44, NA45, NA49, NA50, NA52 and NA57, as well as WA97 and WA98 in the West Area. Among the most striking signatures observed was the suppression of the J/ψ yield in ion–nucleus collisions with respect to proton–proton collisions, which was seen by NA50. Improved sensitivity to muon pairs was provided by the successor experiment NA60. The current heavy-ion programme at the North Area includes NA61/SHINE (see image), the successor of NA49, which is studying the onset of phase transitions in dense quark–gluon matter at different beam energies and for different beam species. Studies of the quark–gluon plasma continue today, in particular at the LHC and at RHIC in the US. At the same time, NA61/SHINE is measuring the yield of mesons from replica targets for neutrino experiments worldwide and particle production for cosmic-ray studies.
High-energy physics (HEP) has been at the forefront of open-access publishing, the long-sought ideal to make scientific literature freely available. An early precursor to the open-access movement in the late 1960s was the database management system SPIRES (Stanford Physics Information Retrieval System), which aggregated all available (paper-copy) preprints that were sent between different institutions. SPIRES grew to become the first database accessible through the web in 1991 and later evolved into INSPIRE-HEP, hosted and managed by CERN in collaboration with other research laboratories.
The electronic era
The birth of the web in 1989 changed the publishing scene irreversibly. Vast sums were invested to take the industry from paper to online and to digitise old content, resulting in a migration from the sale of printed copies of journals to electronic subscriptions. From 1991, helped by early adoption among particle physicists, the self-archiving repository arXiv.org allowed rapid distribution of electronic preprints in physics and, later, mathematics, astronomy and other sciences. The first open-access journals then began to sprout up and in the early 2000s three major international events – the Budapest Open Access Initiative, the Bethesda Statement on Open Access Publishing and the Berlin Declaration on Open Access to Knowledge in the Sciences and Humanities – set about leveraging the new technology to grant universal free access to the results of scientific research.
Today, roughly one quarter of all scholarly literature in sciences and humanities is open access. In HEP, the figure is almost 90%. The Sponsoring Consortium for Open Access Publishing in Particle Physics (SCOAP3), a global partnership between libraries, national funding agencies and publishers of HEP journals, has played an important role in HEP’s success. Designed at CERN, SCOAP3 started operation in 2014 and removes subscription fees for journals and any expenses scientists might incur to publish their articles open access by paying publishers directly. Some 3000 institutions from 43 countries (figure 1) contribute financially according to their scientific output in the field, re-using funds previously spent on subscription fees for journals that are now open access.
“SCOAP3 has demonstrated how open access can increase the visibility of research and ease the dissemination of scientific results for the benefit of everyone,” says SCOAP3 operations manager Alex Kohls of CERN. “This initiative was made possible by a strong collaboration of the worldwide library community, researchers, as well as commercial and society publishers, and it can certainly serve as an inspiration for open access in other fields.”
Plan S
On 4 September 2018, a group of national funding agencies, the European Commission (EC) and the European Research Council – under the name “cOAlition S” – launched a radical initiative called Plan S. Its aim is to ensure that, by 2020, all scientific publications that result from research funded by public grants must be published in compliant open-access journals or platforms. Robert-Jan Smits, the EC’s open-access envoy and one of the architects of Plan S, cites SCOAP3 as an inspiration for the project and says that momentum for Plan S has been building for two decades. “During those years many declarations, such as the Budapest and Berlin ones, were adopted, calling for a rapid transition to full and immediate open access. Even the 28 science ministers of the European Union issued a joint statement in 2016 that open access to scientific publications should be a reality by 2020,” says Smits. “The current situation shows, however, that there is still a long way to go.”
Recently, China released position papers supporting the efforts of Plan S, which could mark a key moment for the project. But the reaction of scientists around the world has been mixed. An open letter published in September by biochemist Lynn Kamerlin of Uppsala University in Sweden, attracting more than 1600 signatures at the time of writing, argues that Plan S would strongly reduce the possibilities to publish in suitable scientific journals of high quality, possibly splitting the global scientific community into two separate systems. Another open letter, published in November by biologist Michael Eisen of the University of California, Berkeley, with around 2000 signatures, backs the principles of Plan S and supports its commitment “to continue working with funders, universities, research institutions and other stakeholders until we have created a stable, fair, effective and open system of scholarly communication.”
Challenges ahead
High-energy physics is already aligned to the Plan S vision thanks to SCOAP3, says Salvatore Mele of CERN, who is one of SCOAP3’s architects. But for other disciplines “the road ahead is likely to be bumpy”. “Funders, libraries and publishers have cooperated through CERN to make SCOAP3 possible. As most of the tens of thousands of scholarly journals today operate on a different model, with access mostly limited to readers paying subscription fees, this vision implies systemic challenges for all players: funders, libraries, publishers and, crucially, the wider research community,” he says.
It is publishers who are likely to face the biggest impact from Plan S. However, the Open Access Scholarly Publishers Association (OASPA) – which includes, among others, the American Physical Society, IOP Publishing (which publishes CERN Courier) and The Royal Society – recently published a statement of support, claiming OASPA “would welcome the opportunity to provide guidance and recommendations for how the funding of open-access publications should be implemented within Plan S”, while emphasising that smaller publishers, scholarly societies and new publishing platforms need to be included in the decision-making process.
Responding to an EC request for Plan S feedback that was open until 8 February, however, publishers have expressed major concerns about the pace of implementation and about the consequences of Plan S for hybrid journals. In a statement on 12 February, the European Physical Society, while supportive of the Plan S rationale, wrote that “several of the governing principles proposed for its implementation are not conducive to a transition to open access that preserves the important assets of today’s scientific publication system”. In another statement, the world’s largest open-access publisher, Springer Nature, released a list of six recommendations for funding bodies worldwide to adopt in order for full open-access to become a reality, highlighting the differences between “geographic, funder and disciplinary needs”. In parallel, a group of learned societies in mathematics and science in Germany has reacted with a statement citing a “precipitous process” that infringes the freedom of science, and urged cOAlition S to “slow down and consider all stakeholders”.
Global growth
Smits thinks traditional publishers, which are a critical element in quality control and rigorous peer review in scholarly literature, should take a fresh look at their approach, for example by implementing more transparent metrics. “It is obvious that the big publishers that run the subscription journals and make enormous profits prefer to keep the current publishing model. Furthermore, the dream of each scientist is to publish in a so-called prestigious high-impact journal, which shows that the journal impact factor is still very present in the academic world,” says Smits. “To arrive at the necessary change in academic culture, new metrics need to be developed to assess scientific output. The big challenge for cOAlition S is to grow globally, by having more funders signing up.”
Undoubtedly we are at a turning point between the old and new publishing worlds. The EC already requires that all publications from projects receiving its funding be made open access. But Plan S goes further, proposing an outright shift in scholarly publication. It is therefore crucial to ensure a smooth shift that takes into account all the actors, says Mele. “Thanks to SCOAP3, which has so far supported the publication of more than 26,000 articles, the high-energy physics community is fortunate to meet the vision of Plan S, while retaining researcher choice of the most appropriate place to publish their results.”
In the 17th century, Galileo Galilei looked at the moons of Jupiter through a telescope and recorded his observations in his now-famous notebooks. Galileo’s notes – his data – survive to this day and can be reviewed by anyone around the world. Students, amateurs and professionals can replicate Galileo’s data and results – a tenet of the scientific method.
In particle physics, with its unique and expensive experiments, it is practically impossible for others to attempt to reproduce the original work. When it is impractical to gather fresh data to replicate an analysis, we settle for reproducing the analysis with the originally obtained data. However, a 2013 study by researchers at the University of British Columbia, Canada, estimates that the odds of scientific data existing in an analysable form reduce by about 17% each year.
Indeed, just a few years down the line it might not even be possible for researchers to revisit their own data due to changes in formats, software or operating systems. This has led to growing calls for scientists to release and archive their data openly. One motivation is moral: society funds research and so should have access to all of its outputs. Another is practical: a fresh look at data could enable novel research and lead to discoveries that may have eluded earlier searches.
As with open-access publishing (see “A turning point for open-access publishing”), governments have started to impose demands on scientists regarding the availability and long-term preservation of research data. The European Commission, for example, has piloted the mandatory release of open data as part of its Horizon 2020 programme and plans to invest heavily in open data in the future. An increasing number of data repositories have been established for life and medical sciences as well as for social sciences and meteorology, and the idea is gaining traction across disciplines. Only days after announcing the first observation of gravitational waves, the LIGO and VIRGO collaborations made their data public. NASA also releases data from many of its missions via open databases, such as exoplanet catalogues. The Natural History Museum in London makes data from millions of specimens available via a website and, in the world of art, the Rijksmuseum in Amsterdam provides an interface for developers to build apps featuring historic artworks.
Data levels
The open-data movement is of special interest to particle physics, owing to the uniqueness and large volume of the datasets involved in discoveries such as that of the Higgs boson at the Large Hadron Collider (LHC). The four main LHC experiments have started to release their data periodically in an open manner, and these data can be classified into four levels. The first consists of the data shown in final publications, such as plots and tables, while the second concerns datasets in a simplified format that are suitable for “lightweight” analyses in educational or similar contexts. The third level involves the data used for analysis by the researchers themselves, requiring specialised code and dedicated computing resources. The final and most complex level is the raw data generated by the detectors, which requires petabytes of storage and, being uncalibrated, is of little use until it has been processed into the third level.
In late 2014 CERN launched an open-data portal and released research data from the LHC for the first time. The data, collected by the CMS experiment, represented half the level-three data recorded in 2010. The ALICE experiment has also released level-three data from proton–proton as well as lead–lead collisions, while all four collaborations – including ATLAS and LHCb – have released subsets of level-two data for education and outreach purposes.
Proactive policy
The story of open data at CMS goes back to 2011. “We started drafting an open-data policy, not because of pressure from funding agencies but because defining our own policy proactively meant we did not have an external body defining it for us,” explains Kati Lassila-Perini, who leads the collaboration’s data-preservation project. CMS aims to release half of each year’s level-three data three years after data taking, and 100% of the data within a ten-year window. By guaranteeing that people outside CMS can use these data, says Lassila-Perini, the collaboration can ensure that the knowledge of how to analyse the data is not lost, while allowing people outside CMS to look for things the collaboration might not have time for. To allow external re-use of the data, CMS released appropriate metadata as well as analysis examples. The datasets soon found takers and, in 2017, a group of theoretical physicists not affiliated with the collaboration published two papers using them. CMS has since released half its 2011 data (corresponding to around 200 TB) and half its 2012 data (1 PB), with the first releases of level-three data from the LHC’s Run 2 in the pipeline.
The LHC collaborations have been releasing simpler datasets for educational activities from as early as 2011, for example for the International Physics Masterclasses that involve thousands of high-school students around the globe each year. In addition, CMS has made available several Jupyter notebooks – a browser-based analysis platform named with a nod to Galileo – in assorted languages (programming and human) that allow anyone with an internet connection to perform a basic analysis. “The real impact of open data in terms of numbers of users is in schools,” says Lassila-Perini. “It makes it possible for young people with no previous contact with coding to learn about data analysis and maybe discover how fascinating it can be.” Also available from CMS are more complex examples aimed at university-level students.
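The kind of exercise such notebooks contain can be sketched in a few lines of Python; the example below is a minimal sketch that assumes a simplified per-event CSV file of dimuon kinematics (the file name and column names are illustrative, not an official CMS format):

```python
# Minimal sketch of a "level-two" educational analysis: reconstruct the dimuon
# invariant mass from a simplified open-data file and histogram it.
# The file name and column names are illustrative assumptions.
import numpy as np
import pandas as pd

# One row per event: transverse momentum, pseudorapidity and azimuthal angle
# of each of the two muons (muon masses neglected).
events = pd.read_csv("dimuon_events.csv")

# Invariant mass of two (nearly) massless particles:
# M^2 = 2 * pt1 * pt2 * [cosh(eta1 - eta2) - cos(phi1 - phi2)]
mass = np.sqrt(
    2.0 * events["pt1"] * events["pt2"]
    * (np.cosh(events["eta1"] - events["eta2"]) - np.cos(events["phi1"] - events["phi2"]))
)

# Histogramming the mass reveals resonances such as the J/psi and the Z
# as peaks above a smooth continuum.
counts, edges = np.histogram(mass, bins=200, range=(0.0, 120.0))
peak = edges[counts.argmax()]
print(f"{len(mass)} events; most populated bin starts at {peak:.1f} GeV")
```

Running something of this sort in a browser-hosted notebook is exactly the first contact with real collider data that the education-level releases are designed to provide.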
Open-data endeavours by ATLAS are very much focused on education, and the collaboration has provided curated datasets for teaching in places that may not have substantial computing resources or internet access. “Not even the documentation can rely on online content, so everything we produce needs to be self-contained,” remarks Arturo Sánchez Pineda, who coordinates ATLAS’s open-data programme. ATLAS datasets and analysis tools, which also rely on Jupyter notebooks, have been optimised to fit on a USB memory stick and allow simplified ATLAS analyses to be conducted just about anywhere in the world. In 2016, ATLAS released simplified open data corresponding to 1 fb–1 at 8 TeV, with the aim of giving university students a feel for what a real particle-physics analysis involves.
ATLAS open data have already found their way into university theses and have been used by people outside the collaboration to develop their own educational tools. Indeed, within ATLAS, new members can now choose to work on preparing open data as their qualification task to become an ATLAS co-author, says Sánchez Pineda. This summer, ATLAS will release 10 fb–1 of level-two data from Run 2, with more than 100 simulated physics processes and related resources. ATLAS does not provide level-three data openly and researchers interested in analysing these can do so through a tailored association programme, which 80 people have taken advantage of so far. “This allows external scientists to rely on ATLAS software, computing and analysis expertise for their project,” says Sánchez Pineda.
Fundamental motivation
CERN’s open-data portal hosts and serves data from the four big LHC experiments, also providing many of the software tools, including virtual machines, needed to run the analysis code. The OPERA collaboration recently started sharing its research data via the portal, and other particle-physics collaborations are interested in joining the project.
Although high-energy physics has made great strides in providing open access to research publications, we are still in the very early days of open data. Theorist Jesse Thaler of MIT, who led the first independent analysis using CMS open data, acknowledges that it is possible for people to get their hands on coveted data by joining an experimental collaboration, but sees a much brighter future with open data. “What about more exploratory studies where the theory hasn’t yet been invented? What about engaging undergraduate students? What about examining old data for signs of new physics?” he asks. These provocative questions serve as fundamental motivations for making all data in high-energy physics as open as possible.
At a mere 30 years old, the World Wide Web already ranks as one of humankind’s most disruptive inventions. Developed at CERN in the early 1990s, it has touched practically every facet of life, impacting industry, penetrating our personal lives and transforming the way we transact. At the same time, the web is shrinking continents and erasing borders, bringing with it an array of benefits and challenges as humanity adjusts to this new technology.
This reality is apparent to all. What is less well known, but deserves recognition, is the legal dimension of the web’s history. On 30 April 1993, CERN released a memo (see image) that placed into the public domain all of the web’s underlying software: the basic client, basic server and library of common code. The document was addressed “To whom it may concern” – which would suggest the authors were not entirely sure who the target audience was. Yet, with hindsight, this line can equally be interpreted as an unintended address to humanity at large.
The legal implication was that CERN relinquished all intellectual property rights in this software. It was a deliberate decision, the intention being that a no-strings-attached release of the software would “further compatibility, common practices, and standards in networking and computer supported collaboration” – arguably modest ambitions for what turned out to be such a seismic technological step. To understand what seeded this development you need to go back to the 1950s, at a time when “software” would have been better understood as referring to clothing rather than computing.
European project
CERN was born out of the wreckage of World War II, playing a role, on the one hand, as a mechanism for reconciliation between former belligerents, while, on the other, offering European nuclear physicists the opportunity to conduct their research locally. The hope was that this would stem the “brain drain” to the US, from a Europe still recovering from the devastating effects of war.
In 1953, CERN’s future Member States agreed on the text of the organisation’s founding Convention, defining its mission as providing “for collaboration among European States in nuclear research of a pure scientific and fundamental character”. With the public acutely aware of the role that destructive nuclear technology had played during the war, the Convention additionally stipulated that CERN was to have “no concern with work for military requirements” and that the results of its work were to be “published or otherwise made generally available”.
In the early years of CERN’s existence, the openness resulting from this requirement for transparency was essentially delivered through traditional channels, in particular through publication in scientific journals. Over time, this became the cultural norm at CERN, permeating all aspects of its work both internally and with its collaborating partners and society at large. CERN’s release of the WWW software into the public domain, arguably in itself a consequence of the openness requirement of the Convention, could be seen as a precursor to today’s web-based tools that represent further manifestations of CERN’s openness: the SCOAP3 publishing model, open-source software and hardware, and open data.
Perhaps the best measure of how ingrained openness is in CERN’s ethos as a laboratory is to ask the question: “if CERN had known then what it knows now about the impact of the World Wide Web, would it still have made the web software available, just as it did in 1993?” We would like to suggest that, yes, our culture of openness would provoke the same response now as it did then, though no doubt a modern, open-source licensing regime would be applied.
A culture of openness
This, in turn, can be viewed as testament and credit to the wisdom of CERN’s founders, and to the CERN Convention, which remains the cornerstone of our work to this day.
Yong Ho Chin, a leading theoretical accelerator physicist at the High Energy Accelerator Research Organization (KEK) in Japan and chair of the beam dynamics panel of the International Committee for Future Accelerators (ICFA) since November 2016, unexpectedly passed away on 8 January.
In 1984, Yong Ho received his PhD in accelerator physics from the University of Tokyo for studies performed at KEK under the supervision of Masatoshi Koshiba, who won the Nobel Prize in Physics jointly with Raymond Davis Jr and Riccardo Giacconi in 2002. Yong Ho participated in the design and commissioning of the TRISTAN accelerator, and later in the designs of the KEKB and J-PARC accelerators, along with major contributions to JLC (the Japan Linear Collider) and ILC (the International Linear Collider). In the 1980s and 1990s he spent several years abroad, at DESY and CERN in Europe, and at LBL (now LBNL) in the US.
In his long and distinguished career, Yong Ho made numerous essential contributions in the fields of beam-coupling impedances, coherent beam instabilities, radio-frequency klystron development, and space–charge and beam–beam collective effects. He considered his “renormalisation theory for the beam–beam interaction”, developed during his last six months at DESY in the 1980s, as his greatest achievement. However, in the accelerator community, Yong Ho Chin’s name is linked, in particular, to two computer codes he wrote and maintained, and which have been widely used over the past decades.
The first of these codes, developed by Yong Ho in the 1980s, is MOSES (MOde-coupling Single bunch instabilities in an Electron Storage ring), which computes the complex transverse coherent betatron tune shifts as a function of the beam current for a bunch interacting with a resonator impedance. The second well-known code, written by Yong Ho in the 1990s, is ABCI (Azimuthal Beam Cavity Interaction), used for impedance and wakefield calculations. It is a time-domain solver for the electromagnetic fields generated when a bunched beam with an arbitrary charge distribution passes through an axisymmetric structure, on or off axis.
In the mid-1990s, Yong Ho’s work expanded to two-stream beam instabilities. He rightly foresaw that such instabilities could potentially limit the performance of KEKB and organised and co-organised several international workshops to address this issue early on. Subsequently, he was put in charge of the development and modelling of the X-band klystron for the JLC. He also greatly contributed to the development of the multi-beam klystron now in use for large superconducting linacs, and to the optimisation of the J-PARC accelerators.
Yong Ho returned to the field of collective effects more than 10 years ago and remained extremely active there. Over the past few years, together with two other renowned accelerator physicists, Alexander W Chao and Michael Blaskiewicz, he developed a two-particle model to study the effects of the space–charge force on transverse coherent beam instabilities. The purpose of this model was to capture, in a simple picture, the essence of the physics of this intricate subject and at the same time provide a good starting point for newcomers joining the effort to solve this long-standing issue.
As illustrated by his role as chair of an ICFA panel, and by his co-organisation of a large number of international workshops and conferences (including PAC and LINAC), Yong Ho was devoted to serving the international physics community. He was a productive author, diligent referee and esteemed editor for several journals. In 2015 he was recognised with an Outstanding Referee Award by the American Physical Society, and just a few months ago, in the summer of 2018, Yong Ho was appointed associate editor of Physical Review Accelerators and Beams.
Yong Ho was a very good lecturer, teaching at different accelerator schools, including the CERN Accelerator School. He was also in charge of a collaboration programme in which young accelerator scientists were invited to spend a few weeks at KEK.
Yong Ho was a wonderful person and an outstanding scientist. We are very proud to have had the chance to work and collaborate with him. His passing is a great loss to the community and he will be sorely missed.
The Soviet Atomic Project: How the Soviet Union Obtained the Atomic Bomb by Lee G Pondrom World Scientific
“Leave them in peace. We can always shoot them later.” Thus spoke Soviet Union leader Josef Stalin, in response to a query by Soviet security and secret police chief Lavrentiy Beria about whether research in quantum mechanics and relativity (considered by Marxists to be incompatible with the principles of dialectical materialism) should be allowed. With these words, a generation of Soviet physical scientists were spared a disaster like the one perpetrated on Soviet agriculture by Trofim Lysenko’s politically correct, pseudoscientific theories of genetics. The reason behind this judgement was the successful development of nuclear weapons by Soviet physical scientists and the recognition by Stalin and Beria of the essential role that these “bourgeois” sciences played in that development.
Political intrigue, the arms race, early developments of nuclear science, espionage and more are all present in this gripping book, which provides a comprehensive account of the intensive programme the Soviets embarked on in 1945, immediately after Hiroshima, to catch up with the US in the area of nuclear weapons. A great deal is known about the Manhattan project, from the key scientists involved to the many Los Alamos incidents – such as Fermi’s determination of the Alamogordo test-blast energy using scraps of paper and Feynman’s ability to crack his Los Alamos colleagues’ safes – that are intrinsic parts of the US nuclear/particle-physics community’s culture. By contrast, little is known, at least in the West, about the huge effort made by the war-ravaged Soviet Union to reach strategic parity with the US in less than five years.
Pondrom, a prominent experimental particle physicist with a life-long interest in Russia and its language, provides an intriguing narrative. It is based on a thorough study of available literature plus a number of original documents – many of which he translated himself – that gives a fascinating insight into this history-changing enterprise and into the personalities of the exceptional people behind it.
The success of the Soviet programme was primarily due to Igor Kurchatov, a gifted experimental physicist and outstanding scientific administrator, who was equally at ease with laboratory workers, prominent theoretical physicists and the highest leaders in government, including Beria and Stalin himself. Saddled with developing several huge and remotely located laboratories from scratch, he remained closely involved in many important nitty-gritty scientific and engineering problems. For example, Kurchatov participated hands-on and full-time in the difficult commissioning of Reactor A, the first full-scale reactor for plutonium-239 production at the sprawling Combine #817 laboratory, receiving, along the way, a radiation dose that was 100 times the safe limit that he had established for laboratory staff members.
Beria was the overall project controller and ultimate decision-maker. Although best known for his role as Stalin’s ruthless enforcer – Pondrom describes him as “supreme evil,” Sakharov as a “dangerous man” – he was also an extraordinary organiser and a practical manager. When asked in the 1970s, long after Beria’s demise, how best to develop a Soviet equivalent of Silicon Valley, Soviet Academy of Sciences president A P Alexandrov answered “Dig up Beria.” Beria promised project scientists improved living conditions and freedom from persecution if they performed well (and that they would “be sent far away” if they didn’t). His daily access to Stalin was critical for keeping the project on track. Most of the project’s manual construction work used slave labour from Beria’s gulag.
Both the US and Soviet projects were monumental in scope; Pondrom estimates the Manhattan project’s scale to be about 2% of the US economy. The Soviet project’s scale was similar, but in an economy one-tenth the size. The Soviets had some advantage from the information gathered by espionage (and the simple fact that they knew the Manhattan project had succeeded). Also, German scientists interned in Russia for the project played important support roles, especially in the large-scale purification of reactor-grade natural uranium. In addition, there was a nearly unlimited supply of unpaid labourers, as well as German prisoners of war with scientific and engineering backgrounds whose participation in the project was rewarded with better living conditions.
The book is crisply written and well worth the read. The text includes a number of translated segments of official documents plus extracts from the memoirs of some of the people involved. So, although Pondrom sprinkles his opinions throughout, there is sufficient material to permit readers to make their own judgements. He does not shy away from explaining some of the complex technical issues, which he (usually) addresses clearly and concisely. The appendices expand on technical issues, some at an elementary level for non-physicists, and others, including isotope-extraction techniques, nuclear-reaction issues and encryption, in more detail, much of which was new to me.
On the other hand, the confusing assortment of laboratories, their locations, leaders and primary tasks begs for some kind of summary or graphics. The simple chart describing the Soviets’ complex espionage network in the US was useful for keeping track of the roles of the persons involved; a similar chart for the laboratories and their roles would have been equally valuable. The book would also have benefited from a final edit that might have eliminated some of the repetition and caught some obvious errors. But these are minor faults in an engaging, informative book.
Stephen L Olsen, University of Chinese Academy of Sciences.
Advances in Particle Therapy: A multidisciplinary approach by Manjit Dosanjh and Jacques Bernier (eds) CRC Press, Taylor and Francis Group
A new volume in the CRC Press series on Medical Physics and Biomedical Engineering, this interesting book on particle therapy is structured in 19 chapters, each written by one or more co-authors out of a team of 49 experts (including the two editors). Most are medical physicists, radiation oncologists and radiobiologists who are well renowned in the field.
The opening chapter provides a brief and useful summary of the evolution of modern radiation oncology, starting from the discovery of X rays up to the latest generation of proton and carbon-ion accelerators. The second and third chapters are devoted to the radiobiological aspects of particle therapy. After an introductory part where the concepts of relative biological effectiveness (RBE) and oxygen-enhancement ratio are defined, this section of the book goes on to review the most recent knowledge gained in the field, from DNA structure to the production of radiation-induced damage, to secondary cancer risk. The conclusion is that, as biological effects and clinical response are functions of a broad range of parameters, we are still far from a complete understanding of all radiobiological aspects underlying particle therapy, as well as from a universally accepted RBE model providing the optimum RBE value to be used for any given treatment.
Chapter 4 and, later, chapter 18 are dedicated to particle-therapy technologies. The first provides a simple explanation of the operating principles of particle accelerators and then goes into the details of beam delivery systems and dose conformation devices. Chapter 18 recalls the historical development of particle therapy in Europe, first with the European Light Ion Medical Accelerator (EULIMA) study and Proton-Ion Medical Machine Study (PIMMS), and then with the design and construction of the HIT, CNAO and MedAustron clinical facilities (CERN Courier January/February 2018 p25). It then provides an outlook on ongoing and expected future technological developments in accelerator design.
Chapter 5 discusses the general requirements for setting up a particle therapy centre, while the following chapter provides an extensive review of imaging techniques for both patient positioning and treatment verification. These are made necessary by the rapid spread of active beam delivery technologies (scanning) and robotic patient positioning systems, which have strongly improved dose delivery. Chapter 7 reviews therapeutic indications for particle therapy and explains the necessity to integrate it with all other treatment modalities so that oncologists can decide on the best combination of therapies for each individual patient. Chapter 8 reports on the history of the European Network of Light Ion Hadron Therapy (ENLIGHT) and its role in boosting collaborative efforts in particle therapy and in training specialists.
The central part of the book (chapters 9 to 15) reviews worldwide clinical results and indications for particle therapy from different angles, pointing out the inherent difficulties in comparing conventional radiation therapy and particle therapy. It analyses the two perspectives under which the dosimetric properties of particles can translate into clinical benefit: decreasing the dose to normal tissue to reduce complications, or escalating the dose to the tumour to improve tumour control without increasing the dose to normal tissue.
Chapter 16 discusses the economic aspects of particle therapy, such as cost-effectiveness and budget impact, while the following chapter describes the benefits of a “rapid learning health care” system. The last chapter discusses global challenges in radiation therapy, such as how to bring medical electron linac technology to low- and middle-income countries (CERN Courier March 2017 p31). I found this last chapter slightly confusing, as I did not understand what is meant by “radiation rotary” and I could not fully grasp the mixing of different topics, such as particle therapy and nuclear-detonation terrorism. This part also seemed too US-focussed when discussing the various initiatives, and I did not agree with some of the statements (e.g. that particle therapy has undergone a cost reduction of an order of magnitude or more in the past 10 years).
Overall, this book provides a useful compendium of state-of-the-art particle therapy, and each chapter is supported by an extensive bibliography, meeting the expectations of both experts and readers interested in gaining an overview of the field. The volume is well structured, enabling readers to go through only selected chapters, in whatever order they prefer. Some knowledge of radiobiology, clinical oncology and accelerator technology is assumed. It is disappointing that clinical dosimetry and treatment planning are not addressed other than in a brief mention in chapter 5, but perhaps this is something to consider for a second edition.
Marco Silari, CERN.
Mad maths Theatre, CERN Globe
24 January 2019
Do you remember your high-school maths teachers? Were they strict? Funny? Extraordinary? Boring? The theatre comedy “Mad maths” presents the two most unusual teachers you can imagine. Armed with chalk and measuring tapes, Mademoiselle X and Mademoiselle Y aim to heal all those with maths phobia, and teach the audience more about their favourite subject.
On 24 January CERN’s fully booked Globe of Science and Innovation turned into a bizarre classroom. Marching along well-defined 90° angles, and meticulously measuring everything around them, the comedians Sophie Leclercq and Garance Legrou play with numbers and fight at the blackboard to make maths entertaining. The dialogues are juiced up with rap and music, spiced by friendly maths jargon, and seasoned with a hint of craziness. As the pair grapple with trigonometry, philosophise about the number zero and invent new counting systems of dubious benefit, the rhythm grows exponentially. For example, did you know that some people’s mood goes up and down like a sine function? That you can make music with fractions? And that some bureaucratic steps are noncommutative?
This comedy show originated from an idea by Olivier Faliez and Kevin Lapin from the French theatre company Sous un autre angle. Having first studied maths at university and then attended theatre school, Faliez combined his two passions in 2003 to create an entertaining programme built on maths-driven jokes and plot twists.
Perfect for families with children, this French play has already been performed more than 500 times, especially at science festivals and schools. The topics are customised depending on the level of the students. Future showings are scheduled in Castanet (15 March), Les Mureaux (22 March) and in several schools in France and other countries. Teachers and event organisers who are interested in the show are advised to contact Sophie Leclercq.
At times foolish, at times witty, it is worth watching if and only if you want to unwind and rediscover maths from a different perspective.
Letizia Diamante, CERN.
The Life, Science and Times of Lev Vasilevich Shubnikov, Pioneer of Soviet Cryogenics by L J Reinders Springer
This book is a biography of Russian physicist Lev Vasilevich Shubnikov, whose work is scarcely known despite its importance and broad reach. It is also a portrayal of the political and ideological environment existing in the Soviet Union in the late 1930s under Stalin’s repressive regime.
While at Leiden University in the Netherlands, which at the time had the most advanced laboratory for low-temperature physics in the world, Shubnikov co-discovered the Shubnikov–de Haas effect: the first observation of quantum-mechanical oscillations of a physical quantity (in this case the resistance of bismuth) at low temperatures and high magnetic fields.
In 1930 Shubnikov went to Kharkov (as it is called in Russian) in Ukraine, where he built the first low-temperature laboratory in the Soviet Union. There he led an impressive scientific programme and, together with his team, he discovered what is now known as type-II superconductivity (or the Shubnikov phase) and nuclear paramagnetism. In addition, independently of and almost simultaneously with Meissner and Ochsenfeld, they observed the complete diamagnetism of superconductors (today known as the Meissner effect).
In 1937, aged just 36, Shubnikov was arrested under Stalin’s repressive regime and executed “for no other reason than that he had shown evidence of independent thought”, as the author states.
Based on thorough document research and a collection of memories from people who knew Shubnikov, this book will appeal not only to those curious about this physicist, but also to readers interested in the history of Soviet science, especially the development of Soviet physics in the 1930s and the impact that Stalin’s regime had on it.
Virginia Greco, CERN.
The Workshop and the World: What Ten Thinkers Can Teach Us About Science and Authority by Robert P Crease W. W. Norton & Company
In this book, science historian Robert Crease discusses the concept of scientific authority, how it has changed over the centuries, and how society and politicians interact with scientists and the scientific process – which he refers to as the “workshop”.
Crease begins with an introduction about current anti-science rhetoric and science denial – the most evident manifestation of which is probably the claim that “global warming is a hoax perpetrated by scientists with hidden agendas”.
Four sections follow. In part one, the author introduces the first articulation of scientific authority through the stories of three renowned scientists and philosophers: Francis Bacon, Galileo Galilei and René Descartes. Here, some vulnerabilities of the authority of the scientific workshop emerge, but they are discussed further in the second section of the book through the stories of thinkers like Giambattista Vico, Mary Shelley and Auguste Comte.
Part three attempts to understand the deeply complicated relationship between the workshop and the world, described through the stories of Max Weber, Kemal Atatürk and his precursors, and Edmund Husserl. The final section is all about reinventing authority, discussed through the work of Hannah Arendt, a thinker who barely escaped the Holocaust and who provided a deep analysis of authority as well as clues as to how to restore it.
With this brilliantly written essay, Crease aims to explore what practising science for the common good means and to understand what makes a social and political atmosphere in which science denial can flourish. Finally, Crease tries to suggest what can be done to ensure that science and scientists regain the trust of the people.
Strategy is a basis for prioritising resources in the pursuit of important goals. No strategy would be needed if enough resources were available – we would simply do whatever appears necessary.
Elementary particle physics generally requires large and expensive facilities, often on an international scale, which take a long time to develop and are heavy consumers of resources during operations. For this reason, in 2005 the CERN Council initiated a European Strategy for Particle Physics (ESPP), resulting in a document being adopted the following year. The strategy was updated in 2013 and the community is now working towards a second ESPP update (CERN Courier April 2018 p7).
The making of the ESPP has three elements: bottom-up activities driven by the scientific community through document submission and an open symposium (the latter to be held in Spain in May 2019); strategy drafting (to take place in Germany in January 2020) by scientists, who are mostly appointed by CERN member states; and the final discussion and approval by the CERN Council. Therefore, the final product should be an amalgamation of the wishes of the community and the political and financial constraints defined by state authorities. Experience of the previous ESPP update suggests that this is entirely achievable, but not without effort and compromise.
Of the four high-priority items in the current ESPP, concluded in 2013, three are well under way: the full exploitation of the LHC via a luminosity upgrade; R&D and design studies for a future energy-frontier machine at CERN; and establishing a platform at CERN for physicists to develop neutrino detectors for experiments around the world. The remaining item, relating to an initiative of the Japanese particle-physics community to host an international linear collider in Japan, has not made much progress.
In physics, discussions about strategy usually start with a principled statement: “Science should drive the strategy”. This is of course correct, but unfortunately not always sufficient in real life, since physics considerations alone do not usually provide a practical solution. In this context, it is worth recalling the discussion about long-baseline neutrino experiments that took place during the previous strategy exercises.
Optimal outcome
At the time of the first ESPP almost 15 years ago, so little was known about the neutrino mass-mixing parameters that several ambitious facilities were discussed in order to cover the necessary parameter space. Some resources were directed into R&D, but most probably they were too few and not well prioritised. In the meantime, it became clear that a state-of-the-art neutrino beam based on conventional technology would be sufficient to take the next necessary step of measuring the neutrino CP-violation parameter and mass hierarchy. What should be done was therefore clear from a scientific point of view, but there simply were not enough resources in Europe to construct a long-baseline neutrino experiment together with a high-performance beam line while fully exploiting the LHC at the same time. The optimal outcome was found by considering global opportunities, and this was one of the key ingredients that drove the strategy.
The challenge facing the community now in updating the current ESPP is to steer the field into the mid-2020s and beyond. As such, discussions about the various ideas for the next big machine at CERN will be an important focus, but numerous other projects, including proposals for non-collider experiments, will be jostling for attention. Many brilliant people are working in our field with many excellent ideas, with different strengths and weaknesses. The real issue of the strategy update is how we can optimise the resources using time and location, and possibly synergies with other scientific fields.
The intention of the strategy is to achieve a scientific goal. We may already disagree about what this goal is, since it is people with different visions, tastes and habits who conduct research. But let us at least agree, for now, that it is “to understand the most fundamental laws of nature”. Also, depending on the time scales, the relative importance of elements in the decision-making might change, and factors beyond Europe cannot be neglected. A strategy that cannot be implemented is not useful for anyone, and the key is to make a judgement on the balance among many elements. Lastly, we should not forget that the most exciting scenario for the ESPP update would be the appearance of an unexpected result – then there would be a real paradigm shift in particle physics.
In Lost in Math, theoretical physicist Sabine Hossenfelder embarks on a soul-searching journey across contemporary theoretical particle physics. She travels to various countries to interview some of the most influential figures of the field (but also some “outcasts”) to challenge them, and be challenged, about the role of beauty in the investigation of nature’s laws.
Colliding head-on with the lore of the field and with practically all popular-science literature, Hossenfelder argues that beauty is overrated. Some leading scientists say that their favourite theories are too beautiful not to be true, or possess such a rich mathematical structure that it would be a pity if nature did not abide by those rules. Hossenfelder retorts that physics is not mathematics, and names examples of extremely beautiful and rich maths that does not describe the world. She reminds us that physics is based on data. So, she wonders, what can be done when an entire field is starved of experimental breakthroughs?
Confirmation bias
Nobel laureate Steven Weinberg, interviewed for this book, argues that experts call “beauty” the experience-based feeling that a theory is on a good track. Hossenfelder is sceptical that this attitude really comes from experience. Maybe most of the people who chose to work in this field were attracted to it in the first place because they like mathematics and symmetries, and would not have worked in the field otherwise. We may be victims of confirmation bias: we choose to believe that aesthetic sense leads to correct theories; hence, we easily recall all the correct theories that possess some quality of beauty, while paying far less attention to the counterexamples. Dirac and Einstein, among many others, vocally affirmed beauty as a guiding principle, and achieved striking successes by following its guidance; however, they also had, as Hossenfelder points out, several spectacular failures that are less well known. Moreover, a theoretical sense of beauty is far from universal. Copernicus made a breakthrough because he sought a form of beauty that differed from those of his predecessors, making him think out of the box; and by today’s taste, Kepler’s solar system of platonic solids feels silly and repulsive.
Hossenfelder devotes attention to a concept that is particularly relevant to contemporary particle physics: the “naturalness principle” (see Understanding naturalness). Take the case of the Higgs mass: the textbook argument is that quantum corrections go wild for the Higgs boson, making any mass value between zero and the Planck mass a priori possible; however, its value happens to be closer to zero than to the Planck mass by a factor of 10^17. Hence, most particle physicists argue that there must be an almost perfect cancellation of corrections, a problem known as the “hierarchy problem”. Hossenfelder points out that implicit in this simple argument is that all values between zero and the Planck mass should be equally likely. “Why,” she asks, “are we assuming a flat probability, instead of a logarithmic (or whatever other function) one?” In general, we say that a new theory is necessary when a parameter value is unlikely, but she argues that we can estimate the likelihood of that value only when we have a prior probability distribution, for which we would need a new theory.
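To put numbers to the textbook argument that Hossenfelder dissects, a minimal sketch (using the standard one-loop estimate, with λ standing for a generic coupling and Λ for the cut-off scale – notation not taken from the book) reads:

$$\delta m_H^2 \sim \frac{\lambda}{16\pi^2}\,\Lambda^2, \qquad \frac{m_H}{M_{\rm Pl}} \approx \frac{1.25\times10^{2}\ \text{GeV}}{1.2\times10^{19}\ \text{GeV}} \approx 10^{-17},$$

so that, if Λ is of order the Planck mass, the bare mass term and the quantum correction must cancel to roughly one part in 10^34 in the squared mass – the fine-tuning at the heart of the hierarchy problem.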
New angles
Hossenfelder illustrates various popular solutions to this naturalness problem, which in essence all try to make small values of the Higgs mass much more likely than large ones. She also discusses string theory, as well as multiverse hypotheses and anthropic solutions, exposing their shortcomings. Some of her criticisms may recall Lee Smolin’s The Trouble with Physics and Peter Woit’s Not Even Wrong, but Hossenfelder brings new angles to the discussion.
This book comes out at a time when more and more specialists are questioning the validity of naturalness-inspired predictions. Many popular theories inspired by the naturalness problem share an empirical consequence: either they manifest themselves soon in existing experiments, or they definitively fail to solve the problems they were invented to address.
Hossenfelder describes in derogatory terms the typical argumentative structure of contemporary theory papers that predict new particles “just around the corner”, while explaining why they have not yet been observed. She finds the same attitude in what she calls the “di-photon diarrhoea”, i.e. the prolific reaction of the same theoretical community to a statistical fluctuation at a mass of around 750 GeV in the earliest data from the LHC’s Run 2.
The author explains complex matters at the cutting edge of theoretical physics research in a clear way, with original metaphors and appropriate illustrations. With this book, Hossenfelder not only reaches out to the public, but also invites it to join a discourse that she is clearly passionate about. The intended readership ranges from fellow scientists to the layperson, also including university administrators and science policy makers, as is made explicit in an appendix devoted to practical suggestions for various categories of readers.
While this book will mostly attract attention for its pars destruens, it also contains a pars construens. Hossenfelder argues for looking away from the lamppost, both theoretically and experimentally. Having painted naturalness arguments as a red herring that draws attention away from the real issues, and acknowledging throughout the book that, when data offer no guidance, there is no choice but to follow some non-empirical assessment criteria, she advocates criteria that deserve greater prominence, such as the internal consistency of the theoretical foundations of particle physics.
As a non-theorist my opinion carries little weight, but my gut feeling is that this direction of investigation, although undeniably crucial, is not comparably “fertile”. On the other hand, Hossenfelder makes it clear that she sees nothing scientific in this kind of fertility, and even argues that bibliometric obsessions played a big role in creating what she depicts as a gigantic bibliographical bubble. Inspired by that, Hossenfelder also advises learning how to recognise and mitigate biases, and building a culture of criticism both in the scientific arena and in response to policies that create short-term incentives and discourage the exploration of less conventional ideas. Regardless of what one may think about the merits of naturalness or other non-empirical criteria, I believe that these suggestions are uncontroversially worthy of consideration.
Andrea Giammanco, UCLouvain, Louvain-la-Neuve, Belgium.
Amaldi’s last letter to Fermi: a monologue Theatre, CERN Globe
11 September 2018
On the occasion of the 110th anniversary of the birth of Italian physicist Edoardo Amaldi (1908–1989), CERN hosted a new production titled “Amaldi l’italiano, centodieci e lode!” The title is a play on words concerning the top score at an Italian university (“110 cum laude”), and the production is a well-deserved recognition of a self-confessed “ideas shaker” who was one of the pioneers in the establishment of CERN, the European Space Agency (ESA) and the Italian National Institute for Nuclear Physics (INFN).
The nostalgic monologue opens with Amaldi, played by Corrado Calda, sitting at his desk and writing a letter to his mentor, Enrico Fermi. Set on the last day of Amaldi’s life, the play retraces some of his scientific, personal and historical memories, which pass by while he writes.
It begins in 1938 when Amaldi is part of an enthusiastic group of young scientists, led by Fermi and nicknamed “Via Panisperna boys” (boys from Panisperna Road, the location of the Physics Institute of the University of Rome). Their discoveries on slow neutrons led to Fermi’s Nobel Prize in Physics that year.
Then, suddenly, World War II begins and everything falls apart. Amaldi writes about his frustrations to his teacher, who had passed away but is still close to him. “While physicists were looking for physical laws, Europe sank into racial laws,” he despairs. Indeed, most of his colleagues and friends, including Fermi, who had a Jewish wife, moved to the US. Left alone in Italy, Amaldi decided to stop his studies on fission and focus on cosmic rays, a type of research that required fewer resources and was not related to military applications.
Out of the ruins
After World War II, while in Italy there was barely enough money to buy food, the US was building state-of-the-art particle-physics detectors. Amaldi described his strong temptation to cross the ocean and re-join Fermi. However, he decided to stay in war-torn Europe and help European science grow out of the ruins. He worked to achieve his dream of “a laboratory independent from military organisations, where scientists from all over the world could feel at home” – today known as CERN. He was general secretary of CERN between 1952 and 1954, before its official foundation in September 1954.
This beautiful monologue is interspersed with radio messages from the epoch, announcing salient historical facts. These create a period atmosphere that becomes less and less tense as alerts about Nazi declarations and bombings are replaced by news about the first women’s vote, the landing of the first person on the Moon, and disarmament movements.
Written and directed by Giusy Cafari Panico and Corrado Calda, the play was composed after consulting with Edoardo’s son, Ugo Amaldi, who was present at the inaugural performance. The script is so rich in information that you leave the theatre feeling you now know a lot about scientific endeavours, mindsets and the general zeitgeist of the last century. Moreover, the play touches on some topics that are still very relevant today, including: brain drain, European identity, women in science and the use of science for military purposes.
The event was made possible thanks to the initiative of Ugo Amaldi, CERN’s Lucio Rossi, the Edoardo Amaldi Association (Fondazione Piacenza e Vigevano, Italy), and several sponsors. The presentation was introduced by former CERN Director-General Luciano Maiani, who was Edoardo Amaldi’s student, and current CERN Director-General Fabiola Gianotti, who expressed her gratitude for Amaldi’s contribution in establishing CERN.
Letizia Diamante, CERN.
Topological and Non-Topological Solitons in Scalar Field Theories by Yakov M Shnir Cambridge University Press
In the 19th century, the Scottish engineer John Scott Russell was the first to observe what he called a “wave of translation” while watching a boat drawn along a channel by a pair of horses. This phenomenon is now referred to as a soliton and described mathematically as a stable, non-dissipative wave packet that maintains its shape while propagating at a constant velocity.
Solitons emerge in various nonlinear physical systems, from nonlinear optics and condensed matter to nuclear physics, cosmology and supersymmetric theories.
Structured in three parts, this book provides a comprehensive introduction to the description and construction of solitons in various models. In the first two chapters of part one, the author discusses the properties of topological solitons in the completely integrable sine-Gordon model and in non-integrable models with polynomial potentials. Then, in chapter three, he introduces solitary-wave solutions of the Korteweg–de Vries equation, which provide an example of non-topological solitons.
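For orientation, the two models mentioned above are most commonly written (in conventions that may differ from the book's) as

$$\partial_t^2\phi - \partial_x^2\phi + \sin\phi = 0 \quad \text{(sine-Gordon)}, \qquad \partial_t u + 6\,u\,\partial_x u + \partial_x^3 u = 0 \quad \text{(Korteweg–de Vries)},$$

the latter admitting the single-soliton solution $u(x,t)=\tfrac{c}{2}\,\mathrm{sech}^2\big[\tfrac{\sqrt{c}}{2}(x-ct-x_0)\big]$, a wave packet that travels at constant speed c without changing shape.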
Part two deals with higher-dimensional nonlinear theories. In particular, the properties of scalar soliton configurations are analysed in two (2+1)-dimensional systems: the O(3) nonlinear sigma model and the baby Skyrme model. Part three focuses mainly on solitons in three spatial dimensions. Here, the author covers stationary Q-balls and their properties. Then he discusses soliton configurations in the Skyrme model (called skyrmions) and the knotted solutions of the Faddeev–Skyrme model (hopfions). The properties of related deformed models, such as the Nicole and the Aratyn–Ferreira–Zimerman models, are also summarised.
Based on the author’s lecture notes for a graduate-level course, this book is addressed to graduate students in theoretical physics and mathematics, as well as to researchers interested in solitons.
Virginia Greco, CERN.
Universal Themes of Bose–Einstein Condensation by Nick P Proukakis, David W Snoke and Peter B Littlewood Cambridge University Press
The study of Bose–Einstein condensation (BEC) has undergone an incredible expansion during the last 25 years. Back then, the only experimentally realised Bose condensate was liquid helium-4, whereas today the phenomenon has been observed in a number of diverse atomic, optical and condensed-matter systems. The turning point for BEC came in 1995, when three different US groups reported the observation of BEC in trapped, weakly interacting atomic gases of rubidium-87, lithium-7 and sodium-23 within weeks of one another. These studies led to the 2001 Nobel Prize in Physics being jointly awarded to Eric Cornell, Wolfgang Ketterle and Carl Wieman.
This book is a collection of essays written by leading experts on various aspects and branches of BEC, which is now a broad and interdisciplinary area of modern physics. The volume starts with the history of the rapid development of this field and then takes the reader through the most important results.
The second part provides an extensive overview of various general themes related to universal features of Bose–Einstein condensates, such as the question of whether BEC involves spontaneous symmetry breaking, of how the ideal Bose gas condensation is modified by interactions between the particles, and the concept of universality and scale invariance in cold-atom systems. Part three focuses on active research topics in ultracold environments, including optical lattice experiments, the study of distinct sound velocities in ultracold atomic gases – which has shaped our current understanding of superfluid helium – and quantum turbulence in atomic condensates.
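As a point of reference for the ideal-gas baseline that these interaction effects modify, the textbook condensation temperature of a uniform, non-interacting Bose gas of particle mass m and density n is (a standard result, not a formula quoted from the book):

$$k_B T_c = \frac{2\pi\hbar^2}{m}\left(\frac{n}{\zeta(3/2)}\right)^{2/3}, \qquad \zeta(3/2)\approx 2.612.$$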
Part four is dedicated to the study of condensed-matter systems that exhibit various features of BEC, while part five discusses possible applications of condensed-matter and BEC studies to questions on astrophysical scales.
Virginia Greco, CERN.
Zeros of Polynomials and Solvable Nonlinear Evolution Equations by Francesco Calogero Cambridge University Press
This concise book discusses the mathematical tools used to model complex phenomena via systems of nonlinear equations, which can be useful to describe many-body problems.
Starting from a well-established approach to the identification of solvable dynamical systems, the author proposes a novel algorithm that removes some of the restrictions of this approach and thus identifies more solvable/integrable N-body problems. After presenting this new differential algorithm for evaluating all the zeros of a generic polynomial of arbitrary degree, the book gives many examples to show its application and impact. The author first discusses systems of ordinary differential equations (ODEs), including second-order ODEs of Newtonian type, and then moves on to systems of partial differential equations and equations evolving in discrete time steps.
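The correspondence underlying this class of models – between the time evolution of a polynomial's coefficients and the motion of its zeros, which play the role of the "particles" of an N-body problem – can be illustrated numerically. The Python sketch below is not the author's differential algorithm; it simply tracks the zeros of a monic polynomial whose coefficients follow a hypothetical, linear time dependence.

```python
# Illustrative sketch only: not the differential algorithm proposed in the book,
# merely a numerical illustration of the zeros/coefficients correspondence.
import numpy as np

def zero_trajectories(coeff_of_t, times):
    """Track the zeros of a monic polynomial whose non-leading coefficients
    evolve in time.

    coeff_of_t: callable t -> sequence [c_{N-1}, ..., c_0], defining the
                polynomial z^N + c_{N-1} z^{N-1} + ... + c_0.
    """
    zeros = []
    for t in times:
        coeffs = np.concatenate(([1.0], np.asarray(coeff_of_t(t), dtype=complex)))
        zeros.append(np.sort_complex(np.roots(coeffs)))  # zeros act as "particle" positions
    return np.array(zeros)

# Hypothetical example: three coefficients evolving linearly in time.
times = np.linspace(0.0, 1.0, 11)
traj = zero_trajectories(lambda t: [1.0 + t, -2.0, 0.5 * t], times)
print(traj.shape)  # (11, 3): 11 time steps, 3 zeros per step
```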
This book is addressed to both applied mathematicians and theoretical physicists, and can be used as a basic text for a topical course for advanced undergraduates.
Advances in particle physics are driven by well-defined innovations in accelerators, instrumentation, electronics, computing and data-analysis techniques. Yet our ability to innovate depends strongly on the talents of individuals, and on how we continue to attract and foster the best people. It is therefore vital that, within today’s ever-growing collaborations, individual researchers feel that their contributions are recognised adequately within the scientific community at large.
Looking back to the time before large accelerators, individual recognition was not an issue in our field. Take Rutherford’s revolutionary work on the nucleus or, more recently, Cowan and Reines’ discovery of the neutrino – there were perhaps a couple of people working in a lab, at most with a technician, yet acknowledgement was on a global scale. There was no need for project management; individual recognition was spot-on and instinctive.
As high-energy physics progressed, the needs of experiments grew. During the 1980s, experiments such as UA1 and UA2 at the Super Proton Synchrotron (SPS) involved institutions from around five to eight countries, setting in motion a “natural evolution” of individual recognition. From those experiments, in which mentoring in family-sized groups played a big role, emerged spontaneous leaders, some of whom went on to head experimental physics groups, departments and laboratories. Moving into the 1990s, project management and individual recognition became even more pertinent. In the experiments at the Large Electron–Positron collider (LEP), the number of physicists, engineers and technicians working together rose by an order of magnitude compared to the SPS days, with up to 30 participating institutions and 20 countries involved in a given experiment.
Today, with the LHC experiments providing an even bigger jump in scale, we must ask ourselves: are we making our immense scientific progress at the expense of individual recognition?
Group goals
Large collaborations have been very successful, and the discovery of the Higgs boson at the LHC had a big impact on our community. Today there are more than 5000 physicists from institutions in more than 40 countries working on the main LHC experiments, and this mammoth scale demands a change in the way we nurture individual recognition and careers. In scientific collaborations with a collective mission, group goals are placed above personal ambition. For example, many of us spend hundreds of hours in the pit or carry out computing and software tasks to make sure our experiments deliver the best data, even though some of this collective work isn’t always “visible”. However, there are increasing challenges nowadays, particularly for young scientists who need to navigate the difficulties of balancing their aspirations. Larger collaborations mean there are many more PhD students and postdocs, while the number of permanent jobs has not increased correspondingly; hence we also need to prepare early-career researchers for a non-academic career.
To fully exploit the potential of large collaborations, we need to bring every single person to maximum effectiveness by motivating and stimulating individual recognition and career choices. With this in mind, in spring 2018 the European Committee for Future Accelerators (ECFA) established a working group to investigate what the community thinks about individual recognition in large collaborations. Following an initial survey addressing leaders of several CERN and CERN-recognised experiments, a community-wide survey closed on 26 October with a total of 1347 responses.
Community survey
Participants expressed opinions on several statements related to how they perceive systems of recognition in their collaboration. More than 80% of the participants are involved in LHC experiments, and researchers from most European countries were well represented. Just under half (44%) were permanent staff members at their institute, with the rest comprising around 300 PhD students and 440 postdocs or junior staff. Participants were asked to indicate their level of agreement with a list of statements related to individual recognition. Each answer was quantified and the score distributions were compared between groups of participants, for instance according to career position, experiment, collaboration size, country, age, gender and discipline. Some initial findings are listed below, while the full breakdown of results – comprising hundreds of plots – is available at https://ecfa.web.cern.ch.
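The survey's exact scoring scheme is not specified here, so the short Python sketch below assumes a hypothetical five-point Likert mapping purely to illustrate the kind of group-wise comparison of scores described above; neither the mapping nor the data are taken from the survey.

```python
# Hypothetical illustration: the scoring scheme and data are assumptions,
# not the survey's actual methodology or results.
import statistics

LIKERT = {"strongly disagree": 1, "disagree": 2, "neutral": 3,
          "agree": 4, "strongly agree": 5}

def mean_scores_by_group(responses):
    """responses: iterable of (group, answer) pairs, e.g. ("PhD student", "agree")."""
    by_group = {}
    for group, answer in responses:
        by_group.setdefault(group, []).append(LIKERT[answer])
    return {group: statistics.mean(scores) for group, scores in by_group.items()}

# Toy usage with made-up data:
sample = [("PhD student", "agree"), ("PhD student", "neutral"),
          ("permanent staff", "strongly agree")]
print(mean_scores_by_group(sample))  # {'PhD student': 3.5, 'permanent staff': 5}
```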
Conferences: “The collaboration guidelines for speakers at conferences allow me to be creative and demonstrate my talents.” Overall, participants from the LHCb collaboration agree more with this statement than those from CMS and especially ATLAS. For younger participants this sentiment is more pronounced. Respondents affirmed that conference talks are an outstanding opportunity to demonstrate their creativity and scientific insight to the broader community, and are perceived to be one of the most important aspects in verifying the success of a scientist.
Publications: “For me it is important to be included as an author of all collaboration-wide papers.” Although the effect is less pronounced for participants from very large collaborations, they value being included as authors on collaboration-wide publications. The alphabetic listing of authors is also supported, at all career stages. Participants had divided opinions when it came to alternatives.
Assigned responsibilities: “I perceive that profiles of positions with responsibility are well known outside the particle-physics community.” The further away from the collaboration, the more challenging it becomes to inform people about the role of a convener, yet selection as a convener is perceived to be very important in verifying the success of a scientist in our field. The majority of the participating early-career researchers are neutral or do not agree with the statement that the process of selecting conveners is sufficiently transparent and accessible.
Technical contributions: “I perceive that my technical contributions get adequate recognition in the particle-physics community.” Hardware and software technical work is at the core of particle-physics experiments, yet it remains challenging to recognise these contributions inside, and especially outside, the collaboration.
Scientific notes: “Scientific notes on analysis methods, detector and physics simulations, novel algorithms, software developments, etc, would be valuable for me as a new class of open publications to recognise individual contributions.” Although participants have very diverse opinions when it comes to making the internal collaboration notes public, they would value the opportunity to write down their novel and creative technical ideas in a new class of public notes.
Beyond disseminating the results of the survey, ECFA will reflect on how it can help to strengthen the recognition of individual achievements in large collaborations. The LHC experiments and other large collaborations have expressed openness to enter a dialogue on the topic, and will be invited by ECFA to join a pan-collaboration working group. This will help to relate observations from the survey to current practices in the collaborations, with the aim of keeping particle physics fit and healthy towards the next generation of experiments.