Learning machine learning

Electromagnetic shower identification

Machine learning, whereby the ability of a computer to perform an intelligent task progressively improves, has penetrated many scientific domains. It allows researchers to tackle problems from a completely new perspective, enabling improvements to things previously thought solved for good. The downside of machine learning is that the field itself is developing so quickly, with new techniques popping up at an incredible rate, that it is hard to keep up. What is needed is some sort of high-level trigger to discriminate between good and bad, and to guide a growing community of users in a systematic way.

Machine-learning techniques are already in wide use in particle physics, and they will only become more prevalent during the coming years of the high-luminosity LHC and future colliders. Online data processing, offline data analysis, fast Monte Carlo generation techniques and detector-upgrade optimisation are just a few examples of the areas that could profit significantly from smarter algorithms (see The rise of deep learning).

The most remarkable growth trend in machine learning today, and one that has also been heavily hyped, concerns so-called deep learning. Although there is no strict boundary, a neural network with fewer than four layers is considered “shallow”, while one with more than 10 layers and many thousands of connections is considered “deep”. Using deep-learning algorithms, together with powerful computing resources and extremely large datasets, researchers have managed to break important barriers in tasks such as text translation, voice recognition and image segmentation, and even to master the game of Go. Many of the educational materials one can find on the internet are therefore focused on typical tasks such as image recognition, annotation, segmentation, text processing and pattern generation.

Since most of these are conveyed in computer-science language, there is an obvious language barrier for domain-specific scientists, such as particle physicists, who have to learn a new technique and apply it to their own research. Another complication is that there are a variety of machine-learning methods capable of solving particular problems and a plenitude of tools (i.e. different languages, packages and platforms) out there – almost all of which are online – with which to implement those methods.

Targeting particle physics

As machine learning spreads into new domains such as astrophysics and biology, schools that focus on problems in specific areas are becoming more popular. Historically, there have been several summer schools for particle physicists focused on data analysis, computing and statistical learning – in particular the CERN School of Computing, the INFN School of Statistics and the CMS Data Analysis School. But, until 2014, none focused specifically on machine learning. In that year, a school series with the straightforward title Machine Learning in High-Energy Physics (MLHEP) was launched.

MLHEP grew out of the well-established Yandex School of Data Analysis (YSDA), a non-commercial educational organisation funded by the Russia-based internet firm Yandex. Over the past decade, YSDA has grown to receive several thousand applications per year, out of which around 200 people pass the entrance exams and around 50 graduate in conjunction with leading Russian universities – almost all of them finding data-science positions in the private sector.

Simulated overlapping electromagnetic showers

In 2015, YSDA joined the LHCb collaboration. The goal was to help optimise LHCb’s high-level trigger system to improve its efficiency for selecting B-decay events, and the result of the LHCb-YSDA collaboration was an efficiency gain of up to 60% compared to that obtained during LHC Run I. Another early joint effort between YSDA, CERN and MIT within LHCb was the design of decision-tree algorithms capable of decorrelating their output from a given variable, such as invariant mass.

The first MLHEP schools in 2015 and 2016 were satellite events at the Large Hadron Collider Physics (LHCP) conference held in St. Petersburg and Lund, respectively. Another key contributor to the school was the faculty of computer science at Russia’s Higher School of Economics (HSE), which was founded in 2014 by Yandex. MLHEP 2017 was organised by Imperial College London in the UK, and the 2018 school takes place in Oxford at the beginning of August.

The topics covered during the schools usually start from the basic aspects of machine learning – loss functions, optimisation methods and validation of predictive-model quality – and stretch towards advanced techniques such as generative adversarial networks and Bayesian optimisation. The curriculum is not static, and each year the focus changes to address the most interesting and promising trends in deep learning while providing an overview of the various techniques available on the market. At the 2018 school, speakers were invited both from academia and from companies, including Oracle, Nvidia, Yandex and DeepMind.
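
To give a flavour of the entry-level material, the following minimal sketch (in Python with NumPy, invented for illustration rather than taken from the school’s actual exercises) puts three of those basic ingredients together: a logistic loss function, gradient-descent optimisation and a held-out validation split for checking predictive-model quality.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-class dataset standing in for "signal" vs "background" (hypothetical features)
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(float)

# Train/validation split used to validate the quality of the predictive model
X_train, X_val = X[:400], X[400:]
y_train, y_val = y[:400], y[400:]

def log_loss(w, X, y):
    """Logistic (cross-entropy) loss of a linear model with weights w."""
    p = 1.0 / (1.0 + np.exp(-X @ w))
    return -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))

# Gradient descent on the training loss
w = np.zeros(3)
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-X_train @ w))
    w -= 0.5 * X_train.T @ (p - y_train) / len(y_train)

print(f"train loss {log_loss(w, X_train, y_train):.3f}, "
      f"validation loss {log_loss(w, X_val, y_val):.3f}")
```

Comparing the training and validation losses in the last line is the simplest form of the model-quality validation mentioned above; the more advanced topics build on exactly these ingredients.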

Breaking the language barrier

Some people compare deep learning not with a tool or platform, but with a language that allows a researcher to express computational “sentences” addressing a particular problem.

To reinforce the language analogy, recall that there is no solid theory of deep learning yet; in a sense it is just a collection of best practices and approaches that have proven to work in several important cases. Much of the time during MLHEP classes is therefore devoted to practical exercises. Students are also encouraged to enter a data-science competition related to particle physics – e.g. tracking for the Coherent Muon to Electron Transition (COMET) experiment and event selection for the Higgs-boson discovery by the ATLAS and CMS experiments. The competition is published on the machine-learning competition platform kaggle.com at the start of the school, and is open to anyone who wants to get more machine-learning practice.

For summer 2017, the competition was organised together with the OPERA and SHiP collaborations. The goal was to analyse volumes of nuclear emulsion collected by OPERA that contain numerous cosmic-ray background tracks as well as tracks from electromagnetic showers. These shower-like structures are of interest to OPERA for the analysis of tau-neutrino interactions, so special algorithms have to be developed. Such algorithms are also very relevant to the SHiP experiment, which aims to use emulsion-based detectors to search for hidden-sector particles at CERN. According to some theoretical models, such showers might be closely related to hidden-sector particles interacting with regular matter (e.g. elastic scattering of very weakly interacting particles off electrons or nuclei), so a separate task would be to discriminate these showers from neutrino interactions. The performance of the algorithms designed by participants was impressive. The winner of the challenge presented his solution at the SHiP collaboration meeting in November 2017 and was invited by OPERA to continue the collaboration.

A major part of the MLHEP curriculum is given by YSDA/HSE lecturers, while guest speakers help to broaden the view of machine-learning challenges and methods. The school is non-commercial, and its success depends on external contributions from the HSE, YSDA, local organisers and commercial sponsors. For the past two years we have been supported by the Marie Skłodowska-Curie training network AMVA4NewPhysics, which has also sent several PhD students to the school.

The format of the summer school is very productive, allowing students to dive into the topics without distraction. The school materials also remain available on GitHub, allowing students to access them whenever they want. As time goes by, basic machine-learning courses are becoming more readily available online, giving us the chance to introduce more advanced topics every year and to keep up with the rapid developments in this field.

Creativity across cultures

HALO

Lift up your eyes as you walk through the principal entrance to CERN’s main building and you will see a tangled iron coil suspended above the central staircase. Rather like electron orbitals marking out the shape of an atom, the structure’s overlapping lines form hints of something more tangible that changes as you move – a human body. Here, in his sculpture Feeling Material XXXIV, the artist Antony Gormley has spun a chaotic spiralling line that envelops the body’s space.

Artists, like scientists, have always been keen observers of the world about them and Gormley is no exception. It was his interest in how spaces are delineated that led to his first contacts with CERN physicist Michael Doser in 2006, and ultimately to his donation of Feeling Material XXXIV to the Organization in 2008. Over the years many artists have visited CERN, intrigued by its research; the American performance artist James Lee Byars even featured on the cover of CERN Courier in September 1972. And in the 1990s, British artist and film-maker Ken McMullen visited the laboratory as a result of his friendship with the daughter of the late Maurice Jacob, a well-known CERN theorist. The visit sowed the seeds for a major project, Signatures of the Invisible, based on a collaboration between the London Institute and CERN. This project brought 11 established artists from various countries, including McMullen, to work with scientists and technicians at CERN during 1999–2000, resulting in works of art that were exhibited worldwide (CERN Courier May 2001 p23).

Bello and Kim

The experience proved rewarding for both sides. Writing in the Courier (July 2001, p30), Ian Sexton, the CERN technician who worked with laser cutting and other techniques on McMullen’s piece Crumpled Theory, described his pleasure at seeing the artist’s first sight of the completed work “simply presented on the workshop floor, with sunlight streaming through the blinds. Ken was … delighted. His enthusiasm was a most unusual experience for me. Normally on completion of a job at CERN a perfunctory ‘thank you’ is the only response.”

The project had involved a significant commitment by CERN. The Press Office managed the project on the Organization’s behalf, a number of scientists became deeply involved, and the artists were offered the use of the laboratory’s workshop – all of which implied a great deal of disruption and additional work for those concerned. So perhaps there were reservations in the minds of some at CERN when a new “science and art” initiative began to take shape. In 2009, creative producer Ariane Koek decided to use the award of a Clore Fellowship to come to CERN and – with the encouragement of the Director-General at the time, Rolf Heuer – work out how to establish and fund an artists’ residency scheme. Heuer was suitably impressed by her proposals and the following year, after a selection process, Koek was taken on to set up Arts at CERN.

Cultural policy

These efforts bore fruit in August 2011 with the launch of CERN’s first-ever cultural policy. Its central element is a selection process for arts engagement with CERN, with a cultural board for the arts to advise on projects and collaborations involving CERN. The initiative brought order and direction to what had been an ad-hoc approach to CERN’s involvement with the arts.


The first outwardly visible outcome of Arts at CERN was a competition, Collide at CERN, announced in 2011 and open to artists from anywhere in the world. A key element was to pair winning artists with scientists at CERN during a residency lasting up to three months. One strand in this award – the Prix Ars Electronica Collide @ CERN prize for Digital Arts – was set up in collaboration with Austria-based digital arts organisation Ars Electronica, and the residency consisted of two months at CERN and one month at Ars Electronica’s research and development lab. The second strand – Collide @ CERN Geneva – marked a partnership with the City and Canton of Geneva, and in the first year was for dance and performance.

Antye Greie-Ripatti

The partnership with Ars Electronica was a coup for CERN and the new cultural policy. Over 40 years, Ars Electronica had built up a formidable reputation in bringing artists, scientists and engineers together. Widely publicised by CERN, the partnership was well received in the arts world, but it was perhaps not so well understood at CERN. Was this something that CERN should be doing and who was paying for it all?

Heuer, who was instrumental in initiating Arts at CERN, was always clear on the first point. “The arts and science are inextricably linked; both are ways of exploring our existence, what it is to be human and what is our place in the universe,” he said on launching the cultural policy. Commenting later after three years of successful partnership with Ars Electronica, he wrote: “The level of heated debate about the so-called two-cultures is a constant source of bafflement to me. Of course arts and science are linked. Both are about creativity. Both require technical mastery. And both are about exploring the limits of human potential.”

Feeling Material XXXIV

Regarding the second point, Arts at CERN was conceived from the start to be mainly self-funding. Support from CERN initially came through its programmes for fellows and students. Funding for the original Collide at CERN programme came from Ars Electronica, the City and the Canton of Geneva, and individual private donors, as well as from the UNIQA Insurance Group, which has a long association with CERN and continues to sponsor the Collide programme. Currently, FACT (Foundation for Art and Creative Technology), based in Liverpool in the UK, is the main partner for the Collide International award, while the Republic and Canton of Geneva and the City of Geneva support the Collide Geneva strand. Collide Geneva is now awarded in alternate years with Collide Pro Helvetia, in which artists from across Switzerland can compete for a residency funded principally by Pro Helvetia, the Swiss Arts Council. In addition, via a slightly different scheme called Accelerate, each year ministries or foundations in two different countries fund two artists working in different domains to come to CERN for one month. A further essential strand that has existed from the beginning brings many more artists to the Laboratory. Originally named Visiting Artists, now called Guest Artists, it hosts up to 10 artists a year who are specially selected for a visit of one to two days which they fund themselves. Together these three strands form the Arts at CERN programme.

A new era begins

By autumn 2014, when the call went out for a new person to head Arts at CERN, the programme had already earned a global reputation. Two internationally known artists and recipients of the Collide International award epitomise this reach: Bill Fontana and Ryoji Ikeda. Sound-sculptor Fontana, from the US, had produced works based on sounds across the globe when he was awarded the 2013 international residency. He explored sounds recorded in the LHC tunnel in works such as Acoustic Time Travel (CERN Courier December 2012 p32), and was followed a year later by Ikeda, Japan’s leading electronic composer and visual artist, who used his residency to inform his works supersymmetry and micro|macro.

Quantum

This growing reputation within the science and art scene appealed in particular to the art historian and curator Mónica Bello, who had more than 15 years’ experience in curating and managing cultural programmes in art, science and technology institutions in different countries and had spent five years as artistic director of the VIDA International Art and Artificial Life Awards. Educated in modern and contemporary art history, she had been exposed to new ideas emerging at the boundaries of modern art and become passionate about the fusion between science and art. “I like art that is based on open processes, where different agents – the artists, researchers, even the audience – can join together to become the project,” she explains. “Experimentation with openness is the most exciting thing that’s happening in the arts right now – and CERN is the place to be for that.”

Bello took up her position at CERN in March 2015, joining co-ordinator Julian Caló. In accelerator terms, by the following year, Arts at CERN was already running at its design energy and beyond. The programme was bringing artists to the laboratory for as many as four residencies a year, and the number of entries for the Collide International award had risen from some 400 when it was launched in 2011 to around 1000.

Bello’s vision is to move the focus beyond exploration and artistic research towards the further development of new art commissions and exhibitions. Continuing to support the artists once they finish their residencies at CERN is essential for this, and one way is to connect the artists with CERN scientists who have links to the cities of the programmes’ partners. This was initiated with Liverpool, where artists spent a one-month residency at FACT after being at CERN and were connected with research groups at Liverpool University led by LHCb physicist Tara Shears. Connecting CERN to international cultural organisations is part of the same objective, through links formed with different cities and countries.

These new developments are fully supported by CERN’s current management. Earlier this year, Director-General Fabiola Gianotti made her views on the “two cultures” clear at the World Economic Forum in Davos: “Too often people put science and humanities, or science and the arts, in different compartments… but they do have much in common. They are the highest expression of the creativity and the curiosity of humanity. We should really talk about culture in general, and not focus on one particular sector of culture. This is an important message we should be giving to teachers and to young people, for a better world, so they can grow to face the challenges of society.”

Semiconductor duo

Arts at CERN currently has a clear home within CERN’s international relations sector. The aim is to provide stability for the programme, with a view to making it self-sustaining with separate funding within the context of the CERN & Society Foundation. At the same time, Arts at CERN forms part of a broader interest at CERN in the arts, which includes a distinctive and complementary programme Arts@CMS. This education and outreach initiative of the CMS collaboration has set up school-based projects and collaborations with artists with the aim of inspiring a greater appreciation of CERN’s science within the public at large.

A new production scheme

Arts at CERN has clearly been a resounding success with the arts community, and reaching audiences beyond the confines of the laboratory has proved no problem at all. Nor has it been difficult to find scientists willing to work with the artists; more than 200 have so far been involved. But it is by no means easy to mount an exhibition at a scientific laboratory – in places almost an industrial site – where health and safety are of paramount importance. There have been some obvious artistic interventions, such as when choreographer Gilles Jobin, winner of the 2012 Collide Geneva award, installed dancers in the CERN restaurant and computer centre, and even the library; and his project Quantum – a fusion of dance and lighting installation developed with Julius von Bismarck, the first Collide International artist-in-residence – debuted in the CMS cavern during CERN’s open days in 2013, before embarking on an international tour (CERN Courier November 2013 p29).


More recently, as part of Geneva’s annual Electron Festival in 2016, the winners of the 2014 Collide Geneva award, Rudy Decelière and Vincent Hänni, showed work developed with experimentalist Robert Kieffer and theorist Diego Blas. Their sound installation Horizons Irrésolus (2016) – which consists of 888 micro-synthesisers and speakers, network cable and nylon thread – was installed at CERN for visits during the Easter weekend when the festival traditionally takes place. The effort required by many people at CERN included registration to allow access to the Meyrin site and a shuttle bus to take visitors to see the installation. The response was enthusiastic, but not large.

It is to address such problems that Bello is developing the production and exhibition stages of the residencies. Rather as a scientific experiment evolves from conception to data-collection, analysis and publication, so does an artistic endeavour evolve from exploration to production and exhibition. The original focus of the residencies at CERN was on exploration: having artists and scientists come together to evolve ideas. In the new phase for Arts at CERN, at the end of their residencies, artists will be invited to propose a work to be considered for additional funding for production. The aim is to collaborate with other institutes to co-produce cultural works for ideas that are worth developing, and to curate the resulting work so that it can be shown and shared with the CERN community.

Jan Peters

In 2016 CERN began a new collaboration for the Collide International award involving FACT, ushering in the production phase. To support production of the artworks, CERN and FACT have brought together several important European cultural organisations under the umbrella of ScANNER (Science and Art Network for New Exhibitions and Research). Supported by ScANNER, a major exhibition will open at FACT in November 2018 showcasing artworks from, among others, the 2018 Collide International award winner and three previous winners: Semiconductor (2015), Yunchul Kim (2016), studio hrm199 led by Haroon Mirza (2017) and Suzanne Treister (2018). The exhibition will then tour all venues of the ScANNER members during 2019 and 2020.

Meanwhile, Arts at CERN continues to be a major influence across an impressive range of artistic areas. For example, in her project Quantum Nuggets, designer Laura Couto (Collide Pro Helvetia award 2017) has developed a computer program to enable other artists and designers to produce 3D shapes based on collision data from the LHC, thus creating real objects, such as furniture, that echo the invisible quantum world of particle physics. And in February this year Cheolwon Chang (Accelerate Korea) spent time at CERN finding out about the geometric properties of nature and how mathematics influences our further understanding of the universe. The winners of two awards for 2018 were announced in March: Suzanne Treister (Collide International) and Anne Sylvie Henchoz and Julie Lang (Collide Geneva).

Most recently, a prestigious commission for Art Basel, held on 11–17 June and guest-curated by Bello, has highlighted the heights that Arts at CERN is reaching. Swiss watchmaker Audemars Piguet, a partner of Art Basel, chose the British artist duo Semiconductor to create the Audemars Piguet Art Commission for the 2018 fair. Ruth Jarman and Joe Gerhardt, who work together under the name Semiconductor, were the recipients of the 2015 Collide International award, and for Art Basel they created HALO – an installation that surrounds visitors with data collected by the ATLAS experiment at the LHC. HALO consists of a 10 m-wide cylinder defined by vertical piano wires, within which a 4 m-tall screen displays particle collisions. The data also trigger hammers that strike the wires, setting up vibrations to create a multisensory experience.

This important commission is testament to the impact that the Arts at CERN programme is having in the world of contemporary art, and underlines its importance in bringing together apparently disparate ways of viewing and making sense of the world and the universe in which we live. There are many people who say they do not appreciate modern art, just as there are many who say that they never liked physics. But with modern art, just as with modern physics, making a little effort can open up remarkable new ways of thinking about our place in space and time. Arts at CERN is very clearly bringing people together in ways that open their minds and allow them not necessarily to understand, but to appreciate, how others view the world about us.


The rise of deep learning

It is 1965 and workers at CERN are busy analysing photographs of the trajectories of particles travelling through a bubble chamber. These “scanners” were employed by CERN and laboratories across the world to examine countless such photographs by hand, seeking to identify specific patterns contained in them. It was their painstaking work – which required significant skill and a lot of visual effort – that put particle physics into high gear. Researchers used the photographs (see figures 1 and 3) to make discoveries that would form a cornerstone of the Standard Model of particle physics, such as the observation of weak neutral currents with the Gargamelle bubble chamber in 1973.

In the subsequent decades the field moved away from photographs to collision data collected with electronic detectors. Not only had data volumes become unmanageable, but Moore’s law had begun to take hold and a revolution in computing power was under way. The marriage between high-energy physics and computing was to become one of the most fruitful in science. Today, the Large Hadron Collider (LHC), with its hundreds of millions of proton–proton collisions per second, generates data at a rate of 25 GB/s – leading the CERN data centre to pass the milestone of 200 PB of permanently archived data last summer. Modelling, filtering and analysing such datasets would be impossible had the high-energy-physics community not invested heavily in computing and a distributed-computing network called the Grid.

Learning revolution

The next paradigm change in computing, now under way, is based on artificial intelligence. The so-called deep learning revolution of the late 2000s has significantly changed how scientific data analysis is performed, and has brought machine-learning techniques to the forefront of particle-physics analysis. Such techniques offer advances in areas ranging from event selection to particle identification to event simulation, accelerating progress in the field while offering considerable savings in resources. In many cases, images of particle tracks are making a comeback – although in a slightly different form from their 1960s counterparts.

Fig. 1.

Artificial neural networks are at the centre of the deep learning revolution. These algorithms are loosely based on the structure of biological brains, which consist of networks of neurons interconnected by signal-carrying synapses. In artificial neural networks these two entities – neurons and synapses – are represented by mathematical equivalents. During the algorithm’s “training” stage, the values of parameters such as the weights representing the synapses are modified to lower the overall error rate and improve the performance of the network for a particular task. Possible tasks vary from identifying images of people’s faces to isolating the particles into which the Higgs boson decays from a background of identical particles produced by other Standard Model processes.
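
As a rough illustration of that training stage – and emphatically not production code from any experiment – the following NumPy sketch builds a tiny network of artificial “neurons” and “synapses” (weights) and repeatedly nudges the weights to lower a squared error on an invented dataset, with the gradients written out by hand up to constant factors.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy inputs and targets standing in for training data: label depends non-linearly on the inputs
X = rng.normal(size=(256, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(float).reshape(-1, 1)

# "Synapses" of a small network: 2 inputs -> 8 hidden neurons -> 1 output
W1, b1 = rng.normal(scale=1.0, size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(scale=1.0, size=(8, 1)), np.zeros(1)
lr = 1.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(3000):
    # Forward pass: each neuron forms a weighted sum and applies a non-linearity
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # "Training": adjust the weights to lower the squared error (hand-written backpropagation)
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out / len(X); b2 -= lr * d_out.mean(axis=0)
    W1 -= lr * X.T @ d_h / len(X);   b1 -= lr * d_h.mean(axis=0)

print("final mean squared error:", float(np.mean((out - y) ** 2)))
```

Real networks differ mainly in scale: deep architectures stack many such layers, and the gradients are computed automatically by software frameworks rather than by hand.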

Artificial neural networks have been around since the 1960s. But it took several decades of theoretical and computational development for these algorithms to outperform humans in some specific tasks. For example: in 1996, IBM’s chess-playing computer Deep Blue won its first game against the then world chess champion Garry Kasparov; in 2016 Google DeepMind’s AlphaGo deep neural-network algorithm defeated the best human players in the game of Go; modern self-driving cars are powered by deep neural networks; and in December 2017 the latest DeepMind algorithm, called AlphaZero, learned how to play chess in just four hours and defeated the world’s best chess-playing computer program. So important is artificial intelligence in potentially addressing intractable challenges that the world’s leading economies are establishing dedicated investment programmes to better harness its power.

Computer vision

The immense computing and data challenges of high-energy physics are ideally suited to modern machine-learning algorithms. Because the signals measured by particle detectors are stored digitally, it is possible to recreate an image from the outcome of particle collisions. This is most easily seen for cases where detectors offer discrete, pixelised position information, such as in some neutrino experiments, but it also applies, on a more complex basis, to collider experiments. Soon after computer-vision techniques based on so-called convolutional neural networks (figure 2) were applied to the analysis of everyday images, particle physicists applied them to detector images – first of jets and then of photons, muons and neutrinos – simplifying the task of understanding ever-larger and more abstract datasets and making it more intuitive.
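
To make the connection to detector images concrete, here is a minimal, purely illustrative NumPy sketch of the convolution operation at the heart of such networks, applied to a toy pixelised “calorimeter image”; in a real convolutional network the filter weights are learned during training and optimised library implementations are used.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical 16x16 "detector image": energy deposited per cell/pixel
image = np.zeros((16, 16))
image[6:10, 6:10] = rng.exponential(1.0, size=(4, 4))   # a localised, shower-like blob

# A single 3x3 convolutional filter (in a trained network these weights are learned)
kernel = rng.normal(scale=0.1, size=(3, 3))

def conv2d(img, k):
    """Valid 2D convolution: slide the filter over the image, summing element-wise products."""
    kh, kw = k.shape
    out = np.zeros((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

feature_map = np.maximum(conv2d(image, kernel), 0.0)   # ReLU non-linearity
print("feature-map shape:", feature_map.shape)          # (14, 14)
```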

Fig. 2.

Particle physicists were among the first to use artificial-intelligence techniques in software development, data analysis and theoretical calculations. The first of a series of workshops on this topic, titled Artificial Intelligence in High-Energy and Nuclear Physics (AIHENP), was held in 1990. At the time, several changes were taking effect. For example, neural networks were being evaluated for event-selection and analysis purposes, and theorists were calling on algebraic or symbolic artificial-intelligence tools to cope with a dramatic increase in the number of terms in perturbation-theory calculations.

Over the years, the AIHENP series was renamed ACAT (Advanced Computing and Analysis Techniques) and expanded to span a broader range of topics. However, following a new wave of adoption of machine learning in particle physics, the focus of the 18th edition of the workshop, ACAT 2017, was again machine learning – featuring its role in event reconstruction and classification, fast simulation of detector response, measurements of particle properties, and AlphaGo-inspired calculations of Feynman loop integrals, to name a few examples.

Learning challenge

For these advances to happen, machine-learning algorithms had to improve and a physics community dedicated to machine learning needed to be built. In 2014 a machine-learning challenge set up by the ATLAS experiment to identify the Higgs boson garnered close to 2000 participants on the machine-learning competition platform Kaggle. To the surprise of many, the challenge was won by a computer scientist armed with an ensemble of artificial neural networks. In 2015 the Inter-experimental LHC Machine Learning working group was born at CERN out of a desire of physicists from across the LHC to have a platform for machine-learning work and discussions. The group quickly grew to include all the LHC experiments and to involve others outside CERN, like the Belle II experiment in Japan and neutrino experiments worldwide. More dedicated training efforts in machine learning are now emerging, including the Yandex machine learning school for high-energy physics and the INSIGHTS and AMVA4NewPhysics Marie Skłodowska-Curie Innovative Training Networks (see Learning machine learning).

Fig. 3.

Event selection, reconstruction and classification are arguably the most important particle-physics tasks to which machine learning has been applied. As in the time of manual scanning, when the photographs of particle trajectories were analysed to select events of potential physics interest, modern trigger systems are used by many particle-physics experiments, including those at the LHC, to select events for further analysis (figure 3). The decision of whether to save or throw away an event has to be made within microseconds and requires specialised hardware located directly on the trigger systems’ logic boards. In 2010 the CMS experiment introduced machine-learning algorithms to its trigger system to better estimate the momentum of muons, which may help identify physics beyond the Standard Model. At around the same time, the LHCb experiment also began to use such algorithms in its trigger system for event selection.
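
Schematically – and without reflecting any experiment’s actual trigger code – an ML-based selection of this kind amounts to evaluating a small, pre-trained model on each event’s features and keeping the events above a chosen working point. In the NumPy sketch below the features, weights and threshold are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-event features (e.g. hit counts, track-stub angles, energy sums)
events = rng.normal(size=(1000, 4))

# Hypothetical weights of a tiny pre-trained network; in a real trigger these would be
# trained offline and then frozen in the online hardware or software
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def trigger_score(x):
    """Forward pass of a small network: one hidden ReLU layer, sigmoid output."""
    h = np.maximum(x @ W1 + b1, 0.0)
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))

scores = trigger_score(events).ravel()
keep = scores > 0.9          # hypothetical working point
print(f"accepted {keep.sum()} of {len(events)} events")
```

The practical challenge, returned to at the end of this article, is fitting such an evaluation into the tight latency and hardware budget of a real trigger system.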

Neutrino experiments such as NOvA and MicroBooNE at Fermilab in the US have also used computer-vision techniques to reconstruct and classify various types of neutrino events. In the NOvA experiment, using deep-learning techniques for such tasks is equivalent to collecting 30% more data or, alternatively, to building and operating more expensive detectors – potentially saving taxpayers worldwide significant amounts of money. Similar efficiency gains are observed by the LHC experiments.

Currently, about half of the Worldwide LHC Computing Grid’s computing budget is spent simulating the numerous possible outcomes of high-energy proton–proton collisions. To achieve a detailed understanding of the Standard Model and any physics beyond it, a tremendous number of such Monte Carlo events needs to be simulated. But despite the best efforts of the community worldwide to optimise these simulations, their speed is still a factor of 100 short of the needs of the High-Luminosity LHC, which is scheduled to start taking data around 2026. If a machine-learning model could directly learn the properties of the reconstructed particles and bypass the complicated simulation of the interactions between the particles and the material of the detectors, it could lead to simulations orders of magnitude faster than those currently available.

Competing networks

One idea for such a model relies on algorithms called generative adversarial networks (GANs). In these algorithms, two neural networks compete with each other for a particular goal, with one of them acting as an adversary that the other network is trying to fool. CERN openlab and the software-for-experiments group, along with others in the LHC community and industry partners, are starting to see the first results of using GANs for faster event and detector simulations.
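
In the original GAN formulation, the competition between the generator G and its adversary, the discriminator D, is expressed as a minimax game over the value function

\[
\min_G \max_D \; \mathbb{E}_{x \sim p_{\mathrm{data}}}\big[\log D(x)\big] \;+\; \mathbb{E}_{z \sim p_z}\big[\log\big(1 - D(G(z))\big)\big],
\]

where D is trained to distinguish real samples x (in a fast-simulation application, fully simulated or measured detector responses) from generated samples G(z), while G is trained to produce samples that D cannot tell apart from the real ones. Once training has converged, only the fast generator is needed to produce new simulated events.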

Particle physics has come a long way from the heyday of manual scanners in understanding elementary particles and their interactions. But there are gaps in our understanding of the universe that need to be filled – the nature of dark matter, dark energy, matter–antimatter asymmetry, neutrinos and colour confinement, to name a few. High-energy physicists hope to find answers to these questions using the LHC and its upcoming upgrades, as well as future lepton colliders and neutrino experiments. In this endeavour, machine learning will most likely play a significant part in making data processing, data analysis and simulation, and many other tasks, more efficient.

Driven by the promise of great returns, big companies such as Google, Apple, Microsoft, IBM, Intel, Nvidia and Facebook are investing hundreds of millions of dollars in deep learning technology including dedicated software and hardware. As these technologies find their way into particle physics, together with high-performance computing, they will boost the performance of current machine-learning algorithms. Another way to increase the performance is through collaborative machine learning, which involves several machine-learning units operating in parallel. Quantum algorithms running on quantum computers might also bring orders-of-magnitude improvement in algorithm acceleration, and there are probably more advances in store that are difficult to predict today. The availability of more powerful computer systems together with deep learning will likely allow particle physicists to think bigger and perhaps come up with new types of searches for new physics or with ideas to automatically extract and learn physics from the data.

That said, machine learning in particle physics still faces several challenges. Some of the most significant include understanding how to treat systematic uncertainties while employing machine-learning models and interpreting what the models learn. Another challenge is how to make complex deep learning algorithms work in the tight time window of modern trigger systems, to take advantage of the deluge of data that is currently thrown away. These challenges aside, the progress we are seeing today in machine learning and in its application to particle physics is probably just the beginning of the revolution to come.

Ishfaq Ahmad 1930–2018

Ishfaq Ahmad

The architect of Pakistan–CERN collaboration and former chairman of the Pakistan Atomic Energy Commission (PAEC), Ishfaq Ahmad, passed away on 15 January in Islamabad aged 87. He remained associated with PAEC for more than 40 years. After joining the organisation in 1960 on completion of his PhD at the University of Montreal in Canada, and postdoc positions at the University of Ottawa and the Sorbonne (Université de Paris), he played a crucial role in the development of civil and military nuclear technology in Pakistan.

Ishfaq’s doctoral work was based on the use of fine-grained nuclear emulsions, pioneered by his thesis supervisor, Pierre Demers. He also worked at the Niels Bohr Institute in Copenhagen between 1961 and 1962, where he had opportunities to interact with Bohr himself. It was during his stay there that his experimental work on nuclear reactions brought him to CERN, where nuclear emulsions were exposed for subsequent analyses at different laboratories. Years later, he recalled that his fascination with CERN and the work being done there never faded, resulting in the establishment of close ties between CERN and PAEC.

The first formal scientific and technical agreement between CERN and Pakistan, which formed the basis of future Pakistan–CERN cooperation, was signed on 11 January 1994 by Ishfaq on behalf of Pakistan and the then CERN Director-General Chris Llewellyn Smith, on behalf of CERN. Thereafter, a series of protocols, addenda and extensions of protocols, MoUs and Letters of Intent were signed by CERN DGs and PAEC chairmen, many concerning specific projects related to the construction of the Large Hadron Collider and components of the CMS and ATLAS detectors. The most conspicuous of these projects was the supply of eight steel supports for the CMS yoke, which were fabricated in PAEC laboratories in Islamabad. Concurrently, the participation of the National Centre for Physics (NCP) in the CMS collaboration resulted in scientific exchanges and data simulations. A node for grid computing was also established at NCP. Another institution in Pakistan that joined the CERN collaboration was the COMSATS Institute of Information Technology, which was granted membership of the ALICE collaboration. Eventually, Pakistan gained Associate Membership of CERN on 31 July 2015.

While overseeing the increasingly deeper ties between Pakistan and CERN, Ishfaq remained actively engaged with other international fora, such as the International Atomic Energy Agency (IAEA), the International Centre for Theoretical Physics (ICTP) and the International Institute for Applied Systems Analysis (IIASA). As a member of the board of governors of the IAEA, he was able to convince the then director-general of the IAEA, Hans Blix, to establish an advisory group that strengthened the agency’s role as a facilitator of civilian nuclear technology through improved technical-cooperation programmes, especially for developing countries.

His avid support for ICTP was also based on his strong belief in science as a vehicle of peace and development. It is not surprising that he was the one who wrote to the then UN secretary-general Kofi Annan to launch World Science Day for Peace and Development, which was duly approved and has been organised internationally by UNESCO since 2001. He regularly participated in Pugwash meetings following the 1974 Indian nuclear tests and strongly advocated for a nuclear-free South Asia. He was a strong supporter of nuclear power as a significant component of the energy mix in Pakistan, but kept an open mind about alternative energy sources. He lobbied for and successfully achieved Pakistan’s membership of IIASA, and remained on its board from 2007 to 2012. His broader vision of national economic interests led to the creation of institutions such as the Global Climate Change Impact Study Centre and the Centre for Earthquake Studies in Pakistan.

In 1998 the government of Pakistan bestowed upon Ishfaq the highest civil award, “Nishan-i-Imtiaz”, besides several other honours and awards in preceding years, and entrusted him with prestigious positions such as advisor to the prime minister of Pakistan and other senior roles. He held government posts until 2012, when he decided to restrict his activities to the work of NCP as the chairman of its board of governors. He was buried with state honours in Islamabad on 16 January.

What is Quantum Information?

By O Lombardi, S Fortin, F Holik and C López (eds.)
Cambridge University Press


This book debates the topic of quantum information from both a physical and a philosophical perspective, addressing the main questions about its nature. At present, different interpretations of the notion of information coexist and quantum mechanics brings in many puzzles; as a consequence, say the editors, there is not yet a generally agreed-upon answer to the question “what is quantum information?”.

The chapters are organised in three parts. The first is dedicated to presenting various interpretations of the concept of information and addressing the question of the existence of two qualitatively different kinds of information (classical and quantum). The links between this concept and other notions, such as knowledge, representation, interpretation and manipulation, are discussed as well.

The second part is devoted to the relationship between informational and quantum issues, and deals with the entanglement of quantum states and the notion of pragmatic information. Finally, the third part analyses how probability and correlation underlie the concept of information in different problem domains, as well as the issue of the ontological status of quantum information.

Providing an interdisciplinary examination of quantum information science, this book is aimed at philosophers of science, quantum physicists and information-technology experts who are interested in delving into the multiple conceptual and philosophical problems inherent to this recently born field of research.

The Black Book of Quantum Chromodynamics: A Primer for the LHC Era

By J Campbell, J Huston and F Krauss
Oxford University Press

Also available at the CERN bookshop

This book provides a comprehensive overview of the physics of the strong interaction, which is necessary to analyse and understand the results of current experiments at particle accelerators. In particular, the authors aim to show how to apply the framework of perturbation theory to the strong interaction, for the prediction as well as the correct interpretation of signals and backgrounds at the Large Hadron Collider (LHC).

The book consists of three parts. In the first, after a brief introduction to the LHC and the present hot topics in particle physics, a general picture of high-energy interactions involving hadrons in the initial state is developed. The relevant terminology and techniques are reviewed and worked out using standard examples.

The second part is dedicated to a more detailed discussion of various aspects of the perturbative treatment of the strong interaction in hadronic reactions. Finally, in the last section, experimental findings are confronted with theoretical predictions.

Primarily addressed at graduate students and young researchers, this book can also be a helpful reference for advanced scientists. In fact, it can provide the right level of knowledge for theorists to understand data more in depth and for experimentalists to be able to recognise the advantages and disadvantages of different theoretical descriptions.

The reader is assumed to be familiar with concepts of particle physics such as the calculation of Feynman diagrams at tree level and the evaluation of cross sections through phase space integration with analytical terms. However, a short review of these topics is given in the appendices.

In Praise of Simple Physics: The Science and Mathematics behind Everyday Questions

By Paul J Nahin
Princeton


In this book, popular-science writer Paul Nahin presents a collection of everyday situations in which the application of simple physical principles and a bit of mathematics can help us understand how things work. His aim is to bring these scientific disciplines closer to the layperson and, at the same time, reveal the wonder lying behind many aspects of reality that are often taken for granted.

The problems presented and explained are very diverse, ranging from how to extract more energy from renewable sources and how best to catch a baseball, to how to measure gravity in one’s garage and why the sky is dark at night. These topics are treated in an informal and entertaining way, but without waiving the maths. In fact, as the author himself highlights, he is interested in keeping the discussions simple, but not so simple that they are simply wrong. The whole point of the book is actually to show how physics and some calculus can explain many of the things that we commonly encounter.

Engaging and humorous, this text will appeal to non-experts with some background in maths and physics. It is suited to students at any level beyond the last years of high school, as well as to practicing scientists who might discover alternative, clever ways to solve (and explain) everyday physics problems.

Calorimetry: Energy Measurement in Particle Physics (2nd edition)

By Richard Wigmans
Oxford Science Publications

When the first edition of this book appeared in 2000, it established itself as “the bible of calorimetry” – not only because of the exhaustive approach to this subtle area of detection, but also because its author enjoyed worldwide recognition within the field. Wigmans gained it thanks to his ground-breaking work on the quantitative understanding of so-called compensating calorimeters (i.e. how to equalise the response of such detectors for electromagnetic and hadronic interactions) and to the leading role he played in designing and operating large detectors that are still considered to be state of the art.

As with the real Bible, which underwent several revisions, this book has been revised in depth and published in a second edition. The author has updated it to take into account the past 16 years of progress in the field and to improve its impact as a reference for both students and practitioners.

At first look, one immediately notices that considerable work has been put into improving the quality of the graphics and figures – introducing colours where appropriate – and this new edition is available as an e-book. But there is much more to this updated version.

Chapters two to six, in which the fundamentals of calorimetry are discussed, follow the same thorough structure as in the first edition, but they include new insights and use more recent data for illustration, mostly coming from the LHC experiments. Chapters one (Seventy Years of Calorimetry), seven (Performance of Calorimeter Systems) and 11 (Contributions of Calorimetry to the Advancement of Science) have also been brought up to date. Chapters eight, nine and (to a large extent) 10 are brand new and, in my opinion, represent the real added value of this new edition. In particular, chapter eight (New Calorimeter Techniques) discusses the two most relevant innovations introduced in the field during the past decade: dual-readout calorimetry (DRC) and particle-flow analysis (PFA).

The concept of DRC is elaborated upon to circumvent the limitations of compensating hadron calorimeters. Their performances depend crucially on the detection of the abundant contribution of the neutrons produced in the hadronic shower development, which in turn requires the use of heavy absorbers and a small sampling fraction – with the consequent loss of resolution for electromagnetic showers – as well as a relatively large signal-integration time and volume. In DRCs, signals coming from scintillation and Cherenkov processes provide complementary information about the shower development and allow the measurement of the electromagnetic fraction of hadron showers event by event, thus eliminating the effects of fluctuations on calorimeter performance. This concept is discussed in depth and predictions are compared with R&D results on prototypes, providing a convincing experimental demonstration of this novel technique. Although no full-scale calorimeter of this type has been built so far, the results obtained with real detectors, combined with Monte Carlo simulations, have outlined the breakthrough power of this idea, which has all the potential to rival the performances of the best compensating calorimeters, with much better energy resolution for electromagnetic showers. It is very stimulating food for thought for whoever is poised to design next-generation calorimeters.
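
In schematic form, following the standard dual-readout treatment, the scintillation and Cherenkov responses to a hadron shower of energy E and event-by-event electromagnetic fraction f_em can be written as

\[
S = E\big[f_{\mathrm{em}} + (h/e)_S\,(1 - f_{\mathrm{em}})\big], \qquad
C = E\big[f_{\mathrm{em}} + (h/e)_C\,(1 - f_{\mathrm{em}})\big],
\]

where (h/e)_S and (h/e)_C are the hadronic-to-electromagnetic response ratios of the two channels. The two measurements can then be combined to eliminate f_em:

\[
E = \frac{S - \chi C}{1 - \chi}, \qquad \chi = \frac{1 - (h/e)_S}{1 - (h/e)_C}.
\]

Because Cherenkov light is produced almost exclusively by the electromagnetic component of the shower, (h/e)_C is much smaller than (h/e)_S, which is what gives the method its leverage.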

The other important topic discussed in chapter eight, PFA, is a completely different method that is being used to improve calorimeter performances for jets. It is based on the combined use of a precision tracker and a high-granularity calorimeter, which measures the momentum of charged-jet particles and the energy of neutral particles, respectively. High granularity is mandatory to avoid double counting of the charged particles already measured by the tracker. The topic is treated in great detail, with abundant examples of the application of this technique in real experiments, and its pros and cons are discussed in view of future large-scale detector systems.

As an example, the idea that one can relax the requirements on the calorimeters, since they measure on average only one third of the particles in a jet while the remaining two thirds are very well measured by the tracker, is strongly questioned because the jet-energy resolution would be dominated by the fluctuations in the fraction of the total jet energy that is carried by the charged fragments.

Chapter nine (Analysis and Interpretation of Test Beam Data) is a brand-new addition that I find extremely illuminating and will be valuable for more than just newcomers to the field. By going through it, I have retraced the path of some of my mistakes when dealing with calorimeters, which are complex and subtly deceptive detectors, often exhibiting counterintuitive properties.

Finally, chapter 10 (Calorimeters for Measuring Natural Phenomena) is a tribute to the realisation and successful employment of calorimetric systems for the study of natural phenomena (neutrinos, cosmic rays) in Antarctica, the Mediterranean Sea and the Argentinian pampa, inside a variety of mountains and deep mines, and in space.

In summary, this second edition of Calorimetry fully meets the ambitious goals of its author: it is a well written and pleasant book, a reference manual for both beginners and experts, and a source of inspiration for future developments in the field.

The Cosmic Web

By John Richard Gott
Princeton University Press

The observation of the night sky is as old as humankind itself. Cosmology, however, has only achieved the status of “science” in the past century or so. In this book, Gott accompanies the reader through the birth of this new science and our growing understanding of the universe as a whole, starting from the observation by Hubble and others in the 1920s that distant galaxies are receding away from us. This was one of the most important discoveries in the history of science because it shifted the position of humans farther away from the centre of the cosmos and showed that the universe is not eternal, but had a beginning. The philosophical implications were hard to digest, even for Einstein, who invented the cosmological constant such that his equations of general relativity could have a static solution.

Following the first observations of distant galaxies, astronomers began to draw a comprehensive map of the observable universe. They played the same role as the explorers travelling around our planet, except that they could only sit where they were and receive light from distant objects, like faded photographs of a lost past.

After an introduction to the early days of cosmology, the book becomes more personal, and the reader feels drawn in to the excitement of actually doing research. Gott’s account of cosmology is given through the lens of his own research, making the book slightly biased towards the physics of the large-scale structure of the universe, but also more focused and definitely captivating for the reader.

The overarching theme of the book is the quest to understand the shape of the “cosmic web”, which is the distribution of galaxies and voids in a universe that is homogeneous only on very large scales. Tiny fluctuations in the matter density, ultimately quantum in origin, grow via gravity to weave the web.

In graduate school, under the supervision of Jim Gunn, Gott wrote his most cited paper, proposing a mathematical model of the gravitational collapse of small density fluctuations. Here, the readers are given a flavour of the way real research is carried out. The author describes in detail the physics involved in the topic, as well as how the article was born and completed and how it took on a life of its own to become a classic.

The author’s investigation of the large-scale structure intertwines with his passion for topology. He was fascinated by polyhedrons with an infinite number of faces, which were the subject of an award-winning project that he developed in high school and of his first scientific article published in a mathematics journal.

At the time, when astronomical surveys were covering only a small portion of the sky, it was unclear how the cosmic structures assembled. American cosmologists thought that galaxies gathered in isolated clusters floating in a low-density universe, like meatballs in a soup. On the other hand, Soviet scientists maintained that the universe was made up of a connected structure of walls and filaments, where voids appear like holes in a Swiss cheese.

Does the 3D map of the universe resemble a meatball stew or a Swiss cheese? Neither, Gott says. With his collaborators, he proposed that the cosmic web is topologically like a sponge, where voids and galaxy clusters form two interlocking regions, much like the infinite polyhedrons Gott studied in his youth.

The reader is given clear and mathematically precise descriptions of the methods used to demonstrate the idea, which was later confirmed by deeper and larger astronomical observations (in 3D), and by the analysis of the cosmic microwave background (in 2D). By that time, we had the theory of cosmological inflation to explain a few of the puzzles regarding the origin of the universe. Remarkably, inflation predicts tiny quantum fluctuations in the fabric of space–time, giving rise to a symmetry between higher and lower density perturbations, leading to the observed sponge-like topology.

Therefore, by the end of the 20th century, the pieces of our understanding of the universe were falling into place and, in 1998, the discovery that the universe is accelerating allowed us to start thinking about the ultimate fate of the cosmos. This is the subject of the last chapter, an interesting mix of sound predictions (for the next trillion years) and speculative ideas (in a future so far away that it is hard to think about), ending the book with a question – rather than an exclamation – mark.

This is not only a good popular science book that achieves a balance between mathematical precision and a layperson’s intuition. It is also a text about the day-to-day life of a researcher, describing details of how science is actually done, the excitement of discovery and the disappointment of following a wrong path. It is a book for readers curious about cosmology, for researchers in other fields, and for young scientists, who will be inspired by an elder one to pursue the fascinating exploration of nature.

FCC presents at tunnel congress

The World Tunnel Congress (WTC) brings together leading tunnel and underground-space experts from all around the world. This year, the congress was held in Dubai from 21 to 26 April and was attended by nearly 2000 professionals, with case studies illuminating the latest trends and innovations and discussions about the role of tunnels in supporting future sustainable cities. CERN’s Future Circular Collider (FCC) study – which is exploring the possibility of a 100 km-circumference collider (see CERN thinks bigger) – would require one of the world’s largest ever underground projects, generating great interest from WTC delegates.

The extensive underground tunnel works required for FCC were presented by John Osborne from CERN and by Werner Dallapiazza from ILF Consulting, who have been tasked with performing a cost and schedule study for the civil-engineering aspects of the FCC study.

The FCC could provide a facility able to host machines in several different collider modes, as well as four very large experimental caverns and service caverns at depths of up to approximately 300 m below the surface. The key challenges for civil engineering come from the difficult geology under Lake Geneva, the river Arve crossing and the area where the river Rhone exits the Geneva basin. In addition, solutions for the 9.2 million cubic metres of excavated rock and other environmental issues need to be studied further.
