Last year, the LHCb collaboration announced the first observation of the Ξcc++ baryon, a doubly charmed particle (CERN Courier July/August 2017 p8). It was identified via the decay Ξcc++→ Λc+ K–π+π+, with the Λc+ baryon subsequently decaying to pK–π+. Since then, LHCb has carried out a campaign of further studies to pinpoint the properties of this special particle, namely looking for additional Ξcc++ decays and, more importantly, measuring its lifetime.
LHCb has now reported a first measurement of the Ξcc++ lifetime, exploiting the same decay mode and using a data sample and an event selection similar to those used in the first observation. The experimental technique used is to measure the decay-time distribution relative to that of another decay with a similar topology, Λb0→ Λc+π–π+π–. As the lifetime of the Λb0 is already known with high precision from previous measurements, once the ratio of efficiencies for reconstructing the Ξcc++ and Λb0 decays is determined, it is possible to derive the lifetime of the Ξcc++ baryon from its decay-time distribution (see figure).
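Schematically, once the relative reconstruction efficiency is accounted for, the ratio of the two decay-time distributions is a single exponential whose slope is the difference of the inverse lifetimes:

$$\frac{N_{\Xi_{cc}^{++}}(t)}{N_{\Lambda_b^0}(t)} \;\propto\; \frac{\varepsilon_{\Xi_{cc}^{++}}(t)}{\varepsilon_{\Lambda_b^0}(t)}\,\exp\!\left[-t\left(\frac{1}{\tau_{\Xi_{cc}^{++}}}-\frac{1}{\tau_{\Lambda_b^0}}\right)\right],$$

so a fit to this ratio, combined with the precisely known τ(Λb0), yields τ(Ξcc++).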
The measured lifetime is 256 +24 −22 (stat) ± 14 (syst) fs. Relatively long lifetimes like this are a distinctive feature of weak interactions. LHCb has also observed a new Ξcc++ decay, Ξcc++→ Ξc+π+, with a statistical significance of about six standard deviations, confirming the first observation of Ξcc++ in an independent analysis. The baryon’s mass is measured to be 3620.6 ± 1.5 (stat) ± 0.5 (syst) MeV/c2, consistent with the previous result.
Turning to a separate analysis, a puzzling result has emerged at LHCb while measuring the lifetime of another charmed baryon: the Ωc0. The sample of LHCb data used for this measurement comprises about 1000 Ωb−→ Ωc0μ−νμX signal decays, where the Ωc0 baryon is detected via the decay Ωc0→ pK−K−π+ and X represents possible additional undetected particles in the decay. The Ωc0 lifetime is determined from the observed decay-time distribution to be 268 ± 24 (stat) ± 10 (syst) fs (see figure).
Quite surprisingly, this value is nearly four times larger than, and inconsistent with, the current world average of 69 ± 12 fs. This average is based on three results from fixed-target experiments, namely E687, FOCUS and WA89, each of which observed only a few dozen events over a relatively large background. The new measurement from LHCb redefines the lifetime hierarchy of charmed baryons, placing the Ωc0 baryon as having the second largest lifetime after the Ξc+ baryon, i.e. τ(Ξc+) > τ(Ωc0) > τ(Λc+) > τ(Ξc0). This result may lead to reconsideration of the relative importance of the roles of spectator quarks and of non-perturbative effects in the decay dynamics of hadrons containing heavy quarks.
According to the Standard Model (SM), fermions acquire their mass through coupling to the Higgs field. New results released by the ATLAS collaboration firmly establish and measure these so-called Yukawa couplings to third-generation fermions. The Higgs-boson coupling to top quarks has been observed in associated production with a top quark pair (ttH production), and the Higgs-boson coupling to tau leptons has been observed in Higgs-boson decays to two tau leptons (H → ττ). Data from LHC proton–proton collisions at a centre-of-mass energy of 13 TeV recorded during 2015, 2016 and 2017 were analysed for these results.
The measurement of H → ττ, which is based on 2015 and 2016 data, was challenging because the tau lepton is short-lived and can only be observed through its decay products, of which at least one is always an invisible neutrino. The unknown momentum taken away by the neutrino makes the tau reconstruction incomplete and thus susceptible to backgrounds. Events with tau leptons are difficult to select online when the visible tau decay products are hadrons. Moreover, the Z boson, which also decays to a tau-lepton pair and is relatively close in mass to the Higgs boson but much more abundant, represents a large source of background. Good reconstruction of the di-tau invariant mass is therefore essential, using information from all detector systems to account for the missing energy.
The measured H → ττ signal has an observed (expected) significance of 6.4 (5.4) standard deviations when combined with previous measurements using 7 and 8 TeV data. In 13 TeV data, the total cross-section times branching fraction was measured to be 3.71 ± 0.59 (stat) +0.87 −0.74 (syst) pb. In addition, separate measurements of the gluon-fusion and weak-boson-fusion Higgs-boson production cross-sections were performed (figure, top). The measurements agree with SM predictions.
The production of ttH was measured from a combination of channels involving Higgs-boson decays to a pair of W or Z bosons (WW* or ZZ*), tau leptons, b-quarks or photons. The analyses exploiting the H → γγ and H → ZZ* → 4l decays used the full 80 fb–1 proton–proton dataset collected by ATLAS between 2015 and 2017, and deployed improved reconstruction algorithms and new analysis procedures based on machine learning. The H → γγ analysis alone observed a ttH signal with a significance of 4.1 standard deviations for 3.7 expected in the SM. The H → 4l analysis expected less than one event from ttH production in the 80 fb–1 dataset and observed no event.
These results, combined with those from the other ttH channels based on 2015 and 2016 data, led to an observed (expected) significance of 5.8 (4.9) standard deviations for ttH production at 13 TeV, with a ratio of measured to predicted cross section of 1.32. Further combination with the results from Run 1, based on data taken at 7 and 8 TeV centre-of-mass energies, yielded an observed significance of 6.3 standard deviations for 5.1 expected. The measured total cross-section for ttH production at 13 TeV is 670 ± 90 (stat) ± 110 (syst) fb, in agreement with the SM prediction of 507 fb. The corresponding result at 8 TeV is 220 ± 100 (stat) ± 70 (syst) fb (figure, bottom).
With further data being collected at the LHC, more precise measurements of cross-sections and differential distributions will allow the study of the structure of Yukawa couplings in great detail and thus provide more stringent tests of the SM and increased sensitivity to physics beyond it.
One of the key goals in exploring the properties of QCD matter is to determine the minimum value of the shear-viscosity-to-entropy-density ratio (η/s) for an ideal fluid. In heavy-ion collisions at the LHC, a quark–gluon plasma (QGP) is created: a state of hot and dense matter in which quarks and gluons become deconfined. The plasma is formed at early times in the collisions and subsequently cools to a temperature at which the quarks and gluons cluster together into hadrons. The value of η/s is of particular interest because weakly coupled QCD and the anti-de Sitter/conformal-field-theory (AdS/CFT) correspondence predict different values. AdS/CFT is a technique from string theory that can be used to understand strongly coupled systems. The value of η/s implied by AdS/CFT is approximately 0.05–0.08, with calculations based on perturbative QCD techniques giving larger values.
The ALICE collaboration has recently released results of anisotropic-flow measurements from xenon–xenon (Xe–Xe) collisions at a per-nucleon centre-of-mass energy of 5.44 TeV, which offer additional constraints on the viscosity of the QGP. The anisotropic flow observed in a heavy-ion collision results from the spatial anisotropy of the initial collision zone, which is converted to momentum anisotropies via pressure gradients during the system’s evolution. The momentum anisotropies are quantified by the harmonic coefficients vn of a Fourier expansion of the azimuthal distribution of particles: v2 is generated by initial states with an elliptic shape, v3 by a triangular shape, and so on. The magnitude of vn depends not only on η/s but also on the magnitude of the azimuthal asymmetries in the initial density distribution of the collisions. Comparing the new results from Xe–Xe collisions with those from lead–lead (Pb–Pb) collisions is expected to provide stronger constraints on the initial matter distribution, which will, in turn, allow a more precise determination of η/s.
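For reference, the decomposition that defines these coefficients is

$$\frac{dN}{d\varphi} \;\propto\; 1 + 2\sum_{n=1}^{\infty} v_n \cos\left[n\left(\varphi-\Psi_n\right)\right],$$

where φ is the azimuthal angle of an emitted particle and Ψn is the symmetry-plane angle of the n-th harmonic.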
The figure shows measurements of vn versus centrality for both Xe–Xe and Pb–Pb collisions. Centrality is a measure of the degree of overlap in heavy-ion collisions: 0% corresponds to head-on collisions, while at 100% the heavy ions do not overlap enough to interact. For mid-central collisions (20–70%), various initial-state models predict the second harmonic coefficients of the initial matter distributions to be very similar for Xe–Xe and Pb–Pb. At the same centrality, however, the Xe–Xe system is smaller than the Pb–Pb one, and a finite η/s suppresses v2 by an amount that scales as 1/R, where R is the transverse size of the system. Ratios of the Xe–Xe and Pb–Pb v2 coefficients in the mid-centrality range could therefore be directly sensitive to η/s, with larger values of η/s leading to a greater suppression of this ratio. When the data are compared to two different hydrodynamic models, which use η/s values close to those from AdS/CFT calculations, good agreement is found.
This shows that η/s is small, which implies a short mean free path for the quarks and gluons in the QGP – in other words, strong interactions. In central collisions, v2 in Xe–Xe collisions is larger than in Pb–Pb collisions. This is due to the 129Xe nucleus not being exactly spherical and to larger fluctuations of the initial density distributions in the smaller Xe nucleus. The latter also gives rise to larger values of v3 in the 0–50% centrality range.
For decades, theoretical models of galaxy evolution have predicted that the supermassive black hole lying at the heart of the Milky Way is surrounded by thousands of smaller black holes left behind by dying stars. Testing such theories is important to understand our own galaxy and, more generally, to understand how galaxies evolve and how black holes are produced. Now, observations by NASA’s Chandra X-ray Observatory have revealed a dozen stellar-mass black holes at the centre of the galaxy, providing the first observational evidence for such a black-hole cluster.
Black holes emit virtually no radiation, so it’s not possible to detect them when they are isolated and located at large distances from Earth. But many black holes have close stellar companions from which they accrete matter and, as this matter is sucked into the black hole, it heats up and emits X-rays that can be detected on Earth. If only a few of the thousands of stellar-mass black holes that are predicted to exist in the galactic centre had a companion star, at least this binary fraction of the total black-hole population would be detectable by X-ray telescopes.
Using Chandra data, a group led by Chuck Hailey of Columbia University in New York searched for such black-hole binary systems in a region extending several light years from the galactic centre. This type of search is complicated by two factors: the high density of other X-ray-emitting objects in the same region, such as binary systems containing neutron stars or white dwarfs instead of black holes; and the relatively low intensity of the X-ray binary sources in the region. In their study, however, Hailey and colleagues were able to distinguish between the different types of weak X-ray binary system in the region by studying their spectra.
The researchers examined the Chandra spectra of 415 weak X-ray point sources, containing as few as 100 counts, and looked for the expected spectral features of black-hole binaries. They found 12 sources that have the expected spectral characteristics of black-hole binaries, all within a radius of three light years from the supermassive black hole (see figure). Other X-ray sources whose spectra match well with those of white-dwarf binary systems were found to be distributed at larger distances from the galactic centre.
The researchers went on to estimate the total number of black-hole binary systems in the observed region, assuming that the 12 sources are the brightest in their family and using the known fluxes of brighter and well-studied black-hole binary systems. This yielded about 300–1000 black-hole binaries, which is a lower limit on the total number of black holes because it only includes those with companion stars. According to theoretical follow-up work by Aleksey Generozov of Columbia and colleagues, the total number of black holes should be between 10,000 and 40,000.
The results, published in Nature, agree with the theoretical predictions and therefore confirm the existing models of galaxy evolution. What’s more, the findings allow astronomers to predict the rate of black-hole mergers – and thus of gravitational-wave events – from this region.
Fundamental insights into the constituents of matter have been gained by observing what happens when beams of high-energy particles collide. Electron–positron, proton–proton, proton–antiproton and electron–proton colliders have all contributed to the development of today’s understanding, embodied in the Standard Model of particle physics (SM). The Large Hadron Collider (LHC) brings 6.5 TeV proton beams into collision, allowing the Higgs boson and other SM particles to be studied and searches for new physics to be carried out. To reach physics beyond the LHC will require hadronic colliders at higher energies and/or lepton colliders that can deliver substantially increased precision.
A variety of options are being explored to achieve these goals. For example, the Future Circular Collider study at CERN is investigating a 100 km-circumference proton–proton collider with beam energies of around 50 TeV, the tunnel for which could also host an electron–positron collider (CERN Courier June 2018 p15). Electron–positron annihilation has the advantage that all of the beam energy is available in the collision, rather than being shared between the constituent quarks and gluons as it is in hadronic collisions. But to reach very high energies requires either a state-of-the-art linear accelerator, such as the proposed Compact Linear Collider or the International Linear Collider, or a circular accelerator with an extremely large bending radius.
Muons to the fore
A colliding-beam facility based on muons has a number of advantages. First, since the muon is a lepton, all of the beam energy is available in the collision. Second, since the muon is roughly 200 times heavier than the electron and thus emits around 10⁹ times less synchrotron radiation than an electron beam of the same energy, it is possible to produce multi-TeV collisions in an LHC-sized circular collider. The large muon mass also enhances the direct “s-channel” Higgs-production rate by a factor of around 40,000 compared to that in electron–positron colliders, making it possible to scan the centre-of-mass energy to measure the Higgs-boson line shape directly and to search for closely spaced states.
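The radiation figure follows from the scaling of the synchrotron power emitted by a particle of energy E and mass m on an orbit of bending radius ρ:

$$P \;\propto\; \frac{E^{4}}{m^{4}\rho^{2}}\,, \qquad \left(\frac{m_\mu}{m_e}\right)^{4} \approx 207^{4} \approx 2\times10^{9},$$

so, at fixed beam energy and bending radius, a muon beam radiates roughly a billion times less power than an electron beam.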
Stored muon beams could also serve the long-term needs of neutrino physicists (see box 1). In a neutrino factory, beams of electron and muon neutrinos are produced from the decay of muons circulating in a storage ring. It is straightforward to tune the neutrino-beam energy because the neutrinos carry away a substantial fraction of the muon’s energy. This, combined with the excellent knowledge of the beam composition and energy spectrum resulting from the very well-known characteristics of muon decays, makes the neutrino factory the ideal place to make precision measurements of neutrino properties and to look for oscillation phenomena that are outside the standard, three-neutrino-mixing paradigm.
Given the many benefits of a muon collider or neutrino factory, it is reasonable to ask why one has yet to be built. The answer is that muons are unstable, decaying with a mean lifetime at rest of 2.2 microseconds. This presents two main challenges: first, a high-intensity primary beam must be used to create the muons that will form the beam; and, second, once captured, the muon beam must be accelerated rapidly to high energy so that the effective lifetime of the muon can be extended by the relativistic effect of time dilation.
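The gain from time dilation is easily quantified: the laboratory-frame lifetime is the rest-frame lifetime multiplied by the Lorentz factor γ = Eμ/mμc². For a 100 GeV muon beam, for example,

$$\tau_{\text{lab}} = \gamma\,\tau_\mu \approx \frac{100\ \text{GeV}}{0.106\ \text{GeV}} \times 2.2\ \mu\text{s} \approx 2\ \text{ms},$$

long enough to store and collide the beam – but only if the acceleration itself is fast.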
One way to produce beams for a muon collider or neutrino factory is to harness the muons produced from the decay of pions when a high-power (few-MW), multi-GeV proton beam strikes a target such as carbon or mercury. For this approach, new proton accelerators with the required performance are being developed at CERN, Fermilab, J-PARC and at the European Spallation Source. The principle of the mercury target was proved by the MERIT experiment that operated on the Proton Synchrotron at CERN. However, at the point of production, the tertiary muon beam emerging from such schemes occupies a large volume in phase space. To maximise the muon yield, the beam has to be “cooled” – i.e. its phase-space volume reduced – in a short period of time before it is accelerated.
The proposed solution is called ionisation cooling, which involves passing the beam through a material in which it loses energy via ionisation and then re-accelerating it in the longitudinal direction to replace the lost energy. Proving the principle of this technique is the goal of the Muon Ionization Cooling Experiment (MICE) collaboration, which, following a long period of development, has now reported its first observation of ionisation cooling.
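In the standard approximate treatment, the normalised transverse emittance εN of the beam evolves along the channel as a balance between ionisation cooling and multiple-scattering heating:

$$\frac{d\varepsilon_N}{ds} \;\simeq\; -\,\frac{\varepsilon_N}{\beta^{2}E_\mu}\left\langle\frac{dE_\mu}{ds}\right\rangle \;+\; \frac{\beta_\perp\,(13.6\ \text{MeV})^{2}}{2\,\beta^{3}E_\mu m_\mu c^{2}\,X_0},$$

where β is the muon velocity, β⊥ the transverse betatron function at the absorber and X0 the absorber’s radiation length. The first term cools, the second heats – which is why a good absorber combines a large energy loss per unit length with a long radiation length.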
An alternative path to a muon collider called the Low Emittance Muon Accelerator (LEMMA), recently proposed by accelerator physicists at INFN in Italy and the ESRF in France, provides a naturally cooled muon beam with a long lifetime in the laboratory by capturing muon–antimuon pairs created in electron–positron annihilation (see box 2).
Cool beginnings
The benefits of a collider based on stored muon beams were first recognised by Budker and Tikhonin at the end of the 1960s. In 1974, when CERN’s Super Proton Synchrotron (SPS) was being brought into operation, Koshkarev and Globenko showed how muons confined within a racetrack-shaped storage ring could be used to provide intense neutrino beams. The following year, the SPS proton beam was identified as a potential muon source and the basic parameters of the muon beam, storage ring and neutrino beam were defined. It was quickly recognised that the performance of this facility—the first neutrino factory to be proposed – could be enhanced if the muon beam was cooled. In 1978, Budker and Skrinsky identified ionisation cooling as a technique that could produce sufficient cooling in a timeframe short compared to the muon lifetime and, the following year, Neuffer proposed a muon collider that exploited ionisation cooling to increase the luminosity.
The study of intense, low-emittance muon beams as the basis of a muon collider and/or neutrino factory was re-initiated in the 1990s, first in the US and then in Europe and Japan. Initial studies of muon production and capture, phase-space manipulation, cooling and acceleration were carried out and neutrino- and energy-frontier physics opportunities evaluated. The reduction of the tertiary muon-beam phase space was recognised as a key technological challenge and at the 2001 NuFact workshop the international MICE collaboration was created, comprising 136 physicists and engineers from 40 institutes in Asia, Europe and the US.
The MICE cooling cell, in common with the cooling channels studied since the seminal work of the 1990s, is designed to operate at a beam momentum of around 200 MeV/c. This choice is a compromise between the size of the ionisation-cooling effect and its dependence on the muon energy, the loss rate of muon-beam intensity through decay, and the ease of acceleration following the cooling channel. The ideal absorber has, at the same time, a large ionisation energy loss per unit length (to maximise ionisation cooling) and a large radiation length (to minimise heating through multiple Coulomb scattering). Liquid hydrogen meets these requirements and is an excellent absorber material; a close runner-up, with the practical advantage of being solid, is lithium hydride. MICE was designed to study the properties of both. The critical challenges faced by the collaboration therefore included: the integration of high-field superconducting magnets operating in a magnetically coupled lattice; high-gradient accelerating cavities capable of operation in a strong magnetic field; and the safe implementation of liquid-hydrogen absorber modules – all solved through more than a decade of R&D (see box 3).
In 2003 the MICE collaboration submitted a proposal to mount the experiment (figure 1) on a new beamline at the ISIS proton and muon source at the Science and Technology Facilities Council’s (STFC) Rutherford Appleton Laboratory in the UK. Construction began in 2005 and first beam was delivered on 29 March 2008. The detailed design of the spectrometer solenoids was also carried out at this time and the procurement process was started. During the period from 2008 to 2012, the collaboration carried out detailed studies of the properties of the beam delivered to the experiment and, in parallel, designed and fabricated the focus-coil magnets and a first coupling coil.
Delays were incurred in addressing issues that arose in the manufacture of the spectrometer solenoids. This, combined with the challenges of integrating the four-cavity linac module with the coupling coil, led, in November 2014, to a reconfiguration of the MICE cooling cell. The simplified experiment required two single-cavity modules, with beam transport provided by the focus-coil modules. An intense period of construction followed, culminating in the installation of the spectrometer solenoids and the focus-coil module in the summer of 2015. Magnet commissioning progressed well until, a couple of months later, a coil in the downstream solenoid failed during a training quench. The modular design of the apparatus meant the collaboration was able to devise new settings rapidly, but it proved not to be possible to restore the downstream spectrometer magnet to full functionality. This, combined with the additional delays incurred in the recovery of the magnet, eventually led to the cancellation of the installation of the RF cavities in favour of extended operation of a configuration of the experiment without the cavities (see schematic in box 3).
It is interesting to reflect, as was done in a recent lessons-learnt exercise convened by the STFC, on whether a robust evaluation of alternative options for the cooling-demonstration lattice at the outset of MICE might have identified the simplified lattice as a less risky option and allowed some of the delays in implementing the experiment to be avoided.
The bulk of the data-taking for MICE was carried out between November 2015 and December 2017, using lithium-hydride and liquid-hydrogen absorbers. The campaign was successful: more than 5 × 10⁸ triggers were collected over a range of initial beam momenta and emittances, for a variety of configurations of the magnetic channel and for each absorber material. The key parameter to measure when demonstrating ionisation cooling is the “amplitude” of each muon – its distance from the beam centre in transverse phase space, reconstructed from its position and momentum. The muon’s amplitude is measured before it enters the absorber and again as it leaves, and the distributions of amplitudes are then examined for evidence of cooling: a net migration of muons from high to low amplitudes. As can be seen (figure 2), the particle density in the core of the MICE beam is increased as a result of the beam’s passage through the absorber, leading to a lower transverse emittance and thereby providing a higher neutrino flux or a larger luminosity.
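As a rough illustration (not the MICE analysis code), a single-particle amplitude of this kind can be computed as an emittance-scaled Mahalanobis distance in transverse phase space; the toy beams below stand in for real upstream and downstream samples:

```python
import numpy as np

def amplitudes(phase_space):
    """Amplitude of each muon: emittance-scaled Mahalanobis distance
    from the beam centroid in (x, px, y, py) phase space."""
    centred = phase_space - phase_space.mean(axis=0)
    cov = np.cov(centred, rowvar=False)
    emittance = np.linalg.det(cov) ** 0.25   # 4D emittance, up to constants
    inv_cov = np.linalg.inv(cov)
    return emittance * np.einsum("ni,ij,nj->n", centred, inv_cov, centred)

# Toy demonstration: cooling appears as a migration towards low amplitudes
rng = np.random.default_rng(1)
upstream = rng.normal(size=(100000, 4))           # hotter beam
downstream = 0.9 * rng.normal(size=(100000, 4))   # cooled beam
print(amplitudes(upstream).mean(), amplitudes(downstream).mean())
```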
The MICE observation of the ionisation-cooling of muon beams is an important breakthrough, achieved through the creativity and tenacity of the collaboration and the continuous support of the funding agencies and host laboratory. The results match expectations, and the next step would be to design an experiment to demonstrate cooling in all six phase-space dimensions.
Completing the MICE programme
Having completed its experimental programme, MICE will now focus on the detailed analysis of the factors that determine ionisation-cooling performance over a range of momentum, initial emittance and lattice configurations for both liquid-hydrogen and lithium-hydride absorbers. MICE was operated such that data were recorded one particle at a time. This single-particle technique will allow the collaboration to study the impact of transverse-emittance growth in rapidly varying magnetic fields and to devise mechanisms to mitigate such effects. Furthermore, MICE has taken data to explore a scheme in which a wedge-shaped absorber is used to decrease the beam’s longitudinal emittance while allowing a controlled growth in its transverse emittance. This is required for a proton-based muon collider to reach the highest luminosities.
With the MICE observation of ionisation cooling, the last of the proof-of-principle demonstrations of the novel technologies that underpin a proton-based neutrino factory or muon collider has now been delivered. The drive to produce lepton–antilepton collisions at centre-of-mass energies in the multi-TeV range can now include consideration of the muon collider, for which two routes are offered: one, for which the R&D is well advanced, that exploits muons produced using a high-power proton beam and which requires ionisation cooling; and one that exploits positron annihilation with electrons at rest to create a high-energy cold muon source. The high muon flux that can be achieved using the proton-based technique has the potential to serve a neutrino-physics programme of unprecedented sensitivity, and the MICE collaboration’s timely results will inform the coming update of the European Strategy for Particle Physics.
Machine learning, whereby the ability of a computer to perform an intelligent task progressively improves, has penetrated many scientific domains. It allows researchers to tackle problems from a completely new perspective, enabling improvements to things previously thought solved for good. The downside of machine learning is that the field itself is developing so quickly, with new techniques popping up at an incredible rate, that it is hard to keep up. What is needed is some sort of high-level trigger to discriminate between good and bad, and to guide a growing community of users in a systematic way.
Machine-learning techniques are already in wide use in particle physics, and they will only become more prevalent during the coming years of the high-luminosity LHC and future colliders. Online data processing, offline data analysis, fast Monte Carlo generation techniques and detector-upgrade optimisation are just a few examples of the areas that could profit significantly from smarter algorithms (see The rise of deep learning).
The most remarkable growth trend in machine learning today, and one that has also been heavily hyped, concerns so-called deep learning. Although there is no strict boundary, a neural network with fewer than four layers is considered “shallow”, while one with more than 10 layers and many thousands of connections is considered “deep”. Using deep-learning algorithms, together with powerful computing resources and extremely large datasets, researchers have managed to break important barriers in tasks such as text translation, voice recognition and image segmentation, and even to master the game of Go. Many of the educational materials one can find on the Internet are thus focused on typical tasks such as image recognition, annotation, segmentation, text processing and pattern generation.
Since most of these materials are couched in computer-science language, there is an obvious language barrier for domain scientists, such as particle physicists, who have to learn a new technique and apply it to their own research. Another complication is that a variety of machine-learning methods are capable of solving any particular problem, and there is a plenitude of tools (i.e. different languages, packages and platforms) – almost all of them available online – with which to implement those methods.
Targeting particle physics
As machine learning spreads into new domains such as astrophysics and biology, schools that focus on problems in specific areas are becoming more popular. Historically, there have been several summer schools for particle physicists focused on data analysis, computing and statistical learning – in particular the CERN School of Computing, the INFN School of Statistics and the CMS Data Analysis School. But, until 2014, none focused specifically on machine learning. In that year, a series with the straightforward title Machine-Learning School for High-Energy Physics (MLHEP) was launched.
MLHEP grew out of the well-established Yandex School of Data Analysis (YSDA), a non-commercial educational organisation funded by the Russia-based internet firm Yandex. Over the past decade, YSDA has grown to receive several thousand applications per year, out of which around 200 people pass the entrance exams and around 50 graduate in conjunction with leading Russian universities – almost all of them finding data-science positions in the private sector.
In 2015, YSDA joined the LHCb collaboration. The goal was to help optimise LHCb’s high-level trigger system to improve its efficiency for selecting B-decay events, and the result of the LHCb–YSDA collaboration was an efficiency gain of up to 60% compared to that obtained during LHC Run 1. Another early joint effort between YSDA, CERN and MIT within LHCb was the design of decision-tree algorithms capable of decorrelating their output from a given variable, such as invariant mass.
The first MLHEP schools in 2015 and 2016 were satellite events at the Large Hadron Collider Physics (LHCP) conference, held in St. Petersburg and Lund, respectively. Another key contributor to the school was the faculty of computer science at Russia’s Higher School of Economics (HSE), which was founded in 2014 by Yandex. MLHEP 2017 was organised by Imperial College London in the UK, and the 2018 school takes place in Oxford at the beginning of August.
The topics covered during the schools usually start from the basic aspects of machine learning, such as loss functions, optimisation methods and validation of predictive-model quality, and stretch towards advanced techniques like generative adversarial networks and Bayesian optimisation. The curriculum is not static: each year the focus changes to address the most interesting and promising trends in deep learning while providing an overview of the various techniques available on the market. At the 2018 school, speakers were invited from both academia and industry, including Oracle, Nvidia, Yandex and DeepMind.
Breaking the language barrier
Some people compare deep learning not with a tool or platform, but with a language that allows a researcher to express computational “sentences” addressing a particular problem.
To reinforce the language analogy, recall that there is no solid theory of deep learning yet; in a sense it is just a collection of best practices and approaches that have proven to work in several important cases. A lot of the time during MLHEP classes is therefore devoted to practical exercises. Students are also encouraged to enter a data-science competition that is related to particle physics – e.g. tracking for the Coherent Muon to Electron Transition (COMET) experiment and event selection for the Higgs-boson discovery by the ATLAS and CMS experiments. The competition is published on the machine-learning competition platform kaggle.com at the start of the school, and is open to anyone who wants to get more machine-learning practice.
For summer 2017, the competition was organised together with the OPERA and SHiP collaborations. The goal was to analyse volumes of nuclear emulsions collected by OPERA that contain lots of cosmic-background tracks as well as tracks from electromagnetic showers. These shower-like structures are of interest for OPERA for the analysis of tau-neutrino interactions, so special algorithms have to be developed. Such algorithms are also very relevant to the SHiP experiment, which aims to use emulsion-based detectors for finding hidden-sector particles at CERN. According to some theoretical models, such showers might be closely related to hidden-sector particle interaction with regular matter (e.g. elastic scattering of very weakly-interacting particles off electrons or nuclei), so a separate task would be to discriminate these showers from neutrino interactions. The performance of the algorithms designed by participants was amazing. The winner of the challenge presented his solution at the SHiP collaboration meeting in November 2017 and was invited by OPERA to continue the collaboration.
A major part of the MLHEP curriculum is given by YSDA/HSE lecturers, and guest speakers help to broaden the view on the machine-learning challenges and methods. The school is non-commercial, and its success depends on external contributions from the HSE, YSDA, local organisers and commercial sponsors. For the past two years we have been supported by the Marie Skłodowska-Curie training network AMVA4NewPhysics, which has also sent several PhD students to the school.
The format of the summer school is very productive, allowing students to dive into the topics without distraction. The school materials also remain available on GitHub, allowing students to access them whenever they want. As time goes by, basic machine-learning courses are becoming more readily available online, giving us a chance to introduce more advanced topics every year and to keep up with the rapid developments in this field.
Lift up your eyes as you walk through the principal entrance to CERN’s main building and you will see a tangled iron coil suspended above the central staircase. Rather like electron orbitals marking out the shape of an atom, the structure’s overlapping lines form hints of something more tangible that changes as you move – a human body. Here, in his sculpture Feeling Material XXXIV, the artist Antony Gormley has spun a chaotic spiralling line that envelopes the body’s space.
Artists, like scientists, have always been keen observers of the world about them and Gormley is no exception. It was his interest in how spaces are delineated that led to his first contacts with CERN physicist Michael Doser in 2006, and ultimately to his donation of Feeling Material XXXIV to the Organization in 2008. Over the years many artists have visited CERN, intrigued by its research; the American performance artist James Lee Byars even featured on the cover of CERN Courier in September 1972. And in the 1990s, British artist and film-maker Ken McMullen visited the laboratory as a result of his friendship with the daughter of the late Maurice Jacob, a well-known CERN theorist. The visit sowed the seeds for a major project, Signatures of the Invisible, based on a collaboration between the London Institute and CERN. This project brought 11 established artists from various countries, including McMullen, to work with scientists and technicians at CERN during 1999–2000, resulting in works of art that were exhibited worldwide (CERN Courier May 2001 p23).
The experience proved rewarding for both sides. Writing in the Courier (July 2001, p30), Ian Sexton, the CERN technician who worked with laser cutting and other techniques on McMullen’s piece Crumpled Theory, described his pleasure at seeing the artist’s first sight of the completed work “simply presented on the workshop floor, with sunlight streaming through the blinds. Ken was … delighted. His enthusiasm was a most unusual experience for me. Normally on completion of a job at CERN a perfunctory ‘thank you’ is the only response.”
The project had involved a significant commitment by CERN. The Press Office managed the project on the Organization’s behalf, a number of scientists became deeply involved, and the artists were offered the use of the laboratory’s workshop – all of which implied a great deal of disruption and additional work for those concerned. So perhaps there were reservations in the minds of some at CERN when a new “science and art” initiative began to take shape. In 2009, creative producer Ariane Koek decided to use the award of a Clore Fellowship to come to CERN and – with the encouragement of the Director-General at the time, Rolf Heuer – work out how to establish and fund an artists’ residency scheme. Heuer was suitably impressed by her proposals and the following year, after a selection process, Koek was taken on to set up Arts at CERN.
Cultural policy
These efforts bore fruit in August 2011 with the launch of CERN’s first-ever cultural policy. Its central element is a selection process for arts engagement with CERN, with a cultural board for the arts to advise on projects and collaborations involving CERN. The initiative brought order and direction to what had been an ad-hoc approach to CERN’s involvement with the arts.
The first outwardly visible outcome of Arts at CERN was a competition, Collide at CERN, announced in 2011 and open to artists from anywhere in the world. A key element was to pair winning artists with scientists at CERN during a residency lasting up to three months. One strand in this award – the Prix Electronica Collide @ CERN prize for Digital Arts – was set up in collaboration with Austria-based digital arts organisation Ars Electronica, and the residency consisted of two months at CERN and one month at Ars Electronica’s research and development lab. The second strand – Collide @ CERN Geneva – marked a partnership with the City and Canton of Geneva, and in the first year was for dance and performance.
The partnership with Ars Electronica was a coup for CERN and the new cultural policy. Over 40 years, Ars Electronica had built up a formidable reputation in bringing artists, scientists and engineers together. Widely publicised by CERN, the partnership was well received in the arts world, but it was perhaps not so well understood at CERN. Was this something that CERN should be doing and who was paying for it all?
Heuer, who was instrumental in initiating Arts at CERN, was always clear on the first point. “The arts and science are inextricably linked; both are ways of exploring our existence, what it is to be human and what is our place in the universe,” he said on launching the cultural policy. Commenting later after three years of successful partnership with Ars Electronica, he wrote: “The level of heated debate about the so-called two-cultures is a constant source of bafflement to me. Of course arts and science are linked. Both are about creativity. Both require technical mastery. And both are about exploring the limits of human potential.”
Regarding the second point, Arts at CERN was conceived from the start to be mainly self-funding. Support from CERN initially came through its programmes for fellows and students. Funding for the original Collide at CERN programme came from Ars Electronica, the City and the Canton of Geneva, and individual private donors, as well as from the UNIQA Insurance Group, which has a long association with CERN and continues to sponsor the Collide programme. Currently, FACT (Foundation for Art and Creative Technology), based in Liverpool in the UK, is the main partner for the Collide International award, while the Republic and Canton of Geneva and the City of Geneva support the Collide Geneva strand. Collide Geneva is now awarded in alternate years with Collide Pro Helvetia, in which artists from across Switzerland can compete for a residency funded principally by Pro Helvetia, the Swiss Arts Council. In addition, via a slightly different scheme called Accelerate, each year ministries or foundations in two different countries fund two artists working in different domains to come to CERN for one month. A further essential strand that has existed from the beginning brings many more artists to the Laboratory. Originally named Visiting Artists, now called Guest Artists, it hosts up to 10 artists a year who are specially selected for a visit of one to two days which they fund themselves. Together these three strands form the Arts at CERN programme.
A new era begins
By autumn 2014, when the call went out for a new person to head Arts at CERN, the programme had already earned a global reputation. Two internationally known artists and recipients of the Collide International award epitomise this reach: Bill Fontana and Ryoji Ikeda. Sound-sculptor Fontana, from the US, had produced works based on sounds across the globe when he was awarded the 2013 international residency. He explored sounds recorded in the LHC tunnel in works such as Acoustic Time Travel (CERN Courier December 2012 p32), and was followed a year later by Ikeda, Japan’s leading electronic composer and visual artist, who used his residency to inform his works supersymmetry and micro|macro.
This growing reputation within the science and art scene appealed in particular to the art historian and curator Mónica Bello, who had more than 15 years’ experience in curating and managing cultural programmes in art, science and technology institutions in different countries and had spent five years as artistic director of the VIDA International Art and Artificial Life Awards. Educated in modern and contemporary art history, she had been exposed to new ideas emerging at the boundaries of modern art and become passionate about the fusion between science and art. “I like art that is based on open processes, where different agents – the artists, researchers, even the audience – can join together to become the project,” she explains. “Experimentation with openness is the most exciting thing that’s happening in the arts right now – and CERN is the place to be for that.”
Bello took up her position at CERN in March 2015, joining co-ordinator Julian Caló. In accelerator terms, by the following year, Arts at CERN was already running at its design energy and beyond. The programme was bringing artists to the laboratory for as many as four residencies a year, and the number of entries for the Collide International award had risen from some 400 when it was launched in 2011 to around 1000.
Bello’s vision is to move the main focus beyond exploration and artistic research towards the further development of new art commissions and exhibitions. Continuing to support the artists once they finish their residencies at CERN is essential for this, and one way is to connect the artists with CERN scientists who have links to the cities of the programmes’ partners. This was initiated with Liverpool, where artists spent a one-month residency at FACT after being at CERN and where they were connected with research groups at Liverpool University led by LHCb physicist Tara Shears. Connecting CERN to international cultural organisations is part of the same objective, through links formed with different cities and countries.
These new developments are fully supported by CERN’s current management. Earlier this year, Director-General Fabiola Gianotti made her views on the “two cultures” clear at the World Economic Forum in Davos: “Too often people put science and humanities, or science and the arts, in different compartments… but they do have much in common. They are the highest expression of the creativity and the curiosity of humanity. We should really talk about culture in general, and not focus on one particular sector of culture. This is an important message we should be giving to teachers and to young people, for a better world, so they can grow to face the challenges of society.”
Arts at CERN currently has a clear home within CERN’s international relations sector. The aim is to provide stability for the programme, with a view to making it self-sustaining with separate funding within the context of the CERN & Society Foundation. At the same time, Arts at CERN forms part of a broader interest at CERN in the arts, which includes a distinctive and complementary programme Arts@CMS. This education and outreach initiative of the CMS collaboration has set up school-based projects and collaborations with artists with the aim of inspiring a greater appreciation of CERN’s science within the public at large.
A new production scheme
Arts at CERN has clearly been a resounding success with the arts community, and reaching audiences beyond the confines of the laboratory has proved no problem at all. Nor has it been difficult to find scientists willing to work with the artists; more than 200 have so far been involved. But it is by no means easy to mount an exhibition at a scientific laboratory – in places almost an industrial site – where health and safety are of paramount importance. There have been some obvious artistic interventions, such as when choreographer Gilles Jobin, winner of the 2012 Collide Geneva award, installed dancers in the CERN restaurant and computer centre, and even the library; and his project Quantum – a fusion of dance and lighting installation developed with Julius von Bismarck, the first Collide International artist-in-residence – debuted in the CMS cavern during CERN’s open days in 2013, before embarking on an international tour (CERN Courier November 2013 p29).
More recently, as part of Geneva’s annual Electron Festival in 2016, the winners of the 2014 Collide Geneva award, Rudy Decelière and Vincent Hänni, showed work developed with experimentalist Robert Kieffer and theorist Diego Blas. Their sound installation Horizons Irrésolus (2016) – which consists of 888 micro-synthesisers and speakers, network cable and nylon thread – was installed at CERN for visits during the Easter weekend when the festival traditionally takes place. The effort required by many people at CERN included registration to allow access to the Meyrin site and a shuttle bus to take visitors to see the installation. The response was enthusiastic, but not large.
It is to address such problems that Bello is developing the production and exhibition stages of the residencies. Rather as a scientific experiment evolves from conception to data-collection, analysis and publication, so does an artistic endeavour evolve from exploration to production and exhibition. The original focus of the residencies at CERN was on exploration: having artists and scientists come together to evolve ideas. In the new phase for Arts at CERN, at the end of their residencies, artists will be invited to propose a work to be considered for additional funding for production. The aim is to collaborate with other institutes to co-produce cultural works for ideas that are worth developing, and to curate the resulting work so that it can be shown and shared with the CERN community.
In 2016 CERN began a new collaboration for the Collide International award involving FACT, ushering in the production phase. To support production of the artworks, CERN and FACT have brought together several important European cultural organisations under the umbrella of ScANNER (Science and Art Network for New Exhibitions and Research). Supported by ScANNER, a major exhibition will open at FACT in November 2018 showcasing artworks from, among others, the 2018 Collide International award winner and four previous winners:
Semiconductor (2015), Yunchul Kim (2016), studio hrm199 led by Haroon Mirza (2017) and Suzanne Treister (2018). The exhibition will then tour the venues of the ScANNER members during 2019 and 2020.
Meanwhile, Arts at CERN continues to be a major influence across an impressive range of artistic areas. For example, in her project Quantum Nuggets, designer Laura Couto (Collide Pro Helvetia award 2017) has developed a computer program to enable other artists and designers to produce 3D shapes based on collision data from the LHC, thus creating real objects, such as furniture, that echo the invisible quantum world of particle physics. And in February this year Cheolwon Chang (Accelerate Korea) spent time at CERN finding out about the geometric properties of nature and how mathematics influences our further understanding of the universe. The winners of two awards for 2018 were announced in March: Suzanne Treister (Collide International) and Anne Sylvie Henchoz and Julie Lang (Collide Geneva).
Most recently, a prestigious commission for Art Basel held on 11–17 June and guest-curated by Bello, has highlighted the pinnacles that Arts at CERN is reaching. Swiss watchmakers, Audemars Piguet, a partner of Art Basel, chose the British artist-duo Semiconductor to create the Audemars Piguet Art Commission for the 2018 fair. Ruth Jarman and Joe Gerhardt, who work together under the name Semiconductor, were the recipients of the 2015 Collide International award and for Art Basel they created HALO – an installation that surrounds visitors with data collected by the ATLAS experiment at the LHC. HALO consists of a 10 m-wide cylinder defined by vertical piano wires, within which a 4 m-tall screen displays particle collisions. The data also trigger hammers that strike the wires and set up vibrations to create a multisensory experience.
This important commission is testament to the impact that the Arts at CERN programme is having in the world of contemporary art, and underlines its importance in bringing together apparently disparate ways of viewing and making sense of the world and the universe in which we live. There are many people who say they do not appreciate modern art, just as there are many who say that they never liked physics. But with modern art, just as with modern physics, making a little effort can open up remarkable new ways of thinking about our place in space and time. Arts at CERN is very clearly bringing people together in ways that open their minds and allow them not necessarily to understand, but to appreciate, how others view the world about us.
It is 1965 and workers at CERN are busy analysing photographs of trajectories of particles travelling through a bubble chamber. These and other scanning workers were employed by CERN and laboratories across the world to manually scan countless such photographs, seeking to identify specific patterns contained in them. It was their painstaking work – which required significant skill and a lot of visual effort – that put particle physics in high gear. Researchers used the photographs (see figures 1 and 3) to make discoveries that would form a cornerstone of the Standard Model of particle physics, such as the observation of weak neutral currents with the Gargamelle bubble chamber in 1973.
In the subsequent decades the field moved away from photographs to collision data collected with electronic detectors. Not only had data volumes become unmanageable, but Moore’s law had begun to take hold and a revolution in computing power was under way. The marriage between high-energy physics and computing was to become one of the most fruitful in science. Today, the Large Hadron Collider (LHC), with its hundreds of millions of proton–proton collisions per second, generates data at a rate of 25 GB/s – leading the CERN data centre to pass the milestone of 200 PB of permanently archived data last summer. Modelling, filtering and analysing such datasets would be impossible had the high-energy-physics community not invested heavily in computing and a distributed-computing network called the Grid.
Learning revolution
The next paradigm change in computing, now under way, is based on artificial intelligence. The so-called deep learning revolution of the late 2000s has significantly changed how scientific data analysis is performed, and has brought machine-learning techniques to the forefront of particle-physics analysis. Such techniques offer advances in areas ranging from event selection to particle identification to event simulation, accelerating progress in the field while offering considerable savings in resources. In many cases, images of particle tracks are making a comeback – although in a slightly different form from their 1960s counterparts.
Artificial neural networks are at the centre of the deep learning revolution. These algorithms are loosely based on the structure of biological brains, which consist of networks of neurons interconnected by signal-carrying synapses. In artificial neural networks these two entities – neurons and synapses – are represented by mathematical equivalents. During the algorithm’s “training” stage, the values of parameters such as the weights representing the synapses are modified to lower the overall error rate and improve the performance of the network for a particular task. Possible tasks vary from identifying images of people’s faces to isolating the particles into which the Higgs boson decays from a background of identical particles produced by other Standard Model processes.
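As a minimal sketch of that training stage (toy data, not a physics sample), the loop below adjusts the weights of a two-layer network by gradient descent to reduce a classification error:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 2))                          # toy input features
y = (X[:, 0] ** 2 + X[:, 1] ** 2 > 1.0).astype(float)   # toy signal label

# Two layers of weights and biases play the role of the synapses
W1, b1 = 0.5 * rng.normal(size=(2, 16)), np.zeros(16)
W2, b2 = 0.5 * rng.normal(size=(16, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(2000):
    # Forward pass: each layer applies weights, biases and a non-linearity
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2).ravel()
    # Backward pass: gradients of the cross-entropy loss w.r.t. the weights
    grad_out = (p - y)[:, None] / len(y)
    gW2, gb2 = h.T @ grad_out, grad_out.sum(axis=0)
    grad_h = (grad_out @ W2.T) * (1.0 - h ** 2)
    gW1, gb1 = X.T @ grad_h, grad_h.sum(axis=0)
    # Update: nudge every parameter downhill to lower the error rate
    for param, grad in ((W1, gW1), (b1, gb1), (W2, gW2), (b2, gb2)):
        param -= 0.5 * grad

print("training accuracy:", ((p > 0.5) == y).mean())
```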
Artificial neural networks have been around since the 1960s. But it took several decades of theoretical and computational development for these algorithms to outperform humans, in some specific tasks. For example: in 1996, IBM’s chess-playing computer Deep Blue won its first game against the then world chess champion Garry Kasparov; in 2016 Google DeepMind’s AlphaGo deep neutral-network algorithm defeated the best human players in the game of Go; modern self-driving cars are powered by deep neural networks; and in December 2017 the latest DeepMind algorithm, called AlphaZero, learned how to play chess in just four hours and defeated the world’s best chess-playing computer program. So important is artificial intelligence in potentially addressing intractable challenges that the world’s leading economies are establishing dedicated investment programmes to better harness its power.
Computer vision
The immense computing and data challenges of high-energy physics are ideally suited to modern machine-learning algorithms. Because the signals measured by particle detectors are stored digitally, it is possible to recreate an image from the outcome of particle collisions. This is most easily seen for detectors that offer discrete, pixelised position information, such as in some neutrino experiments, but it also applies, on a more complex basis, to collider experiments. Not long after computer-vision techniques, which are based on so-called convolutional neural networks (figure 2), were applied to the analysis of ordinary images, particle physicists applied them to detector images – first of jets and then of photons, muons and neutrinos – making the task of understanding ever-larger and more abstract datasets simpler and more intuitive.
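A minimal sketch, assuming PyTorch, of the kind of convolutional network meant here; the 32 × 32 single-channel input (think of a calorimeter grid), the layer widths and the two-class task are all illustrative assumptions:

```python
import torch
from torch import nn

model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1),  # learn local hit patterns
    nn.ReLU(),
    nn.MaxPool2d(2),                             # 32x32 -> 16x16
    nn.Conv2d(16, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),                             # 16x16 -> 8x8
    nn.Flatten(),
    nn.Linear(32 * 8 * 8, 2),                    # e.g. signal vs background
)

images = torch.randn(8, 1, 32, 32)   # a batch of toy "detector images"
logits = model(images)               # shape (8, 2)
print(logits.shape)
```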
Particle physicists were among the first to use artificial-intelligence techniques in software development, data analysis and theoretical calculations. The first of a series of workshops on this topic, titled Artificial Intelligence in High-Energy and Nuclear Physics (AIHENP), was held in 1990. At the time, several changes were taking effect. For example, neural networks were being evaluated for event-selection and analysis purposes, and theorists were calling on algebraic or symbolic artificial-intelligence tools to cope with a dramatic increase in the number of terms in perturbation-theory calculations.
Over the years, the AIHENP series was renamed ACAT (Advanced Computing and Analysis Techniques) and expanded to span a broader range of topics. However, following a new wave of adoption of machine learning in particle physics, the focus of the 18th edition of the workshop, ACAT 2017, was again machine learning – featuring its role in event reconstruction and classification, fast simulation of detector response, measurements of particle properties, and AlphaGo-inspired calculations of Feynman loop integrals, to name a few examples.
Learning challenge
For these advances to happen, machine-learning algorithms had to improve and a physics community dedicated to machine learning needed to be built. In 2014 a machine-learning challenge set up by the ATLAS experiment to identify the Higgs boson garnered close to 2000 participants on the machine-learning competition platform Kaggle. To the surprise of many, the challenge was won by a computer scientist armed with an ensemble of artificial neural networks. In 2015 the Inter-experimental LHC Machine Learning working group was born at CERN out of a desire of physicists from across the LHC to have a platform for machine-learning work and discussions. The group quickly grew to include all the LHC experiments and to involve others outside CERN, like the Belle II experiment in Japan and neutrino experiments worldwide. More dedicated training efforts in machine learning are now emerging, including the Yandex machine learning school for high-energy physics and the INSIGHTS and AMVA4NewPhysics Marie Skłodowska-Curie Innovative Training Networks (see Learning machine learning).
Event selection, reconstruction and classification are arguably the most important particle-physics tasks to which machine learning has been applied. As in the time of manual scanning, when the photographs of particle trajectories were analysed to select events of potential physics interest, modern trigger systems are used by many particle-physics experiments, including those at the LHC, to select events for further analysis (figure 3). The decision of whether to save or throw away an event has to be made in a split microsecond and requires specialised hardware located directly on the trigger systems’ logic boards. In 2010 the CMS experiment introduced machine-learning algorithms to its trigger system to better estimate the momentum of muons, which may help identify physics beyond the Standard Model. At around the same time, the LHCb experiment also began to use such algorithms in their trigger system for event selection.
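As a toy illustration of the underlying regression task – not of the trigger hardware itself, which runs pre-computed models on dedicated boards rather than Python – the sketch below fits a gradient-boosted regressor to estimate a muon’s transverse momentum from synthetic bending-angle measurements. The 1/pT bending relation, the noise level and the number of features are all assumptions made purely for the example.

# Toy version of the regression task: estimate muon transverse momentum (pT)
# from track-bending measurements. Synthetic data only; bending angle is
# taken to scale as 1/pT, with Gaussian measurement noise added.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 5000
pt_true = rng.uniform(3.0, 100.0, n)                              # hypothetical pT spectrum, GeV
bend = 1.0 / pt_true[:, None] + rng.normal(0.0, 0.005, (n, 3))    # three noisy bending measurements

model = GradientBoostingRegressor(n_estimators=200, max_depth=3)
model.fit(bend, pt_true)

# Noise-free test points: predictions should land near 10 and 50 GeV.
test_bend = 1.0 / np.array([[10.0], [50.0]]) + np.zeros((2, 3))
print(model.predict(test_bend))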
Neutrino experiments such as NOvA and MicroBooNE at Fermilab in the US have also used computer-vision techniques to reconstruct and classify various types of neutrino events. In the NOvA experiment, using deep-learning techniques for such tasks is equivalent to collecting 30% more data or, alternatively, to building and operating more expensive detectors – potentially saving global taxpayers significant amounts of money. Similar efficiency gains have been observed by the LHC experiments.
Currently, about half of the Worldwide LHC Computing Grid’s computing budget is spent simulating the numerous possible outcomes of high-energy proton–proton collisions. To achieve a detailed understanding of the Standard Model and any physics beyond it, a tremendous number of such Monte Carlo events needs to be simulated. But despite the best efforts of the community worldwide to optimise these simulations, their speed is still a factor of 100 short of the needs of the High-Luminosity LHC, which is scheduled to start taking data around 2026. If a machine-learning model could directly learn the properties of the reconstructed particles, bypassing the complicated simulation of the interactions between the particles and the material of the detectors, it could deliver simulations orders of magnitude faster than those currently available.
Competing networks
One idea for such a model relies on algorithms called generative adversarial networks (GANs). In these algorithms, two neural networks compete with each other: a generator produces synthetic data – here, candidate detector responses – while a discriminator, acting as its adversary, tries to distinguish the generated data from reference samples. Training the two in competition drives the generator to produce ever more realistic output. CERN openlab and CERN’s software-for-experiments group, along with others in the LHC community and industry partners, are starting to see the first results of using GANs for faster event and detector simulations.
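The following is a minimal, self-contained sketch of the adversarial training loop in Python with PyTorch. The “detector response” here is just a random 64-component vector standing in for the output of a slow, full simulation; the network sizes, batch size and training parameters are illustrative only, far smaller than anything used for real fast simulation.

# Minimal GAN sketch for "fast simulation": a generator maps random noise to a
# fake detector response, while a discriminator tries to tell fake responses
# from reference ones. All shapes are illustrative stand-ins.
import torch
import torch.nn as nn

latent_dim, response_dim = 16, 64   # hypothetical sizes

generator = nn.Sequential(
    nn.Linear(latent_dim, 128), nn.ReLU(),
    nn.Linear(128, response_dim),
)
discriminator = nn.Sequential(
    nn.Linear(response_dim, 128), nn.ReLU(),
    nn.Linear(128, 1),              # logit: "reference" vs "generated"
)

loss_fn = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

real = torch.randn(32, response_dim)            # stand-in for full-simulation output
for step in range(100):
    # Discriminator step: label reference responses 1, generated ones 0.
    fake = generator(torch.randn(32, latent_dim)).detach()
    d_loss = (loss_fn(discriminator(real), torch.ones(32, 1)) +
              loss_fn(discriminator(fake), torch.zeros(32, 1)))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator step: try to make the discriminator label fakes as real.
    fake = generator(torch.randn(32, latent_dim))
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

Once trained, only the generator is needed: producing an event is a single forward pass through a small network rather than a full particle-by-particle simulation, which is where the potential speed-up of orders of magnitude comes from.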
Particle physics has come a long way from the heyday of manual scanners in understanding elementary particles and their interactions. But there are gaps in our understanding of the universe that need to be filled – the nature of dark matter, dark energy, matter–antimatter asymmetry, neutrinos and colour confinement, to name a few. High-energy physicists hope to find answers to these questions using the LHC and its upcoming upgrades, as well as future lepton colliders and neutrino experiments. In this endeavour, machine learning will most likely play a significant part in making data processing, data analysis and simulation, and many other tasks, more efficient.
Driven by the promise of great returns, big companies such as Google, Apple, Microsoft, IBM, Intel, Nvidia and Facebook are investing hundreds of millions of dollars in deep-learning technology, including dedicated software and hardware. As these technologies find their way into particle physics, together with high-performance computing, they will boost the performance of current machine-learning algorithms. Another way to increase performance is through collaborative machine learning, in which several machine-learning units operate in parallel. Quantum algorithms running on quantum computers might also bring orders-of-magnitude improvements in algorithm acceleration, and there are probably more advances in store that are difficult to predict today. The availability of more powerful computer systems together with deep learning will likely allow particle physicists to think bigger and perhaps come up with new types of searches for new physics, or with ideas to automatically extract and learn physics from the data.
That said, machine learning in particle physics still faces several challenges. Some of the most significant include understanding how to treat systematic uncertainties while employing machine-learning models and interpreting what the models learn. Another challenge is how to make complex deep learning algorithms work in the tight time window of modern trigger systems, to take advantage of the deluge of data that is currently thrown away. These challenges aside, the progress we are seeing today in machine learning and in its application to particle physics is probably just the beginning of the revolution to come.
The architect of Pakistan–CERN collaboration and former chairman of the Pakistan Atomic Energy Commission (PAEC), Ishfaq Ahmad, passed away on 15 January in Islamabad, aged 87. He was associated with PAEC for more than 40 years. After joining the organisation in 1960, following the completion of his PhD at the University of Montreal in Canada and postdoctoral positions at the University of Ottawa and the Sorbonne (Université de Paris), he played a crucial role in the development of civil and military nuclear technology in Pakistan.
Ishfaq’s doctoral work was based on the use of fine-grained nuclear emulsions, pioneered by his thesis supervisor, Pierre Demers. He also worked at the Niels Bohr Institute in Copenhagen between 1961 and 1962, where he had opportunities to interact with Bohr himself. It was during his stay there that his experimental work on nuclear reactions brought him to CERN, where nuclear emulsions were exposed for subsequent analyses at different laboratories. Years later, he recalled that his fascination with CERN and the work being done there never faded, resulting in the establishment of close ties between CERN and PAEC.
The first formal scientific and technical agreement between CERN and Pakistan, which formed the basis of future Pakistan–CERN cooperation, was signed on 11 January 1994 by Ishfaq on behalf of Pakistan and by the then CERN Director-General Chris Llewellyn Smith on behalf of CERN. Thereafter, a series of protocols, addenda and extensions of protocols, MoUs and letters of intent were signed by CERN Directors-General and PAEC chairmen, many concerning specific projects related to the construction of the Large Hadron Collider and components of the CMS and ATLAS detectors. The most conspicuous of these projects was the supply of eight steel supports for the CMS yoke, which were fabricated in PAEC laboratories in Islamabad. Concurrently, the participation of the National Centre for Physics (NCP) in the CMS collaboration resulted in scientific exchanges and data simulations, and a node for grid computing was established at NCP. Another institution in Pakistan that joined the CERN collaboration was the COMSATS Institute of Information Technology, which was granted membership of the ALICE collaboration. Eventually, Pakistan gained Associate Membership of CERN on 31 July 2015.
While overseeing the increasingly deep ties between Pakistan and CERN, Ishfaq remained actively engaged with other international fora, such as the International Atomic Energy Agency (IAEA), the International Centre for Theoretical Physics (ICTP) and the International Institute for Applied Systems Analysis (IIASA). As a member of the IAEA board of governors, he was able to convince the then director-general of the IAEA, Hans Blix, to establish an advisory group that strengthened the agency’s role as a facilitator of civilian nuclear technology through improved technical-cooperation programmes, especially for developing countries.
His avid support for ICTP was also based on his strong belief in science as a vehicle of peace and development. It is not surprising that he was the one who wrote to the then UN secretary-general Kofi Annan to launch World Science Day for Peace and Development, which was duly approved and has been organised internationally by UNESCO since 2001. He regularly participated in Pugwash meetings following the 1974 Indian nuclear test and strongly advocated a nuclear-free South Asia. He was a strong supporter of nuclear power as a significant component of the energy mix in Pakistan, but kept an open mind about alternative energy sources. He lobbied for, and successfully achieved, Pakistan’s membership of IIASA, and remained on its board from 2007 to 2012. His broader vision of national economic interests led to the creation of institutions such as the Global Climate Change Impact Study Centre and the Centre for Earthquake Studies in Pakistan.
In 1998 the government of Pakistan bestowed upon Ishfaq its highest civil award, the Nishan-i-Imtiaz, adding to several other honours and awards received in preceding years, and entrusted him with senior positions such as advisor to the prime minister of Pakistan. He held government posts until 2012, when he decided to restrict his activities to the work of NCP as the chairman of its board of governors. He was buried with state honours in Islamabad on 16 January.
This book examines the topic of quantum information from both physical and philosophical perspectives, addressing the main questions about its nature. At present, different interpretations of the notion of information coexist, and quantum mechanics brings many puzzles of its own; as a consequence, says the author, there is not yet a generally agreed answer to the question “what is quantum information?”.
The chapters are organised in three parts. The first is dedicated to presenting various interpretations of the concept of information and addressing the question of the existence of two qualitatively different kinds of information (classical and quantum). The links between this concept and other notions, such as knowledge, representation, interpretation and manipulation, are discussed as well.
The second part is devoted to the relationship between informational and quantum issues, and deals with the entanglement of quantum states and the notion of pragmatic information. Finally, the third part analyses how probability and correlation underlie the concept of information in different problem domains, as well as the issue of the ontological status of quantum information.
Providing an interdisciplinary examination of quantum information science, this book is aimed at philosophers of science, quantum physicists and information-technology experts interested in delving into the many conceptual and philosophical problems inherent in this young field of research.