
The universe is getting out of breath

A new study of more than 200,000 galaxies, from the ultraviolet to the far infrared, has provided the most comprehensive assessment of the energy output of the nearby universe. It confirms that the radiation produced by stars in galaxies today is only about half what it was two thousand million years ago. This overall “fading” reflects a decrease in the rate of star formation via the collapse of cool clouds of gas. It seems that the universe is running out of gas – in effect, getting out of breath – and slowly dying.

It is well known to astronomers that the rate of star formation in the universe reached a peak around a redshift z = 2, when the universe was about 3 Gyr old. Over the subsequent 10 Gyr until now, the production of stars in galaxies has steadily decreased in a given co-moving volume of space – that is, a volume expanding at the same rate as the cosmic expansion of the universe, therefore keeping a constant matter content during the history of the universe. Because the most massive stars are also the most luminous ones and have the shortest lifetimes, the energy output of a galaxy is closely related to its star-formation rate. Indeed, some 100 million years after the formation of a star cluster, its brightest stars will have exploded as supernovae, leaving only the lower-mass stars, which are much less luminous.

Although the fading trend of the universe has been known since the late 1990s, measuring it accurately has been a challenge. Part of the difficulty is to gather a representative sample of galaxies at different redshifts and to account properly for all biases. Another complication comes from the obscuration by dust in the galaxies, which absorbs ultraviolet and visible radiation and then re-emits this energy in the infrared. A way to overcome these difficulties is to observe the same region of the sky at many different wavelengths to cover fully the energy output. This has now been achieved by a large international collaboration led by Simon Driver from the International Centre for Radio Astronomy Research (ICRAR), University of Western Australia.

The study is part of the Galaxy and Mass Assembly (GAMA) project, the largest multi-wavelength survey ever put together. It used seven of the world’s most powerful telescopes to observe more than 200,000 galaxies, each measured at 21 wavelengths from the ultraviolet at 0.1 μm to the far infrared at 500 μm. Driver and collaborators then used this unique data set to derive the spectral energy distribution of the individual galaxies, and the combined one for three different ranges of redshift up to z = 0.20. For the nearest galaxies, they obtain an energy output of (1.5±0.3) × 10³⁵ W produced on average by galaxies in a co-moving volume of a cubic megaparsec, which is equivalent to a cube with a side of about 3.3 million light-years. While this is for a redshift range between z = 0.02 and z = 0.08, corresponding to a mean look-back time of 0.75 Gyr, the team finds a significantly higher value of (2.5±0.3) × 10³⁵ W for a look-back time of 2.25 Gyr (0.14 < z < 0.20). This indicates a decrease by about 10³⁵ W in 1.5 Gyr. This trend occurs across all wavelengths and corresponds roughly to a decrease by a factor of two over the past two thousand million years.
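The quoted decline is simple arithmetic; a minimal sketch, using the central values from the text and the standard conversion of 1 Mpc to about 3.26 million light-years:

```python
# Central values quoted in the text (watts per cubic megaparsec)
recent = 1.5e35    # mean look-back time 0.75 Gyr (0.02 < z < 0.08)
earlier = 2.5e35   # mean look-back time 2.25 Gyr (0.14 < z < 0.20)

MPC_IN_MLY = 3.26  # 1 megaparsec is about 3.26 million light-years

decline = earlier - recent        # ~1e35 W lost over 1.5 Gyr
rate = decline / (2.25 - 0.75)    # decline per Gyr

print(f"cube side ~ {MPC_IN_MLY:.1f} million light-years")
print(f"decline ~ {decline:.1e} W over 1.5 Gyr, i.e. {rate:.1e} W/Gyr")
```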

The ongoing decay of energy production by stars in galaxies also follows the trend of active galactic nuclei and gamma-ray bursts, which were all more numerous and powerful several gigayears ago. The shining, glorious days of the universe are now long past; instead, it will continue to decline, sliding gently into old age, an age of quiescence.

The most precise picture of the proton

After 15 years of measurement and another eight years of scrutiny and calculation, the H1 and ZEUS collaborations have published the most precise results to date on the innermost structure and behaviour of the proton. The two collaborations, which took data at DESY’s electron–proton collider, HERA, from 1992 to 2007, have combined nearly 3000 measurements of inclusive deep-inelastic cross-sections (H1, ZEUS 2015). With its completion, the paper secures the legacy of the HERA data.

Within the framework of perturbative QCD, the proton is described in terms of parton-density functions, which provide the probability of scattering from a parton, either a gluon or a quark. The H1 and ZEUS collaborations have also produced the first QCD analysis of the data, encompassed in the HERAPDF2.0 sets of parton-distribution functions (PDFs), which form a significant part of the paper. The combined data presented in the new publication will be the basis of all analyses of the structure of the proton for years to come.

As figure 1 depicts, in deep-inelastic scattering, a boson – γ, Z0 or W± – acts as a probe of the structure of the proton by interacting with its constituents, through neutral-current (γ, Z0) or charged-current (W±) reactions. Of course, this picture is simplified: the proton is a dynamic structure of quarks and gluons, but by measuring deep-inelastic scattering over a wide kinematic range, this internal structure can be mapped precisely. The variables used to do this are the squared four-momentum, Q2, of the exchanged boson, and Bjorken x, xBj, the fraction of the proton’s momentum carried by the struck quark.

A wealth of data

The data, taken over the 15-year lifetime of the HERA accelerator, correspond to a total luminosity of about 1 fb⁻¹ of deep-inelastic electron–proton and positron–proton scattering. All of the data used were taken with an electron/positron beam energy of 27.5 GeV, with roughly equal amounts of data recorded for electron–proton and positron–proton scattering. HERA initially operated with a proton-beam energy of 820 GeV, which was subsequently increased to 920 GeV; these data constitute the bulk of the combined measurements. Towards the end of HERA’s run, special data samples with proton-beam energies of 575 GeV and 460 GeV were taken and are also included. The data were combined separately for the e+p and e−p runs and for the different centre-of-mass energies. Overall, 41 separate data sets were used in the combination, spanning 0.045 < Q2 < 50,000 GeV2 and 6 × 10⁻⁷ < xBj < 0.65, i.e. six orders of magnitude in each variable. The initial measurements consisted of 2937 published cross-sections in total, which were combined to produce 1307 final combined cross-section measurements. These results supersede the previous paper with combined measurements of deep-inelastic scattering cross-sections, in which only data up to the year 2000 were combined (CERN Courier January/February 2008 p30).

The procedure for combining the data involved a careful treatment of the various uncertainties between all of the data sets. In particular, the correlations of the various sources were assessed, and those uncertainties deemed to be point-to-point correlated were accounted for as such in the averaging of the data based on a χ2 minimization method. The resulting χ2 is 1687 for 1620 degrees of freedom, demonstrating excellent compatibility of the multitude of data sets. Figure 2 illustrates the power of the data combination. It displays a selection of the data in bins of the photon virtuality, Q2, and for fixed values of xBj, showing separately individual data sets from several different analyses. A combined data point can be the combination of up to eight individual measurements. The improvement in precision is striking, as is seen more clearly in the close-up on some of the points. An indication of the precision of the combined data is that the total uncertainties are close to 1% for the bulk region of 3 < Q2 < 500 GeV2.
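As a flavour of the averaging idea, the sketch below shows an inverse-variance weighted average with a χ² compatibility check. It is greatly simplified: the real HERA combination also propagates the point-to-point correlated systematic uncertainties (e.g. via nuisance parameters in the χ² minimization), which this toy version ignores.

```python
import numpy as np

def combine(values, errors):
    """Inverse-variance weighted average of independent measurements of the
    same quantity, with a chi-squared compatibility check. Sketch only:
    correlated systematic uncertainties are not treated here."""
    values = np.asarray(values, dtype=float)
    errors = np.asarray(errors, dtype=float)
    weights = 1.0 / errors**2
    mean = np.sum(weights * values) / np.sum(weights)
    err = 1.0 / np.sqrt(np.sum(weights))
    chi2 = np.sum(((values - mean) / errors) ** 2)  # ndof = len(values) - 1
    return mean, err, chi2

# Hypothetical cross-section measurements of the same (xBj, Q2) bin:
m, e, chi2 = combine([1.02, 0.98, 1.05], [0.04, 0.05, 0.06])
print(f"combined = {m:.4f} ± {e:.4f}, chi2 = {chi2:.2f} for 2 d.o.f.")
```

Note how the combined uncertainty is smaller than any individual input, which is the effect seen in the HERA combination.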

As well as showing the precision of the data and the power of the combination, the cross-section dependence for the different values of xBj demonstrates the dynamic structure of the proton in a striking way. For xBj = 0.08, the cross-section dependence is reasonably flat as a function of Q2. This is known as Bjorken scaling, and is expected from the simple parton model in which inelastic electron–proton scattering is viewed as a sum of elastic electron–parton scattering, where the partons are free point-like objects. At lower values of xBj, the cross-section rises increasingly steeply with increasing Q2 and decreasing xBj. This effect is known as scaling violation, and is indicative of the increasing density of gluons in the proton.

The increased density and rise of the cross-section can also be observed by considering the proton-structure function F2 (which is closely related to the cross-section) plotted versus xBj at fixed Q2, as in figure 3. The strong rise of F2 with decreasing xBj was one of the most important discoveries at HERA. Previous experiments, which used fixed targets, could not constrain this behaviour because their data were at low values of Q2 and high values of xBj. The figure also shows how the rise towards low xBj is steeper with increasing Q2. At higher Q2, the exchanged boson effectively probes smaller distances, and so can see more of the inner structure of the proton and hence resolves more and more gluons.

Parton distributions

The proton structure of quarks and gluons is often parameterized in terms of the PDFs, which correspond to the probability of finding a gluon or a quark of a given flavour with momentum fraction x in the proton, given the scale μ of the hard interaction. The behaviour of the PDFs with scale is predicted by QCD, but the absolute values need to be determined from fits to data. Using the HERA data, the PDFs can be extracted, while at the same time the evolution as a function of the scale is tested. This analysis is performed at leading order, next-to-leading order (NLO) and next-to-next-to-leading order, yielding the HERAPDF2.0 family of PDFs.
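As a flavour of what such a parameterization can look like, a common starting-scale functional form is x f(x) = A x^B (1 − x)^C (the actual HERAPDF2.0 parameterization includes additional polynomial factors). The parameter values below are purely illustrative, not fitted numbers; the momentum fraction carried by the parton species is the integral of x f(x) over x:

```python
import numpy as np

def xf(x, A, B, C):
    """Toy starting-scale parton parameterization: x*f(x) = A * x**B * (1-x)**C."""
    return A * x**B * (1.0 - x)**C

# Momentum fraction carried by this parton species = integral of x f(x) dx.
# Illustrative (not fitted) parameters A=3, B=0.5, C=3:
x = np.linspace(1e-6, 1.0 - 1e-6, 200_001)
y = xf(x, A=3.0, B=0.5, C=3.0)
momentum_fraction = np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))  # trapezoid rule
print(f"momentum fraction ~ {momentum_fraction:.3f}")
```

Analytically this integral is 3·B(1.5, 4) ≈ 0.305, so this toy species would carry about 30% of the proton’s momentum; in a real fit, sum rules constrain these integrals across all flavours and the gluon.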

Figure 3 compares the predictions of the PDF analysis at NLO with the measurements of the structure functions. In general, the QCD predictions describe the data well, although the description becomes poorer at low Q2, indicating inadequacies of the theory at these low scales. Such precise knowledge of the PDFs is also of the highest importance for physics at the LHC at CERN, because the uncertainties stemming from the knowledge of the PDFs are larger for proton–proton collisions than for deep-inelastic scattering.

The QCD analysis can also be extended to include data from the production of charm quarks and jets at HERA. Charm production is measured again as a function of xBj and Q2, but with the additional requirement of a charm meson detected in the final state. Jet production is measured in the Breit frame, where jets with non-zero transverse momentum are expected from hard QCD processes only. By including the charm and jet data, the analysis becomes particularly sensitive to the strong-coupling constant, αs(MZ), whereas without jet data the coupling constant is strongly correlated with the normalization of the gluon density. The combined analysis of inclusive data, charm data and jet data at NLO results in an experimentally very precise measurement of the strong-coupling constant, αs(MZ) = 0.1183±0.0009 (exp.), with significantly larger uncertainties of +0.0039/−0.0033 related to the model and theory.

It is also interesting to look at data from HERA on neutral-current (NC) and charged-current (CC) scattering that is differential in Q2 but integrated over xBj, as shown in figure 4 both for e+p and e−p. At small Q2, the cross-sections for NC are much larger than for CC, whereas at large Q2, of the order of the vector-boson mass squared, they become similar in size. This is a direct visualization of electroweak unification: the CC process is mediated by the weak force, whereas photon exchange dominates the NC cross-section. Looking in more detail, the NC cross-sections for e+p and e−p are almost identical at small Q2 but start to diverge as Q2 grows. This is due to γ–Z0 interference, which has the opposite effect on the e+p and e−p cross-sections. The CC cross-sections also differ between e+p and e−p scattering, with two effects contributing: the helicity structure of the W± exchange, and the fact that CC e−p scattering probes the u-valence quarks, whereas d-valence quarks are accessed in CC e+p.
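The convergence of NC and CC cross-sections can be seen from the boson propagators alone. This is only a back-of-the-envelope sketch (couplings and parton densities are omitted): the squared propagator factor is 1/(Q2 + M²)², so a massive boson is suppressed relative to the photon until Q2 approaches M²:

```python
MW2 = 80.4**2  # W boson mass squared in GeV^2

def propagator2(q2, m2):
    """Squared propagator factor 1/(Q2 + M^2)^2 for a boson of mass^2 = m2."""
    return 1.0 / (q2 + m2) ** 2

# Ratio of a W-like to a photon-like contribution, couplings ignored:
for q2 in (10.0, 1e3, 1e4, 1e5):
    ratio = propagator2(q2, MW2) / propagator2(q2, 0.0)
    print(f"Q2 = {q2:>8.0f} GeV2: (W / photon) propagator ratio = {ratio:.3g}")
```

The ratio is tiny at Q2 = 10 GeV2 and approaches unity around Q2 ~ 10⁵ GeV2, mirroring the behaviour of the curves in figure 4.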

In summary, the HERA collider experiments H1 and ZEUS have combined their precision data on deep-inelastic scattering, reaching a precision of almost 1% in the double-differential cross-section measurements. It is the largest coherent data set on proton structure, spanning six orders of magnitude in the kinematic variables xBj and Q2. A QCD analysis of the HERA data alone results in a set of parton-density functions, HERAPDF2.0, without the need for data from other experiments. Also, using HERA jet and charm data, the strong-coupling constant is measured together with proton PDFs. QCD and electroweak effects are probed at high precision in the same data set, providing beautiful demonstrations of the validity of the Standard Model.

Inventing our future accelerator

Can you imagine that electrons

Are planets circling their suns?

Space exploration, wars, elections

And hundreds of computer tongues

Translation by A Seryi of a 1920 poem by Valery Bryusov, “The World of Electron”

Accelerator science and technology exhibits a rich history of inventions that now spans almost a century. The fascinating story of accelerator development, which is particularly well described in Engines of Discovery: A Century of Particle Accelerators by Andy Sessler and Ted Wilson (CERN Courier September 2007 p63), can also be summarized in the so-called “Livingston plot”, where the equivalent energy of an accelerated beam is shown as a function of time. The plot depicts how new accelerating technologies take over once the previous technology has reached its full potential, so that over the course of many decades the maximum achieved energy has continued to grow exponentially, thanks to many inventions and the development of many different accelerator technologies. The most recent decades have also been rich with inventions, such as the photon-collider concept (still an idea), crab-waist collisions (already verified experimentally at the DAFNE storage ring in Frascati) and integrable optics for storage rings (verification is planned at the Integrable Optics Test Accelerator at Fermilab), to name a few.

Despite recent inventions, however, there is some cause for anxiety about the latest progress in the field and projections for the future. The three most recent decades represented by the Tevatron and the LHC exhibit a much slower energy growth over time. This may be an indication that the existing technologies for acceleration have come to their maximum potential, and that further progress will demand the creation of a new accelerating method – one that is more compact and economical. There are indeed several emerging acceleration techniques, such as laser-driven and beam-driven plasma acceleration (CERN Courier June 2007 p28), which can perhaps bring the Livingston plot back onto its fast-rising exponential trend. Nevertheless, inspired by the variety of past inventions in the field, and dreaming about future accelerators that will require many scientific and technological breakthroughs, we can pose the question: how can we invent more efficiently?

It is worth recalling two biographical facts about two prominent accelerator scientists: John Adams, who in the 1950s played the key role in implementing the courageous decision to cancel the already approved 10 GeV weak-focusing accelerator for a totally innovative 25 GeV strong-focusing machine (the CERN Proton Synchrotron), and Gersh Budker, who was the founder and first director of the Institute of Nuclear Physics, Novosibirsk, and inventor of many innovations in the field of accelerator physics, such as electron cooling. It is important in this context that Adams had a unique combination of scientific and engineering abilities, and that Budker was once called by Lev Landau a “relativistic engineer”. This connection is indeed notable, because the art of inventiveness that I am about to discuss came from engineering.

While everyone has probably heard about problem-solving approaches such as brainstorming, or even its improved version, synectics (one of whose techniques is a fairy-tale-style description of the problem – note the snakes in figure 1c representing the magnetic fields in the solenoid), it is likely that most people working in science have never heard of the inventive methodologies that engineers have developed and used. It is indeed astonishing that formal inventive approaches, so widely used in industry, are rarely known in science.

One such approach is TRIZ – pronounced “treez” – which can be translated as the Theory of Inventive Problem Solving. TRIZ was developed by Genrikh Altshuller in the Soviet Union in the mid-20th century. Starting in 1946 when he was working in a patent office, but interrupted by a dramatic decade-long turmoil in his life (another story) that he overcame to resume his studies, Altshuller analysed many thousands of patents, trying to discover patterns to identify what makes a patent successful. Following his work in the patent office, between 1956 and 1985 he formulated TRIZ and, together with his team, developed it further. Since then, TRIZ has gradually become one of the most powerful tools in the industrial world. For example, in his 7 March 2013 contribution to the business magazine Forbes, “What Makes Samsung Such An Innovative Company?”, Haydn Shaughnessy wrote that TRIZ “became the bedrock of innovation at Samsung”, and that “TRIZ is now an obligatory skill set if you want to advance within Samsung”.

A methodology

The authors of TRIZ devised the following four cornerstones for the method: the same problems and solutions appear again and again, but in different industries; there is a recognizable technological evolution path for all industries; innovative patents (about a quarter of the total) use science and engineering theories from outside their own area or industry; and an innovative patent uncovers and solves contradictions. In addition, the team created a detailed methodology, which employs tables of typical contradicting parameters and a wonderfully universal table of 40 inventive principles. The TRIZ method consists in finding a pair of contradicting parameters in a problem, which, via the TRIZ inventive tables, immediately leads to a small selection of suitable inventive principles that narrow down the choice and result in a faster solution.
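The lookup step described above can be pictured as a table keyed by contradiction pairs. The sketch below is purely illustrative: the parameter pairings and principle names are placeholders, not entries from the canonical Altshuller contradiction matrix or the actual list of 40 principles.

```python
# Toy model of the TRIZ workflow: a contradiction between an improving and
# a worsening parameter is looked up in a matrix that points to a handful
# of inventive principles. Entries below are illustrative placeholders.
CONTRADICTION_MATRIX = {
    ("strength", "weight"): ["segmentation", "composite materials"],
    ("speed", "accuracy"): ["preliminary action", "feedback"],
}

def suggest_principles(improving, worsening):
    """Return candidate inventive principles for a contradiction pair."""
    return CONTRADICTION_MATRIX.get((improving, worsening), [])

print(suggest_principles("speed", "accuracy"))
```

The point of the real method is that the full matrix, distilled from thousands of patents, narrows 40 principles down to a few candidates for any given contradiction.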

TRIZ textbooks often cite Charles Wilson’s cloud chamber (invented in 1911) and Donald Glaser’s bubble chamber (invented in 1952) as examples – to use the terminology of TRIZ – of a system and anti-system. Indeed, the cloud chamber works on the principle of droplets of liquid forming in a gas, whereas the bubble chamber uses bubbles of gas created in a liquid (figure 1a). If the TRIZ inventive principle of system/anti-system had been applied, the invention of the bubble chamber would have followed immediately, and not almost half a century after the invention of the cloud chamber.

Another TRIZ inventive principle, that of Russian dolls (nested dolls, or matryoshki), can be applied not only to engineering but also in many other areas, including science or even philology. The principle of a concept inside a concept can be seen in the British nursery rhyme “This is the house that Jack built”, and the 1920 poem by Valery Bryusov (quoted at the start), which describes an electron as a planet in its own world, can also be seen as a reflection of the nested-doll inventive principle, this time in poetic science fiction. A spectacular scientific example is the construction of a high-energy physics detector, where many different sub-detectors are inserted into one another, to enhance the accuracy of detecting elusive particles (figure 1b). Such detectors are needed to find out if there is indeed a world inside of an electron – and the circle is now closed!

The TRIZ method can be applied, in particular, to accelerator science. For example, the dual force-neutral solenoid found in the interaction region of a collider, or in NMR scanners, is an illustration of both the nested-doll and the system/anti-system inventive principles. Two solenoids of opposite currents are inserted in one another in such a way that all of the magnetic flux-return is between the solenoids and none is seen outside, reducing the need for magnetic shielding in the case of NMR, or reducing interference with the main solenoid of the detector in the case of a particle collider (figure 1c). Remarkably, the same combination of inventive principles can be seen in the technique of stimulated emission depletion microscopy (STED), which was rewarded with the 2014 Nobel Prize in Chemistry. The final focus system at a collider with non-local chromaticity correction is an illustration of the inventive principle of what is known as “beforehand cushioning”. And so on.

While many of the TRIZ inventive principles can be applied directly to problems in accelerator science, it is tempting to add accelerator-science-related parameters and inventive principles to TRIZ. The equations of Maxwell or of thermodynamics, where an integral on a surface is connected to the integral over volume, suggest an inventive principle of changing the volume-to-surface ratio of an object. Nature provides an illustration in a smart cat, stretched out under the sun or curled up in the cold, but flat colliding electron–positron beams or fibre lasers also illustrate the same principle. Another possible inventive principle for accelerator science is the use of non-damageable or already damaged materials: the laser wire for beam diagnostics, the mercury jet as a beam target, plasma acceleration, or a plasma mirror – the list of examples illustrating this inventive principle can be continued.

So the TRIZ method of inventiveness, although created originally for engineering, is universal and can also be applied to science. TRIZ methodology provides another way to look at the world; combined with science it creates a powerful and eye-opening amalgam of science and inventiveness. It is particularly helpful for building bridges of understanding between completely different scientific disciplines, and so is also naturally useful to educational and research organizations that endeavour to break barriers between disciplines.

However, experience shows that knowledge of TRIZ is nearly non-existent in the scientific departments of western universities. Moreover, it is not unusual to hear about unsuccessful attempts to introduce TRIZ into the graduate courses of universities’ science departments. Indeed, in many or most of these cases, the apparent reason for the failure is that the canonical version of TRIZ was introduced to science PhD students in the same way that TRIZ is taught to engineers in industrial companies. This may be a mistake, because science students are rightfully more critically minded and justifiably sceptical about overly prescriptive step-by-step methods. Indeed, a critically thinking scientist would immediately question the canonical number of 40 inventive principles, and note that identifying just a pair of contradicting parameters is a first-order approximation, and so on.

A more suitable approach to introduce TRIZ to graduate students, which takes into account the lessons learnt by its predecessors, could be different. Instead of teaching graduate students the ready-to-use methodology, it might be better to take them through the process of recreating parts of TRIZ by analysing various inventions and discoveries from scientific disciplines, showing that the TRIZ inventive principles can be efficiently applied to science. In the process, additional inventive principles that are more suitable for scientific disciplines could be found and added to standard TRIZ. In my recent textbook, I call this extension “Accelerating Science (AS) TRIZ”, where “accelerating” refers not to accelerators, but instead highlights that TRIZ can help to boost various areas of science.

Many of the examples of TRIZ-like inventions in science considered above have already been made, and I am being deliberately provocative in connecting them to TRIZ post factum. However, it is natural to wonder whether TRIZ and AS-TRIZ could actually help to inspire and create new scientific inventions and innovations, especially in regard to projects that continue to manifest many unsolved obstacles.

One example of such a project is the circular collider currently being considered as a successor to the LHC – the Future Circular Collider (FCC), a 100 km circumference machine (CERN Courier April 2014 p16). This project poses many scientific and technical challenges that need to be solved. Notably, the total energy in each circulating proton beam is expected to exceed 8 GJ, which is equivalent to the kinetic energy of an Airbus A380 flying at 720 km/h. Not only does such a beam need to be handled safely in the bending magnets, it also needs to be focused in the interaction region to a micrometre-sized spot – the equivalent, more or less, of having to pass through the eye of a needle.
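The comparison is easy to check. Assuming an in-flight mass of about 400 tonnes for the aircraft (an assumption made here for the arithmetic; the maximum take-off weight of an A380 is closer to 575 t), the kinetic energy at 720 km/h works out to the quoted 8 GJ:

```python
# Kinetic energy of an A380 at cruise-like speed, for comparison with the
# ~8 GJ stored in each FCC proton beam. The 400 t mass is an assumption.
mass_kg = 400e3               # assumed in-flight mass, kg
v = 720 / 3.6                 # 720 km/h converted to m/s (= 200 m/s)
kinetic_energy = 0.5 * mass_kg * v**2   # joules
print(f"{kinetic_energy / 1e9:.1f} GJ")
```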

It remains to be seen whether the methodology of TRIZ and AS-TRIZ can be applied to such a large-scale project as the FCC, which brings a whole array of new, difficult and exciting challenges to the table. Nonetheless, it is certainly a project that can only flourish with the application of knowledge and inventiveness.

RD51 and the rise of micro-pattern gas detectors

Résumé

RD51 and the rise of micro-pattern gas detectors

The RD51 collaboration was created at CERN in 2008, in response to the need to develop and use the innovative techniques of micro-pattern gaseous detectors (MPGDs). While many of these technologies were adopted before RD51 was created, other techniques have since emerged or become accessible, new detection concepts are being adopted, and current techniques are undergoing significant improvement. In parallel, the deployment of MPGDs in running experiments has grown considerably. Today, RD51 serves a broad user community, watching over the MPGD field and the commercial applications that may emerge.

Improvements in detector technology often come from capitalizing on industrial progress. Over the past two decades, advances in photolithography, microelectronics and printed circuits have opened the way for the production of micro-structured gas-amplification devices. By 2008, interest in the development and use of the novel micro-pattern gaseous detector (MPGD) technologies had led to the establishment at CERN of the RD51 collaboration. Originally created for a five-year term, RD51 was later prolonged for another five years beyond 2013. While many of the MPGD technologies were introduced before RD51 was founded (figure 1), more techniques have since become available or affordable, new detection concepts are still being introduced, and existing ones are being substantially improved.

In the late 1980s, the development of the micro-strip gas chamber (MSGC) created great interest because of its intrinsic rate-capability, which was orders of magnitude higher than in wire chambers, and its position resolution of a few tens of micrometres at particle fluxes exceeding about 1 MHz/mm2. Developed for projects at high-luminosity colliders, MSGCs promised to fill a gap between the high-performance but expensive solid-state detectors, and cheap but rate-limited traditional wire chambers. However, detailed studies of their long-term behaviour at high rates and in hadron beams revealed two possible weaknesses of the MSGC technology: the formation of deposits on the electrodes, affecting gain and performance (“ageing effects”), and spark-induced damage to electrodes in the presence of highly ionizing particles.

These initial ideas have since led to more robust MPGD structures, in general using modern photolithographic processes on thin insulating supports. In particular, ease of manufacturing, operational stability and superior performance for charged-particle tracking, muon detection and triggering have given rise to two main designs: the gas electron-multiplier (GEM) and the micro-mesh gaseous structure (Micromegas). By using a pitch size of a few hundred micrometres, both devices exhibit intrinsic high-rate capability (> 1 MHz/mm2), excellent spatial and multi-track resolution (around 30 μm and 500 μm, respectively), and time resolution for single photoelectrons in the sub-nanosecond range.

Coupling advances in the microelectronics industry with modern PCB technology has been important for the development of gas detectors with increasingly smaller pitch size. An elegant example is the use of a CMOS pixel ASIC, assembled directly below the GEM or Micromegas amplification structure. Modern “wafer post-processing technology” allows for the integration of a Micromegas grid directly on top of a Medipix or Timepix chip, thus forming the integrated read-out of a gaseous detector (InGrid). Using this approach, MPGD-based detectors can reach the level of integration, compactness and resolving power typical of solid-state pixel devices. For applications requiring imaging detectors with large-area coverage and moderate spatial resolution (e.g. ring-imaging Cherenkov (RICH) counters), coarser macro-patterned structures offer an interesting economic solution with relatively low mass and easy construction – thanks to the intrinsic robustness of the PCB electrodes. Such detectors include the thick GEM (THGEM), the large electron multiplier (LEM), the patterned resistive thick GEM (RETGEM) and the resistive-plate WELL (RPWELL).

RD51 and its working groups

The main objective of RD51 is to advance the technological development and application of MPGDs. While a number of activities related to the LHC upgrade have emerged, most importantly RD51 serves as an access point to MPGD “know-how” for the worldwide community – a platform for sharing information, results and experience – and optimizes the cost of R&D through the sharing of resources and the creation of common projects and infrastructure. All partners are already pursuing either basic- or application-oriented R&D involving MPGD concepts. Figure 1 shows the organization of the seven Working Groups (WG) that cover all of the relevant aspects of MPGD-related R&D.

WG1 Technological Aspects and Development of New Detector Structures. The objectives of WG1 are to improve the performance of existing detector structures, optimize fabrication methods, and develop new multiplier geometries and techniques. One of the most prominent activities is the development of large-area GEM, Micromegas and THGEM detectors. Only one decade ago, the largest MPGDs were around 40 × 40 cm2, limited by existing tools and materials. A big step towards the industrial manufacturing of MPGDs with a size around a square metre came with new fabrication methods – the single-mask GEM, “bulk” Micromegas, and the novel Micromegas construction scheme with a “floating mesh”. While in “bulk” Micromegas, the metallic mesh is integrated into the PCB read-out, in the “floating-mesh” scheme it is integrated in the panel containing drift electrodes and placed on pillars when the chamber is closed. The single-mask GEM technique overcomes the cumbersome practice of alignment of two masks between top and bottom films, which limits the achievable lateral size to 50 cm. This technology, together with the novel “self-stretching technique” for assembling GEMs without glue and spacers, simplifies the fabrication process to such an extent that, especially for large-volume production, the cost per unit area drops by orders of magnitude.

Another breakthrough came with the development of Micromegas with resistive electrodes for discharge mitigation. The resistive strips match the pattern of the read-out strips geometrically, but are electrically insulated from them. Large-area resistive electrodes to prevent sparks have been developed using two different techniques: screen printing and carbon sputtering. The technology of THGEM detectors is well established in small prototypes; the major challenge is the industrial production of high-quality large-size boards. A novel MPGD-based hybrid architecture, consisting of a double THGEM and Micromegas, has been developed for photon detection; the latter allows a significant reduction in the ion backflow to the photocathode. A spark-protected version of the THGEM (RETGEM), where the copper-clad conductive electrodes are replaced by resistive materials, and the RPWELL detector, consisting of a single-sided THGEM coupled to the read-out electrode through a sheet of large bulk resistivity, have also been manufactured and studied. To reduce the discharge probability, a micro-pixel gas chamber (μ-PIC) with resistive electrodes made of sputtered carbon has been developed; this technology is easily extendable for the production of large areas up to a few square metres.

To reduce costs, further work is needed for developing radiation-hard read-out and reinventing mainstream technologies under a new paradigm of integration of electronics and detectors, as well as integration of functionality, e.g. integrating read-out electronics directly into the MPGD structure. A breakthrough here is the development of a time-projection chamber (TPC) read-out with a total of 160 InGrid detectors, each 2 cm2, corresponding to 10.5 million pixels. Despite the enormous challenges, this has demonstrated for the first time the feasibility of extending the Timepix CMOS read-out of MPGDs to large areas.
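The pixel count quoted above follows directly from the Timepix chip geometry: each chip carries a 256 × 256 pixel matrix, so 160 chips give roughly 10.5 million channels. A quick check (the matrix size is the standard Timepix specification; the chip count is taken from the text):

```python
# Back-of-envelope check of the InGrid TPC read-out channel count.
# Timepix chips have a 256 x 256 pixel matrix (standard chip specification);
# the number of detectors (160) is taken from the text.
PIXELS_PER_SIDE = 256
N_CHIPS = 160

pixels_per_chip = PIXELS_PER_SIDE ** 2      # 65,536 pixels per Timepix chip
total_pixels = N_CHIPS * pixels_per_chip    # ~10.5 million channels in total

print(f"{total_pixels:,} pixels")  # 10,485,760
```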

WG2 Detector Physics and Performance. The goal of WG2 is to improve understanding of the basic physics phenomena in gases, to define common test standards, which allow comparison and eventually selection among different technologies for a particular application, and to study the main physics processes that limit MPGD performance, such as sparking, charging-up effects and ageing.

Primary ionization and electron multiplication in avalanches are statistical processes that set limits to the spatial, energy and timing resolution, and so affect the overall performance of a detector. Exploiting the ability of Micromegas and GEM detectors to measure both the position and arrival time of the charge deposited in the drift gap, a novel method – the μTPC – has been developed for the case of inclined tracks, allowing for a precise segment reconstruction using a single detection plane, and significantly improving spatial resolution (well below 100 μm, even at large track angles). Excellent energy resolution is routinely achieved with “microbulk” Micromegas and InGrid devices, differing only slightly from the accuracy obtained with gaseous scintillation proportional counters and limited by the Fano factor. Moreover, “microbulk” detectors have very low levels of intrinsic radioactivity. Other recent studies have revealed that Micromegas could act as a photodetector coupled to a Cherenkov-radiator front window, in a set-up that produces a sufficient number of UV photons to convert single-photoelectron time jitter of a few hundred picoseconds into an incident-particle timing response of the order of 50 ps.
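The μTPC method itself is conceptually simple: each fired strip contributes a point whose transverse coordinate is the strip position and whose drift coordinate is obtained from the signal arrival time, and a straight-line fit to these points yields the local track segment within a single detection plane. The sketch below is purely illustrative; the drift velocity, strip pitch and time jitter are assumed values, not parameters of any particular detector.

```python
import numpy as np

# Illustrative muTPC reconstruction: strip position + arrival time -> track segment.
# All numbers below are assumed for illustration (not from a real detector).
V_DRIFT = 0.047      # assumed drift velocity, mm/ns
PITCH = 0.4          # assumed strip pitch, mm

rng = np.random.default_rng(0)
strips = np.arange(12)                      # strips fired along the track
x = strips * PITCH                          # strip positions, mm
true_angle = np.deg2rad(30)                 # inclined track
t = x * np.tan(true_angle) / V_DRIFT        # ideal arrival times, ns
t += rng.normal(0, 2.0, size=t.size)        # assumed 2 ns time jitter

z = t * V_DRIFT                             # convert times back to drift distances
slope, intercept = np.polyfit(x, z, 1)      # fit the local track segment
angle = np.degrees(np.arctan(slope))        # reconstructed track inclination
print(f"reconstructed angle: {angle:.1f} deg")
```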

One of the central topics of WG2 is the development of effective protection against discharges in the presence of heavily ionizing particles. The limitation caused by occasional sparking is now being lifted by the use of resistive electrodes, but at the price of current-dependent charging-up effects that cause a reduction in gain. Systematic studies are needed to optimize the electrical and geometrical characteristics of resistive Micromegas in terms of the maximum particle rate. Recent ageing studies performed in view of the High-Luminosity LHC upgrades confirmed that the radiation hardness of MPGDs is comparable with solid-state sensors in harsh radiation environments. Nevertheless, it is important to develop and validate materials with resistance to ageing and radiation damage.

Many of the advances involve the use of new materials and concepts – for example, a GEM made out of crystallized glass, and a “glass piggyback” Micromegas that separates the Micromegas from the actual read-out by a ceramic layer, so that the signal is read by capacitive coupling and the read-out is immune to discharges. A completely new approach is the study of charge-transfer properties through graphene for applications in gaseous detectors.

Working at cryogenic temperatures – or even within the cryogenic liquid itself – requires optimization to achieve simultaneously high gas gain and long-term stability. Two ideas have been pursued for future large-scale noble-liquid detectors: dual-phase TPCs with cryogenic large-area gaseous photomultipliers (GPMs) and single-phase TPCs with MPGDs immersed in the noble liquid. Studies have demonstrated that the copious light yields in liquid xenon, and the resulting good energy resolution, are a result of electroluminescence occurring within xenon-gas bubbles trapped under the hole electrode.

WG3 Applications, Training and Dissemination. WG3 concentrates on the application of MPGDs and on how to optimize detectors for particularly demanding cases. Since the pioneering use of GEM and Micromegas by the COMPASS experiment at CERN – the first large-scale use of MPGDs in particle physics – they have spread to colliders. Their use in mega-projects at accelerators also plays an important role in engaging people with science and in gaining public recognition. During the past five years, there have been major developments of Micromegas and GEMs for various upgrades for ATLAS, CMS and ALICE at the LHC, as well as THGEMs for the upgrade of the COMPASS RICH. Although normally used as flat detectors, MPGDs can be bent to form cylindrically curved, ultralight tracking systems as used in inner-tracker and vertex applications. Examples are cylindrical GEMs for the KLOE2 experiment at the DAFNE e+e– collider and resistive Micromegas for CLAS12 at Jefferson Lab. MPGD technology can also fulfil the most stringent constraints imposed by future facilities, from the Facility for Antiproton and Ion Research to the International Linear Collider and Future Circular Collider.

MPGDs have also found numerous applications in other fields of fundamental research. They are being used or considered, for example, for X-ray and neutron imaging, neutrino–nucleus scattering experiments, dark-matter and astrophysics experiments, plasma diagnostics, material sciences, radioactive-waste monitoring and security applications, medical physics and hadron therapy.

To help in further disseminating MPGD applications beyond fundamental physics, academia–industry matching events were introduced when the continuation of RD51 was discussed in 2013. Since then, three events have been organized by RD51 in collaboration with the HEPTech network (CERN Courier April 2015 p17), covering MPGD applications in neutron and photon detection. The events provided a platform where academic institutions, potential users and industry could meet to foster collaboration with people interested in MPGD technology. In the case of neutron detection, there is tangible mutual interest between the high-energy physics and neutron-scattering communities in advancing the technology of MPGDs; GEM-based solutions for thermal-neutron detection at spallation sources, novel high-resolution neutron devices for macromolecular crystallography, and fast-neutron MPGD detectors in fusion research represent a new frontier for future developments.

WG4 Modelling of Physics Processes and Software Tools. Fast and accurate simulation has become increasingly important as the complexity of instrumentation has increased. RD51’s activity on software tools and the modelling of physics processes that make MPGDs function provides an entry point for institutes that have a strong theoretical background, but do not yet have the facilities to do experimental work. One example is the development of a nearly exact boundary-element solver, which is in most aspects superior to the finite-element method for gas-detector simulations. Another example is the dedicated measurement campaign and data analysis programme that was undertaken to understand avalanche statistics and determine the Penning transfer-rates in numerous gas mixtures.

The main difference between traditional wire-based devices and MPGDs is that the electrode size of order 10 μm in MPGDs is comparable to the collision mean free path. Microscopic tracking algorithms (Garfield++) developed within WG4 have shed light on the effects of surface and space charge in GEMs, as well as on the transparency of meshes in Micromegas. The microscopic tracking technique has also led to better understanding of the avalanche-size statistics, clarifying in particular why light noble gases perform better than heavier noble gases. Significant effort has also been devoted to modelling the performance of MPGDs for particular applications – for example, studies of electron losses in Micromegas with different mesh specifications, and of GEM electron transparency, charging-up and ion-backflow processes, for the ATLAS and ALICE upgrades.
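Avalanche-size fluctuations of the kind studied here are commonly modelled with a Polya (gamma) distribution, whose shape parameter θ sets the relative gain variance f = 1/(1 + θ); better-performing gases correspond to larger θ and hence smaller fluctuations. A minimal Monte Carlo sketch (the θ values are illustrative, not measured numbers):

```python
import numpy as np

# Polya-distributed single-electron avalanche gains:
# G / Gbar ~ Gamma(shape = theta + 1, scale = 1 / (theta + 1)),
# so the relative variance of the gain is f = 1 / (1 + theta).
def sample_gains(mean_gain, theta, n, seed=0):
    rng = np.random.default_rng(seed)
    return mean_gain * rng.gamma(shape=theta + 1.0, scale=1.0 / (theta + 1.0), size=n)

for theta in (0.5, 2.0):                    # illustrative shape parameters
    g = sample_gains(1e4, theta, 200_000)
    rel_var = g.var() / g.mean() ** 2       # should approach 1 / (1 + theta)
    print(f"theta={theta}: relative variance ~ {rel_var:.3f} "
          f"(expected {1.0 / (1.0 + theta):.3f})")
```

Larger θ narrows the gain distribution, which is one way of phrasing why some gas mixtures give better energy resolution than others.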

WG5 MPGD-Related Electronics. Initiated in WG5 in 2009 as a basic multichannel read-out system for MPGDs, the scalable read-out system (SRS) electronics has evolved into a popular RD51 standard for MPGDs. Many groups contribute to SRS hardware, firmware, software and applications, and the system has already extended beyond RD51. SRS is generally considered to be an “easy-to-use” portable system from detector to data analysis, with read-out software that can be installed on a laptop for small laboratory set-ups. Its scalability principle allows systems of 100,000 channels and more to be built through the simple addition of more electronic SRS slices, and operated at very high bandwidth using the online software of the LHC experiments. The front-end adapter concept of SRS represents another degree of freedom, because essentially any sensor technology implemented in multi-channel ASICs may be used. So far, five different ASICs have been implemented on SRS hybrids as plug-ins for MPGDs: APV25, VFAT, Beetle, VMM2 and Timepix.

The number of SRS systems deployed is now nearing 100, with more than 300,000 APV channels, corresponding to a total volume of SRS sales of around CHF1 million. SRS has been ported for the read-out of photon detectors and tracking detectors, and is being used in several of the upgrades for ALICE, ATLAS, CMS and TOTEM at the LHC. Meanwhile, CERN’s Technology Transfer group has granted SRS reproduction licences to several companies. Since 2013, SRS has been re-designed according to the ATCA industry standard, which allows for much higher channel density and output bandwidth.

WG6 Production and Industrialization. A key point that must be solved in WG6 to advance cost-effective MPGDs is the manufacturing of large-size detectors and their production by industrial processes. The CERN PCB workshop is a unique MPGD production facility, where generic R&D, detector-component production and quality control take place. Today, GEM and Micromegas detectors can reach areas of 1 m2 in a single unit and nearly 2 m2 by patching some elements inside the detectors. Thanks to the completion of the upgrade to its infrastructure in 2012, CERN is still leading in the MPGD domain in terms of maximum detector size; however, more than 10 companies are already producing detector parts of reasonable size. WG6 serves as a reference point for companies interested in MPGD manufacturing and helps them to reach the required level of competence. Contacts with some have strengthened to the extent that they have signed licence agreements and engaged in a technology-transfer programme co-ordinated within WG6. As an example, the ATLAS New Small Wheel (NSW) upgrade will be the first large high-granularity MPGD system to be mass-produced in industry, with a detecting area of around 1300 m2 divided into 2 m × 0.5 m detectors.

WG7 Common Test Facilities. The development of robust and efficient MPGDs entails understanding of their performance and implies a significant investment for laboratory measurements and detector test-beam activities to study prototypes and qualify final designs. Maintenance of the RD51 lab at CERN and test-beam facilities plays a key role among the objectives of WG7. A semi-permanent common test-beam infrastructure has been installed at the H4 test-beam area at CERN’s Super Proton Synchrotron for the needs of the RD51 community. It includes three high-precision beam telescopes made of Micromegas and GEM detectors, data acquisition, services, and gas-distribution systems. One advantage of the H4 area is the “Goliath” magnet (around 1.5 T over a large area), allowing tests of MPGDs in a magnetic field. RD51 users can also use the instrumentation, services and infrastructures of the Gas Detector Development (GDD) laboratory at CERN, and clean rooms are accessible for assembly, modification and inspection of detectors. More than 30 groups use the general RD51 infrastructure every year as a part of the WG7 activities; three annual test-beam campaigns attract on average three to seven RD51 groups at a time, working in parallel.

The RD51 collaboration also advances the MPGD domain with scientific, technological and educational initiatives. Thanks to RD51’s interdisciplinary and inter-institutional co-operation, the University Antonio Nariño in Bogota has built a detector laboratory where doctoral students and researchers are trained in the science and technology of MPGDs. With this new infrastructure and international support, the university is leveraging co-operation with other Latin American institutes to build a critical mass around MPGDs in this part of the world.

Given the ever-growing interest in MPGDs, RD51 re-established an international conference series on the detectors. The first meeting in the new series took place in Crete in 2009, followed by Kobe in 2011 and Zaragoza in 2013 (CERN Courier November 2013 p33). This year, the collaboration is looking forward to holding the fourth MPGD conference in Trieste, on 12–15 October.

The vitality of the MPGD community resides in the relatively large number of young scientists, so educational events constitute an important activity. A series of specialized schools, comprising lectures and hands-on training for students, engineers and physicists from RD51 institutes, has been organized at CERN covering the assembly of MPGDs (2009), software and simulation tools (2011), and electronics (2014). This is particularly important for young people who are seeking meaningful and rewarding work in research and industry. Last year, RD51 co-organized the MPGD lecture series and the IWAD conference in Kolkata, the Danube School on Instrumentation in Novi Sad, and the special “Charpak Event” in Lviv, organized in the context of CERN’s 60th-anniversary programme “60 Years of Science for Peace” (CERN Courier November 2014 p38). The latter took place at a particularly fragile time for Ukraine, and aimed to enhance the role of science diplomacy in tackling global challenges through the development of novel technologies.

In conclusion

During the past 10 years, the deployment of MPGDs in operational experiments has increased enormously, and RD51 now serves a broad user community, driving the MPGD domain and any potential commercial applications that may arise. Because of a growing interest in the benefits of MPGDs in many fields of research, technologies are being optimized for a broad range of applications, demonstrating the capabilities of this class of detector. Today, RD51 is continuing to grow, and now has more than 90 institutes and 450 participants from more than 30 countries in Europe, America, Asia and Africa. Last year, six new institutes from Spain, Croatia, Brazil, Korea, Japan and India joined the collaboration, further enhancing the geographical diversity and expertise of the MPGD community. Since its foundation, RD51 has transformed what were once isolated developers into a worldwide MPGD network, as illustrated by collaboration-spotting software (figure 2, p29). Many opportunities are still to be exploited, and RD51 will remain committed to the quest to help shape the future of MPGD technologies and pave the way for novel applications.

• For more information about RD51, visit http://rd51-public.web.cern.ch/RD51-Public/.

Vienna hosts a high-energy particle waltz

The first results at a new high-energy frontier in particle physics were a major highlight for the 2015 edition of the European Physical Society Conference on High Energy Physics (EPS-HEP). The biennial conference took place at the University of Vienna on 22–29 July, only weeks after data taking at the LHC at CERN had started at the record centre-of-mass energy of 13 TeV. In addition to the hot news from the LHC, the 723 participants from all over the world were also able to share a variety of exciting news in different areas of particle and astroparticle physics, presented in 425 parallel talks, 194 posters and 41 plenary talks. The following report focuses on a few selected highlights, including the education and outreach session – a “first” for EPS-HEP conferences (see box below).

After more than two years of intense work during the first long shutdown, the LHC and the experiments have begun running again, ready to venture into unexplored territories and perhaps observe physics beyond the Standard Model, following the discovery of the Higgs boson in 2012. Both the accelerator teams and the LHC experimental collaborations made a huge effort to provide collisions and to gather physics data in time for EPS-HEP 2015. By mid-July, the experiments had already recorded 100 times more data than they had at around the same time after the LHC had started up at 7 TeV in 2010, and the collaborations had worked hard to be able to present the first results from the 2015 data.

Talks at the conference provided detailed information about the operation of the accelerator and expectations for the near and distant future. The ATLAS, CMS and LHCb collaborations all presented results at 13 TeV for the first time (CERN Courier September 2015 pp8–11). Measurements of the charged-particle production rate as a function of rapidity provide a first possibility to test hadronic physics models in the new energy region. Several known resonances, such as the J/ψ and the Z and W bosons, have been rediscovered at these higher energies, and the cross-section for top–antitop production has been measured and found to be consistent with the predictions of the Standard Model. The first searches for new phenomena have also been performed, but unfortunately with no sign of unexpected behaviour. In all, the early results presented at the conference were very encouraging and everyone is looking forward to more data being delivered and analysed.

At the same time, the LHC collaborations have continued to extract interesting new physics from the collider’s first long run. According to the confinement paradigm of quantum chromodynamics, the gauge theory of strong interactions, only bound states of quarks and gluons that transform trivially under the local symmetries of the theory are allowed to exist in nature. This forbids free quarks and gluons, but allows bound states composed of two, three, four, five or more quarks and antiquarks, providing no reason why such multiquark states cannot exist. While quark–antiquark and three-quark bound states have been known since the first formulation of the basic theory some 40 years ago, it is only a year or so since unambiguous evidence for tetraquark states was first presented. Now, at EPS-HEP 2015, the LHCb collaboration reported on the observation of exotic resonances in the decay products of the Λb, which could be interpreted as charmonium pentaquarks. The best fit to the findings requires two pentaquark states with spin-parity JP = 3/2– and JP = 5/2+, although other assignments, and even a fit in terms of a single pentaquark, are also possible (CERN Courier September 2015 p5).

The study of semileptonic decays of B mesons with τ leptons in the final state offers the possibility of revealing hints of “new physics” sensitive to non-Standard-Model particles that preferentially couple to third-generation fermions. The BaBar experiment at SLAC, the Belle experiment at KEK and the LHCb experiment at CERN have all observed an excess of events in the B-meson decays B → D τ–ντ and B → D* τ–ντ. Averaging over the results of the three experiments, the discrepancy with respect to Standard Model expectations amounts to some 3.9σ.
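These measurements are conventionally expressed as ratios of branching fractions, in which many experimental and theoretical uncertainties cancel; schematically,

```latex
R(D^{(*)}) \;=\; \frac{\mathcal{B}\!\left(B \to D^{(*)}\,\tau\,\nu_\tau\right)}
                      {\mathcal{B}\!\left(B \to D^{(*)}\,\ell\,\nu_\ell\right)},
\qquad \ell = e,\ \mu,
```

and it is the combined deviation of the measured R(D) and R(D*) from their Standard Model predictions that amounts to the quoted 3.9σ.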

Nonzero neutrino masses and associated phenomena such as neutrino oscillations belong to what is currently the least well-understood sector of the Standard Model. The Tokai-to-Kamioka (T2K) experiment, using a νμ beam generated at the Japan Proton Accelerator Research Complex (J-PARC), situated approximately 300 km east of the Super-Kamiokande detector, was the first to observe νμ to νe oscillations. It has also made a precise measurement of the angle θ23 in the Pontecorvo–Maki–Nakagawa–Sakata neutrino-mixing matrix, the leptonic counterpart of the Cabibbo–Kobayashi–Maskawa (CKM) quark-mixing matrix. However, as this value is practically independent of the relative magnitudes of the neutrino masses, it does not enable the different scenarios for the neutrino-mass hierarchy to be distinguished. A comparison of neutrino oscillations with those of antineutrinos might provide clues to the still-unsolved puzzle of charge-parity violation. In this context, T2K presented an update of its earlier νμ-disappearance results, together with three candidate events for νe appearance.
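In the two-flavour approximation that dominates the T2K disappearance channel, the muon-neutrino survival probability depends on θ23 through the standard oscillation formula

```latex
P(\nu_\mu \to \nu_\mu) \;\approx\; 1 \;-\; \sin^2 2\theta_{23}\,
\sin^2\!\left( \frac{1.27\,\Delta m^2_{32}\,[\mathrm{eV}^2]\; L\,[\mathrm{km}]}{E\,[\mathrm{GeV}]} \right),
```

which makes clear why the measured survival probability constrains θ23 and the magnitude of Δm²32, but carries no information about the sign of Δm²32, i.e. the mass hierarchy.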

At the flavour frontier, the LHCb collaboration reported a new exclusive measurement of the magnitude of the CKM matrix element |Vub|, while Belle revisited the CKM magnitude |Vcb|. In the case of |Vub|, based on Λb decays, there remains a tension between the values distilled from exclusive and inclusive decay channels that is still not understood. For |Vcb|, Belle presented an updated exclusive measurement that is, for the first time, completely consistent with the inclusive measurement of the same parameter.

Weak gravitational lensing provides a means to estimate the distribution of dark matter in the universe. By looking at more than a million source galaxies at a mean co-moving distance of 2.9 Gpc (about nine thousand million light-years), the Dark Energy Survey collaboration has produced an impressive map of both luminous and dark matter, exhibiting potential candidates for superclusters and (super)voids. The mass distribution deduced from this map correlates nicely with the “known”, that is, optically detected, galaxy clusters in the foreground.
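The distance conversion quoted in parentheses is easy to verify: one parsec is about 3.2616 light-years, so 2.9 Gpc works out to roughly nine and a half thousand million light-years.

```python
# Check the comoving-distance conversion quoted in the text.
LY_PER_PC = 3.2616          # light-years per parsec (standard value)

d_gpc = 2.9                 # mean comoving distance of the source galaxies, Gpc
d_gly = d_gpc * LY_PER_PC   # distance in thousand million (1e9) light-years

print(f"{d_gly:.1f} thousand million light-years")  # ~9.5
```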

More than a year ago, the BICEP2 collaboration caused some disturbance in the scientific community by claiming to have observed the imprint of primordial gravitational waves, generated during inflation, in the B-mode polarization spectrum of the cosmic-microwave background. Since then, the Planck collaboration has collected strong evidence that, upon subtraction of the impact of foreground dust, the BICEP2 data can be explained by a “boring ordinary” cosmic-microwave background (CERN Courier November 2014 p15).

Following the parallel sessions that formed the first part of the conference, Saturday afternoon was devoted to the traditional special joint session with the European Committee for Future Accelerators (ECFA). The comprehensive title for this year was “Connecting Scales: Bridging the Infinities”, with an emphasis on particle-physics topics that influence the evolution of the universe. This joint EPS-HEP/ECFA session, which was well attended, gave the audience a unique occasion to profit from broad overviews in various fields.

Prizes and more

As is traditional, the award of the latest prizes of the EPS High Energy and Particle Physics Division started the second half of the conference, which is devoted to the plenary sessions. The 2015 High Energy and Particle Physics Prize was awarded to James Bjorken “for his prediction of scaling behaviour in the structure of the proton that led to a new understanding of the strong interaction”, and to Guido Altarelli, Yuri Dokshitzer, Lev Lipatov and Giorgio Parisi “for developing a probabilistic field theory framework for the dynamics of quarks and gluons, enabling a quantitative understanding of high-energy collisions involving hadrons”. The 2015 Giuseppe and Vanna Cocconi Prize was awarded to Francis Halzen “for his visionary and leading role in the detection of very-high-energy extraterrestrial neutrinos, opening a new observational window on the universe”. The Gribov Medal, Young Experimental Physicist Prize, and Outreach Prize for 2015 were also presented to their recipients, respectively, Pedro Vieira, Jan Fiete Grosse-Oetringhaus and Giovanni Petrucciani, and Kate Shaw (CERN Courier June 2015 p27).

An integral part of every conference is the social programme, which offers the local organizers the opportunity to present impressions of the city and the country where the conference is being held. Vienna is well known for classical music, and on this occasion the orchestra of the Vienna University of Technology performed Beethoven’s 7th symphony at the location where it was first performed – the Festival Hall of the Austrian Academy of Sciences. The participants were also invited by the mayor of the city of Vienna to a “Heurigen” – an Austrian wine tavern where the most recent vintage is served, combined with local food. A play called Curie_Meitner_Lamarr_indivisible presented three outstanding women pioneers of science and technology, all of whom had a connection to Vienna. A dinner in the orangery of the Schönbrunn Palace, the former imperial summer residence, provided a fitting conclusion to the social programme of this important conference for particle physics.

• EPS-HEP 2015 was jointly organized by the High Energy and Particle Physics Division of the European Physical Society, the Institute of High Energy Physics of the Austrian Academy of Sciences, the University of Vienna, the Vienna University of Technology, and the Stefan-Meyer Institute of the Austrian Academy of Sciences. For more details and the full programme, visit http://eps-hep2015.eu.

All about communication

The EPS-HEP 2015 conference made several innovations to communicate not only to the participants and particle physicists elsewhere, but also to a wider general public.

Each morning the participants were welcomed with a small newsletter containing information for the day. During the first part of the conference with only parallel sessions, the newsletter summarized the topics of all of the sessions, highlighting expected new results. The idea was to give the participants a glimpse of the topics being discussed at the parallel sessions they could not attend. For the second part of the conference with plenary presentations only, the daily newsletter also contained interviews that looked behind the scenes. The conference was accompanied online in social media, with tweets, Facebook entries and blogs highlighting selected scientific topics and social events. The tweets, in particular, attracted a large audience of people who were not able to attend the conference.

During the first week, a dedicated parallel session on education and outreach took place – the first ever at an EPS-HEP conference. The number of abstracts submitted for the session was remarkable, clearly indicating the need for exchange and discussions on this topic. The conveners chose a slightly different format from the standard parallel sessions, so that besides oral presentations on specific topics, a lively panel discussion with various contributions from the audience also took place. The session concluded with a “Science Slam” – a format in which scientists give short talks explaining the focus of their research in lively terms for the public. Extending the scope of the EPS-HEP conference towards topics concerned with education and outreach was clearly an important strength of this year’s edition.

In addition, a rich outreach programme formed an important part of the conference in Vienna; from the start, everyone involved in planning had a strong desire to take the scientific questions of the conference outside of the particle-physics community. One highlight of the programme was the public screening of the movie Particle Fever, followed by a discussion with Fabiola Gianotti, who will be the next director-general of CERN, and the producer of the movie, David Kaplan. Visual arts have become another important way to bring the general public in touch with particle physics, and several exhibitions, reflecting different aspects of particle physics from an artistic point of view, took place during the conference.

Pakistan: fulfilling Salam’s wish

In September 1954, the European Organization for Nuclear Research – CERN – officially came into existence. This was just nine years after the Second World War, when Europe was completely divided and torn apart. Founders of CERN hoped that “it would play a fundamental role in rebuilding European physics to its former grandeur, reverse the brain drain of the brightest and best to the US, and continue and consolidate post-war European integration”. Today, as one of the outstanding high-energy physics laboratories in the world, CERN has not only more than fulfilled the goals of its founders, but is also a laboratory for thousands of physicists and engineers from all over the world.

CERN is a fine example of how high technology and science reinforce each other and foster international collaboration. Exploration of the unknown is the hallmark of fundamental research. On one hand, it requires cutting-edge technology, such as the detectors of the LHC, the world’s largest accelerator. On the other hand, it necessitates new concepts in computer software for the storage and analysis of the enormous amount of data generated by the LHC’s experiments.

On 31 July, Pakistan officially became an associate member of CERN. There is one respect in which CERN has a very special relationship with Pakistan. Experiments done at CERN in 1973 provided the first and crucial verification of one of the predictions of electroweak unification theory proposed by Sheldon Glashow, Abdus Salam and Steven Weinberg, which resulted in the award of the 1979 Nobel Prize in Physics to these three physicists. In a speech made by Salam on 11 May 1983 in Bahrain, he said: “We forget that an accelerator like the one at CERN develops sophisticated modern technology at its furthest limit. I am not advocating that we should build a CERN for Islamic countries. However, I cannot but feel envious that a relatively poor country like Greece has joined CERN, paying a subscription according to the standard GNP formula. I cannot rejoice that Turkey, or the Gulf countries, or Iran or Pakistan seem to show no ambition to join this fount of science and get their people catapulted into the forefront of the latest technological expertise. Working with CERN’s accelerators brings at the least this reward to a nation, as Greece has had the perception to realize.” Salam’s wish has now been fulfilled.

Pakistan has had an established linkage with CERN for more than two decades. The CERN–Pakistan co-operation agreement was signed in 1994. In 1997, the Pakistan Atomic Energy Commission signed an agreement for an in-kind contribution worth $0.5 million for the construction of eight magnetic supports for the CMS detector. This was followed by another agreement in 2000, under which Pakistan assumed responsibility for the construction of part of the CMS muon system, increasing Pakistan’s contribution to $1.8 million. Through the same agreement, the National Centre for Physics (NCP) became a full member of the CMS collaboration. In 2004, the NCP established a Tier-2 node in the Worldwide LHC Computing Grid, the first in south-east Asia.

Since then, there has been no looking back. Pakistan has contributed to all four of the big experiments at the LHC, as well as to the consolidation of the LHC accelerator itself. Above all, Pakistani physicists and hardware built in Pakistan for the CMS detector played an important role in the discovery of the Higgs boson in 2012, the last missing piece of the Glashow–Salam–Weinberg model.

Pakistan’s collaboration with CERN has already brought numerous benefits: manufacturing jobs in engineering, benefiting Pakistani industry; engineers learning new techniques in design and quality assurance, which in turn improves the quality of engineering in Pakistan; a unique opportunity for interfacing among multidisciplinary groups in academia and industry working at CERN; and the experience of working in an international environment with people from diverse backgrounds, which has advantages of its own.

It is hoped that CERN has also benefited from the expertise brought by Pakistani scientists, students, engineers and technicians, saving time and money. It has certainly been satisfying for Pakistan to contribute in a small way to this great enterprise.

We also plan to get involved in CERN’s future research and development projects. There is keen interest in the Pakistani physics community in participating in R&D for future accelerators, in particular a future linear collider. Discussions are already under way to identify where we can contribute meaningfully, keeping in mind our resources and other limitations.

In this new phase of Pakistan–CERN co-operation, which started on 19 December 2014 with the signing of the document for associate membership (CERN Courier January/February 2015 p6), the emphasis will shift to finding work opportunities at CERN for young Pakistani scientists and engineers, as well as to their training at CERN. It will also be an opportunity for Pakistan to become more deeply involved in fundamental research in physics. For this purpose, we would involve our graduate students in work with physics groups at CERN as part of their PhD studies, giving them the chance to contribute to knowledge at the very frontiers of physics.

Particle Accelerators: From Big Bang Physics to Hadron Therapy

By Ugo Amaldi
Springer
Paperback: £19.99 €36.01 $34.99
E-book: £14.99 €29.74 $19.99
Also available at the CERN bookshop


There was a time when books on particle physics for the non-expert were a rarity; not quite as rare as Higgs bosons, but certainly as rare as heavy quarks. Then, rather as the “November revolution” of 1974 ushered in the new era of charm, beauty and top, so the construction of the LHC became the harbinger of a wealth of “popular” books on particle physics, and the quest to find the final piece of the Standard Model and what lies beyond. These books can be excellent in what they set out to do, but few venture where Ugo Amaldi goes – to look at the basic tools that have made this whole adventure possible, and in particular, the accelerators and their builders. Without the cyclotron and its descendants, there would be no Standard Model, no CERN, no LHC. Nor would there be the applications, particularly in medicine, which Amaldi himself has done so much to bring about.

As the son of Edoardo Amaldi, one of CERN’s “founding fathers”, Ugo Amaldi must have the history of particle physics in his bones, and he writes with feeling about the development of particle accelerators, introducing each chapter with personal touches – photos of roads at CERN named after important protagonists, anecdotes of his personal experience, quotes from people he admires. There is a passion here that makes the book interesting even for those who already know the basic story. Indeed, while particle physicists may not be the main audience the author had in mind, they can still learn from many chapters, “speed-reading” the parts they are familiar with, then dwelling on some of the historical gems – such as the rather sad story of the co-inventor of strong focusing, Nick Christofilos, about whom I had previously known little beyond his being Greek and a lift engineer.

For the non-expert, the book has much to absorb, the result of containing quite a thorough mini-introduction to the Standard Model and beyond – the author’s inner particle physicist could clearly not resist. Yet it is worth persevering and reaching the chapters on “accelerators that care”, to use Amaldi’s phrase, to discover the medical applications of the 21st century.

So, this is a book for everyone, and in particular, I believe, for young people. Books like this inspired my studies, and I would like to think that Amaldi will inspire others with his passion for physics.

High Gradient Accelerating Structure

By W Gai (ed.)
World Scientific
Hardback: £65
E-book: £49


This proceedings volume, from the symposium held in honour of Juwen Wang’s 70th birthday, is dedicated to his many important achievements in the field of accelerator physics. Wang has been a key member of SLAC for many years, working on accelerating structures for linear colliders, up to and including the CLIC project at CERN, as well as the Linac Coherent Light Source at SLAC. The book includes discussions of recent advances and challenging problems by experts in the field of high-gradient accelerating structures.

International Seminars on Nuclear War and Planetary Emergencies 46th Session: The Role of Science in the Third Millennium

By A Zichichi and R Ragani (eds)
World Scientific
Hardback: £98
E-book: £74


The 46th Session of the International Seminars on Nuclear War and Planetary Emergencies, held in Erice, Sicily, in 2013, again gathered more than 100 scientists from 43 countries. This volume is the latest output from an interdisciplinary effort, now 32 years old, to examine and analyse planetary problems, which are followed up throughout the year by the World Federation of Scientists’ Permanent Monitoring Panels.

Nuclear Radiation Interactions

By Sidney Yip
World Scientific
Hardback: £49


Based on a first-year graduate-level course that the author taught in the Department of Nuclear Science and Engineering at MIT, this book differs from traditional nuclear-physics texts for a nuclear-engineering curriculum by emphasizing the understanding of nuclear radiations and their interactions with matter. In generating nuclear radiations and using them for beneficial purposes, scientists and engineers must understand the properties of the radiations and how they interact with their surroundings. Hence, radiation interaction is the essence of this book.
