In an improved analysis of 8 TeV collision events at the LHC, the CMS experiment has made the first observation of the production of a top quark–antiquark pair together with a Z boson, ttZ, as well as the most precise cross-section measurements of ttZ and ttW to date.
Since the top quark was discovered 20 years ago, its mass, width and other properties have been measured with great precision. However, only recently have experiments been able to study directly the top quark’s interactions with the electroweak bosons. Its coupling to the W boson has been tightly constrained using single-top events in proton–antiproton collisions at Fermilab’s Tevatron and proton–proton collisions at the LHC. Direct measurements of the top quark’s couplings to the photon (γ) and the Z or Higgs boson are currently most feasible in LHC collisions that produce a tt pair and a coupled boson: ttγ, ttZ and ttH. However, studying these processes (and the related ttW) is challenging because their expected production rates are hundreds of times smaller than the tt cross-section.
The CMS and ATLAS experiments at CERN have previously observed ttγ, found evidence for ttZ, and conducted searches for ttW and ttH in 7 and 8 TeV proton–proton collisions. Deviations from the predicted cross-sections could hint at non-Standard-Model physics such as anomalous top-quark–boson couplings or new particles decaying into multiple charged leptons and bottom quarks.
The ttW and ttZ processes both produce two b quarks, and are most easily distinguished from the tt, WZ and ZZ backgrounds when they produce two to four charged leptons and up to four additional quarks. However, signal events can be identified even more precisely when the reconstructed leptons and quarks are matched to particular top, W or Z decays. Leptons of the same flavour and opposite charge, with an invariant mass near 91 GeV, are assigned to Z decays. The remaining leptons and quarks are compared with top and W decays using the charge and b-quark identification of single objects, together with the combined mass of multiple objects. Every possible permutation of objects matched to decays is tested, and the best matching is taken as the reconstruction of the entire ttW or ttZ event. Background events, with fewer top quarks or W or Z bosons, are typically worse matches to ttW and ttZ than signal events.
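The Z-assignment step can be pictured with a minimal sketch in Python that simply picks the opposite-sign, same-flavour lepton pair whose invariant mass lies closest to 91 GeV. The event structure and the 10 GeV mass window are illustrative assumptions, not the actual CMS selection.

```python
# Minimal sketch of the Z-candidate assignment: choose the opposite-sign,
# same-flavour lepton pair with invariant mass closest to the Z mass.
# The lepton representation and the mass window are assumptions for illustration.
import math
from itertools import combinations

Z_MASS = 91.19  # GeV

def inv_mass(p1, p2):
    """Invariant mass of two four-vectors (E, px, py, pz), in GeV."""
    e, px, py, pz = (p1[i] + p2[i] for i in range(4))
    return math.sqrt(max(e * e - px * px - py * py - pz * pz, 0.0))

def best_z_candidate(leptons, window=10.0):
    """leptons: list of dicts with 'flavour', 'charge' and 'p4' = (E, px, py, pz)."""
    best, best_diff = None, window
    for l1, l2 in combinations(leptons, 2):
        if l1['flavour'] != l2['flavour'] or l1['charge'] + l2['charge'] != 0:
            continue
        diff = abs(inv_mass(l1['p4'], l2['p4']) - Z_MASS)
        if diff < best_diff:
            best, best_diff = (l1, l2), diff
    return best  # None if no pair falls within the mass window
```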
The figure shows the best match score in events with three charged leptons and four reconstructed quarks in data, along with estimates of ttZ, WZ and tt, as well as tt and single Z with a non-prompt lepton from quark decay. The hatched area indicates the 68% uncertainty in the signal-plus-background prediction. The matching scores are combined with quark and lepton momenta and other distinguishing variables in so-called boosted decision trees (BDTs), which separate signal from background events. The BDTs are used to compare data events with signal and background models, and so to estimate the number of signal events contained in the data. This estimate makes it possible to measure the cross-sections.
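As a rough illustration of the BDT step, the toy sketch below trains a gradient-boosted classifier on a few invented discriminating variables and compares its scores for signal-like and background-like samples. The variables, the samples and the choice of library (scikit-learn) are assumptions made for illustration and do not reflect the CMS analysis framework.

```python
# Toy sketch of a BDT separating signal from background.
# The three input columns could stand for a match score, a lepton pT and a
# jet multiplicity; all numbers here are invented for illustration.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
signal     = rng.normal(loc=[1.0, 80.0, 4.0], scale=[0.3, 20.0, 1.0], size=(5000, 3))
background = rng.normal(loc=[0.2, 50.0, 3.0], scale=[0.4, 20.0, 1.0], size=(5000, 3))

X = np.vstack([signal, background])
y = np.concatenate([np.ones(len(signal)), np.zeros(len(background))])

bdt = GradientBoostingClassifier(n_estimators=200, max_depth=3, learning_rate=0.1)
bdt.fit(X, y)

# In the analysis, the distribution of such a score in data is compared with
# signal-plus-background templates to extract the signal yield.
scores = bdt.predict_proba(X)[:, 1]
print("mean score (signal, background):",
      scores[y == 1].mean(), scores[y == 0].mean())
```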
The ttW cross-section is measured in events with two same-charge leptons or three leptons, and is found to be 382 +117/−102 fb, somewhat larger than the 203 +20/−22 fb predicted by the Standard Model. This higher-than-expected value is driven by an excess of signal-like data events with two same-charge leptons. The data overall exclude the zero-signal hypothesis with a significance of 4.8σ. Events with two opposite-charge leptons, three leptons, or four leptons are used in the ttZ search. The measured ttZ cross-section is 242 +65/−55 fb, quite close to the Standard Model prediction of 206 +19/−24 fb. The zero-signal hypothesis is rejected with a significance of 6.4σ, making this measurement the first observation of the ttZ process.
The measured cross-sections are also used to place the most stringent limits to date on models of new physics employing any of four different dimension-six operators, which would affect the rates of ttW or ttZ production. Further studies in 13 TeV collisions should provide an even more detailed picture of these interesting processes and may reveal the first hints of new physics at the LHC.
After demonstrating a good understanding of the detector and observing most of the Standard Model particles using the first data of LHC Run 2 collected in July (CERN Courier September 2015 p8), the ATLAS collaboration is now stepping into the unknown, open to the possibility that dimensions beyond the familiar four could make themselves known through the appearance of microscopic black holes.
Relative to the other fundamental forces, gravity is weak. In particular, why is the natural energy scale of quantum gravity, the Planck mass MPl, roughly 17 orders of magnitude larger than the scale of electroweak interactions? One exciting solution to this so-called hierarchy problem exists in “brane” models, where the particles of the Standard Model are mainly confined to a three-plus-one-dimensional brane and gravity acts in the full space of the “bulk”. Because gravity escapes into the hypothesized extra dimensions, it “appears” weak in the known four-dimensional world.
With enough large additional dimensions, the effective Planck mass, MD, is reduced to a scale where quantum-gravitational effects become important within the energy range of the LHC. Theory suggests that microscopic black holes will form more readily in such a higher-dimensional universe. With the increase of the centre-of-mass energy to 13 TeV at the start of Run 2, the early collisions could already produce signs of these systems.
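Schematically, in an ADD-type scenario with n flat extra dimensions of size R – an illustrative parametrization rather than a statement of the ATLAS analysis – the observed Planck scale is built up from a much lower fundamental scale MD:

```latex
\[
  M_{\mathrm{Pl}}^{2} \;\sim\; M_{D}^{\,n+2}\, R^{\,n}
  \qquad\Longrightarrow\qquad
  M_{D} \ll M_{\mathrm{Pl}} \ \text{for sufficiently large } R ,
\]
% so quantum-gravitational effects could become relevant at energies
% accessible to the LHC.
```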
If produced at the LHC, a black hole with a mass near MD – a quantum black hole – will decay faster than it can thermalize, predominantly producing a pair of particles with high transverse momentum (pT). Such decays would appear as a localized excess in the dijet mass distribution (figure 1). This signature is also consistent with theories that predict parton scattering via the exchange of a black hole – so-called gravitational scattering.
A black hole with a mass well above MD will behave as a classical thermal state and decay through Hawking emission to a relatively large number of high-pT particles. The frequency at which Standard Model particles are expected to be emitted is proportional to the number of charge, spin, flavour and colour states available. ATLAS can therefore perform a robust search for a broad excess in the scalar sum of jet pT (HT) in high-multiplicity events (figure 2), or in similar final states that include a lepton. The requirement of a lepton (electron or muon) helps to reduce the large multijet background.
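A minimal sketch of such a selection, written in Python, sums the transverse momenta of jets above a threshold and keeps high-multiplicity events; the thresholds and the jet definition are placeholders, not the ATLAS cuts.

```python
# Sketch of an H_T plus multiplicity selection for black-hole-like events.
# Thresholds are illustrative placeholders, not the ATLAS analysis cuts.
def ht_and_multiplicity(jet_pts, pt_min=50.0):
    """jet_pts: list of jet transverse momenta in GeV."""
    selected = [pt for pt in jet_pts if pt > pt_min]
    return sum(selected), len(selected)

def passes_selection(jet_pts, ht_cut=2000.0, n_min=3):
    ht, n_jets = ht_and_multiplicity(jet_pts)
    return ht > ht_cut and n_jets >= n_min

# Example: one event with five hard jets (pT in GeV)
print(passes_selection([650.0, 480.0, 390.0, 260.0, 120.0]))
```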
Even though the reach of these analyses extends beyond the previous limits, they have so far revealed no evidence for black holes or any of the other signatures to which they are potentially sensitive. Run 2 is just underway and with more luminosity to come, this is only the beginning.
One of the hottest debates at the LHC is the potential emergence of collective effects in proton–lead (pPb) collisions, prompted by the discovery of double ridge structures in angular correlations of charged particles (CERN Courier March 2013 p6), and the dependence of the azimuthal asymmetry, characterized by its second Fourier coefficient v2, on particle mass (CERN Courier September 2013 p10). The experimental findings in pPb are qualitatively the same as those in PbPb collisions, and they are usually interpreted as hydrodynamic signatures of a strongly coupled, nearly perfect quantum liquid. However, QCD calculations, which invoke the colour-glass condensate (CGC) formalism for the gluon content of a high-energy nucleus in the saturation regime, can also describe several features of the data.
Thus, one of the key questions to answer is whether the ridge is a result of final-state effects, driven by the density of produced particles, or of initial-state effects, driven by the gluon density at low x. In the former case, v2 could be expected to be larger in the Pb-going direction, while in the latter case it would be larger in the p-going direction.
The ALICE collaboration has recently completed a measurement that addresses this question in an analysis of pPb collisions at a nucleon–nucleon centre-of-mass energy of 5.02 TeV. Muons reconstructed in the muon spectrometer at forward (p-going) and backward (Pb-going) rapidities (2.5 < |η| < 4.0) were correlated with associated charged particles reconstructed in the central (|η| < 1.0) tracking detectors. In high-multiplicity events, this revealed a pronounced near-side ridge at both forward and backward rapidities, extending over about five units in Δη, similar to the case of two-particle angular correlations at mid-rapidity. An almost symmetric double-ridge structure emerged when, as in previous analyses, jet-like correlations estimated from low-multiplicity events were subtracted.
The v2 for muons, vμ2, in high-multiplicity events was obtained by dividing out the v2 of charged particles measured at mid-rapidity from the second-order two-particle Fourier coefficient, under the assumption that it factorizes into a product of muon v2 and charged-particle v2. The vμ2 coefficients were found to have a similar dependence on transverse momentum (pT) in p-going and Pb-going directions, with the Pb-going coefficients larger by about 16±6%, more or less independent of pT within the uncertainties of the measurement. The dominant contribution to the uncertainty arose from the correction for jet-like correlations affecting the extraction of v2.
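In symbols, the factorization assumption used to extract the muon coefficient reads (with h denoting the charged hadrons measured at mid-rapidity):

```latex
\[
  V_{2\Delta}^{\mu h} \;=\; v_{2}^{\mu}\, v_{2}^{h}
  \qquad\Longrightarrow\qquad
  v_{2}^{\mu} \;=\; \frac{V_{2\Delta}^{\mu h}}{v_{2}^{h}} ,
\]
% where V_{2\Delta}^{\mu h} is the second-order Fourier coefficient of the
% muon--hadron correlation and v_2^h is the charged-particle coefficient
% measured at mid-rapidity.
```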
The results add further support to the hydrodynamic picture, and are in qualitative agreement with model calculations incorporating final-state effects. At high pT (> 2 GeV/c), the measurement is sensitive to a contribution from heavy-flavour decays, and hence may be used to constrain the v2 of D mesons from calculations.
LHCb has significantly improved the trigger for the experiment during Run 2 of the LHC. The detector is now calibrated in real time, allowing the best possible event reconstruction in the trigger, with the same performance as the Run 2 offline reconstruction. The improved trigger allows event selection at a higher rate and with better information than in Run 1, providing a significant advantage in the hunt for new physics in Run 2.
The trigger consists of a hardware trigger, which reduces the 40 MHz bunch-crossing rate to 1 MHz, followed by a high-level software trigger with two stages, HLT1 and HLT2 (figure 1). In HLT1, a quick reconstruction is performed before further event selection. Here, dedicated inclusive triggers for heavy-flavour physics use multivariate approaches. HLT1 also selects an inclusive muon sample, and exclusive lines select specific decays. This trigger typically takes 35 ms/event and writes out events at about 150 kHz.
In Run 1, 20% of events were deferred and processed with the HLT between fills. For Run 2, all events that pass HLT1 are deferred while a real-time alignment is run, so minimizing the time spent using sub-optimal conditions. The spatial alignments of the vertex detector – the VELO – and the tracker systems are evaluated in a few minutes at the beginning of each fill. The VELO is reinserted for stable collisions in each fill, so the alignment could vary from one fill to another; figure 2 shows the variation for the first fills of Run 2. In addition, the calibration of the Cherenkov detectors and the outer tracker are evaluated for each run. The quality of the calibration allows the offline performance, including the offline track reconstruction, to be replicated in the trigger, thus reducing systematic uncertainties in LHCb’s results.
The second stage of the software trigger, HLT2, now writes out events for offline storage at about 12.5 kHz (compared to 5 kHz in Run 1). There are nearly 400 trigger lines. Beauty decays are typically found using multivariate analysis of displaced vertices. There is also an inclusive trigger for D* decays, and many lines for specific decays. Events containing leptons with a significant transverse momentum are also selected.
A new trigger stream – the “turbo” stream – allows candidates to be written out without further processing. Raw event data are not stored for these candidates, reducing disk usage. All of this enables a very quick data analysis. LHCb has already used data from this stream for a preliminary measurement of the J/ψ cross-section in √s = 13 TeV collisions (CERN Courier September 2015 p11).
This is an event view of the highest-energy neutrino detected so far by the IceCube experiment based at the South Pole (CERN Courier December 2014 p30). Each sphere is one optical sensor; the coloured spheres show those that observed light from this event. The sizes show how many photons each sensor observed, while the colour gives some idea of the arrival time of the first photon, from red (earliest) to blue (latest). It is easy to see that the track is travelling slightly upward (by about 11.5°), so the muon cannot be from a cosmic-ray air shower; it must come from a neutrino. The event, detected on 11 June 2014, was in the form of a through-going muon, meaning that the track originated and ended outside the detector’s volume. IceCube therefore cannot measure the total energy of the neutrino, but rather its specific energy loss (dE/dx). While the team is still working on estimating the neutrino energy, the total energy loss visible in the detector was 2.6±0.3 PeV.
A new study of more than 200,000 galaxies, from the ultraviolet to the far infrared, has provided the most comprehensive assessment of the energy output of the nearby universe. It confirms that the radiation produced by stars in galaxies today is only about half what it was two thousand million years ago. This overall “fading” reflects a decrease in the rate of star formation via the collapse of cool clouds of gas. It seems that the universe is running out of gas – in effect, getting out of breath – and slowly dying.
It is well known to astronomers that the rate of star formation in the universe reached a peak around a redshift of z = 2, when the universe was about 3 Gyr old. Over the subsequent 10 Gyr until now, the production of stars in galaxies has steadily decreased in a given co-moving volume of space – that is, a volume expanding at the same rate as the cosmic expansion of the universe, and therefore keeping a constant matter content throughout the history of the universe. Because the most massive stars are also the most luminous ones and have the shortest lifetimes, the energy output of a galaxy is closely related to its star-formation rate. Indeed, some 100 million years after the formation of a star cluster, its brightest stars will have exploded as supernovas, leaving only the lower-mass stars, which are much less luminous.
Although the fading trend of the universe has been known since the late 1990s, measuring it accurately has been a challenge. Part of the difficulty is to gather a representative sample of galaxies at different redshifts and to account properly for all biases. Another complication comes from the obscuration by dust in the galaxies, which absorbs ultraviolet and visible radiation and then re-emits this energy in the infrared. A way to overcome these difficulties is to observe the same region of the sky at many different wavelengths to cover fully the energy output. This has now been achieved by a large international collaboration led by Simon Driver from the International Centre for Radio Astronomy Research (ICRAR), University of Western Australia.
The study is part of the Galaxy and Mass Assembly (GAMA) project, the largest multi-wavelength survey ever put together. It used seven of the world’s most powerful telescopes to observe more than 200,000 galaxies, each measured at 21 wavelengths from the ultraviolet at 0.1 μm to the far infrared at 500 μm. Driver and collaborators then used this unique data set to derive the spectral energy distribution of the individual galaxies, and the combined one for three different ranges of redshift up to z = 0.20. For the nearest galaxies, they obtain an average energy output of (1.5±0.3) × 10^35 W per co-moving volume of a cubic megaparsec, which is equivalent to a cube with a side of about 3.3 million light-years. While this is for a redshift range between z = 0.02 and z = 0.08, corresponding to a mean look-back time of 0.75 Gyr, the team finds a significantly higher value of (2.5±0.3) × 10^35 W for a look-back time of 2.25 Gyr (0.14 < z < 0.20). This indicates a decrease of about 10^35 W in 1.5 Gyr. The trend occurs across all wavelengths and corresponds roughly to a decrease by a factor of two over the past two thousand million years.
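In round numbers, the quoted values correspond to a drop of:

```latex
\[
  \Delta L \;\approx\; (2.5 - 1.5)\times 10^{35}\,\mathrm{W}
          \;=\; 1\times 10^{35}\,\mathrm{W}
  \ \text{per co-moving Mpc}^{3}
  \quad\text{over}\quad
  \Delta t \;\approx\; 2.25 - 0.75 \;=\; 1.5\ \mathrm{Gyr}.
\]
```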
The ongoing decay of energy production by stars in galaxies also follows the trend of active galactic nuclei and gamma-ray bursts, which were all more numerous and powerful several gigayears ago. The shining, glorious days of the universe are now long past; instead, it will continue to decline, sliding gently into old age, an age of quiescence.
After 15 years of measurement and another eight years of scrutiny and calculation, the H1 and ZEUS collaborations have published the most precise results to date on the innermost structure and behaviour of the proton. The two collaborations, which took data at DESY’s electron–proton collider, HERA, from 1992 to 2007, have combined nearly 3000 measurements of inclusive deep-inelastic cross-sections (H1, ZEUS 2015). With its completion, the paper secures the legacy of the HERA data.
Within the framework of perturbative QCD, the proton is described in terms of parton-density functions, which provide the probability of scattering from a parton, either a gluon or a quark. The H1 and ZEUS collaborations have also produced the first QCD analysis of the data, encompassed in the HERAPDF2.0 sets of parton-distribution functions (PDFs), which form a significant part of the paper. The combined data presented in the new publication will be the basis of all analyses of the structure of the proton for years to come.
As figure 1 depicts, in deep-inelastic scattering a boson – γ, Z0 or W± – acts as a probe of the structure of the proton by interacting with its constituents, through neutral-current (γ, Z0) or charged-current (W±) reactions. Of course, this picture is simplified: the proton is a dynamic system of quarks and gluons, but by measuring deep-inelastic scattering over a wide kinematic range, this internal structure can be mapped precisely. The variables used to do this are the squared four-momentum transfer, Q2, carried by the exchanged boson, and Bjorken x, xBj, the fraction of the proton’s momentum carried by the struck quark.
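For reference, the standard kinematic definitions behind these variables are (with k and k′ the incoming and scattered lepton four-momenta, P the proton four-momentum and q = k − k′ the four-momentum carried by the exchanged boson):

```latex
\[
  Q^{2} = -q^{2}, \qquad
  x_{\mathrm{Bj}} = \frac{Q^{2}}{2\,P\cdot q}, \qquad
  y = \frac{P\cdot q}{P\cdot k},
  \qquad\text{with}\quad Q^{2} \simeq x_{\mathrm{Bj}}\, y\, s
\]
% (neglecting lepton and proton masses, where s is the squared
% centre-of-mass energy of the lepton--proton system).
```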
A wealth of data
The data, taken over the 15-year lifetime of the HERA accelerator, correspond to a total luminosity of about 1 fb–1 of deep-inelastic electron–proton and positron–proton scattering. All of the data used were taken with an electron/positron beam energy of 27.5 GeV, with roughly equal amounts of data for electron–proton and positron–proton scattering being recorded. HERA initially operated with a proton-beam energy of 820 GeV, which was increased subsequently to 920 GeV; these data constitute the bulk of the combined measurements. Towards the end of HERA’s run, special data samples with a proton-beam energy of 575 GeV and 460 GeV were taken and are also included. The data were combined separately for the e+p and e–p runs and for the different centre-of-mass energies. Overall, 41 separate data sets were used in the combination, spanning 0.045 < Q2 < 50,000 GeV2 and 6 × 10–7 < xBj < 0.65, i.e. six orders of magnitude in each variable. The initial measurements consisted of 2937 published cross-sections in total, which were combined to produce 1307 final combined cross-section measurements. These results supersede the previous paper with combined measurements of deep-inelastic scattering cross-sections in which only data up to the year 2000 were combined (CERN Courier January/February 2008 p30).
The procedure for combining the data involved a careful treatment of the various uncertainties between all of the data sets. In particular, the correlations of the various sources were assessed, and those uncertainties deemed to be point-to-point correlated were accounted for as such in the averaging of the data based on a χ2 minimization method. The resulting χ2 is 1687 for 1620 degrees of freedom, demonstrating excellent compatibility of the multitude of data sets. Figure 2 illustrates the power of the data combination. It displays a selection of the data in bins of the photon virtuality, Q2, and for fixed values of xBj, showing separately individual data sets from several different analyses. A combined data point can be the combination of up to eight individual measurements. The improvement in precision is striking, as is seen more clearly in the close-up on some of the points. An indication of the precision of the combined data is that the total uncertainties are close to 1% for the bulk region of 3 < Q2 < 500 GeV2.
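The principle of such an averaging can be illustrated with a toy example in Python: two measurements sharing one correlated systematic source are combined by minimizing a χ2 in which the shared shift enters as a nuisance parameter. The numbers, and the reduction to a single nuisance parameter, are illustrative assumptions; this is not the HERA averaging code.

```python
# Toy chi^2 average of two measurements with one shared (correlated) systematic.
# All numbers are invented; the real combination handles many data sets and
# hundreds of correlated sources.
import numpy as np
from scipy.optimize import minimize

meas  = np.array([1.02, 0.97])   # two measurements of the same quantity
stat  = np.array([0.03, 0.02])   # uncorrelated (statistical) uncertainties
gamma = np.array([0.02, 0.03])   # shift from a 1-sigma move of the shared systematic

def chi2(params):
    mu, b = params                       # true value and nuisance parameter
    shifted = meas - gamma * b           # apply the correlated shift
    return np.sum(((shifted - mu) / stat) ** 2) + b ** 2  # nuisance penalty

result = minimize(chi2, x0=[1.0, 0.0])
mu_hat, b_hat = result.x
print(f"combined value = {mu_hat:.4f}, nuisance shift = {b_hat:.2f} sigma, "
      f"chi2_min = {result.fun:.2f}")
```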
As well as showing the precision of the data and the power of the combination, the cross-section dependence for the different values of xBj demonstrates the dynamic structure of the proton in a striking way. For xBj = 0.08, the cross-section is reasonably flat as a function of Q2. This is known as Bjorken scaling, and is expected from the simple parton model in which inelastic electron–proton scattering is viewed as a sum of elastic electron–parton scattering processes, where the partons are free point-like objects. At lower values of xBj, the cross-section rises increasingly steeply with increasing Q2 and decreasing xBj. This effect is known as scaling violation, and is indicative of the increasing density of gluons in the proton.
The increased density and rise of the cross-section can also be observed by considering the proton-structure function F2 (which is closely related to the cross-section) plotted versus xBj at fixed Q2, as in figure 3. The strong rise of F2 with decreasing xBj was one of the most important discoveries at HERA. Previous fixed-target experiments could not constrain this behaviour, because their data were at low values of Q2 and high values of xBj. The figure also shows how the rise towards low xBj becomes steeper with increasing Q2. At higher Q2, the exchanged boson effectively probes smaller distances, and so sees more of the inner structure of the proton and resolves more and more gluons.
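The relation between F2 and the measured neutral-current cross-section, written here schematically for pure photon exchange (FL is the longitudinal structure function), is:

```latex
\[
  \frac{\mathrm{d}^{2}\sigma^{\mathrm{NC}}}{\mathrm{d}x_{\mathrm{Bj}}\,\mathrm{d}Q^{2}}
  = \frac{2\pi\alpha^{2}}{x_{\mathrm{Bj}}\,Q^{4}}\; Y_{+}
    \left[ F_{2}(x_{\mathrm{Bj}},Q^{2})
         - \frac{y^{2}}{Y_{+}}\, F_{L}(x_{\mathrm{Bj}},Q^{2}) \right],
  \qquad Y_{+} = 1 + (1-y)^{2}.
\]
```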
Parton distributions
The quark and gluon structure of the proton is often parameterized in terms of the PDFs, which correspond to the probability of finding a gluon or a quark of a given flavour with momentum fraction x in the proton, given the scale μ of the hard interaction. The behaviour of the PDFs with scale is predicted by QCD, but the absolute values need to be determined from fits to data. Using the HERA data, the PDFs can be extracted, while at the same time the evolution as a function of the scale is tested. This analysis is performed at leading order, next-to-leading order (NLO) and next-to-next-to-leading order, yielding the HERAPDF2.0 family of PDFs.
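The scale evolution referred to here is governed by the DGLAP equations, with Pij the perturbatively calculable splitting functions:

```latex
\[
  \frac{\partial f_{i}(x,\mu^{2})}{\partial \ln \mu^{2}}
  = \sum_{j} \int_{x}^{1} \frac{\mathrm{d}z}{z}\,
    P_{ij}\!\left(z,\alpha_{s}(\mu^{2})\right)\,
    f_{j}\!\left(\tfrac{x}{z},\mu^{2}\right).
\]
```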
Figure 3 compares the predictions of the PDF analysis at NLO with the measurements of the structure functions. In general, the QCD predictions describe the data well, although this becomes poorer at low Q2, indicating inadequacies in the theory used at these low scales. Such precise knowledge of the PDFs is also of highest importance for physics at the LHC at CERN, because the uncertainties stemming from the knowledge of the PDFs are increased for proton–proton collisions compared with deep-inelastic scattering.
The QCD analysis can also be extended to include data from the production of charm quarks and jets at HERA. Charm production is again measured as a function of xBj and Q2, but with the additional requirement of a charm meson in the final state. Jet production is measured in the Breit frame, where jets with non-zero transverse momentum are expected from hard QCD processes only. By including the charm and jet data, the analysis becomes particularly sensitive to the strong-coupling constant, αs(MZ), whereas without jet data the coupling constant is strongly correlated with the normalization of the gluon density. The combined analysis of inclusive data, charm data and jet data at NLO results in an experimentally very precise measurement of the strong-coupling constant, αs(MZ) = 0.1183±0.0009 (exp.), with significantly larger uncertainties of +0.0039/−0.0033 related to the model and theory.
It is also interesting to look at data from HERA on neutral-current (NC) and charged-current (CC) scattering that are differential in Q2 but integrated over xBj, as shown in figure 4 for both e+p and e–p. At small Q2, the cross-sections for NC are much larger than for CC, whereas at large Q2, of the order of the vector-boson mass squared, they become similar in size. This is a direct visualization of electroweak unification: the CC process is mediated by the weak force, whereas photon exchange dominates the NC cross-section. Looking in more detail, the NC cross-sections for e+p and e–p are almost identical at small Q2 but start to diverge as Q2 grows. This is owing to γ–Z0 interference, which has the opposite effect on the e+p and e–p cross-sections. The CC cross-sections also differ between e+p and e–p scattering, with two effects contributing: the helicity structure of the W± exchange and the fact that CC e–p scattering probes the u-valence quarks, whereas d-valence quarks are accessed in CC e+p scattering.
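The different Q2 behaviour of the two processes can be traced, schematically, to the boson propagators:

```latex
\[
  \frac{\mathrm{d}\sigma^{\mathrm{NC}}}{\mathrm{d}Q^{2}} \;\propto\; \frac{1}{Q^{4}},
  \qquad
  \frac{\mathrm{d}\sigma^{\mathrm{CC}}}{\mathrm{d}Q^{2}} \;\propto\;
  \left(\frac{M_{W}^{2}}{Q^{2}+M_{W}^{2}}\right)^{2},
\]
% so the charged-current rate is strongly suppressed for Q^2 << M_W^2 and the
% two cross-sections become comparable once Q^2 is of the order of M_W^2.
```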
In summary, the HERA collider experiments H1 and ZEUS have combined their precision data on deep-inelastic scattering, reaching a precision of almost 1% in the double-differential cross-section measurements. It is the largest coherent data set on proton structure, spanning six orders of magnitude in the kinematic variables xBj and Q2. A QCD analysis of the HERA data alone results in a set of parton-density functions, HERAPDF2.0, without the need for data from other experiments. Also, using HERA jet and charm data, the strong-coupling constant is measured together with proton PDFs. QCD and electroweak effects are probed at high precision in the same data set, providing beautiful demonstrations of the validity of the Standard Model.
Accelerator science and technology exhibits a rich history of inventions that now spans almost a century. The fascinating story of accelerator development, which is particularly well described in Engines of Discovery: A Century of Particle Accelerators by Andy Sessler and Ted Wilson (CERN Courier September 2007 p63), can also be summarized in the so-called “Livingston plot”, where the equivalent energy of an accelerated beam is shown as a function of time. The plot depicts how new accelerating technologies take over once the previous technology has reached its full potential, so that over the course of many decades the maximum achieved energy has continued to grow exponentially, thanks to many inventions and the development of many different accelerator technologies. The most recent decades have also been rich with inventions, such as the photon-collider concept (still an idea), crab-waist collisions (already verified experimentally at the DAFNE storage ring in Frascati) and integrable optics for storage rings (verification is planned at the Integrable Optics Test Accelerator at Fermilab), to name a few.
Despite recent inventions, however, there is some cause for anxiety about the latest progress in the field and projections for the future. The three most recent decades represented by the Tevatron and the LHC exhibit a much slower energy growth over time. This may be an indication that the existing technologies for acceleration have come to their maximum potential, and that further progress will demand the creation of a new accelerating method – one that is more compact and economical. There are indeed several emerging acceleration techniques, such as laser-driven and beam-driven plasma acceleration (CERN Courier June 2007 p28), which can perhaps bring the Livingston plot back to the fast-rising exponent. Nevertheless, inspired by the variety of past inventions in the field, and dreaming about future accelerators that will require many scientific and technological breakthroughs, we can pose the question: how can we invent more efficiently?
It is worth recalling two biographical facts about two prominent accelerator scientists: John Adams, who in the 1950s played the key role in implementing the courageous decision to cancel the already approved 10 GeV weak-focusing accelerator for a totally innovative 25 GeV strong-focusing machine (the CERN Proton Synchrotron), and Gersh Budker, who was the founder and first director of the Institute of Nuclear Physics, Novosibirsk, and inventor of many innovations in the field of accelerator physics, such as electron cooling. It is important in this context that Adams had a unique combination of scientific and engineering abilities, and that Budker was once called by Lev Landau a “relativistic engineer”. This connection is indeed notable, because the art of inventiveness that I am about to discuss came from engineering.
While everyone has probably heard about problem-solving approaches such as brainstorming or even its improved version, synectics (the use of a fairy-tale-style description of the problem is one of its approaches – note the snakes in figure 1c representing the magnetic fields in the solenoid), it is likely that most people working in science have never heard about the inventive methodologies that engineers have developed and used. It is indeed astonishing that formal inventive approaches, so widely used in industry, are rarely known in science.
One such approach is TRIZ – pronounced “treez” – which can be translated as the Theory of Inventive Problem Solving. TRIZ was developed by Genrikh Altshuller in the Soviet Union in the mid-20th century. Starting in 1946 when he was working in a patent office, but interrupted by a dramatic decade-long turmoil in his life (another story) that he overcame to resume his studies, Altshuller analysed many thousands of patents, trying to discover patterns to identify what makes a patent successful. Following his work in the patent office, between 1956 and 1985 he formulated TRIZ and, together with his team, developed it further. Since then, TRIZ has gradually become one of the most powerful tools in the industrial world. For example, in his 7 March 2013 contribution to the business magazine Forbes, “What Makes Samsung Such An Innovative Company?”, Haydn Shaughnessy wrote that TRIZ “became the bedrock of innovation at Samsung”, and that “TRIZ is now an obligatory skill set if you want to advance within Samsung”.
A methodology
The authors of TRIZ devised the following four cornerstones for the method: the same problems and solutions appear again and again but in different industries; there is a recognizable technological evolution path for all industries; innovative patents (which are about a quarter of the total) use science and engineering theories outside of their own area or industry; and an innovative patent uncovers and solves contradictions. In addition, the team created a detailed methodology, which employs tables of typical contradicting parameters and a wonderfully universal table of 40 inventive principles. The TRIZ method consists in finding a pair of contradicting parameters in a problem, which, using the TRIZ inventive tables, immediately leads to the selection of only a few suitable inventive principles that narrow down the choice and result in a faster solution to a problem.
TRIZ textbooks often cite Charles Wilson’s cloud chamber (invented in 1911) and Donald Glaser’s bubble chamber (invented in 1952) as examples – to use the terminology of TRIZ – of a system and anti-system. Indeed, the cloud chamber works on the principle of droplets of liquid created in gas, whereas the bubble chamber uses bubbles of gas created in liquid (figure 1a). If the TRIZ inventive principle of system/anti-system had been applied, the invention of the bubble chamber would have followed immediately, and not almost half a century after the invention of the cloud chamber.
Another TRIZ inventive principle, that of Russian dolls (nested dolls, or matryoshki), can be applied not only to engineering but also in many other areas, including science or even philology. The principle of a concept inside a concept can be seen in the British nursery rhyme “This is the house that Jack built”, and the 1920 poem by Valery Bryusov (quoted at the start), which describes an electron as a planet in its own world, can also be seen as a reflection of the nested-doll inventive principle, this time in poetic science fiction. A spectacular scientific example is the construction of a high-energy physics detector, where many different sub-detectors are inserted into one another, to enhance the accuracy of detecting elusive particles (figure 1b). Such detectors are needed to find out if there is indeed a world inside of an electron – and the circle is now closed!
The TRIZ method can be applied, in particular, to accelerator science. For example, the dual force-neutral solenoid found in the interaction region of a collider, or in NMR scanners, is an illustration of both the nested-doll and the system/anti-system inventive principles. Two solenoids of opposite currents are inserted in one another in such a way that all of the magnetic flux-return is between the solenoids and none is seen outside, reducing the need for magnetic shielding in case of NMR or reducing interference with the main solenoid of the detector in case of a particle collider (figure 1c). Remarkably, the same combination of inventive principles can be seen in the technique of stimulated emission depletion microscopy (STED), which was rewarded with the 2014 Nobel Prize in Chemistry. The final focus system at a collider with non-local chromaticity correction is an illustration of the inventive principle of what is known as “beforehand cushioning”. And so on.
While many of the TRIZ inventive principles can be applied directly to problems in accelerator science, it is tempting to add accelerator-science-related parameters and inventive principles to TRIZ. The equations of Maxwell or of thermodynamics, where an integral on a surface is connected to the integral over volume, suggest an inventive principle of changing the volume-to-surface ratio of an object. Nature provides an illustration in a smart cat, stretched out under the sun or curled up in the cold, but flat colliding electron–positron beams or fibre lasers also illustrate the same principle. Another possible inventive principle for accelerator science is the use of non-damageable or already damaged materials: the laser wire for beam diagnostics, the mercury jet as a beam target, plasma acceleration, or a plasma mirror – the list of examples illustrating this inventive principle can be continued.
So the TRIZ method of inventiveness, although created originally for engineering, is universal and can also be applied to science. TRIZ methodology provides another way to look at the world; combined with science it creates a powerful and eye-opening amalgam of science and inventiveness. It is particularly helpful for building bridges of understanding between completely different scientific disciplines, and so is also naturally useful to educational and research organizations that endeavour to break barriers between disciplines.
However, experience shows that knowledge of TRIZ is nearly non-existent in the scientific departments of western universities. Moreover, it is not unusual to hear about unsuccessful attempts to introduce TRIZ into the graduate courses of universities’ science departments. Indeed, in many or most of these cases, the apparent reason for the failure is that the canonical version of TRIZ was introduced to science PhD students in the same way that TRIZ is taught to engineers in industrial companies. This may be a mistake, because science students are rightfully more critically minded and justifiably sceptical about overly prescriptive step-by-step methods. Indeed, a critically thinking scientist would immediately question the canonical number of 40 inventive principles, and note that identifying just a pair of contradicting parameters is a first-order approximation, and so on.
A more suitable approach to introduce TRIZ to graduate students, which takes into account the lessons learnt by its predecessors, could be different. Instead of teaching graduate students the ready-to-use methodology, it might be better to take them through the process of recreating parts of TRIZ by analysing various inventions and discoveries from scientific disciplines, showing that the TRIZ inventive principles can be efficiently applied to science. In the process, additional inventive principles that are more suitable for scientific disciplines could be found and added to standard TRIZ. In my recent textbook, I call this extension “Accelerating Science (AS) TRIZ”, where “accelerating” refers not to accelerators, but instead highlights that TRIZ can help to boost various areas of science.
Many of the examples of TRIZ-like inventions in science considered above have already been made, and I am being deliberately provocative in connecting them to TRIZ post factum. However, it is natural to wonder whether TRIZ and AS-TRIZ could actually help to inspire and create new scientific inventions and innovations, especially in regard to projects that continue to manifest many unsolved obstacles.
One example of such a project is the circular collider currently being considered as a successor to the LHC – the Future Circular Collider (FCC), a 100 km circumference machine (CERN Courier April 2014 p16). This project has many scientific and technical tasks and challenges that need to be solved. Notably, the total energy in each circulating proton beam is expected to exceed 8 GJ, which is equivalent to the kinetic energy of an Airbus A380 flying at 720 km/h. Not only does such a beam need to be handled safely in the bending magnets, it also needs to be focused in the interaction region to a micrometre-size spot – the equivalent, more or less, of having to pass through the eye of a needle.
It remains to be seen if the methodology of TRIZ and AS-TRIZ can be applied to such a large-scale project as the FCC, because it brings a whole array of new, difficult and exciting challenges to the table. Nonetheless, it is certainly a project that can only flourish with the application of knowledge and inventiveness.
RD51 and the rise of micro-pattern gaseous detectors
The RD51 collaboration was created at CERN in 2008 in response to the need to develop and exploit innovative micro-pattern gaseous detector (MPGD) technologies. While many of these technologies were introduced before RD51 was created, other techniques have since appeared or become affordable, new detection concepts are being adopted and existing techniques are being substantially improved. In parallel, the deployment of MPGDs in running experiments has grown considerably. Today, RD51 serves a broad user community, overseeing the MPGD domain and any commercial applications that may emerge.
Improvements in detector technology often come from capitalizing on industrial progress. Over the past two decades, advances in photolithography, microelectronics and printed circuits have opened the way for the production of micro-structured gas-amplification devices. By 2008, interest in the development and use of the novel micro-pattern gaseous detector (MPGD) technologies led to the establishment at CERN of the RD51 collaboration. Originally created for a five-year term, RD51 was later prolonged for another five years beyond 2013. While many of the MPGD technologies were introduced before RD51 was founded (figure 1), with more techniques becoming available or affordable, new detection concepts are still being introduced, and existing ones are substantially improved.
In the late 1980s, the development of the micro-strip gas chamber (MSGC) created great interest because of its intrinsic rate-capability, which was orders of magnitude higher than in wire chambers, and its position resolution of a few tens of micrometres at particle fluxes exceeding about 1 MHz/mm2. Developed for projects at high-luminosity colliders, MSGCs promised to fill a gap between the high-performance but expensive solid-state detectors, and cheap but rate-limited traditional wire chambers. However, detailed studies of their long-term behaviour at high rates and in hadron beams revealed two possible weaknesses of the MSGC technology: the formation of deposits on the electrodes, affecting gain and performance (“ageing effects”), and spark-induced damage to electrodes in the presence of highly ionizing particles.
These initial ideas have since led to more robust MPGD structures, in general using modern photolithographic processes on thin insulating supports. In particular, ease of manufacturing, operational stability and superior performance for charged-particle tracking, muon detection and triggering have given rise to two main designs: the gas electron multiplier (GEM) and the micro-mesh gaseous structure (Micromegas). With a pitch size of a few hundred micrometres, both devices exhibit intrinsic high-rate capability (> 1 MHz/mm2), excellent spatial and multi-track resolution (around 30 μm and 500 μm, respectively), and time resolution for single photoelectrons in the sub-nanosecond range.
Coupling progress in the microelectronics industry with advanced PCB technology has been important for the development of gas detectors with increasingly smaller pitch size. An elegant example is the use of a CMOS pixel ASIC assembled directly below the GEM or Micromegas amplification structure. Modern “wafer post-processing technology” allows for the integration of a Micromegas grid directly on top of a Medipix or Timepix chip, thus forming an integrated read-out of a gaseous detector (InGrid). Using this approach, MPGD-based detectors can reach the level of integration, compactness and resolving power typical of solid-state pixel devices. For applications requiring imaging detectors with large-area coverage and moderate spatial resolution (e.g. ring-imaging Cherenkov (RICH) counters), coarser macro-patterned structures offer an interesting and economical solution with relatively low mass and easy construction – thanks to the intrinsic robustness of the PCB electrodes. Such detectors include the thick GEM (THGEM), the large electron multiplier (LEM), the patterned resistive thick GEM (RETGEM) and the resistive-plate WELL (RPWELL).
RD51 and its working groups
The main objective of RD51 is to advance the technological development and application of MPGDs. While a number of activities have emerged related to the LHC upgrade, most importantly, RD51 serves as an access point to MPGD “know-how” for the worldwide community – a platform for sharing information, results and experience – and optimizes the cost of R&D through the sharing of resources and the creation of common projects and infrastructure. All partners are already pursuing either basic- or application-oriented R&D involving MPGD concepts. Figure 1 shows the organization of seven Working Groups (WG) that cover all of the relevant aspects of MPGD-related R&D.
WG1 Technological Aspects and Development of New Detector Structures. The objectives of WG1 are to improve the performance of existing detector structures, optimize fabrication methods, and develop new multiplier geometries and techniques. One of the most prominent activities is the development of large-area GEM, Micromegas and THGEM detectors. Only one decade ago, the largest MPGDs were around 40 × 40 cm2, limited by existing tools and materials. A big step towards the industrial manufacturing of MPGDs with a size around a square metre came with new fabrication methods – the single-mask GEM, “bulk” Micromegas, and the novel Micromegas construction scheme with a “floating mesh”. While in “bulk” Micromegas, the metallic mesh is integrated into the PCB read-out, in the “floating-mesh” scheme it is integrated in the panel containing drift electrodes and placed on pillars when the chamber is closed. The single-mask GEM technique overcomes the cumbersome practice of alignment of two masks between top and bottom films, which limits the achievable lateral size to 50 cm. This technology, together with the novel “self-stretching technique” for assembling GEMs without glue and spacers, simplifies the fabrication process to such an extent that, especially for large-volume production, the cost per unit area drops by orders of magnitude.
Another breakthrough came with the development of Micromegas with resistive electrodes for discharge mitigation. The resistive strips match the pattern of the read-out strips geometrically, but are electrically insulated from them. Large-area resistive electrodes to prevent sparks have been developed using two different techniques: screen printing and carbon sputtering. The technology of THGEM detectors is well established in small prototypes; the major challenge is the industrial production of high-quality large-size boards. A novel MPGD-based hybrid architecture, consisting of a double THGEM and a Micromegas, has been developed for photon detection; the latter allows a significant reduction in the ion backflow to the photocathode. A spark-protected version of the THGEM (RETGEM), where the copper-clad conductive electrodes are replaced by resistive materials, and the RPWELL detector, consisting of a single-sided THGEM coupled to the read-out electrode through a sheet of large bulk resistivity, have also been manufactured and studied. To reduce the discharge probability, a micro-pixel gas chamber (μ-PIC) with resistive electrodes made using sputtered carbon has been developed; this technology can easily be extended for the production of large areas of up to a few square metres.
To reduce costs, further work is needed for developing radiation-hard read-out and reinventing mainstream technologies under a new paradigm of integration of electronics and detectors, as well as integration of functionality, e.g. integrating read-out electronics directly into the MPGD structure. A breakthrough here is the development of a time-projection chamber (TPC) read-out with a total of 160 InGrid detectors, each 2 cm2, corresponding to 10.5 million pixels. Despite the enormous challenges, this has demonstrated for the first time the feasibility of extending the Timepix CMOS read-out of MPGDs to large areas.
WG2 Detector Physics and Performance. The goal of WG2 is to improve understanding of the basic physics phenomena in gases, to define common test standards, which allow comparison and eventually selection among different technologies for a particular application, and to study the main physics processes that limit MPGD performance, such as sparking, charging-up effects and ageing.
Primary ionization and electron multiplication in avalanches are statistical processes that set limits to the spatial, energy and timing resolution, and so affect the overall performance of a detector. Exploiting the ability of Micromegas and GEM detectors to measure both the position and arrival time of the charge deposited in the drift gap, a novel method – the μTPC – has been developed for the case of inclined tracks, allowing for a precise segment reconstruction using a single detection plane, and significantly improving spatial resolution (well below 100 μm, even at large track angles). Excellent energy resolution is routinely achieved with “microbulk” Micromegas and InGrid devices, differing only slightly from the accuracy obtained with gaseous scintillation proportional counters and limited by the Fano factor. Moreover, “microbulk” detectors have very low levels of intrinsic radioactivity. Other recent studies have revealed that Micromegas could act as a photodetector coupled to a Cherenkov-radiator front window, in a set-up that produces a sufficient number of UV photons to convert single-photoelectron time jitter of a few hundred picoseconds into an incident-particle timing response of the order of 50 ps.
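A minimal sketch of the μTPC idea: each fired strip provides a transverse coordinate, and its signal arrival time gives a depth in the drift gap via the drift velocity; a straight-line fit to these points yields a local track segment from a single detection plane. The drift velocity, strip pitch and cluster below are illustrative assumptions, not parameters of a specific detector.

```python
# Sketch of micro-TPC segment reconstruction for an inclined track.
# Drift velocity, pitch and the example cluster are illustrative assumptions.
import numpy as np

DRIFT_VELOCITY = 0.047  # mm/ns, assumed value for the gas mixture
PITCH = 0.4             # mm, assumed strip pitch

def mu_tpc_segment(strip_indices, arrival_times_ns):
    """Fit z = a*x + b to the reconstructed (x, z) points of one cluster."""
    x = np.asarray(strip_indices, dtype=float) * PITCH               # position along the strips
    z = np.asarray(arrival_times_ns, dtype=float) * DRIFT_VELOCITY   # depth in the drift gap
    slope, intercept = np.polyfit(x, z, 1)
    theta = np.degrees(np.arctan(slope))                             # local track inclination
    return slope, intercept, theta

# Example cluster: five neighbouring strips with increasing arrival times
print(mu_tpc_segment([10, 11, 12, 13, 14], [20.0, 28.0, 37.0, 45.0, 53.0]))
```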
One of the central topics of WG2 is the development of effective protection against discharges in the presence of heavily ionizing particles. The limitation caused by occasional sparking is now being lifted by the use of resistive electrodes, but at the price of current-dependent charging-up effects that cause a reduction in gain. Systematic studies are needed to optimize the electrical and geometrical characteristics of resistive Micromegas in terms of the maximum particle rate. Recent ageing studies performed in view of the High-Luminosity LHC upgrades confirmed that the radiation hardness of MPGDs is comparable with solid-state sensors in harsh radiation environments. Nevertheless, it is important to develop and validate materials with resistance to ageing and radiation damage.
Many of the advances involve the use of new materials and concepts – for example, a GEM made out of crystallized glass, and a “glass piggyback” Micromegas that separates the Micromegas from the actual read-out by a ceramic layer, so that the signal is read by capacitive coupling and the read-out is immune to discharges. A completely new approach is the study of charge-transfer properties through graphene for applications in gaseous detectors.
Working at cryogenic temperatures – or even within the cryogenic liquid itself – requires optimization to achieve simultaneously high gas gain and long-term stability. Two ideas have been pursued for future large-scale noble-liquid detectors: dual-phase TPCs with cryogenic large-area gaseous photomultipliers (GPMs) and single-phase TPCs with MPGDs immersed in the noble liquid. Studies have demonstrated that the copious light yields in liquid xenon, and the resulting good energy resolution, are a result of electroluminescence occurring within xenon-gas bubbles trapped under the hole electrode.
WG3 Applications, Training and Dissemination. WG3 concentrates on the application of MPGDs and on how to optimize detectors for particularly demanding cases. Since the pioneering use of GEM and Micromegas by the COMPASS experiment at CERN – the first large-scale use of MPGDs in particle physics – they have spread to colliders. Their use in mega-projects at accelerators is very important to engage people with science and to receive public recognition. During the past five years, there have been major developments of Micromegas and GEMs for various upgrades for ATLAS, CMS and ALICE at the LHC, as well as THGEMs for the upgrade of the COMPASS RICH. Although normally used as flat detectors, MPGDs can be bent to form cylindrically curved, ultralight tracking systems as used in inner-tracker and vertex applications. Examples are cylindrical GEMs for the KLOE2 experiment at the DAFNE e+e– collider and resistive Micromegas for CLAS12 at Jefferson Lab. MPGD technology can also fulfil the most stringent constraints imposed by future facilities, from the Facility for Antiproton and Ion Research to the International Linear Collider and Future Circular Collider.
MPGDs have also found numerous applications in other fields of fundamental research. They are being used or considered, for example, for X-ray and neutron imaging, neutrino–nucleus scattering experiments, dark-matter and astrophysics experiments, plasma diagnostics, material sciences, radioactive-waste monitoring and security applications, medical physics and hadron therapy.
To help in further disseminating MPGD applications beyond fundamental physics, academia–industry matching events were introduced when the continuation of RD51 was discussed in 2013. Since then, three events have been organized by RD51 in collaboration with the HEPTech network (CERN Courier April 2015 p17), covering MPGD applications in neutron and photon detection. The events provided a platform where academic institutions, potential users and industry could meet to foster collaboration with people interested in MPGD technology. In the case of neutron detection, there is tangible mutual interest between the high-energy physics and neutron-scattering communities in advancing MPGD technology; GEM-based solutions for thermal-neutron detection at spallation sources, novel high-resolution neutron devices for macromolecular crystallography, and fast-neutron MPGD detectors for fusion research represent a new frontier for future developments.
WG4 Modelling of Physics Processes and Software Tools. Fast and accurate simulation has become increasingly important as the complexity of instrumentation has increased. RD51’s activity on software tools and the modelling of physics processes that make MPGDs function provides an entry point for institutes that have a strong theoretical background, but do not yet have the facilities to do experimental work. One example is the development of a nearly exact boundary-element solver, which is in most aspects superior to the finite-element method for gas-detector simulations. Another example is the dedicated measurement campaign and data analysis programme that was undertaken to understand avalanche statistics and determine the Penning transfer-rates in numerous gas mixtures.
The main difference between traditional wire-based devices and MPGDs is that the electrode size of order 10 μm in MPGDs is comparable to the collision mean free path. Microscopic tracking algorithms (Garfield++) developed within WG4 have shed light on the effects of surface and space charge in GEMs, as well as on the transparency of meshes in Micromegas. The microscopic tracking technique has also led to better understanding of the avalanche-size statistics, clarifying in particular why light noble gases perform better than heavier noble gases. Significant effort has also been devoted to modelling the performance of MPGDs for particular applications – for example, studies of electron losses in Micromegas with different mesh specifications, and of GEM electron transparency, charging-up and ion-backflow processes, for the ATLAS and ALICE upgrades.
WG5 MPGD-Related Electronics. Initiated in WG5 in 2009 as a basic multichannel read-out-system for MPGDs, the scalable read-out system (SRS) electronics has evolved into a popular RD51 standard for MPGDs. Many groups contribute to SRS hardware, firmware, software and applications, and the system has already extended beyond RD51. SRS is generally considered to be an “easy-to-use” portable system from detector to data analysis, with read-out software that can be installed on a laptop for small laboratory set-ups. Its scalability principle allows systems of 100,000 channels and more to be built through the simple addition of more electronic SRS slices, and operated at very high bandwidth using the online software of the LHC experiments. The front-end adapter concept of SRS represents another degree of freedom, because basically any sensor technology typically implemented in multi-channel ASICs may be used. So far, five different ASICs have been implemented on SRS hybrids as plug-ins for MPGDs: APV25, VFAT, Beetle, VMM2 and Timepix.
The number of SRS systems deployed is now nearing 100, with more than 300,000 APV channels, corresponding to a total volume of SRS sales of around CHF1 million. SRS has been ported for the read-out of photon detectors and tracking detectors, and is being used in several of the upgrades for ALICE, ATLAS, CMS and TOTEM at the LHC. Meanwhile, CERN’s Technology Transfer group has granted SRS reproduction licences to several companies. Since 2013, SRS has been re-designed according to the ATCA industry standard, which allows for much higher channel density and output bandwidth.
WG6 Production and Industrialization. A key point that must be solved in WG6 to advance cost-effective MPGDs is the manufacturing of large-size detectors and their production by industrial processes. The CERN PCB workshop is a unique MPGD production facility, where generic R&D, detector-component production and quality control take place. Today, GEM and Micromegas detectors can reach areas of 1 m2 in a single unit and nearly 2 m2 by patching some elements inside the detectors. Thanks to the completion of the upgrade to its infrastructure in 2012, CERN is still leading in the MPGD domain in terms of maximum detector size; however, more than 10 companies are already producing detector parts of reasonable size. WG6 serves as a reference point for companies interested in MPGD manufacturing and helps them to reach the required level of competences. Contacts with some have strengthened to the extent that they have signed licence agreements and engaged in a technology-transfer programme co-ordinated within WG6. As an example, the ATLAS New Small Wheel (NSW) upgrade will be the first detector mass produced in industry using a large high-granularity MPGD, with a detecting area around 1300 m2 divided into 2 m × 0.5 m detectors.
WG7 Common Test Facilities. Developing robust and efficient MPGDs requires a detailed understanding of their performance, and therefore a significant investment in laboratory measurements and test-beam activities to study prototypes and qualify final designs. Maintaining the RD51 laboratory and common test-beam facilities at CERN is a key objective of WG7. A semi-permanent common test-beam infrastructure has been installed in the H4 test-beam area at CERN’s Super Proton Synchrotron for the needs of the RD51 community. It includes three high-precision beam telescopes made of Micromegas and GEM detectors, together with data-acquisition, services and gas-distribution systems. One advantage of the H4 area is the “Goliath” magnet (around 1.5 T over a large area), which allows MPGDs to be tested in a magnetic field. RD51 users can also draw on the instrumentation, services and infrastructure of the Gas Detector Development (GDD) laboratory at CERN, and clean rooms are available for the assembly, modification and inspection of detectors. More than 30 groups use the general RD51 infrastructure every year as part of the WG7 activities; three annual test-beam campaigns attract on average three to seven RD51 groups at a time, working in parallel.
The RD51 collaboration also advances the MPGD domain with scientific, technological and educational initiatives. Thanks to RD51’s interdisciplinary and inter-institutional co-operation, the University Antonio Nariño in Bogota has built a detector laboratory where doctoral students and researchers are trained in the science and technology of MPGDs. With this new infrastructure and international support, the university is leveraging co-operation with other Latin American institutes to build a critical mass around MPGDs in this part of the world.
Given the ever-growing interest in MPGDs, RD51 re-established an international conference series on the detectors. The first meeting in the new series took place in Crete in 2009, followed by Kobe in 2011 and Zaragoza in 2013 (CERN Courier November 2013 p33). This year, the collaboration is looking forward to holding the fourth MPGD conference in Trieste, on 12–15 October.
The vitality of the MPGD community rests on its relatively large number of young scientists, so educational events are an important activity. A series of specialized schools, comprising lectures and hands-on training for students, engineers and physicists from RD51 institutes, has been organized at CERN, covering the assembly of MPGDs (2009), software and simulation tools (2011), and electronics (2014). This is particularly important for young people seeking meaningful and rewarding work in research and industry. Last year, RD51 co-organized the MPGD lecture series and the IWAD conference in Kolkata, the Danube School on Instrumentation in Novi Sad, and the special “Charpak Event” in Lviv, held in the context of CERN’s 60th-anniversary programme “60 Years of Science for Peace” (CERN Courier November 2014 p38). The latter took place at a particularly fragile time for Ukraine, and aimed to strengthen the role of science diplomacy in tackling global challenges through the development of novel technologies.
In conclusion
During the past 10 years, the deployment of MPGDs in operational experiments has increased enormously, and RD51 now serves a broad user community, driving the MPGD domain and the commercial applications that may arise from it. Because of the growing interest in the benefits of MPGDs in many fields of research, the technologies are being optimized for a broad range of applications, demonstrating the capabilities of this class of detector. RD51 continues to grow, and now has more than 90 institutes and 450 participants from more than 30 countries in Europe, America, Asia and Africa. Last year, six new institutes from Spain, Croatia, Brazil, Korea, Japan and India joined the collaboration, further enhancing the geographical diversity and expertise of the MPGD community. Since its foundation, RD51 has transformed a set of largely isolated developers into a world-wide MPGD network, as illustrated by collaboration-spotting software (figure 2, p29). Many opportunities remain to be exploited, and RD51 will stay committed to helping shape the future of MPGD technologies and paving the way for novel applications.
The first results at a new high-energy frontier in particle physics were a major highlight for the 2015 edition of the European Physical Society Conference on High Energy Physics (EPS-HEP). The biennial conference took place at the University of Vienna on 22–29 July, only weeks after data taking at the LHC at CERN had started at the record centre-of-mass energy of 13 TeV. In addition to the hot news from the LHC, the 723 participants from all over the world were also able to share a variety of exciting news in different areas of particle and astroparticle physics, presented in 425 parallel talks, 194 posters and 41 plenary talks. The following report focuses on a few selected highlights, including the education and outreach session – a “first” for EPS-HEP conferences (see box below).
After more than two years of intense work during the first long shutdown, the LHC and the experiments have begun running again, ready to venture into unexplored territory and perhaps observe physics beyond the Standard Model, following the discovery of the Higgs boson in 2012. Both the accelerator teams and the LHC experimental collaborations made a huge effort to provide collisions and to gather physics data in time for EPS-HEP 2015. By mid-July, the experiments had already recorded 100 times more data than they had at around the same time after the LHC started up at 7 TeV in 2010, and the collaborations had worked hard to be able to present first results based on the 2015 data.
Talks at the conference provided detailed information about the operation of the accelerator and expectations for the near and distant future. The ATLAS, CMS and LHCb collaborations all presented results at 13 TeV for the first time (CERN Courier September 2015 pp8–11). Measurements of the charged-particle production rate as a function of rapidity provide a first possibility to test hadronic physics models in the new energy region. Several known resonances, such as the J/ψ and the Z and W bosons, have been rediscovered at these higher energies, and the cross-section for top–antitop production has been measured and found to be consistent with the predictions of the Standard Model. The first searches for new phenomena have also been performed, but unfortunately with no sign of unexpected behaviour. In all, the early results presented at the conference were very encouraging and everyone is looking forward to more data being delivered and analysed.
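As a point of reference, the charged-particle production rates mentioned above are usually presented as a function of pseudorapidity η, an angular variable closely related to rapidity and defined in terms of the polar angle θ of the particle with respect to the beam axis:

```latex
\eta = -\ln\!\left[\tan\!\left(\tfrac{\theta}{2}\right)\right]
```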
At the same time, the LHC collaborations have continued to extract interesting new physics from the collider’s first long run. According to the confinement paradigm of quantum chromodynamics, the gauge theory of the strong interaction, only bound states of quarks and gluons that transform trivially under the local symmetries of the theory are allowed to exist in nature. This forbids free quarks and gluons, but permits colour-neutral bound states of two, three, four, five or more quarks and antiquarks, and gives no reason why the more exotic of these states should not occur. While quark–antiquark and three-quark bound states have been known since the first formulation of the theory some 40 years ago, it is only a year or so since unambiguous evidence for tetraquark states was first presented. Now, at EPS-HEP 2015, the LHCb collaboration reported the observation of exotic resonances in the decay products of the Λb, which could be interpreted as charmonium pentaquarks. The best fit to the data requires two pentaquark states with spin-parity JP = 3/2⁻ and JP = 5/2⁺, although other assignments, and even a fit in terms of a single pentaquark, are also possible (CERN Courier September 2015 p5).
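For orientation, the decay chain in which the structures were observed can be written as below; the two best-fit states were reported at masses of roughly 4380 MeV and 4450 MeV in the LHCb analysis referenced above.

```latex
% Decay chain in which the pentaquark-like structures appear:
\Lambda_b^0 \to J/\psi\, K^- p, \qquad P_c^+ \to J/\psi\, p
```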
The study of semileptonic decays of B mesons with τ leptons in the final state offers the possibility of revealing hints of “new physics”, because these decays are sensitive to non-Standard-Model particles that couple preferentially to third-generation fermions. The BaBar experiment at SLAC, the Belle experiment at KEK and the LHCb experiment at CERN have all observed an excess of events in the B-meson decays B → D τ⁻ ν̄τ and B → D* τ⁻ ν̄τ. Averaging over the results of the three experiments, the discrepancy with respect to Standard Model expectations amounts to some 3.9σ.
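For context, this comparison with the Standard Model is conventionally expressed through ratios of branching fractions, in which several experimental and theoretical uncertainties cancel; the standard definition is:

```latex
R(D^{(*)}) \;=\; \frac{\mathcal{B}\!\left(B \to D^{(*)}\,\tau\,\bar{\nu}_\tau\right)}
                      {\mathcal{B}\!\left(B \to D^{(*)}\,\ell\,\bar{\nu}_\ell\right)},
\qquad \ell = e,\ \mu
```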
Nonzero neutrino masses and associated phenomena such as neutrino oscillations belong to what is currently the least well-understood sector of the Standard Model. The Tokai to Kamioka (T2K) experiment, which uses a νμ beam generated at the Japan Proton Accelerator Research Complex (J-PARC), situated approximately 300 km east of the Super-Kamiokande detector, was the first to observe νμ to νe oscillations. It has also made a precise measurement of the angle θ23 of the Pontecorvo–Maki–Nakagawa–Sakata neutrino-mixing matrix, the leptonic counterpart of the Cabibbo–Kobayashi–Maskawa (CKM) quark-mixing matrix. However, because this measurement is practically independent of the ordering of the neutrino masses, it does not allow the different neutrino-mass-hierarchy scenarios to be distinguished. A comparison of neutrino oscillations with those of antineutrinos might provide clues to the still unsolved puzzle of charge-parity violation. In this context, T2K presented an update of its earlier νμ-disappearance results, together with three candidate events for νe appearance.
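The insensitivity of the θ23 measurement to the mass ordering can be seen from the standard two-flavour approximation of the νμ survival probability measured by T2K, which depends only on the magnitude of the mass-squared difference (here L is the baseline in km, E the neutrino energy in GeV and Δm²₃₂ is in eV²):

```latex
P(\nu_\mu \to \nu_\mu) \;\simeq\; 1 - \sin^2(2\theta_{23})\,
   \sin^2\!\left(\frac{1.27\,\Delta m^2_{32}\,L}{E}\right)
```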
At the flavour frontier, the LHCb collaboration reported a new exclusive measurement of the magnitude of the CKM matrix element |Vub|, based on Λb decays, while Belle revisited the CKM magnitude |Vcb|. For |Vub|, a still-unexplained tension remains between the values extracted from exclusive and from inclusive decay channels. For |Vcb|, Belle presented an updated exclusive measurement that is, for the first time, completely consistent with the inclusive measurement of the same parameter.
Weak gravitational lensing provides a means to estimate the distribution of dark matter in the universe. By looking at more than a million source galaxies at a mean co-moving distance of 2.9 Gpc (about nine thousand million light-years), the Dark Energy Survey collaboration has produced an impressive map of both luminous and dark matter, exhibiting potential candidates for superclusters and (super)voids. The mass distribution deduced from this map correlates nicely with the “known”, that is, optically detected, galaxy clusters in the foreground.
More than a year ago, the BICEP2 collaboration caused some disturbance in the scientific community by claiming to have observed the imprint of primordial gravitational waves, generated during inflation, in the B-mode polarization spectrum of the cosmic-microwave background. Since then, the Planck collaboration has collected strong evidence that, upon subtraction of the impact of foreground dust, the BICEP2 data can be explained by a “boring ordinary” cosmic-microwave background (CERN Courier November 2014 p15).
Following the parallel sessions that formed the first part of the conference, Saturday afternoon was devoted to the traditional special joint session with the European Committee for Future Accelerators (ECFA). The comprehensive title for this year was “Connecting Scales: Bridging the Infinities”, with an emphasis on particle-physics topics that influence the evolution of the universe. This joint EPS-HEP/ECFA session, which was well attended, gave the audience a unique occasion to profit from broad overviews in various fields.
As is traditional, the award of the latest prizes of the EPS High Energy and Particle Physics Division started the second half of the conference, which is devoted to the plenary sessions. The 2015 High Energy and Particle Physics Prize was awarded to James Bjorken “for his prediction of scaling behaviour in the structure of the proton that led to a new understanding of the strong interaction”, and to Guido Altarelli, Yuri Dokshitzer, Lev Lipatov and Giorgio Parisi “for developing a probabilistic field theory framework for the dynamics of quarks and gluons, enabling a quantitative understanding of high-energy collisions involving hadrons”. The 2015 Giuseppe and Vanna Cocconi Prize was awarded to Francis Halzen “for his visionary and leading role in the detection of very-high-energy extraterrestrial neutrinos, opening a new observational window on the universe”. The 2015 Gribov Medal went to Pedro Vieira, the Young Experimental Physicist Prize was awarded to Jan Fiete Grosse-Oetringhaus and Giovanni Petrucciani, and the Outreach Prize went to Kate Shaw (CERN Courier June 2015 p27).
An integral part of every conference is the social programme, which offers the local organizers the opportunity to present impressions of the city and the country where the conference is being held. Vienna is well known for classical music, and on this occasion the orchestra of the Vienna University of Technology performed Beethoven’s 7th symphony at the location where it was first performed – the Festival Hall of the Austrian Academy of Sciences. The participants were also invited by the mayor of the city of Vienna to a “Heurigen” – an Austrian wine tavern where the most recent year’s wines are served together with local food. A play called Curie_Meitner_Lamarr_indivisible presented three outstanding women pioneers of science and technology, all of whom had a connection to Vienna. A dinner in the orangery of the Schönbrunn Palace, the former imperial summer residence, provided a fitting conclusion to the social programme of this important conference for particle physics.
• EPS-HEP 2015 was jointly organized by the High Energy and Particle Physics Division of the European Physical Society, the Institute of High Energy Physics of the Austrian Academy of Sciences, the University of Vienna, the Vienna University of Technology, and the Stefan-Meyer Institute of the Austrian Academy of Sciences. For more details and the full programme, visit http://eps-hep2015.eu.
All about communication
The EPS-HEP 2015 conference made several innovations to communicate not only to the participants and particle physicists elsewhere, but also to a wider general public.
Each morning the participants were welcomed with a small newsletter containing information for the day. During the first part of the conference, with only parallel sessions, the newsletter summarized the topics of all of the sessions, highlighting expected new results. The idea was to give the participants a glimpse of the topics being discussed at the parallel sessions they could not attend. For the second part of the conference, with plenary presentations only, the daily newsletter also contained interviews that looked behind the scenes. The conference was also covered online in social media, with tweets, Facebook posts and blogs highlighting selected scientific topics and social events. The tweets, in particular, attracted a large audience of people who were not able to attend the conference.
During the first week, a dedicated parallel session on education and outreach took place – the first ever at an EPS-HEP conference. The number of abstracts submitted for the session was remarkable, clearly indicating the need for exchange and discussions on this topic. The conveners chose a slightly different format from the standard parallel sessions, so that besides oral presentations on specific topics, a lively panel discussion with various contributions from the audience also took place. The session concluded with a “Science Slam” – a format in which scientists give short talks explaining the focus of their research in lively terms for the public. Extending the scope of the EPS-HEP conference towards topics concerned with education and outreach was clearly an important strength of this year’s edition.
In addition, a rich outreach programme formed an important part of the conference in Vienna; from the start, everyone involved in planning had a strong desire to take the scientific questions of the conference outside of the particle-physics community. One highlight of the programme was the public screening of the movie Particle Fever, followed by a discussion with Fabiola Gianotti, who will be the next director-general of CERN, and the producer of the movie, David Kaplan. Visual arts have become another important way to bring the general public in touch with particle physics, and several exhibitions, reflecting different aspects of particle physics from an artistic point of view, took place during the conference.