
FAIR agreement

On 18 November 2010, CERN signed an agreement with the Facility for Antiproton and Ion Research (FAIR) GmbH, the company that is co-ordinating the construction of the accelerator and experiment facilities for the FAIR project in Germany. The agreement, which was signed by CERN’s director-general, Rolf Heuer, and FAIR’s scientific director, Boris Sharkov, concerns collaboration in accelerator sciences and technologies and in other scientific domains of mutual interest.

Fermi sees giant bubbles in the Milky Way

The Fermi gamma-ray space telescope has detected high-energy emission from two giant lobes on both sides of the plane of the Galaxy. This unexpected finding suggests that the Milky Way was more active in the past, either through a phase of intense stellar formation or of much higher activity of the central black hole.

The launch of a new facility with much higher sensitivity than its predecessor always raises the hope of finding something unexpected. The Fermi satellite, observing in the relatively unexplored area of giga-electron-volt photons, is especially suited for such discoveries (CERN Courier November 2008 p13). Having already found pulsars emitting pulsed radiation only in gamma-rays (CERN Courier December 2008 p9) and evidence for intergalactic magnetic fields (CERN Courier June 2010 p10), it has now detected mysterious gamma-ray bubbles in the Milky Way.

The two gamma-ray-emitting bubbles extend 50° above and below the Galactic plane with a width of about 40°. They have been revealed by Meng Su and two colleagues from the Harvard-Smithsonian Center for Astrophysics. The features were hidden in the diffuse galactic gamma-ray emission that arises mainly from inverse-Compton radiation of relativistic electrons and from π⁰ decay induced by cosmic-ray interactions with interstellar gas. The distinctive characteristic of the bubbles is their hard spectrum, i.e. with relatively more high-energy gamma-rays, which allows them to be disentangled more easily from the other diffuse emission features. Su and colleagues used several methods to remove the latter from the all-sky Fermi images to reveal the faint glow of the two bubbles.
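
The idea can be illustrated schematically: once a model of the softer diffuse emission is fitted and subtracted, a component with a harder spectrum stands out in the high-energy maps. The minimal Python sketch below (with entirely invented maps and amplitudes, not the authors’ actual templates or pipeline) shows the principle of such a template subtraction:

import numpy as np

# Toy all-sky counts maps; every number here is invented and purely illustrative.
rng = np.random.default_rng(0)
soft_model = rng.poisson(100.0, size=(180, 360)).astype(float)  # modelled pi0 + inverse-Compton emission
bubbles = np.zeros_like(soft_model)
bubbles[40:140, 150:210] = 5.0                                  # hard, spatially distinct component

observed = 0.3 * soft_model + bubbles                           # high-energy band: soft emission suppressed

# Fit the soft-template amplitude by least squares and subtract it;
# the residual map then traces the hard-spectrum lobes.
alpha = (observed * soft_model).sum() / (soft_model ** 2).sum()
residual = observed - alpha * soft_model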

The emission of the gamma-ray bubbles is remarkably uniform, with no significant change of intensity over their 25,000 light-years extent or between the north and south bubbles. They must therefore have been produced by a powerful process near the Galactic centre. Further indications as to the origin of the giant features comes from apparently associated X-ray emission from the rim of the bubbles, which has been identified in all-sky maps from the early 1990s by the Germany-led Roentgen Satellite (ROSAT), and from a spatially consistent haze of radio emission detected by the Wilkinson Microwave Anisotropy Probe (WMAP). The presence of these lower-energy counterparts disfavours the annihilation or decay of dark matter as the origin for the gamma-ray emission. The association of the radio signal with the high-energy gamma-ray emission suggests instead emission by relativistic electrons. The WMAP signal would then come from synchrotron radiation in the Galactic magnetic field, while Fermi would have recorded the inverse-Compton gamma-rays from electrons scattering off photons from the Galaxy or the cosmic microwave background. The X-ray signal from the edge of the bubbles further suggests a shock-wave interaction of expanding gas with the surrounding medium.

According to the authors of the paper, published in the Astrophysical Journal, the most likely origin of the bubbles is a large episode of energy injection from the Galactic centre. This could consist either of past accretion events on the now quiescent supermassive black hole at the centre of the Milky Way or a nuclear starburst event in the past 10 million years or so. However, both explanations have some difficulties in accounting for the observations. While a simple jet explanation would not easily produce the smooth surface brightness and north–south symmetry, an intense and prolonged star formation period is not suggested by recent observations of radioactive decay of aluminium-26 (CERN Courier January/February 2006 p10).

Planck reveals a stellar first year

The cosmic microwave background (CMB) is one of the most powerful resources that cosmologists have to investigate the evolution of the universe since its earliest moments. Like a “fabric” that permeates the cosmos, it holds information about the temperature distribution, keeping a permanent memory of all of the events that the universe has gone through. In particular, its anisotropies – deviations from the isotropic distribution that characterizes the universe – contain the signatures of the primordial perturbations that gave birth to the large-scale structure of the universe observed today.

Reading among these ripples in the CMB is by no means easy because they appear as tenuous fluctuations (1 part in 100,000) in a cold background at 3 K. In May 2009, ESA’s Planck spacecraft was launched into space to prise out the secrets hidden there. The result of about 20 years of work by the international Planck collaboration, it is a third-generation satellite that follows on from the Cosmic Background Explorer (COBE) and the Wilkinson Microwave Anisotropy Probe (WMAP). Since mid-July 2009, Planck has been orbiting at the second Lagrangian point (L2) of the Earth–Sun system, 1.5 million kilometres from Earth. It carries on board a Low Frequency Instrument (LFI), consisting of an array of 22 radiometers, and a High Frequency Instrument (HFI), which has 48 bolometric detectors. Since its launch, Planck has performed extremely well. The two instruments have so far scanned the whole sky almost three times in nine different frequency channels, with a sensitivity that is up to 10 times better and an angular resolution up to 3 times better than that of its most recent predecessor, WMAP (figure 1).

On 11 January, the Planck collaboration released its first catalogue of compact astrophysical sources. This is the first full-sky source catalogue to cover the frequency range 30–857 GHz at nine different frequencies. It includes a variety of source types, from nearby objects in our Galaxy, through various classes of radio and dusty galaxies, to distant clusters of galaxies.

Because Planck is optimized to measure the CMB, the catalogue turns out to be an extremely powerful tool for identifying the cold objects that populate the interstellar medium (ISM) and measuring their temperature accurately. In this task Planck is allied with the Herschel space observatory, which ESA launched on the same rocket. Herschel, designed to study cold objects, is not a survey telescope; rather, its purpose is to look closely at one part of the sky at a time. Planck and Herschel are thus good companions: Planck provides the whole-sky survey and points Herschel to interesting locations on which it can focus.

Among the sources detected by Planck are “protostellar objects”, that is, clusters of matter that could give rise to a star. The complex processes at the origin of stars are among the hottest topics for astronomers, who carefully investigate the properties of the ISM to identify the trigger factors for star formation. Researchers at many Earth-based observatories will be able to use data from Planck to improve our understanding of these processes.

After only a few months of observation, Planck is also shedding light on another component of the ISM: namely, spinning dust grains. These are tiny aggregates of matter that appear to be slightly bigger than molecules such as CO2. They spin and radiate with a particular spectrum. Planck has for the first time been able to reconstruct this spectrum at high frequencies and so confirm that the spinning dust grains really do exist. This opens up a completely new field of study for astronomers, who will now have to understand the exact nature and behaviour of this intriguing component of the ISM.

Moving away from the interior of the Galaxy, one of the major contributions of the first part of Planck’s scientific programme is the identification of clusters of galaxies and the study of their properties through the signature that they leave in the CMB when its photons travel through the hot gas of the cluster. This is the Sunyaev-Zel’dovich effect, in which photons in the CMB increase in energy through inverse Compton scattering off hot electrons in the galaxy clusters. As a consequence, along the cluster direction, the CMB temperature increases at high frequency (>217 GHz) and decreases at low frequency (<217 GHz) with a well defined frequency spectrum, observable by Planck thanks to its wide frequency coverage.
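
For orientation, the frequency dependence described here is that of the standard (non-relativistic) thermal Sunyaev–Zel’dovich distortion, quoted here as a reference formula:

\frac{\Delta T}{T_{\rm CMB}} = y\left[x\coth\!\left(\frac{x}{2}\right) - 4\right], \qquad x = \frac{h\nu}{k_B T_{\rm CMB}}, \qquad y = \sigma_T \int n_e\,\frac{k_B T_e}{m_e c^2}\,\mathrm{d}l,

where y is the Comptonization parameter along the line of sight through the cluster. The bracket vanishes at x ≈ 3.83, i.e. ν ≈ 217 GHz, giving the decrement below and the increment above that frequency.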

Matter in the universe is grouped in enormous clusters surrounded by vast, empty spaces. These clusters can contain hundreds of galaxies and large amounts of dark matter. Dark matter consists of particles observed so far only through their gravitational effect; their exact nature remains unknown. Observing clusters of galaxies is crucial to understanding why matter of any kind aggregates in this fashion. The Sunyaev-Zel’dovich effect can be used to estimate the total mass of the cluster, which, when combined with X-ray observations, can in turn provide evaluations of the proportion of dark matter. The list of sources of this type that Planck has identified through the Sunyaev-Zel’dovich effect is 2–3 times larger than any published so far by the best observatories on Earth.

Planck has also been able to extend the spectrum of conventional radio sources. Previously this was known up to about 100 GHz but Planck has now pushed this to 857 GHz, giving new insight into the behaviour of these sources and the physical processes involved.

This first set of results is just the beginning of the Planck adventure. There will be more accurate catalogues and further findings in astrophysics, followed in early 2013 by Planck’s crucial contributions to cosmology. While the theoretical models used at present in cosmology seem to fit the current observations well, they require important components whose nature is not yet known – dark matter and dark energy. A major aim of the Planck mission is to cast light on both of these enigmatic components.

Dark energy is yet another contribution to the energy density of the universe, being different from dark and ordinary matter. It is presumed to provide the current acceleration to the expansion of the universe and its existence is inferred from observations of Type Ia supernovae, of the CMB and of the baryon acoustic oscillations that are determined by surveying galaxies at different cosmic epochs. The equation-of-state of dark energy characterizes the late and future evolution of the universe. Planck will be able to measure the parameters ρ (energy density) and w (ratio of pressure to ρ) of the equation-of-state with an accuracy that is expected to be an order of magnitude greater than for the previous data from WMAP. Moreover, studies of CMB anisotropies will allow the Planck collaboration to distinguish between various theoretical models that do not consider new ingredients in the energy-budget of the universe (such as dark energy and dark matter) but, rather, change the Einstein equations (as for example in “modified gravity” models).
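
In the usual parametrization (quoted here for orientation, not specific to the Planck analysis), the equation of state links the pressure and energy density of dark energy and fixes how its density evolves with the cosmic scale factor a:

w = \frac{p}{\rho}, \qquad \rho(a) \propto a^{-3(1+w)} \quad \text{(for constant } w\text{)},

so that w = −1 corresponds to a cosmological constant, with ρ independent of a, while any significant deviation from −1 would indicate a dynamical form of dark energy.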

As far as gravity is concerned, Planck’s contribution will depend on which theoretical model best describes the evolution of the universe. Among the many models that try to explain the initial conditions of the Big Bang, two have gained particular prominence: one is the inflationary model, in which the early universe underwent a period of exponential expansion; the other is the “bouncing model”, where the universe is described as something that was contracting and then “bounced” at the time when quantum gravity was important and began to re-expand. Inflationary models generally generate gravitational waves that can in principle be detected by Planck, depending on their amplitude, the value of which is a feature of the specific inflationary model. By contrast, the bouncing models do not predict gravitational waves. Planck will also constrain the expected deviation from the Gaussian distribution of the primordial fluctuations that are imprinted in the CMB. This feature is more characteristic of the bouncing models than of the inflationary ones. While Planck will not have the final say in this field, it will indeed have the opportunity to rule out several models.

In addition to questions directly related to cosmology and astrophysics, Planck will also address a number of problems that are linked to particle physics and the Standard Model. It will improve by at least a factor of three the accuracy of the limits on the mass and the number of neutrino species, which WMAP currently sets at 0.56 eV and 4.3 ± 0.8, respectively. Planck may also provide limits on the mass of the Higgs boson in certain theoretical models in which the Higgs field acts as the inflaton and is non-minimally coupled to gravity.

According to current theories, the conditions of the universe today were set at the time of inflation, about 10⁻³⁵ s after the Big Bang. The LHC below ground and Planck in deep space are allies in probing these first moments of the universe’s evolution. While the physicists at CERN are seeking to reproduce the conditions of the early universe, with Planck we observe the first light that came out of this “soup” of matter and radiation. Particle physicists, as well as astrophysicists and cosmologists, must work towards a concordant description of this early epoch in which data from the different sources fit together to give a consistent picture of the universe that we all inhabit.

Warwick hosts a feast of flavour

In September 2010, the University of Warwick played host to CKM2010, the 6th International Workshop on the CKM Unitarity Triangle. The CKM workshops, named after the Cabibbo-Kobayashi-Maskawa matrix that describes quark mixing in the Standard Model, date from 2002 when the first meeting took place at CERN. The workshop has since established itself as one of the most important meetings in the field.

With a two-year gap since the previous meeting, there was much at CKM2010 for theorists and experimentalists alike to discuss. This was the first time since the inauguration of the series that the workshop took place with neither of the B-factory experiments – BaBar and Belle – in operation. A generation of experiments in charm and kaon physics has also completed data-taking. While much is being done to archive the knowledge that has been accumulated from this era, the organizers of CKM2010 chose instead to look to the future.

Uncharted territory

Only by looking forwards is it possible to address the many open questions in flavour physics, which Paride Paradisi of the Technische Universität München presented in the first of the opening plenary sessions. The biggest issue, perhaps, concerns the fact that there is still no real understanding of the underlying reason for the flavour structure of the Standard Model. More pressing, however, is the so-called “new-physics flavour puzzle”: how is the need for physics beyond the Standard Model at the tera-electron-volt scale – to resolve the hierarchy problem – to be reconciled with the absence of such new physics in precision flavour measurements? The most popular solution is the “minimal flavour violation” hypothesis, which can be tested by observables that are either highly suppressed or precisely predicted in the Standard Model.

Two sectors where the experimental measurements do not yet reach the desired sensitivity are those of the D⁰ and Bs mesons. Guy Wilkinson of the University of Oxford described the progress made at Fermilab’s Tevatron over the past few years, emphasizing the potential of the LHC experiments at CERN – particularly LHCb – to explore uncharted territory. It will be interesting to see if the datasets with larger statistics confirm the hints of contributions from new physics to Bs mixing that have been seen by the CDF and DØ experiments at the Tevatron. The large yields of D, J/ψ, B and ϒ mesons already observed by the LHC experiments augur well for exciting results in the near future.

However, the LHC will not be the only player in flavour physics in the next decade. Yangheng Zheng of the Graduate University of Chinese Academy of Sciences and Marco Sozzi of the Università di Pisa and INFN described the new facilities and experiments that are coming online in the charm and kaon sectors, respectively. The BEPCII collider in Beijing has achieved an instantaneous luminosity above 3 × 10³² cm⁻² s⁻¹, and the BES III collaboration has already published the first results from the world’s largest datasets of electron–positron collisions in the charmonium resonance region. The kaon experiments NA62 at CERN and KOTO at J-PARC are well on the way towards studies of the ultra-rare decays K⁺ → π⁺νν and KL → π⁰νν.

Meanwhile, there are plans for a new generation of B factories, which Peter Križan of the University of Ljubljana and J Stefan Institute described. The clean environment of electron–positron colliders provides a unique capability for various measurements, such as B⁺ → τντ. The upgrade of the KEKB facility and the Belle detector to allow operation with a peak luminosity of 8 × 10³⁵ cm⁻² s⁻¹ (40 times higher than achieved to date) has been approved and construction is now ongoing, with commissioning due to start in 2014. The design shares many common features – most notably the “crab-waist” collision scheme – with the SuperB project, recently approved by the Italian government (Italian government approves SuperB).

Maximizing the impact of these new experiments will require progress in lattice QCD calculations. Junko Shigemitsu of Ohio State University described recent developments in this field, showing that accuracy below a per cent has been reached for several parameters in the kaon sector, with calculations using different lattice actions giving consistent results. In the charm sector, determinations of constants are approaching the per cent level of precision; this advance, when combined with new measurements, appears to have resolved the apparent discrepancy in the value of the Ds decay constant. Further work is needed to reach the desired level of precision in B physics but excellent progress is being made by several groups around the world.

The main body of the workshop consisted of parallel meetings of six working groups, which provided opportunities for detailed discussions between experts. The summaries from these working groups were presented in two plenary sessions on the final day.

Working group I, convened by Federico Mescia of the Universitat de Barcelona, Albert Young of the University of North Carolina and Tommaso Spadaro of INFN Frascati, focused on the precise determination of |Vud| and |Vus|. A measurement of the muon lifetime at a precision of one part per million by the MuLAN collaboration determines the reference value of the Fermi coupling. Improved measurements of |Vud| and |Vus|, mainly from nuclear β-decay and (semi-)leptonic kaon decay, respectively, set constraints on the unitarity of the first row of the CKM matrix at better than 1 permille. Interesting discrepancies in the measurements of the neutron lifetime and of |Vus| demand further studies.
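
The constraint referred to is the unitarity of the first row of the CKM matrix,

|V_{ud}|^2 + |V_{us}|^2 + |V_{ub}|^2 = 1,

in which |V_{ub}|² is numerically negligible at the current level of precision, so the test essentially balances |V_{ud}| from nuclear β-decay against |V_{us}| from kaon decays.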

Hint of new physics?

Working group II, convened by Jack Laiho of the University of Glasgow, Ben Pecjak of the University of Mainz and Christoph Schwanda of the Institute of High Energy Physics in Vienna, had as its subject the determination of |Vub|, |Vcb|, |Vcs| and |Vcd|. This is an area where dialogue between theorists and experimentalists has been extremely fruitful in driving down the uncertainties. Lively discussions continue, stimulated in part by the apparent discrepancies between inclusive and exclusive determinations of both |Vcb| and |Vub|. The latest data on the leptonic decay B⁺ → τντ, which is sensitive to contributions from charged Higgs bosons, show an interesting discrepancy that may prove to be a first hint of new physics.

Working group III, convened by Martin Gorbahn of the Technische Universität München, Mitesh Patel of Imperial College London and Steven Robertson of the Canadian Institute for Particle Physics, at McGill University and SLAC, tackled rare B, D and K decays. One particularly interesting decay is B → K*ℓ⁺ℓ⁻, where first measurements of the forward-backward asymmetry by BaBar, Belle and CDF hint at non-standard contributions. This is exciting for LHCb, where additional kinematic variables will be studied. Inclusive rare decays, such as b → sγ, and those with missing energy in the final state are better studied in electron–positron collisions and help to motivate the next generation of B factories. Among other golden modes, improved results on Bs → μ⁺μ⁻ and K → πνν remain eagerly anticipated by theorists, who continue to refine the expectations for these decays in various models.

The fourth working group, convened by Alexander Lenz of the Technische Universität Dortmund and Universität Regensburg, Olivier Leroy of the Centre de Physique des Particules de Marseille and Michal Kreps of the University of Warwick, was concerned with the determination of the magnitudes and relative phases of Vtd, Vts and Vtb. While the Tevatron experiments have started to set constraints on these quantities from direct top production, with further improvement anticipated at the LHC, the strongest tests at present come from studies of the oscillations of charm and beauty mesons. Hints for new physics contributions in the Bs sector provided the main talking point, but the potential for, and the importance of, improved searches for CP violation in charm oscillations were also noted.

Measurements of the angles of the unitarity triangle were the subject of the remaining two working groups. Working group V, convened by Robert Fleischer of NIKHEF and Stefania Ricciardi of the Rutherford Appleton Laboratory, focused on determinations of the angle γ using B→DK decays, while working group VI, convened by Matt Graham of SLAC, Diego Tonelli of Fermilab and Jure Zupan of the University of Ljubljana and the J Stefan Institute, covered measurements using charmless B decays. The angle γ plays a special role because it has negligible theoretical uncertainty. The precision of the measurements is not yet below 10°, leaving room for results from LHCb – combined with measurements from charm decays – to have a big impact on the unitarity triangle fits. The measurements based on charmless decays, which are dominated by loop (“penguin”) amplitudes, tend to have significant theoretical uncertainties that must be tamed to isolate any new physics contribution. The main issue concerns developing methods to understand whether existing anomalous results (such as the pattern of CP asymmetries in B→Kπ decays) are caused by QCD corrections or by something more exotic.

A common feature of all working groups was the strong emphasis on the sensitivity to new physics and the utility of flavour observables to distinguish different extensions of the Standard Model. Less than two years after the award of the Nobel prize to Kobayashi and Maskawa “for the discovery of the origin of the broken symmetry which predicts the existence of at least three families of quarks in nature”, their greatest legacy – and that of Nicola Cabibbo (see box) – will perhaps be a discovery that finally goes beyond the paradigm of the Standard Model.

• CKM2010 was generously supported by the University of Warwick, the Science and Technology Facilities Council, the Institute for Particle Physics Phenomenology and the Institute of Physics.

The origin of cosmic rays

As 2012 approaches, and with it the centenary of Victor Hess’s famous discovery, it really is time that we found out where cosmic rays originate. Gamma-ray astronomy has shown that most of the particles come from the Galaxy, but even this discovery was 63 years in coming (Dodds et al. 1975). Supernova remnants (SNR) have long been suspected to be the source of cosmic rays below about 100 PeV, the production mechanism being Fermi acceleration in shock-borne magnetic fields. The energies involved are reasonable: about 10⁴³ J into cosmic rays per SNR per century. However, there are doubts, with pulsars or extended sources being other possibilities.
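
A rough order-of-magnitude check (using a commonly assumed Galactic supernova rate of a few per century, not a figure from this article) shows why this budget is considered reasonable:

\frac{\sim 3\times10^{43}\ \mathrm{J}}{100\ \mathrm{yr}\simeq 3\times10^{9}\ \mathrm{s}} \sim 10^{34}\ \mathrm{W} \sim 10^{41}\ \mathrm{erg\,s^{-1}},

which is of the same order as standard estimates of the power required to sustain the Galactic cosmic-ray population against escape from the Galaxy.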

The origin problem arises because magnetic fields on a variety of scales in the Galaxy cause particles at these energies to pursue tortuous paths, so that their arrival directions at Earth bear virtually no relationship to the directions of the sources. Solving the problem therefore requires other approaches. Structure in the energy spectrum could provide such a possibility.

Clues from the energy spectrum

The only feature of the cosmic-ray energy spectrum that researchers currently agree on is a steepening that starts in the region of 3–5 PeV. First observed by German Kulikov and George Khristiansen at Moscow University around 50 years ago (Kulikov and Khristiansen 1958), this so-called “knee” has been confirmed time and again from the 1960s onwards. It was not until 13 years ago, however, that we pointed out that the “knee” is too sharp for a conventional explanation in which the galactic magnetic field gradually “loses its grip” on the particles (Erlykin and Wolfendale 1997). We argued instead that it results from a dominant contribution from a single, nearby source. The idea is that particles from the single source provide a component that pokes through the background arising from an amalgam of many differing sources (figure 1).

The single-source model has had a rough ride, with most researchers being unwilling to accept that there could be “fine structure” in the spectrum caused by nuclei heavier than protons from this source. Our early analysis was based on extensive air-shower (EAS) data from a variety of EAS arrays. While these results are still valid, we have recently analysed new data from some 10 arrays, thus extending the reach to higher energies than before. Remarkably, and importantly, a new feature has appeared at about 70 PeV.

When the energy spectrum is plotted as log(E³I(E)) versus log E, a “knee” will appear as a peak and it is a new peak in the spectrum plotted in this way that is of interest (figure 2). It was first reported by the GAMMA collaboration led by Romen Martirosov, using the GAMMA EAS array of the A I Alikhanyan National Science Laboratory at Mount Aragats in Armenia (Garyaka et al. 2008). Our own survey shows that it is also present in most of the other reported spectra (Erlykin and Wolfendale 2010). It is interesting to note that our first paper on the single-source model showed a small excess at the level of 2.6σ just where the GAMMA collaboration finds its 70 PeV peak. However, we did not claim that this small peak was significant at the time.
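
The effect of this representation is easy to reproduce. The short Python sketch below uses a toy broken power law with invented spectral indices (roughly E^-2.7 below the knee and E^-3.1 above, values chosen only for illustration) to show how multiplying by E³ turns a knee into a peak:

import numpy as np
import matplotlib.pyplot as plt

E = np.logspace(5, 9, 400)      # energy in GeV; schematic range around the knee
E_knee = 3.0e6                  # ~3 PeV, indicative value only

# Toy broken power law, continuous at the knee.
I = np.where(E < E_knee, E ** -2.7, E_knee ** 0.4 * E ** -3.1)

plt.loglog(E, E ** 3 * I)       # in E^3 I(E) the spectrum rises, peaks at the knee, then falls
plt.xlabel("E [GeV]")
plt.ylabel("E^3 I(E) (arbitrary units)")
plt.show()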

We have now used these recent spectra to investigate the case for SNRs in general as the source of cosmic rays at energies below 100 PeV. The model for the acceleration of the cosmic rays predicts that those with charge Z should have a differential energy spectrum, with a negative slope of about 2, up to a maximum energy proportional to Z. Nuclei are conventionally grouped into the following nuclear bands: P, He, CNO, M(Ne, Mg, Si) and Fe (actually Fe and Ni). The spectrum expected at some distance from the single source differs from that at the SNR itself because of propagation effects, but these can be calculated. This is what we have done, assuming that the single source is Monogem, a “recent” SNR with an age of 85–115 ky, at a “local” distance of 250–400 pc (Erlykin and Wolfendale 2003).
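
Schematically, the assumed source spectrum for each nuclear group of charge Z is

\frac{\mathrm{d}N_Z}{\mathrm{d}E} \propto E^{-2} \quad \text{for } E \lesssim E_{\max}(Z) = Z\,E_{\max}(\mathrm{p}),

so the cut-offs of the P, He, CNO, M and Fe components line up in rigidity and the heavier groups carry the single-source contribution to correspondingly higher total energies.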

Figure 3 shows a synthesis of the 10 reported spectra from the EAS arrays from which the (predicted) smooth background, also shown, has been subtracted. The resulting “spectrum” is thus our estimate of the extra contribution from the single source. The figure also shows our fits of the individual single-source spectra in the different nuclear bands to the observations. Inevitably, there is no question of a perfect fit: although the He and Fe peaks seem well founded, those for CNO and P are less well established. Peaks for P and He have been seen in other experiments, however. The whole range is thus reasonably well represented.

Our calculations give the relative abundances at a fixed energy per particle of the various nuclear groups on ejection from the single source as: P(0.477), He(0.406), CNO(0.081), M(0.010) and Fe(0.026). Remarkably, with the exception of the M group, these abundances are close to those inferred for the ambient cosmic radiation at 10³ GeV, an energy where direct measurements are available. We interpret this as showing that the majority of the galactic sources are SNR, like Monogem, but of course with different ages and distances.

The search for confirmation

The identification of the peaks in figure 3 could be confirmed by searching for discontinuities in those entities that have given rise to estimates of the mean mass of the ambient cosmic radiation. However, such a search is bedevilled by two facts. First, it is in the nature of things that at any energy the mean mass of the single-source particles should be close to that of those injected for the ambient cosmic radiation. Second, the different analyses of the variety of EAS parameters used in deriving the mean masses give, notoriously, different results. Our conclusion, however, is that there is no evidence against our identifications.

These differences provide a happy hunting ground for searches for changes with increasing energy in the nature of the interactions between cosmic rays and nuclei in the atmosphere. It must also be said, however, as we have pointed out, that recent results from CERN show no significant change in at least some of the interaction characteristics over the range 0.4–26 PeV. This is just where the cosmic-ray energy spectrum has its knee; LHC data on forward physics are eagerly awaited.

Electronics experts connect in Aachen

Each year, the Topical Workshop on Electronics for Particle Physics (TWEPP) provides the opportunity for experts to come together to discuss electronics for particle-physics experiments and accelerator instrumentation. Established in 2007, it succeeds the workshops initiated in 1994 to focus on electronics for LHC experiments, but with a much broader scope. As the LHC experiments have now reached stable operating conditions, the emphasis is shifting further towards R&D for future projects, such as the LHC upgrades, the studies for the Compact Linear Collider and the International Linear Collider, as well as neutrino facilities and other experiments in particle- and astroparticle physics.

The latest workshop in the series, TWEPP-2010, took place on 20–24 September at RWTH Aachen University and attracted 190 participants, mainly from Europe but also from the US and Japan. It covered a wide variety of topics, including electronics developments for particle detection, triggering and data acquisition; custom analogue and digital circuits; optoelectronics; programmable digital logic applications; electronics for accelerator and beam instrumentation; and packaging and interconnect technology. The programme of plenary and parallel sessions featured 16 invited talks together with 63 oral presentations and 66 poster presentations selected from a total of 150 submissions – an indication of the attractiveness of the workshop concept. The meeting’s legacy as a platform for discussing common LHC electronics developments is reflected in the fact that several electronics working groups for the super-LHC (sLHC) project held their biannual meetings during the workshop, namely the Working Groups for Power Developments and for Optoelectronics, as well as the Microelectronics User Group. In addition, two new working groups, on Single Event Upsets and on the development of electronics in the emerging xTCA standard, had “kick-off” meetings during TWEPP-2010.

After a welcome and introduction to particle physics in the host country and the host institute (see box), the opening session continued with “Physics for pedestrians”, a talk by Patrick Michel Puzo of the Laboratoire de l’Accélérateur Linéaire, Orsay, in which he explained the Standard Model of particle physics, as well as experimental measurement techniques, to the audience of hardware physicists and engineers. DESY’s Peter Göttlicher went on to present the European X-ray Free Electron Laser project (XFEL) currently under construction at DESY. This fourth-generation light source will provide ultra-short flashes of intense and coherent X-ray light for the exploration of the structure and dynamics of complex systems, such as biological molecules. Dedicated two-dimensional camera systems, such as the Adaptive Gain Integrating Pixel Detector (AGIPD), are being developed to record up to 5000 images a second with a resolution of 1 megapixel. The session closed with a summary of the status of the LHC by CERN’s Ralph Assmann, who also discussed the expected and observed limitations and prospects for further increases in intensity, luminosity and beam energy at the LHC, as well as short- and long-term planning.

From ASICs to optical links

For the next three days, morning and afternoon sessions began with plenary talks, after which the audience separated into two parallel sessions. With 20 presentations, the session on application-specific integrated circuits (ASICs) was again by far the most popular, demonstrating the demand of chip designers for a forum to present and discuss their work. One increasingly important aspect in the next generation of experiments with high radiation levels is the mitigation of single-event effects (SEE), such as single event upsets (SEU), which are caused by the interaction of particles with the semiconductor material. Deep-submicron integrated circuit technologies with low power consumption are becoming increasingly sensitive to SEEs and this must be carefully taken into account at both the system level and the ASIC design level. Invited speaker Roland Weigand of the European Space Agency gave insight into the various approaches of SEE mitigation that are employed in space applications, where integrated circuits are exposed to solar and cosmic radiation.
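
One widely used mitigation technique – named here only as an illustration, since the talk surveyed several approaches – is triple modular redundancy, in which critical registers or logic are triplicated and a majority vote masks a single upset. A toy software illustration of the voting idea, in Python:

def majority(a: int, b: int, c: int) -> int:
    """Bitwise 2-of-3 majority vote over three redundant copies of a register."""
    return (a & b) | (a & c) | (b & c)

# Three copies of a 16-bit register; one copy suffers a single-event upset in bit 3.
reg_a = 0b1010_0110_0101_1100
reg_b = reg_a
reg_c = reg_a ^ (1 << 3)                         # the upset flips one bit in this copy

assert majority(reg_a, reg_b, reg_c) == reg_a    # the vote restores the correct value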

A relatively new development is the 3D integration of circuits, where several circuit layers are stacked on each other and interconnected, for example by through-silicon vias. The advantages include a reduction in chip area, reduced power consumption, a high interconnection density and the possibility of combining different processes in one device. Within particle physics, a possible future application is in the upgrades of the large silicon trackers of the LHC experiments. Kholdoun Torki from Circuits Multi-Projets, Grenoble, presented the plans for a 3D multiproject wafer run for high-energy physics, which allows several developers to share the cost of low-volume production by dividing up the reticle area.

The parallel session on “Power, grounding and shielding” focused mainly on novel power-provision schemes for upgrades of the LHC experiments, namely serial powering and DC–DC conversion. An increase in the number of readout channels and the possible implementation of additional functionality, such as a track trigger, in the tracker upgrades of ATLAS and CMS will lead to higher front-end power consumption and consequently larger power losses in the supply cables (already installed) and to an excessive increase in the material budget of power services. New ways to deliver the power therefore need to be devised. Both of the new schemes discussed solve this problem by lowering the current to be delivered. In serial powering, this is done by daisy-chaining many detector modules, while DC–DC conversion schemes provide the power at a higher voltage and lower current, with on-detector voltage conversion. These topics were further expanded in the session of the Working Group for Power Developments.
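
The benefit of both schemes follows from elementary cable arithmetic (generic reasoning, not numbers from the talks): the loss in the supply cables is

P_{\rm loss} = I^2 R_{\rm cable},

so delivering the same front-end power at n times the voltage – whether via an on-detector DC–DC converter or by chaining n modules in series – reduces the cable current by a factor of n and the cable loss by a factor of n².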

Another parallel session was devoted to the topic of optoelectronics and optical links. Data transmission via optical links is already standard in the LHC experiments because such links do not suffer from noise pick-up and contribute less material than the classic copper wires. In the session and in the following working-group meeting, presentations focused on experience with installed systems as well as on new developments, in particular for the Versatile Link project, which will develop high-speed optical-link architectures and components suitable for deployment at the sLHC. In an inspiring talk, invited speaker Mark Ritter of IBM expanded on optical technologies for data communication in large parallel systems. He explained that scaling in chip performance is now constrained by limitations on electrical communication bandwidth and power dissipation and he described how optical technologies can help overcome these constraints. The combination of silicon nanophotonic transceivers and 3D integration technologies might be the way forwards, with a photonic layer integrated directly into the chip such that on-board data transmission between the individual circuit layers is performed optically.

First LHC experience

A highlight of this year’s workshop was the topical session devoted to the performance of LHC detectors and electronics under the first beam conditions. Gianluca Aglieri Rinella of CERN presented the experience with ALICE, a detector designed specifically for the reconstruction of heavy-ion collisions, where high particle-multiplicities and large event sizes are expected. He showed that more than 90% of the channels are alive for most of the ALICE detector subsystems, with the data-taking efficiency being around 80%. The ALICE collaboration’s goal for proton–proton collisions is to collect a high-quality, minimum-bias sample with low pile-up in the time projection chamber, corresponding to an interaction rate of 10 kHz. For this reason, the peak luminosity at ALICE is deliberately reduced during proton–proton running.

Thilo Pauly of CERN presented the ATLAS report. He showed that more than 97% of the channels are operational for all detector systems and that 94% of the delivered data are good for physics analysis. The ATLAS momentum scale for tracks at low transverse momentum is measured with a precision of a few per mille, while the energy scale for electromagnetic showers is known from the reconstruction of neutral pions to better than 2%. The experience of CMS, presented by Anders Ryd of Cornell University, is similarly positive, with all subsystems 98% functional and a data-taking efficiency of 90%. He explained that the collaboration struggled for a while with the readout of high-occupancy beam-induced events in the pixel detector – the main reason for detector downtime – but managed to solve the problem.

Last but not least, Karol Hennessy of the University of Liverpool reported on LHCb, which is optimized to detect decays of beauty and charm hadrons for the study of CP violation and rare decays. This experiment has had a detector uptime of 91% and a fraction of working channels above 99% in most subdetectors. One specialty is the Vertex Locator – a silicon-strip detector consisting of retractable half-discs whose innermost region is only 8 mm away from the beam. This detector reaches an impressive peak spatial resolution of 4 μm.

Posters and more

The well attended poster session took place in the main lecture hall and featured 66 posters. Discussions were so lively that the participants had to be reminded to stop because they would otherwise miss the guided city tour. The workshop dinner took place in the coronation hall of the town hall, where participants were welcomed by the mayor of Aachen. The dinner saw the last speech by CERN’s François Vasey as Chair of the Scientific Organizing Committee. He became Workshop Chair in 2007, shaping the transition to TWEPP, and after four successful workshops he now passes the baton to Philippe Farthouat, also of CERN. The next workshop in the series will take place on 26–30 September 2011 in Vienna.

TWEPP-10 was organized by the Physikalisches Institut 1B, RWTH Aachen University, with support from Aachen University, CERN and ACEOLE, a Marie Curie Action at CERN funded by the European Commission under the 7th Framework Programme.

CERN’s ISR: the world’s first hadron collider

The concept of a particle collider was first laid down by Rolf Widerøe in a German patent that was registered in 1943 but not published until 1952. It proposed storing beams and allowing them to collide repeatedly so as to attain a high energy in the centre-of-mass. By 1956, the first ideas for a realistic proton–proton collider were being publicly discussed, in particular by Donald Kerst and Gerard O’Neill. At the end of the same year, while CERN’s Proton Synchrotron (PS) was still under construction, the CERN Council set up the Accelerator Research Group, which from 1960 onwards focused on a proton–proton collider. By 1962, the group had chosen an intersecting ring layout for the collider over the original concept of two tangential rings because it offered more collision points. Meanwhile, in 1961, CERN had been asked by Council to include a 300 GeV proton synchrotron in the study.

In 1960 construction began on a small proof-of-principle 1.9 MeV electron storage ring, the CERN Electron Storage and Accumulation Ring (CESAR). This was for experimental studies of particle accumulation (stacking). This concept, which had been proposed by the Midwestern Universities Research Association (MURA) in the US in 1956, would be essential for obtaining sufficient beam current and, in turn, luminosity. The design study for the Intersecting Storage Rings (ISR) was published in 1964 – involving two interlaced proton-synchrotron rings that crossed at eight points.

After an intense and sometimes heated debate, Council approved the principle of a supplementary programme for the ISR at its meeting in June 1965. The debate was between those who favoured a facility to peep at interactions at the highest energies and those who preferred intense secondary beams with energies higher than that provided by the PS. Those who were against the ISR were also afraid of the leap in accelerator physics and technology required by this venture, which appeared to them as a shot into the dark.

France made land available for the necessary extension to the CERN laboratory and the relevant agreement was signed in September 1965. The funds for the ISR were allocated at the Council meeting in December of the same year and Kjell Johnsen was appointed project leader. In the end, Greece was the only one of the 14 member states whose budget did not allow it to participate in the construction. In parallel, the study of a 300 GeV proton synchrotron was to be continued; this would eventually lead to the construction of the Super Proton Synchrotron (SPS) at CERN.

Figure 1 shows the ISR layout with the two rings intersecting at eight points at an angle of 15°. To create space for straight sections and to keep the intersection regions free of bulky accelerator equipment, the circumference of each ring was set at 943 m, or 1.5 times that of the PS. Out of the eight intersection regions (I1–I8) six were available for experiments and two were reserved for operation (I3 for beam dumping and I5 for luminosity monitoring).

The ISR construction schedule benefited from the fact that the project had already been studied for several years and many of the leading staff had been involved in the construction of the PS. The ground-breaking ceremony took place in autumn 1966 and civil engineering started for the 15-m wide and 6.5-m high tunnel, using the cut-and-fill method at a level 12 m above the PS to minimize excavation. The construction work also included two large experimental halls (I1 and I4) and two transfer tunnels from the PS to inject the counter-rotating beams. In parallel, the West Hall was built for fixed-target physics with PS beams; ready in July 1968, it was used for assembling the ISR magnets. The civil engineering for the ISR rings was completed in July 1970, including the earth shielding. The production of the magnet steel started in May 1967 and all of the major components for the rings had been ordered by the end of 1967. The first series magnets arrived in summer 1968 and the last magnet unit was installed in May 1970.

Pioneering performance

The first proton beam was injected and immediately circulated in October 1970 in Ring 1. Once Ring 2 was available, the first collisions occurred on 27 January 1971 at a beam momentum of 15 GeV/c (figure 2, p27). By May, collisions had taken place at 26.5 GeV/c per beam – the maximum momentum provided by the PS – which was equivalent to protons of 1500 GeV hitting a fixed target.
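
The quoted equivalence follows from comparing centre-of-mass energies (neglecting the proton mass against the beam energy): a symmetric collider gives \sqrt{s} = 2E, while a fixed-target collision gives s \approx 2 E_{\rm lab}\, m_p c^2, so

E_{\rm lab} \simeq \frac{2E^2}{m_p c^2} = \frac{2\times(26.5\ \mathrm{GeV})^2}{0.938\ \mathrm{GeV}} \approx 1500\ \mathrm{GeV}.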

In the first year of operation, the maximum circulating current was already 10 A, the luminosity was as high as 3 × 10²⁹ cm⁻² s⁻¹ and the loss-rates at beam currents of up to 6 A were less than 1% per hour (compared with a design loss-rate of 50% in 12 h). Happily, potentially catastrophic predictions that the beams would grow inexorably and be lost – because, unlike in electron machines, the stabilizing influence of synchrotron radiation would be absent – proved to be unfounded.

The stacking in momentum space, pioneered by MURA, was an essential technique for accumulating the intense beams. In this scheme, the beam from the PS was slightly accelerated by the RF system in the ISR and the first pulse deposited at the highest acceptable momentum on an outer orbit in the relatively wide vacuum chamber. Subsequent pulses were added until the vacuum chamber was filled up to an orbit close to the injection orbit, which was on the inside of the chamber. This technique was essential for the ISR and had been proved experimentally to work efficiently in CESAR.

The design luminosity was achieved within two years of start-up and then increased steadily, as figure 3 shows. It was particularly boosted in I1 (originally for one year in I7) and later in I8 by low-beta insertions that decreased the vertical size of the colliding beams. The first low-beta insertion, which consisted largely of quadrupoles borrowed from the PS, DESY and the Rutherford Appleton Laboratory, increased the luminosity by a factor of 2.3. Later, for the second intersection, more powerful superconducting quadrupoles were developed at CERN but built by industry. This increased the luminosity by a factor of 6.5, resulting in a maximum luminosity of 1.4 × 10³² cm⁻² s⁻¹. This remained a world record until 1991, when it was broken by the Cornell electron–positron storage ring.
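
The role of the low-beta insertions can be seen from the luminosity of two unbunched beams crossing at an angle α, usually written for the ISR as

L = \frac{I_1 I_2}{e^2 c\,\tan(\alpha/2)\,h_{\rm eff}},

where I₁ and I₂ are the circulating currents and h_eff is the effective height of the overlapping beams at the crossing; squeezing h_eff with the insertion quadrupoles raises L directly.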

The stored currents in physics runs were 30–40 A (compared with 20 A nominal); the maximum proton current that was ever stored was 57 A. Despite these high currents, the loss rates during physics runs were typically kept to one part per million per minute, which provided the desired low background environment for the experiments. Beams of physics quality could last 40–50 hours.

Because the ISR’s magnet system had a significant reserve, the beams in the two rings could be accelerated to 31.4 GeV/c by phase displacement, a technique that was also proposed by MURA. This consisted of moving empty buckets repeatedly through the stacked beam. The buckets were created at an energy higher than that of the most energetic stored particles and moved through the stack to the injection orbit. In accordance with longitudinal phase-space conservation, the whole stack was accelerated and the magnet field was simultaneously increased to keep the stack centred in the vacuum chamber. This novel acceleration technique required the development of a low-noise RF system operating at a low voltage, together with fine control of the high-stability magnet power supplies.

The ISR was also able to store deuterons and alpha particles as soon as they became available from the PS, leading to a number of runs with p–d, d–d, p–α and α–α collisions from 1976 onwards. For CERN’s antiproton programme, a new beamline was built from the PS to Ring 2 for antiproton injection and the first proton–antiproton runs took place in 1981. During the ISR’s final year, 1984, the machine was dedicated to single-ring operation with a 3.5 GeV/c antiproton beam.

The low loss-rates observed for the gradually rising operational currents were only achievable through a continuous upgrading of the ultra-high vacuum system, which led to a steady decrease in the average pressure (figure 4). The design values for the vacuum pressure were 10⁻⁹ torr outside the intersection regions and 10⁻¹¹ torr in these regions to keep the background under control. The initial choice of a stainless-steel vacuum chamber bakeable to 300°C turned out to be the right one but nevertheless a painstaking and lengthy programme of vacuum improvement had to be launched. The vacuum chambers were initially baked to only 200°C and had to be re-baked at 300°C and, later, at 350°C. Hundreds of titanium sublimation pumps needed to be added and all vacuum components had to be glow-discharge cleaned in a staged programme. These measures limited the amount of residual gas, and hence the production of ions from beam–gas collisions, as well as the out-gassing that occurred when positive ions impinged on the vacuum chamber walls after acceleration through the electrostatic beam potential.

The electrons created by ionization of the residual gas were often trapped in the potential well of the coasting proton beam. This produced an undesirable focusing and coupling between the electron cloud and the beam, which led to “e–p” oscillations. The effect was countered by mounting clearing electrodes in the vacuum chambers and applying a DC voltage to suppress potential wells in the beam.

By 1973 the ISR had suffered two catastrophic events in which the beam burnt holes in the vacuum chamber. Collimation rings were then inserted into the flanges to protect the bellows. The vacuum and engineering groups also designed and produced large, thin-walled vacuum chambers for the intersection regions. The occasional collapse of such a chamber would leave a spectacular twisted sculpture and require weeks of work to clean the contaminated arcs.

While the ISR broke new ground in many ways, the most important discovery in the field of accelerator physics was that of Schottky noise in the beams – a statistical signal generated by the finite number of particles, which is well known to designers of electronic tubes. This shot noise not only has a longitudinal component but also a transverse component in the betatron oscillations (the natural transverse oscillations of the beam). This discovery opened new vistas for non-invasive beam diagnostics and active systems for reducing the size and momentum-spread of a beam.
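
The size of the signal follows from simple counting statistics (the standard estimate): a coasting beam of N particles with revolution frequency f₀ carries a DC current I = N e f_0, while the incoherent Schottky current in each revolution-harmonic band has an rms value of about

I_{\rm rms} = e f_0 \sqrt{2N},

so the relative fluctuation falls only as \sqrt{2/N} – tiny, but measurable with sensitive pick-ups even for the ISR’s very large N.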

The longitudinal Schottky signal made it possible to measure the current density in the stack as a function of the momentum (transverse position) without perturbing it. These scans clearly showed the beam edges and any markers (figure 5). A marker could be created during stacking by making a narrow region of low current-density or by losses on resonances.

The transverse Schottky signals gave information about how the density of the stack varied with the betatron frequency, or “tune”, which meant that stacking could be monitored in the tune diagram and non-linear resonances could be avoided. During stacking, space–charge forces increase and change the focusing experienced by the beam. Using the Schottky scans as input, the magnet system could be trimmed to compensate the space–charge load. A non-invasive means to verify the effect of space charge and to guide its compensation had suddenly become available.

The discovery of the transverse Schottky signals had another, arguably more important, impact: the experimental verification of stochastic cooling of particle beams. This type of cooling was invented in 1972 by Simon van der Meer at CERN. Written up in an internal note, it was first considered a curiosity without any practical application. However, Wolfgang Schnell realized its vast potential and actively looked for the transverse Schottky signals at the ISR. This was decisive for the resurrection of van der Meer’s idea from near oblivion and its experimental proof at the ISR (figure 6). Towards the end of the ISR’s life, stochastic cooling was routinely used on antiproton beams to increase the luminosity in antiproton–proton collisions by counteracting the gradual blow-up of the antiproton beam through scattering with residual gas as well as resonances.

Stochastic cooling was the decisive factor in the conversion of the SPS to a proton–antiproton collider and in the discovery there in 1983 of the long-sought W and Z bosons. This led to the award of the Nobel Prize in Physics to Carlo Rubbia and van der Meer the following year. The technique also became the cornerstone for the success of the more powerful Tevatron proton–antiproton collider at Fermilab. In addition, CERN’s low-energy antiproton programmes in the Low Energy Antiproton Ring and the Antiproton Decelerator, as well as similar programmes at GSI in Germany and at Brookhaven in the US, owe their existence to stochastic cooling. The extension to bunched beams and to optical frequencies makes stochastic cooling a basic accelerator technology today.

A lasting impact

With its exceptional performance, the ISR dispelled the fears that colliding beams were impractical and dissolved the reluctance of physicists to accept the concept as viable for physics. In addition to stochastic cooling, the machine pioneered and demonstrated large-scale, ultra-high vacuum systems as well as ultra-stable and reliable power converters, low-noise RF systems, superconducting quadrupoles and diagnostic devices such as a precise DC current transformer and techniques such as vertically sweeping colliding beams through each other to measure luminosity – another of van der Meer’s ideas.

The ISR had been conceived in 1964 in an atmosphere of growing resentment against the high costs of particle physics, and it was in a similar climate in the early 1980s that the rings were closed down to provide financial relief for the new Large Electron–Positron collider at CERN. Under the political pressures of the 1960s, the ISR had been fought over and finally accepted as a cost-efficient gap-filler because the financial and political climate was not ready for a 300 GeV machine. However, had CERN built the 300 GeV accelerator instead of the ISR, then the technology of hadron colliders would have been seriously delayed. Instead, the decision to build the ISR opened the door to collider physics and allowed an important expansion in accelerator technology that would affect everyone for the better, including the 300 GeV project, the proton–antiproton project and eventually the LHC.

Evolution and revolution: detectors at the ISR

The committee for the ISR experimental programme – the ISRC – started its work in early 1969, with the collider start-up planned for mid-1971. Two major lines of experimental programmes emerged: “survey experiments” would aim to understand known features (in effect, the Standard Model of the time) in the new energy regime, while other experiments would aim at discoveries. This was surprisingly similar to the strategy 40 years later for the LHC. But in reality the two approaches are worlds apart.

Hadronic physics in the late 1960s was couched in terms of thermodynamical models and Regge poles. The elements of today’s Standard Model were just starting to take shape; the intermediate vector bosons (W⁺, W⁻ and W⁰, as the Z⁰ was called then) were thought to have masses in the range of a few giga-electron-volts. The incipient revolution that was to establish the Standard Model was accompanied by another revolution in experimentation. Georges Charpak and collaborators had demonstrated the concept of the multiwire proportional chamber (MWPC) just one year earlier, propelling the community from a photographic-analogue into the digital age with his stroke of genius. Nor should the sociological factor be forgotten: small groups, beam exposures of a few days to a few weeks, as well as quick and easy access to the experimental apparatuses – all characterized the style of experimentation of the time.

These three elements – limited physics understanding, collaboration sociology and old and new experimental methods – put their stamp on the early programme. From today’s perspective, particle physics was at the dawn of a “New Age”. I will show how experimentation at the ISR contributed to the “New Enlightenment”.

1971–1974: the ‘brilliant, early phase’

Maurice Jacob, arguably one of the most influential guiding lights of the ISR programme, called this first period the "brilliant, early phase", in reference to its rich physics harvest (Jacob 1983). The lasting contributions include: the observation of the rising total cross-section; measurements of elastic scattering; inclusive particle production and evidence for scaling; the observation of high-pT production; and the non-observation of free quarks. Several experimental issues of the period deserve particular mention.

CCdet2_01_11

The experimental approach matched the "survey" character of this first period. The ISR machine offered tempting flexibility, with operation at three – later four – collision energies as well as with asymmetric beam energies. Requests for low- or high-luminosity running and special beam conditions could all be accommodated. A rapid switch-over from one experimental set-up to another at the same interaction point was also one of the guiding design principles. Notwithstanding this survey character, the early period saw several imaginative contributions to experimentation – some with a lasting influence.

The devices known today as "Roman pots" were invented at the ISR with the aim of placing detectors close to the circulating beams – a requirement for elastic-scattering experiments in the Coulomb-interference region. During beam injection and set-up these detectors had to be protected from high radiation doses by retracting them into a "stand-by" position. The CERN-Rome group (R601, for experiment 1 at intersection 6 (I6)), in collaboration with the world-famous ISR Vacuum group, developed the solution: the detectors were housed in "pots" made from thin stainless-steel sheets, which could be moved remotely into stand-by mode or into one of several operating positions. This technique has been used at every hadron collider since, including the LHC.

The first 4π-detector was installed by the R801 Pisa-Stony Brook collaboration. It used elaborate scintillator hodoscopes, providing 4π-coverage with high azimuthal and polar-angle granularity, well adapted to the measurement of the rising total cross-section and of short-range particle correlations. The Split-Field Magnet (SFM), ultimately installed at intersection 4 (I4), was the first general ISR facility. Proposed by Jack Steinberger in 1969 as the strategy for exploring terra incognita at the ISR with an almost-4π magnetic facility, the SFM was groundbreaking in many ways.

CCdet3_01_11

The SFM's unconventional magnet topology – two dipole magnets of opposite polarity – addressed two issues: minimizing the magnetic interference with the two ISR proton beams and providing magnetic analysis preferentially in the forward region, the place of physics interest according to the prevailing understanding. It made daring use of the new MWPC technology for tracking, propelling this technique within a few years from 10×10 cm² prototypes to hundreds of MWPCs covering hundreds of square metres, with almost 100,000 electronic read-out channels. The SFM became operational towards the end of 1973 – a fine example of what CERN can accomplish with a meeting of minds and the right leadership. True to its mission, this facility was used by many collaborations throughout the whole life of the ISR, changing the detector configuration or adding detection equipment as necessary. The usefulness of a dipole magnetic field at a hadron collider was later beautifully vindicated by the magnetic spectrometer of the UA1 experiment in the 1980s.

The Impactometer was the name given by Bill Willis to a visionary 4π detector, proposed in 1972 (Willis 1972). It anticipated many physics issues and detection features that would become “household” concepts in later years. The 4π-coverage, complete particle measurements and high-rate capabilities were emphasized as the road to new physics. One novel feature was high-quality electromagnetic and hadronic calorimetry, thanks to a futuristic concept: liquid-argon calorimetry. In a similar spirit, an Aachen-CERN-Harvard-Genoa collaboration, with Carlo Rubbia and others, proposed a 4π-detector using total-absorption calorimetry but based on more conventional techniques: among the three options evaluated were iron/scintillator, water Cherenkovs and liquid-scintillator instrumentation. However, the ISRC swiftly disposed of both this proposal and the Impactometer.

CCdet4_01_11

The discovery of high-pT π0 production at rates much higher than anticipated was one of the most significant early results at the ISR and profoundly advanced the understanding of strong interactions. Unfortunately, this physics "sensation" also proved to be an unexpected, ferocious background to other new physics. Discovered by the CERN-Columbia-Rockefeller (CCR, R103) collaboration in 1972, the high rate of π0s masked the search for electrons in the region of a few giga-electron-volts as a possible signal for new physics – for example, from the decay of an intermediate vector boson into e+e– pairs – and ultimately prevented this collaboration from discovering the J/ψ.

The reaction to this “sensation plus dilemma” was immediate, resulting in several proposals for experiments, all of which were capable of discoveries – as their later results demonstrated. These more evolved experimental approaches brought a new level of complexity and longer lead-times from proposal to data-taking. However, the fruition of these efforts came a few years too late to make the potentially grand impact that was expected from and deserved by the ISR.

In 1973, the CCOR collaboration (CCR plus Oxford) proposed the use of a superconducting solenoid, equipped with cylindrical drift chambers for tracking and lead-glass walls for photon and electron measurements (R108). The Frascati-Harvard-MIT-Naples-Pisa collaboration proposed an instrumented magnetized-iron toroid for studies of muon pairs and associated hadrons. Originally intended for installation in I8 (as R804), it was finally installed, for scheduling reasons, in I2 (as R209). The SFM facility was complemented with instrumentation (Cherenkov counters and electromagnetic calorimetry) for electron studies, and later for charm and beauty studies.

CCdet5_01_11

The 1972 Impactometer proposal was followed by a reduced-scale, modular proposal concentrating on e+e– and γ-detection, submitted in November 1973 by a collaboration of Brookhaven, CERN, Saclay, Syracuse and Yale. It combined liquid-argon technology for electromagnetic calorimetry with novel transition-radiation detectors for electron/hadron discrimination. (The latter consisted of lithium-foil radiators and xenon-filled MWPCs, with two-dimensional read-out, as the X-ray detectors.) The advanced technologies proposed led to the cautious approval of the detector as R806T (T for test) in June 1974, with a gradual, less-than-optimal build-up.

1974–1977: learning the lessons

The first "brilliant period" ended with a clarion call for the particle-physics community at large and sobering soul-searching for the ISR teams: the discovery of the J/ψ in November 1974. The subsequent period brought a flurry of activity, with the initial priority being to rediscover the J/ψ at the ISR.

First came R105 (CCR plus Saclay), which employed threshold Cherenkov counters and electromagnetic calorimeters, permitting electron identification at the trigger level. Second, an overwhelmingly clear physics justification emerged for a new magnetic facility with an emphasis on high-pT phenomena, including lepton detection. Several groups, including teams from the UK and Scandinavia, were studying a facility based on a large superconducting solenoid, while a team around Antonino Zichichi explored the potential of a toroidal magnetic facility. The inevitable working group, constituted by the ISRC and chaired by Zichichi, received the remit to motivate and conceptualize a possible new magnetic facility.

With exemplary speed – January to March 1976 – the working group documented the physics case and explored magnets and instrumentation, but shied away from making a recommendation, leaving the choice between toroid and solenoid to other committees. It is a tribute to the ISRC that it made a clear recommendation for a solenoid facility with large, open structures in the return yoke for additional instrumentation (particle identification and calorimetry). The toroidal geometry, while recognized as an attractive magnet topology for proton–proton collider physics, was considered too experimental a concept for rapid realization. It would take another 30 years before a major toroid magnet was built for particle physics, namely the ATLAS muon-spectrometer toroid.

The CERN Research Board did not endorse the ISRC recommendation, possibly – I am speculating – out of concern about the long-term impact on the ISR schedule and about the level of support among the user community. Despite this negative outcome, the working group had a significant influence on CERN's research agenda. It provided an assessment of state-of-the-art collider experimentation, and many of its members would use this work to shape the UA1 and UA2 facilities of the SppS programme, which was proposed at about the same time.

Within weeks of the negative decision from the Research Board, some key members of the working group banded together and submitted a new proposal for a fully instrumented facility built around Tom Taylor's innovative Open Axial Field Magnet (warm Helmholtz coils with conical poles), as the basis for the Axial Field Spectrometer (AFS). The timing was just right: it took only three months from the submission of the proposal by the CERN-Copenhagen-Lund collaboration in January 1977 to the ISRC recommendation and Research Board approval as R807 in April, thanks to the strong and courageous support of the then Research Director, Paul Falk-Vairant, and the committees.

CCdet6_01_11

The end of this period also coincided with a turning point in our understanding of hadronic interactions. The early view of “soft” hadronic interactions, limited to low-pT phenomena, shaped the initial programme. Ten years later, hadrons were still complicated objects but the point-like substructure had been ascertained. Hard scattering became the new probe and simplicity was found at large pT with jets, photons and leptons. This marked a remarkable “about-turn” in our approach towards hadron physics, which found its expression in the second half of the ISR experimentation and exploitation.

1977–1983: maturity

The shock of 1974, followed by the debates on physics and detector facilities in 1976, focused the minds of the various players. Experimental programmes were being prepared for (relatively) rare, high-pT phenomena in a variety of manifestations: leptons, photons, charmed particles, intermediate vector bosons (with a sensitivity reaching beyond 30 GeV/c²) and jets. This strategy was vindicated by the discovery of the Υ at Fermilab in July 1977 – yet another cruel blow to the ISR, particularly considering that the R806 collaboration saw the first evidence for the Υ at the ISR in November 1977 (Cobb et al. 1977).

The versatility of the ISR and the incipient SppS development also brought proton–antiproton collisions and light-ion (d, α) physics to the fore. A multifaceted and promising programme – confirmed by the 2nd ISR Physics Workshop in September 1977 – was being put in place. By early 1978 the efforts started in 1973 and 1974 were bearing their first fruits: the R108 collaboration reported its first results at the 1978 Tokyo Conference; the R209 collaboration had its experiment completed by the end of 1977; R806 had been completed by early 1976; and R807 was building up to a first complete configuration towards the end of 1979. A walk around the ISR ring would have shown the diversity of approaches being adopted:

• I1 was home for the R108 collaboration using an advanced, thin (1 radiation length) 1.5 T superconducting solenoid, which would become its workhorse for the subsequent six years. This was instrumented with novel – at the time – cylindrical drift chambers inside and lead-glass electromagnetic-shower detectors outside. Several upgrades brought higher sensitivity through the addition of shower counters inside the solenoid, which resulted in full azimuthal coverage both for charged particles and for photons, as well as higher collision rates, as provided by the inventive ISR teams in the form of warm, low-beta quadrupoles for stronger beam focusing (p27).

• I2 was truly complementary to I1, with the R209 (CERN-Frascati-Harvard-MIT-Napoli-Pisa) collaboration betting on muons and magnetized steel toroids, and aiming at a dimuon-mass sensitivity beyond 30 GeV/c². This was combined with a large-acceptance hadron detector, based on scintillator hodoscopes, for hadron-correlation studies.

• I4 was where the SFM showed its strength as a "user facility", accommodating the 22nd ISR experiment, R422, towards the end of the ISR's life. The open magnet structure invited many groups to add equipment for dedicated low- to high-pT physics, with remarkable contributions to charm physics and candidates for the Λb.

• I6 explored physics in the forward region with an unusual magnet, known as the “Lamp Shade”. It also had considerable emphasis on charm particles.

• I7 was reserved for "exotica". In the late 1970s, a group operated a streamer chamber there as a rehearsal for what would later become UA5 at the SppS. The last experiment to take data – after the official closure of the ISR on 23 December 1983 – was R704, which used 3.75 GeV/c antiprotons colliding with an internal hydrogen gas-jet to perform charmonium spectroscopy.

• I8 became the home of R806, whose finest hour was the discovery of prompt γ production, the golden test channel for perturbative QCD. It entered into a rich symbiotic relationship with the nascent AFS (R807) between the end of 1979 and late 1981, when all of R807 except the uranium/scintillator calorimeter was installed. After a considerable struggle to obtain the uranium plates, this advanced (and adventurous) hadron calorimeter was finally completed by early 1982. One of its significant results was the first measurement of the jet-production cross-section at ISR energies, in 1982, consistent with QCD predictions. With the completion of the 2π calorimeter coverage, R806 finally had to yield its place, morphing into two novel photon detectors (NaI crystals with photodiode read-out), provided by the Athens-Brookhaven-CERN-Moscow collaboration (R808) and placed on opposite walls of the uranium/scintillator calorimeter. In its final years, the ISR machine teams integrated superconducting low-beta quadrupoles, providing peak luminosities in excess of 10³² cm⁻² s⁻¹ – a superb rehearsal for the LHC.


The final year, 1983, saw a valiant struggle between the physics communities, hell-bent on extracting the most physics from this unique machine – proton–proton, light ions, proton–antiproton operation with a total of almost 5000 hours of physics delivered – and a sympathetic, yet firm director-general, Herwig Schopper, who presided over the demise of the ISR. In the last session of the ISRC he not only paid tribute to the rich physics harvest but also emphasized the important and lasting contribution of the ISR to experimentation at colliders – or, in the words of one of today’s most brilliant theorists, Freeman Dyson: “New directions in science are launched by new tools more often than by new concepts. The effect of a concept-driven revolution is to explain old things in new ways. The effect of a tool-driven revolution is to discover new things that have to be explained!”

• I am grateful to M Albrow, G Bellettini, L Camilleri and W Willis for discussions and careful reading.

Physics in collision

CCphy1_01_11

It is difficult to imagine a greater contrast than that between the particle detectors installed at the ISR, when the first proton–proton collisions took place 40 years ago, and those ready for the first collisions at the LHC in 2009. Several experiments were waiting in the wings, but in January 1971 just a few simple scintillation counters were in place to detect the first collisions at the ISR, while an oscilloscope trace showed left-moving and right-moving beam halo and some left–right coincidence signals from collisions.

The ISR was in many ways a “transitional machine”, a bridge between relatively low-energy, fixed-target accelerators and today’s extremely high-energy colliders, as well as between detectors based largely on scintillation and Cherenkov counters, spark chambers or bubble chambers and today’s (almost) full-solid-angle trackers, calorimeters and muon detectors that record gigabytes of data per second. For example, the last large ISR experiment, the Axial Field Spectrometer (AFS), pictured right, with its full-azimuth drift chamber and uranium-scintillator calorimeter, bore no resemblance to any of the first-generation experiments but had much in common with the detectors found in later colliders. Also, from the theoretical point of view, the decade of the ISR saw the transition from confusion to today’s Standard Model, even though other machines made some dramatic key discoveries – charm, the W and Z bosons, and the third family of quarks and leptons.

CCphy2_01_11

Before the start of the ISR, the idea that fractionally charged quarks could be produced there led to a special session of the ISR Committee (ISRC-70-34) that reviewed eight quark-search proposals, of which three were "encouraged". It was later established that fewer than one charged particle in 10¹⁰ has a charge of 1/3 or 2/3. It would be a stretch to claim that this was an "observation of quark confinement", but being at a higher energy than other accelerator experiments and with much greater sensitivity than cosmic-ray studies, the ISR played a role in our current belief that free quarks do not exist outside hadronic matter. However, quarks can still be "seen" confined inside hadrons – as the deep-inelastic electron-scattering experiments at SLAC discovered in 1968.

In 1971, today's theory of strong interactions, QCD, was also "waiting in the wings"; theorists were groping towards the light. Simple (experimentally, if not theoretically) two-body reactions such as proton–proton (pp) elastic scattering or π– + p → π0 + n were described by Regge theory, which was based on the sound principles of unitarity (no probabilities higher than 1.0), analyticity (no instantaneous changes) and the crossing symmetry ("what goes in can come out") of scattering amplitudes. While Regge theory is still a more useful approach than QCD for those reactions, calculations were difficult because the strong interaction between hadrons really is strong, so the expansions do not converge. It was also clear that at the higher ISR collision energies – jumping from the 28 GeV beams of the Proton Synchrotron (PS) at CERN and the Alternating Gradient Synchrotron (AGS) at Brookhaven to an equivalent beam energy of 2000 GeV – many hadrons could be created, and that Regge theory had little to say about this except for certain "inclusive" reactions, discussed below.

At the first ISRC meetings in 1968 and 1969, the decision was taken to devote one of the eight intersection regions to a "large, general-purpose magnet system". Three systems had been proposed and a working group was asked to make a rapid decision. The choice fell on the Split Field Magnet (SFM) – primarily because its field was strong and simple (a dipole) in the forward directions, where most particles would be produced. Unfortunately, the field was zero at 90° and, with pole pieces above and below the beams, it was unsuitable for physics at high transverse momentum, pT. By 1978 the SFM had been upgraded with greatly improved detectors, but it remained focused on forward and diffractive particle production.

Hadronic diffraction at high energies, the simplest example being elastic scattering, is described in Regge theory as arising mainly from the exchange of a pomeron between the scattering protons. This has quite different properties from other, virtual-meson (or "Reggeon") exchanges. Before the ISR, the total pp cross-section was known to decrease with energy, as it did for πp (but not for K+p). The early discovery that it rises (as in figure 1) came as a surprise to many, although a rise had been predicted – if, and only if, the pomeron is an allowed exchange. Today we take it for granted that the total pp cross-section rises with energy, but at the time the rise led to much experimental and theoretical activity: does the proton become more opaque? Or larger? Or both? Beautiful experiments, for example by the CERN-Rome group that developed "Roman pots" to place detectors very close to the circulating beam, showed that the slope (in momentum transfer, t) of elastic scattering increases with energy. Thus protons in effect become larger, but they also become more opaque. Roman pots have been used at all subsequent hadron colliders, including the LHC.
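
In the Regge language of the time, these observations are often summarized schematically as follows (an illustrative parametrization, with B0, α′P, s0 and the pomeron intercept αP(0) treated as fit parameters rather than values quoted in this article):

\[
\frac{d\sigma_{\rm el}}{dt} \simeq \left.\frac{d\sigma_{\rm el}}{dt}\right|_{t=0} e^{B(s)\,t}, \qquad
B(s) \simeq B_0 + 2\alpha'_P \ln\frac{s}{s_0}, \qquad
\sigma_{\rm tot} \sim s^{\,\alpha_P(0)-1} .
\]

A growing slope B corresponds to a growing effective proton radius (the "shrinkage" of the forward peak), while an intercept slightly above unity gives a slowly rising total cross-section.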

CCphy3_01_11

The first ISR experiments were mostly concerned with strong interactions at large distances, or small momentum transfers. On the menu, in addition to searches for free quarks, monopoles and weak vector bosons, were elastic scattering and low- and high-multiplicity final states. How could such complicated final states be handled experimentally? A popular approach, still common today, was to measure the angular and momentum distributions of a single particle from each collision and ignore all of the others – the so-called "inclusive" single-particle spectra. As mentioned, Regge theory could be adapted to describe such data, but only at low pT. Experiment R101 (intersection 1, experiment 1) was simplicity itself: literally a toy train with photographic emulsions in each wagon. When colliding beams were established, it was shunted alongside the collision region and left there to measure the angular distribution of produced particles. The first physics publication from the LHC was of the same distribution, although not measured with a toy train set!

Pre-ISR experiments at the PS typically installed detectors for a few weeks or months and then moved on. It was (jokingly?) said that you should not have more photomultipliers than physicists. That mindset persisted in the early ISR days. Four experiments shared intersection 2 (I2). Three single-arm spectrometers measured inclusive particle spectra at small and large angles. They discovered Feynman scaling – in which forward particle momenta scale in proportion to the beam energy – at small (but not at large) pT, discovered high-mass diffraction and co-discovered high-pT particles. Feynman scaling was shown to be only approximate; indeed, scaling violations are a key feature of QCD. Two of these spectrometers were combined in 1975 to look for hadrons with open charm but, in retrospect, the acceptance was far too small. The fourth experiment at I2 was a large, steel-plate spark chamber designed to look for muons from the decay of the then-hypothetical W boson, supposing its mass might be only a few giga-electron-volts. (It was later found to have a mass of 81 GeV, much too high for the ISR.) Unfortunately, with hindsight, the muon detector was not made in two halves on opposite sides, which would have given more acceptance for muon pairs; had the collaboration persevered as the luminosity increased, they might have seen J/ψ → μ+μ–. One reason they gave for not persisting was that the background from charged π → μ decays was much larger than they had expected.
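
For reference, Feynman scaling is usually stated in terms of the invariant single-particle cross-section (a textbook form, not the experiments' exact parametrization):

\[
E\,\frac{d^{3}\sigma}{dp^{3}} = f(x_F, p_T), \qquad
x_F \equiv \frac{p_L}{p_{L,\max}} \approx \frac{2p_L}{\sqrt{s}},
\]

with f approximately independent of the collision energy at fixed x_F and pT.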

The reputation of the ISR as a physics-discovery machine suffered greatly from missing the discovery of the J/ψ particle, which made its dramatic entrance in November 1974 at Brookhaven's AGS and at the e+e– collider at SLAC. The "November Revolution" convinced remaining doubters of the reality of quarks, with important implications for electroweak interactions. How did the ISR miss it? There is no single answer. Today's intense interaction between theorists and experimenters hardly existed in the early 1970s – but even if it had, there would have been few, if any, voices insisting on a search for narrow states in lepton pairs.

R103, one of the early experiments, designed by the CERN-Columbia-Rockefeller (CCR) collaboration to measure electron (and π0) pairs, already had two large lead-glass arrays on opposite sides of the collision region in 1972–1973 and found an unexpectedly high rate of events. This was the important discovery of high-pT hadron production from quark and gluon scattering, but it had the unfortunate consequence that the team had to raise their trigger threshold (with spark chambers rate-limited to 10 Hz) to 1.5 GeV, just too high to accept J/ψ → e+e–. This was followed in 1974 by R105 (CCR plus Saclay), which included a gas Cherenkov counter. There were about a dozen J/ψ events on tape at the end of 1974, but they were neither clear enough nor in time for a discovery. However, before November 1974, R105 (together with Fermilab experiments) had already discovered direct lepton production, in a proportion e/π of 10⁻⁴, later described by a "cocktail" of processes (J/ψ, open charm and Drell–Yan qq̄ annihilation).

CCphy4_01_11

High-pT particle production had not been promoted by theorists until after the ISR started up. In December 1971, Sam Berman, James Bjorken and John Kogut (BBK) used Richard Feynman's parton model, which was supported by deep-inelastic electron scattering, to predict a much higher production rate of hadrons and photons at high pT than expected from a simple extrapolation of the then-known, exponentially falling spectra (Berman et al. 1971). The rates that they calculated were for electromagnetic scattering of the charged partons, but they noted that these were lower bounds and that strong scattering (the exchange of a spin-1 gluon) would give much larger cross-sections. They also suggested that the scattered partons (now known to be quarks and gluons) would fragment into jets ("cores") of hadrons along the direction of the parent parton. Feynman had similar ideas.
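
The contrast that BBK emphasized can be sketched as follows (a schematic comparison, not their exact formulae): the soft spectra known before the ISR fell roughly exponentially in pT, whereas hard scattering of point-like constituents predicts a power-law behaviour with a scaling form,

\[
E\frac{d^{3}\sigma}{dp^{3}}\bigg|_{\rm soft} \sim e^{-6p_T}, \qquad
E\frac{d^{3}\sigma}{dp^{3}}\bigg|_{\rm hard} \sim \frac{1}{p_T^{\,n}}\,F(x_T), \qquad
x_T = \frac{2p_T}{\sqrt{s}},
\]

with pT in GeV/c in the exponential and n = 4 for lowest-order single-boson exchange. At a few GeV/c the power law exceeds the extrapolated exponential by orders of magnitude, which is why the measured ISR rates came as such a surprise.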

The observation of unexpectedly high rates of high-pT hadron production at the ISR was a major discovery (figure 3); it showed that parton–parton scattering indeed occurred through the strong interaction, but with a weaker coupling than that between two protons. This behaviour was later understood in QCD in terms of a strong coupling that decreases at smaller distances – the phenomenon of asymptotic freedom, for which David Gross, David Politzer and Frank Wilczek received the Nobel Prize in Physics in 2004. Unfortunately, the high-pT discovery – made by the CCR collaboration (for π0) and the British–Scandinavian and Saclay-Strasbourg collaborations (for charged hadrons) – masked the J/ψ in the e+e– channel. As noted earlier, high-pT pions also produced an unexpectedly large background to muon measurements, so the muon pairs were not pursued.
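
Asymptotic freedom is encapsulated in the running of the strong coupling; at one loop (a standard textbook expression, quoted here only for orientation):

\[
\alpha_s(Q^{2}) = \frac{12\pi}{(33 - 2n_f)\,\ln(Q^{2}/\Lambda^{2})},
\]

which decreases logarithmically as the momentum transfer Q grows, so that parton–parton scattering at high pT is indeed weaker than the soft interaction between the parent protons.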

The high-pT jets predicted by BBK took another decade to be discovered in hadron–hadron collisions, almost 10 years after jets had been seen in e+e– collisions. One needed to select events with a large total transverse energy deposited over an area much greater than the jets themselves, and a hadron calorimeter with excellent energy resolution, as in the AFS (see photo p39). After a long struggle, the AFS (R807) collaboration at the ISR and the UA2 collaboration at the Super Proton Synchrotron (SPS), running in proton–antiproton collider mode (the SppS), submitted papers on the same day to Physics Letters with convincing evidence for jets. The ISR data extended to a jet transverse energy of ET = 14 GeV, but the SppS data reached 50 GeV with 1/1000 of the luminosity of the ISR. At all post-ISR colliders, high-ET jets are treated as "objects" that are almost as clear as electrons, muons and photons. The experiments at the LHC are already studying the two-jet mass spectrum for evidence of new particles with masses of up to 2 TeV.

The scattering of two quarks is described in QCD by the exchange of a gluon – the strong-force equivalent of the photon. Gluons must also be present as constituents of protons, being continuously emitted and absorbed by quarks. The "discovery of the gluon" is credited to the observation at DESY of e+e– annihilation into three jets, which showed clearly that outgoing q and q̄ jets could be accompanied by gluon radiation. Although not as dramatic, it was clear at the ISR that high-pT particle production required more scattering partons in the proton than just the three valence quarks, and that the inclusion of gluons gave sensible fits to the data.

A related ISR discovery was the production of high-pT photons, produced directly rather than coming from the decay of hadrons (such as π0); these are direct probes of the processes q + q̄ → g + γ and q + g → q + γ. Direct high-pT γγ production was later observed. Now, at the LHC, direct γγ production is a promising search channel for the Higgs boson.

CCphy5_01_11

With the advent of the parton model, most physicists – theorists and experimenters – were happy to leave the complicated, difficult world of hadrons at the femtometre scale and dive down to the next, partonic, layer, which was both simpler theoretically and exciting experimentally. But what they left behind is still unfinished business. While QCD is frequently said to be the theory of strong interactions, it still cannot calculate hadronic processes. Every hadronic collision involves large-distance processes, which we are not yet able to calculate using QCD. The problem is that the strong interaction becomes too strong when distances become as large as the size of hadrons (about 1 fm); indeed, this is responsible for the permanent confinement of quarks and gluons inside hadrons. Calculations that work well at smaller distance scales (or larger momentum transfers) do not converge there; they blow up and become intractable.

So, while QCD cannot be used to calculate small-angle elastic scattering, Regge theory with pomeron exchange can describe it, although we recognize that it is less fundamental. High-mass diffraction provided a new tool for studying pomeron exchange, and eventually double-pomeron reactions such as pomeron + pomeron → π+π– (and other hadron states) were found. We now understand that the pomeron is, to leading order, a colourless pair of gluons. The idea that there could be quarkless hadrons, or "glueballs", also motivated these studies. Not finding them implied that, if they exist, they must be heavy (at least about 1 GeV) and so short-lived that they could not emerge from the collision as free hadrons.

The pomeron itself is not a particle, but an exchanged "entity" with Regge properties (complex angular momentum, negative mass-squared). Heroic attempts have been made to calculate its properties in QCD, and perhaps one day Regge theory will be proved to be a large-distance limit of QCD. While central masses in double-pomeron reactions at the ISR were limited to less than about 3 GeV, at the LHC they extend to masses a hundred times greater, allowing Higgs bosons – if they exist – to be produced in the simple final state p + H + p, with no other particles produced. Both the ATLAS and CMS collaborations have groups proposing to search for this process, which can be called "diffractive excitation of the vacuum" because the Higgs field fills (in some sense "is") the vacuum.
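
The gain in reach with collision energy follows from simple kinematics (a schematic relation, with ξ1 and ξ2 denoting the fractional momentum losses of the two quasi-elastically scattered protons):

\[
M_X^{2} \approx \xi_1 \xi_2\, s , \qquad \xi_i \lesssim \text{a few per cent},
\]

so the accessible central mass M_X grows in proportion to √s, taking the few-GeV systems of the ISR to the hundreds of GeV relevant for a light Higgs boson at the LHC.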

A string theory of hadrons was briefly in vogue in the 1970s, with qq̄ mesons as open strings and pomerons as closed strings. Regge theory is compatible with this idea and can explain the relationship between the mass and spin of mesons. Thirty years later, string theory is in vogue once again but on a much smaller, near-Planck scale, with electrons and quarks as open strings and gravitons as closed strings. Despite the enormous progress in collider technology, no one can imagine a collider that could see such superstrings, unless extra dimensions exist on an LHC scale.

Many other studies of strong interaction physics were made at the ISR. These included particle correlations, short-range order in rapidity, resonance production etc. Multiparticle forward spectrometers also made systematic studies of diffraction, including the production of charmed baryons and mesons.

With its two independent rings, the ISR was more versatile than any other collider – then or since. Not only were pp collisions studied, but antiprotons and deuterons and α-particles were also made to collide with each other and with protons. For the last run, an antiproton beam was stored in the ISR for more than 350 hours, colliding with a hydrogen gas-jet target to form charmonium. So the swansong of the ISR was a fixed-target experiment measuring the very particle that it had missed because high pT physics got in the way!

The ISR machine was outstanding and the detectors eventually caught up and led the way to the modern collider physics programme. When it was closed in 1984, there was still plenty to do, despite the higher energy SppS collider, whose UA1 and UA2 detectors owed so much to the ISR experience. However, the ISR had to make way for the Large Electron–Positron collider, which in turn made way for the LHC, so that proton–proton collisions are once again exciting.

• This has been a personal, and far from comprehensive, view of the physics that we learnt at the ISR. I thank Leslie Camilleri, Luigi Di Lella and Norman McCubbin for careful reading and redressing some balance. I also pay homage to Maurice Jacob, who did so much to bridge the gap between theorists and experimenters.

Lepton Dipole Moments

By B Lee Roberts and William J Marciano (eds.)

World Scientific

Hardback: £113 $164 E-book: $213

CCboo4_01_11

In December 1947, Julian Schwinger wrote a letter to the editor of Physical Review in which he reported, in a mere five paragraphs, that he had found "an additional magnetic moment associated with the electron spin". He gave its value as α/2π = 0.00116 and stated that it is "the simplest example of a radiative correction" in the new theory of QED.
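
In modern notation, Schwinger's result is the leading term in the expansion of the electron anomaly; the numerical value follows directly from α ≈ 1/137.036:

\[
a_e \equiv \frac{g_e - 2}{2} = \frac{\alpha}{2\pi} + \mathcal{O}(\alpha^{2}) \approx 0.00116 .
\]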

We have come a long way since Schwinger's letter. Toichiro Kinoshita has computed the anomalous magnetic moment of the electron up to tenth order. Nature has revealed further mysteries in the intervening years, including the existence of the muon, with which to test our theories. Famously, the Brookhaven measurement of the anomalous magnetic moment of the muon shows an approximately 3σ deviation from the theoretical prediction of the Standard Model. Experiments have also been searching for CP-violating electric dipole moments, with many more planned.

Lepton Dipole Moments, a review volume edited by Lee Roberts and William Marciano, begins with a historical perspective by Roberts and is followed by many excellent review articles. Articles are written by leaders of the field: Andrzej Czarnecki and Marciano on new physics and dipole moments, Michel Davier on g-2 vacuum polarization issues, Dominik Stoeckinger on new physics in g-2, Yasuhiro Okada on models of lepton-flavour violation, Eugene Commins and David DeMille on the electric dipole moment of the electron, and many more.

One reason that lepton moments are interesting to pursue, even during these heady times of high-energy LHC collisions, is their sensitivity to "chirality-enhanced" contributions from new physics. In the case of supersymmetry, some large-tanβ theories can yield parametrically larger supersymmetric contributions than the Standard Model ones, extending sensitivity to higher scales than the usual electroweak precision tests allow. An analogous situation occurs for theories with large, new flavour- or CP-violating effects. Lepton dipole-moment experiments are reaching levels of sensitivity that will make or break theories. For example, even theories of baryogenesis, which at first seem far removed from the vagaries of lepton dipole moments, "will be put to the ultimate test with the next generation of experiments", as Maxim Pospelov and Adam Ritz rightly explain.
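
As a rough guide to why tanβ matters (a generic parametric estimate, not a formula taken from the book), the one-loop supersymmetric contribution to the muon anomaly scales as

\[
a_\mu^{\rm SUSY} \;\sim\; \frac{\alpha}{4\pi}\,\frac{m_\mu^{2}}{M_{\rm SUSY}^{2}}\,\tan\beta ,
\]

so a large tanβ can lift the new-physics contribution to the level of the electroweak Standard Model piece, even for superpartner masses well beyond the direct reach of earlier colliders.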

The energy frontier is not the only place to put fundamental physics under extreme test, as this volume attests. Roberts and Marciano have put together an excellent survey of lepton dipole moments and their certain power to change our world view whatever may come.
