The LHCb collaboration has released updated measurements of central exclusive production of the J/ψ and ψ(2S) mesons (LHCb collaboration 2014).
Central exclusive production is a class of reactions in which one or two particles are produced from a beam collision, but the colliding hadrons emerge intact. At the LHC this leads to an unusual and distinctive topology of low-multiplicity events contained in a small rapidity interval with large rapidity gaps on either side. J/ψ and ψ(2S) mesons are produced when a photon emitted from one proton interacts with a pomeron (a colourless combination of gluons) from the other. Measurements of the process can be used to test QCD predictions – to improve our understanding of the distribution of gluons inside the proton – and are also sensitive to saturation effects.
LHCb’s ability to trigger on low-momentum particles, combined with the low number of proton–proton interactions per beam crossing, provides an ideal environment in which to study these low-multiplicity processes. Using data collected in 2011, around 56,000 central exclusive J/ψ and 1500 ψ(2S) mesons have been identified by reconstructing their decays to pairs of muons. While non-resonant backgrounds are very small, the challenge in the analysis is to estimate the larger background that arises when J/ψ and ψ(2S) mesons are produced and one or both of the colliding protons dissociate. Because LHCb is instrumented mainly in the forward region, this dissociation often cannot be detected directly. Instead, the collaboration has developed methods to estimate the background rate from the fraction of events in which it is detected.
The measured cross-sections are compared to theoretical predictions, as well as to photoproduction measurements from the HERA electron–proton collider and from fixed-target experiments. Although these environments are quite different from collisions at the LHC, the underlying process is the same. In the former a photon is emitted from an incoming electron beam, while the latter use photon beams directly.
The figure shows a model-dependent comparison of the LHCb results with those from the other types of experiment. It plots the photoproduction cross-section as a function of the photon–proton centre-of-mass energy, W. There is a two-fold ambiguity in converting LHCb’s proton–proton differential cross-section into a photoproduction cross-section, corresponding to the photon being emitted by either of the two protons. This is resolved using recent results from the H1 experiment at HERA (H1 collaboration 2013). The data in the figure show broad consistency over two orders of magnitude in W, but are in only marginal agreement with the single power-law dependence expected from leading-order QCD. Better agreement is obtained either at next-to-leading order in QCD (Jones et al. 2013) or by including saturation effects (Gay Ducati et al. 2013).
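For reference, the kinematics behind this conversion can be written compactly. The relations below are the standard ultraperipheral-production formula and a generic power-law form – a sketch, not the exact expressions used in the LHCb analysis:

```latex
% A J/psi produced at rapidity y in pp collisions at \sqrt{s} corresponds to one of
% two possible photon-proton centre-of-mass energies (hence the two-fold ambiguity):
W_{\pm}^{2} \simeq M_{J/\psi}\,\sqrt{s}\;e^{\pm|y|} ,
% while the leading-order expectation is a single power law in W,
\sigma_{\gamma p \to J/\psi\, p}(W) = \sigma_{0}\left(W/W_{0}\right)^{\delta} ,
% with \sigma_0, W_0 and the exponent \delta taken from fits to photoproduction data.
```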
Elaborate alterations to the Schwerionensynchrotron (SIS) – the heavy-ion synchrotron at GSI – have finished after one year’s work. The main new feature is an additional accelerator cavity, so that the accelerator now has a total of three cavities. The remodelling of the SIS accelerator was necessary for it to serve in future as an injector for the Facility for Antiproton and Ion Research (FAIR). The FAIR accelerator complex, which is currently under construction through an international effort, will be connected to the existing GSI facility.
The SIS accelerator has a circumference of 216 m, with about 50 magnets – each weighing several tonnes – to keep the particles on the correct path. In the coming years, two further accelerator cavities will be added. With a total of five cavities, the SIS will have the performance that is required to accelerate all kinds of elements and inject them into the FAIR machines.
Since its commissioning in 1990, SIS has been the scene of many successes, including the discovery of hundreds of new isotopes – a field in which a GSI scientist holds the world record – and three new types of radioactive decay. Work on SIS in biophysics also led to the development of ion-beam therapy at GSI, where 450 patients were successfully treated. This method of cancer therapy is now routinely administered at the HIT facility in Heidelberg, using a dedicated accelerator built by GSI.
Scientists and engineers working on the design of the particle detector for the Long-Baseline Neutrino Experiment (LBNE) celebrated a major success in January. They showed that very large cryostats for liquid-argon-based neutrino detectors can be built using industry-standard technology normally employed for the storage of liquefied natural gas. The 35-tonne prototype system satisfies LBNE’s stringent purity requirement on oxygen contamination in argon of less than 200 parts per trillion (ppt) – a level that the team could maintain stably.
The purity of liquid argon is crucial for the proposed LBNE time-projection chamber (TPC), which will feature wire planes that collect electrons from an approximately 3.5 m drift region. Oxygen and other electronegative impurities in the liquid can absorb ionization electrons created by charged particles emerging from neutrino interactions and prevent them from reaching the TPC’s signal wires.
The test results were the outcome of the first phase of operating the LBNE prototype cryostat, which was built at Fermilab and features a membrane designed and supplied by the IHI Corporation of Japan. As part of the test, engineers cooled the system and filled the cryostat with liquid argon without prior evacuation. On 20 December, during a marathon 36-hour session, they cooled the membrane cryostat slowly and smoothly to 110 K, at which point they commenced the transfer of some 20,000 litres of liquid argon, maintained at about 89 K, from Fermilab’s Liquid-Argon Purity Demonstrator to the 35-tonne cryostat. By the end of the session, the team was able to verify that the systems for purifying, recirculating and recondensing the argon were working properly.
The LBNE team then topped off the tank with an additional 6000 litres of liquid argon and began to determine the argon’s purity by measuring the lifetime of ionization electrons drifting through the liquid under an electric field of 60 V/cm. The measured electron lifetimes were between 2.5 and 3 ms – corresponding to an oxygen contamination approaching 100 ppt, and nearly twice as good as LBNE’s minimum lifetime requirement of 1.5 ms.
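As a rough cross-check of the numbers quoted above, a widely used approximation relates the drift-electron lifetime to the oxygen-equivalent contamination in liquid argon via τ[ms] × concentration[ppb] ≈ 0.3. The short sketch below applies this rule of thumb; the constant is a generic approximation, not an LBNE-specific calibration.

```python
# Rule-of-thumb conversion between drift-electron lifetime and O2-equivalent
# contamination in liquid argon: tau [ms] * concentration [ppb] ~ 0.3.
# The constant is an often-quoted approximation, not an LBNE calibration.
K_MS_PPB = 0.3  # ms * ppb

def o2_ppt_from_lifetime(tau_ms: float) -> float:
    """Estimate the O2-equivalent contamination (ppt) from the electron lifetime (ms)."""
    return K_MS_PPB / tau_ms * 1000.0  # convert ppb to ppt

for tau in (1.5, 2.5, 3.0):
    print(f"tau = {tau:.1f} ms  ->  ~{o2_ppt_from_lifetime(tau):.0f} ppt O2-equivalent")
# ~200 ppt at the 1.5 ms requirement; ~100-120 ppt for the measured 2.5-3 ms lifetimes
```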
The Phase II testing programme, scheduled to begin at the end of 2014, will focus on the performance of active TPC detector elements submerged in liquid argon. Construction of the LBNE experiment, which will look for CP violation in neutrino oscillations by examining a neutrino beam travelling 1300 km from Fermilab to the Sanford Underground Research Facility, could begin in 2016. More than 450 scientists from 85 institutions collaborate on LBNE.
Neutrons are a common by-product of particle-accelerator operations. While these particles can be studied or gainfully used, at other times they are a nuisance, with the potential to damage sensitive electronics and cause data-acquisition systems to fail mid-experiment.
Preventing neutrons from causing damage was a chief goal in the design of a shield house for an apparatus being built as part of the 12 GeV Upgrade project at the US Department of Energy’s Jefferson Lab. The $338 million upgrade will double the energy of the electron beam, add a new experimental hall and improve the existing halls, among other upgrades and additions.
The new apparatus, the Super High Momentum Spectrometer (SHMS), will enable measurements at high luminosity of particles with momentum approaching the beam energy, scattered at forward angles. It complements the existing High Momentum Spectrometer (HMS).
The physicists and engineers designing the SHMS shield house capitalized on data from more than 15 years of operations with the HMS and various large, open detector systems operated at Jefferson Lab. Monte Carlo calculations carried out with Geant4 were used to optimize the material specifications for shielding the electronics from neutrons. However, existing technologies did not meet the requirements: they were too bulky, expensive and difficult to manufacture. So a new system was designed, consisting of three parts: a hydrogen-rich, lightweight concrete layer to thermalize neutrons, a boron-rich concrete layer to absorb them and a thin lead layer to halt residual radiation.
The hydrogen-rich, lightweight concrete is the main structural component of the shield house. This material lacks most of the grit and rocks in ordinary concrete and instead contains shredded plastic and lightweight shale. It looks and pours like concrete and has the same strength, but it weighs two-thirds as much and has four times the neutron-thermalizing capability.
The boron-rich concrete is similarly produced using a patented new recipe, with boron powder replacing the typical aggregate. It has the same consistency and strength as ordinary concrete or concrete simply doped with boron, yet stops neutrons with less material. A 15-cm-thick protective layer encloses the concrete electronics rooms in the SHMS shield house, topped with thin lead plates to stop the 0.48 MeV γ rays produced in neutron–boron capture (see the reaction below). A third, panel-like product was designed to stop neutrons in space-constricted areas. It is about 2.5 cm thick and consists of boron embedded in an epoxy resin.
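The 0.48 MeV line mentioned above is the signature of thermal-neutron capture on ¹⁰B. For reference, the reaction and the commonly quoted branching fraction (textbook values, not numbers from the SHMS design study) are

```latex
n + {}^{10}\mathrm{B} \;\to\; {}^{7}\mathrm{Li}^{*} + \alpha
\;\to\; {}^{7}\mathrm{Li} + \alpha + \gamma\,(0.478\ \mathrm{MeV})
\qquad (\approx 94\%\ \text{of thermal captures}),
```

with the remaining captures going directly to the ⁷Li ground state without emitting a γ ray.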
All of the new products are easily manufactured using existing techniques, and systems built with these patented and patent-pending technologies have applications in nuclear-waste storage, compact nuclear reactors and for shielding radiation sources in medical applications.
The very bright, relatively nearby gamma-ray burst (GRB) of 27 April 2013 – GRB 130427A – offers astronomers the most complete set of observational data of this phenomenon to date. While its general properties are in line with theory, the detection of extremely energetic gamma rays and other intriguing features challenges synchrotron-shock models.
GRBs are extremely powerful flashes of gamma rays arising about once per day from an arbitrary direction in the sky. Astronomers have known for about a decade that long GRBs are associated with the supernova explosions of massive stars (CERN Courier September 2003 p15). Since then, the dedicated Swift satellite has detected hundreds of gamma-ray bursts, thereby providing a huge data set of GRB observations (CERN Courier December 2005 p20). On 19 March 2008, Swift detected an extraordinarily bright burst, GRB 080319B, bright enough to be seen with the naked eye (CERN Courier June 2008 p12). However, it was the Gamma-ray Burst Monitor on board the Fermi Gamma-ray Space Telescope that gave the alert last year for the new record-setting GRB 130427A. The long duration and extreme brightness of this burst allowed the collection of an impressive data set from 58 ground- and space-based observatories, described and interpreted in four recent articles in Science.
Although slightly dimmer in the visible range than the one of 2008, the burst of 2013 was the brightest detected so far in the X-ray and gamma-ray range by the Swift and Fermi satellites. Maselli and collaborators point out that while the overall properties of GRB 130427A are similar to those of the most luminous, high-redshift GRBs, this time the burst was relatively nearby (redshift z = 0.34) – although still a quarter of the distance across the observable universe. The observed similarity suggests that there is no fundamental difference between recent, nearby GRBs and those of the early, remote universe. The detection of a supernova associated with this very bright GRB is further evidence that this relationship also holds for the most luminous GRBs, and strengthens the case for a common engine that powers all kinds of long GRBs.
This extraordinary burst is therefore an ideal laboratory for testing current models of GRBs. In the standard picture, the supernova explosion is accompanied by the formation of a black hole from the collapse of the massive star’s core. The accretion of matter onto this rapidly spinning black hole then launches a jet that finds its way through the star’s outer layers and into the surrounding gas. Shock waves propagating inside the jet and at its outer boundary would be at the origin of the prompt and afterglow emission, respectively.
One surprise is that the Large Area Telescope (LAT) on the Fermi satellite detected energetic gamma rays for 20 hours after the onset of the burst. Among them, two events of 73 GeV and 95 GeV are the highest-energy photons ever recorded from a GRB. The analysis of these data by the Fermi collaboration challenges the widely accepted model that this high-energy emission is of synchrotron origin. A natural additional emission component would be synchrotron self-Compton emission, as Maselli and co-workers describe. In this process, the relativistic electron population in the jet produces the synchrotron emission and then up-scatters these synchrotron photons to high energies via inverse-Compton interactions. While the overall understanding of GRBs seems to be valid, Preece and colleagues conclude that it is difficult for any of the existing models to account for the observed spectral and temporal behaviour of this rare GRB event.
ALICE: is cold nuclear matter really cold?
In September 2012, proton–lead collisions took place at the LHC for the first time; a longer and more intense proton–lead run followed in January–February 2013. The ALICE experiment collected data during both running periods. Alongside the expected results, many unexpected observations were made. The surprises come from similarities between several observables in proton–lead and lead–lead collisions at the LHC. These similarities could indicate the existence of collective phenomena in high-multiplicity proton–lead collisions and, ultimately, the formation of the deconfined state of matter known as the quark–gluon plasma.
In September 2012, the LHC provided proton–lead (pPb) collisions for the first time, two years after its heavy-ion collisions opened a new chapter in the exploration of the properties of the deconfined, chirally symmetric state of matter known as quark–gluon plasma, or QGP (CERN Courier October 2013 p17). Until then, measurements in lead–lead (PbPb) collisions had typically been compared with the corresponding proton–proton (pp) results to assess the properties of QGP. This strategy proved successful but remained incomplete, because it does not take the initial wave function of the colliding nuclei into account. This consideration was the primary motivation for including measurements in pPb collisions as part of the heavy-ion programme at the LHC, with the expectation of being able to disentangle effects arising from the structure of the initial state of the collision – often dubbed “cold-nuclear-matter” effects – from the final-state effects related to the medium created, presumably, only in PbPb collisions.
In addition, understanding the structure of the initial state is interesting in its own right, because at the LHC energy, experiments probe the structure of the nucleus in a novel and unexplored QCD regime of very low values of the longitudinal parton-momentum fraction (x < 10⁻³). In this kinematic region, the extremely high gluon density is expected to saturate by means of strongly non-linear coherent processes, leading theorists to predict the existence of yet another pre-collision state of matter – the so-called colour glass condensate (CGC).
ALICE collected data from pPb collisions at the LHC during both the short pilot run in September 2012 (CERN Courier November 2012 p6) and the longer high-luminosity run in January and February 2013 (CERN Courier March 2013 p5). The two-in-one design of the LHC magnets imposes the same magnetic rigidity for the two beams, necessitating an asymmetrical set-up in beam energy for pPb collisions. So in these runs, the energy in the nucleon–nucleon centre-of-mass system was √sNN = 5.02 TeV, with the centre of mass shifted in the direction of the proton beam by about half a unit in rapidity with respect to the laboratory system.
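The quoted collision energy and rapidity shift follow directly from the two beam energies. The short calculation below reproduces them under the simplifying assumptions of a 4 TeV proton beam, equal magnetic rigidity for the lead beam and negligible particle masses:

```python
import math

# Sketch of the pPb kinematics in the 2012-13 LHC runs, assuming a 4 TeV proton
# beam and the same magnetic rigidity for the Pb beam (particle masses neglected).
E_p = 4.0                     # proton beam energy, TeV
Z, A = 82, 208                # charge and mass number of lead
E_Pb = E_p * Z / A            # energy per nucleon of the Pb beam, TeV (~1.58 TeV)

sqrt_sNN = 2.0 * math.sqrt(E_p * E_Pb)   # nucleon-nucleon centre-of-mass energy
delta_y = 0.5 * math.log(E_p / E_Pb)     # shift of the cm rapidity towards the proton beam

print(f"sqrt(s_NN) ~ {sqrt_sNN:.2f} TeV")   # ~5.02 TeV
print(f"rapidity shift ~ {delta_y:.2f}")    # ~0.47, i.e. about half a unit
```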
The results from the basic control measurements, the charged-particle multiplicity density and transverse-momentum spectrum, were in agreement with expectations from theoretical models that incorporate present knowledge of the function describing the longitudinal distribution of partons inside the nucleus. In addition, when compared with the results obtained for pp collisions at 2.76 and 7 TeV, on one hand the charged-particle multiplicity in pPb collisions was found to scale roughly with the mean number of nucleons participating in the collision, as expected for particle production through soft processes. On the other hand, the particle spectrum at large transverse momenta – beyond 3–4 GeV/c – was found to scale with the mean number of binary nucleon–nucleon collisions, as expected for particle production through hard processes (CERN Courier December 2012 p6). Extending this latter measurement to charged jets and charmed mesons (figure 1) revealed no deviation from binary nucleon–nucleon scaling within the sizeable experimental uncertainties. The above observations confirm that the suppression of high transverse-momentum hadrons and jets observed in central PbPb collisions can be attributed to a final-state effect, namely the energy loss experienced by partons traversing the medium created in these collisions.
As figure 2 shows, the measured rapidity distribution of the J/ψ charmonium state in pPb collisions, when compared with that measured in pp collisions, exhibits a moderate suppression in the forward hemisphere (positive values of rapidity, in the direction of the proton beam), while there is no suppression and even a slight enhancement in the backward hemisphere (negative values of rapidity, in the direction of the lead beam). These results can be well described by models that invoke the cold-nuclear-matter effects only and seem to disfavour the CGC-based models.
The first surprise came from the study of two-particle correlations in high-multiplicity pPb events. A surprising near-side, long-range (elongated in pseudorapidity) correlation, forming a ridge-like structure observed in high-multiplicity pp collisions, was also found in high-multiplicity pPb collisions, but with a much larger amplitude (CERN Courier January/February 2013 p9). However, the biggest surprise came from the observation that this near-side ridge is accompanied by an essentially symmetrical away-side ridge, opposite in azimuth (CERN Courier March 2013 p6). This double ridge was revealed after the short-range correlations arising from jet fragmentation and resonance decays were suppressed by subtracting the correlation distribution measured for low-multiplicity events from the one for high-multiplicity events.
Similar long-range structures in heavy-ion collisions have been attributed to the collective flow of particles emitted from a thermalized system undergoing a collective hydrodynamic expansion. Early interactions produce pressure gradients that translate the spatial anisotropy in the overlapping region of the nuclei into an anisotropy in momentum space. This anisotropy can be characterized by means of the vn (n = 2, 3, …) coefficients of a Fourier decomposition of the single-particle azimuthal distribution.
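For reference, this is the standard Fourier expansion of the single-particle azimuthal distribution:

```latex
\frac{dN}{d\varphi} \;\propto\; 1 + 2\sum_{n=1}^{\infty} v_{n}\cos\bigl[n(\varphi-\Psi_{n})\bigr] ,
\qquad
v_{n} = \bigl\langle \cos\bigl[n(\varphi-\Psi_{n})\bigr] \bigr\rangle ,
```

where Ψn denotes the symmetry plane of the nth harmonic and v2, the elliptic-flow coefficient, is the dominant term discussed in the text.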
Recently, the analysis has been extended to four-particle correlations, which in heavy-ion collisions have proved to be much less sensitive to jet-related correlations. Again, the similarity with results obtained in PbPb collisions is striking: the v2 harmonic coefficients measured in pPb collisions are similar to those obtained in PbPb collisions at comparable event multiplicities, as is their transverse-momentum dependence – a continuous increase up to a transverse momentum of 2–3 GeV/c, followed by a gradual decrease owing to the increasing contribution from jet fragmentation. The decrease arises because the harmonic coefficients measure the strength of the particle correlations with respect to a symmetry plane, whereas jet fragments, which are not expected to take part in the collective motion of the system, are only weakly correlated with that plane, through parton energy loss.
To test the possible presence of collective phenomena further, the ALICE collaboration has extended the two-particle correlation analysis to identified particles, checking for a potential mass ordering of the v2 harmonic coefficients. Such an ordering in mass was observed in heavy-ion collisions, where it was interpreted to arise from a common radial boost – the so-called radial flow – coupled to the anisotropy in momentum space. Continuing the surprises, a clear particle-mass ordering, similar to the one observed in mid-central PbPb collisions (CERN Courier September 2013 p6), has been measured in high-multiplicity pPb collisions.
These similarities are not the only ones. By measuring the dependence of the identified-particle transverse-momentum spectra, or of the average transverse momentum, on the event multiplicity (CERN Courier September 2013 p10), a significant hardening of the spectra has been observed with increasing multiplicity. Moreover, the hardening is found to be stronger for heavier particles. In PbPb collisions such an observation is interpreted as a signature of radial flow. A blast-wave fit, with parameters associated with the kinetic freeze-out temperature and the mean transverse-expansion velocity, describes the shape of the particle spectra, as well as the v2 coefficient (figure 3).
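For completeness, a commonly used form of the boost-invariant blast-wave parametrization is reproduced below (the standard textbook form; the exact implementation used in the ALICE fits may differ in detail):

```latex
\frac{dN}{m_{T}\,dm_{T}} \;\propto\; \int_{0}^{R} r\,dr\; m_{T}\,
I_{0}\!\left(\frac{p_{T}\sinh\rho(r)}{T_{\mathrm{kin}}}\right)
K_{1}\!\left(\frac{m_{T}\cosh\rho(r)}{T_{\mathrm{kin}}}\right),
\qquad
\rho(r)=\tanh^{-1}\!\bigl[\beta_{s}\,(r/R)^{n}\bigr],
```

where T_kin is the kinetic freeze-out temperature, β_s the surface expansion velocity (related to the mean transverse-expansion velocity), R the source radius and I_0, K_1 modified Bessel functions.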
The value and the multiplicity dependence of these two parameters are similar to the ones measured in PbPb collisions. Although models that consider a hydrodynamic evolution of the colliding system can describe this feature satisfactorily, an alternative explanation has been put forward: colour reconnection. This mechanism, implemented in the PYTHIA event generator, can be seen as a final-state interaction between outgoing partons originating from different hard-scattering processes and might result in effects that are qualitatively similar to particle-flow correlations.
The final surprise, so far, comes from the charmonium states. Whereas J/ψ production does not reveal any unexpected behaviour, the production of the heavier and less-bound ψ(2S) state shows a strong suppression (by a factor of 0.5–0.7) relative to the J/ψ, when compared with pp collisions. Is this a hint of effects of the medium? Indeed, in heavy-ion collisions such a suppression has been interpreted as a sequential melting of quarkonium states, depending on their binding energy and the temperature of the QGP created in these collisions.
Taking all of these observations together and in view of the astonishing similarities between pPb and PbPb collisions, it is tempting to raise the obvious question: is QGP formed in high-multiplicity pPb collisions? By considering the measured particle multiplicity and the small size of the colliding system, it can be deduced that the initial energy density in high-multiplicity pPb events exceeds the one measured in PbPb collisions, and therefore the critical value for the QGP phase transition. However, could such a small and short-lived system reach thermal equilibrium fast enough to form a QGP-like droplet?
Part of the answer comes from the measured properties of QGP. As the estimated ratio of shear viscosity to entropy density is close to the quantum limit for a perfect liquid, the deduced mean free path of the constituents inside QGP is several times smaller than the typical pPb system size of 1–2 fm. However, other explanations of the origin of the observed collective-like phenomena cannot be excluded. Just as the colour-reconnection mechanism can explain the observed hardening of the transverse-momentum spectra with multiplicity, so models developed within the CGC framework can describe the dependence of the two-particle correlations on the event multiplicity and on the identified-particle transverse momentum; they are, however, less successful in describing the results from four-particle correlations and identified-particle correlations. Of course, this does not exclude the melting of the CGC during the initial stage of the collision, followed by thermalization of the system and hydrodynamic flow.
To summarize this first pPb measurement campaign, the expected results were accompanied by many unanticipated observations. Among the expected results is the confirmation that proton–nucleus collisions provide an appropriate tool for studying the partonic structure of cold nuclear matter in detail. The surprises have come from the similarity of several observables between pPb and PbPb collisions, which hints at the existence of collective phenomena in pPb collisions with high particle multiplicity and, possibly, the formation of QGP.
A deeper insight into the dynamics of pPb collisions can come from exclusive measurements classifying events according to their impact parameter and extracting the physics observables in intervals of collision centrality. In PbPb collisions, the centrality estimators typically make use of the particle multiplicity or total transverse-energy measured in various pseudorapidity intervals, and the estimator value is then transformed into a number of binary nucleon–nucleon collisions through the Glauber model, which provides a classical representation of the initial geometry of the colliding system. In pPb collisions, the event classification into centrality intervals is strongly biased, owing to fluctuations of the measured particle multiplicity for a given number of binary nucleon–nucleon collisions.
In general, central (peripheral) events have a larger (smaller) particle multiplicity per participating nucleon than average, because the source of particle production is parton scattering. At LHC energies such fluctuations are large, because they are driven by multi-parton interactions, whose number fluctuates strongly among events of similar centrality. The event classification using the conventional centrality estimators therefore breaks the scaling of the measured particle production with the number of binary nucleon–nucleon collisions. This has been demonstrated by comparing the charged-particle spectra at mid-rapidity in centrality intervals defined by the various centrality estimators available in ALICE.
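To make the Glauber-model mapping mentioned above more concrete, the following is a minimal toy Monte Carlo for pPb collisions – an illustration only, not the ALICE implementation; the Woods–Saxon parameters and the nucleon–nucleon cross-section are typical values assumed for the sketch:

```python
import math, random

# Toy Glauber Monte Carlo for p-Pb (illustrative only; not the ALICE code).
# Nucleon positions in Pb are sampled from a Woods-Saxon density, and N_coll is the
# number of Pb nucleons within the geometric reach of the proton in the transverse plane.
A = 208            # mass number of lead
R_WS = 6.62        # Woods-Saxon radius, fm (typical value for Pb)
A_WS = 0.546       # Woods-Saxon diffuseness, fm
SIGMA_NN = 7.0     # inelastic NN cross-section, fm^2 (~70 mb at LHC energies)
D_MAX = math.sqrt(SIGMA_NN / math.pi)   # maximum transverse distance for a collision

def sample_nucleon():
    """Draw one nucleon position from the Woods-Saxon distribution (rejection sampling)."""
    while True:
        r = random.uniform(0.0, 3.0 * R_WS)
        w = r * r / (1.0 + math.exp((r - R_WS) / A_WS))   # r^2 times the Woods-Saxon profile
        if random.uniform(0.0, R_WS * R_WS) < w:          # R_WS^2 is a loose but safe envelope here
            cos_t = random.uniform(-1.0, 1.0)
            phi = random.uniform(0.0, 2.0 * math.pi)
            sin_t = math.sqrt(1.0 - cos_t * cos_t)
            return r * sin_t * math.cos(phi), r * sin_t * math.sin(phi)   # transverse (x, y)

def n_coll(b):
    """Number of binary NN collisions for a proton at impact parameter b (fm)."""
    hits = 0
    for _ in range(A):
        x, y = sample_nucleon()
        if math.hypot(x - b, y) < D_MAX:
            hits += 1
    return hits

for b in (0.0, 3.0, 6.0, 8.0):
    mean = sum(n_coll(b) for _ in range(200)) / 200.0
    print(f"b = {b:.0f} fm  ->  <N_coll> ~ {mean:.1f}")
# Central (small-b) collisions give <N_coll> of order 10-15; peripheral ones approach 1.
```

In the real analysis this geometric picture has to be folded with the large multiplicity fluctuations per nucleon–nucleon collision, which is precisely the bias described above.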
Nevertheless, if a proper centrality determination can be established, all of the above-mentioned physics observables will be studied in centrality intervals to answer in particular the question of whether the jet-quenching phenomenon is present in central pPb collisions. At the same time, while the LHC is being prepared for its second phase of operation, the ALICE collaboration continues to analyse the precious sample of pPb collision data already recorded.
More than 180 physicists from around the world gathered at the Paul Scherrer Institut (PSI) last year for the 3rd workshop on the “Physics of fundamental Symmetries and Interactions” at low energies and the precision frontier – PSI2013. Broadly speaking, the focus was on high-precision experiments, with results complementary to those at the LHC, often covering a parameter space in physics beyond the Standard Model that is inaccessible to direct searches at the LHC or even at future colliders.
PSI’s particle-physics laboratory fosters cutting-edge research using the unmatched high power of its 590 MeV, 2.2 mA proton cyclotron to produce the brightest low-momentum beams of muons and pions and, since 2011, ultracold neutrons. This environment set the scene for lively discussions on the latest results and the future direction of worldwide low-energy precision experiments. Among the many workshop contributions, there were several major topical areas of interest.
Fundamental physics probed with antiprotons and antihydrogen featured prominently, with recent results from experiments at CERN’s Antiproton Decelerator. The now regular production of antihydrogen has moved these experiments closer to final physics measurements. Among the main goals are sensitive tests of CPT symmetry and measurements in antihydrogen spectroscopy, such as determination of the ground-state hyperfine splitting, together with tests of antihydrogen free fall. A recent result is the Penning-trap measurement by the ATRAP collaboration of the antiproton’s magnetic moment to 5 ppm precision. A further highlight, involving Penning traps but with ordinary matter, is determination of the electron’s mass with unprecedented precision by the MPI-Heidelberg group, achieving an order-of-magnitude improvement.
Many presentations covered experiments using cold (CN) or ultracold (UCN) neutrons. A full session was devoted to the neutron lifetime and worldwide progress on improving its precision, to resolve the significant outstanding discrepancy between results from neutron-storage experiments and those using beams. For the latter, a new result from the National Institute of Standards and Technology in the US was presented, consolidating the existing discrepancy.
Neutron-decay parameters and spin correlations of the decay particles are sensitive to physics beyond the Standard Model. Competing CN and UCN experiments using improved experimental techniques, such as precision neutron polarimetry at the 100 ppm level, were presented, together with future plans for UCNs at Los Alamos National Laboratory (LANL) and for the Proton Electron Radiation Channel project at the FRM II neutron source at the Technische Universität München. Other parity-violation experiments were also discussed, including a new result for neutron capture on hydrogen from the NPDG experiment at the Spallation Neutron Source (SNS) at Oak Ridge, experiments with trapped radium ions at KVI Groningen, and neutron spin rotation in helium.
UCN production with new-generation sources – either in existence or under construction – was extensively covered, including the use of superfluid helium (at Institut Laue–Langevin (ILL) and TRIUMF) and solid deuterium (Mainz, LANL and PSI) as superthermal converters. UCN densities are steadily increasing, despite experimental and technical difficulties that have slowed down the expected progress. The main thrust for these high-intensity UCN sources comes from the search for a permanent electric dipole moment (EDM) of the neutron. Because it is the focus of an experiment at PSI, there was intensive discussion on this topic at the workshop. Several talks elaborated on efforts to search for the neutron EDM by international collaborations at various institutions. These are mainly based on UCN-storage measurements that employ either Ramsey’s Oscillatory Field method (at ILL, SNS, PSI, the Petersburg Nuclear Physics Institute, TRIUMF, Osaka University and FRM II) or crystal diffraction (at ILL).
Complementary atomic (Fr, Ra, Xe) and molecular (YbF, ThO) EDM searches have even higher experimental sensitivities, but sometimes suffer from being more difficult to interpret in terms of the fundamental EDMs. Diamagnetic atoms are usually interpreted in terms of searches for nuclear EDMs, whereas measurements in polar molecules and paramagnetic atoms give limits on the electron EDM. However, the workshop was a little too early to see the result of the new ThO experiment ACME, by a Harvard/Yale University group, which appeared shortly afterwards. Proposed storage-ring-based EDM measurements with protons and deuterons are also being pursued actively.
Common to all of the EDM searches are the many challenging experimental difficulties, especially in terms of magnetic shielding and the control and measurement of the magnetic field. Presentations from the theoretical side underlined that EDM studies in different systems are complementary and necessary in helping to identify the underlying models of CP or T violation. Also in this context, recent results on CP violation were presented from the NA62 experiment at CERN, on the kaon system, and from LHCb at the LHC.
UCNs also allow the study of the quantization of gravitational bound states of the neutron, which are sensitive to non-Newtonian gravity and hypothetical extra forces mediated by, for example, axions, axion-like particles or chameleons. Such forces can also be probed in clock-comparison experiments, as explained at the workshop for the ³He/¹²⁹Xe case. These are sensitive to possible Lorentz violations, which can be accommodated in the framework of the so-called Standard Model Extension (SME). In the SME, Lorentz violation stems from an underlying background field in the universe, resulting, for example, in day/night or annual variations of fundamental parameters. Recently calculated effects in neutron decay, as well as in muonium and positronium spectroscopy, were also discussed, together with the corresponding experimental efforts.
Charged-lepton flavour violation was another key topic where increasing worldwide efforts are under way. Lepton flavour violation involving muons is predicted by various models that go beyond the Standard Model, at levels that might be within reach of the next generation of experiments. Nevertheless, major progress is needed, both in experimental techniques and in increased muon-beam intensities, and is being pursued actively.
The international PSI-based MEG collaboration presented its new limit of 5 × 10⁻¹³ on the μ → eγ branching ratio. The project to search for the decay μ → 3e at a sensitivity level of 10⁻¹⁶ was presented by the Mu3e collaboration. Impressive efforts towards the construction of the Muon Campus at Fermilab were also shown, with the goal of a new, more precise muon g–2 measurement to help resolve or confirm the present discrepancy with the Standard Model calculation. There are also plans to search for μ → e conversion within Project-X, at a sensitivity of 10⁻¹⁷ and beyond. Similar efforts in Japanese projects ongoing at Osaka University and the Japan Proton Accelerator Research Complex (J-PARC) were also detailed. These involve major efforts in the muon sector towards, for example, μ → e conversion and muon g–2 experiments. The progress shown at J-PARC following repairs of the extensive earthquake damage was impressive.
The new result on the pseudoscalar coupling between the muon and the proton from the MuCAP experiment at PSI was presented and discussed, finally solving a long-standing puzzle and providing the first precise value of this Standard Model parameter. Interpretations within recent calculations based on effective field theory were presented, together with relevant ongoing precision measurements in the deuterium system.
In the light of the current construction of the SwissFEL free-electron laser at PSI, the possible use of such high-intensity photon or electron beams for particle-physics experiments attracted much interest – for example, the use of high-intensity lasers for “light-shining-through-a-wall” experiments, which search for weakly interacting sub-electronvolt particles. The final session of the workshop – held jointly with the detector workshop of the Swiss Institute of Particle Physics, CHIPP – provided an overview of state-of-the-art detector technology under development to cope with future high-intensity experiments.
Aside from fundamental science, the Hochrhein Bigband jazz concert delighted participants, as did the workshop dinner featuring a “fundamental classic” of Swiss cuisine – raclette. A workshop summary of the year 2034 provided an amusing outlook from a theoretician’s point of view of what might be important in particle physics 20 years from now. In the meantime, there was great encouragement on the part of all participants to meet again at PSI for PSI2016.
When the project for the Large Electron–Positron (LEP) collider began at CERN in the early 1980s, the programme required the concentration of all available CERN resources, forcing the closure not only of the Intersecting Storage Rings and its experiments, but of all the bubble chambers and several other fixed-target programmes. During this period, the LAA detector R&D project was approved at the CERN Council meeting in December 1986 as “another CERN programme of activities” (see box) opening a door to initiate developments for the future. A particular achievement of the project was to act as an incubator for the development of microelectronics at CERN, together with the design of silicon-strip and pixel detectors – all of which would become essential ingredients for the superb performance of the experiments at the LHC more than two decades later.
The start of the LAA project led directly to the build-up of know-how within CERN’s Experimental Physics Facilities Division, with the recruitment of young and creative electronic engineers. It also enabled the financing of hardware and software tools, as well as the training required to prepare for the future. By 1988, an electronics design group had been set up at CERN, dedicated to the silicon technology that now underlies many of the high-performing detectors at the LHC and in other experiments. Miniaturization to submicrometre scales allowed many functions to be compacted into a small volume in sophisticated, application-specific integrated circuits (ASICs), generally based on complementary metal-oxide-semiconductor (CMOS) technology. The resulting microchips incorporate analogue or digital memories, so selective read-out of only potentially useful data can be used to reduce the volume of data that is transmitted and analysed. This allows the recording of particle-collision events at unprecedented rates – the LHC experiments register 40 million events per second, continuously.
Last November, 25 years after the chip-design group was set up, some of those involved in the early days of these developments – including Antonino Zichichi, the initiator of LAA – met at CERN to celebrate the project and its vital role in establishing microelectronics at CERN. There were presentations from Erik Heijne and Alessandro Marchioro, who were among the founding members of the group, and from Jim Virdee, who is one of the founding fathers of the CMS experiment at the LHC. Together, they recalled the birth and gradual growth to maturity of microelectronics at CERN.
The beginnings
The story of advanced ASIC design at CERN began around the time of UA1 and UA2, when the Super Proton Synchrotron was operating as a proton–antiproton collider, to supply enough interaction energy for discovery of the W and Z bosons. In 1988, UA2 became, by chance, the first collider experiment to exploit a silicon detector with ASIC read-out. Outer and inner silicon-detector arrays were inserted into the experiment to solve the difficulty of identifying the single electron that comes from a decay of the W boson, close to the primary interaction vertex. The inner silicon-detector array with small pads could be fitted in the 9 mm space around the beam pipe, thanks to the use of the AMPLEX – a fully fledged, 16-channel 3-μm CMOS chip for read-out and signal multiplexing.
The need for such read-out chips was triggered by the introduction of silicon microstrip detectors at CERN in 1980 by Erik Heijne and Pierre Jarron. These highly segmented silicon sensors allow micrometre precision, but the large numbers of parallel sensor elements have to be dealt with by integrated on-chip signal processing. To develop ideas for such detector read-out, in the years 1984–1985 Heijne was seconded to the University of Leuven, where the microelectronics research facility had just become the Interuniversity MicroElectronics Centre (IMEC). It soon became apparent that CMOS technology was the way ahead, and the experience with IMEC led to Jarron’s design of the AMPLEX.
(Earlier, in 1983, a collaboration between SLAC, Stanford University Integrated Circuits Laboratory, the University of Hawaii and Bernard Hyams from CERN had already initiated the design of the “Microplex” – a silicon-microstrip detector read-out chip using nMOS, which was eventually used in the MARK II experiment at SLAC in the summer of 1990. The design was done in Stanford by Sherwood Parker and Terry Walker. A newer iteration of the Microplex design was used in autumn 1989 for the microvertex detector in the DELPHI experiment at LEP.)
Heijne and Jarron were keen to launch chip design at CERN, as was Alessandro Marchioro, who was interested in developing digital microelectronics. However, finances were tight after the approval of LEP. With the appearance of new designs, the tools and methodologies developed in industry had to be adopted. For example, performing simulations was better than the old “try-and-test technique” of wire wrapping, but this required the appropriate software, including licences and training. The LAA project came at just the right time, allowing the chip-design group to start work in the autumn of 1988, with a budget for workstations, design software and analysis equipment – and crucially, up to five positions for chip-design engineers, most of whom remain at CERN to this day.
On the analogue side, there were three lines to the proposed research programme within LAA: silicon-microstrip read-out, a silicon micropattern pixel detector and R&D on chip radiation-hardness. The design of the first silicon-strip read-out chip at CERN – dubbed HARP for Hierarchical Analog Readout Processor – moved ahead quickly. The first four-channel prototypes were already received in 1988, with work such as the final design verification and layout check still being done at IMEC.
The silicon micropattern pixel detector, with small pixels in a 2D matrix, required integration of the sensor matrix and the CMOS read-out chip, either in the same silicon structure (monolithic) or in a hybrid technology with the read-out chip “bump bonded” to each pixel. Such a chip was developed as a prototype at CERN in 1989 in collaboration with Eric Vittoz of the Centre Suisse d’Electronique et de Microtechnique and his colleagues at the École polytechnique fédérale de Lausanne. While it turned out that this first chip could not be bump bonded, it successfully demonstrated the concept. In 1991, the next pixel-read-out chip designed at CERN was used in a three-unit “telescope” to register tracks behind the WA94 heavy-ion experiment in the Omega spectrometer. This test convinced the physicists to propose an improved heavy-ion experiment, WA97, with a larger telescope of seven double planes of pixel detectors. This experiment not only took useful data, but also proved that the new hybrid pixel detectors could be built and exploited.
Research on radiation hardness in chips remained limited within the LAA project, but took off later with the programme of the Detector Research and Development Committee (DRDC) and the design of detectors for the LHC experiments. Initially, it was more urgent to show the implementation of functioning chips in real experiments. Here, the use of AMPLEX in UA2 and later the first pixel chips in WA97 were crucial in convincing the community.
In parallel, components such as time-to-digital converters (TDCs) and other Fastbus digital-interface chips were successfully developed at CERN by the digital team. The new simulation tools purchased through the financial injection from the LAA project were used for modelling real-time event processing in a Fastbus data-acquisition system. This was to lead to high-performance programmable Fastbus ASICs for data acquisition in the early 1990s. Furthermore, a fast digital 8-bit adder-multiplier with a micropipelined architecture for correcting pedestals, based on a 1.2 μm CMOS technology, was designed and used in early 1987. By 1994, the team had designed a 16-channel TDC for the NA48 experiment, with a resolution of 1.56 ns, which could be read out at 40 MHz. The LAA had well and truly propelled the engineers at CERN into the world of microtechnology.
The LAA
The LAA programme, proposed by Antonino Zichichi and financed by the Italian government, was launched as a comprehensive R&D project to study new experimental techniques for the next step in hadron-collider physics at multi-tera-electron-volt energies. The project provided a unique opportunity for Europe to take a leading role in advanced technology for high-energy physics. It was open to all physicists and engineers interested in participating. A total of 40 physicists, engineers and technicians were recruited, and more than 80 associates joined the programme. Later in the 1990s, during the operation of LEP for physics, the programme was complemented by the activities overseen by CERN’s Detector R&D Committee.
The challenge for the LHC
A critical requirement for modern high-energy-physics detectors is to be highly “transparent”, maximizing the interaction of particles with the active part of the sensors while minimizing interactions with auxiliary material such as electronics components, cables, cooling and mechanical infrastructure – all while consuming the absolute minimum of power. Detectors with millions of channels can be built only if each channel consumes no more than milliwatts of power. In this context, the developments in microelectronics offered a unique opportunity, allowing the read-out system of each detector to be designed to provide optimal signal-to-noise characteristics for minimal power consumption. In addition, auxiliary electronics such as high-speed links and monitoring electronics could be highly optimized to provide the best solution for system builders.
However, none of this was evident when thoughts turned to experiments for the LHC. The first workshop on the prospects for building a large proton collider in the LEP tunnel took place in Lausanne in 1984, the year following the discovery of the W and Z bosons by UA1 and UA2. A prevalent saying at the time was “We think we know how to build a high-energy, high-luminosity hadron collider – we don’t have the technology to build a detector for it.” Over the next six years, several seminal workshops and conferences took place, during the course of which the formidable experimental challenges started to appear manageable, provided that enough R&D work could be carried out, especially on detectors.
The LHC experiments needed special chips with a rate capability compatible with the collider’s 40 MHz/25 ns cycle time and with a fast signal rise time to allow each event to be uniquely identified. (Recall that LEP ran with a 22 μs cycle time.) Thin – typically 0.3 mm – silicon sensors could meet these requirements, having a dead time of less than 15 ns. With sub-micron CMOS technology, front-end amplifiers could also be designed with a recovery time of less than 50 ns, therefore avoiding pile-up problems.
Thanks to the LAA initiative and the launch in 1990 by CERN of R&D for LHC detectors, overseen by the DRDC, technologies were identified and prototyped that could operate well in the harsh conditions of the LHC. In particular, members of the CERN microelectronics group pioneered the use of special full custom-design techniques, which led to the production of chips capable of withstanding the extreme radiation environment of the experiments while using a commercially available CMOS process. The first full-scale chip developed using these techniques is the main building block of the silicon-pixel detector in the ALICE experiment. Furthermore, in the case of CMS, the move to sub-micron 0.25-μm CMOS high-volume commercial technology for producing radiation-hard chips enabled the front-end read-out for the tracker to be both affordable and delivered on time. This technology became the workhorse for the LHC and has been used since for many applications, even where radiation tolerance is not required.
An example of another area that benefited from an early launch, assisted by the LAA project, is optical links. These play a crucial role in transferring large volumes of data, an important example being the transfer from the front ends of detectors that require one end of the link to be radiation hard – again, a new challenge.
Today, applications that require a high number of chips can profit from the increase in wafer size, with many chips per wafer, and low cost in high-volume manufacturing. This high level of integration also opens new perspectives for more complexity and intelligence in detectors, allowing new modes of imaging.
Looking ahead
Many years after Moore’s law was suggested, miniaturization still seems to comply with it. There has been continuous progress in silicon technology, from 10 μm silicon MOS transistors in the 1970s to 20 nm planar silicon-on-insulator transistors today. Extremely complex FinFET devices promise further downscaling to 7 nm transistors. Such devices will allow even more intelligence in detectors. The old dream of having detectors that directly provide physics primitives – namely, essential primary information about the phenomena involved in the interaction of particles with matter – instead of meaningless “ADC counts” or “hits” is now fully within reach. It will no longer be necessary to wait for data to come out of a detector because new technology for chips and high-density interconnections will make it possible to build in direct vertex-identification, particle-momenta evaluation, energy sums and discrimination, and fast particle-flow determination.
Some of the chips developed at CERN – or the underlying ideas – have found applications in materials analysis, medical imaging and various types of industrial equipment that employ radiation. Here, system integration has been key to new functionalities, as well as to cost reduction. The Medipix photon-counting chip developed in 1997 with collaborators in Germany, Italy and the UK is the ancestor of the Timepix chip that is used today, for example, for dosimetry on the International Space Station and in education projects. Pixel-matrix-based radiation imaging also has many applications, such as for X-ray diffraction. Furthermore, some of the techniques that were pioneered and developed at CERN for manufacturing chips sufficiently robust to survive the harsh LHC conditions are now adopted universally in many other fields with similar environments.
Looking ahead to Europe’s top priority for particle physics, exploitation of the LHC’s full potential until 2035 – including the luminosity upgrade – will require not only the maintenance of detector performance but also its steady improvement. This will again require a focused R&D programme, especially in microelectronics because more intelligence can now be incorporated into the front end.
Lessons learnt from the past can be useful guides for the future. The LAA project propelled the CERN electronics group into the new world of microelectronic technology. In the future, a version of the LAA could be envisaged for launching CERN into yet another generation of discovery-enabling detectors exploiting these technologies for new physics and new science.
Aristotle said that “An iron ball of one hundred pounds, falling from a height of one hundred cubits [about 52 m], reaches the ground before a one-pound ball has fallen a single cubit.” Galileo Galilei replied, “I say that they arrive at the same time.” The universality of free fall illustrated by Galileo’s legendary experiment at the tower of Pisa was formulated by Isaac Newton in his Principia and became, with Albert Einstein, the weak equivalence principle (WEP): the motion of any object under the influence of gravity does not depend on its mass or composition. This principle is the cornerstone of general relativity.
The WEP has been verified to incredible precision by dropping experiments and Eötvös-type torsion balances, the latter reaching an amazing accuracy of one part in 10¹³. The acceleration of the Earth and the Moon towards the Sun has also been determined to the same accuracy by measuring the transit time of laser pulses between the Earth and the reflectors left on the Moon by the Apollo and Soviet space missions. But does the WEP also hold for antimatter, for which no direct measurement has been performed – in particular for antimatter particles such as positrons or antiprotons? Or does antimatter even fall up?
The purpose of the 2nd International Workshop on Antimatter and Gravity, which took place on 13–15 November, was to review the experimental and theoretical aspects of antimatter interaction with gravity. The meeting was hosted by the Albert Einstein Center for Fundamental Physics of the University of Bern, following the success of the first workshop held in 2011 at the Institut Henri Poincaré in Paris. The highlights are summarized here.
Free-fall experiments with charged particles are notoriously difficult because they must be carefully shielded from electromagnetic fields. For example, the sagging of the gas of free electrons in metallic shielding induces an electric field that can counterbalance the effect of gravity. Indeed, measurements based on dropping electrons led to a value of the acceleration of gravity, g, consistent with zero (instead of g = 9.8 m/s²). A free-fall experiment with positrons has not yet been performed, owing to the lack of suitable sources of slow positrons. In the 1980s, a team proposed a free-fall measurement of g with antiprotons at CERN’s Low Energy Antiproton Ring (LEAR), but it could not be performed before the closure of LEAR in 1996.
Using neutral antimatter such as antihydrogen can alleviate the disturbance from electromagnetic fields. The ALPHA collaboration at CERN’s Antiproton Decelerator (AD) has set the first free-fall limits on g with a few hundred antihydrogen atoms held for more than 400 ms in an octupolar magnetic field. The results exclude a ratio of the gravitational acceleration of antihydrogen to that of matter larger than 110 (for normal gravity) and smaller than −65 (for antigravity). Plans to measure this ratio at the level of 1% by using a vertical trap are under way.
Positronium matters
The AEgIS collaboration at the AD uses positronium produced by bombarding a nanoporous material with a positron pulse derived from a radioactive sodium source. The positronium (Ps) is then brought to highly excited states with lasers and mixed with captured antiprotons to produce antihydrogen (H̄) through the reaction Ps + p̄ → H̄ + e⁻. The highly excited antihydrogen atoms possess large electric dipole moments and can be accelerated with inhomogeneous electric fields to form an antihydrogen beam. The sagging of the beam over a distance of typically 1 m is measured with a two-grating deflectometer by observing the intensity pattern with high-resolution (around 1 μm) nuclear emulsions. AEgIS is currently being set up, with antiprotons (around 10⁵) and positrons (3 × 10⁷) successfully stacked. A first measurement of g is planned for 2015, with an initial goal of 1% uncertainty.
As a neutral system, positronium is also suitable for gravity measurements, but free-fall experiments are not easy because positronium lives for 140 ns only. Such studies require sufficiently cold positronium in long-lived, highly excited states and the appropriate atom optics. Preparations for a free-fall experiment at University College London are under way.
At ETH Zurich, a team is measuring the 1s → 2s atomic transition in positronium with a precision better than one part per billion (1 ppb) by using a high-intensity positron beam that traverses a solid neon moderator and impinges on a porous silica target. The positronium ejected from the target is laser-excited to the 2s state and the γ-decay rate is measured by scintillating crystals, as a function of laser frequency. The 1s → 2s frequency can be calculated from hydrogen data. For hydrogen, the frequency is redshifted in the gravitational potential of the Sun, but the shift cannot be observed because the clocks used to measure the frequency are equally redshifted. However, for positronium (equal amounts of matter and antimatter) and assuming antigravity, measurements should yield a higher frequency than is calculated from hydrogen. At the level of 0.1 ppb, such studies could even test the hypothesis of antigravity as the Earth revolves around the Sun.
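The size of the effect alluded to in the last sentence can be estimated from the Sun’s gravitational potential at the Earth’s orbit. The figures below are a back-of-the-envelope estimate using standard constants and the Earth’s orbital eccentricity, not numbers from the ETH analysis:

```python
# Back-of-the-envelope estimate of the solar gravitational redshift at the Earth
# and of its annual modulation due to the orbital eccentricity (illustrative only).
GM_SUN = 1.327e20   # m^3 s^-2
AU = 1.496e11       # mean Sun-Earth distance, m
C = 2.998e8         # speed of light, m/s
ECC = 0.0167        # eccentricity of the Earth's orbit

phi_over_c2 = GM_SUN / (AU * C**2)      # fractional shift from the Sun's potential
annual_mod = 2.0 * ECC * phi_over_c2    # peak-to-peak variation over one year

print(f"solar potential at 1 AU: Phi/c^2 ~ {phi_over_c2:.1e}")   # ~1e-8
print(f"annual modulation:       ~ {annual_mod:.1e}")            # ~3e-10
```

A frequency comparison at the 0.1 ppb level is therefore just sensitive to the annual variation of this shift, which is what would reveal an anomalous coupling of the antimatter constituent to the solar potential as the Earth revolves around the Sun.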
A similar experiment with muonium – an electron orbiting a positive muon – is planned at PSI in Switzerland. Ultra-slow muon beams with sub-millimetre sizes and sub-electronvolt energy for re-acceleration could also be used in a free-fall experiment employing gratings (a Mach–Zehnder interferometer).
Free-fall experiments
At CERN, the AD delivers bunches of 5.3 MeV antiprotons (3 × 10⁷) every 100 s. However, storing antiprotons requires lower energies, which are currently reached by passing the beam through thin degrader foils, albeit at the expense of substantial losses and a degraded beam size. Prospects for improved experiments are now bright with ELENA, a 30 m circumference electron-cooled ring that decelerates the AD beam further, to 100 keV (figure 1). ELENA will be installed in 2015 and will be available for physics in summer 2017.
The first free-fall experiment to profit from this new facility will be GBAR. Antihydrogen atoms will be obtained by the interaction of antiprotons from ELENA with a positronium cloud. The positrons will be produced by a 4.3 MeV electron linac. In contrast to AEgIS, the antihydrogen atom will capture a further positron to become a positively charged ion, which can be transferred to an electromagnetic trap, cooled to 10 mK with cold beryllium ions and then transported to a launching trap where the additional positron will be photodetached. The mean velocity of the antihydrogen atoms will be around 1 m/s and the fall distance will be about 30 cm. GBAR will be commissioned in 2017 with the initial goal of reaching 1% accuracy on g.
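As a quick cross-check of the quoted numbers, simple kinematics with a 30 cm drop and a 1 m/s mean launch velocity gives the time of flight and the rough timing precision that a 1% measurement of g would require (the error-budget split below is illustrative, not GBAR’s):

```python
# Kinematics for the quoted GBAR numbers: a ~30 cm free fall with a ~1 m/s
# mean launch velocity. The 1% error-budget split is illustrative only.
import math

g = 9.81    # m/s^2
h = 0.30    # fall distance, m
v = 1.0     # mean launch velocity, m/s

t_fall = math.sqrt(2 * h / g)     # time to fall h from rest, ~0.25 s
drift  = v * t_fall               # distance travelled at 1 m/s during the fall
print(f"fall time        : {t_fall * 1e3:.0f} ms")
print(f"drift at 1 m/s   : {drift * 1e2:.0f} cm")

# With g = 2h/t^2, a 1% result needs roughly 0.5% timing (and ~1% on h):
print(f"timing precision : ~{0.005 * t_fall * 1e3:.1f} ms")
```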
The sensitivity of GBAR, limited by the velocity distribution of the antihydrogen atoms, could be improved substantially by using quantum reflection, a fascinating effect that was discussed at the workshop. Slow antihydrogen atoms dropped towards a surface can be reflected rather than annihilated – the surface effectively acts as a mirror – and the combination with gravity leads to quantised gravitational states. A similar phenomenon was observed with cold neutrons at the Institut Laue–Langevin (ILL) in Grenoble. Now, the ILL team proposes to bounce the atoms in GBAR between two surfaces – a smooth lower surface to reflect antihydrogen atoms that are slow enough, and a rough upper surface to annihilate the fast ones. Transition frequencies between the gravitational levels – which depend on g – could also be measured by recording the annihilation rate on the bottom surface. Provided that the lifetime of these antihydrogen states is long enough, improvements of orders of magnitude could be obtained in the determination of g.
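The scales involved follow from the textbook quantum-bouncer problem; the sketch below (not from the article) uses the standard result that the energy levels are E_n = m g l₀ aₙ, with l₀ = (ħ²/2m²g)^(1/3) and aₙ the zeros of the Airy function:

```python
# Characteristic scales of gravitational quantum states for a particle of
# antihydrogen mass, from the standard quantum-bouncer solution. Sketch only;
# these numbers are not quoted in the article.
hbar = 1.0546e-34   # reduced Planck constant, J s
h    = 6.626e-34    # Planck constant, J s
m    = 1.674e-27    # antihydrogen mass, kg (~ proton + positron)
g    = 9.81         # m/s^2
airy = [2.338, 4.088]   # first two zeros of the Airy function

l0 = (hbar**2 / (2 * m**2 * g)) ** (1.0 / 3.0)   # characteristic height, ~6 um
E  = [m * g * l0 * a for a in airy]              # two lowest levels, J
f12 = (E[1] - E[0]) / h                          # transition frequency, Hz

print(f"characteristic height : {l0 * 1e6:.1f} um")
print(f"lowest level          : {E[0] / 1.602e-19 * 1e12:.2f} peV")
print(f"1 -> 2 frequency      : {f12:.0f} Hz  (depends on g)")
```

The level spacings correspond to transition frequencies of a few hundred hertz, so measuring them amounts to a spectroscopic determination of g.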
Atom interferometers might be able to measure g to within 10⁻⁶. In a Ramsey–Bordé interferometer, the falling atom interacts with pulses from two counter-propagating vertical laser beams. Having absorbed a photon from the first beam, the atom is stimulated to emit another photon at the frequency of the second beam, thereby modifying its momentum. The signal from the annihilating antihydrogen atom, for example at the top of the interferometer, interferes with that from another atom that has equal momentum but was not subject to the laser kick. The interference pattern depends on the value of g.
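In a light-pulse interferometer of this kind, the phase difference between the two paths scales as Δφ = k_eff g T², where k_eff is the effective two-photon wavevector and T the time between pulses. The numbers below are assumptions chosen only to illustrate the scaling, taking the quoted 10⁻⁶ to be a relative precision:

```python
# Illustrative scaling of a light-pulse interferometer: phase = k_eff * g * T^2.
# Wavelength and pulse separation are ASSUMED values, not parameters from the
# article.
import math

g   = 9.81
lam = 243e-9                       # assumed optical wavelength, m
k_eff = 2 * (2 * math.pi / lam)    # two counter-propagating photons -> ~2k
T   = 10e-3                        # assumed pulse separation, s

phase = k_eff * g * T**2
print(f"total phase                 : {phase:.2e} rad")
print(f"phase resolution for 1e-6 g : {phase * 1e-6:.2e} rad")
```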
In the more distant future, the Facility for Low-energy Antiproton and Ion Research (FLAIR) will become operational at GSI. As an extension to the high-energy antiproton facility, FLAIR will consist of a low-energy storage ring decelerating antiprotons from 30 MeV to 300 keV, followed by an electrostatic ring capable of reducing the energy even further, down to 20 keV. At FLAIR the antiproton flux will be an order of magnitude higher than at ELENA, and slow extracted antiproton beams will be available for experiments in nuclear and particle physics.
The question of how large an effect these free-fall experiments could measure cannot be answered without theoretical assumptions, such as exact symmetry between matter and antimatter (the CPT theorem). However, string theory can break CPT, and the Standard-Model Extension proposed by the Indiana/Carleton group allows for Lorentz and CPT violation. Also, atoms and nuclei contain virtual antiparticles in amounts that depend on the atomic number; the calculable quantum corrections agree with measurements, arguing against antigravity. However, there is a huge discrepancy in the value of the cosmological constant estimated from vacuum particle–antiparticle pair fluctuations, which calls into question our understanding of the interaction between gravity and virtual particles. As pointed out at the workshop, if all of the theoretical assumptions are valid, then antimatter experiments should not expect to see discrepancies in g at a level larger than 10⁻⁷. Ultimately, the issue must be settled by experiments.
To compare with matter, a presentation was given on the 10⁻⁹ precision achievable on g at the Swiss Federal Institute of Metrology (METAS) using a free-fall interferometer. Together with improved measurements of Planck’s constant with a watt balance, this might lead to a redefinition of the kilogram based on natural units.
The workshop also included a session on antimatter in the universe. Is there any antimatter and could it repel matter (the Dirac–Milne universe) and provide the accelerating expansion? Can the excess of positrons observed above 10 GeV by balloon experiments, the PAMELA satellite experiment and, more recently, the Fermi Gamma-ray Space Telescope and the Alpha Magnetic Spectrometer (AMS-02), be explained by antimatter annihilation?
In his summary talk, Mike Charlton of Swansea University concluded that “the challenge of measuring gravity on antihydrogen remains formidable”, but that “in the past decade the prospects have advanced from the totally visionary to the merely very difficult”.
The workshop, with 28 plenary talks, was attended by 70 participants. A visit to the house where Einstein spent the years 1903–1905 and dinner at Altes Tramdepot were part of the social programme.
CERN was founded in 1954 with the aim of bringing European countries together to collaborate in scientific research after the horrors of the Second World War. After the end of the war, however, Europe had been divided politically by the “Iron Curtain”, and countries in the Eastern Bloc were not in a position to join CERN. Nevertheless, through personal contacts dating back to pre-war days, scientists on either side of the divide were able to keep in touch. From the start, CERN had schemes to welcome physicists from outside its member states. At the same time, the bubble-chamber experiments in particular provided a way for research groups in the East to contribute to physics at CERN from their home institutes. The groups could analyse bubble-chamber events with relatively few resources and make their mark by choosing specific areas of analysis.
In the case of my country, Poland, this contact with CERN from the 1950s provided a precious window on modern science, allowing us to maintain a good level in particle physics. The first Polish physicist was welcomed to the laboratory in 1959 and was soon followed by others when CERN awarded several scholarships to young researchers from Cracow and Warsaw. Collaboration between CERN and Polish institutes followed, and despite the difficult circumstances, physicists in Poland were able to make important contributions to CERN’s research programmes. In 1963, the country gained observer status at CERN Council, as the only country from Eastern Europe.
My association with CERN began when I was a student at the Jagiellonian University in Cracow in the early 1970s, working on the analysis of events collected by the 2-m bubble chamber. During the 1960s, the experimental groups in Cracow and Warsaw had made the analysis of high-multiplicity events their speciality, and this was the topic for my doctoral thesis. The collaborative work with CERN gradually extended to electronic detectors, and from the 1970s Polish groups contributed hardware such as wire chambers to a number of experiments. By the time of the DELPHI experiment at the Large Electron–Positron (LEP) collider, Polish groups were contributing a variety of both hardware and software.
The start-up of LEP coincided with the big political changes in Eastern Europe at the end of the 1980s. Poland became the first former Eastern Bloc country to be invited to become a CERN member state, and in July 1991 my country became the 16th member of CERN – a moment of great pride. Hungary, the Czech Republic and Slovakia followed soon after.
The end of the 1980s also coincided with the development of the World Wide Web to help the large collaborations at LEP work together. It revolutionized the way we could work in our home institutions. In particular in Poland, a dedicated phone line set up in 1991 between CERN and the institutes in Cracow and Warsaw provided a “magic” link, allowing us, for example, to make changes remotely to software running underground at LEP.
It is hard today to imagine the world without the web. It was CERN’s gift to humanity – creating connections, allowing the exchange of ideas and communication between people all over the world. Developed in a scientific, non-commercial organization, the web’s international annual economic value is now estimated at €1.5 trillion. As Chris Llewellyn Smith, CERN’s director-general from 1994 to 1998, asked: how many yearly budgets of CERN have been saved because it was developed quickly in a non-commercial environment?
Now, after some four decades in particle physics, I have the enormous privilege to be president of CERN Council. I have already experienced the exceptional moment when the Israeli flag was raised for the first time at the Meyrin entrance to the laboratory, Israel being the first new member state to join the organization in 14 years. Other countries are at various stages in the process of accession to become member states or to attain associate membership. In discussions with physicists from these countries, I recognize the same feelings that we had in countries like Poland in the 1960s or 1970s.
As one person said to me recently, it is not only CERN as the organization, but the idea of CERN that has such a strong appeal. It brings people together from different nationalities and cultures, people who have different ways of doing things – and this brings added value. CERN really is something where the whole is greater than the sum of the parts, as we all work together towards a common goal – a noble goal – to learn more about the universe that we inhabit.
During the past 60 years, the idea of CERN has succeeded in the goal of bringing European countries to work peacefully together, helping to bridge the divisions that existed between East and West. I sincerely believe that this “idea” will continue to inspire people around the world for years to come.