Stories about how technology giants like Hewlett-Packard started in a wooden garage appeal to everyone. Private venture capital is constantly searching for the right spin-off or newly founded company to emulate such success stories. Likewise, politicians try to create the right atmosphere for technology-driven projects in the hope of creating jobs and driving economic growth. Institutes like CERN are expected to stimulate this process of rejuvenation by generating new ideas and supporting a high turnover of students and staff to spread those ideas. Most institutes duly hold open days, run technology fairs, compile lists of their in-house technologies, apply for patents and support technology transfer in general.
However, much of technology transfer is possibly more subtle and incremental in nature. It is as if ideas have a life of their own and work away steadily for years, often through seemingly unconnected events, while the different elements are being assembled. One example that has woven its way through the activities of CERN and its member states concerns a particle detector using artificial diamond, which is finding an expanding role in the LHC experiments and machine, as well as in other applications.
Following the tracks
The advanced-technology landscape is constantly evolving and there is no absolute beginning or end to any particular development. The action moves from one field to another and is driven forward by different goals at different times. Here, the quest to produce artificial diamond provides an appropriate starting point. This was akin to the search for the philosopher’s stone – recorded efforts date back a hundred years – but the first person to succeed was William G Eversole of the Union Carbide Corporation in the US in 1952. Contrary to intuition and the bulk of earlier work, he used a low-pressure process called chemical vapour deposition (CVD).
The CVD technology made it possible to manufacture diamond coatings, films and precise shapes. Prior to this time, natural diamonds had been demonstrated as UV detectors in the 1920s and as ionizing radiation detectors in the 1940s. The advent of CVD diamond removed limitations arising from size, shape and uncertainty in material characteristics and provided a rich potential for the development of sophisticated particle detectors.
The transition from fixed-target physics to colliding-beam physics during the 1970s stimulated a tremendous growth in the technology of particle detectors, and the requirements for speed and radiation hardness increased with each new collider project. In 1989 the DIAMAS collaboration of the Superconducting Super Collider (SSC) project in the US was the first to propose diamond for its particle trackers. With the closure of the SSC the focus moved to CERN, where the RD42 collaboration for the Development of Diamond Tracking Detectors for High Luminosity Experiments at the LHC was founded in 1994. This collaboration looked into CVD diamond technologies under the leadership of Peter Weilhammer of CERN, Harris Kagan of Ohio State University (and formerly of the DIAMAS collaboration) and William Trischuk, who was a founding member of RD42 and co-spokesperson in the early days.
One important activity in this group was the development of a beam condition monitor (BCM) for ATLAS, under the project leadership of Marko Mikuz. In fact, the first diamond BCM had been proposed and constructed some time earlier by Patricia Burchat and Harris Kagan for the BaBar experiment at SLAC. The CMS, ALICE and LHCb experiments quickly followed the lead from ATLAS and installed diamond beam monitors.
Just before RD42 got going, in 1993 Erich Griesmayer, a postdoc working for the AUSTRON study at CERN, nurtured the idea of building a gigahertz particle counter for medical applications and wrote a proposal for its use in hadron therapy. (AUSTRON was an initiative in technology transfer funded by the Austrian government and hosted by CERN, which lent its expertise in machine design. It later metamorphosed into MedAustron, which was recently funded for construction in Wiener Neustadt, Austria, but that is another story.) At that time, Griesmayer used silicon for his base calculations, although the material was too slow for what he had in mind.
In 1995, he returned to Austria to head the Department of Electrical Engineering of the Technische Fachhochschule in Wiener Neustadt and later its spin-off company FOTEC. There he pursued his ideas for a counter capable of resolving 10⁹ particles a second, still with hadron therapy in mind. Meanwhile, a fellow postdoc, Heinz Pernegger, was working at MIT in the Laboratory for Nuclear Science for the PHOBOS collaboration, building a silicon detector for the Relativistic Heavy Ion Collider at Brookhaven. Griesmayer and Pernegger found that they had a common interest and Griesmayer and his engineer Helmut Frais-Kölbl built the calibration electronics for PHOBOS. This was the start of a long and fruitful collaboration that continued within the RD42 collaboration. In particular, the pre-amplifiers for the ATLAS BCM were built for CERN by FOTEC.
This was already a successful spin-off story for Austria and CERN, demonstrating how CERN could stimulate hi-tech projects in member states, but history was to be made when the Wiener Neustadt Technische Fachhochschule became a full member of the ATLAS collaboration, supplying electronic components for the readout system of the new diamond BCM. This was an unprecedented move and an inspiration to educational institutions across Europe.
Diamond benefits
Compared with silicon, diamond produces a lower linear density of electron-hole pairs along the incident particle track, but this is more than balanced by the much higher electron and hole mobilities and a quasi-zero noise contribution from the diamond (see box). The leading edge of a single-particle pulse can be resolved in tens of picoseconds and individual pulses can be resolved on a nanosecond scale. Diamond is also extraordinarily resistant to radiation. This is not only a matter of withstanding damage: diamond also responds linearly to the incident flux, and its dynamic range is limited by the attached electronics rather than by the detector material. According to the application, a diamond detector can be configured as a particle-counting ionization chamber or an energy-measuring calorimeter.
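To put rough numbers on this trade-off, the sketch below compares the signal and collection speed for a minimum-ionizing particle in 500 μm of silicon and of diamond. The pair-creation yields and saturated drift velocities are representative textbook values, not RD42 measurements, so the result is indicative only.

```python
# Rough comparison of a minimum-ionizing particle's signal in 500 um of
# silicon versus diamond. The pair yields and drift velocities below are
# representative textbook values, not RD42 measurements.

PAIRS_PER_UM = {"silicon": 80.0, "diamond": 36.0}      # e-h pairs per um (approx.)
V_SAT_CM_PER_S = {"silicon": 1.0e7, "diamond": 2.0e7}  # saturated drift velocity

THICKNESS_UM = 500.0

for material in ("silicon", "diamond"):
    pairs = PAIRS_PER_UM[material] * THICKNESS_UM
    transit_ns = THICKNESS_UM * 1e-4 / V_SAT_CM_PER_S[material] * 1e9
    print(f"{material:8s}: ~{pairs:,.0f} e-h pairs, full drift in ~{transit_ns:.1f} ns")
```

Diamond yields roughly half the charge but collects it in roughly half the time, which, together with its negligible leakage current, is what makes the fast, low-noise pulses possible.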
The potential of the diamond detector was clear to Griesmayer, who conducted many tests on prototypes with different particles and particle energies at accelerators in Europe and the US. Eventually, he founded his own company, CIVIDEC Instrumentation GmbH, in December 2009, creating a second-generation spin-off. The company now produces beam-monitoring systems based on diamond detectors with ultra-fast, low-noise electronics. It also specializes in the R&D aspects of tailoring the systems to particular problems. CIVIDEC recently collaborated with CERN to instrument the LHC machine with diamond beam-loss monitors.
The LHC is back in action after the technical stop that began on 6 December, with initial preparations for the 2011 run in full swing. On 19 February the previous few weeks of careful preparation paid off, with circulating beams rapidly re-established. There then followed a programme of beam measurements and re-commissioning of the essential subsystems. The initial measurements show that the LHC is in good shape and magnetically little-changed from last year. The first collisions of 2011 were produced on 2 March, with stable beams and collisions for physics planned for later in the month.
In addition to the maintenance work, a number of modifications were made to the LHC during the technical stop. These included the installation of small solenoids to combat the build-up of electrons inside the vacuum chamber with increasing proton beam intensity; the replacement of a number of uninterruptible power-supply installations for essential systems such as the cryogenics; the installation of additional capacitors on the quench-protection system to prepare for a possible increase in beam energy in 2011; plus a host of other improvements to the RF, beam instrumentation, power converters and kickers.
During the same period similar maintenance took place on the injector chain, namely LINAC2, the Booster, the Proton Synchrotron (PS) and the Super Proton Synchrotron (SPS). An example of this work is the programme to exchange eight magnets in the SPS machine. This is part of regular preventive maintenance in which the SPS magnets are exhaustively tested at the end of each year and those presenting any initial signs of weakness are changed during the accelerator stop.
At the PS, the technical stop was used to begin the commissioning of the new PS main power supply (POPS), which will replace the old rotating machine that has powered the PS magnets since 1968. The PS power supply must be capable of delivering extremely high-power (60 MW) electrical pulses to the magnets and then reabsorbing the energy at each accelerator cycle, less than 2 s later. The rotating machine has been replaced by an enormous system of power converters and capacitors. The system is crucial because the PS is one of the lynchpins of CERN’s accelerator complex and any failure in the electrical system would practically paralyse all of the experiments.
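The scale of the task can be estimated from these figures. Assuming, purely for illustration, a triangular power profile peaking at 60 MW over a 0.7 s ramp (the profile and ramp time are assumptions, not published POPS parameters), the energy shuttled in and out of the magnets each cycle is of the order of 20 MJ:

```python
# Order-of-magnitude estimate of the energy POPS must deliver to, and then
# reabsorb from, the PS magnets each cycle. Only the 60 MW peak power and
# the <2 s cycle come from the text; the triangular power profile and
# 0.7 s ramp time are illustrative assumptions.

PEAK_POWER_W = 60e6
RAMP_TIME_S = 0.7        # assumed

energy_mj = 0.5 * PEAK_POWER_W * RAMP_TIME_S / 1e6   # area under the triangle
print(f"~{energy_mj:.0f} MJ shuttled in and out of the magnets per cycle")
```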
POPS was inaugurated and tested on 10 SPS test magnets in 2010 and then hooked up to the 101 PS main magnets for testing on 31 January 2011. The system was tested with gradually increasing current, right up to 6000 A. It then took a few days to hand over the operation of POPS from the specialists controlling it locally to the CERN Control Centre, prior to the crucial beam test on 11 February.
The successful completion of the upgrade to the Nuclotron at JINR marks the end of an important first step in the construction of the Nuclotron-based Ion Collider Facility and Multi-Purpose Detector (NICA/MPD) project. NICA, which is JINR’s future flagship facility in high-energy physics, will allow the study of heavy-ion collisions both in fixed-target experiments and in collider experiments with ¹⁹⁷Au⁷⁹⁺ ions at a centre-of-mass energy of 4–11 GeV (1–4.5 GeV/u ion kinetic energy) and an average luminosity of 10²⁷ cm⁻² s⁻¹. Other goals include polarized-beam collisions and applied research.
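The two quoted energy ranges are mutually consistent: for two identical beams colliding head-on, the centre-of-mass energy per nucleon pair is √s_NN = 2(T + m_N), where T is the kinetic energy per nucleon and m_N the nucleon mass. A quick check reproduces the quoted collider range:

```python
# Consistency check of NICA's quoted energies: for two identical beams
# colliding head-on, sqrt(s_NN) = 2 * (T + m_N), with T the kinetic
# energy per nucleon and m_N ~ 0.938 GeV the nucleon mass.

M_N = 0.938  # GeV

for T in (1.0, 4.5):   # GeV/u, the quoted kinetic-energy range
    print(f"T = {T} GeV/u -> sqrt(s_NN) = {2 * (T + M_N):.1f} GeV")
# Output: 3.9 GeV and 10.9 GeV, matching the quoted 4-11 GeV range.
```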
NICA’s main element is the Nuclotron, a 251 m circumference superconducting synchrotron for accelerating nuclei and multi-charged heavy ions, which started up in 1993. It currently delivers ion beams for experiments at internal targets and has a slow extraction system for fixed-target experiments. By 2007, it was accelerating proton beams to 5.7 GeV, deuterons to 3.8 GeV/u and nuclei (Li, F, C, N, Ar, Fe) to 2.2 GeV/u.
The Nuclotron upgrade – the Nuclotron-M project – was a key part of the first phase of construction work for NICA. It included work to develop the existing accelerator complex for the generation of relativistic ion beams with masses ranging from protons to gold and uranium, at energies corresponding to the maximum design magnetic field of 2 T. The goals were to reach a new level in beam parameters, to improve substantially the reliability and efficiency of accelerator operation, and to renovate or replace some of the equipment.
The Nuclotron facility includes a cryogenic supply system with two helium refrigerators, as well as infrastructure for the storage and circulation of helium liquid and gas. The injection complex consists of a high-voltage pre-injector with a 700 kV pulsed transformer and an Alvarez-type linac, LU-20, which accelerates ions of Z/A ≥ 0.33 up to an energy of 5 MeV/u. The wide variety of ion types is provided by a heavy-ion source, the ESIS “KRION-2”, a duoplasmatron, a polarized deuteron source, POLARIS, and a laser ion source for light ions.
As a key element of the NICA collider injection chain, the Nuclotron has to accelerate a single bunch of fully stripped heavy ions (U⁹²⁺, Pb⁸²⁺ or Au⁷⁹⁺) from 0.6 to 4.5 GeV/u with a bunch intensity of about 1–1.5 × 10⁹ ions. The particle losses during acceleration must not exceed 10% and the magnetic field should ramp at 1 T/s. To demonstrate the capacity of the Nuclotron complex and satisfy these requirements, the general milestones of the Nuclotron-M project were specified as the acceleration of heavy ions (with atomic masses larger than 100) and stable and safe operation of the dipole magnets at 2 T.
The upgrade, which started in 2007, involved the modernization of almost all of the Nuclotron systems, with time in six beam runs devoted to testing newly installed equipment. Two stages of the ring vacuum system were upgraded and cryogenic power was doubled. A new power supply for the electrostatic septum of the slow extraction system was constructed and tested, and new power supplies for the closed-orbit corrector magnets were also designed and tested at the ring. The ring’s RF system was upgraded to increase the RF voltage and for tests of adiabatic trapping of particles into the acceleration mode. Vacuum conditions at the Nuclotron’s injector were improved to increase the acceleration efficiency. A completely new power-supply system as well as a quench protection system for magnets and magnetic lenses were also constructed, including: new main power supply units; a new power supply unit for current decrease in the quadrupole lenses; 10 km of new cable lines; and 2000 new quench detectors. In parallel, there was also progress in the design and construction of new heavy-ion and polarized light-ion sources.
Following the Nuclotron’s modernization, in March 2010 ¹²⁴Xe⁴²⁺ ions were accelerated to about 1.5 GeV/u and slow extraction of the beam at 1 GeV/u was used for experiments. In December, stable and safe operation of the magnetic system was achieved with a main field of 2 T. During the run, the power-supply and quench-protection systems were tested in cycles with bending fields of 1.4, 1.6, 1.8 and 2 T at the plateau. The field ramped at 0.6 T/s and the active time for each cycle was about 7 s. A few tens of energy-evacuation events were recorded, and in all of them the process remained in the nominal regime.
In parallel with the upgrade work, the technical design was prepared for elements in the collider injection chain (a new heavy-ion linear accelerator, booster synchrotron and LU-20 upgrade programme). In addition, the technical design for the collider is in the final stage. The dipole and quadrupole magnets for the collider, as well as for the booster, are based on the design of the Nuclotron superconducting magnets. These have a cold-iron window-frame yoke and low-inductance winding made of a hollow composite superconductor; the magnetic-field distribution is formed by the iron yoke. The fabrication of these magnets gave JINR staff a great deal of experience in superconducting magnet design and manufacturing.
The prototype dipole magnet for the NICA booster was made in 2010 and construction of the magnet model for the collider, based on the preliminary design, is in the final stage. To construct the booster and collider rings, JINR needs to manufacture more than 200 dipole magnets and lenses during a short time period. The working area for magnet production and test benches for the magnet commissioning are currently being prepared.
The conferences on Computing in High Energy and Nuclear Physics (CHEP), which are held approximately every 18 months, reached their silver jubilee with CHEP 2010, held at the Academia Sinica Grid Computing Centre (ASGC) in Taipei in October. ASGC is the LHC Computing Grid (LCG) Tier 1 site for Asia and the organizers are experienced in hosting large conferences. Their expertise was demonstrated again throughout the week-long meeting, drawing almost 500 participants from more than 30 countries, including 25 students sponsored by CERN’s Marie Curie Initial Training Network for Data Acquisition, Electronics and Optoelectronics for LHC Experiments (ACEOLE).
Appropriately, given the subsequent preponderance of LHC-related talks, the LCG project leader, Ian Bird of CERN, gave the opening plenary talk. He described the status of the LCG, how it got there and where it may go next, and presented some measures of its success. The CERN Tier 0 centre moves some 1 PB of data a day, in- and out-flows combined; it writes around 70 tapes a day; the worldwide grid supports some 1 million jobs a day; and it is used by more than 2000 physicists for analysis. Bird was particularly proud of the growth in service reliability, which he attributed to many years of preparation and testing. For the future, he believes that the LCG community needs to be concerned with sustainability, data issues and changing technologies. The status of the LHC experiments’ offline systems was summarized by Roger Jones of Lancaster University. He stated that the first year of operations had been a great success, as presentations at the International Conference on High Energy Physics in Paris had indicated. He paid tribute to CERN’s support of the Tier 0 and remarked that data distribution had been smooth.
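Converted to sustained rates, these round numbers become easier to picture; the sketch below uses only the figures quoted above:

```python
# Sustained rates implied by the quoted Tier 0 and grid figures.
SECONDS_PER_DAY = 86400

bytes_per_day = 1e15     # ~1 PB a day, in- and out-flows combined
jobs_per_day = 1e6       # ~1 million grid jobs a day

print(f"~{bytes_per_day / SECONDS_PER_DAY / 1e9:.0f} GB/s of data movement")
print(f"~{jobs_per_day / SECONDS_PER_DAY:.0f} jobs launched per second")
```

That is roughly 12 GB/s of continuous data movement and a dozen new jobs every second, around the clock.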
In the clouds
As expected, there were many talks about cloud computing, including several plenary talks on general aspects, as well as technical presentations on practical experiences and tests or evaluations of the possible use of cloud computing in high-energy physics. It is sometimes difficult to separate hype from initiatives with definite potential but it is clear that clouds will find a place in high-energy physics computing, probably based more on private clouds than on the well known commercial offerings.
Harvey Newman of Caltech described a new generation of high-energy physics networking and computing models. As available network bandwidth continues to grow exponentially, LHC experiments are increasingly benefiting from it – to the extent that experiment models are being modified to make more use of pulling data to a job rather than pushing jobs towards the data. A recently formed working group is gathering new network requirements for future networking at LCG sites.
Lucas Taylor of Fermilab addressed the issue of public communications in high-energy physics. Recent LHC milestones have attracted massive media interest and Taylor stated that the LHC community simply has no choice other than to be open, and welcome the attention. The community therefore needs a coherent policy, clear messages and open engagement with traditional media (TV, radio, press) as well as with new media (Web 2.0, Twitter, Facebook, etc.). He noted major video-production efforts undertaken by the experiments, for example ATLAS-Live and CMS TV, and encouraged the audience to contribute where possible – write a blog or an article for publication, offer a tour or a public lecture and help build relationships with the media.
There was an interesting presentation of the Facility for Antiproton and Ion Research (FAIR) being built at GSI, Darmstadt. Construction will start next year and switch-on is scheduled for 2018. Two of the planned experiments are the size of ALICE or LHCb, with similar data rates expected. Triggering is a particular problem and data acquisition will have to rely on event filtering, so online farms will have to be several orders of magnitude larger than at the LHC (10,000 to 100,000 cores). This is a major area of current research.
David South of DESY, speaking on behalf of the Study Group for Data Preservation and Long-term Analysis in High-Energy Physics set up by the International Committee for Future Accelerators, presented what is probably the most serious effort yet for data preservation in high-energy physics. The question is: what to do with data after the end of an experiment? With few exceptions, data from an experiment are often stored somewhere until eventually they are lost or destroyed. He presented some reasons why preservation is desirable but needs to be properly planned. Some important aspects include the technology used for storage (should it follow storage trends, migrating from one media format to the next?), as well as the choice of which data to store. Going beyond the raw data, this must also include software, documentation and publications, metadata (logbooks, wikis, messages, etc.) and – the most difficult aspect – people’s expertise.
Although some traditionally plenary time had been given over to additional parallel sessions, there were still far too many submissions to be accommodated as oral presentations. So, almost 200 submissions were scheduled as posters, which were displayed in two batches of 100 each over two days. The morning coffee breaks were extended to permit attendees to view them and interact with the authors. There were also two so-called Birds of a Feather sessions, on LCG Operations and LCG Service Co-ordination, which allowed the audience to discuss aspects of the LCG service in an informal manner.
The parallel stream on Online Computing was, of course, dominated by LHC data acquisition (DAQ). The DAQ systems for all experiments are working well, leading to fast production of physics results. Talks on event processing provided evidence of the benefits of solid preparation and testing; simulation studies have proved to give an impressively accurate description of LHC data. Both the ATLAS and CMS collaborations report success with prompt processing at the LCG Tier 0 at CERN. New experiments, for example at FAIR, should be able to take advantage of the experiment frameworks currently used by all of the LHC experiments, although the analysis challenges of the FAIR experiments exceed those of the LHC. There was also a word of caution – reconstruction works well today but how will it cope with increasing event pile-up in the future?
Presentations in the software engineering, data storage and databases stream covered a heterogeneous range of subjects, from quality assurance and performance monitoring to databases, software recycling and data preservation. Once again, the conclusion was that the software frameworks for the LHC are in good shape and that other experiments should be able to benefit from them.
The most popular parallel stream of talks was dedicated to distributed processing and analysis. A main theme was the successful processing and analysis of data in a distributed environment, dominated, of course, by the LHC. The message here is positive: the computing models are mainly performing as expected. The success of the experiments relies on the success of the Grid services and the sites, but the hardest problems take far longer to solve than foreseen in the targeted service levels. The other two main themes were architectures for future facilities such as FAIR, the Belle II experiment at the SuperKEKB upgrade in Japan and the SuperB project in Italy; and improvements in infrastructure and services for distributed computing. The new projects are using a tier structure, but apparently with one layer fewer than in the LCG. Two new, non-high-energy-physics projects – the Fermi gamma-ray telescope and the Joint Dark Energy Mission – seem not to use Grid-like schemes.
Tools that work
The message from the computing fabrics and networking stream was that “hardware is not reliable, commodity or otherwise”; this statement from Bird’s opening plenary was illustrated in several talks. Deployments of upgrades, patches, new services are slow – another quote from Bird. Several talks showed that the community has the mechanisms, so perhaps the problem is in communications and not in the technology? Yes, storage is an issue and there is a great deal of work going on in this area, as shown in several talks and posters. However, the various tools available today have proved that they work: via the LCG, the experiments have stored and made accessible the first months of LHC data. This stream included many talks and posters on different aspects and uses of virtualization. It was also shown that 40 Gbit/s and 100 Gbit/s networks are a reality: network bandwidth is there but the community must expect to have to pay for it.
Compared with previous CHEP conferences, there was a shift in the Grid and cloud middleware sessions. These showed that pilot jobs are fully established, virtualization is entering serious large-scale production use and there are more cloud models than before. A number of monitoring and information-system tools were presented, as well as work on data management. Various aspects of security were also covered. Regarding clouds, although the STAR collaboration at the Relativistic Heavy Ion Collider at Brookhaven reported impressive production experience and there were a few examples of successful uses of Amazon EC2 clouds, other initiatives are still at the starting gate and some may not get much further. There was a particularly interesting example linking CernVM and BOINC. It was in this stream that one of the more memorable quotes of the week occurred, from Rob Quick of Fermilab: “There is no substitute for experience.”
The final parallel stream covered collaborative tools, with two sessions. The first was dedicated to outreach (Web 2.0, ATLAS Live and CMS Worldwide) and new initiatives (Inspire); the second to tools (ATLAS Glance information system, EVO, Lecture archival scheme).
• The next CHEP will be held on 21–25 May 2012, hosted by Brookhaven National Laboratory at the NYU campus in Greenwich Village, New York; see www.chep2012.org/.
CERN has announced that the LHC will run through to the end of 2012, with a short technical stop at the end of 2011. The beam energy for 2011 will be 3.5 TeV. This decision, taken by CERN management following the annual planning workshop held in Chamonix last week and a report delivered by the laboratory’s machine advisory committee, gives the LHC experiments a good chance of finding new physics in the next two years, before the machine goes into a long shutdown to prepare for higher-energy running starting in 2014.
“If the LHC continues to improve in 2011 as it did in 2010, we’ve got a very exciting year ahead of us,” says Steve Myers, CERN’s director for accelerators and technology. “The signs are that we should be able to increase the data-collection rate by at least a factor of three over the course of this year.”
The LHC was previously scheduled to run to the end of 2011 before going into a long technical stop to prepare it for running at the full design energy of 7 TeV per beam. However, the machine’s excellent performance in its first full year of operation forced a rethink. Improvements in 2011 should increase the rate at which the experiments can collect data by at least a factor of three compared with 2010. That would lead to enough data being collected in 2011 to bring tantalizing hints of any new physics that might be within reach of the LHC operating at its current energy. However, to turn those hints into a discovery would require more data than can be delivered in one year, hence the decision to postpone the long shutdown. Running through 2012 will give the LHC experiments the data needed to explore this energy range fully before moving up to higher energy.
“With the LHC running so well in 2010, and further improvements in performance expected, there’s a real chance that exciting new physics may be within our sights by the end of the year,” says Sergio Bertolucci, CERN’s director for research and computing. “For example, if nature is kind to us and the lightest supersymmetric particle, or the Higgs boson, is within reach of the LHC’s current energy, the data we expect to collect by the end of 2012 will put it within our grasp.”
The schedule foresees beams back in the LHC in late February and running through to mid-December. There will then be a short technical stop before resuming in early 2012.
• See also comments by CERN’s director-general, Rolf Heuer
In their quest to learn more about the fundamental nature of matter, high-energy physicists have developed particle accelerators to reach ever higher energies to allow them to “see” how matter behaved in the extreme conditions that existed in the very early universe. The LHC at CERN has set the latest record for this “energy frontier” in particle physics, but looking beyond the LHC, affordable colliders operating at ever larger centre-of-mass energies will call for new – perhaps even radical – approaches to particle acceleration.
In the past decade, the plasma wakefield accelerator (PWFA) has emerged as one such promising approach, thanks to the spectacular experimental progress at the Final Focus Test Beam (FFTB) facility at the SLAC National Accelerator Laboratory. Experiments there have shown that plasma waves or wakes generated by high-energy particle beams can accelerate and focus both high-energy electrons and positrons. Accelerating wakefields in excess of 50 GeV/m – roughly 3000 times the gradient in the SLAC linac – have been sustained in a metre-scale PWFA to give, for the first time using an advanced acceleration scheme, electron energy gains of interest to high-energy physicists.
To develop the potential of the PWFA and other exploratory advanced concepts for particle acceleration further, the US Department of Energy recently approved the construction of a new high-energy beam facility at SLAC: the Facility for Accelerator Science and Experimental Tests (FACET). It will provide electron and positron beams of high energy density, which are particularly well suited for next-generation experiments on the PWFA (Hogan et al. 2010).
In 2006 the FFTB facility was decommissioned to accommodate the construction of the Linac Coherent Light Source (LCLS) – the world’s first hard X-ray free-electron laser. The new FACET facility is located upstream of the injector for the LCLS (figure 1). It uses the first 2 km of the SLAC linac to deliver 23 GeV electron and positron beams to a new experimental area at Sector 20 in the existing linac tunnel. By installing a new focusing system and compressor chicane at Sector 20, the electron beam can be focused to 10 μm and compressed to less than 50 fs – dimensions appropriate for research on a high-gradient PWFA. Comparable positron beams will be provided with the addition of an upstream positron bunch-compressor in Sector 10. Peak intensities greater than 10²¹ W/cm² at a pulse repetition rate of 30 Hz will be routinely available at the final focus of FACET. Electron and positron beams of such high energy-density are not available to researchers anywhere else in the world.
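These parameters hang together. Assuming an illustrative bunch population of 2 × 10¹⁰ particles, about 3 nC (an assumption, not a figure from the text), a 23 GeV bunch compressed to 50 fs and focused to a 10 μm spot indeed reaches the quoted intensity scale:

```python
# Rough consistency check of the quoted peak intensity. The bunch
# population of 2e10 particles (~3 nC) is an assumed typical value,
# not a figure from the text.

E_BEAM_EV = 23e9        # beam energy per particle
N_PARTICLES = 2e10      # assumed bunch population
Q_E = 1.602e-19         # elementary charge (C)

bunch_energy_j = E_BEAM_EV * Q_E * N_PARTICLES     # ~74 J per bunch
peak_power_w = bunch_energy_j / 50e-15             # compressed to ~50 fs
spot_area_cm2 = (10e-4) ** 2                       # ~10 um x 10 um spot

print(f"peak intensity ~ {peak_power_w / spot_area_cm2:.1e} W/cm^2")  # ~1.5e21
```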
The construction phase of the FACET project started in July 2010 and should finish in April this year. Beam commissioning will follow and the first experiments are expected to begin in the summer. A recently completed shielding wall at the end of Sector 20 allows simultaneous operation of FACET and the LCLS.
The FACET beam will offer new scientific opportunities not only in plasma wakefield acceleration but also in dielectric wakefield acceleration, investigation of material properties under extreme conditions and novel radiation sources. To get a head start on the research opportunities, university researchers and SLAC physicists met at SLAC in March 2010 for the first FACET Users Workshop. This was the first opportunity for SLAC to unveil details about FACET’s capabilities and for the visiting scientists to outline their research needs. Beam time will be allocated using an annual, peer-reviewed proposal process.
In the PWFA a short but dense bunch of highly relativistic charged particles produces a space-charge density wave or a wake as it propagates through a plasma. As figure 2 shows, the head of the single bunch ionizes a column of gas – lithium vapour – to create the electrically neutral plasma and then expels the plasma electrons to set up the wakefield. As the plasma electrons rush outward, they create a longitudinally decelerating electric field that extracts energy from the head of the bunch. The plasma ions that are left behind create a restoring force that draws the plasma electrons back to the beam axis. When the electrons rush inwards, they create a longitudinally accelerating field in the back half of the wake, which returns energy to the particles in the back of the same bunch or alternately to a distinct second accelerating bunch. The plasma thus acts as an energy transformer.
The FFTB plasma wakefield experiments used a single 20 kA electron drive bunch to excite 50 GeV/m wakes in plasma of density 2.7 × 10¹⁷ e⁻/cm³. Energy was transferred from the particles in the front of the bunch to the particles in the tail of the same bunch via the wakefield. These experiments verified that the accelerating gradient scales inversely with the square of the bunch length and demonstrated that these large fields can be sustained over distances of a metre, leading to doubling of the energy of the initially 42 GeV electrons in the trailing part of the drive bunch.
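The 50 GeV/m figure is close to the cold-plasma wave-breaking field E₀ = mₑcωₚ/e at the quoted density, as the following check with standard plasma formulas (textbook relations, not FFTB-specific numbers) shows:

```python
import math

# Cold-plasma textbook estimates for the FFTB density quoted above.
n_m3 = 2.7e17 * 1e6       # density, converted from cm^-3 to m^-3
e = 1.602e-19             # elementary charge (C)
m_e = 9.109e-31           # electron mass (kg)
eps0 = 8.854e-12          # vacuum permittivity (F/m)
c = 2.998e8               # speed of light (m/s)

omega_p = math.sqrt(n_m3 * e**2 / (eps0 * m_e))  # plasma frequency (rad/s)
lambda_p_um = 2 * math.pi * c / omega_p * 1e6    # plasma wavelength
E0_gv_m = m_e * c * omega_p / e / 1e9            # wave-breaking field

print(f"plasma wavelength ~ {lambda_p_um:.0f} um")   # ~64 um
print(f"wave-breaking field ~ {E0_gv_m:.0f} GV/m")   # ~50 GV/m
```

The ~64 μm plasma wavelength also explains why sub-100 fs (tens of μm long) bunches are needed to drive the wake resonantly.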
Plasma wakefield acceleration will be a major area of research at FACET. Simply put, this research will strive to answer most of the outstanding physics issues for high-gradient plasma acceleration of both electrons and positrons, so that the potential for a PWFA as a technology for a future collider can be realistically assessed. The main goal of these future experiments is to demonstrate that plasma wakefield acceleration can not only provide an energy gain of giga-electron-volts for electron and positron bunches in a single, compact plasma stage, but can also accelerate a significant charge while preserving the emittance and energy spread of the beam.
The plasma wakefield experiments on FACET will need two distinct bunches, each about 100 fs long and separated by about 300 fs. The first carries a peak current of about 10 kA, both to produce a uniform, metre-long column of plasma and to drive the wake. The second bunch, which extracts energy from the wake, has a variable peak current. The sub-100 fs bunches needed for plasma wakefield acceleration are generated at FACET through a three-stage compression process that continually manipulates the longitudinal phase space so as to exchange correlated energy spread for bunch length, in a process called “chirped pulse compression”. There will be an additional collimation system within the final compression stage at FACET and the collimation in the transverse plane will result in structures in the temporal distribution of the final compressed bunch(es).
In this way FACET will produce two co-propagating bunches. By adjusting the charge and duration of the witness bunch, FACET will be able to pass from the regime of negligible beam-loading that has been studied so far to beam acceleration with strong wake-loading. By loading down or flattening the accelerating wakefield, FACET will accelerate the witness bunch with a narrow, well defined, energy spread as the simulation in figure 3 shows.
Improving accelerator performance is one of the forefronts of research in beam physics that can be explored at FACET
High-energy physics applications require not only high energies but also high beam power to deliver sufficient luminosity. For a linear collider with an energy of tera-electron-volts in the centre-of-mass, this translates to nearly 20 MW of beam power for a luminosity of 10³⁴ cm⁻² s⁻¹. When combined with the efficiencies of other subsystems (wall-plug to klystron to drive beam), maximizing the efficiency of the plasma interaction will be a crucial element in keeping down the overall costs of the facility. For example, a recent conceptual design for a PWFA-based linear collider (PWFA-LC) used a drive-beam-to-witness-beam coupling of 60% to achieve an overall efficiency of 15% (Seryi et al. 2009). Theoretical models and computer simulations have estimated the efficiency of the plasma interaction to be on the order of 60% for Gaussian beams and approaching 90% for specifically tailored current profiles (Tzoufras et al. 2008).
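How these efficiencies chain together can be sketched as follows; the 60% plasma coupling and the ~15% overall figure come from the text (Seryi et al. 2009), while the wall-plug-to-klystron and klystron-to-drive-beam numbers are assumptions chosen only to make the chain consistent:

```python
# Illustrative efficiency chain for a PWFA-based collider. Only the 60%
# plasma coupling and ~15% overall figure come from the text; the two
# upstream efficiencies are assumptions for illustration.

eta_wallplug_to_klystron = 0.55    # assumed
eta_klystron_to_drivebeam = 0.45   # assumed
eta_plasma_coupling = 0.60         # drive-beam-to-witness-beam, from the text

overall = (eta_wallplug_to_klystron *
           eta_klystron_to_drivebeam *
           eta_plasma_coupling)
print(f"overall wall-plug-to-beam efficiency ~ {overall:.0%}")       # ~15%
print(f"grid power for 20 MW of beam: ~{20e6 / overall / 1e6:.0f} MW")
```

At 15% overall, the 20 MW of beam power already implies well over 100 MW from the grid, which is why every few per cent of plasma-coupling efficiency matters.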
Improving accelerator performance using spatially and temporally shaped pulses is one of the forefronts of research in beam physics that can be explored at FACET. Tailoring the current profile of the drive beam allows the plasma to extract energy at a uniform rate along the bunch so as to maximize the overall efficiency. Figure 4 shows an example of such a tailored current profile for FACET and the accompanying simulated plasma wake. Bunch shaping has the added benefit of increasing the transformer ratio – that is, the peak accelerating field divided by the peak decelerating field. A larger transformer ratio will lead to more energy gain per plasma stage. Finally, tailoring the profile of the witness beam loads the accelerating wakefield to produce the desired narrow energy spread.
In addition to high beam power, the luminosity needed to do physics at the energy frontier will require state-of-the-art emittance with final beam sizes in the nanometre range. The ion column left in the wake of the drive beam provides a focusing channel with strong focusing gradients (MT/m for 10¹⁷ e⁻/cm³) that are linear in radius and constant along the bunch. This ion column allows a trailing witness bunch to propagate over many betatron wavelengths in a region free of geometric aberrations and emittance growth. There are, however, other sources of emittance growth in the PWFA. For instance, the hose instability between the beam and the wake (in which any transverse displacement of the beam slices grows as the beam propagates), motion of the plasma ions in response to the dense beam, synchrotron radiation and multiple-Coulomb scattering can all lead to emittance growth. For a plasma accelerator at a few tera-electron-volts, the latter two effects have been shown to be negligible for appropriately injected beams. Experiments at FACET will determine the influence of the electron hose instability and the ion motion on emittance growth.
Turning plasma wakefield acceleration into a future accelerator technology.
Although plasma wakefield acceleration may find applications in areas other than high-energy physics, such as compact X-FELs, collider applications will require plasmas to accelerate not only electrons but also positrons. Studies have already shown that relatively long positron bunches can create wakefields analogous to the electron case, which can be used to accelerate particles over distances of a metre or so with energy gains approaching 100 MeV (Blue et al. 2003). The response of the plasma to an incoming positron beam, however, differs from its response to an electron beam. In the positron case, the plasma electrons are drawn in towards the beam core. This leads to fields that vary nonlinearly in radius and position along the bunch, resulting in halo formation and emittance growth (Hogan et al. 2003 and Muggli et al. 2008). FACET will be the first facility in the world to deliver compressed positron bunches suitable for studying positron acceleration with gradients of giga-electron-volts per metre in high-density plasmas.
Recent studies have shown that there may be an advantage in accelerating positrons in the correct phase of the periodic wakes produced by an electron drive beam. A simple yet elegant study of this concept will be done at FACET by placing a converter target at the entrance of the plasma cell and allowing the trailing witness beam to create an e–/e+ shower. The positrons born at the correct phase will ride the multi-giga-electron-volt-per-metre wake through the plasma and emerge from the downstream end with a potentially narrow energy spread and emittance (Wang et al. 2008). In the longer term, FACET has been designed to allow an upgrade to the Sector 20 beam line, called a “sailboat chicane”, which will allow electron and positron bunches from the SLAC linac to be delivered simultaneously to the plasma entrance with varying separation in time (figure 5). By switching the bunch order and delivering the compressed positron beam to the plasma first, FACET can also study the physics of proton-driven plasma wakefield acceleration (CERN Courier March 2010 p7). The combination of high-energy, high-peak-current electron and positron beams will make FACET the premier facility in the world for studying advanced accelerator concepts and lead the way in turning plasma wakefield acceleration into a future accelerator technology.
• Work supported by the US Department of Energy under contract numbers DE-AC02-76SF00515 and DE-FG02-92ER40727.
On 18 December 2010, just after 6 p.m. New Zealand time, seven austral summers of construction came to an end as the last of 86 optical sensor strings was lowered into the Antarctic ice – IceCube was complete, a decade after the collaboration submitted the proposal. A cubic kilometre of ice has now been fully instrumented with 5160 optical sensors built to detect the Cherenkov light from charged particles produced in high-energy neutrino interactions.
The rationale for IceCube is to solve an almost century-old mystery: to find the sources of galactic and extragalactic cosmic rays. Neutrinos are the ideal cosmic messengers. Unlike charged cosmic rays they travel without deflection and, as they interact only weakly, they arrive at Earth even from sources at the Hubble distance. The flip side of their weak interaction with matter is that it takes a very large detector to observe them – this is where the 1 km³ of ice comes in. The IceCube proposal argues that 1 km³ is required to reach sensitivity to cosmic sources after several years of operation. This volume will allow IceCube to study atmospheric muons and neutrinos while searching for extra-terrestrial neutrinos with unprecedented sensitivity.
The concept is simple. A total of 5160 optical sensors turn a cubic kilometre of natural Antarctic ice into a 3D tracking calorimeter, measuring energy deposition by the amount of Cherenkov light emitted by charged particles. Each sensor is a complete, independent detector – almost like a small satellite – containing a photomultiplier tube 25 cm in diameter, digitization and control electronics, and built-in calibration equipment, including 12 LEDs.
Designing these digital optical modules (DOMs) was not easy. As well as the requirement for a high sampling speed of 300 million samples a second and a timing resolution better than 5 ns across the array (the actual time resolution is better than 3 ns), the DOMs needed to have the reliability of a satellite but on a much smaller budget. They were designed with a 15-year lifetime and operate from room temperature down to –55 °C, all the while using less than 5 W. This power per DOM may not sound like much, but it mounts up to about 10 planeloads of fuel a year. Nevertheless, the design was good, and 98% of the IceCube DOMs are working perfectly, with another 1% usable. Since the first deployments in January 2005, only a few DOMs have failed, so the 15-year lifetime should be met easily.
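The power budget behind the fuel remark can be sketched as follows; the generator yield of roughly 3 kWh of electricity per litre of fuel is an assumption, so the final figure is order-of-magnitude only:

```python
# The array's sensor power budget: 5160 DOMs at under 5 W each. The
# fuel conversion of ~3 kWh of electricity per litre of diesel is an
# assumption, so the last figure is order-of-magnitude only.

N_DOMS = 5160
WATTS_PER_DOM = 5.0
KWH_PER_LITRE = 3.0          # assumed generator yield

total_kw = N_DOMS * WATTS_PER_DOM / 1000.0
annual_kwh = total_kw * 24 * 365
fuel_litres = annual_kwh / KWH_PER_LITRE

print(f"array power: ~{total_kw:.0f} kW")               # ~26 kW
print(f"annual energy: ~{annual_kwh / 1000:.0f} MWh")   # ~226 MWh
print(f"fuel for the DOMs alone: ~{fuel_litres / 1000:.0f} kilolitres/yr")
```

A continuous 26 kW at the South Pole means tens of thousands of litres of fuel a year, every litre of which has to be flown in.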
Building the DOMs was only the first challenge. Because the shallow ice contains air bubbles, the DOMs must be placed deep, between 1450 and 2450 m below the surface. The sensors are deployed on strings, each containing 60 DOMs spaced vertically at 17 m intervals. Pairs of DOMs communicate with the surface via twisted pairs that transmit power, data, control signals and bidirectional timing calibration pulses. The 78 “original” strings are laid out on a 125 m triangular grid, covering 1 km2 on the surface. The remaining eight strings are then placed in the centre of IceCube, with a dense packing of 50 high-quantum-efficiency DOMs covering the bottom 350 m of the detector. This more densely instrumented volume, known as DeepCore, will be sensitive to neutrinos with energies as low as 10 GeV, which is an order of magnitude below the threshold for the rest of the array.
The key to assembling the detector was a fast drill. Hot water does the trick: a 200 gal/min stream of 88 °C water can melt a hole 60 cm in diameter and 2500 m deep in about 40 hours. It takes another 12 hours to attach the DOMs to the cable and lower them to depth. This proved fast enough to drill 20 holes in roughly two months.
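A rough energy balance shows why these figures are plausible. Using the flow, temperature and hole dimensions quoted above, plus standard properties of ice and water (the –40 °C starting temperature of the ice is an assumption), the hot water delivers a comfortable margin over the heat needed to melt the hole:

```python
import math

# Rough energy balance for hot-water drilling, using the figures in the
# text plus standard material properties. The -40 C ice temperature is
# an assumption.

FLOW_KG_S = 200 * 3.785 / 60   # 200 US gal/min of water, ~12.6 kg/s
C_WATER = 4186.0               # J/(kg K)
DT_WATER = 88.0                # water cools from 88 C to ~0 C

C_ICE = 2050.0                 # J/(kg K), specific heat of ice
L_FUSION = 3.34e5              # J/kg, latent heat of fusion
DT_ICE = 40.0                  # assumed warming of the ice, from -40 C
RHO_ICE = 917.0                # kg/m^3

hole_volume_m3 = math.pi * 0.30**2 * 2500.0     # 60 cm diameter, 2500 m deep
ice_mass_kg = hole_volume_m3 * RHO_ICE
heat_needed = ice_mass_kg * (C_ICE * DT_ICE + L_FUSION)

heat_delivered = FLOW_KG_S * C_WATER * DT_WATER * 40 * 3600  # over 40 h

print(f"heat to melt the hole: ~{heat_needed / 1e9:.0f} GJ")      # ~270 GJ
print(f"heat delivered in 40 h: ~{heat_delivered / 1e9:.0f} GJ")  # ~670 GJ
# The factor ~2.5 margin covers heat lost to the surrounding ice.
```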
Speed was vital because the construction season is necessarily short in this region – the Amundsen-Scott South Pole Station is accessible by plane for only four and a half months a year. Add the time to set up the drill at the start of the season and take it down at the end, and less than two months are left for drilling.
This brief description does not do justice to the host of difficulties faced by the construction crew. First, hot-water drills are not sold at hardware stores – many human-years of effort went into developing a reliable, fuel-efficient system. Second, the South Pole is one of the least hospitable places on Earth. Every piece of equipment and every gallon of fuel is flown in from McMurdo station, 1500 km away on the Antarctic coast. The altitude of 2800 m and the need to land on skis limited the cargo that could be carried: everything had to fit inside an LC-130 turboprop plane. The weather also complicates operations. Typical summer temperatures are between –15 °C and –45 °C, which is hard on both people and equipment. The need for warm clothing further exacerbates the effect of the high altitude; many tasks become challenging when you are wearing thick gloves and 10 kg of extreme cold-weather gear.
Nevertheless the collaboration succeeded. From the humble single string deployed in 2005 (and, incidentally, adequate by itself to see the first neutrinos), construction ramped up every year, reaching a peak of 20 strings deployed during the 2009/2010 season. This was good enough to allow for a shorter season this final year, leaving time to clean up and prepare the drill for long-term storage.
Even though IceCube has just been completed, the collaboration has been actively analysing data taken with the partially completed detector. This is also no simple matter. Even at IceCube’s depths, there are roughly a million times as many downwards-going muons produced in cosmic-ray air showers as there are upwards-going muons from neutrino interactions in the rock and ice below IceCube. To avoid false neutrino tracks, IceCube analyses must reject misreconstructed events with extreme efficiency. Worse still, IceCube is big enough to observe two or more muons, from different cosmic-ray interactions, simultaneously. Still, with stringent cuts to reject background events, it is possible to select an almost pure neutrino sample. In a one-year sample, taken with half of the full detector, IceCube collected more than 20 000 neutrinos. This sample was used to extend measurements of the atmospheric neutrino spectrum to an energy of 400 TeV. The events are being scrutinized for any deviation from the anticipated flux that would mean evidence of new neutrino physics or, on the more exotic side, deviations in neutrino arrival directions that could signal a breakdown of Lorentz invariance or Einstein’s equivalence principle.
With the 40-string event sample the collaboration has produced a map of the neutrino sky that has been examined for evidence of suspected cosmic-ray accelerators. None have been found, although it is important to realize that at this stage no signal is expected at a significant statistical level. For instance, we have reached a sensitivity that can observe a single cosmogenic neutrino for the higher end of the range of fluxes calculated. We have also started to probe the neutrino flux predicted from gamma-ray bursts, assuming that they are the sources of the highest-energy cosmic rays.
The first surprise from IceCube does not involve neutrinos at all. IceCube triggers on cosmic-ray muons at a rate of about 2 kHz, thus collecting billions of events a year. These muons have energies of tens of tera-electron-volts and are produced in atmospheric interactions by cosmic rays with energies of hundreds of tera-electron-volts, i.e. the highest-energy galactic cosmic rays. A skymap of well reconstructed muons with an average energy of 20 TeV reveals a rich structure with a dominant excess in arrival directions pointing at the Vela region. These muons come from cosmic rays with energies of many tens to hundreds of tera-electron-volts; the gyroradius of these particles in the microgauss field of the galaxy is of the order of 0.1 parsec, too large for them to be affected by our solar neighbourhood. However, these radii are far too small for the cosmic rays to point back even to the nearest star, never mind a candidate source such as the Vela pulsar or any other distant source at more than 100 parsec.
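The 0.1 parsec figure follows from the standard Larmor-radius formula r = p/(qB); a quick check for a 100 TeV proton in a microgauss field:

```python
# Larmor radius of a cosmic-ray proton in the galactic magnetic field,
# r = p / (qB), for the energy and field strength quoted in the text.

E_EV = 1e14          # ~100 TeV proton
B_T = 1e-10          # 1 microgauss, in tesla
C = 2.998e8          # m/s
Q = 1.602e-19        # C
PARSEC_M = 3.086e16

p_si = E_EV * Q / C          # ultra-relativistic: p ~ E/c, in kg m/s
r_m = p_si / (Q * B_T)       # Larmor radius
print(f"gyroradius ~ {r_m / PARSEC_M:.2f} pc")   # ~0.11 pc
```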
Either we do not understand propagation in the field, or we do not understand the field itself
There is some mystery here: either we do not understand propagation in the field, or we do not understand the field itself. Does the detector work? Definitely: we observe in the same data sample the Moon’s shadow in cosmic rays at more than 10 σ, as well as the dipole resulting from the motion of the Earth around the Sun relative to the cosmic rays.
Additionally, IceCube has established the tightest limits yet on dark matter in the form of weakly interacting massive particles (WIMPs) with spin-dependent interactions with ordinary matter. In the alternative case – dominant spin-independent interactions – IceCube’s limits are almost competitive with direct searches. In addition, by monitoring the signal rates from its photomultiplier tubes, IceCube will be sensitive to million-electron-volt neutrinos from supernova explosions anywhere in the galaxy.
Looking forward
Now the 220-strong IceCube collaboration – with members from the US, Belgium, Germany, Sweden, Barbados, Canada, Japan, New Zealand, Switzerland and the UK – is eagerly looking forward to analysing data from a complete and stable detector. Analysing and simulating data from an instrument that changed every Antarctic season has been a challenge.
At the same time, neutrino astronomers are thinking about the future. Even IceCube is too small to collect a significant number of events at the highest energies. This has already been pointed out in the case of cosmogenic neutrinos, with typical energies in excess of 10⁶ TeV, which are produced when ultra-high-energy cosmic rays interact with photons in the cosmic microwave background. To observe these neutrinos requires a much larger detector: physicists are aiming for a volume of 100 km³. This will require a new technology, and several groups are already deploying antennas to observe the brief, coherent radio Cherenkov pulses emitted by neutrino-induced showers. The advantages of radio detection are that the signal is coherent, so its power scales as the square of the neutrino energy, and that radio signals have longer attenuation lengths than light, allowing detectors to be placed on a 1 km, rather than 125 m, grid. The cost is that radio detectors have energy thresholds much higher than IceCube’s.
Each year, the Topical Workshop on Electronics for Particle Physics (TWEPP) provides the opportunity for experts to come together to discuss electronics for particle-physics experiments and accelerator instrumentation. Established in 2007, it succeeds the workshops initiated in 1994 to focus on electronics for the LHC experiments, but with a much broader scope. As the LHC experiments have now reached stable operating conditions, the emphasis is shifting further towards R&D for future projects, such as the LHC upgrades, the studies for the Compact Linear Collider and the International Linear Collider, as well as neutrino facilities and other experiments in particle and astroparticle physics.
The latest workshop in the series, TWEPP-2010, took place on 20–24 September at RWTH Aachen University and attracted 190 participants, mainly from Europe but also from the US and Japan. It covered a wide variety of topics, including electronics developments for particle detection, triggering and data acquisition; custom analogue and digital circuits; optoelectronics; programmable digital logic applications; electronics for accelerator and beam instrumentation; and packaging and interconnect technology. The programme of plenary and parallel sessions featured 16 invited talks together with 63 oral presentations and 66 poster presentations selected from a total of 150 submissions – an indication of the attractiveness of the workshop concept. The legacy of the meeting as a platform for the discussion of common LHC electronics developments is reflected in several electronics working groups for the super-LHC (sLHC) project holding their bi-annual meeting during the workshop, namely the Working Groups for Power Developments and for Optoelectronics, as well as the Microelectronics User Group. In addition, two new working groups on Single Event Upsets and on development of electronics in the emerging xTCA standard had “kick-off” meetings during TWEPP-2010.
After a welcome and introduction to particle physics in the host country and the host institute (see box), the opening session continued with “Physics for pedestrians”, a talk by Patrick Michel Puzo of the Laboratoire de l’Accélérateur Linéaire, Orsay, in which he explained the Standard Model of particle physics, as well as experimental measurement techniques, to the audience of hardware physicists and engineers. DESY’s Peter Göttlicher went on to present the European X-ray Free Electron Laser project (XFEL) currently under construction at DESY. This fourth-generation light source will provide ultra-short flashes of intense and coherent X-ray light for the exploration of the structure and dynamics of complex systems, such as biological molecules. Dedicated two-dimensional camera systems, such as the Adaptive Gain Integrating Pixel Detector (AGIPD), are being developed to record up to 5000 images a second with a resolution of 1 megapixel. The session closed with a summary of the status of the LHC by CERN’s Ralph Assmann, who also discussed the expected and observed limitations and prospects for further increases in intensity, luminosity and beam energy at the LHC, as well as short- and long-term planning.
From ASICs to optical links
For the next three days, morning and afternoon sessions began with plenary talks, after which the audience separated into two parallel sessions. With 20 presentations, the session on application-specific integrated circuits (ASICs) was again by far the most popular, demonstrating the demand of chip designers for a forum in which to present and discuss their work. One increasingly important aspect in the next generation of experiments with high radiation levels is the mitigation of single-event effects (SEEs), such as single-event upsets (SEUs), which are caused by the interaction of particles with the semiconductor material. Deep-submicron integrated-circuit technologies with low power consumption are becoming increasingly sensitive to SEEs and this must be carefully taken into account at both the system level and the ASIC design level. Invited speaker Roland Weigand of the European Space Agency gave an insight into the various approaches to SEE mitigation that are employed in space applications, where integrated circuits are exposed to solar and cosmic radiation.
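A classic mitigation at the design level – mentioned here as general background rather than as a summary of Weigand's talk – is triple modular redundancy, in which every state element is triplicated and a majority vote masks any single upset. A minimal sketch of the voting logic:

```python
# Triple modular redundancy: triplicate a stored value and majority-vote,
# so that a single-event upset in any one copy is masked. Illustrative
# sketch only; real designs triplicate flip-flops in hardware.

def majority(a: int, b: int, c: int) -> int:
    """Bitwise majority vote of three redundant copies."""
    return (a & b) | (b & c) | (a & c)

# A stored value, triplicated at write time.
copy_a = copy_b = copy_c = 0b1011

# An SEU flips a bit in one copy...
copy_b ^= 0b0010

# ...but the voted output is still correct.
assert majority(copy_a, copy_b, copy_c) == 0b1011
print("voted value:", bin(majority(copy_a, copy_b, copy_c)))
```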
A relatively new development is the 3D integration of circuits, where several circuit layers are stacked on each other and interconnected, for example by through-silicon vias. The advantages include a reduction in chip area, reduced power consumption, a high interconnection density and the possibility of combining different processes in one device. Within particle physics, a possible future application is in the upgrades of the large silicon trackers of the LHC experiments. Kholdoun Torki from Circuits Multi-Projets, Grenoble, presented the plans for a 3D multiproject wafer run for high-energy physics, which allows several developers to share the cost of low-volume production by dividing up the reticle area.
The parallel session on “Power, grounding and shielding” focused mainly on novel power-provision schemes for upgrades of the LHC experiments, namely serial powering and DC–DC conversion. An increase in the number of readout channels and the possible implementation of additional functionality, such as a track trigger, in the tracker upgrades of ATLAS and CMS will lead to higher front-end power consumption and consequently to larger power losses in the already-installed supply cables, as well as to an excessive increase in the material budget of the power services. New ways to deliver the power therefore need to be devised. Both of the new schemes discussed solve this problem by lowering the current to be delivered, as the sketch below illustrates. In serial powering, this is done by daisy-chaining many detector modules, while DC–DC conversion schemes provide the power at a higher voltage and lower current, with on-detector voltage conversion. These topics were further expanded in the session of the Working Group for Power Developments.
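The gain comes from the fact that resistive loss in a cable scales with the square of the current it carries. The following sketch illustrates this with made-up numbers; the cable resistance, module count, currents and conversion ratio are all illustrative, not ATLAS or CMS upgrade parameters:

```python
# Why lowering the delivered current helps: cable loss scales as I^2 * R.
# All numbers below are made-up illustrative values, not ATLAS or CMS
# upgrade parameters.

R_CABLE = 0.5     # ohms, round-trip resistance of an installed cable
N_MODULES = 10    # detector modules to power
I_MODULE = 2.0    # amperes drawn by each module

# Direct powering: the cable carries the full current of all modules.
loss_direct = (N_MODULES * I_MODULE) ** 2 * R_CABLE

# Serial powering: modules are daisy-chained, so the cable carries only
# one module's worth of current (at N times the voltage).
loss_serial = I_MODULE ** 2 * R_CABLE

# DC-DC conversion: deliver at r times the voltage and 1/r times the
# current, then convert down on the detector (conversion loss ignored).
r = 8
loss_dcdc = (N_MODULES * I_MODULE / r) ** 2 * R_CABLE

print(f"direct: {loss_direct:.0f} W, serial: {loss_serial:.0f} W, "
      f"DC-DC (r={r}): {loss_dcdc:.1f} W")   # 200 W vs 2 W vs 3.1 W
```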
Another parallel session was devoted to the topic of optoelectronics and optical links. Data transmission via optical links is already standard in the LHC experiments because such links do not suffer from noise pick-up and contribute less material than the classic copper wires. In the session and in the following working-group meeting, presentations focused on experience with installed systems as well as on new developments, in particular for the Versatile Link project, which will develop high-speed optical-link architectures and components suitable for deployment at the sLHC. In an inspiring talk, invited speaker Mark Ritter of IBM expanded on optical technologies for data communication in large parallel systems. He explained that scaling in chip performance is now constrained by limitations on electrical communication bandwidth and power dissipation and he described how optical technologies can help overcome these constraints. The combination of silicon nanophotonic transceivers and 3D integration technologies might be the way forward, with a photonic layer integrated directly into the chip such that on-board data transmission between the individual circuit layers is performed optically.
First LHC experience
A highlight of this year’s workshop was the topical session devoted to the performance of the LHC detectors and electronics under the first beam conditions. Gianluca Aglieri Rinella of CERN presented the experience with ALICE, a detector designed specifically for the reconstruction of heavy-ion collisions, where high particle multiplicities and large event sizes are expected. He showed that more than 90% of the channels are alive in most of the ALICE detector subsystems, with a data-taking efficiency of around 80%. The ALICE collaboration’s goal for proton–proton collisions is to collect a high-quality, minimum-bias sample with low pile-up in the time projection chamber, corresponding to an interaction rate of 10 kHz. For this reason, the peak luminosity at ALICE is deliberately reduced during proton–proton running.
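To see why ALICE caps the interaction rate, one can estimate the pile-up in the time projection chamber, whose drift time is of order 90 μs (an assumed round number here, not a figure from the talk). The mean number of collisions overlapping one drift window is simply rate × drift time, and the chance of a clean event follows from Poisson statistics:

```python
import math

rate = 10e3    # interaction rate in Hz, as quoted in the talk
drift = 90e-6  # assumed TPC drift time of ~90 microseconds

mu = rate * drift  # mean number of extra collisions overlapping one drift window
# Poisson probability of a pile-up-free drift window: P(0 extra) = exp(-mu)
print(f"mean pile-up per drift window: {mu:.2f}")
print(f"probability of no pile-up:     {math.exp(-mu):.2f}")
```

Even at 10 kHz the mean overlap is already close to one event per drift window, which is why the luminosity at the ALICE interaction point is levelled down rather than maximized.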
Thilo Pauly of CERN presented the ATLAS report. He showed that more than 97% of the channels are operational for all detector systems and that 94% of the delivered data are good for physics analysis. The ATLAS momentum scale for tracks at low transverse momentum is measured with a precision of a few per mille, while the energy scale for electromagnetic showers is known from the reconstruction of neutral pions to better than 2%. The experience of CMS, presented by Anders Ryd of Cornell University, is similarly positive, with all subsystems being 98% functional with a data-taking efficiency of 90%. He explained that the collaboration struggled for a while with the readout of high-occupancy beam-induced events in the pixel detector – the main reason for detector downtime – but managed to solve the problem.
Last but not least, Karol Hennessy of the University of Liverpool reported on LHCb, which is optimized to detect decays of beauty and charm hadrons for the study of CP violation and rare decays. This experiment has had a detector uptime of 91% and a fraction of working channels above 99% in most subdetectors. One specialty is the Vertex Locator – a silicon-strip detector consisting of retractable half-discs whose innermost region is only 8 mm away from the beam. This detector reaches an impressive peak spatial resolution of 4 μm.
Posters and more
The well-attended poster session took place in the main lecture hall and featured 66 posters. Discussions were so lively that the participants had to be reminded to stop, because otherwise they would miss the guided city tour. The workshop dinner took place in the coronation hall of the town hall, where participants were welcomed by the mayor of Aachen. The dinner saw the last speech by CERN’s François Vasey as chair of the Scientific Organizing Committee. He became workshop chair in 2007, shaping the transition to TWEPP, and after four successful workshops he now passes the baton to Philippe Farthouat, also of CERN. The next workshop in the series will take place on 26–30 September 2011 in Vienna.
TWEPP-10 was organized by the Physikalisches Institut 1B, RWTH Aachen University, with support from Aachen University, CERN and ACEOLE, a Marie Curie Action at CERN funded by the European Commission under the 7th Framework Programme.
The concept of a particle collider was first laid down by Rolf Widerøe in a German patent that was registered in 1943 but not published until 1952. It proposed storing beams and allowing them to collide repeatedly so as to attain a high energy in the centre-of-mass. By 1956, the first ideas for a realistic proton–proton collider were being publicly discussed, in particular by Donald Kerst and Gerard O’Neill. At the end of the same year, while CERN’s Proton Synchrotron (PS) was still under construction, the CERN Council set up the Accelerator Research Group, which from 1960 onwards focused on a proton–proton collider. By 1962, the group had chosen an intersecting ring layout for the collider over the original concept of two tangential rings because it offered more collision points. Meanwhile, in 1961, CERN had been asked by Council to include a 300 GeV proton synchrotron in the study.
In 1960, construction began on a small proof-of-principle 1.9 MeV electron storage ring, the CERN Electron Storage and Accumulation Ring (CESAR), for experimental studies of particle accumulation (stacking). This concept, which had been proposed by the Midwestern Universities Research Association (MURA) in the US in 1956, would be essential for obtaining sufficient beam current and, in turn, luminosity. The design study for the Intersecting Storage Rings (ISR) – two interlaced proton-synchrotron rings crossing at eight points – was published in 1964.
After an intense and sometimes heated debate, Council approved the principle of a supplementary programme for the ISR at its meeting in June 1965. The debate was between those who favoured a facility to peep at interactions at the highest energies and those who preferred intense secondary beams with energies higher than that provided by the PS. Those who were against the ISR were also afraid of the leap in accelerator physics and technology required by this venture, which appeared to them as a shot into the dark.
France made land available for the necessary extension to the CERN laboratory and the relevant agreement was signed in September 1965. The funds for the ISR were allocated at the Council meeting in December of the same year and Kjell Johnsen was appointed project leader. In the end, Greece was the only one of the 14 member states whose budget did not allow it to participate in the construction. In parallel, the study of a 300 GeV proton synchrotron was to be continued; this would eventually lead to the construction of the Super Proton Synchrotron (SPS) at CERN.
Figure 1 shows the ISR layout with the two rings intersecting at eight points at an angle of 15°. To create space for straight sections and to keep the intersection regions free of bulky accelerator equipment, the circumference of each ring was set at 943 m, or 1.5 times that of the PS. Out of the eight intersection regions (I1–I8) six were available for experiments and two were reserved for operation (I3 for beam dumping and I5 for luminosity monitoring).
The ISR construction schedule benefited from the fact that the project had already been studied for several years and many of the leading staff had been involved in the construction of the PS. The ground-breaking ceremony took place in autumn 1966 and civil engineering started for the 15-m wide and 6.5-m high tunnel, using the cut-and-fill method at a level 12 m above the PS to minimize excavation. The construction work also included two large experimental halls (I1 and I4) and two transfer tunnels from the PS to inject the counter-rotating beams. In parallel, the West Hall was built for fixed-target physics with PS beams; ready in July 1968, it was used for assembling the ISR magnets. The civil engineering for the ISR rings was completed in July 1970, including the earth shielding. The production of the magnet steel started in May 1967 and all of the major components for the rings had been ordered by the end of 1967. The first series magnets arrived in summer 1968 and the last magnet unit was installed in May 1970.
Pioneering performance
The first proton beam was injected into Ring 1 in October 1970 and circulated immediately. Once Ring 2 was available, the first collisions occurred on 27 January 1971 at a beam momentum of 15 GeV/c (figure 2). By May, collisions had taken place at 26.5 GeV/c per beam – the maximum momentum provided by the PS – which was equivalent to protons of 1500 GeV hitting a fixed target.
In the first year of operation, the maximum circulating current was already 10 A, the luminosity was as high as 3×10²⁹ cm⁻² s⁻¹ and the loss rates at beam currents of up to 6 A were less than 1% per hour (compared with a design loss rate of 50% in 12 h). Happily, potentially catastrophic predictions that the beams would grow inexorably and be lost – because, unlike in electron machines, the stabilizing influence of synchrotron radiation would be absent – proved to be unfounded.
The stacking in momentum space, pioneered by MURA, was an essential technique for accumulating the intense beams. In this scheme, the beam from the PS was slightly accelerated by the RF system in the ISR and the first pulse was deposited at the highest acceptable momentum on an outer orbit in the relatively wide vacuum chamber. Subsequent pulses were added until the vacuum chamber was filled up to an orbit close to the injection orbit, which was on the inside of the chamber. This technique was essential for the ISR and had been proved experimentally to work efficiently in CESAR.
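As a cartoon of the procedure, each PS pulse occupies a thin radial slice of the stacking aperture, so the accumulated current is simply the number of slices times the current per pulse. All numbers in the sketch below are invented for illustration and should not be read as ISR parameters:

```python
# Toy model of momentum stacking: each PS pulse is RF-accelerated into the stack
# and deposited just below the previous one, filling the aperture from the
# outer (high-momentum) orbit down towards the injection orbit.

aperture_mm = 70.0     # assumed usable radial width of the stacking region
pulse_width_mm = 0.25  # assumed radial space occupied by one deposited pulse
pulse_current_a = 0.1  # assumed circulating current contributed per PS pulse

n_pulses = int(aperture_mm / pulse_width_mm)
stacked_current = n_pulses * pulse_current_a
print(f"{n_pulses} pulses stacked, ~{stacked_current:.0f} A circulating")
```

The point of the exercise is only that hundreds of modest PS pulses, each parked on its own orbit, add up to the tens of amperes of coasting beam that the ISR needed for useful luminosity.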
The design luminosity was achieved within two years of start-up and then increased steadily, as figure 3 shows. It was boosted in particular in I1 (and originally, for one year, in I7) and later in I8 by low-beta insertions that decreased the vertical size of the colliding beams. The first low-beta insertion, which consisted largely of quadrupoles borrowed from the PS, DESY and the Rutherford Appleton Laboratory, increased the luminosity by a factor of 2.3. Later, for the second insertion, more powerful superconducting quadrupoles were developed at CERN but built by industry. These increased the luminosity by a factor of 6.5, resulting in a maximum luminosity of 1.4×10³² cm⁻² s⁻¹. This remained a world record until 1991, when it was broken by the Cornell electron–positron storage ring.
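The leverage of the low-beta insertions follows from the luminosity of two ribbon beams crossing at an angle α, L = I₁I₂/(e²c tan(α/2) h_eff), where h_eff is the effective beam height: the luminosity rises in direct proportion as the beams are squeezed vertically. A minimal numerical sketch – the 30 A currents and 3 mm effective height are illustrative assumptions, not measured ISR values:

```python
import math

def isr_luminosity(i1, i2, alpha_deg, h_eff_m):
    """Luminosity of two ribbon beams crossing at angle alpha:
    L = I1 * I2 / (e^2 * c * tan(alpha/2) * h_eff)."""
    e, c = 1.602e-19, 2.998e8  # elementary charge (C), speed of light (m/s)
    return i1 * i2 / (e**2 * c * math.tan(math.radians(alpha_deg) / 2) * h_eff_m)

# Illustrative: 30 A per ring, the ISR's 15 degree crossing, 3 mm effective height.
L = isr_luminosity(30, 30, 15.0, 3e-3)
print(f"L = {L * 1e-4:.1e} cm^-2 s^-1")  # convert from m^-2 to cm^-2

# Halving h_eff with a low-beta insertion doubles L: the 1/h_eff dependence
# is why squeezing the vertical beam size paid off so directly.
```

With these assumptions the formula gives a few times 10³¹ cm⁻² s⁻¹, the right order of magnitude for routine ISR running.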
The stored currents in physics runs were 30–40 A (compared with 20 A nominal); the maximum proton current that was ever stored was 57 A. Despite these high currents, the loss rates during physics runs were typically kept to one part per million per minute, which provided the desired low background environment for the experiments. Beams of physics quality could last 40–50 hours.
Because the ISR’s magnet system had a significant reserve, the beams in the two rings could be accelerated to 31.4 GeV/c by phase displacement, a technique also proposed by MURA. This consisted of moving empty RF buckets repeatedly through the stacked beam. The buckets were created at an energy higher than that of the most energetic stored particles and moved through the stack to the injection orbit. In accordance with the conservation of longitudinal phase space, the whole stack was thereby accelerated, and the magnetic field was simultaneously increased to keep the stack centred in the vacuum chamber. This novel acceleration technique required the development of a low-noise RF system operating at low voltage, together with fine control of the high-stability magnet power supplies.
The ISR was also able to store deuterons and alpha particles as soon as they became available from the PS, leading to a number of runs with p–d, d–d, p–α and α–α collisions from 1976 onwards. For CERN’s antiproton programme, a new beamline was built from the PS to Ring 2 for antiproton injection and the first p̄–p runs took place in 1981. During the ISR’s final year, 1984, the machine was dedicated to single-ring operation with a 3.5 GeV/c antiproton beam.
The low loss rates observed for the gradually rising operational currents were only achievable through a continuous upgrading of the ultra-high vacuum system, which led to a steady decrease in the average pressure (figure 4). The design values for the vacuum pressure were 10⁻⁹ torr outside the intersection regions and 10⁻¹¹ torr within them, to keep the background under control. The initial choice of a stainless-steel vacuum chamber bakeable to 300°C turned out to be the right one, but nevertheless a painstaking and lengthy programme of vacuum improvement had to be launched. The vacuum chambers were initially baked to only 200°C and had to be re-baked at 300°C and, later, at 350°C. Hundreds of titanium sublimation pumps needed to be added and all vacuum components had to be glow-discharge cleaned in a staged programme. These measures limited the amount of residual gas, and hence the production of ions in beam–gas collisions, as well as the out-gassing that occurred when positive ions impinged on the vacuum-chamber walls after acceleration through the electrostatic beam potential.
The electrons created by ionization of the residual gas were often trapped in the potential well of the coasting proton beam. This produced an undesirable focusing and coupling between the electron cloud and the beam, which led to “e–p” oscillations. The effect was countered by mounting clearing electrodes in the vacuum chambers and applying a DC voltage to suppress potential wells in the beam.
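The depth of that potential well can be estimated from the standard expression for the on-axis potential of an unneutralized round beam in a round pipe, V = (λ/4πε₀)(1 + 2 ln(b/a)), with λ the line charge density. The sketch below uses assumed beam and pipe radii (the 1 cm and 8 cm values are illustrative, not ISR specifications) to show that a 30 A coasting beam sits in a well of a few kilovolts, which sets the scale for the clearing-electrode voltages:

```python
import math

def beam_potential(current_a, beta=1.0, beam_radius_m=0.01, pipe_radius_m=0.08):
    """On-axis space-charge potential of an unneutralized round coasting beam
    in a round pipe: V = (lambda / 4 pi eps0) * (1 + 2 ln(b/a))."""
    c, eps0 = 2.998e8, 8.854e-12
    lam = current_a / (beta * c)  # line charge density in C/m
    return lam / (4 * math.pi * eps0) * (1 + 2 * math.log(pipe_radius_m / beam_radius_m))

# Illustrative: a 30 A relativistic coasting proton beam.
print(f"beam potential ~ {beam_potential(30):.0f} V")
```

A well several kilovolts deep traps ionization electrons very effectively, which is why passive pumping alone could not cure the e–p instability and dedicated clearing electrodes were needed.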
By 1973 the ISR had suffered two catastrophic events in which the beam burnt holes in the vacuum chamber. Collimation rings were then inserted into the flanges to protect the bellows. The vacuum and engineering groups also designed and produced large, thin-walled vacuum chambers for the intersection regions. The occasional collapse of such a chamber would leave a spectacular twisted sculpture and require weeks of work to clean the contaminated arcs.
While the ISR broke new ground in many ways, the most important discovery in the field of accelerator physics was that of Schottky noise in the beams – a statistical signal generated by the finite number of particles, well known to designers of electronic tubes. This shot noise has not only a longitudinal component but also a transverse component in the betatron oscillations (the natural transverse oscillations of the beam). This discovery opened new vistas for non-invasive beam diagnostics and for active systems to reduce the size and momentum spread of a beam.
The longitudinal Schottky signal made it possible to measure the current density in the stack as a function of momentum (i.e. transverse position) without perturbing the beam. These scans clearly showed the beam edges and any markers (figure 5). A marker could be created during stacking by making a narrow region of low current density, or by losses on resonances.
The transverse Schottky signals gave information about how the density of the stack varied with the betatron frequency, or “tune”, which meant that stacking could be monitored in the tune diagram and non-linear resonances could be avoided. During stacking, space–charge forces increase and change the focusing experienced by the beam. Using the Schottky scans as input, the magnet system could be trimmed to compensate the space–charge load. A non-invasive means to verify the effect of space charge and to guide its compensation had suddenly become available.
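The statistical origin of the signal is easy to demonstrate: summing the contributions of N particles with uncorrelated phases gives an rms signal that grows only as √N, not as N, which is why the Schottky signal is tiny compared with the coherent beam current yet still measurable. A minimal sketch, with particle numbers chosen purely for illustration:

```python
import random, math

def schottky_rms(n_particles, trials=200):
    """RMS of the sum of n randomly phased unit currents: it grows as sqrt(N),
    not N, because the phases are uncorrelated (shot noise)."""
    total = 0.0
    for _ in range(trials):
        s = sum(math.cos(random.uniform(0, 2 * math.pi)) for _ in range(n_particles))
        total += s * s
    return math.sqrt(total / trials)

for n in (100, 10_000):
    print(f"N = {n:6d}: rms signal ~ {schottky_rms(n):.1f}  (sqrt(N/2) = {math.sqrt(n / 2):.1f})")
```

A hundredfold increase in particle number raises the incoherent signal only tenfold, so for a real beam of ~10¹³ protons the Schottky signal sits many orders of magnitude below the DC current – detectable only with the sensitive pick-ups the ISR teams developed.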
The discovery of the transverse Schottky signals had another, arguably more important, impact: the experimental verification of stochastic cooling of particle beams. This type of cooling was invented in 1972 by Simon van der Meer at CERN. Written up in an internal note, it was at first considered a curiosity without practical application. However, Wolfgang Schnell realized its vast potential and actively looked for the transverse Schottky signals at the ISR. This was decisive for the resurrection of van der Meer’s idea from near oblivion and for its experimental proof at the ISR (figure 6). Towards the end of the ISR’s life, stochastic cooling was routinely used on antiproton beams to increase the luminosity in antiproton–proton collisions by counteracting the gradual blow-up of the antiproton beam through scattering on the residual gas as well as on resonances.
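The essence of van der Meer’s scheme can be caricatured in a few lines: a pickup measures the mean offset of a sample of particles and a kicker subtracts that mean. Because a finite sample’s mean carries real information about its members, each correction shrinks the beam slightly, and over many turns the rms size decays. The toy below ignores mixing, amplifier noise and optimal gain – all central to the real theory – and only illustrates the statistical principle; every parameter is an assumption chosen for the demonstration:

```python
import random, statistics

def cool(n_particles=1000, sample_size=50, gain=1.0, turns=3000):
    """Cartoon of transverse stochastic cooling: each 'turn', a pickup measures
    the mean offset of a random sample and a kicker subtracts it from those
    particles. The sample mean carries 1/sample_size of real information about
    each member, so the rms beam size shrinks slowly but steadily."""
    x = [random.gauss(0, 1.0) for _ in range(n_particles)]
    for _ in range(turns):
        sample = random.sample(range(n_particles), sample_size)
        mean = sum(x[i] for i in sample) / sample_size
        for i in sample:
            x[i] -= gain * mean
    return statistics.pstdev(x)

print(f"rms after cooling: {cool():.2f}  (started at 1.00)")
```

In this cartoon the variance decays by roughly a factor 1 − 1/N per turn, so after a few thousand turns the rms has fallen by a large factor – slow, incremental progress that only works because the beam is made of a finite number of particles, exactly the property the Schottky signals revealed.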
Stochastic cooling was the decisive factor in the conversion of the SPS to a p̄p collider and in the discovery there in 1983 of the long-sought W and Z bosons. This led to the awarding of the Nobel Prize in Physics to Carlo Rubbia and van der Meer the following year. The technique also became the cornerstone of the success of the more powerful Tevatron proton–antiproton collider at Fermilab. In addition, CERN’s low-energy antiproton programmes in the Low Energy Antiproton Ring and the Antiproton Decelerator, as well as similar programmes at GSI in Germany and at Brookhaven in the US, owe their existence to stochastic cooling. The extension to bunched beams and to optical frequencies makes stochastic cooling a basic accelerator technology today.
A lasting impact
With its exceptional performance, the ISR dispelled the fears that colliding beams were impractical and dissolved the reluctance of physicists to accept the concept as viable for physics. In addition to stochastic cooling, the machine pioneered and demonstrated large-scale, ultra-high vacuum systems; ultra-stable and reliable power converters; low-noise RF systems; superconducting quadrupoles; diagnostic devices such as a precise DC current transformer; and techniques such as vertically sweeping the colliding beams through each other to measure the luminosity – another of van der Meer’s ideas.
The ISR had been conceived in 1964 in an atmosphere of growing resentment against the high costs of particle physics, and it was in a similar climate in the early 1980s that the rings were closed down to provide financial relief for the new Large Electron–Positron collider at CERN. In the political climate of the 1960s, the ISR had been fought over and finally accepted as a cost-efficient gap-filler, because the financial and political conditions were not ready for a 300 GeV machine. However, had CERN built the 300 GeV accelerator instead of the ISR, the technology of hadron colliders would have been seriously delayed. Instead, the decision to build the ISR opened the door to collider physics and allowed an important expansion in accelerator technology that would affect everyone for the better, including the 300 GeV project, the p̄p project and eventually the LHC.
The committee for the ISR experimental programme – the ISRC – started its work in early 1969, with the collider start-up planned for mid-1971. Two major lines of experimental programmes emerged: “survey experiments” would aim to understand known features (in effect, the Standard Model of the time) in the new energy regime, while other experiments would aim at discoveries. This was surprisingly similar to the strategy 40 years later for the LHC. But in reality the two approaches are worlds apart.
Hadronic physics in the late 1960s was couched in terms of thermodynamical models and Regge poles. The elements of today’s Standard Model were just starting to take shape; the intermediate vector bosons (W⁺, W⁻ and W⁰, as the Z⁰ was called then) were thought to have masses in the range of a few giga-electron-volts. The incipient revolution that was to establish the Standard Model was accompanied by another revolution in experimentation. Georges Charpak and collaborators had demonstrated the concept of the multiwire proportional chamber (MWPC) just one year earlier, a stroke of genius that propelled the community from the photographic-analogue age into the digital age. Nor should the sociological factor be forgotten: small groups, beam exposures of a few days to a few weeks, and quick and easy access to the experimental apparatus all characterized the style of experimentation of the time.
These three elements – limited physics understanding, collaboration sociology and old and new experimental methods – put their stamp on the early programme. From today’s perspective, particle physics was at the dawn of a “New Age”. I will show how experimentation at the ISR contributed to the “New Enlightenment”.
1971–1974: the ‘brilliant, early phase’
Maurice Jacob, arguably one of the most influential guiding lights of the ISR programme, called this first period the “brilliant, early phase”, in reference to its rich physics harvest (Jacob 1983). The lasting contributions include: the observation of the rising total cross-section; measurements of elastic scattering; inclusive particle production and evidence for scaling; the observation of high-pT production; and the non-observation of free quarks. Several experimental issues of the period deserve particular mention.
The experimental approach matched the “survey” character of this first period. The ISR machine allowed tempting flexibility, with operation at three – later four – collider energies as well as with asymmetric beam energies. Requests for low- or high-luminosity running and special beam conditions could all be accommodated. A rapid switch-over from one experimental set-up to another at the same interaction point was also one of the guiding design principles. Notwithstanding this survey character, this early period saw several imaginative contributions to experimentation – some with a lasting influence.
The devices known today as “Roman pots” were invented at the ISR with the aim of placing detectors close to the circulating beams, a requirement for elastic-scattering experiments in the Coulomb-interference region. During beam injection and set-up, these detectors had to be protected from high radiation doses by retracting them into a “stand-by” position. The CERN-Rome group (R601, for experiment 1 at intersection 6 (I6)), in collaboration with the world-famous ISR Vacuum group, developed the solution: the detectors were housed in “pots” made from thin stainless-steel sheets, which could be remotely moved into stand-by mode or one of several operating positions. This technique has been used at every hadron collider since, including the LHC.
The first 4π detector was installed by the R801 Pisa–Stony Brook collaboration. It used elaborate scintillator hodoscopes, providing 4π coverage with high azimuthal and polar-angle granularity, well adapted to the measurement of the rising total cross-section and of short-range particle correlations. The Split-Field Magnet (SFM), ultimately installed at intersection 4 (I4), was the first general ISR facility. The SFM was groundbreaking in many ways; it was proposed by Jack Steinberger in 1969 as the strategy for exploring terra incognita at the ISR with an almost-4π magnetic facility.
The SFM’s unconventional magnet topology – two dipole magnets of opposite polarity – addressed two issues: minimizing the magnetic interference with the two ISR proton beams, and providing magnetic analysis preferentially in the forward region, the place of physics interest according to the prevailing understanding. It made daring use of the new MWPC technology for tracking, propelling this technique within a few years from 10×10 cm² prototypes to hundreds of MWPCs covering hundreds of square metres, with almost 100,000 electronic read-out channels. The SFM became operational towards the end of 1973 – a fine example of what CERN can accomplish with a meeting of minds and the right leadership. True to its mission, this facility was used by many collaborations throughout the whole life of the ISR, with the detector configuration changed or detection equipment added as necessary. The usefulness of a dipole magnetic field at a hadron collider was later to be beautifully vindicated by the magnetic spectrometer of the UA1 experiment in the 1980s.
The Impactometer was the name given by Bill Willis to a visionary 4π detector, proposed in 1972 (Willis 1972). It anticipated many physics issues and detection features that would become “household” concepts in later years. The 4π-coverage, complete particle measurements and high-rate capabilities were emphasized as the road to new physics. One novel feature was high-quality electromagnetic and hadronic calorimetry, thanks to a futuristic concept: liquid-argon calorimetry. In a similar spirit, an Aachen-CERN-Harvard-Genoa collaboration, with Carlo Rubbia and others, proposed a 4π-detector using total-absorption calorimetry but based on more conventional techniques: among the three options evaluated were iron/scintillator, water Cherenkovs and liquid-scintillator instrumentation. However, the ISRC swiftly disposed of both this proposal and the Impactometer.
The discovery of high-pT π⁰ production at rates much higher than anticipated was one of the most significant early discoveries at the ISR, and it profoundly advanced the understanding of strong interactions. Unfortunately, this physics “sensation” also proved to be an unexpected, ferocious background to other new physics. Discovered by the CERN-Columbia-Rockefeller (CCR, R103) collaboration in 1972, the high rate of π⁰s masked the search for electrons in the region of a few giga-electron-volts as a possible signal for new physics – as in, for example, the decay of an intermediate vector boson into e⁺e⁻ pairs – and ultimately prevented this collaboration from discovering the J/ψ.
The reaction to this “sensation plus dilemma” was immediate, resulting in several proposals for experiments, all of which were capable of discoveries – as their later results demonstrated. These more evolved experimental approaches brought a new level of complexity and longer lead-times from proposal to data-taking. However, the fruition of these efforts came a few years too late to make the potentially grand impact that was expected from and deserved by the ISR.
In 1973, the CCOR collaboration (CCR plus Oxford) proposed the use of a superconducting solenoid equipped with cylindrical drift chambers for tracking and lead-glass walls for photon and electron measurements (R108). The Frascati-Harvard-MIT-Naples-Pisa collaboration proposed an instrumented magnetized iron toroid for the study of muon pairs and associated hadrons. Originally intended for installation in I8 (R804), it was finally installed, for scheduling reasons, in I2 (R209). The SFM facility was complemented with instrumentation (Cherenkov counters and electromagnetic calorimetry) for electron studies, and later for charm and beauty studies.
The 1972 Impactometer proposal was followed by a reduced-scale modular proposal concentrating on e⁺e⁻ and γ detection, submitted in November 1973 by a collaboration of Brookhaven, CERN, Saclay, Syracuse and Yale. It combined liquid-argon technology for electromagnetic calorimetry with novel transition-radiation detectors for electron/hadron discrimination. (The latter consisted of lithium-foil radiators and xenon-filled MWPCs, with two-dimensional read-out, as the X-ray detectors.) The advanced technologies proposed led to the cautious approval of the detector as R806T (T for test) in June 1974, with a gradual, less than optimal build-up.
1974–1977: learning the lessons
The first “brilliant period” ended with a clarion call for the particle-physics community at large and sobering soul-searching for the ISR teams: the discovery of the J/ψ in November 1974. The subsequent period brought a flurry of activity, with the initial priority to rediscover the J/ψ at the ISR.
First was R105 (CCR plus Saclay), which employed threshold Cherenkovs and electromagnetic calorimeters, permitting electron identification at the trigger level. Second, an overwhelmingly clear physics justification for a new magnetic facility with an emphasis on high-pT phenomena, including lepton detection, emerged. Several groups, including teams from the UK and Scandinavia, were studying a facility based on a large superconducting solenoid, while a team around Antonino Zichichi explored the potential of a toroidal magnetic facility. The inevitable working group, constituted by the ISRC and chaired by Zichichi, received the remit to motivate and conceptualize a possible new magnetic facility.
With exemplary speed – January to March 1976 – the working group documented the physics case and explored magnets and instrumentation, but shied away from making a recommendation, leaving the choice between toroid and solenoid to other committees. It is a tribute to the ISRC that it made a clear recommendation for a solenoid facility with large, open structures in the return yoke for additional instrumentation (particle identification and calorimetry). The toroidal magnet geometry, while recognized as an attractively suitable magnet topology for proton–proton collider physics, was considered too experimental a concept for rapid realization. It would take another 30 years before a major toroid magnet would be built for particle physics, namely the ATLAS Muon Spectrometer Toroid.
The CERN Research Board did not endorse the ISRC recommendation, possibly being concerned – I am speculating – about the long-term impact on the ISR schedule and about the adequacy of support among the user community. Despite this negative outcome, the working group had a significant influence on CERN’s research agenda. It provided an assessment of state-of-the-art collider experimentation and many of its members would use this work to shape the UA1 and UA2 facilities for the Sp̄pS programme, which was proposed at about the same time.
Within weeks of the negative decision from the Research Board, some key members of the working group banded together and submitted a new proposal for a fully instrumented facility built around Tom Taylor’s innovative Open Axial Field Magnet (warm Helmholtz coils with conical poles), as the basis for the Axial Field Spectrometer (AFS). The time was just right: it took only three months from the submission of the proposal by the CERN-Copenhagen-Lund collaboration in January 1977 to the ISRC recommendation and Research Board approval as R807 in April, thanks to the strong and courageous support of the then Research Director, Paul Falk-Vairant, and of the committees.
The end of this period also coincided with a turning point in our understanding of hadronic interactions. The early view of “soft” hadronic interactions, limited to low-pT phenomena, shaped the initial programme. Ten years later, hadrons were still complicated objects but the point-like substructure had been ascertained. Hard scattering became the new probe and simplicity was found at large pT with jets, photons and leptons. This marked a remarkable “about-turn” in our approach towards hadron physics, which found its expression in the second half of the ISR experimentation and exploitation.
1977–1983: maturity
The shock of 1974, followed by the debates on physics and detector facilities in 1976, focused the minds of the various players. Experimental programmes were being prepared for (relatively) rare, high-pT phenomena in a variety of manifestations: leptons, photons, charmed particles, intermediate vector bosons with a sensitivity reaching beyond 30 GeV/c², and jets. This strategy was vindicated by the discovery at Fermilab of the Υ in July 1977. This was yet another cruel blow to the ISR, particularly considering that the R806 collaboration saw the first evidence for the Υ at the ISR in November 1977 (Cobb et al. 1977).
The versatility of the ISR and the incipient Sp̄pS development also brought proton–antiproton collisions and light-ion (d, α) physics to the fore. A multifaceted and promising programme – confirmed by the 2nd ISR Physics Workshop in September 1977 – was being put in place. By early 1978, the efforts started in 1973 and 1974 were bearing their first fruits: the R108 collaboration reported its first results at the 1978 Tokyo Conference; the R209 collaboration had its experiment completed by the end of 1977; R806 had been completed by early 1976; and R807 was building up to a first complete configuration towards the end of 1979. A walk round the ISR ring would have shown the diversity of the approaches being adopted:
• I1 was home for the R108 collaboration, using an advanced, thin (1 radiation length) 1.5 T superconducting solenoid, which would become its workhorse for the subsequent six years. This was instrumented with novel – at the time – cylindrical drift chambers inside and lead-glass electromagnetic-shower detectors outside. Several upgrades brought higher sensitivity through the addition of shower counters inside the solenoid, giving full azimuthal coverage both for charged particles and for photons, as well as higher collision rates, provided by the inventive ISR teams in the form of warm, low-beta quadrupoles for stronger beam focusing.
• I2 was truly complementary to I1, with the R209 (CERN-Frascati-Harvard-MIT-Napoli-Pisa) collaboration betting on muons and magnetized steel toroids and aiming at a dimuon mass sensitivity beyond 30 GeV/c². This was combined with a large-acceptance hadron detector, based on scintillator hodoscopes, for hadron-correlation studies.
• I4 was where the SFM showed its strength as a “user facility”, accommodating, at the end of the ISR era, its 22nd experiment, R422. The open magnet structure invited many groups to add equipment for dedicated low- to high-pT physics, with remarkable contributions to charm physics and candidates for Λb.
• I6 explored physics in the forward region with an unusual magnet, known as the “Lamp Shade”. It also had considerable emphasis on charm particles.
• I7 was reserved for “exotica”. In the late 1970s, a group operated a streamer chamber there as a rehearsal for what would later become UA5 at the Sp̄pS. The last experiment to take data – after the official closure of the ISR on 23 December 1983 – was R704, which used 3.75 GeV/c antiprotons colliding with an internal H₂ gas-jet to perform charmonium spectroscopy.
• I8 became the home of R806, whose finest hour was the discovery of prompt γ production, the golden test-channel for perturbative QCD. It entered into a rich symbiotic relationship with the nascent AFS (R807) between the end of 1979 and late 1981, by when all of R807 had been installed except for the uranium/scintillator calorimeter. After a considerable struggle to obtain the uranium plates, this advanced (and adventurous) hadron calorimeter was finally completed by early 1982. One of the significant results obtained with it was the first measurement of the jet-production cross-section at ISR energies, in 1982, consistent with QCD predictions. When the calorimeter coverage was closed over the full 2π, R806 finally had to yield its place to two novel photon detectors (NaI crystals with photodiode read-out), provided by the Athens-Brookhaven-CERN-Moscow collaboration (R808) and placed on opposite walls of the uranium/scintillator calorimeter. In its final years, the ISR machine teams integrated superconducting low-beta quadrupoles, providing peak luminosities in excess of 10³² cm⁻² s⁻¹ – a superb rehearsal for the LHC.
The final year, 1983, saw a valiant struggle between the physics communities, hell-bent on extracting the most physics from this unique machine – proton–proton, light ions, proton–antiproton operation with a total of almost 5000 hours of physics delivered – and a sympathetic, yet firm director-general, Herwig Schopper, who presided over the demise of the ISR. In the last session of the ISRC he not only paid tribute to the rich physics harvest but also emphasized the important and lasting contribution of the ISR to experimentation at colliders – or, in the words of one of today’s most brilliant theorists, Freeman Dyson: “New directions in science are launched by new tools more often than by new concepts. The effect of a concept-driven revolution is to explain old things in new ways. The effect of a tool-driven revolution is to discover new things that have to be explained!”
• I am grateful to M Albrow, G Bellettini, L Camilleri and W Willis for discussions and careful reading.