Technological spinoffs from accelerators

Accelerator technologies

Particle-accelerator performance depends critically on the underlying technology. Thus the construction of larger, more powerful and more sophisticated accelerators has resulted in technological progress, yielding applications in other areas.

The basic particle-accelerator technologies are electrical and radiofrequency engineering, which provide the powerful electric fields needed to accelerate the particles and the magnetic fields needed to control the beams.

Superconductivity, with its suppression of ohmic losses, makes the generation of these fields more efficient. With present superconducting materials requiring extremely low temperatures, cryogenics has also become a key accelerator technology.

Beams also need a high vacuum to minimize unwanted collisions. Mechanical engineering appears in the design of nearly every component, while another essential ingredient is the particle source.

Finally, accelerators have led to the development of a variety of monitoring and controlling techniques, both for their construction (high-precision surveying) and for their operation.

Superconductivity

Large-scale applications of superconductivity have been pioneered by particle accelerator engineers. Improved accelerator performance demanded higher magnetic and electric fields while keeping energy consumption within acceptable limits.

This has stimulated the development of superconducting dipole and quadrupole magnets and of superconducting radiofrequency accelerating cavities. The first superconducting cables (for bubble chambers and nuclear magnetic resonance – NMR – spectrometers) were capable only of d.c. operation. Accelerator requirements seeded the development of superconducting cables for a.c. operation, at the heart of all major ongoing applications. These cables are made of intrinsically stable conductors – thin superconducting strands twisted together and embedded in a copper matrix – suitable for ramped fields.

The implementation of a fusion reactor, based on either magnetic or inertial confinement, will probably rely on superconductivity. In the case of magnetic confinement, a net energy gain can only be achieved if the confining magnetic field does not require excessive power. Furthermore, the high magnetic fields permitted by superconductivity may allow the design of more compact machines. ITER, the future large international research tokamak, will be designed around superconducting magnets.

Research on inertial confinement fusion explores several ways of imploding the fuel pellet. Particle beam fusion systems are based on ideas resulting directly from particle physics research, while the most promising laser system, the free electron laser, also derives from particle accelerator technology.

The output of electric power generators has grown considerably in recent decades, performance also having been boosted by improved cooling, allowing higher current densities. However this also produces a rise in ohmic losses and a corresponding reduction in efficiency, prompting a closer look at superconductivity.

Transmission of electric power is another possible application of superconductivity. The prospect of replacing the vast electrical highways feeding large cities by underground superconducting cables is an attractive proposition. Successful tests at Brookhaven National Laboratory in the 1970s of a 115 m-long flexible twin-conductor cable carrying 1000 MVA of three-phase 60 Hz power have opened the way to longer transmission lines. Increased environmental consciousness will strongly encourage the development of compact underground power lines.

Another potentially far-reaching application of superconductivity in power engineering is the large-scale storage of electricity. Large coal-fired and nuclear power plants are designed to operate close to full capacity; their efficiency and expected lifetime are decreased significantly if they have to shed large fractions of their output. Electricity demand, on the other hand, shows large seasonal, weekly and daily variations. A variety of technologies, ranging from gas turbines to pumped hydroelectricity, is currently used to handle these variations. The desire to reduce unwanted gas emissions and the difficulty of finding suitable hydrostorage sites make new techniques such as SMES (Superconducting Magnetic Energy Storage) attractive.

One of its main advantages is that energy is stored in electrical form and requires no intermediate conversion from or into thermal or kinetic energy. Reference systems for 5 GVA have been designed in the US and Japan. The feasibility of the concept has been successfully demonstrated by a 30 MJ system installed in 1982 to stabilize electric power transmission between the Pacific Northwest and Southern California.
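
As a rough sketch of the scale involved (the coil current below is a hypothetical illustration, not a quoted design figure), the energy stored in a superconducting coil follows directly from its inductance and current:

```latex
% Energy stored in a coil of inductance L carrying current I:
E = \tfrac{1}{2}\,L I^{2}
% Illustration with hypothetical numbers: holding the 30 MJ mentioned above
% at a current of 5 kA would require
% L = 2E/I^{2} = 2 \times (3\times10^{7}) / (5\times10^{3})^{2} = 2.4 \text{ H}.
```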

High speed ground transportation could also become a large-scale application of superconductivity. Prototypes have been demonstrated in Germany and Japan. A 10 ton Japanese test vehicle, levitated by eddy currents induced in a conducting rail by magnets moving above it, has exceeded 500 km/h. Superconducting coils fulfil the three functions of suspending, guiding and propelling the vehicle.

Research on the application of superconducting magnetic coils for marine propulsion is also underway in Japan and a prototype boat has recently been tested successfully.

Another industrial application of superconductivity is magnetic separation for mineral and scrap metal processing – requiring high magnetic forces over large volumes.

Eddy currents induced by superconducting magnets could slow convection currents during the crystallization of silicon for semiconductor production. This would lead to more homogeneous crystals and open the way to larger single-chip devices.

Another avenue worth exploring is ultra-fast computers based on the rapid switching of Josephson junctions.

Superconducting magnets are used all over the world for the characterization and identification of chemical compounds by nuclear magnetic resonance spectroscopy. While some 10 years ago typical systems used magnets in the 5 Tesla range, commercial devices now use magnets operating near 10 Tesla, with correspondingly increased performance.

Superconductivity is also finding applications in medical diagnosis through magnetic resonance imaging scanners, less invasive than classical X-ray diagnosis. Again, performance improvements would follow from higher field magnets.

The high temperature superconductors discovered only a few years ago have not yet found their way into accelerator technology as they can neither be made into high density current-carrying cables for magnet coils nor deposited on large surfaces for radiofrequency cavities. However, this rapidly developing field is being closely monitored. Any materials breakthrough would open the way to wider applications.

Cryogenics

Cryogenics, the technology of low temperatures, goes hand in hand with superconductivity. Classical superconductors operate at a few degrees kelvin, provided by liquid helium cooling. Physicists had become familiar with large-scale low temperature work through the liquid hydrogen bubble chamber, one of the most widely used detectors of the 1960s and early 70s.

Most superconducting magnets now use niobium-titanium wire and operate at temperatures close to 4.2 K, the boiling point of helium. At this temperature, the field achievable with NbTi is limited to about 6.5 Tesla.

In the quest for higher magnetic fields, superconductors with better magnetic properties, such as niobium-tin, are troublesomely brittle. For its next accelerator project, the LHC proton collider, CERN thus prefers to exploit the improved NbTi performance at lower temperatures (2 K).

Such temperatures offer attractive features: liquid helium becomes superfluid, with an enormous increase in thermal conductivity and a sharp drop in viscosity, both of considerable practical benefit.

Cryogenics is applied in other fields, for instance in vacuum and space science and for sensitive instrumentation such as low noise amplifiers and infra-red night vision devices.

The NMR superconducting magnets used both in the laboratory and for medical diagnostics employ cryogenics technology developed for accelerator magnets. What used to be delicate, fragile and complex systems requiring continuous attention have today become a reliable technology.

Another cryogenics outlet is the production, through liquefaction, of extremely pure gases, useful in industrial processes (e.g. in the semiconductor industry) requiring extreme cleanliness or purity.

Vacuum and surface science

Particle acceleration requires a good vacuum to avoid scattering the beam on the residual gas. Pressures in the region of 10⁻⁶–10⁻⁷ Torr are generally sufficient for synchrotrons, where acceleration lasts only a few seconds. However, storage rings and colliders, which must hold beams over several days, have more critical requirements, calling for the 10⁻¹⁰–10⁻¹¹ Torr range. Even lower pressures are needed near the detectors to reduce background.
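
The seconds-versus-days argument can be made quantitative with a simplified estimate (a single effective loss cross-section σ is assumed here; real loss rates involve several scattering processes):

```latex
% Beam-gas lifetime of relativistic particles in residual gas of number
% density n (ideal-gas law), with effective loss cross-section sigma:
\tau \approx \frac{1}{n\,\sigma\,c}, \qquad n = \frac{P}{k_{B} T}
% The lifetime scales inversely with pressure: lowering P by four orders
% of magnitude stretches a beam lifetime of seconds into one of days.
```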

The valuable experience at CERN’s Intersecting Storage Rings (ISR) brought considerable progress in this field. The ISR was the first large machine to operate using the advanced technology of ultra-high vacuum (UHV) systems, eventually reaching 10⁻¹² Torr. This catalysed the vacuum industry to develop UHV components (e.g. sputter-ion pumps, all-metal valves, seals, gauges).

Equally important for UHV systems is the cleanliness of all surfaces. Techniques for cleaning and preparing surfaces – chemical treatments, bakeout and glow discharge to reduce gas desorption – were developed.

The construction of CERN’s Large Electron Positron collider (LEP) further stimulated progress in vacuum technology. Although its vacuum requirement is less stringent than that of the ISR, evacuating a 27-kilometre ring posed special problems. This led to the development of a linear non-evaporable getter (NEG) pump using an aluminium-zirconium alloy bonded in powder form on a constantan ribbon. Another development was the all-aluminium vacuum chamber, which has better thermal conductivity and lower residual radioactivity than stainless steel and can be extruded into complicated shapes.

Other vacuum components have been developed following accelerator experience, particularly where mechanical motion under vacuum is needed. As pressure falls, lubrication is inhibited and friction increases dramatically. Ingenious solutions had to be found for fast closing valves, beam diagnostic devices or shutters and movable sensing electrodes and deflectors.

Vacuum seals have also undergone considerable improvement. Elastomers can sustain neither high radiation nor bakeout at 300–400 °C, and metal joints have now generally been introduced.

This progress in vacuum technology is finding direct applications in space science and fusion test facilities, and in industry, for example in the technologies for semiconductor manufacture.

Even when extremely low pressures are not required, for example in surgery, the pharmaceuticals industry or in food preparation and conservation, the extreme cleanliness of UHV systems and their reliability have brought benefits. Cleanliness and special surface conditions are essential for quality and performance in many high technology areas. Vacuum and surface technology therefore play an increasingly important role.

Particle sources

Accelerators require intense sources of electrons and ions. An important application of such sources is the implantation of ions for semiconductor circuit elements. Ion beams are also used in material preparation such as pre-deposition surface cleaning/conditioning and low energy Ion Beam Assisted Deposition (IBAD). IBAD films have remarkable properties (adhesion, hardness, optical transmission,…). Another important industrial application is the electron beam technique used for precision welding.

Finally, the classic application is the range of electron tubes used for telecommunications, broadcasting and radar, which derive from the cathode-ray tubes – proto-accelerator devices developed at the turn of the century for basic physics research.

  • This article was adapted from text in CERN Courier vol. 34, May 1994, pp6–10.

Computing in high energy physics

David Williams

Computing in high energy physics has changed over the years from being something one did on a slide-rule, through being a necessary evil on early computers, to the position today where computers permeate all aspects of the subject, from control of the apparatus to theoretical lattice gauge calculations.

The state of the art, as well as new trends and hopes, were reflected in this year’s ‘Computing In High Energy Physics’ conference held in the dreamy setting of Oxford’s spires. The 260 delegates came mainly from Europe, the US, Japan and the USSR. Accommodation and meals in the unique surroundings of New College provided a special atmosphere, with many delegates being amused at the idea of a 500-year-old college still meriting the adjective ‘new’.

The conference aimed to give a comprehensive overview, entailing a heavy schedule of 35 plenary talks plus 48 contributed papers in two afternoons of parallel sessions. In addition to high energy physics computing, a number of papers were given by experts in computing science, in line with the conference’s aim – ‘to bring together high energy physicists and computer scientists’.

The complexity and volume of data generated in particle physics experiments are the reason why the associated computing problems are of interest to computer science. These themes were covered by David Williams (CERN) and Louis Hertzberger (Amsterdam) in their keynote addresses.

The task facing the experiments preparing for CERN’s new LEP electron–positron collider is enormous by any standards, but a lot of thought has gone into their preparation. Getting enough computer power is no longer thought to be a problem, but storing the seven terabytes of data per experiment per year makes computer managers nervous, even with the recent installation of IBM 3480 cartridge systems.
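
A back-of-the-envelope count shows why (the ~200 MB figure is the nominal capacity of an early 3480 cartridge, supplied here as an assumption rather than quoted from the conference):

```python
# Rough scale of the LEP storage problem (illustrative arithmetic only).
DATA_PER_EXPERIMENT_TB = 7   # expected data volume per experiment per year
CARTRIDGE_MB = 200           # assumed capacity of an early IBM 3480 cartridge

cartridges = DATA_PER_EXPERIMENT_TB * 1_000_000 / CARTRIDGE_MB
print(f"~{cartridges:,.0f} cartridges per experiment per year")  # ~35,000
```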

With the high interaction rates already achieved at the CERN and Fermilab proton–antiproton colliders and orders of magnitude more to come at proposed proton colliders, there are exciting areas where particle physics and computer science could profitably collaborate. A key area is pattern recognition and parallel processing for triggering. With ‘smart’ detector electronics this ultimately will produce summary information already reduced and fully reconstructed for events of interest.

Is the age of the large central processing facility based on mainframes past? In a provocative talk Richard Mount (Caltech) claimed that the best performance/cost solution was to use powerful workstations based on reduced instruction set computer (RISC) technology and special purpose computer-servers connected to a modest mainframe, giving maybe a saving of a factor of five over a conventional computer centre.

Networks bind physics collaborations together, but they are often complicated to use and there is seldom enough capacity. This was well demonstrated by the continuous use of the conference electronic mail service provided by Olivetti-Lee Data. The global village nature of the community was evident when the news broke of the first Z particle at Stanford’s new SLC linear collider.

The problems and potential of networks were explored by François Fluckiger (CERN), Harvey Newman (Caltech) and James Hutton (RARE). Both Fluckiger and Hutton explained that the problems are both technical and political, but progress is being made and users should be patient. Network managers have to provide the best service at the lowest cost. HEPNET is one of the richest structures in networking, and while OSI seems constantly to remain just around the corner, a more flexible and user-friendly system will emerge eventually.

Harvey Newman (Caltech) is not patient. He protested that physics could be jeopardized by not moving to high speed networks fast enough. Megabit per second capacity is needed now and Gigabit rates in ten years. Imagination should not be constrained by present technology.

This was underlined in presentations by W. Runge (Freiburg) and R. Ruehle (Stuttgart) of the work going on in Germany on high bandwidth networks. Ruehle concentrated on the Stuttgart system providing graphics workstation access through 140 Mbps links to local supercomputers. His talk was illustrated by slides and a video showing what can be done with a local workstation online to 2 Crays and a Convex simultaneously! He also showed the importance of graphics in conceptualization as well as testing theory against experiment, comparing a computer simulation of air flow over the sunroof of a Porsche with a film of actual performance.

Computer graphics for particle physics attracted a lot of interest. David Myers (CERN) discussed the conflicting requirements of software portability versus performance and summarized with ‘Myers’ Law of Graphics’ – ‘you can’t have your performance and port it’. Rene Brun (CERN) gave a concise presentation of PAW (Physics Analysis Workstation). Demonstrations of PAW and other graphics packages such as event displays were available at both the Apollo and DEC exhibits during the conference week. Other exhibitors included Sun, Meiko, Caplin and IBM. IBM demonstrated the power of relational database technology using the online Oxford English Dictionary.

Interactive data analysis on workstations is well established and can be applied to all stages of program development and design. Richard Mount likened interactive graphics to the ‘oscilloscope’ of software development and analysis.

Establishing a good data structure is essential if flexible and easily maintainable code is to be written. Paolo Palazzi (CERN) showed how interactive graphics would enhance the already considerable power of the entity-relation model as realized in ADAMO. His presentation tied in very well with a fascinating account by David Nagel (Apple) of work going on at Cupertino to extend the well-known Macintosh human/computer interface, with tables accessed by the mouse, data highlighted and the corresponding graphical output shown in an adjacent window.

The DEC demonstration

The importance of the interface between graphics and relational databases was also emphasized in the talk by Brian Read (RAL) on the special problems faced by scientists using database packages, illustrated by comparing the information content of atmospheric ozone concentration from satellite measurements in tabular and graphical form – ‘one picture is worth a thousand numbers’.

The insatiable number-crunching appetite of both experimental and theoretical particle physicists has led to many new computer architectures being explored. Many of them exploit the powerful new (and relatively cheap) RISC chips on the market. Vector supercomputers are very appropriate for calculations like lattice gauge theories but it has yet to be demonstrated that they will have a big impact on ‘standard’ codes.

An indication of the improvements to be expected – perhaps a factor of five – came in the talk by Bill Martin (Michigan) on the vectorization of reactor physics codes. Perhaps more relevant to experimental particle physics are the processor ‘farms’ now well into the second generation.

Paul Mackenzie (Fermilab) and Bill McCall (Oxford) showed how important it is to match the architecture to the natural parallelism of a problem and how devices like the transputer enable this to be done. On a more speculative note Bruce Denby (Fermilab) showed the potential of neural networks for pattern recognition. They may also provide the ability to enable computers to learn. Such futuristic possibilities were surveyed in an evening session by Phil Treleavan (London). Research into this form of computing could reveal more about the brain as well as help with the new computing needs of future physics.
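
For readers unfamiliar with the idea, a single artificial neuron reduces a pattern-recognition decision to a weighted threshold test. The sketch below is purely illustrative (toy weights and inputs, not Denby’s actual network):

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: a weighted sum squashed through a sigmoid."""
    activation = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-activation))

# Toy decision: output near 1 when both (hypothetical) detector hits are present.
print(neuron([1.0, 1.0], weights=[2.0, 2.0], bias=-3.0))  # ~0.73: "track-like"
print(neuron([1.0, 0.0], weights=[2.0, 2.0], bias=-3.0))  # ~0.27: "noise-like"
```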

The importance of UNIX

With powerful new workstations appearing almost daily and novel architectures in vogue, a machine-independent operating system is obviously attractive. The main contender is UNIX. Although nominally machine independent, UNIX comes in many implementations, and its style is very much that of the early 70s, with cryptic command acronyms – few concessions to user-friendliness! However, it has many powerful features and physicists will have to come to terms with it to exploit modern hardware.

Dietrich Wiegandt (CERN) summarized the development of UNIX and its important features – notably the directory tree structure and the ‘shells’ of command levels. An ensuing panel discussion chaired by Walter Hoogland (NIKHEF) included physicists using UNIX and representatives from DEC, IBM and Sun. Both DEC and IBM support the development of UNIX systems.

David McKenzie of IBM believed that the operating system should be transparent and looked forward to the day when operating systems are ‘as boring as a mains wall plug’. (This was pointed out to be rather an unfortunate example since plugs are not standard!)


Two flavours of UNIX are available – the Open Software Foundation’s version one (OSF/1) and UNIX International’s System V release four (SVR4). The two implementations overlap considerably and a standard version will emerge through user pressure. Panel member W. Van Leeuwen (NIKHEF) announced that a particle physics UNIX group had been set up (contact HEPNIX at CERNVM).

Software engineering became a heated talking point as the conference progressed. Two approaches were suggested: Carlo Mazza, head of the Data Processing Division of the European Space Operations Centre, argued for vigorous management in software design, requiring discipline and professionalism, while Richard Bornat (‘SASD – All Bubbles and No Code’) advocated an artistic approach, likening programming to architectural design rather than production engineering. A. Putzer (Heidelberg) replied that experimenters who have used software engineering tools such as SASD would use them again.

Software crisis

Paul Kunz (SLAC) gave a thoughtful critique of the so-called ‘software crisis’, arguing that code does not scale with the size of the detector or collaboration. Most detectors are modular and so is the code associated with them. With proper management and quality control good code can and will be written. The conclusion is that both inspiration and discipline go hand in hand.

A closely related issue is that of verifiable code – being able to prove that the code will do what is intended before any executable version is written. The subject has not yet had much impact on the physics community and was tackled by Tony Hoare and Bernard Sufrin (Oxford) at a pedagogical level. Sufrin, adopting a missionary approach, showed how useful a mathematical model of a simple text editor could be. Hoare demonstrated predicate calculus applied to the design of a tricky circuit.

Less high technology was apparent at the conference dinner in the candle-lit New College dining hall. Guest speaker Geoff Manning, former high energy physicist, one-time Director of the UK Rutherford Appleton Laboratory and now Chairman of Active Memory Technology, believed that physicists have a lot to learn from advances in computer science but that fruitful collaboration with the industry is possible. Replying for the physicists, Tom Nash (Fermilab) thanked the industry for their continuing interest and support through joint projects and looked forward to a collaboration close enough for industry members to work physics shifts and for physicists to share in profits!

Summarizing the meeting, Rudy Bock (CERN) highlighted novel architectures as the major area where significant advances have been made and will continue to be made for the next few years. Standards are also important provided they are used intelligently and not just as a straitjacket to impede progress. His dreams for the future included neural networks and the wider use of formal methods in software design. Some familiar topics of the past, including code and memory management and the choice of programming language, could be ‘put to rest’.

The microprocessor boom

A microprocessor

During the past few years, electronic circuitry techniques have been developed which enable complex logic units to be produced as tiny elements or ‘chips’. These units are now mass-produced, and are available relatively cheaply to anyone building data processing equipment.

Just a few years ago, the first complete processing unit on a single chip was produced. Now micro logic elements can be combined to provide micro data processing systems whose capabilities in certain respects can rival those of more conventional computers. Commercially available microcomputers are used widely in many fields.

Where an application requires special capabilities, it is preferable to take the individual micro logic units and wire them together on a printed circuit board to provide a tailor-made processing unit. If there is sufficient demand for the perfected design, the printed circuit board stage can subsequently be dispensed with and the processor mass-produced by large-scale integration (LSI) techniques as a single microprocessor.

With these processing units, there generally is a trade-off between speed and flexibility, the ultimate in speed being a hard-wired unit which is only capable of doing one thing. Flexibility can be achieved through programmable logic, but this affects the overall speed.

Programming micros is difficult, but one way of sidestepping the problem would be to design a unit which emulates a subset of an accessible mainframe computer. With such an emulator, programs could be developed on the main computer and transferred to the micro once they have reached the required level of reliability. This could result in substantial savings in program development time. In addition, restricting the design to a subset of the mainframe architecture results in a dramatic reduction in cost.

High energy physics, which has already amply demonstrated its voracious appetite for computer power, could also soon cash in on this microcomputer boom and produce its own ‘brand’ of custom-built microprocessors.

According to Paolo Zanella, Head of CERN’s Data Handling Division, now is the time to explore in depth the uses of microprocessors in high energy physics experiments. If initial projects now under way prove to be successful, the early 1980s could see microprocessors come into their own.

One of the biggest data processing tasks in any physics experiment is to sift through the collected signals from the various detecting units to reject spurious information and separate out events of interest. Therefore to increase the richness of the collected data, triggering techniques are used to activate the data collection system of an experiment only when certain criteria are met.

Even with the help of this ‘hardwired’ selection, a large proportion of the accumulated data has to be thrown away, often after laborious calculations. With experiments reaching for higher energies where many more particles are produced, and at the same time searching for rarer types of interaction, physicists continually require more and more computing power.

Up till now, this demand has had to be met by bringing in more and bigger computers, both on-line at the experiments and off-line at Laboratory computer centres. With the advent of microprocessors, a solution to this problem could be in sight. Micros could be incorporated into experimental set-ups to carry out a second level of data selection after the initial hard-wired triggering – an example of the so called ‘distributed processing’ approach where computing power is placed as far upstream as possible in the data handling process. In this way the demand on the downstream central computer would be reduced, and the richness of the data sample increased.

The micros would filter the readout in the few microseconds before the data is transferred to the experimental data collection system. Zanella is convinced that this could significantly improve the quality of the data and reduce the subsequent off-line processing effort to eliminate bad triggers.
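
In outline, such a second-level filter is simply a programmable predicate applied to each event before it is recorded. The sketch below is schematic (invented event format and thresholds; a real filter must run in the few microseconds quoted above, which rules out anything this leisurely):

```python
# Schematic second-level event filter (hypothetical format and cuts).
def second_level_filter(event):
    """Accept an event only if it passes software cuts applied
    after the hard-wired first-level trigger."""
    if event["total_energy"] < 10.0:   # reject soft interactions (arbitrary cut)
        return False
    if event["n_tracks"] < 2:          # require at least two candidate tracks
        return False
    return True

events = [
    {"total_energy": 25.3, "n_tracks": 4},   # interesting
    {"total_energy": 3.1, "n_tracks": 7},    # soft: rejected
]
kept = [e for e in events if second_level_filter(e)]
print(f"kept {len(kept)} of {len(events)} events")  # kept 1 of 2
```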

As well as being used in the data collection system, micros would also be useful for control and monitoring functions. The use of off-the-shelf microcomputers in accelerator control systems, for example, is already relatively widespread. Some limited applications outside the control area are already being made in experiments, a notable example being the CERN/Copenhagen/Lund/Rutherford experiment now being assembled at the CERN Intersecting Storage Rings.

Microcomputer projects are now being tackled at several Laboratories. At CERN three projects are under way in the Data Handling Division. Two of these are programmable emulators (one being based on the IBM 370/168 and the other on the Digital Equipment PDP-11), while the third is a very fast microprogrammable unit called ESOP.

High energy physics still has a lot to learn about microprocessor applications, and there is some way to go before their feasibility is demonstrated and practical problems, such as programming, are overcome.

However this year could see some of these initial projects come to fruition, and the early 1980s could live up to Zanella’s expectations as the time when the microprocessor becomes a routine part of the high energy physicists’ toolkit. 

400 GeV machine vacuum and r.f. progress

Things are obviously hotting up in the construction of the 400 GeV proton synchrotron, the SPS. With completion of the ring not much more than a year away, there is new information every month concerning the progress of installation. This time the vacuum system and the radio-frequency acceleration system have passed two important milestones.

Assembly and testing on the vacuum chamber and the pumping stations in the SPS tunnel began last November and the work has continued at about the rate at which the magnets have been installed – 32 m per day, equivalent to one half-period. By now a tenth of the full system, including beam transfer lines, has been evacuated and held under vacuum for a month.

At the beginning of February, a 550 m length of the vacuum chamber in sextant 3 was pumped down. The pressure was taken to the region of 10⁻⁵ torr by three roughing pump stations fitted with vane and turbo-molecular pumps. At this stage, the control computer in auxiliary building No. 3 brought in forty-one ion pumps, each with a capacity of 25 l/s, distributed along the length of the vacuum pipe.

Reducing the pressure from atmospheric to the nominal operating pressure of the SPS – 3 × 10⁻⁷ torr – took sixteen hours in this first pump down. A pressure of 1 × 10⁻⁷ torr was reached in forty-five hours.

It was then necessary to open up the chamber. The second pump down reached the nominal pressure after only five hours. In March, a further 420 m section was pumped with the same success. After a month under vacuum, this 970 m length is at 1 × 10⁻⁸ torr.

Two sections of one of the r.f. accelerating cavities have now been installed in the SPS tunnel and have successfully undergone vacuum tests. Each cavity is some 20 m long with five tank sections and the machine will have two cavities located in straight section No. 3.

The five sections for the first cavity are now at CERN. Some work has to be carried out on them before they are taken down to the tunnel – the trickiest job being the fitting of the drift tubes. There are eleven in each section and they have to be aligned with great precision. Allowance must be made for any irregularity in the shape of the tank and the length of the drift tube bars has to be adjusted before they are welded and brazed to the mounting blocks which make contact with the tank walls.

The coaxial lines and couplers which feed the power to the cavities in the tunnel are already in place. The lines, some 80 m long, go up to the power amplifiers in auxiliary building No. 3. Cooling pipes are also installed as are the terminating loads which absorb the r.f. power once it has passed through the cavities.

All that remains to be done in the tunnel is to install and connect up the tank sections. They must be aligned very accurately so that the intermediate seals remain completely vacuum-tight and allow the cavity’s 500 kW of r.f. power at 200 MHz to be properly conducted all the way around the tank. The sections rest on special supports which are designed to absorb any distortion caused by temperature rise in the cavities during operation.

The last three sections of the first cavity will be taken down to the tunnel at the rate of one every three weeks and the cavity will be completed towards the middle of June. The second cavity will be installed in Autumn. 

  • This article was adapted from text in CERN Courier vol. 15, May 1975, pp156–158

Much ado about nothing

The vacuum system of the CERN Intersecting Storage Rings differs from those of typical particle accelerators in one vital respect: the pressure has to be four to five orders of magnitude lower. This requirement can be readily understood in terms of the ratio of the times the particles spend circulating – of the order of one second in an accelerator and, typically, one day (about 10⁵ seconds) in the ISR. It would be an exaggeration to say that the problem of attaining this vacuum was more difficult in the same ratio, but it was considerably more difficult and involved many basically different techniques. Some of these were known on a small scale from the laboratory; others had to be developed.

A major triumph of the ISR vacuum system has been the successful marrying of many hitherto specialised laboratory techniques into one very large and very complex system without loss of reliability or performance. It is still not unusual to find ultra-high vacuum laboratories which have difficulty in working at 10⁻¹¹ torr – in the ISR there are hundreds of metres at this pressure, and soon it is expected to extend to the full 2 kilometres of the rings.

This article will try to sketch some of the problems encountered in meeting the requirements of the vacuum system and how the applied research in this field led to their solution.

Sources of gas

The pressure in a vacuum system, in the simplest analysis, is given by the balance between the residual gas inflow rate and the exhaust rate. The latter, determined by the size and speed of the vacuum pumps, is limited by available space and cost. The former is the sum of several sources including leaks from the surrounding atmosphere, desorption of gas which has been adsorbed on the interior surface of the vacuum chamber and the permeation of gas through the chamber material itself.
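
In symbols, this simplest analysis is just a balance equation:

```latex
% Equilibrium pressure as the balance of gas load against pumping speed:
P = \frac{Q}{S}
% Q: total gas influx (torr.litre/s, summed over leaks, desorption and
% permeation); S: effective pumping speed (litre/s). With S capped by
% space, cost and the chamber's conductance, lowering P means attacking Q.
```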

Assuming that all leaks have been eliminated – in itself not a trivial problem, since these may range from leaky joints to microscopic pores via slag inclusions in the chamber material – and that surface desorption has been reduced to a negligible value by in situ bakeout of the vacuum system at 300 °C, there remains what might appear to be the negligible possibility of gas permeating through the metal chambers. In fact, this constitutes the major limitation in a vacuum system such as the ISR, where the available pumping speed is severely restricted by the low conductance of the chamber.

The chamber material is a nitrogen-enriched austenitic stainless steel chosen on the basis of mechanical strength, low permeability, good vacuum properties, etc. Careful measurements showed that this material, even after an in situ bakeout at 300 °C, was releasing hydrogen gas at a rate of about 3 × 10⁻¹² torr·litre per second per cm² (equivalent to about 10⁸ hydrogen molecules per second per cm²). The measurements also showed that this hydrogen appeared to be diffusing out of the bulk of the material (rather than desorbing from the surface), and the constancy of the rate over long times suggested a virtually infinite reservoir of hydrogen. This was confirmed by chemical analysis, which showed the hydrogen impurity to be about 0.001%, or 10¹⁹ molecules per cm³ of steel.
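
The quoted conversion is a piece of ideal-gas arithmetic that can be checked directly (room temperature assumed):

```python
# Verify: 3e-12 torr·litre/s/cm² of hydrogen ~ 1e8 molecules/s/cm².
K_B = 1.381e-23     # Boltzmann constant, J/K
TORR = 133.3        # pascal per torr
T = 293.0           # assumed gas temperature, K

outgassing = 3e-12                      # torr·litre per second per cm²
pv_rate = outgassing * TORR * 1e-3      # J/s per cm² (1 litre = 1e-3 m³)
molecules = pv_rate / (K_B * T)         # PV = N·kT, applied per unit time
print(f"{molecules:.1e} molecules/s/cm²")  # ~1e8
```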

This outgassing rate would have caused unacceptably large pressures in the beam pipe between pumping stations – pressures which could not be reduced by larger pumps but only by reducing the outgassing rate by one to two orders of magnitude. Laboratory measurements showed that the diffusion rate, and hence the outgassing rate, was very temperature dependent. This suggested a way of removing the source of hydrogen – the steel was subjected to a heat treatment of about 1000 °C in a vacuum furnace before being used. At this temperature the hydrogen release is so fast that the concentration of dissolved hydrogen falls rapidly to a value determined by the partial pressure in the vacuum furnace. In this way it was possible to obtain the tons of stainless steel with special low outgassing rates which were needed for the ultra-high vacuum system of the ISR.

Cryopumping for even lower pressure

In addition to the ultra-high vacuum requirement of 10⁻¹⁰ to 10⁻¹¹ torr all around the ISR rings, dictated essentially by beam lifetime considerations, the experimenters would like pressures in the intersection regions in the 10⁻¹² torr range or better. Such low pressures reduce the background signals due to proton–gas molecule collisions relative to the true proton–proton collisions. Pressures even into the 10⁻¹³ torr range have been obtained, notably in intersection region I-6, using cryopumping techniques.

In a cryopumped intersection region a surface is cooled to a low temperature and acts as a trap to ‘solidify’ any gas molecule which strikes it. The speed of the pump depends on the area of the cooled surface (12 l/s and 45 l/s per cm² of surface are possible pumping rates for nitrogen and hydrogen respectively) and the pressure limit depends on the temperature of the surface and on the gas (the lower the boiling point of the gas, the lower the temperature needed).
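
The quoted rates per cm² follow from kinetic theory: an ideal cold surface pumps at the rate molecules strike it, one quarter of the mean molecular speed per unit area. A quick check (room-temperature gas assumed):

```python
import math

# Ideal area-specific pumping speed S/A = v_mean/4 from kinetic theory.
R = 8.314    # gas constant, J/(mol·K)
T = 293.0    # assumed temperature of the gas striking the cold surface, K

def litres_per_s_per_cm2(molar_mass_kg):
    v_mean = math.sqrt(8 * R * T / (math.pi * molar_mass_kg))  # m/s
    return (v_mean / 4) * 0.1   # 1 m/s across 1 cm² = 0.1 litre/s

print(f"N2: {litres_per_s_per_cm2(0.028):.0f} l/s per cm²")  # ~12
print(f"H2: {litres_per_s_per_cm2(0.002):.0f} l/s per cm²")  # ~44
```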

Hydrogen, as described above, is the major gas load in the ISR; unfortunately it is the most difficult gas to condense apart from helium. The theoretical low-pressure limit of a cryopump is given by the saturated vapour pressure of the gas being condensed at the temperature of the pumping surface. For hydrogen this is still about 10⁻⁶ torr at 4.2 K (a convenient refrigeration temperature using liquid helium as coolant) but it should fall to an extrapolated value of 10⁻¹³ torr at 2.5 K. In practice, however, it proved impossible to condense hydrogen to pressures below about 10⁻¹⁰ torr, independently of the temperature used. This anomaly was traced to an interaction between the (black body) thermal radiation coming from the vacuum chambers at room temperature and the condensed hydrogen layer – the latter being continually desorbed by this thermal bombardment. The interaction is not one of simple heating but depends in a complex way on the thickness and purity of the condensed layer and on the characteristics of the cold substrate carrying it.

Although it was possible to operate a liquid-helium-cooled cryopump down to 10⁻¹³ torr in the laboratory, even when exposed to thermal radiation, by modifying the substrate (e.g. by the precondensation of an inert gas layer of argon or neon), a more practical solution has been developed and used in the ISR. This involves optically and almost thermally opaque chevron baffles at 77 K, optimised for molecular transmission. This is a compromise involving a considerable loss of conductance, and hence pumping speed, to the vacuum chamber. The pump and baffles are thus designed to achieve a given pressure and then dimensioned to give the required pumping speed. Two liquid-helium-cooled cryopumps have been installed in I-6. Each has a speed of about 15 000 l/s and a limit pressure of about 2 × 10⁻¹³ torr. They have operated for several months in the upper 10⁻¹³ torr range.

Measuring very low pressures

An advance in one technique often exposes a weakness in another. It was noticed that laboratory systems designed to extend knowledge of very low pressures frequently appeared to be limited at about 1 to 2 × 10⁻¹² torr. Nearly all very low pressure gauges use a tungsten filament heated to about 2300 °C to provide a source of ionising electrons. The apparent limit pressures were traced to an artefact introduced by the gauge itself – the vapour pressure of tungsten evaporated from the heated filament. Operating the filament at a carefully chosen, reduced temperature can extend the useful range of the gauge by almost an order of magnitude.

The hot tungsten filament is at the root of another problem. It produces heating in the surrounding vacuum chamber causing an increase of hydrogen desorption and a real increase of the system pressure. Recent development work has shown that it will be possible to construct an extremely sensitive gauge using the high gain of an integral channel electron multiplier. The gauge should be useful down to 10⁻¹⁵ torr and, since it uses an extremely low ionising current of a few nanoamperes, it will produce practically no heating or disturbance to the vacuum system.

Beam induced vacuum problems

‘Pressure bumps’ in the ISR have been in the news before (see vol. 11, page 245). They are the major obstacle to achieving the design stored beams of 20 A and the design luminosity. They are localised regions of about 10 m in which the pressure, normally stable at about 10⁻¹⁰ torr in the absence of the beam, begins to rise when the stacked proton beam current exceeds a certain value. Pressure bumps may occur anywhere around either ring, at one or several points simultaneously. The mechanism is one of gas release from the wall of the vacuum chamber under ion bombardment. The ions, formed by the ionising effect of the proton beam on the 10⁻¹⁰ torr of residual gas, are ejected out of the space-charge potential of the beam onto the wall with an energy of about 1 keV. The released gas increases the local pressure and thus gives, in turn, more ions. Hence we have the makings of an avalanche effect, and the pressure may stabilise at some higher value or increase without limit until it destroys the stacked beam.
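
A simplified linear model captures the threshold behaviour (this is a sketch of the mechanism, not the detailed ISR analysis):

```latex
% Gas density n per unit length: thermal outgassing q_0, ion-induced
% desorption (eta molecules released per ion, ionisation cross-section
% sigma, beam current I), pumping speed per unit length s:
\frac{dn}{dt} = q_{0} + \eta\,\sigma\,\frac{I}{e}\,n - s\,n
% The density runs away when the desorption gain beats the pumping,
% i.e. for currents above
I_{\mathrm{crit}} = \frac{e\,s}{\eta\,\sigma}
% Cleaner walls (smaller eta) and more pumping (larger s) raise the
% critical current, as the bakeout experience described below bears out.
```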

The danger is obviously greatest where the residual pressure is greatest, where the pumping speed is lowest or where the vacuum chamber wall is contaminated and there is a large gas yield per incident ion. It is now clear that, even after the elaborate cleaning and degassing procedures, the vacuum chamber is not as clean as was thought. On the basis of thermally induced desorption, it had been concluded that hydrogen dissolved in the metal is the only source of gas. But now it is apparent that the surface is covered with a layer of strongly adsorbed contaminants (H₂, H₂O, CO, CO₂, CH₄, hydrocarbons etc.) which are only released under energetic ion bombardment.

During the initial operation of the ISR the critical current for run-away pressure bumps was about 4 A. At that time the 10⁻¹⁰ torr operating pressure was achieved after an in situ bakeout of about 4 hours at 200 °C. Since then the temperature has been raised to 300 °C and the bakeout time lengthened to about 24 hours – the critical currents have climbed to 10 to 12 A. The normal operating pressure is still about 10⁻¹⁰ torr but clearly the surfaces now appear much cleaner under ion bombardment.

Why stop there? Because many components of the ISR were designed for a maximum bakeout temperature of 300 °C, and further increase of the bakeout times at constant temperature seems to give practically no advantage. Other parameters have to be attacked – residual pressure and pumping speed. All the intersection regions were initially equipped with titanium sublimation pumps in addition to the normal sputter-ion pumps, which are the standard pumping element around the ISR. Pressures at the intersections were typically around 2 × 10⁻¹¹ torr and the ‘pressure bump’ phenomenon rarely occurred in these regions.

A vacuum improvement programme is therefore under way to equip the whole of the ISR with additional sublimation pumps. They are already installed in eleven of the 24 sectors, which operate regularly at 2 × 10⁻¹¹ torr. Pressure bumps in the unimproved sectors still limit the performance, but there are high hopes of reaching the design current early next year when the whole vacuum system is improved and running at 2 × 10⁻¹¹ torr.

In the meantime extensive laboratory investigations are under way to find ways of eliminating surface contamination. The most promising approach at the moment seems to be to simulate and accelerate the ion-induced desorption by running a high pressure (10⁻² torr) inert gas discharge in the vacuum chamber. Subsequent gas release rates on test samples have been reduced by two or even three orders of magnitude by this technique. But there is a technical problem – how to propagate and control a gas discharge around 2 kilometres of ISR vacuum chamber. At the same time, more sophisticated cleaning and bakeout techniques, new surface treatments and even the possibility of a chamber of an altogether different material are under study.

The results of applied research in the fields of materials science, low temperature physics, ultra-high vacuum technology and engineering have helped to create in the ISR the largest ultra-high vacuum system ever built. Exciting specialised techniques, such as cryopumping at 2.5 K, have been integrated with everyday nuts and bolts in their thousands. Possibly the most important achievement of the ISR vacuum system is the extremely high reliability of many apparently commonplace components – there are over 10 000 demountable flanges, for example, which must all be leak-tight simultaneously. This reliability is the result of careful and thorough applied research. There are still problems – such as the pressure bumps – but, to (mis-)quote from the Shakespeare play which gave us our title, ‘Think not on it till tomorrow: we’ll devise thee brave punishments for it’.

  • This article was adapted from text in CERN Courier vol. 12, November 1972, pp359–362

Computers, Why?

CERN’s new CDC 7600 central computer

CERN is the favourite showpiece of international co-operation in advanced scientific research. The public at large is, by now, quite used to the paradox of CERN’s outstandingly large-scale electromagnetic machines (accelerators) being needed to investigate outstandingly small-scale physical phenomena. A visitor finds it natural that this, largest-in-Europe, centre of particle research should possess the largest, most complex and costly accelerating apparatus.

But when told that CERN is also the home of the biggest European collection of computers, the layman may wonder: why is it precisely in this branch of knowledge that there is so much to compute? Some sciences such as meteorology and demography appear to rely quite naturally on enormously vast sets of numerical data, on their collection and manipulation. But high energy physics, not so long ago, was chiefly concerned with its zoo of ‘strange particles’ which were hunted and photographed like so many rare animals. This kind of preoccupation seems hardly consistent with the need for the most powerful ‘number crunchers’.

Perplexities of this sort may arise if we pay too much attention to the (still quite recent) beginnings of the modern computer and to its very name. Electronic digital computers did originate in direct descent from mechanical arithmetic calculators; yet their main function today is far more significant and universal than that suggested by the word ‘computer’. The French term ‘ordinateur’ or the Italian ‘elaboratore’ are better suited to the present situation and this requires some explanation.

What is a computer?

When, some forty years ago, the first attempts were made to replace number-bearing cogwheels and electro-mechanical relays by electronic circuits, it was quickly noticed not only that numbers were easier to handle when expressed in binary notation (as strings of zeros and ones), but also that the familiar arithmetical operations could be presented as combinations of two-way (yes or no) logical alternatives. It took some time to realize that a machine capable of accepting an array of binary-coded numbers, together with binary-coded instructions of what to do with them (the stored program), and of producing a similarly coded result, would also be ready to take in any kind of coded information, to process it through a prescribed chain of logical operations and to produce a structured set of yes-or-no conclusions. Today a digital computer is no longer a machine primarily intended for performing numerical calculations; it is more often used for non-numerical operations such as sorting, matching, retrieval, construction of patterns and making decisions, which it can implement even without any human intervention if it is directly connected to a correspondingly structured set of open-or-closed switches.
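
The reduction of arithmetic to yes-or-no alternatives can be made concrete in a few lines. This is an illustrative sketch of a one-bit ‘half adder’, not any particular historical circuit:

```python
def half_adder(a: bool, b: bool):
    """Add two one-bit numbers using nothing but logical operations."""
    total = a != b     # XOR gives the sum bit
    carry = a and b    # AND gives the carry bit
    return total, carry

for a in (False, True):
    for b in (False, True):
        s, c = half_adder(a, b)
        print(f"{int(a)} + {int(b)} = {int(c)}{int(s)} (binary)")
```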

Automatic ‘black boxes’ capable of producing a limited choice of responses to a limited variety of input (for example, vending machines or dial telephones) were known before; their discriminating and logical capabilities had to be embodied in their rigid internal hardware. In comparison, the computer can be seen as a universally versatile black box, whose hardware responds to any sequence of coded instructions. The complication and the ingenuity are then largely transferred into the writing of the program.

The new black box became virtually as versatile as the human brain; at the same time it offered the advantages of enormously greater speed, freedom from error and the ability to handle, in a single operation, any desired volume of incoming data and of ramified logical chains of instructions. In this latter respect the limit appears to be set only by the size (and therefore the cost) of the computer, i.e. by the total number of circuit elements which are brought together and interconnected in the same computing assembly.

High energy physics as a privileged user

We are only beginning to discover and explore the new ways of acquiring scientific knowledge which have been opened by the advent of computers. During the first two decades of this exploration, that is since 1950, particle physics happened to be the most richly endowed domain of basic research. Secure in their ability to pay, high energy physicists were not slow to recognize those features of their science which put them in the forefront among the potential users of the computing hardware and software in all of their numerical and non-numerical capabilities. The three most relevant characteristics are as follows:

1. Remoteness from the human scale of natural phenomena

Each individual ‘event’ involving sub-nuclear particles takes place on a scale of space, time and momentum so infinitesimal that it can be made perceptible to human senses only through a lengthy and distorting chain of larger-scale physical processes serving as amplifiers. The raw data supplied by a high energy experiment have hardly any direct physical meaning; they have to be sorted out, interpreted and re-calculated before the experimenter can see whether they make any sense at all – and this means that the processing has to be performed, if possible, in the ‘real time’ of the experiment in progress, or at any rate at a speed only a computer can supply.

2. The rate and mode of production of physical data

Accelerating and detecting equipment is very costly and often unique; there is a considerable pressure from the user community and from the governments who invest in this equipment that it should not be allowed to stand idle. As a result, events are produced at a rate far surpassing the ability of any human team to observe them on the spot. They have to be recorded (often with help from a computer) and the records have to be scanned and sifted – a task which, nowadays, is usually left to computers because of its sheer volume. In this way, experiments in which a prolonged ‘run’ produces a sequence of mostly trivial events, with relatively few significant ones mixed in at random, become possible without wasting the valuable time of competent human examiners.

3. High statistics experiments

As the high energy physics community became used to computer-aided processing of events, it became possible to perform experiments whose physical meaning resided in a whole population of events, rather than in each taken singly. In this case the need grew from an awareness of having the means to satisfy the need; a similar evolution may yet occur in other sciences (e.g. those dealing with the environment), following the currents of public attention and possibly de-throning our physics from pre-eminence in scientific computation.

Modes of application

In order to stress here our main point, which is the versatility of the modern computer and the diversity of its applications in a single branch of physical research, we shall classify all the ways in which the ‘universal black box’ can be put to use in CERN’s current work into eight ‘modes of application’ (roughly corresponding to the list of ‘methodologies’ adopted in 1968 by the U.S. Association for Computing Machinery):

1. Numerical mathematics

This mode is the classical domain of the ‘computer used as a computer’ either for purely arithmetic purposes or for more sophisticated tasks such as the calculation of less common functions or the numerical solution of differential and integral equations. Such uses are frequent in practically every phase of high energy physics work, from accelerator design to theoretical physics, including such contributions to experimentation as the kinematic analysis of particle tracks and statistical deductions from a multitude of observed events.

Installation of the 7600 in the computer centre

2. Data processing

Counting and measuring devices used for the detection of particles produce a flow of data which have to be recorded, sorted and otherwise handled according to appropriate procedures. Between the stage of the impact of a fast-moving particle on a sensing device and that of a numerical result available for a mathematical computation, data processing may be a complex operation requiring its own hardware, software and sometimes a separate computer.

3. Symbolic calculations

Elementary logical operations, which underlie the computers’ basic capabilities, are applicable to all sorts of operands such as those occurring in algebra, calculus, graph theory, etc. High-level computer languages such as LISP are becoming available to tackle this category of problems which, at CERN, is encountered mostly in theoretical physics but, in the future, may become relevant in many other domains such as apparatus design, analysis of track configurations, etc.

4. Computer graphics

Computers may be made to present their output in a pictorial form, usually on a cathode-ray screen. Graphic output is particularly suitable for quick communication with a human observer and intervener. Main applications at present are the study of mathematical functions for the purposes of theoretical physics, the design of beam handling systems and Cherenkov counter optics and statistical analysis of experimental results.

5. Simulation

Mathematical models expressing ‘real world’ situations may be presented in a computer-accessible form, comprising the initial data and a set of equations and rules which the modelled system is supposed to follow in its evolution. Such ‘computer experiments’ are valuable for advance testing of experimental set-ups and in many theoretical problems. Situations involving statistical distributions may require, for their computer simulation, the use of computer-generated random numbers during the calculation. This kind of simulation, known as the Monte-Carlo method, is widely used at CERN.
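
The principle is easily illustrated (a textbook toy, not a CERN production code): random sampling turns an integration into a counting experiment.

```python
import random

# Monte-Carlo estimate of pi: scatter points in the unit square and count
# how many fall inside the quarter circle of radius 1.
random.seed(1)                  # reproducible 'random' numbers
trials = 100_000
hits = sum(1 for _ in range(trials)
           if random.random() ** 2 + random.random() ** 2 < 1.0)
print(f"pi is approximately {4 * hits / trials:.3f}")  # close to 3.142
```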

6. File management and retrieval

As a big organization, CERN has its share of necessary paper-work including administration (personnel, payroll, budgets, etc.), documentation (library and publications) and the storage of experimental records and results. Filing and retrieval of information tend nowadays to be computerized in practically every field of organized human activity; at CERN, these pedestrian applications add up to a non-negligible fraction of the total amount of computer use.

7. Pattern recognition

This mode is mainly of importance in spark-chamber and bubble-chamber experiments: the reconstruction of physically coherent and meaningful tracks out of computed coordinates and track elements is performed by the computer according to programmed rules.

8. Process control

Computers can be made to follow any flow of material objects through a processing system by means of sensing devices which, at any moment, supply information on what is happening within the system and what is emerging from it. Instant analysis of this information by the computer may produce a ‘recommendation of an adjustment’ (such as closing a valve, modifying an applied voltage, etc.) which the computer itself may be able to implement. Automation of this kind is valuable when the response must be very quick and the logical chain between the input and the output is too complicated to be entrusted to any rigidly constructed automatic device. At CERN the material flow to be controlled is usually that of charged particles (in accelerators and beam transport systems) but the same approach is applicable in many domains of engineering, such as vacuum and cryogenics.
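
A minimal modern sketch of such a closed loop (the sensor, actuator, set-point and gain here are hypothetical stand-ins, not any real CERN interface):

```python
# A sketch of the 'sense, analyse, adjust' loop described above; all names
# and numbers are invented for the example.

def control_loop(read_sensor, set_actuator, setpoint, gain=0.5, steps=100):
    """Proportional feedback: nudge the actuator toward the setpoint."""
    output = 0.0
    for _ in range(steps):
        measured = read_sensor(output)
        error = setpoint - measured
        output += gain * error          # the 'recommendation of an adjustment'
        set_actuator(output)            # ...implemented by the computer itself
    return output

# Toy plant: the measured value simply follows the actuator setting.
log = []
final = control_loop(read_sensor=lambda u: u,
                     set_actuator=log.append,
                     setpoint=42.0)
print(f"actuator settled at {final:.2f}")
```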

Centralization versus autonomy

The numerous computers available at CERN are of a great variety of sizes and degrees of autonomy, which reflects the diversity of their uses. No user likes to share his computer with any other user; yet some of his problems may require a computing system so large and costly that he cannot expect it to be reserved for his exclusive benefit nor to be kept idle when he does not need it. The biggest computers available at CERN must perforce belong to a central service, accessible to the Laboratory as a whole. In recent years, the main equipment of this service has consisted of a CDC 6600 and a CDC 6500. The recent arrival of a 7600 (coupled with a 6400) will multiply the centrally available computing power by a factor of about five.

We are only beginning to discover and explore the new ways of acquiring scientific knowledge which have been opened by the advent of computers

For many applications, much smaller units of computing power are quite adequate. CERN possesses some 80 other computers of various sizes, some of them for use in situations where autonomy is essential (for example, as an integral part of an experimental set-up using electronic techniques or for process control in accelerating systems). In some applications there is need for practically continuous access to a smaller computer together with intermittent access to a larger one. A data-conveying link between the two may then become necessary.

Conclusion

The foregoing remarks are meant to give some idea of how the essential nature of the digital computer and that of high energy physics have blended to produce the present prominence of CERN as a centre of computational physics. The detailed questions of ‘how’ and ‘what for’ are treated in the other articles of this [special March 1972 issue of CERN Courier devoted to computing] concretely enough to show the way for similar developments in other branches of science. In this respect, as in many others, CERN’s pioneering influence may transcend the Organization’s basic function as a centre of research in high energy physics.

High voltage in vacuum

Whether it be in accelerators (accelerating columns, electrostatic inflectors, r.f. cavities or fast ejection units), secondary beams (separators and deflectors) or detectors (all types of spark chamber), the technological problems to be solved often centre around the insulation or switching of high voltages. In the case of insulation, attempts are made to minimize leakage currents, to eliminate flashover, to increase electric field gradients, or to lengthen the useful life and operational reliability. In the case of electrical breakdown, the salient factors are certain features of the arc such as jitter, rise time, lifetime, etc. for spark gaps; and brilliance, plasma size, memory, etc. for spark chambers and especially streamer chambers.

Over the past ten years, a great deal of work has been done, particularly in the USA and the USSR, on the insulation of high voltage in vacuum and on the physics of arcs, because of the rapid development of novel applications for high voltages – including X-ray flash tubes (operating at several MV), circuit breakers, very high voltage electron microscopy (MV range), and equipment used in plasma physics and in high energy physics. Vacua, and even ultra high vacua, are involved in most of these new applications.

In 1964, the Massachusetts Institute of Technology, together with the University of Illinois and industry (the Ion Physics Corporation and the High Voltage Engineering Company), organized the first International Symposium on High Voltage Insulation in Vacuo. Currently these symposia take place every two years and are attended by delegates from all over the world. An international committee, on which CERN is represented, has been set up.

Since 1961, CERN has devoted special attention to this subject by stimulating applied research in connection with the development of electrostatic separators (see for example CERN COURIER vol. 9, page 132) and accelerating columns.

Technical developments with d.c. voltages

In spite of some spectacular fundamental discoveries concerning the physics involved, and some notable technological advances, the theory of breakdown in vacuum at very high voltage has still to be formulated. The most important technological advances, in which CERN has often played a pioneering role, include:

  • Considerable improvements in voltage holding by abandoning stainless steel for the cathode in favour of heated glass (Berkeley), aluminium oxide (CERN) or titanium (CERN).
  • Discovery of the marked effect of pressure and of the nature of the residual gas between 10⁻⁵ and 10⁻³ torr on behaviour under voltage and on operating life (many Laboratories including CERN).
  • Discovery of the importance of the cleanliness of the surfaces subjected to powerful electric fields leading to the use of ultra high vacuum techniques (CERN).

The progress in the last ten years with homogeneous fields using large electrodes of the order of a square metre has made it possible to pass from 55–60 kV/cm over 5 cm and 40–50 kV/cm over 10 cm in 1960, to 150–160 kV/cm over 5 cm and 100–110 kV/cm over 10 cm in 1969. Nevertheless this is still far from the theoretical limit set by field emission – 100 000 kV/cm!

The state of the theory

The large difference between the values obtained and the theoretical limit can now be explained. Though no new results have been published for some years on technical aspects, there have been some fundamental discoveries throwing new light on the mechanisms at the origin of an electric arc in vacuum. It has been found that there is high local amplification of the electric field at microscopic points which appear on metal surfaces under the action of intense fields. The heights of these points vary between a few tenths and several hundreds of microns. They can occur at either the anode or the cathode, but exactly how they are produced is still completely unknown.

It is now believed that there are several mechanisms which give rise to breakdown, the predominant one depending on parameters such as distance, voltage, residual pressure and the surface state of the electrodes. There are essentially two major regimes:

1. Short gaps between electrodes (less than a few millimetres) in a uniform field, or strong electric fields which are non-uniform (point-plane geometry);

2. Large gaps (more than a few millimetres) and very high voltages (more than a few hundred kV).

1. In the first regime, breakdown follows local heating either at the cathode due to field emission at the points which mysteriously develop, or at the anode by electron bombardment. The heating causes serious vaporization when current densities reach critical values between 10⁷ and 10⁸ A/cm². The metal vapour thus produced is then rapidly ionized by cold emission electrons, leading to the final breakdown within a period varying from a few nanoseconds to a few hundred nanoseconds.

There have been some fundamental discoveries throwing new light on the mechanisms at the origin of an electric arc in vacuum

As the breakdown threshold is closely related to a critical current, and thus to a field, the characteristic breakdown voltage Vs as a function of the gap d between the electrodes should be linear. Also, the law of the variation of current with field should follow the predictions of field emission theory. These results have been confirmed over the past few years up to distances of a few millimetres between electrodes in a uniform d.c., pulsed or high frequency field. The improvements in behaviour under pulsed voltage that can possibly be gained in this case are very small when the time for which the voltage is applied is longer than a few tens of nanoseconds.
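
The ‘field emission theory’ invoked here is presumably the Fowler–Nordheim law; for reference (a standard result, not spelled out in the original), the current density J emitted from a surface of work function φ in a macroscopic field E, locally enhanced by a factor β at the micro-points, is

$$ J \;=\; \frac{A\,(\beta E)^2}{\phi}\, \exp\!\left(-\frac{B\,\phi^{3/2}}{\beta E}\right), $$

with A and B constants, so that a plot of ln(J/E²) against 1/E should be a straight line – the usual experimental test of field emission.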

2. At CERN, with only a few exceptions, most of the applications of high voltage in vacuum are in the second regime which it had been difficult to study in University research laboratories because of the cost involved. Theoretical studies, in conjunction with experiments, were undertaken at CERN and have led to several new experimental observations and the elaboration of a model of the discharge phenomena.

When voltages are increased beyond a few hundred kV, the behaviour of the breakdown voltage threshold as a function of the different parameters changes completely. The characteristic Vs as a function of d is no longer linear but proportional to the square root of d. The residual pressure is of considerable importance (which is not so with short distances) and the threshold Vs is no longer determined by a critical current – the current before breakdown varies by several orders of magnitude when the distance varies only by a factor of two or three. Finally, and this is a fundamental point, the average time-lag to breakdown lengthens considerably – in the range of microseconds to several milliseconds.

These characteristics can be explained by the ‘micro-particle’ hypothesis. The mechanism leading to breakdown could then be described as follows: a collection of atoms is torn away from the anode as a result of the application of the field and electron bombardment. This micro-particle, electrically charged, is accelerated by the field between the electrodes and strikes the cathode with a velocity v and an energy W. If v and W are higher than critical values vc and Wc, the energy dissipated at the moment of striking is high enough, and remains within the interaction volume for long enough, to give rise to intense vaporization. Breakdown can then take place inside the bubble of gas thus formed. It can be shown from the double condition (W greater than Wc and v greater than vc) that the characteristic Vs as a function of d is then indeed of the square-root form and that the minimum time-lag Tmin is such that ln Tmin varies linearly with V².
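
One way to make the square-root law plausible (a standard argument along the lines of the clump hypothesis, not given explicitly above) is to assume that the charge q acquired by the micro-particle is proportional to the surface field E = V/d. Then

$$ W = qV \;\propto\; \frac{V^2}{d}, \qquad W \ge W_c \;\Rightarrow\; V_s \propto \sqrt{d}, $$

which is exactly the dependence observed in the second regime.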

Application of the theory to pulsed voltages

For high voltages (MV) and large distances (cm) in ultra high vacua (10⁻⁹ to 10⁻⁸ torr), present investigations at CERN show that, in fact, the mechanism involved in initiating a breakdown takes a considerable time (µs to ms) to develop and that these times are statistically distributed in such a way that ln Tmin is proportional to V². Because of these results, obtained with stainless steel and titanium electrodes, it would be possible to increase the strength of electric fields in vacuum very appreciably for all applications where the field is needed for only a short time (a maximum of several microseconds), which is often the case around large particle accelerators. The advantages obtained would allow the present values of d.c. fields to be doubled for times of the order of a few microseconds and distances between electrodes greater than a centimetre.

Future possibilities

Such an increase in the intensity of electric fields would allow further steps forward in the use of high energy particle separators and fast deflectors. Other conceivable applications include strong field accelerating lines in electron ring accelerators, coaxial beam guides, electromagnetic lenses, etc.

In all these applications, the new technical problem which arises is that of generating voltage pulses of several MV with very steep leading edges (less than 10 ns). The duration of the pulse depends on the application in view (from 10 ns to a few µs). A Marx generator in conjunction with a Blumlein line can be used for very short pulses as is already done for the Stanford 600 kV streamer chamber (see CERN COURIER vol. 7, page 219). The most delicate problems are those involved in striking the main spark-gap in the Blumlein line with a very low jitter (ns), since this spark-gap must operate at 2 MV with an impedance of 30 ohms. There are thoughts of using a ruby laser, a multiple-electrode spark-gap or perhaps a liquid dielectric spark-gap. A rise time of 50 to 100 ns would be adequate for pulses lasting several microseconds, in which case the Blumlein line would not be needed.
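
For orientation, the pulse length available from a Blumlein line is set by the two-way transit time of the line – a standard transmission-line result; the lengths and dielectrics in the sketch below are assumed for illustration only:

```python
# Back-of-envelope Blumlein timing, a sketch with assumed (not quoted)
# numbers: for a line of length l filled with a dielectric of relative
# permittivity eps_r, the output pulse lasts roughly the two-way transit
# time of one line section.

C_LIGHT = 3.0e8  # m/s

def blumlein_pulse_ns(length_m, eps_r):
    v = C_LIGHT / eps_r ** 0.5          # wave speed in the dielectric
    return 2.0 * length_m / v * 1e9     # pulse duration in nanoseconds

# e.g. a 3 m water-insulated line (eps_r ~ 80) vs. a 3 m oil line (eps_r ~ 2.2)
for name, eps in [("water", 80.0), ("oil", 2.2)]:
    print(f"{name}: ~{blumlein_pulse_ns(3.0, eps):.0f} ns")
```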

Many other laboratories and commercial firms are particularly interested in the work currently being done at CERN in the field of high voltages

In an application of the deflector or separator type, the beam element can be included in the load on the Blumlein line. The magnetic field is then in phase with the electric field, and a TEM (transverse electromagnetic) wave is set up. This offers two advantages:

1. If the particles are sent into the equipment in the same direction as that of the propagation of the wave, the unit is an electromagnetic separator – a velocity selector with automatic magnetic compensation (with chromatic aberrations reduced to a minimum).

2. If the particles are sent in the opposite direction to the TEM wave, then the electric and magnetic deflections are added together and an electromagnetic deflector is formed. In view of the present technical results, deflectors can be made with 450 to 500 kV/cm (1 MV over 2 cm), or the equivalent of 3 kG. Thus the same piece of equipment can serve either as a deflector or as a separator.
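
Both cases follow from the Lorentz force on a charge in a TEM wave, for which |E| = c|B| (a standard result, added here for clarity rather than taken from the original). For a particle of velocity βc,

$$ F_\perp = q\,(E - \beta cB) = qE\,(1-\beta) \quad\text{(travelling with the wave)}, \qquad F_\perp = qE\,(1+\beta) \quad\text{(travelling against it)}, $$

so a co-propagating relativistic particle feels almost no net force – except through the dependence on β, which is precisely what makes the device a velocity selector – while a counter-propagating particle feels nearly twice the electric deflection.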

The main potential of electromagnetic separators with high electric fields lies in the separation of low-energy (a few hundred MeV) kaon beams for bubble chambers. It is possible in these cases to reduce the length of the separator considerably while retaining the same angle of separation, and thus to have particle beams with a short decay length (0.75 m for 100 MeV charged kaons).
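
The 0.75 m figure is consistent with reading ‘100 MeV’ as the kaon momentum, 100 MeV/c (an assumption on our part), using the modern values cτ ≈ 3.7 m and m ≈ 494 MeV/c² for the charged kaon:

$$ L = \gamma\beta\,c\tau = \frac{p}{mc}\,c\tau \approx \frac{100}{494} \times 3.7\ \mathrm{m} \approx 0.75\ \mathrm{m}. $$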

The refined technology which has been developed to obtain very high voltage pulses, and to overcome the complex problems in striking a spark-gap operating at several MV, allows one to think of building streamer chambers with very large gaps (of the order of a metre) using such high voltages. It is probably in this direction that interest in pulsed high voltages will be concentrated, because the advantages offered by such chambers are so attractive that physicists will almost certainly want them built. Finally, it is interesting to note that many other laboratories and commercial firms are particularly interested in the work currently being done at CERN in the field of high voltages. These laboratories and firms include those working in such varied fields as emitters, the transmission of electric power, circuit breakers, rectifiers, colour television, etc.

  • This article was adapted from text in CERN Courier vol. 9, July 1969, pp208–210

Gargamelle: CERN’s new heavy liquid bubble chamber

An aerial view of part of the CERN site

In the April issue of CERN COURIER we reproduced a photograph of the arrival of the first piece of the new heavy liquid bubble chamber ‘Gargamelle’. The 140 ton base-plate for the magnet was towed onto the site by two tractors in a 48-wheel convoy on 31 March. It seems an appropriate time to say something about this significant addition to CERN’s research equipment.

Its use

Gargamelle has been conceived principally as an instrument for research on neutrinos. The fascination of these elusive particles has been brought out in several previous articles in CERN COURIER (see particularly the article by C.A. Ramm, vol. 6, p. 211). They are the most abundant particles in the universe and their study will tell us much about the weak interaction, the only one in which they take part. Their interactions are so rare that ten years ago, our present ability to observe neutrinos was unimaginable. By 1963, it had become possible at high-energy accelerators, where large, refined detection equipment was already in use, to ‘see’ about one neutrino an hour from the millions that the accelerator produced. This will have increased by the early 1970s to something like 10 000 per day and the study of neutrinos will be on the same footing as that of most other particles. At CERN, Gargamelle will be one of the important contributors to this advance.

For neutrino experiments, a heavy liquid bubble chamber has two advantages over a hydrogen bubble chamber:

i) It presents a more dense target so that there are more particles with which the neutrino can interact;

ii) The distance a neutral particle travels in the liquid before producing charged particles (which leave tracks giving information about the parent neutral particle) is shorter. Many important neutrino interactions – such as the elastic scattering of an antineutrino and a proton producing a neutron – yield neutral particles, and the ability of the heavy liquid chamber to give information on them is therefore invaluable.

A heavy liquid chamber is less favourable than the hydrogen chamber in the complexity of the target it presents to the incoming beam and in the accuracy with which the particle tracks can be measured. Also, it is worth adding here that modified hydrogen chambers are now coming into vogue containing hydrogen/neon mixtures or a hydrogen target surrounded by a hydrogen/neon mixture, which compromise between the advantages and disadvantages of pure hydrogen and heavy liquid.

The main detector in the neutrino experiments previously carried out at CERN has been the CERN heavy liquid bubble chamber, which has a volume of 1180 litres. Gargamelle is much bigger with 10 000 litres of useful volume. In a uniform neutrino beam the event rate would be proportional to the volume for the same liquid. In fact, Gargamelle will contribute about a factor of seven to the rate at which neutrino interactions can be observed.

The new chamber is being designed and built at the Saclay Laboratory in France, with help from Ecole Polytechnique, Orsay and industry, and is being given to CERN who are providing its buildings and supplies. As mentioned above, the first piece arrived recently and the other components will arrive during the course of the next year. The magnet is coming directly to be assembled at CERN. The other components will be first assembled and tested at Saclay. It is hoped to have the chamber in operation at the end of 1969.

Description of the chamber

The main features of the chamber are as follows: the body (which is almost ready for delivery) is a welded cylinder with dished ends, 1.85 m in diameter and 4.5 m long, with the axis of the cylinder in the direction of the beam. It is constructed of low carbon steel, 60 mm thick increasing to 150 mm in the region of the ports. Its total volume is 12 m³, of which 10 m³ is ‘useful volume’, i.e. can be seen by two cameras. Two diaphragms, made of polyurethane elastomer 4 m by 1 m, running in the direction of the axis on one side of the chamber, are used to vary the pressure on the liquid. The liquid can range from pure propane (when the chamber would contain 5 tons of liquid) to freon (15 tons), or any intermediate mixture. Four fish-eye lenses, with an angle of view of 110°, are set in apertures in each diaphragm; each set of four has its images recorded on a single film. There are 21 xenon flash tubes distributed over the chamber behind the diaphragms to give ‘dark field’ illumination (see CERN COURIER vol. 7, p. 144).

A 1/8 scale model of Gargamelle

The chamber is surrounded by a magnet designed to produce a field of 19 kG. The magnet yoke, weighing 800 tons, serves as support for the chamber, the expansion system and the coils. The two sets of coils weigh 80 tons each and are mounted vertically; the field direction is horizontal.

The name Gargamelle is taken from the satirical novel ‘Gargantua’ by Rabelais (1534) in which Gargamelle was the mother of the giant Gargantua. She gave birth to Gargantua through her ear. The association of headaches with Gargamelle is appropriate even in modern times. The construction of the new chamber has created many problems for its makers. Bringing forth the data from Gargamelle will also cause some headaches. The direct interpretation of the events recorded on the two films will be much more complicated than with smaller bubble chambers. New scanning and measuring techniques will be essential and already, under the auspices of the Gargamelle Users’ Committee, much development is in progress.

Gargamelle, in combination with the increases in repetition rate and intensity per pulse of the proton synchrotron and the refinements incorporated in the new neutrino beam-line, should make the coming years of neutrino research at CERN very fruitful ones. 

  • This article was adapted from text in CERN Courier vol. 8, May 1968, pp95–96

Computers at CERN

The IBM 7090 computer room at CERN

The popular picture of a biologist shows him wearing a white coat, peering through a microscope. If it is at all possible to draw a popular picture of an experimental high-energy physicist, he will probably be sitting at his desk and looking at his output from the computer. For the computer is increasingly becoming the tool by which raw experimental results are made intelligible to the physicist.

By one of those apparent strokes of luck that occur so often in the history of science, electronic digital computers became available to scientists at just that stage in the development of fundamental physics when further progress would otherwise have been barred by the lack of means to perform large-scale calculations. Analogue computers, which had been in existence for a number of decades previously, have the disadvantage for this work of limited accuracy and the more serious drawback that a more complicated calculation needs a more complicated machine. What was needed was the equivalent of an organized team of people, all operating desk calculating machines, so that a large problem in computation could be completed and checked in a reasonable time. So much was this need felt, that in England, for example, before digital computers became generally available, there was a commercial organization supplying just such a service of hand computation.

At CERN the requirement for computing facilities in the Theory Division was at first largely satisfied by employing a calculating prodigy, Willem Klein. Mr. Klein is one of those rare people who combine a prodigious memory with a love of numbers, and it was some time before computers in their ever-increasing development were able to catch up with him. He is still with us in the Theory Division, giving valuable help to those who need a quick check calculation. It is of interest to note, however, that he has now added a knowledge of computer programming to his armoury of weapons for problem solution!

The first computer calculations made at CERN were also done in the very earliest days of the Organization. Even before October 1958, when the Ferranti Mercury computer was installed, computer work had been sent out to an English Electric Deuce in Teddington, an IBM 704 in Paris and Mercury computers at Harwell and Manchester. Much of this work was concerned with orbit calculations for the proton synchrotron, then being built.

We have now reached a stage at which there is hardly a division at CERN not using its share of the available computer time. On each occasion that a new beam is set up from the accelerator, computer programmes perform the necessary calculations in particle optics as a matter of routine; beam parameters are kept in check by statistical methods; hundreds of thousands of photographs from track-chamber experiments are ‘digitized’ and have kinematic and statistical calculations performed on them; the new technique of sonic spark chambers, for the filmless detection of particle tracks, uses the computer more directly. In addition, more than 80 physicists and engineers use the computer on their own account, writing programmes to solve various computational problems that arise in their day-to-day work.

There are now two computers at CERN, the original Ferranti Mercury and the IBM 7090, a transistorized and more powerful replacement for its predecessor, the IBM 709. The 7090, in spite of its great speed (about 100 000 multiplications per second!), is rapidly becoming overloaded and is to be replaced towards the end of this year by a CDC 6600, which at present is the most powerful computing system available in the world.

It is hoped that this new machine will satisfy the computing needs of CERN for upwards of five years. The Mercury computer is now being used more and more as an experimental machine and there is, for instance, a direct connexion to it at present from a spark-chamber experiment at the proton synchrotron. Calculations are performed and results returned to the experimental area immediately, giving great flexibility.

Track-chamber photographs

Among the biggest users of computer time are the various devices for converting the information on bubble-chamber and spark-chamber photographs, usually on 35-mm film, into a form in which the tracks of the particles can be fitted with curves and the entire kinematics of an event subsequently worked out. To this end, from the earliest days of CERN, IEPs (instruments for the evaluation of photographs) [rumour once had it that IEP stood for ‘instrument for the elimination of physicists’!] have been built and put into use. These instruments enable accurately measured co-ordinates of points on a track, together with certain identifying information, to be recorded on punched paper tape. Their disadvantage is that measuring is done manually, requires skill and, even with the best operator, is slow and prone to errors. The paper tapes produced have to be copied on to a magnetic tape, checking for various possible errors on the way, and the magnetic tape is then further processed to provide in turn geometric, kinematic and statistical results.
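
To give a flavour of the ‘geometric’ stage of this chain, here is a minimal modern sketch (not the programs actually used at CERN) that fits a circle to measured track coordinates and converts the radius of curvature to momentum with the familiar p[GeV/c] ≈ 0.3 B[T] R[m] rule; the field value and measurement noise are invented for the example:

```python
import numpy as np

# Algebraic least-squares circle fit: x^2 + y^2 + D*x + E*y + F = 0.
def fit_circle(x, y):
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x**2 + y**2)
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = -D / 2.0, -E / 2.0
    R = np.sqrt(cx**2 + cy**2 - F)
    return cx, cy, R

# Synthetic 'measurements': an arc of radius 2 m plus measuring-table noise.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 0.6, 20)
x = 2.0 * np.cos(t) + rng.normal(0, 1e-3, t.size)
y = 2.0 * np.sin(t) + rng.normal(0, 1e-3, t.size)

cx, cy, R = fit_circle(x, y)
B = 1.5  # tesla, an assumed field
print(f"R = {R:.3f} m  ->  p ~ {0.3 * B * R:.2f} GeV/c")
```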

It was recognized at an early stage, both in Europe and in the United States, that for experiments demanding the digitization of very large numbers of pictures, for example those requiring high statistical accuracy, some more-automatic picture-reading equipment would be needed. Two such devices are now coming into use at CERN. One, the Hough-Powell device (known as HPD), developed jointly by CERN, Brookhaven, Berkeley and the Rutherford Laboratory, is an electro-mechanical machine of high precision, which still requires a few pilot measurements to be made manually, on a measuring table named ‘Milady’, when used for bubble-chamber pictures. It has already been used in one experiment, for the direct processing of 200 000 spark-chamber photographs (for which the pilot measurements are unnecessary). The other device is called ‘Luciole’, a faster, purely electronic machine, although of lower precision, specially developed at CERN for digitizing spark-chamber photographs.

With the sonic spark chamber, the position of the spark between each pair of plates is deduced from the time intervals between its occurrence and the detection of the sound by each of four microphones. Arrays of such devices can be connected directly to the computer, thus dispensing with the taking, developing and examination of photographs.
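
A minimal sketch of that reconstruction (illustrative only; the chamber geometry, the sound speed and the fitting method are our assumptions): the spark position and the unknown emission time are recovered together from the four arrival times by least squares.

```python
import numpy as np
from scipy.optimize import least_squares

V_SOUND = 343.0  # m/s; the real chambers would use a gas-dependent value

# Four microphones at the corners of a 1 m x 1 m gap (assumed geometry).
mics = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])

def arrival_times(spark, t0):
    return t0 + np.linalg.norm(mics - spark, axis=1) / V_SOUND

def locate(times):
    # Fit (x, y, t0): spark position plus the unknown emission time.
    def residuals(p):
        return arrival_times(p[:2], p[2]) - times
    fit = least_squares(residuals, x0=[0.5, 0.5, 0.0])
    return fit.x[:2]

true_spark = np.array([0.31, 0.74])
measured = arrival_times(true_spark, t0=1.0e-3)
print("reconstructed:", locate(measured))
```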

The study of dynamics of particles in magnetic and electric fields gives rise to another important family of computer programmes. The electron storage ring, or beam-stacking model, required the writing of a programme that followed the motion of individual batches of particles during their acceleration and stacking in the ring. A previous study by the same group resulted in a series of programmes to examine the behaviour and stability of a proposed fixed-field, alternating-gradient stacking device. Various aspects of the performance of the linac (the linear accelerator that feeds the PS) have been studied and improved using the computer, and the later stages of the design of the PS itself involved a detailed computer simulation of the beam in the ring, including the various transverse or ‘betatron’ oscillations to which it is subjected.
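
In its simplest linear form, the betatron-oscillation tracking mentioned here reduces to applying a one-turn matrix to the particle coordinates once per revolution; a sketch with invented tune and beta-function values (not PS design figures):

```python
import math

Q, beta_x = 6.25, 16.0          # tune and beta function [m] (assumed values)
mu = 2 * math.pi * Q            # one-turn betatron phase advance

def one_turn(x, xp):
    """Apply the linear one-turn map to (position, slope)."""
    c, s = math.cos(mu), math.sin(mu)
    return (c * x + beta_x * s * xp,
            -s / beta_x * x + c * xp)

x, xp = 0.01, 0.0               # 1 cm initial offset
for turn in range(5):
    x, xp = one_turn(x, xp)
    print(f"turn {turn + 1}: x = {x * 1000:+.2f} mm")
```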

There also exists a series of particle-optics programmes used for the design and setting up of particle beams, particularly the ‘separated’ beams producing particles of only one kind. These are ‘production’ calculations, in the sense that the programme is run with new parameters every time there is a major change in beam layout in any of the experimental halls.

Computer language

At first, a major bar to the use of computers for small, but important, calculations was the difficulty of programming them in their own special ‘language’ to solve the specific problem in hand. What is the use of a machine that can perform a particular calculation in a minute, the would-be-user asks, if it will take a month to provide the programme for the calculation? Given a hand calculating machine, a pencil and paper, and a quiet room, I can do it myself in three weeks! This valid argument limited the use of computers to two kinds of calculation: those too long to be performed by hand, and those that had to be carried out so often that the original effort of producing the programme was justified.

This situation was rectified by the use of ‘programming languages’, which make it possible to express one’s problem in a form closely resembling that of mathematics. Such languages, if defined rigorously enough, express the problem unambiguously, and they can be translated automatically (by the computer) into the instructions for a particular computer. Two such languages have been used at CERN: Mercury Autocode, and Fortran. The use of Mercury Autocode has recently been discontinued, but until a short time ago many physicists used both languages with great success to express their computational problems in a form directly comprehensible by a computer. Courses in the Fortran language, given both in English and French, are held regularly, and usually last about three weeks. Such ‘compiler languages’, as they are called, used to be considered a rather inferior method for programming computers, as the translations obtained from them often used the machine at a low efficiency, but it is now recognized that the advantage of writing programmes in a language that can be translated mechanically for a number of different computers far outweighs a small loss in programme efficiency.

In the Theory Division, computer programmes are often written by individual theorists to check various mathematical models. Having devised a formula based on a novel theory, the physicist computes theoretical curves for some function that can be compared with experimental results.

Operation

The Data Handling Division, which actually has CERN’s central computers under its charge, contains a number of professional programmers, mostly mathematicians by training. Some of these are responsible for the ‘systems programmes’, that is, the Fortran compiler and its associated supervisor programme. Some have the job of disseminating programming knowledge, helping individual users of the computer and writing programmes for people in special cases. Others are semi-permanently attached to various divisions, working on particular experiments as members of the team.

A small number of mathematicians are also engaged in what might be called ‘specialist computer research’, covering such things as list-programming languages and methods of translating from one programme language into another. Such work might be expected to yield long-term profits by giving increases in computing power and efficiency.

As at all large computing installations, computer programmers at CERN do not operate the machine themselves

As at all large computing installations, computer programmers at CERN do not operate the machine themselves. Data and programmes are submitted through a ‘reception office’ and the results are eventually available in a ‘computer output office’, leaving the handling and organizing of the computer work-load, and the operating of the machine, to specialist reception staff and computer operators in the Data Handling Division.

What of the future of computers at CERN? In a field as new as this, predictions are even more dangerous than in others, but it is clear that the arrival of the new computer at the end of the year will cause a great change in the way computers are used. Having ten peripheral processors, each of which is effectively an independent computer, the machine may have many pieces of equipment for data input and output attached to it ‘on-line’. The old concept that a computer waiting for the arrival of data is standing idle, and that this is therefore wasteful and expensive, need no longer be true. With the new system, a computer that is waiting for new data for one problem is never idle, but continues with calculations on others. Every moment of its working day is gainfully employed on one or other of the many problems it is solving in parallel. Even so, there will still be a need for a number of smaller computer installations forming part of particular experiments.

As M.G.N. Hine, CERN’s Directorate Member for Applied Physics, pointed out at a recent conference, even with the growth of such facilities, the amount of computing time available may one day dictate the amount of experimental physics research done at CERN, in much the same way as the amount of accelerator time available dictates it now.

The Synchrocyclotron and the PS: a comparison

The two particle accelerators built at CERN for studying the structure of matter are called the synchro-cyclotron and the synchrotron. What similarity and what difference is there between these two machines? All accelerators have certain points in common: a source of particles to accelerate; a vacuum tank in which the particles can move without being slowed down too much by air molecules; and a target, internal or external. Like all accelerators, the CERN machines both use electrical phenomena for “pushing” the particles, and a magnetic field to keep them on an almost circular orbit. The synchro-cyclotron and the synchrotron are “circular accelerators” also called “orbital accelerators”. The particles move along curved trajectories. In both CERN machines these particles are protons or nuclei of the hydrogen atom. By and large, the resemblance can be said to end here. There are basic differences in the design of the two machines and in the kinetic energy – or acceleration – which they can impart to the nuclear projectiles.

The synchro-cyclotron

Derived from the cyclotron, which it resembles from outside, the synchro-cyclotron – or SC – gives the accelerated particles a curved trajectory. It forces them repeatedly to cross an accelerating electrode, called a “Dee” because of its shape. The particles are injected 54 times per second from a source in the middle of the vacuum tank. Each accelerating push increases the speed of the proton which, owing to centrifugal force, has a trajectory in the shape of a growing spiral.

The accelerating process lasts a few milliseconds, during which the particles make 150 000 turns in the vacuum tank, covering about 2500 km, and reach 80% of the velocity of light! When the pulsed proton beam reaches the energy at which it is used – a maximum of 600 MeV – i.e. at the circumference of the vacuum tank, there are two ways of using it. The proton beam can be extracted as it is and directed towards the experimental apparatus, or the beam may strike a target inside the vacuum tank; in this way, a source of secondary particles – neutrons or pi mesons – is created and they in turn are directed towards the experimental apparatus.
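
The ‘80% of the velocity of light’ can be checked from the relativistic kinematics of a 600 MeV proton (rest energy 938 MeV):

$$ \gamma = 1 + \frac{T}{m_p c^2} = 1 + \frac{600}{938} \approx 1.64, \qquad \beta = \sqrt{1 - 1/\gamma^2} \approx 0.79. $$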

The photograph above shows the inner sanctum of the SC, the machine room, which no one may enter when the machine is operating.

There one can see the huge structure of the 2500-ton electro-magnet. Its horizontal yoke consists of 18 magnetic steel plates, 11 m long, 1.5 m high and 36 cm thick. The electro-magnet is excited by two enormous coils. One is clearly visible on the photograph: its aluminium cover can be seen shining at the top of the photograph. The other is placed symmetrically to it but is in the shadow and does not show up so clearly. Each of the coils weighs 55 tons, measures 7.2 m in diameter and consists of 9 pancakes of aluminium conductors measuring 19 cm² in cross-section. There is a 3 cm² hole in the centre of the conductor for the circulation of 30 000 litres/hour of demineralized cooling water. This is necessary as the exciting current is 1750 amperes d.c. at 400 volts. It was a considerable undertaking to bring these two coils from the factory where they were made in Belgium, by barge up the Rhine and on a special trailer through Switzerland.

The magnetic field set up by the electro-magnet is constant: 18 500 gauss. For the sake of comparison, the magnetic field of the earth is 1 gauss; the strongest field which the best anti-magnetic watches can stand is 1000 gauss. The 18 500 gauss field is applied across the vacuum tank between the 5 m diameter poles of the electro-magnet – which were forged at Rotterdam and machined at Le Creusot. It “focuses” the accelerated particles on their spiral orbit in the vacuum tank.

The vacuum tank, in which the protons turn, has a cubic capacity of 23 m³ and its stainless steel walls are 60 mm thick. The vacuum in this tank is better than 10⁻⁵ mm of mercury, in other words the pressure is 76 million times less than that of the earth’s atmosphere. This vacuum is created and maintained by two large vacuum pumps, one of which is clearly visible in the foreground of the photograph.

The radio-frequency system generates the alternating electric field which accelerates the particles. The frequency decreases from 29.3 to 16.4 Mc/s (millions of cycles per second) as the velocity and mass of the particles increase. The most spectacular part of the system is perhaps the modulator in the shape of a tuning fork, 0.5 m long and 2 m wide, housed in the bulge in the vacuum tank which can be seen to the left of the pump.
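
The downward sweep follows from the relativistic cyclotron frequency (a standard relation, added here for clarity): in the 18 500 gauss field,

$$ f = \frac{qB}{2\pi\,\gamma m_p} \approx \frac{28\ \mathrm{Mc/s}}{\gamma}, $$

which falls from roughly 28 Mc/s at injection (γ ≈ 1) towards 17 Mc/s at 600 MeV (γ ≈ 1.64) – in fair agreement with the quoted 29.3 to 16.4 Mc/s, the residual difference presumably reflecting the radial variation of the field.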

The concrete building in which the SC operates is intended to stop radiation. The walls which box in the SC are up to 5.8 m thick and are fitted with two 200-ton doors, which slide slowly into position to block the entrances. The total weight of concrete – standard or with baryte added – is 22 000 tons.

The cost of the synchro-cyclotron? About twenty-four million Swiss francs, including the buildings. The staff in charge of maintenance, operation and technical development totals 26, excluding the experimental physicists.

What place does the SC occupy in the world of accelerators? Out of the 18 synchro-cyclotrons now operating, the CERN SC comes third in respect of energy (600 MeV), after the 730 MeV SC at Berkeley in California and the 680 MeV machine at Dubna, in Russia.

The synchrotron

The synchrotron looks quite different from the SC. It is as extensive as the SC is massive and squat.

The PS also curves the trajectory of the particles it accelerates, but the fixed radius of curvature is about 100 m. The protons go through 16 accelerating units, placed at intervals round the 628 m circumference. They are generally injected into the big accelerator’s vacuum tank, once every three seconds, after having gone through a pre-accelerator and a linear accelerator. When they enter the 200 m diameter ring, the particles already have an initial kinetic energy of 50 million electronvolts (MeV). Every time the protons pass through one of the 16 accelerating cavities, their velocity is increased, while a growing magnetic field keeps their trajectory on an orbit of constant radius.

In the PS the acceleration process lasts one second. During this time the particles make some 450 000 turns in the vacuum tank, covering about 300 000 km and reaching 99.94% of the velocity of light.

Once maximum energy – 25 or 28 thousand million electronvolts (GeV) – has been reached, the bunches of particles strike a target in the vacuum tank itself. This produces secondary beams of particles and antiparticles. In 1961, it will be possible to extract the primary proton beam and make use of it; at present, only a small proportion of the protons scattered by the nuclei of the target cross the wall of the vacuum tank and can be used as a high energy proton beam for experiments.

The photograph here was taken in the tunnel – the 200 m diameter ring in which the synchrotron is installed. Of course, nobody is allowed in the ring when the machine is operating, because of the radiation danger.

The electro-magnet consists of 100 separate units placed right round the circumference. Each unit weighs 28.7 tons net and consists of ten C-shaped blocks made up of laminations 1.5 mm thick. Each of the 100 units is excited by two coils placed longitudinally on the pole pieces of the units. Each coil consists of two pancakes, each with 5 windings of aluminium conductor 21 cm² in cross-section; this conductor is hollow to let cooling water through since the current can reach 6400 amperes at 5400 volts.

The magnetic field produced varies from 147 gauss when the particles are injected into the synchrotron to 12 000 or 14 000 gauss when they have been fully accelerated. The magnetic field is applied across the vacuum tank, a long elliptical tube with a cross-section of 14 × 7 cm, curving round the 628 m circumference of the machine between the poles of the 100 magnet units. The walls of the tank are made of stainless steel, in order to be unaffected by the magnetic field, and are only 2 mm thick. A vacuum better than 10⁻⁵ mm of mercury is maintained in the tank by 66 pumps on the outer edge of the ring and 5 others in the injection system.

In each accelerating cavity, the radio-frequency system creates the alternating electric field which accelerates the protons. As we have seen in the SC, the applied radio-frequency has to decrease in order to keep in step with particles which take more and more time to go round as the orbit expands; in the PS, on the other hand, the orbit is constant and the ever-increasing velocity of the particles calls for an increase in frequency from 2.9 to 9.55 Mc/s.

The ring-shaped concrete building which contains the PS is buried underground as an additional protection against gamma and fast neutron radiation: the 40 cm of concrete walls are covered with more than 3 m of earth and stones.

There are no gigantic doors here as in the SC: a zigzag passage is used to stop radiation, while giving access to the machine when it is not operating. When it is in operation, a whole network of electric connections would stop the accelerator if anyone, regardless of the danger, tried to gain access to it by forcing one of the locked doors. The level of radio-activity is probably most intense between the two experimental halls – north and south – because of the presence of the targets. There the ceiling of the tunnel is made of 2 m of special baryte concrete with a density of 3.5 t/m³. There is a series of movable blocks in the walls so that openings can be made as required for admitting the beams into the experimental halls.

The synchrotron cost 120 million Swiss francs: ten cigarettes for each of the 220 million inhabitants of CERN’s twelve Member States.

The “Machine Group” in charge of exploiting and developing the PS is a team of 139. The actual running of the machine needs ten operators.

As for the position of the PS with respect to other accelerators, the CERN synchrotron comes second in the world with 28 GeV, after the 31 GeV synchrotron commissioned at Brookhaven on 30 July. 

  • This article was adapted from text in CERN Courier vol. 1, September 1960, pp6–8