The early 1980s would see the microprocessor become a routine part of the high-energy physics toolkit, predicted the Courier in the summer of 1979.
During the past few years, electronic circuitry techniques have been developed which enable complex logic units to be produced as tiny elements or ‘chips’. These units are now mass-produced, and are available relatively cheaply to anyone building data processing equipment.
Just a few years ago, the first complete processing unit on a single chip was produced. Now micro logic elements can be combined to provide micro data processing systems whose capabilities in certain respects can rival those of more conventional computers. Commercially available microcomputers are used widely in many fields.
Where an application requires special capabilities, it is preferable to take the individual micro logic units and wire them together on a printed circuit board to provide a tailor-made processing unit. If there is sufficient demand for the perfected design, the printed circuit board stage can subsequently be dispensed with and the processor mass-produced by large-scale integration (LSI) techniques as a single microprocessor.
With these processing units there is generally a trade-off between speed and flexibility, the ultimate in speed being a hard-wired unit capable of doing only one thing. Flexibility can be achieved through programmable logic, but at the cost of overall speed.
Programming micros is difficult, but one way of sidestepping this problem would be to design a unit which emulates a subset of an accessible mainframe computer. With such an emulator, programs could be developed on the main computer and transferred to the micro once they had reached the required level of reliability. This could yield substantial savings in program development time. In addition, restricting the design to a subset of the mainframe architecture dramatically reduces the cost.
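To make the emulator idea concrete, here is a minimal sketch, in present-day C with an invented three-instruction set, of the fetch-decode-execute loop that sits at the heart of any such unit (the real projects implemented an actual mainframe instruction subset in hardware and microcode, not in C):

    /* Illustrative sketch only: the fetch-decode-execute loop at the heart of
     * any instruction-set emulator, written in present-day C with an invented
     * three-instruction set. None of the opcodes or names correspond to a real
     * mainframe subset. */
    #include <stdint.h>
    #include <stdio.h>

    enum opcode { OP_LOAD, OP_ADD, OP_HALT };   /* hypothetical mini instruction set */

    struct instr {
        enum opcode op;
        uint8_t     reg;    /* destination register index */
        int32_t     val;    /* immediate operand */
    };

    int main(void)
    {
        int32_t regs[4] = {0};
        /* A tiny test program: load 2 into r0, add 3, halt. */
        struct instr program[] = {
            { OP_LOAD, 0, 2 },
            { OP_ADD,  0, 3 },
            { OP_HALT, 0, 0 },
        };

        for (size_t pc = 0; ; pc++) {              /* fetch the next instruction */
            struct instr i = program[pc];
            switch (i.op) {                        /* decode and execute it */
            case OP_LOAD: regs[i.reg]  = i.val; break;
            case OP_ADD:  regs[i.reg] += i.val; break;
            case OP_HALT: printf("r0 = %d\n", (int)regs[0]); return 0;
            }
        }
    }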
High energy physics, which has already amply demonstrated its voracious appetite for computer power, could also soon cash in on this microcomputer boom and produce its own ‘brand’ of custom-built microprocessors.
According to Paolo Zanella, Head of CERN’s Data Handling Division, now is the time to explore in depth the uses of microprocessors in high energy physics experiments. If initial projects now under way prove to be successful, the early 1980s could see microprocessors come into their own.
One of the biggest data processing tasks in any physics experiment is to sift through the collected signals from the various detecting units to reject spurious information and separate out events of interest. Therefore, to increase the richness of the collected data, triggering techniques are used to activate the data collection system of an experiment only when certain criteria are met.
Even with the help of this ‘hardwired’ selection, a large proportion of the accumulated data has to be thrown away, often after laborious calculations. With experiments reaching for higher energies where many more particles are produced, and at the same time searching for rarer types of interaction, physicists continually require more and more computing power.
Until now, this demand has had to be met by bringing in more and bigger computers, both on-line at the experiments and off-line at Laboratory computer centres. With the advent of microprocessors, a solution to this problem could be in sight. Micros could be incorporated into experimental set-ups to carry out a second level of data selection after the initial hard-wired triggering, an example of the so-called ‘distributed processing’ approach where computing power is placed as far upstream as possible in the data handling process. In this way the demand on the downstream central computer would be reduced, and the richness of the data sample increased.
The micros would filter the readout in the few microseconds before the data is transferred to the experimental data collection system. Zanella is convinced that this could significantly improve the quality of the data and reduce the subsequent off-line processing effort to eliminate bad triggers.
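The sketch below shows, again in present-day C, what such a second-level filter amounts to in software; the event layout and the two cuts (a minimum hit count and a minimum summed pulse height) are invented for illustration and do not describe any real experiment’s trigger:

    /* Illustrative sketch only: the kind of second-level selection a micro
     * could apply to each event before passing it downstream. The event
     * layout and both cuts are invented for illustration and do not describe
     * any real experiment's trigger. */
    #include <stdbool.h>
    #include <stddef.h>
    #include <stdio.h>

    struct event {
        size_t n_hits;      /* wire-chamber hits recorded for this trigger */
        double pulse_sum;   /* summed counter pulse heights (arbitrary units) */
    };

    /* Keep an event only if it has enough hits to reconstruct a track and
     * enough deposited energy to rule out a noise trigger. */
    static bool accept_event(const struct event *ev)
    {
        return ev->n_hits >= 4 && ev->pulse_sum > 50.0;
    }

    /* Compact a buffer of events in place, keeping only accepted ones, and
     * return the new count; rejected triggers never leave the micro. */
    static size_t filter_events(struct event *buf, size_t n)
    {
        size_t kept = 0;
        for (size_t i = 0; i < n; i++)
            if (accept_event(&buf[i]))
                buf[kept++] = buf[i];
        return kept;
    }

    int main(void)
    {
        struct event buf[] = { { 6, 80.0 }, { 2, 10.0 }, { 5, 60.0 } };
        size_t n = filter_events(buf, 3);
        printf("%zu of 3 events kept\n", n);   /* prints: 2 of 3 events kept */
        return 0;
    }

On a real set-up the same kind of predicate would run against the raw readout buffer in the micro itself, so that rejected triggers never consume time on the central computer.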
As well as being used in the data collection system, micros would also be useful for control and monitoring functions. The use of off-the-shelf microcomputers in accelerator control systems, for example, is already relatively widespread. Some limited applications outside the control area are already being made in experiments, a notable example being the CERN/Copenhagen/Lund/Rutherford experiment now being assembled at the CERN Intersecting Storage Rings.
Microcomputer projects are now being tackled at several Laboratories. At CERN three projects are under way in the Data Handling Division. Two of these are programmable emulators (one being based on the IBM 370/168 and the other on the Digital Equipment PDP-11), while the third is a very fast microprogrammable unit called ESOP.
High energy physics still has a lot to learn about microprocessor applications, and there is some way to go before their feasibility is demonstrated and practical problems, such as programming, are overcome.
However, this year could see some of these initial projects come to fruition, and the early 1980s could live up to Zanella’s expectations as the time when the microprocessor becomes a routine part of the high energy physicists’ toolkit.