
Computing in high energy physics

1 July 1989

The 1989 Computing In High Energy Physics conference weighed up the challenges of analysing LEP and other data, reported Sarah Smith and Robin Devenish.


Computing in high energy physics has changed over the years from something done on a slide rule, through the era of early computers when it was seen as a necessary evil, to the position today where computers permeate all aspects of the subject, from control of the apparatus to theoretical lattice gauge calculations.

The state of the art, as well as new trends and hopes, were reflected in this year’s ‘Computing In High Energy Physics’ conference held in the dreamy setting of Oxford’s spires. The 260 delegates came mainly from Europe, the US, Japan and the USSR. Accommodation and meals in the unique surroundings of New College provided a special atmosphere, with many delegates being amused at the idea of a 500-year-old college still meriting the adjective ‘new’.

The conference aimed to give a comprehensive overview, entailing a heavy schedule of 35 plenary talks plus 48 contributed papers in two afternoons of parallel sessions. In addition to high energy physics computing, a number of papers were given by experts in computing science, in line with the conference’s aim – ‘to bring together high energy physicists and computer scientists’.

The complexity and volume of data generated in particle physics experiments are what make the associated computing problems of interest to computer science. These ideas were covered by David Williams (CERN) and Louis Hertzberger (Amsterdam) in their keynote addresses.

The task facing the experiments preparing for CERN's new LEP electron–positron collider is enormous by any standards, but a lot of thought has gone into their preparation. Getting enough computer power is no longer thought to be a problem, but storing the seven terabytes of data per experiment per year makes computer managers nervous, even with the recent installation of IBM 3480 cartridge systems.
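To put that figure in perspective, here is a back-of-envelope illustration (not a calculation from the conference, and assuming a 200 Megabyte native capacity for an IBM 3480 cartridge): the quoted annual data volume corresponds to tens of thousands of cartridges per experiment.

```python
# Back-of-envelope sketch (illustrative; the 200 MB cartridge capacity is an
# assumption, the 7 TB/year figure is the one quoted at the conference).
TB = 10**12                      # bytes in a terabyte
MB = 10**6                       # bytes in a megabyte

data_per_year = 7 * TB           # raw data per LEP experiment per year
cartridge_capacity = 200 * MB    # assumed IBM 3480 native capacity

print(f"{data_per_year / cartridge_capacity:,.0f} cartridges per experiment per year")
# roughly 35,000 cartridges
```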

With the high interaction rates already achieved at the CERN and Fermilab proton–antiproton colliders, and orders of magnitude more to come at proposed proton colliders, there are exciting areas where particle physics and computer science could profitably collaborate. A key area is pattern recognition and parallel processing for triggering: with 'smart' detector electronics, this could ultimately deliver summary information for events of interest, already reduced and fully reconstructed.

Is the age of the large central processing facility based on mainframes past? In a provocative talk Richard Mount (Caltech) claimed that the best performance/cost solution was to use powerful workstations based on reduced instruction set computer (RISC) technology and special purpose computer-servers connected to a modest mainframe, giving perhaps a saving of a factor of five over a conventional computer centre.

Networks bind physics collaborations together, but they are often complicated to use and there is seldom enough capacity. This was well demonstrated by the continuous use of the conference electronic mail service provided by Olivetti-Lee Data. The global village nature of the community was evident when the news broke of the first Z particle at Stanford’s new SLC linear collider.

The problems and potential of networks were explored by François Fluckiger (CERN), Harvey Newman (Caltech) and James Hutton (RARE). Fluckiger and Hutton both explained that the problems are technical as well as political, but that progress is being made and users should be patient. Network managers have to provide the best service at the lowest cost. HEPNET is one of the richest structures in networking, and while OSI seems constantly to remain just around the corner, a more flexible and user-friendly system will emerge eventually.

Harvey Newman (Caltech) is not patient. He protested that physics could be jeopardized by not moving to high speed networks fast enough. Megabit per second capacity is needed now and Gigabit rates in ten years. Imagination should not be constrained by present technology.

This was underlined in presentations by W. Runge (Freiburg) and R. Ruehle (Stuttgart) of the work going on in Germany on high bandwidth networks. Ruehle concentrated on the Stuttgart system providing graphics workstation access through 140 Mbps links to local supercomputers. His talk was illustrated by slides and a video showing what can be done with a local workstation online to 2 Crays and a Convex simultaneously! He also showed the importance of graphics in conceptualization as well as testing theory against experiment, comparing a computer simulation of air flow over the sunroof of a Porsche with a film of actual performance.

Computer graphics for particle physics attracted a lot of interest. David Myers (CERN) discussed the conflicting requirements of software portability versus performance and summarized with ‘Myers’ Law of Graphics’ – ‘you can’t have your performance and port it’. Rene Brun (CERN) gave a concise presentation of PAW (Physics Analysis Workstation). Demonstrations of PAW and other graphics packages such as event displays were available at both the Apollo and DEC exhibits during the conference week. Other exhibitors included Sun, Meiko, Caplin and IBM. IBM demonstrated the power of relational database technology using the online Oxford English Dictionary.

Interactive data analysis on workstations is well established and can be applied to all stages of program development and design. Richard Mount likened interactive graphics to the ‘oscilloscope’ of software development and analysis.

Establishing a good data structure is essential if flexible and easily maintainable code is to be written. Paolo Palazzi (CERN) showed how interactive graphics would enhance the already considerable power of the entity-relation model as realized in ADAMO. His presentation tied in very well with a fascinating account by David Nagel (Apple) of work going on at Cupertino to extend the well-known Macintosh human/computer interface, with tables accessed by the mouse, data highlighted and the corresponding graphical output shown in an adjacent window.
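To make the entity-relation idea itself a little more concrete, here is a minimal sketch (in Python and purely illustrative; ADAMO itself is defined through its own data-definition language with FORTRAN bindings) of event data organized as tables of entities linked by explicit relationships.

```python
# Minimal entity-relation sketch (hypothetical structure, not ADAMO syntax):
# two entity sets, with a relationship from clusters to their matched track.
from dataclasses import dataclass

@dataclass
class Track:            # one entity set: reconstructed tracks
    id: int
    momentum: float     # GeV/c

@dataclass
class Cluster:          # another entity set: calorimeter clusters
    id: int
    energy: float       # GeV
    track_id: int       # relationship: matched track id, or -1 if none

tracks = [Track(1, 45.2), Track(2, 12.7)]
clusters = [Cluster(1, 44.9, 1), Cluster(2, 3.1, -1)]

# Navigate the relationship: which clusters are matched to track 1?
matched = [c for c in clusters if c.track_id == 1]
print(matched)
```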


The importance of the interface between graphics and relational databases was also emphasized by Brian Read (RAL) in a talk on the special problems faced by scientists using database packages. He illustrated the point by comparing tabular and graphical presentations of atmospheric ozone concentrations measured by satellite: 'one picture is worth a thousand numbers'.

The insatiable number-crunching appetite of both experimental and theoretical particle physicists has led to many new computer architectures being explored. Many of them exploit the powerful new (and relatively cheap) RISC chips on the market. Vector supercomputers are very appropriate for calculations like lattice gauge theories but it has yet to be demonstrated that they will have a big impact on ‘standard’ codes.

An indication of the improvements to be expected – perhaps a factor of five – came in the talk by Bill Martin (Michigan) on the vectorization of reactor physics codes. Perhaps more relevant to experimental particle physics are the processor ‘farms’ now well into the second generation.
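For readers who have not met vectorization, the idea in miniature (an illustrative sketch, nothing to do with the reactor codes mentioned above): the same arithmetic written element by element, and as a single whole-array operation of the kind a vector machine can pipeline.

```python
# Illustrative vectorization sketch: a sum of squares as a scalar loop and
# as one whole-array operation (here NumPy stands in for a vector unit).
import numpy as np

x = np.random.rand(1_000_000)

total = 0.0
for xi in x:                     # scalar style: one element per step
    total += xi * xi

total_vec = float(np.dot(x, x))  # vector style: the whole array at once

assert abs(total - total_vec) < 1e-6 * total_vec
```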

Paul Mackenzie (Fermilab) and Bill McCall (Oxford) showed how important it is to match the architecture to the natural parallelism of a problem and how devices like the transputer enable this to be done. On a more speculative note Bruce Denby (Fermilab) showed the potential of neural networks for pattern recognition; they may also enable computers to learn. Such futuristic possibilities were surveyed in an evening session by Philip Treleaven (London). Research into this form of computing could reveal more about the brain as well as help with the new computing needs of future physics.
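As a flavour of what such a network does (a toy sketch only, unrelated to the Fermilab trigger studies described), a small feed-forward network can learn to flag 'track-like' hit patterns, here simulated as a run of adjacent wires, against scattered noise hits.

```python
# Toy neural-network sketch (illustrative only): separate "track-like" hit
# patterns (a run of 4 adjacent wires) from 4 hits scattered at random.
import numpy as np

rng = np.random.default_rng(1)
N_WIRES, N_PER_CLASS = 8, 300

def track():
    v = np.zeros(N_WIRES)
    start = rng.integers(0, N_WIRES - 3)
    v[start:start + 4] = 1.0
    return v

def noise():
    v = np.zeros(N_WIRES)
    v[rng.choice(N_WIRES, 4, replace=False)] = 1.0
    return v

X = np.array([track() for _ in range(N_PER_CLASS)] +
             [noise() for _ in range(N_PER_CLASS)])
y = np.array([1.0] * N_PER_CLASS + [0.0] * N_PER_CLASS)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer, trained by plain full-batch gradient descent.
W1 = rng.normal(0, 0.5, (N_WIRES, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, 16);            b2 = 0.0

for _ in range(3000):
    h = sigmoid(X @ W1 + b1)              # hidden activations
    p = sigmoid(h @ W2 + b2)              # predicted "track" probability
    d2 = (p - y) / len(y)                 # output-layer gradient
    d1 = np.outer(d2, W2) * h * (1 - h)   # hidden-layer gradient
    W2 -= h.T @ d2;  b2 -= d2.sum()
    W1 -= X.T @ d1;  b1 -= d1.sum(axis=0)

p = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)
print(f"training accuracy: {np.mean((p > 0.5) == y):.2f}")  # well above chance
```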

The importance of UNIX

With powerful new workstations appearing almost daily and novel architectures in vogue, a machine-independent operating system is obviously attractive. The main contender is UNIX. Although nominally machine independent, UNIX comes in many implementations, and its style is very much that of the early 1970s, with cryptic command acronyms and few concessions to user-friendliness! However it does have many powerful features, and physicists will have to come to terms with it to exploit modern hardware.

Dietrich Wiegandt (CERN) summarized the development of UNIX and its important features – notably the directory tree structure and the ‘shells’ of command levels. An ensuing panel discussion chaired by Walter Hoogland (NIKHEF) included physicists using UNIX and representatives from DEC, IBM and Sun. Both DEC and IBM support the development of UNIX systems.

David McKenzie of IBM believed that the operating system should be transparent and looked forward to the day when operating systems are ‘as boring as a mains wall plug’. (This was pointed out to be rather an unfortunate example since plugs are not standard!)


Two flavours of UNIX are on offer – the Open Software Foundation's OSF/1, and System V Release 4 (SVR4) backed by UNIX International. The two implementations overlap considerably and a standard version will emerge through user pressure. Panel member W. Van Leeuwen (NIKHEF) announced that a particle physics UNIX group had been set up (contact HEPNIX at CERNVM).

Software engineering became a heated talking point as the conference progressed. Two approaches were suggested. Carlo Mazza, head of the Data Processing Division of the European Space Operations Centre, argued for vigorous management in software design, requiring discipline and professionalism, while Richard Bornat ('SASD – All Bubbles and No Code') advocated an artistic approach, likening programming to architectural design rather than production engineering. A. Putzer (Heidelberg) replied that experimenters who have used software engineering tools such as SASD would use them again.

Software crisis

Paul Kunz (SLAC) gave a thoughtful critique of the so-called ‘software crisis’, arguing that code does not scale with the size of the detector or collaboration. Most detectors are modular and so is the code associated with them. With proper management and quality control good code can and will be written. The conclusion is that both inspiration and discipline go hand in hand.

A closely related issue is that of verifiable code – being able to prove that the code will do what is intended before any executable version is written. The subject has not yet had much impact on the physics community and was tackled by Tony Hoare and Bernard Sufrin (Oxford) at a pedagogical level. Sufrin, adopting a missionary approach, showed how useful a mathematical model of a simple text editor could be. Hoare demonstrated predicate calculus applied to the design of a tricky circuit.
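For readers unfamiliar with the flavour of such work, a standard textbook example (not one of the conference demonstrations) is the Hoare triple below: a precondition and postcondition bracketing a three-assignment swap, whose correctness can be established from the assignment axiom before any code is ever run.

```latex
% Textbook Hoare-logic example: the sequence provably swaps x and y.
\[
\{\, x = X \wedge y = Y \,\} \quad
t := x;\; x := y;\; y := t \quad
\{\, x = Y \wedge y = X \,\}
\]
```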

Less high technology was apparent at the conference dinner in the candle-lit New College dining hall. Guest speaker Geoff Manning, former high energy physicist, one-time Director of the UK Rutherford Appleton Laboratory and now Chairman of Active Memory Technology, believed that physicists have a lot to learn from advances in computer science but that fruitful collaboration with the industry is possible. Replying for the physicists, Tom Nash (Fermilab) thanked the industry for their continuing interest and support through joint projects and looked forward to a collaboration close enough for industry members to work physics shifts and for physicists to share in profits!

Summarizing the meeting, Rudy Bock (CERN) highlighted novel architectures as the major area where significant advances have been made and will continue to be made for the next few years. Standards are also important provided they are used intelligently and not just as a straitjacket to impede progress. His dreams for the future included neural networks and the wider use of formal methods in software design. Some familiar topics of the past, including code and memory management and the choice of programming language, could be ‘put to rest’.
