
Theory in the computer age

4 June 2007

Some years ago, it was customary to divide work in the exact sciences of physics, chemistry and biology into three categories: experimental, theoretical and computational. Those of us breathing the rarefied air of pure theory often considered numerical calculations and computer simulations second-class science, in sharp contrast to our highbrow analytical work.


Nowadays, such an attitude is obsolete. Practically all theoreticians use computers as an essential everyday tool and find it hard to imagine life in science without the glow of a monitor in front of their eyes. Today an opposite sort of prejudice seems to hold sway. A referee might reject an article demonstrating the nearly forgotten fine art of rigorous theoretical thought and reasoning if the text is not also full of plots showing numerous results of computer calculations.

Sometimes it seems that the only role that remains for theoreticians – at least in nuclear physics, which I know best – is to write a computer code, plug in numerical values, wait for the results and finally insert them into a prewritten text. However, any perception of theorists as mere data-entry drones misses the mark.

First, to write reasonable code one needs to have sound ideas about the underlying nature of physical processes. This requires clear formulation of a problem and deep thinking about possible solutions.

Second, building a model of physical phenomena means making hard choices about including only the most relevant building blocks and parameters and neglecting the rest.

Third, the computer results themselves need to be correctly interpreted, a point made by the now-famous quip of theoretical physicist Eugene Wigner. “It is nice to know that the computer understands the problem,” said Wigner when confronted with the computer-generated results of a quantum-mechanics calculation. “But I would like to understand it, too.”

We live in an era of fast microprocessors and high-speed internet connections. This means that building robust high-performance computing centres is now within reach of far more universities and laboratories. However, physics remains full of problems of sufficient complexity to tax even the most powerful computer systems. These problems, many of which are also among the most interesting in physics, require an appropriate symbiosis of human and computer brains.

Consider the nuclear-shell model, which has evolved into a powerful tool for the detailed description of the properties of complex nuclei. The model describes the nucleus as a self-sustaining collection of protons and neutrons moving in a mean field created by the particles’ cooperative action. On top of the mean field there is a residual interaction between the particles.

Applying the model immediately raises a fundamental question: what is the best way to reasonably restrict the number of particle orbits plugged into the computer? The answer matters because information about the orbits is encoded in matrices that must subsequently be diagonalized. For relatively heavy nuclei these matrices are so huge – with dimensions running into the billions – that they are intractable even for the best computers. This is why, at least until a few years ago, the shell model was restricted to describing relatively light nuclei.
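To get a feeling for why the matrices explode, one can simply count the basis states. The sketch below is a back-of-the-envelope illustration in Python, not a fragment of any actual shell-model code; the function name is mine, though the pf-shell numbers (20 single-particle states each for valence protons and neutrons) are a standard textbook case:

```python
from math import comb

def naive_dimension(proton_states, valence_protons,
                    neutron_states, valence_neutrons):
    """Number of Slater determinants obtained by distributing the valence
    protons and neutrons over the available single-particle states,
    before any symmetry restrictions trim the count."""
    return (comb(proton_states, valence_protons)
            * comb(neutron_states, valence_neutrons))

# Ten valence protons and ten valence neutrons in the pf shell,
# which offers 20 single-particle states for each kind of nucleon:
print(naive_dimension(20, 10, 20, 10))  # 34134779536 -- tens of billions
```

Symmetry restrictions trim this count considerably, but the dimension remains in the billions – exactly the regime described above.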

The breakthrough came by combining the blunt power of contemporary computing with the nuanced theoretical intellect of physicists. It was theorists who determined that a full solution of the shell-model problem is unnecessary and that it is sufficient to calculate detailed information for a limited number of low-lying states; theorists who came up with a statistical means to average the higher-level states by applying principles of many-body quantum chaos; and theorists who figured out how to use such averages to determine the impact on low-lying states.

Today physicists have refined techniques for truncating shell-model matrices to a tractable size, getting approximate results, and then adding the influence of the higher-energy orbits with the help of the theory of quantum chaos. The ability to apply the shell model to heavier nuclei may eventually advance efforts to understand nucleosynthesis in the cosmos, determine rates of stellar nuclear reactions, solve condensed-matter problems in the study of mesoscopic systems, and perform lattice QCD calculations in the theory of elementary particles. Eventually, that is, because many answers to basic physics questions remain beyond the ken of even the most innovative human–computer methods of inquiry.
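The computational heart of that strategy – diagonalize a truncated matrix for only the few lowest states – can be sketched generically. The fragment below is a minimal illustration, not anyone’s actual shell-model code: a random sparse symmetric matrix stands in for the truncated Hamiltonian, the Lanczos-based routine scipy.sparse.linalg.eigsh extracts the low-lying spectrum without a full diagonalization, and the quantum-chaos correction for the discarded orbits is left out entirely:

```python
import numpy as np
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import eigsh

rng = np.random.default_rng(seed=1)

# Stand-in for a truncated shell-model Hamiltonian: a real code would
# build this matrix from the mean field plus the residual interaction
# in the retained orbits.
n = 5_000                          # dimension after truncation
h = sparse_random(n, n, density=1e-3, random_state=rng, format="csr")
h = (h + h.T) * 0.5                # Hamiltonians are symmetric

# Lanczos iteration: only a handful of the lowest-lying states are
# computed; the full n-by-n diagonalization is never performed.
energies, wavefunctions = eigsh(h, k=5, which="SA")
print(energies)
```

In a real application, the truncation and the subsequent statistical correction are where the theoretical insight enters; the diagonalization itself is the easy part.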

So yes, one can grieve over the fading pre-eminence of pure theory. However, few among us would want to revert to the old days, despite our occasional annoyance with the rise of computer-centric physics and the omnipresent glow of the monitor on our desks. For my part, I agree with the chess grandmaster who, complaining recently about the regular defeats of even the best human players by modern chess computers, said: “Glasses spoil your eyes, crutches spoil your legs and computers your brain. But we can’t do without them.”
