A New Kind of Science

1 January 2003

by Stephen Wolfram, Wolfram Media, Inc. ISBN 1579550088, $44.95 (US); £40 (UK).

“Three centuries ago, science was transformed by the dramatic new idea that rules based on mathematical equations could be used to describe the natural world. My purpose in this book is to initiate another such transformation, and to introduce a new kind of science that is based on the much more general type of rules that can be embodied in simple computer programs.” Thus begins A New Kind of Science, in a probably self-conscious reference to Newton’s Principia. Ambition is certainly not lacking; this work claims to give us a radically new view of a large number of natural and social sciences. The author says that the discoveries he has made with his new kind of science will transform many fields of scientific endeavour, including the theory of evolution; the interpretation of genetic information; the origin of morphology in biological systems; embryology; the very notions of space and time; elementary particles; quantum mechanics; a fully fledged complexity theory; and brain function. A deeper understanding of things like free will and extraterrestrial intelligence is thrown in for good measure.

Wolfram was a child prodigy who also worked on particle physics and cosmology, making important contributions. He is well known as the author of Mathematica, a magnificent software package that allows sophisticated symbolic manipulations, and that provides the basic tool for the investigations presented in this volume. When the program was released, it was an instant success, and most high-energy physicists are almost as addicted to it as they are to Paul Ginsparg’s archives. Nearly 20 years ago, Wolfram decided to study systems known as cellular automata. The simplest of these consists of a one-dimensional array of cells, each of which can be in one of two states – say black and white – and whose successive configurations, stacked row upon row, generate a two-dimensional pattern. The update rule that determines a cell’s state in the next row depends (in the simplest variety) on the current state of the cell and on that of its two nearest neighbours. In this case the total number of possible rules is just 256, and one can program a computer to study their evolution for a variety of initial conditions. A particularly important example is rule 110, which states that a white cell turns black only if its right neighbour is black, while a black cell remains black unless both of its neighbours are also black, in which case it turns white. Given an initial condition, one can apply the rule and follow the two-dimensional pattern that is generated after many iterations. Wolfram discovered in the early 1980s that, in spite of the simplicity of these rules, the patterns generated can contain great complexity – simple rules can generate complex behaviour. By thoroughly studying many kinds of cellular automata, he proposed classifying them into four categories according to the long-term patterns they generate: uniformity, periodicity in time, fractals, and genuinely complex non-repetitive patterns. With this principle, he begins his study of how to understand the complexity we observe in nature.
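
The update scheme just described can be sketched in a few lines of Python (an illustrative sketch only; the encoding follows Wolfram’s standard convention, in which the binary digits of the rule number give the new state for each of the eight possible neighbourhoods):

```python
# Minimal sketch of an elementary cellular automaton, here rule 110.
# Bit k of the rule number gives the new cell state for the
# neighbourhood whose (left, centre, right) bits spell k in binary.

def step(cells, rule=110):
    """Apply one update of an elementary CA with periodic boundaries."""
    n = len(cells)
    return [
        (rule >> (4 * cells[(i - 1) % n] + 2 * cells[i] + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

def run(width=31, steps=15, rule=110):
    """Start from a single black cell and print the evolving pattern."""
    cells = [0] * width
    cells[width // 2] = 1
    for _ in range(steps):
        print("".join("#" if c else "." for c in cells))
        cells = step(cells, rule)

run()
```

Changing the `rule` argument lets one scan all 256 elementary rules and observe the four classes of long-term behaviour the book describes.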

After making this basic observation through computer experiments with one-dimensional cellular automata, Wolfram presents many other systems leading to complex behaviour, including higher-dimensional cellular automata, tag systems, substitution systems, continuous automata and Turing machines. His conclusion is invariably that once complex behaviour is achieved, adding new rules (complicating the initial program) does not significantly change the level of complexity. He also presents a plethora of natural phenomena that at first sight look complex. Traditional intuition might suggest that the underlying rules are complicated, but Wolfram can produce simple automaton rules that visually reproduce their patterns of complexity. These include snowflakes, plant leaves, mollusc shells, iterated maps, pigmentation patterns throughout the animal world, the fracture of materials, earthquake patterns, and many others. Some of these phenomena have been studied by others, but since the main body of the book contains no references, it is hard for the reader to know this.
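
To give a flavour of these other systems: a neighbour-independent substitution system repeatedly replaces each element by a fixed block. The sketch below (my own example, not taken from the book) uses the rules A→AB, B→A, which generate the well-known Fibonacci word:

```python
# Sketch of a simple substitution system: at each step every symbol
# is replaced by a fixed block, independently of its neighbours.
# These particular rules generate the Fibonacci word.

RULES = {"A": "AB", "B": "A"}

def substitute(word, rules=RULES, steps=1):
    """Apply the substitution rules the given number of times."""
    for _ in range(steps):
        word = "".join(rules[symbol] for symbol in word)
    return word

print(substitute("A", steps=5))  # ABAABABAABAAB
```

Even these minimal rules produce a non-repetitive (though not genuinely complex) sequence, illustrating how little machinery is needed before interesting structure appears.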

In some instances, Wolfram’s case is convincing; in others it looks more like a good guess. In chapter 9, for example, he offers his view of the origin of, and exceptions to, the second law of thermodynamics, together with a speculative model of the physical universe based on discrete causal networks, in which elementary particles are identified with localized structures of the universal automaton. The model is far from testable, and furthermore the way in which quantum mechanics is incorporated may run into difficulties with the Bell inequalities.

The last two chapters, on the notion of computation and the principle of computational equivalence, are the natural conclusion of the previous arguments. Like others (in particular Edward Fredkin), Wolfram proposes that the universe is a computation (“it from bit”, as John Wheeler would say). The fact that running simple programs roughly reproduces a large variety of complex patterns leads him to formulate his principle of computational equivalence (p720): “The principle of computational equivalence introduces a new law of nature to the effect that no system can ever carry out explicit computations that are more sophisticated than those carried out by systems like cellular automata and Turing machines.” In fact, in chapter 11 a proof is presented showing that rule 110 is a universal Turing machine – a universal computer. On p1115 we learn that the proof is due to one of Wolfram’s former employees, Matthew Cook, who was asked to work on it by Wolfram himself. The fact remains that encoding other universal computers as initial conditions for rule 110, so that it can simulate them, seems extraordinarily complicated. Assuming the proof to be correct (Wolfram acknowledges that a few errors may remain), it provides the simplest universal Turing machine constructed to date. However, a more unsettling conclusion can be drawn. Since humans are more processes than beings (we are gene survival kits, as Richard Dawkins colourfully puts it), we can describe our existence as an ongoing computation. Hence, according to the principle of computational equivalence, we are computationally equivalent to rule 110. Ever since Copernicus, our place in the universe has been diminishing; Wolfram’s conclusion seems the epitome of this Copernican recession. “But the Principle of Computational Equivalence also implies that the same is ultimately true of our whole universe,” Wolfram reassures us on p845. The problem may also lie in the details of the initial conditions, and the devil is always in the detail.

If we follow the previous arguments, the same principle seems to lead inevitably to the conclusion that the whole universe, with all its subtle and wonderful features, can be encapsulated in a few lines of computer code (for example in Mathematica). The book ends with a humbling thought: “And indeed in the end the Principle of Computational Equivalence encapsulates both the ultimate power and the ultimate weakness of science. For it implies that all the wonders of our universe can in effect be captured by simple rules, yet it shows that there can be no way to know all the consequences of these rules, except in effect just to watch and see how they unfold.”

Wolfram has very high expectations for his new kind of science. No doubt many of his ideas and analyses will be incorporated in scientific discourse, but whether they will have the power to truly solve basic open questions in so many fields of knowledge (even in just one would be a great accomplishment) remains to be seen.

The book is often vague, which is in part due to the style of exposition chosen by the author, who is writing for a general audience. In (traditional) scientific practice, the identification of precise definitions and features of a given problem often takes us a long way towards its resolution. It is clear that much more work will be done following the methods of this book, and in a few years’ time, we will know whether they have become commonplace.

Apart from the controversial and speculative aspects of this book, it is worth mentioning that it provides an excellent expository account of large areas of physics, mathematics, computer science and biology in the main text and in the notes. The latter contain lucid presentations of vast areas of human knowledge. There is a lot to be learned from this book, and without a shadow of a doubt, it will not leave you indifferent.
