Stephen Wolfram's A New Kind of Science (NKS) hit the bookstores in 2002 with maximum hype. His thesis is that the laws of physics can be generated by various cellular automata--simple programs producing complexity. Occasionally (meaning rarely) I check the NKS blog for any new applications, and I see nothing I consider meaningful. Is anyone aware of any advances in any physics theory resulting from NKS? While CAs are both interesting and fun (John Conway's Game of Life), I see problems with them as a theory of everything. The generator rules are deterministic, and they are local in that each cell's state depends on its immediate neighbors. So NKS is a local deterministic model of reality, and Bell's theorem shows that such a model cannot reproduce quantum mechanics. Can anyone conversant with CA comment?
-
Gerard 't Hooft has been looking at cellular-automaton-inspired models for fundamental physics. You might find some of his recent (and readable) articles at http://arxiv.org/find/quant-ph/1/au:+Hooft_G/ – Siva Mar 21 '13 at 03:03
-
3"Bell's theorem rules out local hidden variables as a viable explanation of quantum mechanics (though it still leaves the door open for non-local hidden variables)." It's those non-local hidden variables that open the door for a CA explanation of the universe. Since the underlying structure of space-time is unknown, the local/non-local distinction is meaningless; it's entirely possible that seemingly random quantum occurrences, both local and non-local, are related in a deterministic way. The mere existence of entanglement is an obvious clue that such is the case. Everything is connected. – Triynko Feb 18 '15 at 18:33
5 Answers
While NKS came out with much hype, and with a lot of skepticism from scientists, the scientific ideas there are not completely trivial. I just think they are not foundational for the science of physics (at least not as we know it so far); rather, they are foundational for the science of biology.
The main discovery made by Wolfram (although with an important confusion which I will explain below, and with an extremely significant precursor in Conway's Game of Life) is that a simple one-dimensional cellular automaton whose rule is chosen at random has a finite, not-so-small probability of being a full computer (in Wolfram's system, 2 of the 128 possibilities). The proof that the system he found, rule 110 in his terminology, is actually a full computer only came about two decades later, thanks to the pioneering work of Cook (working under Wolfram). But it justifies his focus on the system as central to science, since before, it was often implicitly assumed that to get a certain amount of complexity out, you had to put complexity in by hand. This result is also present in Conway's system, but Wolfram's work is somewhat complementary, because the information flow in 1d systems makes it more difficult to imagine a full computer emerging. The fact that it does anyway (although, as Cook's construction shows, with horrible running times, because of the difficulty of shuttling information long distances) is surprising and notable.
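For concreteness, here is a minimal sketch of an elementary (one-dimensional, two-state, nearest-neighbour) CA simulator. Rule 110, the grid width, and the single-seed initial condition are illustrative choices, not anything prescribed above; any rule number 0-255 can be passed in.

```python
# Minimal elementary CA simulator; rule 110 is the rule Cook proved Turing-complete.
import numpy as np

def step(cells, rule=110):
    """One synchronous update of an elementary CA with periodic boundaries."""
    bits = [(rule >> i) & 1 for i in range(8)]          # output bit for each 3-cell neighbourhood 0..7
    idx = 4 * np.roll(cells, 1) + 2 * cells + np.roll(cells, -1)
    return np.array([bits[i] for i in idx], dtype=np.uint8)

width, steps = 80, 40
cells = np.zeros(width, dtype=np.uint8)
cells[-1] = 1                                           # single seed cell on the right edge
for _ in range(steps):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)
```

The printed rows show the familiar rule-110 texture of a regular background with interacting localized structures, which is the raw material Cook's construction shuttles information around with.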
This is not so important for physics, because any attempt to model physics with cellular automata will have to be grossly nonlocal in order to avoid Bell's theorem. This is not so implausible today, given gravitational holography, but Wolfram suggested that there would be a direct correspondence between local elementary particle paths and automata structures, and these ideas are flat out impossible, and were ruled out before he proposed them, by Bell's theorem. This means that the chapter of his book dealing with physics is completely wrong, and may be ignored.
But this work is important in a completely different way: it is the foundation of biology!
(EDIT: Chaitin's new book makes some brief comments about NKS which echo the main biological points below. I am not cribbing Chaitin; his book postdates this.)
Biology and Religion
The most puzzling aspect of the world we find ourselves in is that we are surrounded by complex computing devices not of our own design! Namely ourselves, other people, animals, plants, and bacteria. How did these computational structures get built, when we have to work pretty hard to make a computer? It seems that there is a puzzle here.
The puzzle has, in the past, been resolved by assuming some sort of magic put life on Earth, a supernatural agency. This idea is clearly at odds with the laws of nature as we understand them today, but it is important to keep in mind the superstitious answer, because elements of it are salvageable.
The superstitious answer is that God came down into the primordial soup, and mixed up the molecules to make life. The notion of God is not clearly defined in religious texts, where rigor is not the top priority. But I will try to give a positivistic definition below. I find that using this positivistic definition, which does not mention anything supernatural, I can translate the thoughts of religious people and make complete sense of what they are saying, where otherwise it just sounds like the ranting of delusional people suffering from severe brain-damage.
In order to discuss biology sensibly, I believe one must understand this religious point of view thoroughly, in a logical positivistic way, because it is important in biology to the same extent that it is completely unimportant in physics.
In a complex system, such as human social structures, we tend to observe patterns which cannot be attributed solely to the actions of individual people. For example, the Protestant Reformation seems to have happened all at once, within the span of a few decades in the early 16th century, whereas Church reformers had been active and working for centuries before, with very little success. What made it happen? It wasn't just Luther and Calvin, it was also a network of businessmen and bankers, and disenchanted Catholics. The discovery of America was important in some way, as was the expulsion of Jews from England. To my mind, the most important was the 14th century edict forbidding usury by Catholics, which prevented the formation of banking. But it clearly wasn't one cause, nor was it the work of one person working alone.
When we see such complex phenomena, it is reasonable to attribute them to the working of a larger intelligence than the intelligence of any individual, and this is the intelligence of the collective. Just as a person is a collection of neurons, not any one of which is responsible for her intelligence, the society is a collection of individuals, no one of which is responsible for everything the society does or thinks collectively. The collective pattern is in many ways smarter than the individual--- it contains collective memories, in traditions and conventions, which inform individual action in complex ways.
The notion of god (lower case g, like Zeus, or Mars) in ancient cultures is the name given to the entities formed from collective human actions. They are nebulous, but important, because the decision to go to war cannot be attributed to any one person, but to an entity, the god of war, formed from many individuals working together with the aim of forming a coherent collective which will lead the society to make that phase-transition of behavior which is going to war. Identifying a notion of a god, and explicitly setting people working for this god, makes them aware of the fact that they are working as parts of a machine, not solely as individual actors. Further, it can inspire them to act without direct orders from a King, or a priest, just through their own introspection, so as to best achieve the goal.
The notion of god was refined somewhere in India or Iran into the notion of God (upper case G), from which the Brahma cults, the Abrahamic religions, and Zoroastrianism emerged. This notion suggests that the conflict between gods is similar to the conflict between individuals: the gods also make collectives, and some win and some lose. In the end, there is a notion of a supreme God, the God which is the limit of the collective of whatever gods survive, defined as infinitely high up the god hierarchy, and demanding ethical actions.
This limiting conception of God was considered so important by the ancient thinkers, that they let all their other ideas die away in the medieval collapse, choosing to preserve only this through the middle ages.
But in addition to the practical notions of guiding behavior in collectives, the ancients also attributed all sorts of supernatural feats to God, including creating the universe and hand-designing life. These ideas about God are at odds with the conception of God as a meta-property of a complex system, and are completely contradicted by modern scientific discoveries. They are superfluous to religion, and detrimental to it, because they make people expect miracles and divine intervention in ways that violate the laws of nature, and such things just never happen.
The notion of God, as far as I have been able to make sense of it, is essentially a limiting computational conception--- it is the limit as time goes to infinity of the behavior of a complex system where the computational entities combine and grow in power into ever larger units. The idea of the limit suggests that there will be a coherence between the units at all levels, so that in the infinite time limit, for example, all societies will agree on the ethical course of action in a given circumstance, and will agree on how to organize their economies, and structure their interpersonal relations. These predictions are surprising, considering the divergence in human behavior, and yet, history suggests that such a convergence is slowly happening.
This computational decidability in the evolving limit has a direct counterpart in the idea that as mathematical systems become more complex, by reflection, they decide all arithmetical theorems. This is not a theorem, but an observation. It is noted that as we go up the tower of set theoretic reflection principles, more and more arithmetic theorems are resolved, and there is no in-principle limitation that suggests that the theorems will not all be decided by strong enough reflection. This is Paul Cohen's "Article of faith" in mathematical practice, and I will accept it without reservation.
Further, the article of faith tells you that we already have a name for the mathematical idea of God, it can be identified with the concept of the Church Kleene ordinal, the limit of all countable computable ordinals. Any computable formal system is only able to approach this ordinal gradually, and this ordinal is infinitely rich. If you have a description of this ordinal, you have a reflection principle which should be powerful enough to decide all theorems of arithmetic, to decide what consequences of any axiomatic system will be.
Because this ordinal has all the theological attributes religious folks attribute to God, in relation to pure mathematics, I consider it a sort of heresy to assume that there are larger ordinals. In particular, any notion of the first uncountable ordinal, or of inaccessible ordinals, is only meaningful once it is placed in a given axiomatic system, and then it should collapse in the appropriate countable model to be less than the Church Kleene ordinal. This is not technically precise, but it gets the main idea across. (It is easy to collapse the ordinals to be countable, but it is not so easy to rearrange the scheme to make them less than Church Kleene; this is because within any deductive system which is of a set-theoretic nature, you can give a name to the Church Kleene ordinal, and define this ordinal plus 1, etc. These technical considerations are not so significant for the philosophical ideas.)
So the interpretation I will take for religious doctrine is that God is to be identified with the Church Kleene ordinal, no higher ordinal is to be interpreted as actually higher, and gods will be identified with human collectives acting together to form a unit greater than the individuals. The monotheistic law of complex systems will state that all gods converge to the ideal represented by God over time, as they battle it out in a Darwinian struggle.
Automata and Darwin's experiment
When you have a cellular automaton capable of universal computation, there is a strange phenomenon--- sub-parts of it are always in competition with each other. To explain this, one needs to look at Darwin's experiment, detailed in the Origin of Species.
Most of the Origin is theoretical, but Darwin did do one important experiment. He took a square plot of land, and carefully removed all visible living things from the soil. He uprooted all the plants, sifted to remove insects, and left the plot alone to see how it would be recolonized.
What he observed is that the plant species that recolonized the plot were at first of the fast-growing, unstable variety: a whole bunch of weeds and bugs spread over the new area. Then, over time, other more hardy species slowly took over from the weeds, until, many months later, the plot was indistinguishable from the remaining land in the lot.
The purpose of the experiment was to see whether there is an actual struggle for resources in nature. Darwin hypothesized that if nature is in constant struggle, elements which are more hardy but slowly replicating will only win out, after a time, over elements which are less hardy but whose strategy is quick colonization of new territory. His observations were consistent with the idea that the living things in any area are continually struggling for primacy, and that the limitation is the finite resources in any given plot of land.
This idea can be tested in computing cellular automata. By zeroing out a square patch in a 2d cellular automaton which looks stable, one can see whether the remaining data colonizes the space in a uniform way, or in a gradually transforming way. I did this experiment using an 8-bit cellular automaton (256 values per cell) with random rules, and I found that in many cases, those cases which are complex, the colonization is in stages, much as in Darwin's plot of land. The stages are short-lived, perhaps reflecting the limited computation possible in a small region with 8-bit values. It would be interesting to repeat the experiment using arbitrarily large integers on each cell, each of which can be thought of as representing a complex polymer which catalyzes transformations on its neighbors.
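A minimal sketch of this kind of "cleared plot" experiment, using a random outer-totalistic rule on a handful of states as a stand-in for the random 8-bit rule tables described above (the exact rule family is an assumption for illustration):

```python
# Run a 2D CA with a randomly chosen rule, clear a central square, and watch recolonisation.
import numpy as np

rng = np.random.default_rng(0)
k, size = 8, 64                                     # k states per cell, size x size torus
table = rng.integers(0, k, size=5 * (k - 1) + 1)    # random totalistic rule: new state = table[sum of cell + 4 neighbours]

def step(grid):
    total = (grid + np.roll(grid, 1, 0) + np.roll(grid, -1, 0)
                  + np.roll(grid, 1, 1) + np.roll(grid, -1, 1))
    return table[total]

grid = rng.integers(0, k, size=(size, size))
for _ in range(200):                                # let the automaton settle into its typical behaviour
    grid = step(grid)

grid[24:40, 24:40] = 0                              # "clear the plot": zero out a central square
for t in range(50):
    grid = step(grid)
    patch = grid[24:40, 24:40]
    print(f"t={t:3d}  distinct states in cleared patch: {len(np.unique(patch))}")
```

Watching how the cleared patch refills (all at once, or in successive waves of different local patterns) is the CA analog of watching Darwin's plot get recolonized in stages.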
But the inhomogeneous colonization suggests that once you have a computing cellular automaton, there is a constant competition for resources between parts of the automaton, which make collective computations. In other words, Darwin's struggle has begun.
To make this idea more precise, consider dividing a CA in two by placing a wall between the left and right halves, and not allowing the halves to interact. If the CA is truly computational and complex, the two halves will not come to a statistical equilibrium, but will have complex structures on either side which acquire new characteristics at random over time, as their subparts evolve.
If you now remove the wall, it is unlikely that the left half will have characteristics compatible with the right half. They will not be able to mix. So in this case, the two halves must battle for domination, and whichever half wins will impose its characteristics on the other half, filling the whole space with cells which are compatible with its characteristics. These characteristics include typical CA "animals", or structures which are qualitatively similar in their relations: particular configurations which are only stable in the environment of other structures around them. It is difficult to extract these characteristics from a running simulation, because you don't know a priori what to look for, but I am confident that it can be done.
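A one-dimensional sketch of the wall experiment, where running each half on its own periodic ring stands in for the wall; rule 110 and the sizes are arbitrary illustrative choices, not the specific automata discussed above:

```python
import numpy as np

def step(cells, rule=110):
    bits = [(rule >> i) & 1 for i in range(8)]
    idx = 4 * np.roll(cells, 1) + 2 * cells + np.roll(cells, -1)
    return np.array([bits[i] for i in idx], dtype=np.uint8)

rng = np.random.default_rng(1)
left = rng.integers(0, 2, 100, dtype=np.uint8)        # each half starts from its own random seed
right = rng.integers(0, 2, 100, dtype=np.uint8)
for _ in range(500):                                   # walled phase: the halves never exchange information
    left, right = step(left), step(right)

joined = np.concatenate([left, right])                 # remove the wall
for _ in range(500):                                   # competition phase: structures from the two sides now interact
    joined = step(joined)
print("density of live cells after mixing:", joined.mean())
```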
This type of thing implies that there is continuous competition in a CA which appears the moment it is first seeded, and continues as long as it is operating. In this environment, Darwinian selection and evolution are possible even without any explicit self-replicating structure. Any self-replication is of very high-level qualitative traits, not of low level bit structures.
Replication and evolution
This point of view is different from the usual point of view regarding evolution (which is not the one originally proposed by Darwin). The usual point of view is the modern synthesis, which suggests that evolution proceeds by copying bit-strings in molecules, with errors, and that the result is that optimized bit-strings are eventually selected.
This point of view is extremely poor at modeling actual biological evolution. First, nothing you are familiar with actually replicates itself. People have sex, bacteria share genes, and crossing over is complicated on non-genetic sequences; it is only a simple shuffling on genes.
Further, mutations seem to be produced by shadowy internal mechanisms, directed by complex RNA networks in egg cells and in testicles. They are not random copying errors. To assume that the biological world is produced by a process of copying with error, coupled with selection, is as silly as the following parable suggests:
Many years ago, there was only one book. It was a cookbook, with detailed instructions on how to make macaroni and cheese. The book was copied by scribes, who made an error here, an omitted passage there, and these books then competed for attention. Some recipes were improved by the errors, others became unreadable. Eventually, the books grew in length, with new passages produced by accidental duplicated copying, until today, behold! The library of Congress!
This story is ridiculous. But it is this ridiculous story that is currently sold as dogma in the biological sciences.
It is my view that any realistic theory of evolution must be closer to Darwin than to the modern synthesis. It must take into account that the process of mutation is authorial: it proceeds by complex RNA editing of DNA sequences. It must take into account the idea that sexual selection is primary, so that mate-selection is the dominant driving force of evolution in sexual species. It must also take into account the idea that the competition begins well before replication, and requires nothing more than a computing CA.
There is support for this position from computer experiments on self-replicating evolution. In order to test natural selection, little chunks of code were allowed to replicate and self-modify in the 1970s-1980s, to see what the end result would be. The end result was that the programs modified themselves until they found the shortest, fastest self-replicator, which then filled up the computer memory.
At the time, this was considered a positive sign, the programs had evolved. But the obvious stasis in the final state leads me to see this as death of a complex system. There is no further progress possible from the end state, without an external agent to kick things around. The result is not a complex system, but a system trapped in a stable equilibrium of parasitic fast replication. Far from being a model of life, it is a model of a self-replicating cancer killing all evolution.
CA properties: Wolfram's Annoying Error
Wolfram classified cellular automata into four types:
- Homogeneous end state
- Simple periodic structures, perhaps separated, with different periods
- Self-similar ("chaotic") structures
- Complex structures
Type 1 are automata that die. These just have a single stable endpoint that you always reach. Type 2 have infinitely many endpoints, but they are as simple to describe as a classical integrable motion--- you just have cycles of certain types, and to specify the endpoint, you give a list of all the cycles, and where you are in the cycle, and this specifies the result of running the CA from a given initial condition. These first two types of automata obviously will not reproduce a general purpose computer.
Type 3 are those automata that lead to self-similar fractal structures, like the Sierpinski gasket. These are more complex, so that the end-state requires an actual computation to specify, and Wolfram identifies these with classical chaotic motions. I think this identification is wrong, but this is what it is.
The Type 4 are the complex automata, where you have to actually run them in full to figure out what they do. I don't like the final category, so I will now give my personal classification.
- Homogeneous end state
- Simple periodic end states, perhaps separated, with different periods
- Self-similar or statistically self-similar fractal structures
- Random automata: chaotic stable endpoint, statistical mechanics
- Complex automata: biology
Class 3 is expanded slightly, and class 4 is divided in two. Class 4 contains the random automata, which act to produce a randomized collection of values wandering ergodically through the allowed value space; class 5 contains those automata which produce true complex behavior, with a way to map a computer into them by a map of reasonable complexity, one which can actually be described by a finite procedure.
Because Wolfram doesn't distinguish between 4 and 5, he lumps together automata that are purely random, thermalizing into a Boltzmann type chaotic equilibrium, like automata 25, together with truly complex automata like 110. The distinction between the two is all important, but perhaps out of a pig-headed inability to admit his earliest classification was incomplete, Wolfram refuses to make it.
I will make this distinction. Type 4 automata are the analogs of chaotic classical systems, randomizing their information into a strange attractor, defined by the allowed values of clumps of sites, and a probability distribution on these. Once you know which allowed clumps occur with what probability, you can generate a typical output with absolutely no work, using a random number generator. It will not be the actual output, since this is deterministic, but it will be indistinguishable from the actual output for all intents and purposes.
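A sketch of that procedure: estimate the frequencies of short "clumps" (finite windows) in a long run, then sample a surrogate row from those frequencies alone. Rule 30 is used here only as a stand-in randomizing rule (the answer's own example is rule 25), and the window width is arbitrary.

```python
import numpy as np
from collections import Counter

def step(cells, rule=30):
    bits = [(rule >> i) & 1 for i in range(8)]
    idx = 4 * np.roll(cells, 1) + 2 * cells + np.roll(cells, -1)
    return np.array([bits[i] for i in idx], dtype=np.uint8)

rng = np.random.default_rng(2)
cells = rng.integers(0, 2, 200, dtype=np.uint8)
w, counts = 3, Counter()
for _ in range(500):                                   # accumulate statistics of length-w windows ("clumps")
    cells = step(cells)
    row = "".join(map(str, cells))
    counts.update(row[i:i + w] for i in range(len(row) - w + 1))

total = sum(counts.values())
probs = {clump: n / total for clump, n in counts.items()}
print("clump frequencies:", probs)

# A "typical" surrogate row sampled from the clump distribution alone. It matches the
# single-window statistics of the real run, though not the exact deterministic output.
clumps, weights = zip(*probs.items())
surrogate = "".join(rng.choice(clumps, size=20, p=weights))
print("surrogate sample:", surrogate)
```

For a randomizing (class 4) rule, this fixed-length statistical description is all there is to know at large times; for a computing (class 5) rule it misses the structures that actually carry the computation.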
Type 4 automata are just as a-biological, just as dead, as types 1-3. CA 25 is not alive. I am 100% sure that I am not misinterpreting Wolfram, because I specifically asked him, in person, at a seminar, whether he believes there is a map between CA 25 and a computer. He answered that he believes there is, but that it is extraordinarily complicated, and random looking. I am sure it does not exist.
Type 5 automata are exemplified by 110. Those are the ones which have predictable structures with non-randomizing behavior. These can be used to encode full Turing computation. That these are not measure 0 is an important discovery--- it gives an explanation for the origin of life.
The existence of typically computing CA's means that life can emerge naturally as soon as a system that can store large amounts of information spontaneously develops interactions which are capable of forming a computer. This happens with 110, but it also should happen with random proteins in a pre-biotic soup, because, here we are!
The evolution of life, as I believe it happens, is purely molecular for most of the early stages. The proteins compete and evolve, producing a more precise class which can survive, which eventually catalyze the formation of nucleic acids (among other things), and learn to store data for later retrieval in nucleic acids. The nucleic-acid protein complexes then compute more, and learn to store data in DNA, for permanent storage (since DNA is much more stable). Finally, they package all this up in cells, and you have modern life.
This is a just-so story, but it is important because at no point does it postulate a self-replicating molecular entity. Such entities are poisonous for the emergence of life (as the computer experiments show), and it is good that they do not exist, otherwise life would not be able to emerge.
Wolfram's Sociological Agenda
There is a separate reason for Wolfram's lack of success in penetrating the scientific world which has nothing to do with the quality of his ideas (which are really not that bad). Wolfram made the conscious decision to pursue his science using private money which he raised by producing closed-source software, Mathematica, for sale to universities. In this way, he was producing a model for science research financed by private capital, rather than state money. Because Mathematica is so successful, many saw his work as a model for a New Kind of Capitalist Science.
This idea was very current in the pro-capitalism climate of the 1980s, in which state-sponsored and state-funded things were looked down upon, because of the constraints on individual freedom which the modern state imposed. The Soviet Union was the extreme example: there, all science was driven by state decisions, which stifled certain fields, like genetics, based upon the ideological position of the government. In the U.S., science was taken over by the government and made into big science in the 1950s, explicitly so as to compete with the Soviets, and many people felt hampered by the big-money, big-science system, which excluded promising research avenues from consideration.
The lack of freedom in the state run system led many individuals to oppose it, and one of the ingredients in this fight was private financing. This was obviously only available outside of communist controlled regions. Wolfram politically made the decision to pursue private financing for his research.
The result reflects all that is good and all that is bad about privately financed science. It is good, because it allows the individual with an idea to pursue it indefinitely, and no outside criticism can stop or kill the work. They can self-publish, without worrying about peer review dismissing their ideas before they have time to germinate.
It is bad in several other ways, which have been the focus of academic criticism:
- Self-financing requires the individual to amass large amounts of wealth, leading to sycophancy in those around them, which prevents them from hearing cogent criticism, so that mistakes go uncorrected.
- In private enterprise, one does not cite sources. One makes it seem that one did everything on one's own. This is not compatible with academic conventions for citations and respect for the history of a field. While Steve Jobs arguably can take credit for the work of his employees, it is difficult for Wolfram to justify taking credit for Cook's work, even if he paid his salary.
- The Citizen Kane effect: the isolating and corrosive power of money easily leads to megalomania and isolation, which in turn lead one to dismiss the ideas of others. This unfortunately can be seen in Wolfram's blithe one-sentence dismissal of the exceedingly important work of Post, Friedberg, and Muchnik on Turing degrees below the halting problem. He claims that all natural CA's are either equivalent to the halting problem, or else random, or else trivial. This is the principle of "computational equivalence". But this is a nontrivial statement, and requires more evidence than what is presented in NKS.
The problems of private research are entirely complementary to the problems of public research, and there is no reason to dismiss the one entirely in favor of the other. But NKS shows those flaws in spades, and this is particularly grating to relatively low-paid public researchers, who have worked equally hard on their ideas, if not harder, without the megaphone of money to shout them out to the world.
I think that the newest thing in NKS is the financing model--- the idea that one can do research privately and independently. Perhaps this is the model of the future, but considering the relative success of publicly funded science as compared to private science, even in the most extreme repressive case of the Soviet Union, I am not optimistic that this is the best way. It is likely that one will have to deal with the annoyances and suboptimal features of public funding for the indefinite future.
Perhaps with an appropriate internet structure, like stackexchange, some of the censorship and group-think of public science can be mitigated.

-
Don't worry about it, it's only the secret and meaning of life. It might go beyond the scope of the question a bit, but I'm travelling tomorrow, so I thought it might be good to share, ya know, in case the plane crashes. – Ron Maimon Jan 19 '12 at 21:31
-
Actually I did read a good chunk and I'll probably read the rest at some point - interesting stuff as usual but it seems like a lot is only very tangentially related to the question :) – Jan 19 '12 at 23:11
-
@zephyr: I expected plenty of downvotes for that, but I wanted to explain this stuff, and this was the closest thing to a question about NKS. – Ron Maimon Jan 20 '12 at 00:40
-
@ronmaimon very interesting, read once and will have to read again and again some other day! Thanks +1! – FrankH Feb 29 '12 at 07:44
-
@TimGoodman: http://meta.physics.stackexchange.com/questions/1124/longest-answer-ever . – Ron Maimon Aug 21 '12 at 00:17
-
"This story is ridiculous. But it is this ridiculous story that is currently sold as dogma in the biological sciences" : What is ridiculous in that story ? It's a working method – agemO Nov 01 '14 at 12:10
-
@agemO: The ridiculous part is that the mutation mechanism is brainless and non-computing. There is no evidence for this. It is true that SNP-type mutations in proteins are random, but they are also generally pointless; they make clock-like neutral evolution. The interesting aspects of evolution are the effects on non-coding DNA, and these changes are extremely complicated, and certainly regulated by RNA networks making a sophisticated computation. These mutations bear no relation to the models in population genetics; they look more like intelligent design, with RNA being the designer, not God. – Ron Maimon Nov 02 '14 at 18:38
-
I don't say that I am sure that the mutation mechanism is not better than just random, but if it is just random it works and it's not ridiculous: you can make an evolutionary algorithm with random mutation and it works as long as the selection step is not random (which is the case in biology: selection comes from survival or death) – agemO Nov 03 '14 at 02:34
-
@agemO: It only superficially seems to work to the naive intuition, it doesn't really work, and this is what many critics of modern synthesis evolution have been pointing out for decades. It gets impossible to mutate-evolve past a certain complexity without co-evolving the mutation mechanism along with the system. The reason is that the distance between roughly equal fitness maxima generically grows with complexity, so that the steps you make must be larger. The current model is simply not correct. But the correct mechanism to fix this is also obvious today--- RNA editing of DNA. – Ron Maimon Nov 03 '14 at 12:49
-
When I say it works I mean it gives results. Of course I suppose evolution itself has undergone a meta-evolution, so that it is more efficient/complicated today than pure randomness. I don't really know much about biology, but I had the impression that this was the main point of view today, with evidence of tuned mutation rates for example. – agemO Nov 03 '14 at 13:54
-
@agemO: It does not give any results either. The local protein mutations which change fitness can be counted on one hand--- moth color and sickle-cell anemia, that's about it. Those are exceptions, not the rule, but they are put as the rule in the books. The picture is simply wrong, because it is a non-computing picture, and it is also deliberately wrong, because it fits with an atheistic idea that natural computations don't exist. This type of no-computation-in-nature atheism is falsifiable and falsified. You had the impression because it is dogma, it's what everyone says, wrongly. – Ron Maimon Nov 03 '14 at 14:05
-
It does give results for optimization problems, building robots, or aerodynamic shapes. Maybe random mutation is not the only/main mechanism today, but it does work. And when I said "I had the impression" I meant I had the impression that "more than random" was quite accepted, but I am not sure of this. Btw do you have references for this: "no-computation-in-nature atheism is falsifiable and falsified." – agemO Nov 04 '14 at 02:45
-
@agemO: It works (badly) as a method of parameter optimization, not as a method of evolution. Better parameter optimization is obtained through simulated annealing, or steepest descent, or both, depending on the details of the cost function. Evolution is not a simple optimization process; rather, evolution in a computing system involves writing new code, making existing code more complex. It has been unfortunately thought of as a version of parameter optimization. Random mutation just isn't the natural process in a computing system; large-scale coherent rewriting is. – Ron Maimon Nov 04 '14 at 10:43
-
... I don't give references for anything except for priority, as I don't know and don't care about authority. I noticed this myself. I might have been first, I doubt it. There is a Leslie Valiant who says similar things, but is confused on how RNA works. Most people who notice that the random mutation models fail are religious, and use it to say "God did it supernaturally", so I can't cite them with a straight face, as they generally would reject RNA rewrites just as vehemently, as RNA is not Biblical either. But RNA rewriting is required. I really am not sure about acceptance, nor do I care. – Ron Maimon Nov 04 '14 at 10:45
-
By reference I mean evidence or clearer explanations about what you say about RNA – agemO Nov 04 '14 at 11:15
-
@agemO: I see. I'll write up something coherent. I never wrote it, because John Mattick has compiled the evidence well in 2001 (you can google Mattick RNA), and thinks similar things, although not with the computational point of view. The evidence is actually overwhelming by now, it's pretty much the only point of the enormous ENCODE project, to give this thesis scholarly weight. – Ron Maimon Nov 04 '14 at 18:33
Wolfram's early work on cellular automata (CAs) has been useful in some didactical ways. The 1D CAs defined by Wolfram can be seen as minimalistic models for systems with many degrees of freedom and a thermodynamic limit. Insofar as these CAs are based on a mixing discrete local dynamics, deterministic chaos results.
Apart from these didactical achievements, Wolfram's work on CAs has not resulted in anything tangible. This statement can be extended to a much broader group of CAs, and even holds for lattice gas automata (LGAs), dedicated CAs for hydrodynamic simulations. LGAs have never delivered on their initial promise of providing a method to simulate turbulence. A derivative system (Lattice Boltzmann - not a CA) has some applications in flow simulation.
It is against this background that NKS was released with much fanfare. Not surprisingly, reception by the scientific community has been negative. The book contains no new results (the result that the 'rule 110 CA' is Turing complete was proven years earlier by Wolfram's research assistant Matthew Cook), and has had zero impact on other fields of physics. I recently saw a pile of NKS copies for sale for less than $10 in my local Half Price Books store.
-
Somehow I ended up with two of them sent to me by Amazon. They make good ballast for my boat. – Gordon Jan 30 '11 at 06:56
-
-1: 1D CA's do not result in deterministic chaos when they are computing, like 110; they result in complex structures that evolve. The lattice Boltzmann model you give is essentially a CA with random update rules, and it is used in hydrodynamic simulations. The book has a few "new" results (but these are mostly incorrect). It is most important as a summary of Wolfram's thinking. – Ron Maimon Jan 20 '12 at 18:53
-
Ron, if you are making a blanket statement that 1D automata cannot lead to deterministic chaos, I wonder how you define the latter? – Johannes Jan 22 '12 at 01:53
-
@Johannes: (sorry for the downvote, I suppose this is not sufficiently self-explanatory). The definition I use for chaos is not Wolfram's; it is that the automaton randomizes. This means that if you take a finite-size snapshot in a window of finite extent, you can compute the statistical distribution in that window to arbitrary accuracy without running the automaton at all, just with a fixed-length computation which depends only on the accuracy, not on how long the automaton is run. – Ron Maimon Jan 25 '12 at 05:25
-
Ok, that confirms my suspicion. You might want to read some about chaos, Lyapunov coefficients and the like. – Johannes Jan 26 '12 at 01:50
-
@Johannes: I don't need to read anything--- I know what Lyapunov exponents are. There are chaotic automata, like 25, where the stuff randomizes, and computing automata, like 110, where the stuff is alive. The two are different. The 25 automata have nonlocal information flow analogous to Lyapunov exponents, while 110 is not analogous to any simple dynamical system. It's a full computer, it has no analogs other than other full computers. – Ron Maimon Mar 16 '12 at 19:47
-
@Gordon: I can't upvote you, but if I could, you would get +10 for your sense of humor, which goes against mainstream Ayatollah/inquisitional/communist/fascist/pro/con points of view. (Hope I have not hurt any side or feelings.) – Shaktyai Aug 21 '12 at 09:05
-
re "LGAs never delivered on...a method to simulate turbulence"-- "most nontrivial CAs are Turing Complete" therefore can (theoretically) simulate anything that is computable. so some of this debate comes down to a nearly philosophical question, are the laws of physics computable? most physicists implicitly assume this to be the case with the adherence to mathematical modelling as the supposed universal language of physics. (re "unreasonable effectiveness of mathematics in the natural sciences") https://en.wikipedia.org/wiki/The_Unreasonable_Effectiveness_of_Mathematics_in_the_Natural_Sciences – vzn Apr 16 '20 at 19:49
-
@vzn - “ most nontrivial CAs are Turing Complete" therefore can (theoretically) simulate anything that is computable”. Correct, but from a Computational perspective that doesn’t render them in any way useful. LGAs were introduced as a promising computational tool for simulating turbulence, but never delivered on this promise. – Johannes Apr 17 '20 at 22:14
Shortly after NKS came out, I wrote a review in which I tried to explain why the answer to your excellent question is yes. A deterministic model like Wolfram's can't possibly reproduce the Bell inequality violations, for fundamental reasons, without violating Wolfram's own rule of "causal invariance" (which basically means that the evolution of a CA shouldn't depend on the order in which updates are applied to spatially-distant regions). Even with some "long-range threads" in the cellular automaton (which Wolfram explicitly allows, after noticing the Bell issue), you still can't get causal invariance, unless the actual states of the automaton are probabilistic or quantum. A closely-related observation was later dubbed the "Free-Will Theorem" by John Conway and Simon Kochen.
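To make the Bell obstruction concrete, here is a small sketch comparing the CHSH value attainable by any local deterministic assignment of outcomes with the standard quantum prediction for the singlet state; the measurement angles are the usual textbook choices and everything here is illustrative.

```python
import itertools, math

# Every local deterministic strategy pre-assigns outcomes +-1 to both settings on each side.
best = max(abs(a0*b0 + a0*b1 + a1*b0 - a1*b1)
           for a0, a1, b0, b1 in itertools.product([+1, -1], repeat=4))
print("local deterministic CHSH bound:", best)          # 2

# Quantum prediction for the singlet state, E(a, b) = -cos(a - b),
# at the standard angle choices a in {0, pi/2}, b in {pi/4, -pi/4}.
E = lambda a, b: -math.cos(a - b)
a0, a1, b0, b1 = 0.0, math.pi / 2, math.pi / 4, -math.pi / 4
S = E(a0, b0) + E(a0, b1) + E(a1, b0) - E(a1, b1)
print("quantum CHSH value:", abs(S))                    # 2*sqrt(2) ~ 2.83
```

No enumeration of local deterministic outcome tables gets past 2, while the quantum correlations reach about 2.83, which is the gap a local deterministic CA would have to bridge.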

Most of these automata models are deterministic in the same sense as pseudorandom number generators are. For example, in the lattice gas models the deterministic rules end up generating noise and large-scale fluctuations in accord with the Navier-Stokes equations (including turbulence, although this is computationally impractical because of the large lattice dimensions required for reducing the lattice viscosity). The lattice gas game turned in the late eighties from noisy discrete-particle automata to smooth, distribution-based lattice Boltzmann mesoscopic-scale continuous-value automata (see Guy R. McNamara and Gianluigi Zanetti, Use of the Boltzmann Equation to Simulate Lattice-Gas Automata, Phys. Rev. Lett. 61, 2332–2335 (1988)), so that's where you find most relevant advances these days.
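As an illustration of the kind of scheme this describes, here is a minimal sketch of a D2Q9 lattice-Boltzmann (BGK) update relaxing a small shear wave on a periodic grid; the grid size, relaxation time, and initial condition are arbitrary choices, not anything from the references above.

```python
import numpy as np

nx, ny, tau, steps = 64, 64, 0.8, 200
# D2Q9 velocity set and weights
c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)

def equilibrium(rho, ux, uy):
    cu = 3 * (c[:, 0, None, None] * ux + c[:, 1, None, None] * uy)
    usq = 1.5 * (ux**2 + uy**2)
    return w[:, None, None] * rho * (1 + cu + 0.5 * cu**2 - usq)

# Initial condition: uniform density with a small sinusoidal shear wave in ux.
x, y = np.meshgrid(np.arange(nx), np.arange(ny), indexing="ij")
rho = np.ones((nx, ny))
ux = 0.05 * np.sin(2 * np.pi * y / ny)
uy = np.zeros((nx, ny))
f = equilibrium(rho, ux, uy)

for t in range(steps):
    rho = f.sum(axis=0)
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    f += (equilibrium(rho, ux, uy) - f) / tau           # BGK collision toward local equilibrium
    for i in range(9):                                  # streaming with periodic boundaries
        f[i] = np.roll(np.roll(f[i], c[i, 0], axis=0), c[i, 1], axis=1)
    if t % 50 == 0:
        print(f"t={t:3d}  max |ux| = {np.abs(ux).max():.4f}")   # shear wave decays viscously
```

The distributions are continuous values rather than booleans, which is exactly the move from noisy lattice-gas automata to the lattice Boltzmann method described above.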

Murray Gell-Mann has an interesting take on Bell's theorem which pertains directly to Stephen Wolfram's thesis, in his tome A New Kind of Science, that physical laws can be modeled with cellular automata, an analysis which took him over 20 years to complete.
According to Gell-Mann, elegant models of physics involve fundamental laws in addition to the outcomes of random chance, a number of things which are non-deterministic in a quantum-mechanical sense (he is referring to physical constants). Indeed, it is hard to imagine Wolfram's cellular automata on any scale determining the fundamentals of a theory like quantum chromodynamics, which has been fine-tuned and/or renormalized at every step to assure that the theory works as closely as possible to the way nature does. It is dubious, to say the least, that cellular automata would be able to reproduce even a portion of this iterative process in a manner that would output anything other than utterly useless simulations with no relationship to what happens in the natural world.
One thing that Stephen predicted in NKS that does seem to be happening in a big way is the idea that science is increasingly dependent on big computing in order to get results that advance our knowledge of the universe. The LHC in Geneva is a case in point.
-
The LHC isn't exactly big computing (although the processing of the data is) but the point of your last paragraph is a good one. – Selene Routley Apr 11 '15 at 02:26