
Evolution is a principle in biology whereby populations of organisms change over successive generations, as variants that replicate and multiply more effectively come to dominate. From a computational point of view, the organisms employ a common programming language (DNA, RNA, etc.) and runtime environment (cell biology). This process has eventually produced a more powerful (in a restricted sense) computational environment, the hierarchy of biological brains, whose currently most powerful instance is the human brain. A collection of human brains is now working on evolving a quantum computer. Perhaps one day the quantum computers will produce a....

From an anthropic point of view, we are here to think about these things because our universe creates environments, such as our planet Earth, that support the evolution of life.

So my question is whether there is a general physical and computational principle at work here, illustrated by the following incomplete, in parts almost surely incorrect, and highly speculative hierarchical chain of evolution:

  1. Within the Multiverse, universes are instantiated with particular physical laws, originating in and adapted from those of their embedding universe

  2. Each instantiated embedded universe follows computational rules (its particular physical laws) and creates a finite number of embedded universes (the only candidate mechanism seems to be for black-hole singularities to correspond to white-hole, inflationary, embedded universes), each of which evolves in the same general manner

  3. Each universe produces a finite number of computational schemes for building organisms that follow certain classes of program patterns. These organisms evolve according to the theory of biological evolution.

  4. Biological organisms evolve biological brains, a higher-class computational scheme

  5. Biological brains build computational devices, which at some point attain the property of replication

  6. The replicating computational devices build higher classes of computational devices, following the general principle of evolution...

  • You may enjoy this (purely fictional): https://docs.google.com/document/d/1DLXMlDiffGXGdpCFuH4KD3LiJXOC7qhdJsT9Q3VfKQI/mobilebasic?authkey=CN-jis4B&pli=1 – Manishearth Mar 24 '12 at 16:20
  • Evolution works because less fit specimens are killed off without offspring...so what do you suggest is "killing" universes? – dmckee --- ex-moderator kitten Mar 24 '12 at 21:27
  • @dmckee, the universe can, depending on the particular physical laws it is instantiated with, for example (a) die quickly due to implosion, radiating as a white hole into the embedding universe, (b) die slowly by Hawking evaporation, (c) evolve so slowly that it is relatively or completely frozen in its infancy, or (d) just in general evolve so slowly that it is relatively ineffective at spawning embedded universes or generating higher-level computational structures. – Halfdan Faber Mar 25 '12 at 05:39
  • The fittest universes are those that evolve relatively fast both horizontally (spawning universes) and vertically (expanding evolutionary/computational levels). Those universes will dominate the multiverse. Satisfying an anthropic principle, any biological or higher order life found in the multiverse will almost surely originate from a dominating region. If anyone thinks it is interesting, I will amend the original question with brief fitness examples for each level in the evolutionary hierarchy. – Halfdan Faber Mar 25 '12 at 05:40
  • @Manishearth, that's a hilarious and interesting read. A very talented writer... Seems to have been written back in 2003? I can only find a couple references online. – Halfdan Faber Mar 25 '12 at 05:56
  • @Grigori Yep, I reread it quite often :D. I found it somewhere on the Net a long time back and copied it to my collection of such things in Docs. I think this one was by the same guy--not sure though. – Manishearth Mar 25 '12 at 06:27
  • Looks like Lee Smolin's book, The Life of the Cosmos, 1999, covers the evolutionary view. Not sure if he has anything to say about computation. Just ordered it on Kindle; will add info in a day or two, unless someone can comment first. – Halfdan Faber Mar 25 '12 at 17:51

2 Answers


This is the fecund universe idea, due to Smolin. The original form assumed that a new universe formed every time a black hole appeared (as a sink for the information loss that relativists believed in back then), and that the universe is tuned to maximize the number of black holes formed, constrained by the condition that life is possible.

These types of ideas are anthropic, and they are hard to make testable. Even if the universe is replicating itself and changing, it is not life. Life is not about replication. Fire replicates itself, and tries to maximize its consumption of combustibles. Fire isn't life.

Life is when you have a computer in nature. That's not the case for universe-forming processes, because the universe is just not that complicated on the elementary scale. You can see the universe wasn't designed, and evolution and design are synonyms. All design is a process of evolution in your head, and all evolution in a complex system can be equally well called a process of design in a disembodied computational entity formed by all the evolving creatures.

Since evolution is a property of complex systems, and there is no complex system here, just black hole formation from galaxies (this doesn't allow a universal computer), you don't have evolution as I see it. You just have, at best, something replicating, like fire.

The theory is also incorrect because black holes don't lose information, and don't make new universes. The current universe we are in is also not particularly tuned for black hole formation. Further, the measure that tells you what to maximize is not at all clear: should you maximize the total number of black holes the universe will ever form? Does it matter if they form early or late? What's the weight? These questions are, to my mind, an abuse of language in the sense of Carnap--- they are positivistically meaningless.

  • "All design is a process of evolution in your head" - awesome! – Slaviks Mar 26 '12 at 20:17
  • I saw you talking about computers several times, but I can't follow your perspective. If I think of a computer, then I see something which is able to perform different computations, something I interact with. If there are natural computers, then no one is deciding what to compute. So if there is something that's being computed, then it's just an autonomous evolution process. This means it's just a step-by-step action, and I don't see why one would call that a computer. But I also don't see how rule 110 is Turing complete, as I don't understand where one would put the input. A starting string I choose? – Nikolaj-K Mar 26 '12 at 22:06
  • @NickKidman: The way you see that a natural system is a computer is if you can encode a computer into the initial conditions with a finitely computable encoding, and run an arbitrary program on the coded information just by allowing the system dynamics to go forward in time. In rule 110, you take an arbitrary memory/instruction set and encode it into a very long initial condition, and it will compute the result for you (a minimal sketch of the rule 110 dynamics appears after this comment thread). Similarly, for random proteins, you can encode the information in their conformations and bindings so that their evolution will reproduce the computation. – Ron Maimon Mar 27 '12 at 17:57
  • You have to be careful that the encoding is not too complex, or takes too long to find, because otherwise you aren't doing the computation in the system itself, but in the encoding algorithm. For example, in a randomizing automaton, you can always "encode" an arbitrary computation by letting the encoding map itself do the work on the initial string, and then some random string at some long time will be said to "encode" the result. This is an abuse of the word "encoding": you aren't encoding, you are computing the answer. It is usually easy to find encodings which prove systems are Turing complete. – Ron Maimon Mar 27 '12 at 17:59
  • In real physical systems, there is also always randomness, so the computation has access to a random oracle. This type of computation is stochastic computation, and it is strictly more powerful than Turing (or rule 110) computation. But it is not studied so well, perhaps because it is hard to define a random oracle precisely as a real number in standard mathematics. – Ron Maimon Mar 27 '12 at 18:02
  • @RonMaimon: Aha, I have no clue how these encodings work, but this sounds interesting. To prove something to be Turing complete, one has to check whether certain inputs perform certain basic operations, which together are able to do everything a Turing-complete machine does, right? Also, I find the randomness thing interesting, specifically because you indicate that it helps computing, whereas I would intuitively assume that if the thing does something unforeseen/random, that destroys the possibility of computing anything. – Nikolaj-K Mar 27 '12 at 20:05
  • @NickKidman: Yes--- generally to prove Turing completeness, you embed the simplest, closest already Turing-complete system. The randomness does not always destroy the computation, only if it infects all the other non-random bits with randomness. There are simple cases where randomness stays confined. These cases make computers more powerful than Turing machines. – Ron Maimon Mar 28 '12 at 04:08
  • @RonMaimon: In one sentence, how does randomness help? Does it have something to do with trial and error? Is there a related Wikipedia article or so? – Nikolaj-K Mar 28 '12 at 07:11
  • @NickKidman: If you have randomness, you can compute a random real number! Just spit out an infinite sequence of random digits. There is no deterministic program which can do that. A random number is uncomputable with probability 1. The ability to compute this number is something a random computer has that a normal computer doesn't. As for useful applications, I am not sure, but if you look at randomized primality testing (a Miller-Rabin sketch appears after this comment thread), you can get algorithms that are very efficient at testing primality, but that work with probability approaching 1, if you can choose numbers at random. – Ron Maimon Mar 28 '12 at 08:07
  • If you assume the universe contains uncountable entities, i.e. mathematically continuous objects, then you can have processes with infinite Kolmogorov complexity that can generate truly random events. A discrete and finite universe can only generate finite strings, and can therefore only compute pseudo-random numbers. In other words, really high-quality pseudo-randomness may be the best you can hope for... – Halfdan Faber Mar 29 '12 at 04:07
  • Thanks, Ron. Yes, 1-2 is Smolin's Fecund universe, which does not include the computational evolution view of 3-6, though. I don't think your analogy with fire as replication is accurate. The embedded universes are adapting and evolving their physical laws (the lowest computational scheme). I think all aspects of biological evolution are duplicated. A more natural comparison would be that of a virus, i.e. a genetic program with an encapsulating shell. – Halfdan Faber Mar 29 '12 at 04:09
  • There is no modification to the external behavior of black holes and information is still preserved (for those of us that have this belief). The standard assumption would be that our universe is average and not particularly well tuned. I think the primary maximizing criteria is star density with optimal mass distribution, as massive stars produce black holes and all stars radiate low entropy energy to their surroundings, with both aspects promoting computational evolution. – Halfdan Faber Mar 29 '12 at 04:10
  • @GrigoriStrassmann: Without information loss, how could you go into the other universe to observe it? It would seem that then the entire content of the new universe would be gravitationally encoded on the BH surface, and this is impossible given the finite BH entropy. The analogy with a virus is similar to fire, but viruses are not self-replicating; they hitch onto a cell's machinery, and they are not in any way Turing complete. Also, we don't really know for sure where viruses come from, or whether they evolved from a common ancestor. I don't think they necessarily did. – Ron Maimon Mar 29 '12 at 04:51
  • @GrigoriStrassmann: To see how biological evolution really works (it doesn't have replication and simple modification), see my answer to this question: http://physics.stackexchange.com/questions/4200/is-stephen-wolframs-nks-an-attempt-to-explain-the-universe-with-cellular-autom – Ron Maimon Mar 29 '12 at 04:52
  • @GrigoriStrassmann: Also, regarding the finite business, sure, all computation is an idealization extrapolating finite things to infinite time. But if the universe is quantum or random, the proper idealization is stochastic, not deterministic, computation. That's all I was saying. – Ron Maimon Mar 29 '12 at 04:56
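To make the rule 110 exchange above concrete, here is a minimal sketch of the automaton's dynamics in Python. It shows only the update rule, not Cook's Turing-completeness construction: that proof additionally encodes a program into a very long initial condition whose colliding gliders carry out the computation. The periodic boundary and the single-live-cell starting tape are illustrative choices, not anything specified in the comments.

```python
# Rule 110: each cell's next state is a fixed function of its left
# neighbor, itself, and its right neighbor. The binary expansion of
# the rule number (110 = 0b01101110) lists the outputs for the eight
# neighborhoods 111, 110, 101, 100, 011, 010, 001, 000.
RULE = 110

def step(cells):
    """Advance the automaton one generation (edges wrap around)."""
    n = len(cells)
    return [
        (RULE >> (4 * cells[(i - 1) % n] + 2 * cells[i] + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

# The "input" asked about above is the initial condition. Here it is
# just one live cell; a Turing-completeness demonstration would instead
# seed the tape with carefully designed glider trains.
cells = [0] * 60 + [1] + [0] * 10
for _ in range(30):
    print("".join(".#"[c] for c in cells))
    cells = step(cells)
```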
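The randomized primality testing mentioned above refers to tests such as Miller-Rabin. Below is a minimal sketch, assuming Python's standard random module as the source of random bases; the function name and round count are illustrative. It shows the trade the comments describe: the verdict "composite" is certain, while the verdict "prime" is only correct with probability approaching 1, since a composite n survives all k random bases with probability at most 4^(-k).

```python
import random

def is_probable_prime(n, rounds=20):
    """Miller-Rabin: deterministic arithmetic plus randomly chosen bases."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):      # trial division by small primes
        if n % p == 0:
            return n == p
    d, s = n - 1, 0                      # write n - 1 = 2**s * d, d odd
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)   # random base: the "oracle"
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False                 # base a proves n composite
    return True                          # no witness found: probably prime

print(is_probable_prime(2**61 - 1))  # True: a Mersenne prime
print(is_probable_prime(2**61 + 1))  # False: divisible by 3
```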

You may want to expand your thought process by including false vacua. If the universe follows an evolutionary pathway, I would assume that universes within a multiverse do the same. Suppose there exists a metastable universe sitting in a false vacuum: at some point it will cross the barrier to a true vacuum and form a bubble universe that expands within it. This bubble universe would be more stable (its physical constants would have values closer to ideal for a stable universe), but if it in turn crosses the barrier to a still more stable state, another universe will evolve, and so on...