
Since I was first introduced to it, I've been intrigued by the claim that the universe contains a finite amount of information. (That link is not where I first encountered the concept; it is simply the first example of this claim I could find from a quick Google search.)

Basically, the argument seems to be that if there is a finite amount of matter in the universe, that matter can only store a finite amount of information. On the surface, I have to concede that this makes a lot of sense. After all, if I'm thinking in terms of bits (for example), I might visualize a hypothetical "infinite hard disk drive" that could store unlimited data. This device would presumably have to be infinite in size, since it stores information on a physical platter that obviously occupies some space.

Digging a little deeper, however, I start to doubt this presumption. After all, information can be compressed by a system that encodes it in a particular set of symbols. As long as the system also provides a way of decoding that information, you could effectively increase the capacity of any storage mechanism by encoding its contents with that system (analogous to compressing every file on a hard disk with an algorithm such as LZMA).
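
For a concrete illustration, here is a minimal sketch using Python's built-in lzma module (nothing beyond the standard library; the exact byte counts will vary with platform and lzma version):

    import lzma
    import os

    # Highly repetitive data compresses dramatically.
    redundant = b"abcabcabc" * 1000            # 9000 bytes of redundancy
    print(len(lzma.compress(redundant)))       # on the order of 100 bytes

    # Data with no redundancy does not compress: the "compressed"
    # output is actually slightly larger than the input.
    random_data = os.urandom(9000)             # 9000 incompressible bytes
    print(len(lzma.compress(random_data)))     # slightly more than 9000

The second case already hints at the catch discussed below: no encoding can shrink every possible input.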

But there's still more to it than that. Any system of compression like the one I just described constitutes information in its own right, and therefore needs to be stored somewhere itself.

Since the universe is "all there is" (?), a system of encoding the information contained within it would have to be part of that very information. This is where I hit a mental wall. On the one hand, it seems you could extract a seemingly unlimited amount of information from finite data: use a system to encode that data, another system to encode the encoded data, and so on. On the other hand, intuition tells me there must come a point where, if the data as well as the system must share a space, there is no longer any room for either more data or another system of encoding it. The available space becomes too "crowded," so to speak.
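
The "crowding" intuition can be made precise with a counting (pigeonhole) argument. A minimal sketch in Python (the arithmetic, not the code, is the point):

    # There are 2**n bit strings of length exactly n, but only
    # 2**n - 1 strings of length strictly less than n. So no
    # lossless encoding can map every n-bit string to something
    # shorter, however clever the system is.
    n = 16
    inputs = 2 ** n
    shorter_outputs = sum(2 ** k for k in range(n))  # equals 2**n - 1
    assert shorter_outputs < inputs
    print(inputs, shorter_outputs)  # 65536 65535

Iterating the encoding does not escape this: each round faces the same count, and, as noted above, the description of each scheme must itself be stored.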

Is there a mathematical principle or theorem that answers this question? Is the problem I'm describing (determining a limit on the capacity of material data to store information) defined, analyzed, and/or illuminated by any particular concept(s) in mathematics?

Dan Tao
  • What is the definition of "information" that you are working with? – Suvrit Jan 07 '11 at 15:18
  • @Suvrit: I wish I could offer a half-decent answer to that question. I guess I was hoping some mathematicians here might be able to shed some light on that issue (a formal definition of information) as well. – Dan Tao Jan 07 '11 at 15:21
  • You're not going to get anywhere with this question unless you mention quantum mechanics. – Peter Shor Jan 07 '11 at 16:37
  • @Peter Shor: Are you saying I should add that as a tag, or incorporate it into the question? The problem is that I don't think I understand enough about quantum mechanics to comment meaningfully on its relationship to my question. – Dan Tao Jan 07 '11 at 18:09
  • Here is one potential pitfall of ignoring quantum mechanics: The phase space of a simple harmonic oscillator has a continuum of points, so assuming classical mechanics is a true model of the universe, you can encode an infinite amount of information by just activating a harmonic oscillator (like a spring or a pendulum) with a well-chosen initial condition. The uncertainty principle prevents this by making nearby points of phase space indistinguishable - the space gets broken up into blobs of area about $\hbar$, and there are finitely many below any fixed energy bound. (A rough version of this count is sketched below, after the comments.) – S. Carnahan Jan 07 '11 at 18:41
  • You cannot prove propositions about the universe... – Mariano Suárez-Álvarez Jan 07 '11 at 20:35
  • There's a quantum counter-pitfall: pick a quantum system in thermodynamic equilibrium, and observe it at regular intervals. You'll see it shift from state to state randomly as time passes. Since QM says this is real randomness, you will get an unbounded number of bits out -- the probability that the sequence you observe can be compressed (say, via Huffman coding) goes to 0 exponentially fast. – Neel Krishnaswami Jan 07 '11 at 20:48
  • @Mariano: Fair enough—I guess what I really should have asked was whether there is a proof of the notion that there is a finite limit to the amount of information that can be stored in a finite amount of matter. The universe just happens to be a convenient arena in which to ponder this question (for me). – Dan Tao Jan 07 '11 at 21:02
  • @Dan: you can't reason about the properties of finite amounts of matter without getting into the physical details of what properties you allow matter to have. That's the point of the comments above about quantum mechanics. – Qiaochu Yuan Jan 07 '11 at 21:36
  • If you look at quantum mechanical systems, and you assume a finite volume of space and a finite amount of energy, you get a finite-dimensional Hilbert space, which can only store a finite amount of quantum information. [With the right assumptions, that is completely rigorous mathematically.] Of course, in real life, you need quantum field theory, and there I'm at a loss for how to analyze anything, since I barely understand QFT. I'm going to vote to reopen. I don't think it's such a crazy question. – Peter Shor Jan 08 '11 at 17:44
  • Provided the question is cleaned up as per the meta thread, I'd be happy to vote to re-open. – Ryan Budney Jan 08 '11 at 19:27
  • Not sure it is related to the question, but doesn't the number $\pi$ contain an infinite amount of information? :) – TT_ stands with Russia Mar 01 '16 at 20:09
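
To make S. Carnahan's phase-space comment concrete, here is the rough count it relies on (a back-of-the-envelope estimate, not spelled out in the thread). For a harmonic oscillator of frequency $\omega$, the classical states with energy at most $E$ fill an ellipse in phase space of area $2\pi E/\omega$, while the uncertainty principle carves phase space into cells of area about $h = 2\pi\hbar$. The number of distinguishable states is therefore roughly

$$N \;\approx\; \frac{2\pi E/\omega}{2\pi\hbar} \;=\; \frac{E}{\hbar\omega},$$

which is finite for any fixed energy bound.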

2 Answers


Perhaps you are looking for the holographic principle. It is conjectural, even at a physical level of rigor, but it puts a bound on the amount of information contained in a region in terms of the region's surface area.

S. Carnahan
  • By the way, data compression algorithms are not effective for arbitrary input data. There is a simple proof by enumerating possible inputs and outputs. Such compression schemes are convenient for everyday purposes because most computer files (like English text) have a lot of redundancy. – S. Carnahan Jan 07 '11 at 15:40
  • Yes, this is clearly very closely related to what is puzzling me. Thanks for the link; I am reading about it (and struggling to grasp it) now. – Dan Tao Jan 07 '11 at 15:41
  • About data compression algorithms: yeah, I figured it was an imperfect analogy given the limitations of existing algorithms (specifically their specialization according to certain types of data). But I didn't know if this was actually a fundamental limitation of any system of encoding data or if human cleverness was more the limiting factor in that case. – Dan Tao Jan 07 '11 at 15:43
  • The essential claim is that a region with sufficiently high information density (entropy) would turn into a black hole, and the black hole entropy is given by $S = Ac^3/(4G\hbar)$ (in units with $k_B = 1$), with $A$ the area of the event horizon of the black hole. If you put in some numbers (a quick check follows these comments) you will quickly realize that this bound is many, many orders of magnitude beyond what is possible with current technology. – Jeff Harvey Jan 07 '11 at 16:23
  • In a system with 2 or more symbols, there are more files of size $n$ than there are files of size less than $n$, so there is no injective function from the former to the latter, i.e., no reversible compression algorithm. – JBL Jan 07 '11 at 17:23
  • On compression algorithms: it is possible for these to be reversible if the inputs are suitably restricted. JBL's observation is true for algorithms which have to work for every input (it is just counting, not human weakness). The information situation is potentially worse, because there is a need to include some information about the compression algorithm with any compressed string in order to decompress it and extract information from it - how does one know that it is a compressed string? General algorithms can compress a subspace of strings of interest at the expense of lengthening other strings. – Mark Bennet Jan 07 '11 at 20:26
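
Putting numbers into the formula from the comment above (a quick sketch in Python; the constants are standard values, and the 1-metre radius is an arbitrary choice for illustration):

    import math

    G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
    hbar = 1.055e-34   # reduced Planck constant, J s
    c = 2.998e8        # speed of light, m/s

    r = 1.0                        # radius of the region, metres
    A = 4 * math.pi * r ** 2       # area of the bounding sphere, m^2
    l_p_sq = G * hbar / c ** 3     # Planck length squared, ~2.6e-70 m^2

    S = A / (4 * l_p_sq)           # entropy in units of k_B (nats)
    bits = S / math.log(2)
    print(f"about {bits:.1e} bits")   # roughly 1.7e70 bits

Compare this with around $10^{14}$ bits for a large hard drive today: the bound sits dozens of orders of magnitude beyond current storage.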

The holographic principle has been mentioned so I'll just add a little more info from the physics perspective.

The Bekenstein entropy bound implies that there is a finite amount of information (entropy) in a finite volume of space containing a finite amount of energy.
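
For reference, the bound itself is standard even though it is not written out here: a system that fits inside a sphere of radius $R$ and has total energy $E$ (including rest mass) satisfies

$$S \;\le\; \frac{2\pi k_B R E}{\hbar c}.$$

For example, a 1 kg mass confined within a 1 m radius (taking $E = mc^2$) can hold at most about $2.6 \times 10^{43}$ bits.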

Speaking in terms of entropy, one can see, for example, that there must be a limit on how far matter can be subdivided into fundamental particles. The reasoning is that, given a particle made up of subparticles, the total number of degrees of freedom of the particle is the product of the numbers of degrees of freedom of the subparticles (are there subparticles with only one degree of freedom?). Since the total number of degrees of freedom must be finite, one cannot subdivide particles forever, as the sketch below makes explicit.
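
In symbols, and under the simplifying assumption that each subparticle is described by a finite-dimensional state space: if a particle is composed of $n$ subparticles with $d_1, \dots, d_n \ge 2$ degrees of freedom each, then the composite has

$$d_{\text{total}} \;=\; \prod_{i=1}^{n} d_i \;\ge\; 2^{n}$$

degrees of freedom, so a finite entropy budget $S_{\max} \ge \log d_{\text{total}} \ge n \log 2$ caps $n$ at $S_{\max}/\log 2$: the subdivision has to stop.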

There are some particularly striking consequences of these entropy bounds. For example, Verlinde argues that the force of gravity between particles is a result of the holographic principle. This can be thought of as (indirect) physical evidence of the holographic principle at work.

Alex R.
  • The Bekenstein entropy bound and the holographic principle are based on the approximations for quantum field theory in weak gravitational fields, along with some heuristic arguments about entropy. It would be nice to understand how they really work. Or, failing that (since this is to my thinking one of the major open problems in physics), what information bounds you get in QFT without gravity. – Peter Shor Jan 08 '11 at 17:48
  • I'm not entirely sure these entropy bounds are necessarily physical; rather, they are tools for giving us an alternate perspective. If you look through the Verlinde paper, there are fundamental assumptions made about the entropy carried in the holographic screens. I think you see similar reasoning when deriving the Casimir force by using zero-point energy. The point is that you can just as easily derive the force WITHOUT appealing to zero-point energy. – Alex R. Jan 08 '11 at 21:15