2) Is there a physical limit on how much information may participate in a single computation?

3) Is it even possible for simple events involving only a few bits of information to occur in our universe? If you can name one, wouldn’t a skeptic say that actually every single particle in the universe that has warped the gravitational field has participated in some way, and contributed at least some fractional bits?

4) This is just a comment, but if there were a limit on the number of bits that could participate in a computation, I think that could be related to why we have the number of space-time dimensions we do. For instance, binary functions could exist in a 1-D line world, ternary functions in a 2-D plane, etc.

ln(psi) = info + i*action

This is the mathematical sense of the reality of information.

2) When this substitution is made in the Schrödinger equation, the real and imaginary parts yield two equations: one is a Hamilton-Jacobi equation, and the other expresses the conservation of information with respect to time.
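As a sketch of that substitution (the standard Madelung/polar form; the symbols rho, S, and the potential V are my notation, not the commenter’s): writing psi = e^{R + iS/hbar} with rho = e^{2R}, the Schrödinger equation separates into

```latex
% Polar substitution \psi = e^{R + iS/\hbar}, density \rho = |\psi|^2 = e^{2R}:
i\hbar\,\partial_t\psi = -\frac{\hbar^2}{2m}\nabla^2\psi + V\psi
\;\Longrightarrow\;
\begin{cases}
\partial_t S + \dfrac{(\nabla S)^2}{2m} + V
   - \dfrac{\hbar^2}{2m}\dfrac{\nabla^2\sqrt{\rho}}{\sqrt{\rho}} = 0
   & \text{(Hamilton-Jacobi form, plus quantum potential)}\\[1.5ex]
\partial_t\rho + \nabla\cdot\!\left(\rho\,\dfrac{\nabla S}{m}\right) = 0
   & \text{(continuity: conservation in time)}
\end{cases}
```

The second equation is the conservation statement: whatever is carried by rho is neither created nor destroyed, only transported with velocity (grad S)/m.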

Conservation is the physical sense of reality.

3) The “hard” problem of psychology, namely what conscious sensation is made of, is answered by the same stuff: information.

This is the psychological sense of reality.

It seems there is nothing more real than information!

Scott #225 How much of the informational character of physics is special to our world? An obvious way to answer this is to start with the question of how much isn’t. The most direct generalization of the classical analysis of finite-state computation involves counting distinct states, and this makes quantities such as entropy and energy generic. But once we talk about their functional dependence on physical degrees of freedom, they become very specific to our physics. Electromagnetic energy. Black hole entropy. Moreover, informational reinterpretations suggest new physics, so this approach has plenty of empirical content.

Of course some aspects of the informational character of macroscopic physics can be inferred indirectly. The extensive nature of entropy implies the existence of at least nearly-identical particles. The finiteness of heat capacities implies material systems are not composed of infinitely divisible stuff. But it is quantum mechanics that makes the informational character of physics directly observable.

I find the idea of an energy eigenstate particularly remarkable, informationally. An atom has an enormous number of possible internal configurations, especially if we include all of the degrees of freedom of its components down to some ultimate scale. And yet we can prepare an atom in an energy eigenstate, where all of the internal detail is absent. We can even use such atoms as qubits, so bereft of other information are they. Nothing like this is possible classically.

So it seems to me that the physical realizability of energy eigenstates makes physical systems with small amounts of information directly accessible to observation and experimentation, and might someday make the exact informational dynamics of our world observable.

7) For the region A’, total information is the sum of information in space (I’) and the information on its total boundaries (I’_b).

When the black hole is created in region A due to the heat energy, the total information content of the universe, according to the observer, is now I’ + I’_b_new – delta I’, where delta I’ is the information dumped into A.

I’_b_new is the total information on the new set of boundaries (which includes the newly formed BH boundary).

8) To any observer in the universe, the total information is always constant: I’ + I’_b_new – delta I’ = I’ + I’_b.

This implies I’_b_new – I’_b = delta I’. But I’_b_new – I’_b is just the information on the black hole boundary.

1) Divide the entire universe into a region A and its complementary region A’. An observer is located in region A’.

2) The observer attempts to erase information in A’.

3) Assuming that the total information content in the universe is constant, erasing information from A’ simply means dumping the information into region A.

4) According to Landauer’s principle, erasing information creates heat (unusable energy). So, heat is being created in A as the observer dumps information into A.

5) Assume that the heat stays inside region A.

6) According to GR, there is a limit to the amount of energy that can be stored in a region before it turns into a black hole.

7) For the region A’, total information is the sum of information in space (I’) and the information on its boundaries (I’_b). When the black hole is created, the total information content of the universe, according to the observer, is I’ – delta I + I’_b, where delta I is the information dumped into A.

8) To any observer in the universe, the total information is always constant: I’ – delta I + I’_b = I’.

This implies I’_b = delta I.

In short, information erased from A’ is stored on the boundary of region A, according to the observer.
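To put rough numbers on steps 4)-6), here is an illustrative back-of-the-envelope calculation. The 1-metre region, the room-temperature erasure, and the use of the Schwarzschild and Bekenstein-Hawking formulas are my assumptions for the sketch, not claims from the comment:

```python
import math

# Physical constants (SI units)
k_B = 1.380649e-23      # Boltzmann constant, J/K
G = 6.67430e-11         # gravitational constant, m^3 kg^-1 s^-2
c = 2.99792458e8        # speed of light, m/s
hbar = 1.054571817e-34  # reduced Planck constant, J s

T = 300.0  # assumed erasure temperature (room temperature), K
R = 1.0    # assumed radius of region A, metres (purely illustrative)

# Landauer: minimum heat released per erased bit
E_bit = k_B * T * math.log(2)

# Schwarzschild: mass-energy that turns a radius-R region into a black hole
M = R * c**2 / (2 * G)

# Bits the observer must erase before the dumped heat reaches that mass-energy
bits_to_collapse = M * c**2 / E_bit

# Bekenstein-Hawking: information capacity of the resulting horizon, in bits
A_horizon = 4 * math.pi * R**2
bits_on_boundary = A_horizon * c**3 / (4 * G * hbar * math.log(2))

print(f"bits erased to force collapse: {bits_to_collapse:.2e}")
print(f"bits the horizon can hold:     {bits_on_boundary:.2e}")
```

In this toy setup the horizon’s Bekenstein-Hawking capacity comes out many orders of magnitude larger than the Landauer count, so nothing in the arithmetic obstructs storing the erased information on the boundary.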

Says Godel to Wittgenstein: How’s the water?

Replies Wittgenstein: What’s water?

I.e., the real question is: Can symbols encapsulate all information? Or is there information that cannot be represented in symbols? That, of course, is equivalent to Russell’s set that contains all the sets that do not contain themselves: it merely shows the limitation of what symbols (aka language) can do.

Saying it differently:

The one assumption Godelian scientists make is that the universe’s attributes can be reflected in symbols, i.e.: in inkblots, screen-blips, and air-noises. Perhaps not all the universe’s attributes, but a great many of them.

But the Wittgensteinians say: No, we can only see the aspects of the universe that “language” can pick up. And “language” is merely the structure of our own brain. Therefore, we can only see the parts of the universe’s structure that fit the structure of our brain.

But now comes a third fish: call him Wigner. He sings (he is a singing fish) Hallelujah to the marvelous predictive power of mathematics, which enables fish to forecast all water currents and colors and temperatures. Isn’t it great that math works?

So now comes a fourth fish, a sort of flying fish named Archie Wheeler (who once had been outside the water), and says: yeah, sure, I once met a fruit fly who marveled that the universe is built of pixels… All we do in science, says Archie, we do in “language,” which is inkblots and airnoises and screenblips, based on signals from our six or seven meat-based sensors, as processed by three pounds of meat-computer. To assume that the structure of these is the same as the structure of the universe may be nothing more than a tautology. Or, at best, it may be projections of the universe upon our meat sensors, the same as the sphere projected itself upon the plane in Flatland…

Says the Godel fish: But just because we cannot find the flaws in language using language, does that mean we have found all that language (and math, and science) can do?

So finally comes the Leviathan (nicknamed Hilbert), who booms: No. We can do more, and we will.

But just how much? Just how much “real information” (whatever that is) can be encapsulated in symbols, and how much information never can be? And is there any use of us even talking about it?

Well, says a crab called Kantor: you never know until you try.

But I think the danger is, it’s easy to round something like “nature is informational” down to a statement that lacks empirical content because it would be true not just of our world but of any possible one. Thus, someone could say: really understanding any physical concept (mass, energy, charge…) *means* knowing what it corresponds to in terms of the basic data structures that comprise the state of the universe, so of course they’re going to look “informational” once you actually understand them.

In the post, I tried to give a meaning for a converse claim—“information is physical”—that depends on special features of our world and is thus hopefully immune to that criticism.

> I do disagree, and the same counterexample proves that not even a lower bound can be derived.

How so? In the example of a zero-momentum particle, the spatial variation is zero, so the claimed lower bound is zero, which fits with what you said.

> Daniel pulled out the Klein-Gordon equation, and if you look you will notice that the equation contains an “m”. That is the rest mass of the particle. And that, of course, lower bounds the energy of the state by E = mc^2.

You will also notice it contains a k (or a grad phi), which measures the degree of spatial variation, and which also lower bounds the energy of the state. This is the significant term for our purposes.
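For reference, the standard plane-wave dispersion relation of the Klein-Gordon equation makes both lower bounds explicit (notation mine):

```latex
% Plane wave \phi \propto e^{i(kx - \omega t)} in the Klein-Gordon equation:
\hbar^2\omega^2 = (mc^2)^2 + (\hbar c k)^2
\;\Longrightarrow\;
E = \hbar\omega \ge mc^2
\quad\text{and}\quad
E \ge \hbar c\,|k|.
```

The mass term gives the bound quoted above; the hbar*c*k term is the spatial-variation bound, and it correctly degenerates to zero for the zero-momentum (k = 0) particle.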

I feel like we may be talking about different things on this topic of whether injecting randomness into programs changes things.

Avi Wigderson gave a good talk about leveraging “randomness”–or maybe pseudorandomness/complexity is sufficient–for more efficient algorithms, for achieving some optimization, for game-theory strategy, etc. But in the end, the result he talks about (not that I fully understand it, but intuitively it seems to make a lot of sense) is that “every efficient randomized algorithm has a deterministic counterpart” (under suitable complexity-theoretic hardness assumptions). This wasn’t exactly what I was thinking about when I wrote my last response (although I think there is a relationship), but maybe this relates to what you were saying?

https://www.youtube.com/watch?v=ZzsFb-6wvoE
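As one concrete illustration of the trade-off being discussed (my example, not one from the talk): Freivalds’ algorithm checks a claimed matrix product using random bits in O(n^2) time per trial, while the obvious deterministic check recomputes the full product. Derandomization results of the Wigderson flavor say that, under hardness assumptions, such coin flips can be replaced by pseudorandom bits at polynomial cost:

```python
import random

def freivalds(A, B, C, trials=20):
    """Randomized check that A x B == C for n x n matrices.

    Each trial costs O(n^2): multiply by a random 0/1 vector instead of
    recomputing the product. A wrong C survives a single trial with
    probability at most 1/2, so 20 trials err with probability <= 2^-20.
    """
    n = len(A)
    for _ in range(trials):
        r = [random.randint(0, 1) for _ in range(n)]
        Br = [sum(B[i][j] * r[j] for j in range(n)) for i in range(n)]
        ABr = [sum(A[i][j] * Br[j] for j in range(n)) for i in range(n)]
        Cr = [sum(C[i][j] * r[j] for j in range(n)) for i in range(n)]
        if ABr != Cr:
            return False  # certainly A x B != C
    return True  # probably A x B == C

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
C_good = [[19, 22], [43, 50]]  # the true product A x B
C_bad = [[19, 22], [43, 51]]   # off by one in the last entry
```

Here `freivalds(A, B, C_good)` accepts, and `freivalds(A, B, C_bad)` is caught with overwhelming probability, all without ever computing A x B directly.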

I made a movie called “Digital Physics” which tries to explore some of these ideas around randomness vs. complexity. Maybe you’d like to check it out. If you leave a review of the movie (good or bad) I’ll send you a free pack of trading cards (with gum!). Scott, the same offer goes for you too 🙂
