## Nö nö nö

At least three people have now asked my opinion of the paper Mathematical Undecidability and Quantum Randomness by Paterek et al., which claims to link quantum mechanics with Gödelian incompleteness. Abstract follows:

> We propose a new link between mathematical undecidability and quantum physics. We demonstrate that the states of elementary quantum systems are capable of encoding mathematical axioms and show that quantum measurements are capable of revealing whether a given proposition is decidable or not within the axiomatic system. Whenever a mathematical proposition is undecidable within the axioms encoded in the state, the measurement associated with the proposition gives random outcomes. Our results support the view that quantum randomness is irreducible and a manifestation of mathematical undecidability.

Needless to say, the paper has already been Slashdotted. I was hoping to avoid blogging about it, because I doubt I can do so without jeopardizing my quest for Obamalike equanimity and composure. But, as has happened several times before, I see colleagues whom I respect and admire enormously—in this case, several who have done pioneering experiments that tested quantum mechanics in whole new regimes—making statements that can be so easily misinterpreted by a public and a science press hungry to misinterpret, that I find my fingers rushing to type even as my brain struggles in vain to stop them.

Briefly, what is the connection the authors seek to make between mathematical undecidability and quantum randomness? Quantum states are identified with the “axioms” of a formal system, while measurements (technically, projective measurements in the Pauli group) are identified with “propositions.” A proposition is “decidable” from a given set of axioms, if and only if the requisite measurement produces a determinate outcome when applied to the state (in other words, if the state is an eigenstate of the measurement). From the simple fact that no one-qubit state can be an eigenstate of the σ_{x} and σ_{z} measurements simultaneously (in other words, the Uncertainty Principle), it follows immediately that “no axiom system can decide every proposition.” The authors do some experiments to illustrate these ideas, which (not surprisingly) produce the outcomes predicted by quantum mechanics.
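The "simple fact" invoked here is easy to verify numerically. Below is a quick sketch (my own illustration, not from the paper; the helper `is_eigenstate` is an invented name) checking that no eigenstate of σ_z is an eigenstate of σ_x, and vice versa:

```python
import numpy as np

# Pauli matrices
sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)

def is_eigenstate(op, psi, tol=1e-9):
    """psi is an eigenstate of op iff op @ psi is parallel to psi
    (the equality case of Cauchy-Schwarz)."""
    out = op @ psi
    return abs(abs(np.vdot(psi, out)) -
               np.linalg.norm(out) * np.linalg.norm(psi)) < tol

# Every sigma_z eigenstate fails to be a sigma_x eigenstate, and vice versa.
_, vecs_z = np.linalg.eigh(sigma_z)
_, vecs_x = np.linalg.eigh(sigma_x)
for v in vecs_z.T:
    assert is_eigenstate(sigma_z, v) and not is_eigenstate(sigma_x, v)
for v in vecs_x.T:
    assert is_eigenstate(sigma_x, v) and not is_eigenstate(sigma_z, v)
```

The same check works for any pair of non-commuting Pauli measurements.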

But does this have anything to do with “undecidability” in the mathematical sense, and specifically with Gödel’s Theorem? Well, it’s not an illustration of Gödel’s Theorem to point out that, knowing only that x=5, you can’t deduce the value of an unrelated variable y. Nor is it an illustration of Gödel’s Theorem to point out that, knowing only one bit about the pair of bits (x,y), you can’t deduce x and y simultaneously. These observations have nothing to do with Gödel’s Theorem. Gödel’s Theorem is about statements that are undecidable within some formal system, *despite* having definite truth-values—since the statements just assert the existence of integers with certain properties, and those properties are stated explicitly. To get this kind of undecidability, Gödel had to use axioms that were strong enough to encode the addition and multiplication of integers, as well as the powerful inference rules of first-order logic. By contrast, the logical deductions in the Paterek et al. paper consist entirely of multiplications of tensor products of Pauli matrices. And the logic of Pauli matrix multiplication (i.e., is this matrix in the subgroup generated by these other matrices or not?) is, as the authors point out, trivially decidable. (The groups in question are all finite, so one can just enumerate their elements—or use Gaussian elimination for greater efficiency.)
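The "trivially decidable" claim can be made concrete. Ignoring overall phases, each n-qubit Pauli tensor product corresponds to a length-2n bit vector (x|z), multiplication becomes XOR, and subgroup membership reduces to Gaussian elimination over GF(2). A minimal sketch (the encoding is the standard symplectic one; the helper name is mine):

```python
# Each n-qubit Pauli tensor product, ignoring its overall phase, can be encoded
# as a length-2n bit vector (x|z); multiplying Paulis then becomes XOR-ing bit
# vectors.  So "is this Pauli in the subgroup generated by these others?" is
# linear algebra over GF(2), solvable by Gaussian elimination.

def in_span_gf2(gens, target):
    """Is `target` an XOR-combination of the bit vectors in `gens`?"""
    rows = [list(g) for g in gens]
    t = list(target)
    for col in range(len(t)):
        idx = next((i for i, r in enumerate(rows) if r[col] == 1), None)
        if idx is None:
            continue
        pivot = rows.pop(idx)
        rows = [[a ^ b for a, b in zip(r, pivot)] if r[col] == 1 else r
                for r in rows]
        if t[col] == 1:
            t = [a ^ b for a, b in zip(t, pivot)]
    return not any(t)

# Single-qubit encoding: I=(0|0), X=(1|0), Z=(0|1), Y=(1|1).
I, X, Z, Y = [0, 0], [1, 0], [0, 1], [1, 1]
assert in_span_gf2([X, Z], Y)       # Y = XZ, up to phase
assert not in_span_gf2([Z], X)      # X is not generated by Z alone
```

Elimination over GF(2) runs in polynomial time, which is what makes this "logic" so much weaker than first-order arithmetic.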

For this reason, I fear that Paterek et al.’s use of the phrase “mathematical undecidability” might mislead people. The paper’s central observation can be re-expressed as follows: given an N-qubit stabilizer state |ψ〉, the tensor products of Pauli matrices that stabilize |ψ〉 form a group of order 2^{N}. On the other hand, the total number of tensor products of Pauli matrices is 4^{N}, and hence the remaining 4^{N}-2^{N} tensor products correspond to “undecidable propositions” (meaning that they’re not in |ψ〉’s stabilizer group). These and other facts about stabilizer states were worked out by Gottesman, Knill, and others in the 1990s.
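The 2^N-out-of-4^N counting can be checked by brute force for small N. Here's a sketch (my own illustration; the helper name is invented) for the two-qubit Bell state, where exactly 2² = 4 of the 4² = 16 Pauli tensor products give deterministic measurement outcomes:

```python
import itertools
import numpy as np

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = [I, X, Y, Z]

def deterministic_count(psi):
    """Count N-qubit Pauli tensor products whose measurement on psi is
    deterministic, i.e. for which psi is a +1 or -1 eigenstate
    (the paper's "decidable propositions")."""
    n = int(np.log2(len(psi)))
    count = 0
    for combo in itertools.product(paulis, repeat=n):
        P = combo[0]
        for M in combo[1:]:
            P = np.kron(P, M)
        ev = np.vdot(psi, P @ psi)   # expectation value; deterministic iff |ev| = 1
        if abs(abs(ev) - 1) < 1e-9:
            count += 1
    return count

bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)   # (|00>+|11>)/sqrt(2)
print(deterministic_count(bell))   # -> 4 of the 16 two-qubit Pauli products
```

For the Bell state the four "decidable" products are II, XX, YY (with a -1 outcome), and ZZ; the remaining twelve give random outcomes.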

(Incidentally, the paper references results of Chaitin, which do interpret variants of Gödel’s Theorem in terms of axiom systems “not containing enough information” to decide Kolmogorov-random sentences. But Chaitin’s results don’t actually deal with information in the technical sense, but rather with Kolmogorov complexity. Mathematically, the statements Chaitin is talking about have zero information, since they’re all mathematical truths.)

So *is* there a connection between quantum mechanics and logic? There is—and it was pointed out by Birkhoff and von Neumann in 1936. Recall that Paterek et al. identify propositions with projective measurements, and axioms with states. But in logic, an axiom is just any proposition we assume; otherwise it has the same form as any other proposition. So it seems to me that we ought to identify both propositions *and* axioms with projective measurements. States that are eigenstates of all the axioms would then correspond to *models* of those axioms. Also, logical inferences should derive some propositions from other propositions, like so: “any state that is an eigenstate of both X and Y is also an eigenstate of Z.” As it turns out, this is precisely the approach that Birkhoff and von Neumann took; the field they started is called “quantum logic.”
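The inference pattern "any state that is an eigenstate of both X and Y is also an eigenstate of Z" can be checked directly for Pauli observables. A small sketch (my own illustration, not Birkhoff-von Neumann's formalism): since (X⊗X)(Z⊗Z) = -(Y⊗Y), any joint +1 eigenstate of X⊗X and Z⊗Z, such as the Bell state, is forced to be a -1 eigenstate of Y⊗Y:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

XX = np.kron(X, X)
ZZ = np.kron(Z, Z)
YY = np.kron(Y, Y)

# The product rule gives the inference: (XX)(ZZ) = -YY, so any joint +1
# eigenstate of XX and ZZ is automatically a -1 eigenstate of YY.
assert np.allclose(XX @ ZZ, -YY)

bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
assert np.allclose(XX @ bell, bell)    # +1 eigenstate of XX
assert np.allclose(ZZ @ bell, bell)    # +1 eigenstate of ZZ
assert np.allclose(YY @ bell, -bell)   # hence a -1 eigenstate of YY
```

This is exactly the kind of deduction the stabilizer formalism automates.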

**Update (Dec. 8):** I’ve posted an interesting response from Caslav Brukner, and my response to his response.

Comment #1 December 3rd, 2008 at 8:48 pm

I thought you said it already had been Slashdotted.

Comment #2 December 3rd, 2008 at 10:35 pm

Technically, there are several incompleteness theorems that Gödel came up with. I'll have to read this paper to see which one they are specifically referencing. But Gödel's basic idea in all of them was, in a nutshell, that to fully prove an axiomatic system you have to step outside that system (i.e. it can't be proven internally to itself). Personally I've always thought that there were inherent similarities between Gödel's work, computational complexity, and quantum mechanics, though I'm not sure it really has anything to do with the uncertainty principle per se.

Comment #3 December 3rd, 2008 at 11:09 pm

> Gödel's Theorem is about statements that are undecidable within some formal system, despite having definite truth-values—since the statements just assert the existence of integers with certain properties, and those properties are stated explicitly

Actually, as I understand it, this is a description of Godel's proof of Godel's Theorem and not incompleteness per se. Godel constructs an undecidable first-order statement. My understanding was that there are logical statements that don't have a definite truth-value — maybe it's a theorem of Tarski? It was something about making a statement that is true in one instantiation of the axioms of logic and false in another one.

As for quantum logic, Birkhoff and von Neumann were big thinkers who were unlikely to invent quantum logic for small reasons. But I have never understood why quantum logic is important. Is the problem simply that I never learned much about it?

Comment #4 December 3rd, 2008 at 11:12 pm

> Personally I've always thought that there were inherent similarities between Gödel's work, computational complexity, and quantum mechanics

That's right, they're all fascinating to outsiders, they're all rigorous topics, and they're all hard to understand. That's three major similarities right there.

Comment #5 December 3rd, 2008 at 11:21 pm

If this paper got Slashdotted so fast, why didn't this one cause any reactions: "Measurement-Based Quantum Computation and Undecidable Logic," Foundations of Physics, May 2008?

Comment #6 December 3rd, 2008 at 11:42 pm

I have heard from Christian Calude (at this talk: http://www.youtube.com/watch?v=tYjmiT422yQ) that there are recent results relating quantum and algorithmic randomness. Do you know anything about it? If these are related, then Godel undecidability and quantum uncertainty _should_ be linked, if only because you can prove Godel's theorem via Kolmogorov complexity.

Comment #7 December 4th, 2008 at 12:07 am

Scott, here is something else I’ve wanted to ask about:

http://cs.bath.ac.uk/ag/p/BVQuantCausEvol.pdf

It purports to explain quantum causal evolution in terms of deep inference logic, a logic used for structural proof theory that normally has nothing to do with physics.

Comment #8 December 4th, 2008 at 12:24 am

Greg, Tarski's Indefinability Theorem, as I understand it, simply says you can't write down a logical expression that describes exactly the true formulas. It was actually discovered by Gödel (who didn't bother to publish it) while he was trying to prove the completeness of arithmetic and analysis (this attempt was famously unsuccessful, of course—he proved their incompleteness instead).

Chaitin proved a version of Gödel's theorem based on Kolmogorov complexity that approximately goes: the Kolmogorov complexity of some arbitrarily long formula can be arbitrarily large (e.g. you can generate such formulas with large numbers of coin flips), but the Kolmogorov complexity of any provable statement over some axiom set A is limited by the Kolmogorov complexity of A itself (since the provable statements are recursively enumerable starting from A). Calude and Jürgensen proved (iirc) that as the size of a randomly chosen true formula increases, the probability of it being provable approaches zero:

http://www.cs.auckland.ac.nz/~cristian/aam.pdf

Comment #9 December 4th, 2008 at 12:56 am

Greg: Perhaps I should have put it this way. For simple enough axiom systems, we can always get trivial "incompleteness" by finding two models and a first-order sentence that's true in one model and false in the other. E.g., "∀ x,y xy=yx" is independent of the axioms of group theory, and (in the context of the paper we're talking about) "|ψ⟩ will be found in a +1 eigenstate when measured in the σ_{x} basis" is independent of the axiom "|ψ⟩ was found in a +1 eigenstate when measured in the σ_{z} basis." In both cases, the trouble is just that we haven't provided enough information to answer the question at hand. In this light, Gödel's achievement was to show that incompleteness applies not only in the trivial situation, but even in a situation where we "know" which mathematical object we want to talk about (namely the positive integers), and might plausibly hope to give axioms that, if they don't uniquely pick out that object (which is already impossible by the Löwenheim-Skolem theorem), at least imply all first-order sentences that are true of that object.

As for Birkhoff-von Neumann: as I see it, quantum logic does give reasonable definitions of proposition, model, etc., but it suffers from one crucial drawback that cripples its interestingness (at least for me). Namely, it throws away the whole *probability* part of quantum mechanics, and considers only the lattice of subspaces of Hilbert space with the relationship of orthogonality. And that, in turn, is relatively boring in finite-dimensional Hilbert spaces (except maybe for stuff like the Kochen-Specker theorem), so to get interesting problems you immediately go to infinite-dimensional Hilbert spaces and C*-algebras, and thence even further away from the meaty quantum information problems you *should* be working on.

Comment #10 December 4th, 2008 at 1:22 am

> We demonstrate that the states of elementary quantum systems are capable of encoding mathematical axioms and show that quantum measurements are capable of revealing whether a given proposition is decidable or not within the axiomatic system. Whenever a mathematical proposition is undecidable within the axioms encoded in the state, the measurement associated with the proposition gives random outcomes.

This seems to imply that we can distinguish decidable and undecidable propositions by encoding them into quantum states and making measurements, and then checking if the output is random or not.

Comment #11 December 4th, 2008 at 1:27 am

Oleg and asdf: I was hoping to *decrease*, not *increase*, the number of requests of the form "can you give an opinion on such-and-such paper making a claim that seems specifically designed to raise your blood pressure?"

Incidentally, there seems to be a tragic disease going around, that makes people want to connect quantum mechanics with computability theory whether or not there's any connection to be made. I have a great deal of sympathy for the sufferers of this disease, and have been searching unsuccessfully for a cure for many years. Reminding people that BQP⊆EXP, and that the impact of quantum mechanics on computer science is therefore fundamentally about *complexity* rather than computability, hasn't seemed to work.

Comment #12 December 4th, 2008 at 1:31 am

Scott, sure, that’s an interesting point. However, I had also thought that there was another result in logic that if the axioms are rich enough, then there exist sentences that are true in one model but false in another model. Löwenheim–Skolem does not imply this, because the external cardinality of a model may not be expressible in a sentence in the model. Also I had thought that Tarski proved this result, but apparently that’s not true either.

If the result is not due to Tarski, then I’m not entirely sure that there truly is such a theorem.

> So to get interesting problems you immediately go to infinite-dimensional Hilbert spaces and C*-algebras

But do you actually get interesting problems in quantum logic from this? My limited understanding of quantum logic was that it is an answer in search of a question.

> thence even further away from the meaty quantum information problems you should be working on.

A question: If operator algebras are an excessively arcane topic because one could study finite quantum probability, are topological spaces and continuous measure spaces arcane because one could study finite sets?

Comment #13 December 4th, 2008 at 1:42 am

Aspiring Vulcan: Indeed we *can* distinguish decidable from undecidable propositions by making a quantum measurement and seeing if the outcome is random—provided we're working with a toy theory that's been specifically designed so that decidable propositions correspond to deterministic outcomes and undecidable propositions correspond to random outcomes! Or to put it another way, we can decide to *call* deterministic measurement outcomes "decidable propositions," and *call* random outcomes "undecidable propositions."

What we *can't* do is use quantum measurements to determine decidability in the kinds of theories that mathematicians are talking about when they talk about these things (like Peano Arithmetic or ZFC). For we know that a quantum computer can't solve undecidable problems, any more than a classical computer can: at most it can give an exponential speedup.

Indeed, even if we could use polynomial-time quantum measurements to determine decidability in *propositional* theories (rather than the more powerful first-order theories like ZFC), we'd have NP⊆BQP, which is considered extremely unlikely.

Comment #14 December 4th, 2008 at 2:02 am

Alex: The paper by Briegel and van den Nest that you refer to was actually pretty cool (unusually for papers with both "quantum" and "undecidable" in the title). However, notice that the paper works by putting together two (interesting) pieces:

(1) universality of a graph state for measurement-based quantum computing implies that the underlying graph family has unbounded rank-width, and

(2) unbounded rank-width implies an undecidable "C_{2}MS theory."

In other words, quantum computing and decidability are still not directly talking to each other, but only through the intermediary of graph theory.

Comment #15 December 4th, 2008 at 2:39 am

For the statements that can’t be proven within some system due to Godel’s result, can we at least always be convinced of their truth value (BPP style)? Or are there statements which we can’t even classify as likely or unlikely?

By the way, is there a class like MA where Merlin can solve the Halting Problem (unsolvable though it be)?

Comment #16 December 4th, 2008 at 5:13 am

Scott says:

> The impact of quantum mechanics on computer science is fundamentally about complexity rather than computability.

Nice. That is a concisely stated summary of a deep idea … a similar point of view is expressed by Wheeler's "Undeterred I persisted, and still do, in regarding Feynman's PhD thesis as marking a moment when quantum theory for the first time became simpler than classical theory."

Perhaps one modern correlate is the closely-related idea “Quantum simulation is becoming for the first time simpler than classical simulation.”

Surely a strong case can be made that this is so … the mathematical reason being that quantum simulation has stronger and more elegant invariance properties than classical simulation.

It seems (to me) that progress in all aspects of simulation theory is amazingly rapid nowadays … perhaps this is because, after all, isn’t an efficient algorithm simply a simulation of an inefficient algorithm?

Comment #17 December 4th, 2008 at 8:40 am

> That's right, they're all fascinating to outsiders, they're all rigorous topics, and they're all hard to understand. That's three major similarities right there.

Greg: cute. Seriously, the gist of what I was trying to convey was more eloquently alluded to by Scott:

> Reminding people that BQP⊆EXP, and that the impact of quantum mechanics on computer science is therefore fundamentally about complexity rather than computability, hasn't seemed to work.

To me, Gödel's theorems are really statements about complexity. In fact, one could argue that the complexity classes and their relation to each other are an example (corollary of?) Gödel's theorems.

Comment #18 December 4th, 2008 at 9:00 am

Job:

> For the statements that can't be proven within some system due to Godel's result, can we at least always be convinced of their truth value (BPP style)? Or are there statements which we can't even classify as likely or unlikely?

The Gödel sentences for standard axiom systems like PA and ZFC are true (assuming those systems are consistent, which they are). The only problem is that you need to go to a stronger axiom system to *prove* their consistency—such as ZFC for PA, or ZFC plus large cardinal axioms for ZFC.

On the other hand, people argue about the truth or falsehood of (e.g.) the Axiom of Choice and the Continuum Hypothesis. And I see no reason why that would change even if P=PSPACE and there were omniscient provers around.

> By the way, is there a class like MA where Merlin can solve the Halting Problem (unsolvable though it be)?

Even in MA, you can assume without loss of generality that Merlin can solve the halting problem—but such an ability is useless to him, over and above the ability he already has for exponential-time computation. For how can he convince Arthur of the answers?

Comment #19 December 4th, 2008 at 9:12 am

> To me, Gödel's theorems are really statements about complexity. In fact, one could argue that the complexity classes and their relation to each other are an example (corollary of?) Gödel's theorems.

Silly computer scientists! All this time, we've been struggling to figure out the relations between complexity classes, when the answers we sought were just corollaries of Gödel's theorems! Is there anything Gödel's theorems *don't* imply?

Seriously, for anyone reading this thread, I cannot recommend the following book in strong enough terms:

*Gödel's Theorem: An Incomplete Guide to its Use and Abuse*, by Torkel Franzén.

Please buy it today and read it. It will be good for your soul.

Comment #20 December 4th, 2008 at 9:26 am

Scott:

> I was hoping to decrease, not increase, the number of requests of the form "can you give an opinion…"

Your expert opinion on provocative non-expert papers/articles/claims is precisely what makes this blog worth reading for the general public curious about complexity and QM. This is a good follow-up for your lecture notes as well (as a kind of "field study"). So, please keep posting such entries: even if it's boring to comment on a paper screaming incompetence from its title, the educational value of such a discussion should outweigh the time you spend writing the post. Also, notice that the quality of comments under such entries is normally much higher than average.

Comment #21 December 4th, 2008 at 9:32 am

Thanks, Stas—I'll try my best! It's just that there are *so many* strange claims being posted on the arXiv / Slashdotted / given cover stories in *New Scientist*, and only one life to live…

Comment #22 December 4th, 2008 at 9:49 am

Greg:

> I had also thought that there was another result in logic that if the axioms are rich enough, then there exist sentences that are true in one model but false in another model.

I think you're thinking of Rosser's Theorem. The Rosser sentence ("If this sentence is provable in F, then there's a shorter proof of its negation") is still objectively true or false, being a statement that a particular Turing machine doesn't halt—it's just that F can't prove it either way, assuming it's consistent. (Indeed, if F is consistent, then F can neither prove nor disprove the Rosser sentence, and therefore the Rosser sentence is *true*.)

> But do you actually get interesting problems in quantum logic from this? My limited understanding of quantum logic was that it is an answer in search of a question.

My limited understanding basically agrees with yours! But I remember going to a talk where the speaker mentioned nontrivial classification problems in quantum logic that were only solved after considerable effort. Anyone know what those problems might be?

> A question: If operator algebras are an excessively arcane topic because one could study finite quantum probability, are topological spaces and continuous measure spaces arcane because one could study finite sets?

My basic criterion is not arcaneness, but meatiness. And there are obviously meaty problems in topology and analysis. On the other hand, generalizing things to infinite dimensions just because you can does seem to put you at high risk of protein deficiency.

Comment #23 December 4th, 2008 at 10:26 am

> the answers we sought were all just corollaries of Gödel's theorems!

Scott,

I don't think that being a corollary of Gödel's theorems necessarily *solves* anything. I simply think that if you accepted it as a premise, it might lead to some interesting conclusions. I'm not defending the paper this discussion began with, by the way, since I have not had the time to read it yet. I'm simply commenting based on my experience, particularly from the standpoint of having taught such an (oddly) wide variety of courses (from real analysis to quantum cryptography).

As for the book, I will look for it and read it. Sounds interesting. I have read several others on Gödel's theorems. And I'm a person who despises things taken out of context, so I understand your frustration (I deal with philosophers on a daily basis – 'nuff said).

Comment #24 December 4th, 2008 at 10:42 am

Greg Kuperberg,

you're thinking of Goedel's completeness theorem, that the provable sentences are exactly the ones true in every model. Equivalently, every consistent system has at least one model.

Comment #25 December 4th, 2008 at 10:50 am

Scott:

> there are so many strange claims being posted on the arXiv / Slashdotted / given cover stories in New Scientist, and only one life to live…

arXiv is indeed too polluted, but Slashdot and New Scientist do not touch theoretical computer science that often, so it should be manageable to blog responses to their articles.

Comment #26 December 4th, 2008 at 11:26 am

One of the great things about quantum info/quantum computing is how many well-defined open problems there are. There's Werner's list online, Scott's most annoying list, etc. In addition to all those yes/no open questions, there are a lot of questions where some improvement is very important, like lowering the threshold, developing better error-correcting codes, and so on. Coming up with new problems, or better yet, solutions to questions nobody even asked, is even better, of course. But with such a wealth of interesting questions, it's too bad that there are still a lot of papers that just do philosophy (all those papers explaining how some loophole was overlooked by Bell, etc.). Certainly one can't quantify what makes a result interesting, as there are many results that are only interesting once you know that they're possible. But people in quantum info are lucky to have so many clear problems compared with most other fields of physics, so take advantage of what you've got!

Comment #27 December 4th, 2008 at 11:58 am

Matt: Well said! Thanksgiving is past, but let us give thanks anyway for meaty open problems.

Comment #28 December 4th, 2008 at 12:24 pm

> you're thinking of Goedel's completeness theorem, that the decidable sentences are exactly the ones true in every model.

Maybe I am, but it needs more clarification. It seems to contradict the statement that Goedelian statements are objectively true on the argument that a solution to a Diophantine equation always serves as a proof that it has one.
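The witness point above can be sketched in miniature: a claimed solution to a Diophantine equation is a finite, mechanically checkable certificate that a solution exists (a toy illustration; the equation and helper names are my own invention):

```python
# A claimed integer solution to a Diophantine equation is a finite certificate
# of the Sigma_1 sentence "a solution exists": anyone can verify it by direct
# substitution, no axioms required.
def checks(eq, witness):
    """Verify a claimed integer solution by direct substitution."""
    return eq(*witness) == 0

def pythagoras(x, y, z):        # x^2 + y^2 - z^2 = 0
    return x * x + y * y - z * z

assert checks(pythagoras, (3, 4, 5))       # a valid witness
assert not checks(pythagoras, (1, 2, 3))   # not a solution
```

The asymmetry is that *non*-existence of a solution has no such finite certificate in general.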

> On the other hand, generalizing things to infinite dimensions just because you can does seem to put you at high risk of protein deficiency.

Actually I can't think of any examples of this principle. Certainly operator algebras have plenty of meaty questions, although I agree that they aren't computer science and they aren't quantum logic either.

On the other hand I can think of examples where a narrow insistence on finiteness is protein deficient. For instance there is the topic of topological spaces which are finite sets.

> One of the great things about quantum info/quantum computing is how many well-defined open problems there are.

Yes, and I particularly agree that it's great that this menu of open problems co-opts the sawdust rations that comprise most of quantum philosophy.

Comment #29 December 4th, 2008 at 2:30 pm

> Actually I can't think of any examples of this principle.

Alright, I'll take the bait:

1. The usual undergrad probability course seems to spend much more time on σ-algebras than on how to upper-bound the probabilities of bad things happening to you. (I don’t mean to say σ-algebras shouldn’t be there—but mastery of tail bounds, discrete random walks, etc. would certainly have been more useful to me.)

2. In game theory, people seem to have studied games with one agent for every real number long before they studied (e.g.) graph games or approximate equilibria. (I’ve seen similar examples of such infinophilia all over the place in the economics literature, for instance with Aumann’s agreement theorem.)

3. When I go through approximation theory books, I usually find more material on how to generalize simple least-squares/convexity facts to arbitrary metric spaces, than on how to lower-bound the degrees of interesting polynomials.

4. There’s another “finite topology,” besides the study of the set of subsets of {1,…,n}: graph theory!
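On point 1, here's a minimal example of the kind of tail bound in question: Hoeffding's inequality for fair coin flips, checked against a Monte Carlo estimate (a sketch of my own; the parameters and helper names are arbitrary):

```python
import math
import random

# Hoeffding's inequality for n fair coin flips:
#     Pr[ #heads >= n/2 + t ] <= exp(-2 t^2 / n).
def hoeffding_upper(n, t):
    return math.exp(-2 * t * t / n)

def empirical_tail(n, t, trials=5000, seed=0):
    """Monte Carlo estimate of Pr[#heads >= n/2 + t]."""
    rng = random.Random(seed)
    hits = sum(
        sum(rng.randint(0, 1) for _ in range(n)) >= n / 2 + t
        for _ in range(trials)
    )
    return hits / trials

n, t = 100, 15
print(empirical_tail(n, t), "<=", hoeffding_upper(n, t))
```

Nothing infinitary is needed: the sample space is finite, and the bound exp(-2·15²/100) ≈ 0.011 comfortably dominates the empirical frequency.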

Comment #30 December 4th, 2008 at 5:06 pm

> The usual undergrad probability course seems to spend much more time on σ-algebras than on how to upper-bound the probabilities of bad things happening to you.

First, what you're objecting to is abstraction rather than infinitude. A sine curve is an infinite object, while a chain complex is a fairly abstract but often finite object.

Second, I don’t think that your characterization of “usual” undergraduate probability courses is reasonable. For example, here is the homework in ours.

http://www.math.ucdavis.edu/~tracy/courses/math135A/HWAssignments/HWAssignments.html

Maybe you’re thinking of Cornell and MIT as “usual”; or maybe you really have in mind graduate probability?

Comment #31 December 4th, 2008 at 5:08 pm

I agree though that any given topic in math should be treated with the right amount of abstraction, and that it’s easy to have too much or too little — or even too much and too little at the same time!

Comment #32 December 4th, 2008 at 7:08 pm

> Alright, I'll take the bait: …

Yes! I completely share the frustration you express in points 1-3.

Comment #33 December 4th, 2008 at 9:17 pm

Greg Kuperberg:

> Actually I can't think of any examples of this principle [that increasing dimension "trims meat"]

Hmm … Nobelist Frank Wilczek discusses a good example in his 27 November *Nature* article "Particle physics: Mass by numbers" … Wilczek's example being that, in practice, hadron masses can be accurately calculated only on (what amount to) small-dimension tensor-product state-spaces, but *not* on infinite-dimensional Hilbert spaces.

As Wilczek puts it: "The accurate, controlled calculation of hadron masses is a notable milestone. But the fact that it has taken decades to reach this milestone, and that even today it marks the frontier of ingenuity and computer power, emphasizes the limitations of existing methodology and challenges us to develop more powerful techniques. QCD is far from being the only area in which the challenge of solving known quantum equations accurately is crucial. Large parts of chemistry and materials science pose similar mathematical challenges."

“There have been some remarkable recent developments in the simulation of quantum many-body systems, using essentially new techniques. Can the new methods be brought to bear on QCD? In any case, it seems likely that future progress on these various fronts will benefit from cross-fertilization.”

“The consequences could be enormous. To quote Richard Feynman: ‘Today we cannot see whether Schrödinger’s equation contains frogs, musical composers, or morality — or whether it does not. We cannot say whether something beyond it like God is needed, or not. And so we can all hold strong opinions either way.'”

——

Wilczek's reference is to a 2004 (?!) arxiv preprint by Verstraete and Cirac. So … infinite dimensions? Heck, if Wilczek's speculations are right, perhaps we don't even really need EXP dimensions!

Comment #34 December 4th, 2008 at 11:46 pm

Amen to Greg contra Scott.

I would add that complaints about infinitary formalisms such as sigma-algebras are usually the manifestation of irrational cultural prejudice — a form of philistinism, if I may say so. I'm not sure that Scott in particular is necessarily afflicted by this, but there are plenty who are.

The work of Bayesian theorist E.T. Jaynes, for example, is in my opinion significantly marred by his neo-Kroneckerian hissyfits. (To make it even worse, he apparently sought to libelously blur the distinction between ordinary mainstream mathematics and the frequentist school of probability.)

It's not as if sigma-algebras are particularly difficult. I mean, seriously: "You're allowed to take countable unions and complements" — this is supposed to be the height of recondite abstraction?

Comment #35 December 5th, 2008 at 1:10 am

> I would add that complaints about infinitary formalisms such as sigma-algebras are usually the manifestation of irrational cultural prejudice — a form of philistinism, if I may say so.

Well, you can have your fancy-pants elitist σ-algebras. Here in the *real* America, we like our steaks sizzling, our beers cold, and our probability spaces finite.

Comment #36 December 5th, 2008 at 1:20 am

> I would add that complaints about infinitary formalisms such as sigma-algebras are usually the manifestation of irrational cultural prejudice — a form of philistinism, if I may say so.

Well, let's not get carried away with the point. I know Scott moderately well, and in his case it's as simple as that he is sometimes allergic to pretension. And shouldn't we all be?

It’s not as if sigma-algebras are particularly difficult.

Umm…it’s true that the axioms of a sigma-algebra are fairly natural. But the implications of those axioms are not so easy, at least not for me.

Comment #37 December 5th, 2008 at 1:49 am

It seems to contradict the statement that Goedelian statements are objectively true on the argument that a solution to a Diophantine equation always serves as a proof that it has one.

“Objectively true” refers to the actual natural numbers, the numbers that we can represent by iterating the successor operation, rather than other models, which are still generated by the successor in the sense that induction holds, but which have extra elements. But the Peano induction axiom is the only way to axiomatize this, so the sentences can’t distinguish the primitive natural numbers from other models.

Sigma-algebras:

I have no idea what the point of sigma-algebras is. My impression is that for lone probability spaces, they’re a terrible formalism, but that filtered sigma-algebras are actually useful for something. But, if this is true, it could be illustrated in the finite case, for, say, a discrete random walk with finite duration.

Comment #38 December 5th, 2008 at 5:51 am

I know Scott moderately well, and in his case it’s as simple as that he is sometimes allergic to pretension. And shouldn’t we all be?

Perhaps, but we have to be able to distinguish the pretentious from, say, the merely exotic — which is where the rub lies. I personally am as allergic to mundanity as Scott is to pretension.

Umm…it’s true that the axioms of a sigma-algebra are fairly natural. But the implications of those axioms are not so easy, at least not for me.

Is this supposed to be a bad thing? Couldn’t one say the same about the axioms of any important mathematical structure?

Comment #39 December 5th, 2008 at 11:17 am

Is this supposed to be a bad thing? Couldn’t one say the same about the axioms of any important mathematical structure?

In the case of sigma-algebras, the situation is not optimal. Classifying or analyzing sigma-algebras in an infinite context is hard, but it’s not all THAT exciting: Most of the sigma-algebras that are used in analysis are isomorphic to each other. The main use of a sigma-algebra (to answer Doug Knight’s question) is as a tool in the rest of analysis. In particular, the sigma-algebra generated by open sets in a manifold (the Borel algebra) is a convenient and convincing way to define volume and integrals; it’s not as buggy as Riemann sums. The complexity of the sigma-algebra itself is then something of a distraction.

To get back to Doug’s question again, you really need sigma-algebras to make sense of a notion such as a probability distribution on the set of real numbers. In order to characterize how someone might pick a real number x at random, you should be able to give the probability that x lies in any closed interval [a,b]. But then this function P[x in [a,b]] is not just any list of real numbers between 0 and 1; it has to satisfy certain consistency conditions. Two other questions arise immediately: What other sets S have the property that the probability that x is in S is well-defined? What if you used some other collection of sets to define the distribution in the first place; would that be equivalent? A good way to address these questions is to look at the sigma-algebra generated by the closed intervals or whatever other class of starting sets. That is, the sets with defined probabilities should be closed under Boolean algebra and countable unions.
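In the finite case (where countable unions collapse to finite ones), the closure operation described above can even be carried out by brute force. Here is a toy Python sketch — not anything from the comment itself, just an illustration of “closed under complements and unions” on a tiny ground set:

```python
from itertools import combinations

def generated_algebra(omega, generators):
    """Smallest family containing `generators` that is closed under
    complement and (finite) union -- on a finite ground set this is
    exactly the sigma-algebra generated by the starting sets."""
    omega = frozenset(omega)
    family = {frozenset(g) for g in generators} | {frozenset(), omega}
    changed = True
    while changed:
        changed = False
        for s in list(family):
            c = omega - s  # complement
            if c not in family:
                family.add(c)
                changed = True
        for a, b in combinations(list(family), 2):
            u = a | b  # union
            if u not in family:
                family.add(u)
                changed = True
    return family

# Ground set {1,2,3,4} with one generator {1,2}: the generated algebra
# is {{}, {1,2}, {3,4}, {1,2,3,4}}.
alg = generated_algebra({1, 2, 3, 4}, [{1, 2}])
print(len(alg))  # prints 4
```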

There is also a notion of an abstract sigma-algebra (or sigma-field) which is also important. If you are doing probability, you could recognize that the underlying set of points actually plays no essential role. What you really have in a probabilistic system is some collection of Boolean assertions (“events”) that have probabilities. Again, the set of viable assertions should be closed under finite Boolean operations and countable OR (or equivalently countable AND). So that makes a sigma-algebra.

In fact, this second viewpoint can be used to motivate quantum probability. You can just as well use complex-valued random variables as Boolean variables in your characterization of a probabilistic system. There is a trustworthy set of axioms for a consistent set of complex random variables due to von Neumann. In classical probability such an algebra must be commutative; if you drop commutativity you get exactly quantum probability.

Comment #40 December 5th, 2008 at 1:25 pm

Scott,

I just saw this http://science.slashdot.org/science/08/12/05/1446205.shtml on Slashdot. Is it workable? Haven’t started reading the paper yet, but of course, if it be true, it would be significant for very practical reasons.

Comment #41 December 5th, 2008 at 1:40 pm

Saying “you should deal with Borel sets” is rather different than saying “you should deal with Borel sets as the sigma-algebra generated by the open sets” or “you should deal with abstract sigma algebras.”

Was Bourbaki’s famous failure in measure theory trying to get away with Borel sets, when abstract sigma-algebras are necessary? Why are they necessary? Is it that the infinite product of locally compact spaces is not locally compact and the product sigma-algebra is different from Borel sets on the topological product? That a construction from topology is wrong and a construction from sigma-algebras is right is not enough to say that sigma-algebras are better. You either need that no construction from topology works or that the product sigma-algebra is a natural construction. (I expect that both of these are true.)

Are you talking about von Neumann algebras? Even if there is some relation to sigma-algebras, that does not provide much evidence that we should use sigma-algebras; perhaps we should use von Neumann algebras instead!

Incidentally, for many purposes of analysis, as opposed to probability, my impression is that it is best to avoid Borel & measurable sets and just stick with the Riesz representation theorem, define L^2 as the completion of continuous functions, etc.

Comment #42 December 5th, 2008 at 3:07 pm

Drew: Yes, that’s a real algorithm, and a very interesting one! However, there’s a crucial caveat (isn’t there always?). Given a (well-conditioned) linear system of the form Ax=b, in logarithmic time the algorithm can prepare a log(n)-qubit superposition over the components of the solution vector x. It does not give you the components (x_{1},…,x_{n}) explicitly. Indeed, it can be shown that any quantum algorithm for the latter task requires Ω(√n) steps (anyone know a better lower bound?).

There are 76 comments so far on the Slashdot article, and as far as I can tell, not one of them mentions this point.

Comment #43 December 5th, 2008 at 3:17 pm

Update: Caslav Brukner (one of the authors of the paper) was kind enough to send me the following email, and to give me permission to share it on this blog.

Dear Scott,

I have come across your recent blog entry about our paper on “Mathematical undecidability and quantum randomness”.

You are right that this paper can be misinterpreted by the general audience – and it unfortunately seems to have happened already. On the other hand, we feel that your criticism does not treat our text in a totally fair way. We explicitly state that mathematical undecidability is a general feature and that we use it synonymously to “logical independence” (in agreement with the literature on these subjects) and give the example of neutral geometry and Euclid’s parallel postulate. Then we mention Gödel in the context that our work is _not_ of the Gödel incompleteness type. In the introduction we say “In this paper, we will consider mathematical undecidability in certain axiomatic systems which can be completed and which therefore are not subject to Gödel’s incompleteness theorem.” How could we have been more clear than that?

We feel that it is not really fair when you write: “Well, it’s not an illustration of Gödel’s Theorem to point out that, knowing only that x=5, you can’t deduce the value of an unrelated variable y” in relation with our paper. Please note that we never said that this would be an illustration of Gödel’s (incompleteness) theorem. We explicitly say that such scenarios are about mathematical undecidability in the sense of logical independence.

The other thing which seems to bring misunderstanding is that our paper considers propositions of the type “The spin is up along x”. Since such propositions are statements about properties of physical systems, there is a priori no reason why some of them would be logically independent from others. One could then simply define some of them as “decidable” and others as “undecidable”, in the way that the first correspond to definite outcomes and the second to probabilistic ones. The line of argument in the paper is different. We consider propositions about binary functions, which are of the type “The functional value f(0)=1”, and of which some will be logically independent from others given that only a finite amount of information is available to define axioms. Only then do we show that testing (un)decidability of these propositions is equivalent to performing certain measurements on quantum spin systems.

As a physicist, I am still fascinated by the possibility of having probabilities that might not be reducible to our ignorance about some predetermined properties. But how to make sense of this? What then makes the probabilities different at all? Why are some 1/2 and the others 1/4? I feel that our paper might give some insight into this question. If one assumes that there is a fundamental limit on how much information a quantum system can carry, and this information is exhausted in defining axioms (of the type given above), then the measurement results of the experiments that correspond to logically independent propositions must give irreducibly random outcomes. The probabilities for the outcomes can then be derived (without directly invoking quantum theory) by checking whether the proposition is definite, “partially undecidable” (more-bit propositions of which only some but not all bits are definite) or “fully undecidable”. Of course, one can again say that one could have a classical explanation of the experiments (assuming that one qubit carries two classical bits) and understand the randomness again as arising from classical ignorance. The point is that one can, but need not.

I am glad that we can resolve this issue in a scientific and friendly way and I am very much looking forward to reading your next posting on the blog.

If you feel appropriate, you can use this e-mail in your postings.

Best wishes from Vienna,

Caslav

Comment #44 December 5th, 2008 at 6:27 pm

Saying “you should deal with Borel sets” is rather different than saying “you should deal with Borel sets as the sigma-algebra generated by the open sets” or “you should deal with abstract sigma algebras.”

It’s important to know that measure and integration are countably additive. It’s therefore germane that Borel sets do in fact form a sigma-algebra.

Abstract sigma-algebras are interesting to me for a couple of reasons. First, it’s important to have the concept of a measure-o-morphism between two measure spaces. For instance, the unit interval [0,1] together with standard Lebesgue measure is equivalent to a unit-circumference circle, and to the unit square. So therefore a theorem about picking a uniformly random number from 0 to 1 is not really different from a theorem about picking a uniformly random point in [0,1]^2, nor indeed from picking a uniformly random point on any manifold. In these equivalences individual points don’t matter, but of course the sigma-algebra quotiented by measure-0 sets does matter.

Second, and this is related, the view of a probability space as an abstract set of events with assigned probabilities is very important. You really need an abstract sigma-algebra in this case to know what is being discussed.

Are you talking about von Neumann algebras?

Yes, I was. I like von Neumann algebras more than I like sigma-algebras just as you do. But they are an even harder structure to study (see below).

Incidentally, for many purposes of analysis, as opposed to probability, my impression is that it is best to avoid Borel & measurable sets and just stick with the Riesz representation theorem, define L^2 as the completion of continuous functions, etc.

Well okay, but you can’t define L^2 spaces without integration, and what does that mean? My son is taking calculus right now, and his working definition of an integral is a Riemann sum. Well, Riemann sums are a constricted view of integration because they aren’t countably additive. Okay, a phrase like “countably additive” is very fancy jargon for a beginning calculus student. But he has seen the function that is 1 on the rationals and 0 on the irrationals, and it’s a point that he can grasp that that function is in fact integrable, just not by Riemann sums.
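The point about that function can be made concrete in a toy simulation. In the sketch below — mine, not from the comment — “x is rational” is modeled by x being a Python Fraction (a stand-in, since every float is technically a dyadic rational). A Riemann sum over the Dirichlet indicator visibly depends on where the sample tags are placed, whereas the Lebesgue answer is 0 because the rationals form a null set:

```python
from fractions import Fraction
import math

def dirichlet(x):
    # Toy model of "1 on the rationals, 0 on the irrationals":
    # rationality is represented by the Fraction type.
    return 1 if isinstance(x, Fraction) else 0

def riemann_sum(f, tags, width):
    # Riemann sum over [0,1] with equal-width subintervals and the
    # given sample tags, one per subinterval.
    return sum(f(t) * width for t in tags)

n = 1000
rational_tags = [Fraction(k, n) for k in range(n)]                 # all "rational"
irrational_tags = [k / n + math.sqrt(2) / 1e9 for k in range(n)]   # modeled "irrational"

print(riemann_sum(dirichlet, rational_tags, Fraction(1, n)))  # prints 1
print(riemann_sum(dirichlet, irrational_tags, 1 / n))         # prints 0.0
# Lebesgue integration gives 0 unambiguously: the rationals have measure zero.
```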

Comment #45 December 5th, 2008 at 9:09 pm

We explicitly state that mathematical undecidability is a general feature and that we use it synonymously to “logical independence” (in agreement with the literature on these subjects)…

In the introduction we say “In this paper, we will consider mathematical undecidability in certain axiomatic systems which can be completed and which therefore are not subject to Gödel’s incompleteness theorem.” How could we have been more clear than that?

One problem is that in fact the use of the term “mathematical undecidability” is not in agreement with the literature, at least in mathematics. It’s extremely uncommon in practice to use this term for anything less than Goedel-type results. Aside from that, the paper explains clearly what the results are. However, the title and abstract are definitely misleading: when I read the abstract I was incredulous, because I thought it meant arbitrary axiom systems rather than toy examples. (If I hadn’t recognized any of the authors’ names, I would have dismissed it as a likely crank paper.) After a minute of flipping through the paper, I understood what was meant, but I still felt a bit cheated. It’s a bit like Bilbo’s final riddle in The Hobbit: it is indeed undecidable, but not for any deep or interesting reason.

Comment #46 December 6th, 2008 at 12:42 am

I hadn’t actually looked at this paper by Paterek et al on “Mathematical undecidability and quantum randomness” until continuing discussion piqued my curiosity. After skimming this paper, my question is, why did they stop at mathematical undecidability? The mode of undecidability takes values in Z/2, and Z/2 is a finite simple group. They have found a new link between quantum physics and the classification of finite simple groups.

Granted, in the context of results such as the Feit-Thompson theorem, the classification of finite simple groups normally refers to non-abelian groups. It would be prudent to include this disclaimer in the introduction. But abelian finite simple groups are also fundamental.

Comment #47 December 6th, 2008 at 2:34 am

From the sigma-algebra point of view that points don’t matter, you should discard the characteristic function of the rationals. Certainly that is what happens in functional analysis, such as in von Neumann algebras. If you define L^2 as Cauchy sequences of Riemann-integrable functions, it’s not at all obvious that the characteristic function of the rationals is in L^2. I’m not entirely sure that it means anything, but I think that it does. But why should we care if this function is missing (rather than zero)? We have to live without non-measurable functions. I have some sympathy for the constructivist point of view that the characteristic function of a point makes Cantor’s paradise too exotic.

The comment about your son was helpful, but I’m not sure I can explain why. It is useful to be able to say “I can deal with that function” before going on to ignore it.

Comment #48 December 6th, 2008 at 4:22 am

If you define L^2 as Cauchy sequences of Riemann-integrable functions,

Or equally well Cauchy sequences of continuous functions. Lebesgue integration at least affords the convenience that you don’t need Cauchy sequences.

For that matter, even in the context of standard freshman calculus, Lebesgue integration obviates the awkward construction of improper integrals. Any absolutely convergent improper integral is automatically okay; and the conditionally convergent ones are not entirely trustworthy in any formalism.

Comment #49 December 6th, 2008 at 1:44 pm

Any absolutely convergent improper integral is automatically okay; and the conditionally convergent ones are not entirely trustworthy in any formalism.

What about the Henstock-Kurzweil integral?

Comment #50 December 6th, 2008 at 2:36 pm

Using Lebesgue integrable functions to define L^2 avoids sequences, but you still have to take equivalence classes, which seems to me to be the main cost of using sequences.

Improper integrals are a real advantage of Lebesgue integrals over Riemann integrals, but there are many logically independent ideas in Lebesgue integration that allow one to interpolate between the two. One is to start by integrating positive functions and only define the integral of a general function as the difference of the signed parts. That very minor change solves the problem of improper integrals.

Going a step further, one could also eliminate the apparent number of limits in the improper integral by defining the integral of a positive function to be the sup of the integral of compactly supported piecewise constant functions that are less than the integrand. When the function is bounded and compactly supported, this is a pretty minor change from some versions of the Riemann integral, at least once one has made the first change and restricted to positive functions.

You would get the same answer if you started with regions of finite Jordan content instead of segments (or rectangles), but this avoids the extra limits needed to define Jordan content or Lebesgue measure. I don’t like that extra limit, but the pro-Lebesgue position seems to be that it’s locked up in the black box of the measure and can be ignored. Or that the limit is needed to prove the existence of the measure, but it can be defined and manipulated by a universal property. (I guess I advocated one such universal property above, with the Riesz representation theorem.)

If one prefers sigma-algebras over von Neumann algebras, events over random variables, then it makes sense to construct measure before integration. But if one has the opposite preference, why bother with measure theory?

Comment #51 December 6th, 2008 at 3:41 pm

Doug,

It’s not that I’m any huge fan of measure theory. On the contrary, since I study quantum information theory, I like von Neumann algebras better than I like sigma-algebras. All that I’m saying is that some moderate amount of standard graduate measure theory truly is very convenient for analysis. Yes, there are viable alternatives, but they don’t strike me as particularly more convenient.

Any decent treatment of measure and integration, say in the plane, should address these concerns:

1) The area of a rectangle is what you think it is.

2) The area of a numbered sequence of disjoint sets should be the sum of the areas.

3) If a lower bound and an upper bound for the area of a set agree, then the set has a well-defined area.

4) 0 is a lower bound for any area.

Since these are all desirable properties of any conception of area, why not assume them as axioms? That’s all that Lebesgue measure is. Lebesgue integration is then defined as the area under the curve for a positive function (which is after all a freshman calculus motivation of an integral), and for a general function by demanding that integration is linear.

It isn’t more complicated than Riemann sums, and it’s clearly more convenient. The new and subtle step is defining integration using axioms instead of with an explicit formula. Well, even in analysis, not just in abstract algebra, sometimes you should list axioms.

What about the Henstock-Kurzweil integral?

The Henstock-Kurzweil integral is one of several completely reasonable methods to control some conditionally convergent integrals. When I said that conditional integrals are never completely trustworthy, I also had in mind that they are usually somewhat trustworthy; and Henstock-Kurzweil is one way to see that. But you can also see in that definition that conditional integrals are negotiable: the value of some integrals depends on the Henstock-Kurzweil “gauge”.

Actually there is a fun variation of this issue that I encountered when teaching graduate analysis: summing a series or integrating a function that takes values in a Banach space. A sum or an integral can then be unconditionally convergent without being absolutely convergent.

Comment #52 December 7th, 2008 at 6:31 pm

Greg Kuperberg asks Scott:

Syllogism:

1. Scott is a computer scientist.

2. Computer scientists believe that anything continuous is arcane.

3. Therefore …

In my experience, both (1) and (2) are true in the ordinary English-language sense. However, this says nothing about Scott’s beliefs one way or the other.

Comment #53 December 7th, 2008 at 6:51 pm

Computer scientists believe that anything continuous is arcane.

Someone needs to tell Lenore Blum.

Comment #54 December 8th, 2008 at 5:54 am

I sent Caslav Brukner the following response:

Dear Caslav,

Thanks very much for your gracious message, and sorry for the delay in responding. As I’m sure you know, blog discussions have a tendency to get out of hand, so in writing my entry, I made a conscious effort to keep things on a friendly and scientific plane. I’m grateful that you’ve responded in a similar spirit.

With your permission, I’ve posted your email (in its entirety) in the comments section of my blog. I’ll also link to it from the main entry.

I’m glad you don’t claim that the incompleteness you consider is a manifestation of Gödel’s Theorem. My worry is that sentences like the following, in the abstract:

“Our results support the view that quantum randomness is irreducible and a manifestation of mathematical undecidability.”

might practically be begging unsophisticated readers to make such a connection (as indeed seems to have happened on Slashdot). For me, at least, the phrase “mathematical undecidability” suggests one is talking about logical systems of some reasonable level of expressive power. If it isn’t — if the phrase can mean undecidability in any logical system, even one whose only inference rule is bitwise XOR — then I have trouble making sense of claims like the one above.

I acknowledge your point regarding the propositions being “about” Boolean functions rather than the quantum states themselves; I should have been clearer about that.

However, something now occurs to me. If your goal is just to have some state that lets you decide the truth or falsehood of statements like f_{1}(0) + f_{1}(1) + … + f_{n}(1) = 0, with bounded probability of error, then at least asymptotically you can do just as well with a randomly-chosen classical string instead of a quantum state! For as Greg Kuperberg independently observed, what’s really going on is less about logic than group theory. Namely, you have a fixed group G (in your case, G=Z_{2}^{2n}), as well as some subgroup (or coset of a subgroup) H of G, and you want a quantum state that lets you decide, for a given element x∈G, whether or not x∈H (which in your case corresponds to whether or not the proposition is decidable). As pointed out in this paper of Watrous, it suffices to be given O(1) copies of an equal superposition |H⟩ over the elements of H. One can then solve this problem with bounded probability of error. However, if G is abelian (as it is in your case), then it also suffices to be given O(1) random samples y_{1},…,y_{k} from the dual group G/H. Given an x, one can then compute the inner product between x and each y_{i}, and output “x∈H” if and only if each of the inner products is 0.

In other words, when the “decidability group” G of the propositions is an abelian group, quantum gives no asymptotic savings over classical in the encoding size. By contrast, when the group is non-abelian, it’s possible that one could get up to a quadratic savings. For in the quantum case, it still suffices to send |H⟩ (requiring log|G| qubits), but in the classical case, it might be necessary to send a list of generators of H (requiring ~log^{2}|G| bits). Proving this separation is a wonderful open problem in quantum communication complexity that I and some others have thought about.

All the best,

Scott Aaronson
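For what it’s worth, the classical protocol for the abelian case described in the email above can be simulated directly. Here is a toy Python sketch, with made-up generators for H and a brute-forced dual group (feasible only for tiny m); the one-sided-error behavior is exactly the point of the O(1)-samples argument:

```python
import random
from itertools import product

m = 6  # work in G = Z_2^m (a toy stand-in for the paper's Z_2^{2n})

def dot(a, b):
    # Inner product mod 2.
    return sum(x * y for x, y in zip(a, b)) % 2

# H = span of two (made-up) generators in Z_2^6.
gens = [(1, 0, 1, 0, 1, 0), (0, 1, 1, 0, 0, 1)]
H = set()
for c1, c2 in product([0, 1], repeat=2):
    H.add(tuple((c1 * g1 + c2 * g2) % 2 for g1, g2 in zip(*gens)))

# The "dual samples": elements orthogonal (mod 2) to everything in H.
H_perp = [y for y in product([0, 1], repeat=m)
          if all(dot(y, h) == 0 for h in H)]

def looks_like_member(x, k=20):
    """Declare x in H iff x is orthogonal to k random dual samples.
    One-sided error: every member passes; a non-member slips through
    with probability 2^-k, since half the dual samples detect it."""
    return all(dot(x, random.choice(H_perp)) == 0 for _ in range(k))

assert all(looks_like_member(h) for h in H)   # members always pass
```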

Comment #55 December 9th, 2008 at 4:50 pm

I like von Neumann algebras more than I like sigma-algebras just as you do. But they are an even harder structure to study.

I wonder if anyone is aware of a gentle introduction to von Neumann algebras in the context of quantum mechanical systems. I would like to understand this approach. Thanks.

Comment #56 December 9th, 2008 at 6:19 pm

Hi Richard. I’ve been trying to write one into my intro notes for quantum probability, which are on my home page. But this project (like everything else that I’m doing lately) is beset by procrastination.

Here is the way that you get started. Let’s ask abstractly what requirements we might want for an algebra of complex-valued random variables. A priori there is no Hilbert space. First it should be an algebra over ℂ: a vector space over ℂ, closed under addition and multiplication, etc. Second it should have an anti-linear multiplication-reversing involution *, so that we can say that if a = a^*, then a is a real-valued random variable. A structure like this is called a *-algebra and there are a lot of examples.

To go further with this *-algebra A, it helps to say a bit more about our intentions. If a is a real random variable and a^2 = a, then it will be 0-1 valued and can be interpreted as a Boolean random variable. The notion of a state or a distribution in this sketched picture will be given by some function rho(a), at least for Boolean random variables, that assigns probabilities to all Booleans. It naturally extends linearly to the expectation functional for all a. We want probabilities to be positive numbers (an event cannot happen less than none of the time), and this requires a positivity condition on A itself. A natural condition is that if a^*a = 0, then a = 0. A *-algebra with this property is a positive *-algebra.

If A is a finite-dimensional positive *-algebra, then by a structure theorem it is already quantum probability as you’ve seen it. That is, A has to be isomorphic to a direct sum of matrix algebras, so that * is the Hermitian transpose of each matrix. If A happens to be commutative, then it is standard finite classical probability. At the other end if A is as noncommutative as possible, then it is a single matrix block and has a Hilbert space and so on. I have a paper on noiseless coding theorems (generalizing Shannon’s, Schumacher’s, and Nayak’s theorems) for all intermediate cases in this finite-dimensional setting.

If A is infinite-dimensional, then it is a standard precept that infinite-dimensional vector spaces in general should have topologies if you want to do much analysis with them. So our *-algebra needs more axioms. It should be a Banach space so that you can talk about the limit of a sequence or series of random variables in A. In order to make this Banach space structure relevant to the rest of the structure, A should be a Banach algebra and, even better, a C^* algebra. This is the axiom that ||a^*a|| = ||a||^2, and it plainly generalizes the positivity condition that I had before. This axiom has a lot of consequences about the structure of A. It implies that the elements of A should be thought of as bounded random variables and that ||a|| is the maximum norm of the values of a as a random variable. It also implies that if f is any univariate continuous function and a is a self-adjoint element of A, then f(a) is another well-defined element of A. It also implies that the norm is rigid. If you strip the norm from A, there is only one way to put it back.
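In the finite-dimensional (matrix) case the C^*-identity ||a^*a|| = ||a||^2 can be sanity-checked numerically; here is a minimal numpy sketch (the 5×5 size is arbitrary). It works because a^*a is Hermitian positive semidefinite, so its operator norm is the square of the largest singular value of a:

```python
import numpy as np

rng = np.random.default_rng(0)
# A random element of the C*-algebra of 5x5 complex matrices.
A = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))

def op_norm(M):
    # Operator (spectral) norm = largest singular value.
    return np.linalg.norm(M, 2)

lhs = op_norm(A.conj().T @ A)  # ||a* a||
rhs = op_norm(A) ** 2          # ||a||^2
assert np.isclose(lhs, rhs)    # the C*-identity holds
```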

If A is a commutative C^*-algebra, then (by another structure theorem, the Gelfand-Naimark theorem) it has to be the algebra of continuous functions on some (compact, Hausdorff) topological space. C^*-algebras can be motivated as quantum probability, but they end up being something slightly different, non-commutative topology.

Since C^*-algebras are non-commutative topology, they do not have to have non-trivial Boolean elements. (In the commutative case, a Boolean is a continuous 0-1 valued function, so its existence means that the space is disconnected.) There is also no notion of countable additivity of Booleans even if you find them. So something is still missing.

An inspired choice for the final axiom that gets you full quantum probability in the infinite case is that the algebra A should be the dual of another Banach space, i.e., it should have a predual. Then it is a von Neumann algebra. It has a lot of Booleans. In fact if a is a self-adjoint element of A and f is integrable, then f(a) is in A; this is a big source of Booleans.

Von Neumann proved that if M is a commutative von Neumann algebra, then it is the algebra of bounded, measurable functions on a measure space. You don’t quite get every measure space this way, but the only ones that are missing are ill-behaved. This validates von Neumann algebras as non-commutative probability.

The predual of M plays a natural role in the probability interpretation. Before, a state rho was a dual vector on the algebra. In Banach space theory, predual vectors are a subspace of dual vectors. If rho happens to be in the predual, then that is exactly the condition that rho is countably additive on Booleans instead of just finitely additive. It’s then called a “normal state”, although this is a misleading term. It would have been clearer if a normal state had been called a state with no adjective, and the others were called abnormal states.

An example of a von Neumann algebra is all of the bounded operators B(H) on a (typically infinite-dimensional) Hilbert space H. These are the bounded random variables. Unbounded random variables can be defined too, but they have to be treated indirectly.
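For a single qubit, the direct identification of Hermitian operators with random variables can be illustrated with a small numpy sketch (a toy of mine, not from the comment): the eigenvalues of an operator are its possible values, and its expectation in a state rho is tr(rho a).

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# State rho = |psi><psi| for |psi> = (|0> + |1>)/sqrt(2).
psi = np.array([1, 1], dtype=complex) / np.sqrt(2)
rho = np.outer(psi, psi.conj())

# X and Z are +-1-valued random variables: their eigenvalues are -1 and +1.
assert np.allclose(np.linalg.eigvalsh(X), [-1, 1])
assert np.allclose(np.linalg.eigvalsh(Z), [-1, 1])

# Expectations via E[a] = tr(rho a):
assert np.isclose(np.trace(rho @ X).real, 1.0)  # X is the constant +1 on this state
assert np.isclose(np.trace(rho @ Z).real, 0.0)  # Z is a fair +-1 coin flip
```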

This treatment has the great merit, in my view, of directly equating observables in quantum theory with random variables in probability theory. It’s better to say that a Hermitian operator is a random variable than that it yields one. It’s a direct generalization and not an analogy. For instance if you have a qubit, the familiar operators X, Y, and Z are ±1-valued Bernoulli random variables.

Comment #57 December 11th, 2008 at 3:42 pm

Hi Greg. Thanks a lot for your post. To be honest, I’ve been struggling with my understanding of it so far (and I haven’t yet looked at the other sources). Here are some dumb questions: How do random variables arise from algebras? Is it that all the elements of the algebra that are “Hermitian” have random variables associated with them (arising from the probabilities of the associated measurement)? And do the non-Hermitian elements have no meaningful random variables associated with them?—but they are included in the picture for the sake of having a nice algebraic structure?

Comment #58 December 11th, 2008 at 7:52 pm

How do random variables arise from algebras?

If x and y are two random variables, then it stands to reason that their sum x+y and their product xy are also random variables. It’s a matter of accepting some closure properties of what you can observe.

Is it that all the elements of the algebra that are “Hermitian” have random variables associated with them (arising from the probabilities of the associated measurement)? And do the non-Hermitian elements have no meaningful random variables associated with them?—but they are included in the picture for the sake of having a nice algebraic structure?

This part is not entirely tidy in the story that I presented. You can choose to make axioms for Boolean random variables, or for real random variables, or for complex random variables. A priori Boolean random variables are the most logical choice. (Pun intended.) However, a posteriori (meaning in the examples such as a qudit), the set of Boolean random variables in this style of quantum probability does not have any reasonable closure properties such as closure under addition and multiplication. A posteriori the real-valued random variables are closed under addition, but not under multiplication. If you want something closed under addition and multiplication, you should use complex random variables, but a wrinkle arises which I will get to in a bit.

Yes, this is a nice algebraic structure, but again the real motivation is to have some kind of logical closure of the set of available random variables (the set of observables). You don't want a fixed system to be open-ended in the sense of never having a final definition of what you can see.

Provisionally (by analogy with the commutative case, where there is a complex algebra of random variables), let's have a complex algebra. The * operation is then interpreted as complex conjugation, so the condition x = x^*, i.e. that x is self-adjoint, says exactly that x is real-valued. In that case, x is interpretable as an ordinary real-valued random variable.
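That self-adjointness forces real values can be seen at the level of the spectrum: a self-adjoint matrix has real eigenvalues, so the associated random variable only takes real values. A small numpy check (the random construction of x is just for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Build a generic self-adjoint element x = a + a^* from a random complex a.
a = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
x = a + a.conj().T
assert np.allclose(x, x.conj().T)  # x = x^*, i.e. x is self-adjoint

# Its spectrum is real (up to numerical noise), so the corresponding
# random variable is real-valued.
eigvals = np.linalg.eigvals(x)
assert np.allclose(eigvals.imag, 0)
```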

The remaining question is the wrinkle promised above: why not every element of the complex algebra counts as an observable. A general z can be expressed as x + iy, where x and y are both self-adjoint. The interpretation of z is that its real and imaginary parts are both real-valued random variables, but they might not be simultaneously observable. They are simultaneously observable when they commute, which is equivalent to z commuting with z^*.
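A concrete instance of this decomposition, using Pauli matrices for the self-adjoint parts (a numpy sketch; the commutator helper is my own):

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def comm(A, B):
    return A @ B - B @ A

# z = x + iy, with the self-adjoint parts recovered by
#   x = (z + z^*)/2,   y = (z - z^*)/(2i).
z = X + 1j * Z
x = (z + z.conj().T) / 2
y = (z - z.conj().T) / (2j)
assert np.allclose(x, X) and np.allclose(y, Z)

# z commutes with z^* exactly when x and y commute; here they don't,
# so the real and imaginary parts of z are not simultaneously observable.
assert not np.allclose(comm(z, z.conj().T), 0)
assert not np.allclose(comm(x, y), 0)
```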

Another way to say it (perhaps a better way) is that when you say that x is an observable random variable, you're saying that it can be witnessed by a classical, meaning commutative, observer. With the value of x in hand, that commutative observer can form any polynomial in x and x^*, so x and x^* should generate a commutative subalgebra of the algebra of all random variables, and in particular commute with each other.

Comment #59 December 15th, 2008 at 7:35 pm

The paper largely seems like another argument in support of Brukner and Zeilinger's "1 Unentangled Elementary Quantum System = 1 Bit of Information" thesis. Once you've measured your bit, you hit bedrock stochasticity. In what sense is that different from computing a Chaitin Omega (per Calude, Dinneen & Shu)?

Chaitin talks somewhere about sitting down with Zeilinger and Svozil back in the early 90s and coming up with a “Quantum Omega”. What happened to that?
