Cheers,

Brian

e_{1}=(1,0,…,0),

e_{2}=(0,1,…,0),

…

e_{n}=(0,0,…,1).

Then a superposition

|ψ> = a_{1}|1> + … + a_{n}|n>

just means the vector

a_{1}e_{1} + … + a_{n}e_{n} = (a_{1}, …, a_{n}).

From this perspective, the main function of the kets was to save you from using subscripts every time you wanted to name a vector e_{i}.
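In code (a minimal sketch using NumPy; the choice of n = 3 and the particular amplitudes are mine, purely for illustration), a superposition really is nothing more than this linear combination:

```python
import numpy as np

n = 3
e = np.eye(n)  # rows e[0], ..., e[n-1] are the standard basis vectors e_1, ..., e_n

# Illustrative amplitudes a_1, ..., a_n (squared magnitudes sum to 1)
a = np.array([0.5, 0.5, 1 / np.sqrt(2)], dtype=complex)

# |psi> = a_1|1> + ... + a_n|n>, i.e. a_1 e_1 + ... + a_n e_n ...
psi = sum(a[i] * e[i] for i in range(n))

# ... is just the vector (a_1, ..., a_n)
assert np.allclose(psi, a)
```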

The one other thing to know is that, if |v> is a column vector, then <v| is the corresponding row vector. So for example, <v|w> means the inner product between the vectors v and w, and |v><v| means the outer product of v with itself (i.e., a rank-1 matrix).
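Concretely (again in NumPy, with made-up example vectors): `np.vdot` conjugates its first argument, which is exactly the <v|w> convention, and `np.outer(v, v.conj())` gives |v><v|:

```python
import numpy as np

v = np.array([1 / np.sqrt(2), 1j / np.sqrt(2)])  # |v>, a normalized column vector
w = np.array([1.0, 0.0])                          # |w>

inner = np.vdot(v, w)          # <v|w> = sum_i conj(v_i) * w_i
outer = np.outer(v, v.conj())  # |v><v|, a rank-1 matrix

assert np.isclose(inner, 1 / np.sqrt(2))
assert np.linalg.matrix_rank(outer) == 1
assert np.allclose(outer @ v, v)  # |v><v| projects onto v, since <v|v> = 1
```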

No, there’s almost nothing scientifically original in chapter 9 of QCSD—these are mostly well-known insights within quantum information and quantum foundations. At most, there’s something “pedagogically original” in choosing to explain quantum mechanics this way.

#2: I don’t understand what you mean by ‘the state |0>’, or what it means to ‘apply’ a matrix to a state.

I suspect that what I need is a tutorial on kets and their arithmetic. I have tried in vain to find anything via online searches, so if you can point me somewhere, I’m certainly prepared to do the independent study.

I find your top-down method of deriving some QM properties quite amazing, which is what is motivating me to try to follow the math. I’m also curious as to how this was received by your peers. Is it as original and impressive as it appears to me?

Again, thanks in advance for any assistance you might be able to provide.

No, but your argument still fails as an argument for meat-chauvinism: even if we agreed the answer was no, that would not be evidence for it. Consider: you’ve given an abstract description of a machine, and based your judgment of whether it is intelligent on that description alone! The machine could be implemented in metal or in meat; that doesn’t seem to make any difference to your argument. The fact of the matter is that if you’re basing your judgment of consciousness on behavior rather than on substrate, you’re not supporting meat-chauvinism.

Except, OK, I lied slightly in the above. You didn’t actually give a description of the machine’s behavior and base your judgment on that. What you actually described was the machine’s *design process*, and I suppose you could have judged partly based on that as well. So you may in fact be judging on something other than just abstract behavior.

And yet, design process is still not substrate. It still remains true that the design process above could be implemented for a metal machine or a meat machine, so the argument does not support meat-chauvinism.

(By the way, I should point out — the whole “zeta function” part of your argument serves no purpose. Instead of “run through various domains of the zeta function”, you could have just said “run through all finite 0-1 strings in lexicographic order”.)

So maybe you’re not arguing in favor of meat-chauvinism, but just against the Turing Test? The problem is, it doesn’t work very well as an argument against that either. You’re just saying, “Yes, you could beat the Turing Test by being intelligent, but you could also beat it by coincidence!” Which is certainly true, in much the same way that 2+2 could actually be 5 and nobody’s noticed because we all have the same bug in our brains. I.e.: of course it can happen, but it’s extraordinarily unlikely. Any behavioral test you write can be beat; the question is how easy it is to beat, and how much assurance it gives us about the things that beat it.

And yes, “Any behavioral test you write can be beat” could be construed as an argument in favor of meat-chauvinism, but only if you’ve entirely missed the point. Any being-made-out-of-meat test can also be beat (though that’s more difficult). The point is that all knowledge is probabilistic, and the question is how much assurance the test gives us.

Hell, although I know you’re conscious by any reasonable standard of “knowing”, I certainly don’t know with 100% probability — and all I have to go on is your behavior! You could *be* a domain of the Riemann zeta function for all I know! Except, of course, you couldn’t; that is to say, it’s technically possible, but so unlikely as to not be worth considering.

Btw, I would recommend this post of Eliezer Yudkowsky’s, on playing “follow the improbability” with such thought experiments: http://lesswrong.com/lw/pa/gazp_vs_glut/

I bought the book on Kindle and read it (it cost $18.56, not $15.40, but it was worth it anyway).

Three points, which I may or may not have made in comments here before:

1) If the fine-structure constant, or any other dimensionless physically measurable number, is not Turing-computable as a real number, then the ordinary Church-Turing thesis is false, and so is the extended Church-Turing thesis. These numbers are polynomial-time equivalent to measurable probabilities, and although the first N bits of such a probability take exponentially long to measure (asymptotically, you need (4+delta)^N trials in order to have probability (1-epsilon) of never getting a bit wrong), you can create an exponentially “padded” version of the number which is still noncomputable classically but is computable in BQP* (a generalized version of BQP that allows us to count experiments measuring these probabilities as computational operations).
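For what it’s worth, the ~4^N scaling falls straight out of a Hoeffding/Chernoff bound. Here’s a back-of-the-envelope sketch (my own illustration; it glosses over bit-boundary edge cases and doesn’t track the exact (4+delta) constant):

```python
import math

def trials_needed(n_bits: int, eps: float) -> int:
    """Hoeffding-bound estimate of how many i.i.d. trials make the empirical
    frequency accurate to within 2^-(n_bits+1) with probability >= 1 - eps,
    which (modulo boundary cases) pins down the first n_bits bits of p."""
    t = 2.0 ** -(n_bits + 1)  # required additive accuracy
    return math.ceil(math.log(2.0 / eps) / (2.0 * t * t))

# Each additional bit multiplies the trial count by ~4, i.e. ~4^N overall
ratio = trials_needed(11, 0.01) / trials_needed(10, 0.01)
assert 3.99 < ratio < 4.01
```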

2) You discuss the difficulty of answering intelligent design advocates. Complexity theory is relevant in a less trivial way than you think, and you will do fine if you adopt the following intellectually honest 3-tier approach:

(i) It is theoretically possible that ID could be strongly supported by evidence we don’t happen to have uncovered yet (for example, if our “junk DNA” turned out to cryptographically encode “Made by Yahweh” in a robust way), and ID supporters are free to go look for it.

(ii) There is serious technical work on the origin of the genetic code, by Dembski and others, which probably needs to be answered by a complexity theorist rather than any other kind of scientist

(iii) The “irreducible complexity” argument that is most favored by IDers has a fatal flaw which opponents like Dawkins have not articulated properly: ANY system that evolves to perform complex functions, however redundant and Rube-Goldbergish it is initially, will then tend to be streamlined by evolution, which knocks out inessential parts until an irreducibly complex system remains, at which point any variant streamlined further will fail to reproduce. Irreducible complexity should be EXPECTED; you just can’t get there by only building up, you have to get there by cutting down AFTER evolving a REDUCIBLY complex system.
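The cutting-down dynamic is easy to illustrate with a toy model (entirely my own construction, not anything from the book): parts each cover some tasks, the “function” is covering all tasks, and we greedily delete any part whose loss doesn’t break the function. What remains is irreducible, in the sense that every surviving part is essential:

```python
def streamline(parts, tasks):
    """Greedily knock out any part whose removal still leaves every task covered."""
    system = dict(parts)
    for name in sorted(parts):
        covered = set().union(set(), *(v for k, v in system.items() if k != name))
        if covered >= tasks:
            del system[name]
    return system

# A redundant, Rube-Goldbergish starting system: four parts, lots of overlap
tasks = {1, 2, 3}
parts = {"a": {1, 2}, "b": {2, 3}, "c": {1, 3}, "d": {3}}
final = streamline(parts, tasks)

# Still fully functional...
assert set().union(*final.values()) >= tasks
# ...but now irreducibly complex: knocking out ANY remaining part breaks it
for name in final:
    rest = set().union(set(), *(v for k, v in final.items() if k != name))
    assert not rest >= tasks
```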

3) I was one of the people you refer to whose intuition, already in the 1980s, was that P=BPP; I remember explaining to Charles Bennett in 1985 or so why the existence of good pseudo-random number generators was likely to entail P=BPP (the alternative, that BPP properly contained P, would entail counterintuitive conspiracies preventing all possible PRNGs from being “irrelevant” to the problems in BPP \ P).
