Can quantum probability provide a new direction for cognitive modeling?, Emmanuel Pothos and Jerome Busemeyer, Behavioral and Brain Sciences

Andrew Gelman welcomes it:

http://andrewgelman.com/2013/05/15/does-quantum-uncertainty-have-a-place-in-everyday-applied-statistics/

Choice theory – in the vein of Kenneth Arrow – seems rife with dead ringers for quantum uncertainty. It's trickier to imagine what each side might want from the other.

Let X be your favorite definition of [factorization] in terms of observable characteristics. Then consider a physical system, Y, that satisfies X but “only as a robot or zombie”.

[As for philosophical zombies, a natural candidate is a lookup table that happens to include enough Q&A pairs to exhaust any realistic tester.]

Now, maybe the above can’t actually be done in our universe: [you bet] maybe [factorization] necessarily accompanies certain sorts of physical organization. But even if so, the very fact that it seems consistent to imagine it being done, suggests that the observable characteristics X can’t possibly have been what we really meant by “[factorization]”: instead we must’ve meant some other thing that accompanies X.

[In a way that’s true: what we meant by “factorization” is the ability to factorize any number, not the ability to recall from a limited set of possible answers. The only way to tell Y apart from this “mathematical zombie” is to look at how Y is constructed. That’s the obvious answer for factorization, and that’s my point for consciousness as well.]
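The lookup-table point can be made concrete with a small sketch (the code and names are mine, not from the thread): a “mathematical zombie” that only replays memorized answers is behaviorally identical to a real factorizer on any test restricted to the memorized inputs, so only inspecting the construction tells them apart.

```python
def real_factorize(n):
    """Trial division: works for any integer n >= 2."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

# The "zombie" only knows answers memorized in advance.
ZOMBIE_TABLE = {n: real_factorize(n) for n in range(2, 1000)}

def zombie_factorize(n):
    return ZOMBIE_TABLE[n]  # raises KeyError outside the memorized range

# A tester limited to the memorized range cannot tell them apart...
assert all(zombie_factorize(n) == real_factorize(n) for n in range(2, 1000))

# ...but only the real system handles a number outside the table;
# zombie_factorize(1234567) would raise KeyError.
print(real_factorize(1234567))
```

The behavioral test X is satisfied by both; what we actually meant by “factorization” is the unbounded capacity, which lives in how the system is built.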

Formulated this way, the short answer is: there is no reason they would be limited by the set of axioms they discuss.

However, I think what you really meant is: “Suppose they can be described by a set of axioms [as computationalism seems to imply] and they discuss a statement that can’t be proved within this set of axioms.” In which case I’d ask back: how did they come to discuss this statement in the first place?

So my long answer is: if they were able to discuss a statement, then by definition they can use this statement to construct a system that is able to prove it (just adopt it as an axiom), so the question is self-contradictory.
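The augmentation move is almost embarrassingly simple; here is a toy sketch (mine, with a deliberately crude stand-in for provability, where only the axioms themselves count as provable) just to pin down the step:

```python
def provable(statement, axioms):
    """Crude stand-in for provability: a statement counts as provable
    only if it is itself an axiom. Enough to show the augmentation step."""
    return statement in axioms

axioms = {"A1", "A2"}   # hypothetical axiom labels
S = "RH"                # the statement under discussion

# S is unprovable in the original system...
assert not provable(S, axioms)

# ...but once you can state S, you can form axioms | {S},
# in which S is trivially provable.
assert provable(S, axioms | {S})
```

This is also the construction invoked at the end of the thread: a true statement unprovable in one system is provable in that system augmented with the statement itself.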

Of course, you could prove that there is always some statement that they can’t prove or even discuss, but as we can select axioms at random this limitation is more theoretical than practical. Actually, in the context of what intelligence can and cannot do, I don’t think this line of thought adds anything to the simpler statement: “however long human civilization lasts, there is a set of axioms too large to be reachable.” Of course, one won’t sell a lot of books with that.

[neurorehab]

PS: our priors on Scott being able to demonstrate that copies pose some additional problems for convergence of Bayesian reasoning are starting to converge. 🙂

What you say about unprovable but true statements makes sense. However, I don’t think it changes the convergence problem. Suppose the statement they are arguing over is: “Can RH (the Riemann Hypothesis) be proved without additional axioms?”. Then all three alternatives are still possible, and no amount of additional data will force the estimates of which one is most likely to converge.
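The non-convergence claim has a simple Bayesian skeleton, which can be sketched as follows (my illustration; the three hypothesis labels and the priors are made up). If every observation is equally likely under each alternative, updating leaves the posterior exactly at the prior, no matter how much data arrives:

```python
# Hypothetical labels for the three alternatives, with arbitrary priors.
priors = {"provable": 0.5, "refutable": 0.3, "independent": 0.2}

def update(posterior, likelihoods):
    """One step of Bayes' rule: multiply by likelihoods, renormalize."""
    unnorm = {h: posterior[h] * likelihoods[h] for h in posterior}
    z = sum(unnorm.values())
    return {h: p / z for h, p in unnorm.items()}

posterior = dict(priors)
for _ in range(10_000):
    # Each observation is uninformative: equal likelihood under every hypothesis.
    posterior = update(posterior, {h: 0.5 for h in posterior})

# After 10,000 observations the posterior still equals the prior.
assert all(abs(posterior[h] - priors[h]) < 1e-9 for h in priors)
print(posterior)
```

The point is that convergence of Bayesian estimates requires data whose likelihood differs across the hypotheses; for a question like the provability of RH, ordinary observations supply no such discriminating evidence.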

[Unrelated – what kind of neuroscience do you do?]

I fully agree that “true” and “makes sense” are not always found together where QM is concerned, especially since “makes sense” seems to be time-varying.

For example, when Bell’s inequality was first introduced, folks said “That makes no sense!”, and did not believe the quantum violation until it was measured. But now when people propose hidden-variable theories, folks say “That makes no sense! It does not account for Bell’s inequality!”.
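For concreteness, here is a small numeric sketch (my addition) of the CHSH form of Bell’s inequality: any local hidden-variable theory satisfies |S| ≤ 2, while the standard singlet-state correlation E(a, b) = −cos(a − b), at the usual measurement angles, reaches |S| = 2√2.

```python
import math

def E(a, b):
    """Quantum correlation for spin measurements at angles a and b
    on a singlet (maximally entangled) pair."""
    return -math.cos(a - b)

# Standard CHSH angle choices.
a, a2 = 0.0, math.pi / 2
b, b2 = math.pi / 4, 3 * math.pi / 4

S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)

print(abs(S))  # ~2.828, i.e. 2*sqrt(2), beyond the classical bound of 2
assert abs(abs(S) - 2 * math.sqrt(2)) < 1e-12
```

That gap between 2 and 2√2 is exactly what the measurements settled, which is why the burden of “makes sense” flipped sides.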

Furthermore, as a Bayesian, I must say that even with subjects less tricky than QM, and even when I’ve been certain, sometimes I’ve been confused, or wrong, or confused and wrong together. So I may be sure that something makes sense, and be willing to bet big bucks, but certainly I’ll agree this is not the same as “true”.

Maybe you should distinguish the possibility that some objection is true from the possibility that it makes sense. As an example I’d agree that Penrose is likely^likely wrong, but I’d not say his reasoning makes no sense at_all^at_all.

[unrelated but I had a look at your current work, and as a neuroscientist I applaud. ;-)]

I’d be willing to bet large amounts of cash that Scott’s objections to copying make no sense (thus revealing my own estimates of priors). I see no problem with copying a large, complex, autonomous object (say Scott, or a Google self-driving car). The car would even share Scott’s confusion with priors (as in “Hey! What’s someone doing in *my* reserved parking place?”). But such confusion is just something you have to live with, like it or not, just as Einstein had to live with QM no matter how much it insulted his sensibilities.

I should add I work in neuroscience, and see no evidence that QM is needed for the (large-scale) operation of a brain. It’s needed for the smaller parts, such as ion channels, but these seem to behave as independent actors, with no QM interactions or entanglements. And given the difficulty of preserving QM interactions over large distances, it’s hard to see how something the size of a brain could depend on these interactions and still be robust. So intuitively I feel that overall the brain is a very classical system, and can be copied freely.

Of course, perhaps I am wrong. Maybe consciousness *does* cause wave functions to collapse, so duplicating a consciousness is forbidden by some weird QM selection rule. That’s why, as a Bayesian, I’m willing to bet large amounts of my money, but not all of it…

If that were my position, would I have asked for an explicit example?

I like that you corrected objection[s] rather than make[s]. You gotta show me this plural, Google Translate says. 😉

- Look, I won’t bet large amount of cash that Scott’s objection[s] to copies make any sense…

Would you bet a large amount of cash that they *don’t* make sense? 😉

Look, I won’t bet large amount of cash that Scott’s objection to copies make any sense, but your demonstration has some problems too: if you can’t prove a true statement within a given system of axioms, then it must be true and provable within another system (think of your first system augmented with the statement that RH is true).
