## “Quantum Information and the Brain”

A month and a half ago, I gave a 45-minute lecture / attempted standup act with the intentionally-nutty title above, for my invited talk at the wonderful NIPS (Neural Information Processing Systems) conference at Lake Tahoe. Video of the talk is now available at VideoLectures.net. That site also did a short written interview with me, where they asked about the “message” of my talk (which is unfortunately hard to summarize, though I tried!), as well as the Aaron Swartz case and various other things. If you just want the PowerPoint slides from my talk, you can get those here.

Now, I could’ve just given my usual talk on quantum computing and complexity. But besides increasing boredom with that talk, one reason for my unusual topic was that, when I sent in the abstract, I was under the mistaken impression that NIPS was at least half a “neuroscience” conference. So, I felt a responsibility to address how quantum information science *might* intersect the study of the brain, even if the intersection *ultimately* turned out to be the empty set! (As I say in the talk, the fact that people have *speculated* about connections between the two, and have sometimes been wrong but for interesting reasons, could easily give me 45 minutes’ worth of material.)

Anyway, it turned out that, while NIPS was *founded* by people interested in modeling the brain, these days it’s more of a straight machine learning conference. Still, I hope the audience there at least found my talk an amusing appetizer to their hearty meal of kernels, sparsity, and Bayesian nonparametric regression. I certainly learned a lot from them; while this was my first machine learning conference, I’ll try to make sure it isn’t my last.

(Incidentally, the full set of NIPS videos is here; it includes great talks by Terry Sejnowski, Stanislas Dehaene, Geoffrey Hinton, and many others. It was a weird honor to be in such distinguished company — *I* wouldn’t have invited myself!)

Comment #1 January 24th, 2013 at 10:48 am

I might have to borrow “GO FAST GO FAST GO FAST ZOOM THROUGH DON’T RUN OUT OF TIME.”

Comment #2 January 24th, 2013 at 11:02 am

Sean: LOL! I’m proud to say that mission was more-or-less accomplished. (The other trick I used was starting the stopwatch on my iPhone and placing it on the podium.) I knew I had to get through in 45 minutes what would’ve taken twice that with my usual atrocious rate of “ums,” “y’knows,” etc., so that’s exactly what I did. I was determined not to repeat my mistake from the FQXi meeting in Copenhagen.

Comment #3 January 24th, 2013 at 12:57 pm

(1) You should use Beamer. Good that you write your papers in LaTeX, at least.

(2) The top half of page 2 is good. But the bottom half shows subtle but important signs of selection bias. There is a reason that Popper and Dyson are in this chart, and not (say) Bethe or Feynman.

(3) Option 3 on page 8 is a very unfair summary of what you and I really think. It makes it sound worse than options 1 and 2, which in my view at least have been mostly sterile wishful thinking. What I would say here is: “Get over your surprise and appreciate the similarities to what you expected, along with the differences.” After all, that is how you present matters on page 5, and the similarities hardly stop there.

(4) On page 11, unstructured password search is a much better description than the unfortunate terminology that goes back to Grover, “list” or “database” search. Because, as everyone knows, it’s almost as easy to presort a list as it is to store it in the first place.

Comment #4 January 24th, 2013 at 1:13 pm

Greg:

(1) Sorry, Beamer simply isn’t as good for the wacky animations and graphics I like and that my audience expects. It’s fine for a “straight to business” math talk—but for those, I’m usually happier just to skip slides and use a board.

(2) I wasn’t claiming to do a statistical survey of what famous 20th-century thinkers have thought about QM! I was just making an existential argument: something like, “if these ten individuals all took X seriously, where X=anything, that obviously doesn’t mean X is true, but it’s strong evidence that there’s something there worth giving a talk about.”

(3) I don’t think that what you really think is quite the same as what I really think! For one thing, I’m not as rigorously “anti-philosophical” as you are when talking about quantum probability. I do try to be anti-dogmatic—which in practice (unfortunately and ironically) often overlaps “anti-philosophical,” but is different.

(4) OK, point taken.

Comment #5 January 24th, 2013 at 1:40 pm

Loved this quote about Aaron Swartz from your interview:

“He shouldn’t have had to accept a humiliating plea bargain in order to get a proportionate sentence: such a sentence should have been the *outcome* of a trial, not an inducement to forgo one.”

Comment #6 January 24th, 2013 at 2:12 pm

(1) I think that your preference for PowerPoint over Beamer is a great illustration of the illusion of free will. While in theory PowerPoint does give you superior animation options, what does your talk actually have? Simple animations that are not all that hard in Beamer either, and formulas that would have been typeset better. I agree that there exist things that are easier in PowerPoint, but the same can be said of Microsoft Word.

(2) Maybe on this point we don’t really disagree.

(3) Okay, the list of options represents *your* views but not *mine*. Still, my position on a number of questions (I have come to realize) is not entirely anti-philosophical, rather it is somewhat philosophical but partisan. In the case of quantum probability, it puts me in a trap. Either I am expected to be more open to anti-Copenhagen interpretations than their adherents are to Copenhagen, or someone can summarize my position as “shut up and calculate”. I want to be actively pro-Copenhagen; I want to promote it as an entirely natural generalization of classical probability. I think that people who call that “shut up and calculate” listen but do not hear. The same was once true of special relativity — where even Einstein himself at first listened to Minkowski but did not hear.

Comment #7 January 24th, 2013 at 2:44 pm

LOL … for me, a particularly illuminating invited talk was George Dahl’s “Merck Molecular Activity Challenge”, which illustrates a fast-improving 21st-century research paradigm:

• The brain/computer *emulates* laboratory observations, and

• the laboratory observations *emulate* Hilbert-space QM dynamics (a.k.a. quantum chemistry, etc.).

Dahl’s talk demonstrates that an exceedingly important practical question is “Can we *emulate* observations reliably and efficiently?”

As for the quasi-philosophical question of whether the state-space of the world is “really” Hilbert space, Dahl’s talk shows us why that is perhaps not the most relevant practical question. We already know that emulating full-dimension linear Hilbert spaces is a computational non-starter, whereas pullbacks onto lower-dimension nonlinear state-spaces are improving in reliability and efficiency at a Moore’s-Law exponential pace … with no obvious limits in sight.

So if carbon-based life-forms like us are slowly learning to (nonlinearly!) simulate the prodigal dimensionality of Hilbert-space dynamics accurately and efficiently, perhaps Nature herself employs similar tricks, to similarly dispense with the physical reality of Hilbert space’s prodigal dimensionality?

Now *that* is a cool (and quasi-philosophical) speculation.

Comment #8 January 24th, 2013 at 5:38 pm

Great talk, Scott. There is no doubt that the human brain, as a physical device, behaves classically. The question is whether it may efficiently simulate a quantum computer. Factoring, etc., is not the natural intention of the brain, so the mere encoding may be inefficient for this task, although I guess many humans can factor 21 ;). Regarding the superposition slide at 17:54: there are visually bistable images. The classic one is the face–vase, where perception depends on attention. There are others that are purely perceptual: if you present a vertical grating to one eye, and a horizontal one to the other, you perceive something weird, with perception moving from vertical to horizontal and back, with some intermediate patchy state. The question here is whether we can design an experiment that shows a violation of Bell’s inequality in a convincing way. That is a question for the quantum mechanics experts. In other words, whether the human brain can in some sense mimic quantum computations, or has the capacity to perform quantum-like computations.

33:29: If the classical information is important for action, then you do not need to clone the quantum state; you just need to approximate it in a way that leads to the same outcome for almost all inputs, where the rest will not be measurable even in the original system. Let me give a crazy and vague analogy: if you have a monomial x1 x2 … xn, we can represent it as two new variables (x1 x2 … x_{n/2})(x_{n/2+1} … xn), or any permutation divided into two brackets. This gives an exponential number of new variables describing the same quantity. This line of thought can actually reduce the degree of a Nullstellensatz refutation for, say, factoring, if not for NP-complete problems. (I actually think something weird is going on in physics, where we have gravitation with one type of particle, electromagnetism with two types of particles, and the strong interaction with three types of particles. It looks like an expansion of some general potential into something similar to a Taylor series.)

I personally think that human computational abilities are underestimated by computational-theory experts.

Comment #9 January 24th, 2013 at 6:50 pm

Hi Scott (and others),

I have some reasoning I would like to submit for your peer review on the topic of the many-worlds interpretation. It relies on a simplifying assumption that may render the thought entirely useless… in any case:

Let’s make the simplifying assumption that the universe is in a superposition that is evolving under some great and mysterious universal Hamiltonian. This superposition (let’s call it |Y>) has an expected energy that can be “easily” calculated, but it is only an expectation. Now, as parts of this wave function “collapse” over time, the energy of the resulting measurement will be either greater than, less than, or equal to the expectation.

We can model this process as a random walk over {-1, 0, 1} (I realize this abstracts away the magnitude of the divergence from expectation in each case – I am hoping this isn’t a fatal assumption) representing a decrease, increase, or no change in the ‘total energy in the universe.’ Now, as many-worlds would have it, each of these collapses would give us all possibilities, and all random walks would exist ‘materially’, giving us 2^{\aleph_0} walks if the original superposition |Y> is a finite superposition (another simplifying assumption?). What is interesting is that the cardinality of random walks in the limit that never return to the origin is also 2^{\aleph_0} – meaning that there is a one-to-one pairing between branches of the many-worlds universe and branches where the total energy of the universe expands infinitely higher or lower than expectation (very unusual and unnatural universes).
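A quick numerical aside (an editorial illustration, not part of the original comment): although the never-returning infinite walks do have continuum cardinality, for a symmetric walk they carry probability zero. A short simulation shows the fraction of finite walks that have not yet returned to the origin shrinking toward 0 as the walks get longer:

```python
import random

def never_returns(steps, rng):
    """One walk with increments drawn uniformly from {-1, 0, 1};
    True if it never revisits the origin after first leaving it."""
    pos, left_origin = 0, False
    for _ in range(steps):
        pos += rng.choice((-1, 0, 1))
        if pos != 0:
            left_origin = True
        elif left_origin:          # back at the origin after having left it
            return False
    return True

rng = random.Random(0)             # fixed seed for reproducibility
trials = 2000
fracs = [sum(never_returns(n, rng) for _ in range(trials)) / trials
         for n in (10, 100, 1000)]
print(fracs)                       # the fraction decays as walks lengthen
```

So cardinality alone cannot be converted into a probability of ending up in an “unnatural” branch; one needs some measure over branches, not a count of them.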

I see a few ways to deal with this:

– My simplifying assumptions simplify things too much; for example, |Y> could be a superposition over infinitely many states to begin with, or my relaxation into a random walk is not a good one.

– We don’t allow time to go to the limit. We suggest that there is a finite limit on time and so the paradox unravels.

– We reason that the cardinality of “natural/good” universe branches, by the same reasoning, is also 2^{\aleph_0}, and take comfort in this.

– We use the same math to challenge the Copenhagen interpretation.

Please let me know if I did a poor job explaining what I mean.

Comment #10 January 26th, 2013 at 6:05 am

Scott,

Firstly mazel tov to you and your wife for your new family member!

Assume consciousness exists for almost the rest of the life of the universe, which is, say, a septillion years. Or maybe that it would exist forever, along with the universe. That is, the period during which there was NO consciousness would be only a small fraction of the total.

Would you still maintain this argument against wavefunction collapse by consciousness? (I am trying to make sure I understood your argument. As I understand it, you are basically saying that half of the physical theory not coming into play for almost all of time is an absurdity.)

Of course my assumption above is probably as absurd!

Comment #11 January 26th, 2013 at 10:22 am

Ross #9:

I’m already confused at the part where collapsing parts of the system causes its energy to “fluctuate” in this classical random-walk way, but approaching the physical world with a cardinality argument seems even more dangerous, or rather useless. I tried to consider some model universe to get a hold on this, but collapsing stuff will always be sensitive to changes from a few particles up to the “whole universe.” Well, I don’t quite understand this expectation thing in the first place: if you want to talk about energy in the big picture (and timelike limits and so forth), you should bring in energy conservation and time-translation symmetry, and relate these to your interpretation. Presumably most MWI believers would be very happy with having some evidence of splitting quantum histories in systems with some nice, definite, fixed energy.

Comment #12 January 26th, 2013 at 11:45 am

Ashley Lopez #10: Interesting question! Yes, even in that case, my “physical intuition” would be absolutely terrified were there some direct causal arrow from consciousness to wavefunction collapse. While I tried to dramatize the argument by imagining a serene, unitarily-evolving wavefunction suddenly and violently collapsing as soon as it produces a human able to measure it, I don’t think the *amount of time* consciousness takes to come into existence, or how long it lasts after it does come into existence, are really the issue. Rather, I’d say the issue is that consciousness, essentially by definition, is not directly observable or quantifiable by any third party: no matter what observable characteristic you pointed to, a skeptic could always posit a physical system that had that characteristic but wasn’t conscious. Wavefunction collapse, by contrast, certainly *is* observable and quantifiable by third parties: you just need to check whether you get an interference pattern or you don’t. So if the former caused the latter, then consciousness would become as observable and quantifiable as wavefunction collapse, and one of the basic “abstraction boundaries” of the universe would be breached. This is why, even if someone from the future told me it was proved that “consciousness caused wavefunction collapse,” I’d simply be confused by what the person meant, and demand to know which *empirical correlate* of consciousness (for example, some sort of complex organization??) was actually responsible.

Comment #13 January 26th, 2013 at 1:58 pm

Scott. Please. Write “Physics from the Bottom Up”.

Comment #14 January 26th, 2013 at 3:40 pm

Scott #12

I don’t get how you define consciousness such that it’s not observable by definition. Can you elaborate?

Comment #15 January 26th, 2013 at 5:39 pm

Jay #14: David Chalmers and other philosophers have made the same point. Let X be your favorite definition of consciousness in terms of observable characteristics. Then consider a physical system, Y, that satisfies X but “only as a robot or zombie”: if you like, we start with a *conscious* entity that satisfies X, then remove (by fiat) its ability to perceive “the redness of red” or whatever you want to call it.

Now, maybe the above can’t actually be done in our universe: maybe consciousness *necessarily* accompanies certain sorts of physical organization. But even if so, the very fact that it seems consistent to *imagine* it being done suggests that the observable characteristics X can’t possibly have been what we really meant by “consciousness”: instead we must’ve meant some other thing that accompanies X.

Comment #16 January 26th, 2013 at 9:34 pm

Scott #15: OK, I understand your point. Minsky would say this is circular, but I guess you already know this objection well.

Comment #17 January 27th, 2013 at 9:54 am

The problem with consciousness is that it is assumed to be an all-or-none phenomenon. This is like Zeno’s paradoxes. Once you introduce a continuous variable of more or less consciousness, the problem with sudden collapse disappears. For example, all our equations are time-reversible, but somehow people assume that entropy is increasing; to counteract this effect one should postulate that the amount of consciousness is also increasing.

Consciousness seems to be relevant to action, and to the ability to decrease noise. We ignore all physical attributes of the light’s wavelength, and the surrounding ambient illumination, in the perception of red. We clean up the irrelevant noise. The same seems relevant to computation: we need to eliminate noise irrelevant to the problem and find a set of actions leading to the solution. A mechanical device is not conscious, just because it cannot eliminate ambient noise, and acts according to a program.

The same applies to P vs. NP beliefs: the set of mechanical programs that were executed did not eliminate all the noise, so people started to believe the classes are not equal, just because the existing techniques are not powerful enough.

For example, consider the Nullstellensatz refutation system. What is obvious is that one can encode many NP-complete problems as systems of quadratic equations, e.g. f_k = x_k^2 - x_k = 0, or f_k = x_k^2 - 1 = 0, for encoding binary variables (although the ambient space is now complex). Then one can add encoding equations; in the case of the partition problem this is g = \sum a_k x_k = 0 in the latter encoding. In a Nullstellensatz refutation one attempts to find polynomials P_m \in C[x_1, …, x_n] such that \sum_k P_k f_k + P_0 g = 1. In terms of the resulting monomials it is a system of extended redundant linear equations, and the question is whether ‘1’ lies in the span of the matrix encoding the system of linear equations. The mechanical application of this program (increase the limiting degree for the monomials, write down the encoding matrix, check whether ‘1’ is reachable, repeat) gives polynomials of too high a degree. On the other hand, the resulting certificate can be of very small length, with all the rest being noise (http://cstheory.stackexchange.com/questions/14073/semiprime-factorization-groebner-bases-and-a-nullstellensatz-certificate). But since we cannot clean up this noise, the problem appears too difficult.
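As a deliberately tiny, concrete instance of the encoding just described (an editorial illustration, not the commenter’s degree-reduction construction): with the ±1 encoding f_k = x_k^2 - 1 and the weights (1, 2), the partition constraint g = x1 + 2·x2 has no ±1 solution, so by Hilbert’s Nullstellensatz the constant 1 lies in the ideal, and a computer-algebra system confirms this via the reduced Gröbner basis:

```python
from sympy import symbols, groebner

x1, x2 = symbols('x1 x2')

# Boolean encoding: x_k^2 - 1 = 0 forces x_k to be +1 or -1.
f1 = x1**2 - 1
f2 = x2**2 - 1
# Partition-style constraint with weights (1, 2): unsatisfiable over {+1, -1},
# since x1 + 2*x2 only takes the values {-3, -1, 1, 3}.
g = x1 + 2*x2

# For an infeasible system the reduced Groebner basis collapses to {1},
# i.e. some combination sum P_k*f_k + P_0*g equals 1 (a refutation exists).
G = groebner([f1, f2, g], x1, x2)
print(list(G.exprs))
```

Finding the *lowest-degree* such combination (the actual Nullstellensatz certificate) is the hard part the comment is concerned with; the Gröbner computation only certifies that one exists.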

But there is a lot of information hidden in the underlying equations that is not easily seen. For example, one needs a high-degree certificate because one needs to test all possible combinations of the variables relevant to the problem; but already at the encoding stage for the equations f_k = 0 one needs high-degree monomials to encode all the possible combinations — for example, monomials of degree 8 to encode all 2^6 = 64 possible combinations as the kernel of a linear matrix. If you do not have all combinations, and any two merge, one cannot test all possible solutions. But more information appears in the mechanical application of the Nullstellensatz. So encode all quadratic monomials into new variables, y_{k,m} = x_k x_m; then you have additional information not available in the original variables: x_k x_m x_p x_t = y_{k,m} y_{p,t} = y_{k,p} y_{m,t} = y_{k,t} y_{m,p} — two more equations that appear from the tautology x_k x_m x_p x_t = x_k x_m x_p x_t. That transformation is sufficient to reduce the encoding of all 2^6 combinations from 8th-degree monomials to 6th-degree monomials (3rd degree in the variables y).

The encoding also contains more information than the Nullstellensatz extracts: we have a1 x1 + a2 x2 + … + an xn = 0. If this has a solution, then -a1 x1 = …, so (-a1 x1)^3 = (…)^2, and so forth for all coverings. If it does not have a solution, we have added a (hopefully linearly independent) constraint to the problem matrix. So the main problem is not the application of known techniques (not making better Swiss clocks), but extracting the useful information (trying to build an Arithmometer). Look, as I see the situation, there is no point in having a discussion about Arithmometers with people who cannot make Swiss clocks.

BTW, you can find a printout of a Mathematica notebook showing how to decrease the certificate for factoring of ‘0’ from degree 8 to degree 6 in the Nullstellensatz, along the above line of reasoning. Since I’m not interested in finding any place among CS theory people, it has almost no comments. I’m not interested in communicating this “nonsense” (in the sense of the clock makers) through the standard channels of communication, but I’m letting you know it here, so that anyone can make beautiful clocks out of it, and market it.

Comment #18 January 27th, 2013 at 10:51 am

[…] Shtetl-Optimized, “Quantum Information and the Brain”, here. NIPS lecture […]

Comment #19 January 27th, 2013 at 4:33 pm

@mkatkov #17:

“The problem with consciousness is that it is assumed to be all or none phenomenon.”

Thank goodness you’re here to illuminate us.

Guess what: people actually do experiments (http://www.unicog.org/publications/SergentDehaene_PsychScience04.pdf) to inform these issues, and the results are available to anyone who cares enough to check!

Comment #20 January 28th, 2013 at 12:45 am

In the above-mentioned paper, the experimenters used subjective ratings, with the range of responses depending on subjective criteria. It has nothing to do with internal variables. Take the trouble to find my works on signal detection theory, if you are interested in the experimental methods and their relation to internal state.

Comment #21 January 28th, 2013 at 10:38 pm

From your talk, regarding creating two copies of the same person by copying their state:

“Wouldn’t it be great if there were some “Principle of Non-Clonability” that prevented this sort of metaphysical craziness from ever rearing its head?”

What metaphysical craziness? On a basic level, this is no different than a process fork in Linux.
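The fork analogy can be made literal in a few lines (a sketch; the `memory` string is just a stand-in for the copied state): after `os.fork`, two processes share an identical history and then evolve independently.

```python
import os

# State built up before the copy; both copies inherit it verbatim.
memory = "shared past"

r, w = os.pipe()
pid = os.fork()                  # from here on there are two processes
if pid == 0:
    # Child: identical memories up to the fork, then its own trajectory.
    os.close(r)
    os.write(w, (memory + " -> child future").encode())
    os._exit(0)

# Parent: same pre-fork state, different post-fork experience.
os.close(w)
child_view = os.read(r, 1024).decode()
os.waitpid(pid, 0)
print(child_view)
print(memory + " -> parent future")
```

Everything before the fork is common to both copies; everything after belongs to one branch only — exactly the “identical twins with shared experience up to a point” picture.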

On a practical level, it’s extremely weird. Must my (our?) wife now be convicted of bigamy? Which of us has a valid passport, or Ph.D, or social security number? If one of us gets run over, what do the kids inherit? If you delete the data before restoration, is it murder?

Since it’s always been one body, one consciousness, these questions have never even come up, and our everyday framework is completely unable to deal with them. But on a metaphysical level, so what? You now have identical twins that shared experience up to a point, then evolved separately afterwards. It’s definitely outside our experience, but the universe as a whole could care less. No laws of physics need to be re-written and no paradoxes are introduced. It’s a problem for lawyers, and ethicists, and theologians, but not even a blip on the metaphysical scale.

Comment #22 January 28th, 2013 at 11:16 pm

Lou Scheffer #21: You express eloquently a common response to the “cloning worries.” My response to your response is to ask the following:

In a world where my brain-state can be copied all over the place, how should I make Bayesian predictions—considered by many to be the essence of scientific reasoning?

For example, suppose I know that 99 copies of me will get made if a fair coin lands “heads,” and no copies if the coin lands “tails.” After the experiment is over, what posterior probability should I assign to the coin having landed “heads”? Should it still be 1/2? Or should it be close to 1, since in the world where the coin landed heads, there are so many more observers who I could potentially be? Does it change things if I know that I have red hair, while the other 99 copies (assuming they were made at all) have brown hair but are otherwise identical?
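The two candidate answers in this example can be written out with Bayes’ rule (a sketch; “self-sampling” is a standard label for the observer-counting rule, not a term from the talk):

```python
from fractions import Fraction

prior_heads = Fraction(1, 2)
copies_if_heads = 100   # the original plus 99 copies
copies_if_tails = 1

# Answer 1: treat the coin flip as an ordinary event; copying is irrelevant.
posterior_plain = prior_heads                        # stays 1/2

# Answer 2: "self-sampling" -- weight each branch by its observer count,
# as if you were drawn uniformly from all observers across both branches.
w_heads = prior_heads * copies_if_heads
w_tails = (1 - prior_heads) * copies_if_tails
posterior_weighted = w_heads / (w_heads + w_tails)   # 100/101

print(posterior_plain, posterior_weighted)
```

Both computations are trivial; the puzzle is that nothing in the physics tells you which weighting to use.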

The crucial point is that, in a world with brain-copying, in some sense *these would no longer be “philosophical” questions* (or moral questions, theological questions, or anything like that). Rather, they’d just be *empirical* questions, about what I as an observer should predict about my future experiences!

Now, you might reiterate that, be that as it may, “the universe as a whole could care less” about the answers to these questions. But frankly, *I* could care less that the universe could care less! I, a would-be Bayesian scientific reasoner embedded in the universe, still need some answers to these questions, if I’m to make predictions about my future experiences in the most general circumstances.

Comment #23 January 29th, 2013 at 12:20 am

Scott #22:

Your example is exactly what I was talking about when I said our current systems of thought are wedded to one body, one “me”. You are so used to a singular value of “I” that you don’t even consider that perhaps the question has no well-defined answer if copying is a possibility.

There’s an exactly analogous case where the answer is obvious, perhaps because it’s not personal. We observe that our galaxy has at least one civilization (namely us). What are the odds that some other galaxy contains at least one civilization? A little thought reveals we cannot answer this question. It does not matter at all if the chance of a civilization arising is one per million galaxies, or a million per galaxy. If all we know is that we exist, it gives no information on the probability of the path by which we got here, except that it cannot be 0. This has the exact same type of empirical consequences as your question. I would love to know, as an observer, if I scan another galaxy, what are the odds I find another civilization? Unfortunately, the correct answer is, “cannot tell from the data provided.”

Your case is exactly similar. Post-experiment, you know you exist. In a world with potential copying, you cannot assign a posterior probability. You’d simply say “I can’t answer that question without more information,” just as today you’d give that answer to someone who tells you that two numbers sum to 4, then asks what each of them is.

The problem, I think, is that “no copying” is an incredibly strong prior. Since it’s always been true we apply it all the time, without thinking about it. But nothing horrible goes wrong if copying exists, we just need to get used to the implications.

Comment #24 January 29th, 2013 at 6:23 am

Lou #23: I actually have enormous sympathy for the idea that, in a world where copying existed, the right answer to many of these questions would be “cannot assign a probability from the data provided.” But an immediate problem is that, if you’re a thoroughgoing Bayesian, then you can *always* assign a probability! Your probability simply encapsulates the odds at which you’re willing to bet.

So I think you’d have to say something like, “ah, but even the ‘you’ who’s placing the bet becomes ill-defined in these scenarios!” As Democritus already suggested 2400 years ago, it often feels more “scientific” to dissolve away your experience of a “singular I,” and think only about the “atoms and the void” (bosons and fermions and the vacuum?), occasionally giving rise to an “I-like (you-like?) excitation” (or two or three or infinity of them). The difficulty is that, when you “dissolve” yourself in this way, you also run the risk of rendering unintelligible the scientific experiments that told you about the atoms and the void in the first place!

For example, how can you predict the outcome of an experiment if, before the experiment is over, you might “switch bodies” with your doppelganger doing a slightly-different experiment, with all the memories and records to match? If that sounds stupid, that’s precisely my point: there’s no principle of physics that *tells us* it’s stupid, only some sort of principle of personal identity that we tack onto the physics.

Anyway, if you know the way out of these ancient conundrums, let me know!

Comment #25 January 29th, 2013 at 8:04 am

Scott #24,

Copying does not interfere with Bayesian reasoning at all. But your priors now include “what are the odds I was copied” and “which copy am I”, instead of the existing prior of “I’ve not been copied”.

Take, for example, your physics experiment. There are always sources of error. As a thorough-going Bayesian, you already assign odds to experimental errors such as cosmic ray strikes, co-worker fraud, etc. Now you’ve either got one more assumption (I was not copied during the experiment) or you need to estimate the probability and consequences, just like any other experimental condition.

In fact, the exact situation you described already exists. I could screw up my colleagues down the hall by sneaking in at night and substituting a genetically identical but oppositely trained rat into their experiments. As Bayesians, however, they assign a probability to “practical jokes” and include it into their experimental error.

Thinking about it as a bet still works, too. For example, suppose someone offers you a sure-thing bet, but you have to pay by check and you lose if your check bounces. In the current world you maximize your return by recalling how much money you have, then betting that amount. In a world with copying it’s more complex. You need to estimate the chance you were copied, decide whether you are maximizing your personal return, or the total return of all copies, guess at the actions of the other copies, and so on. It becomes a hard game theory problem, but it’s perfectly well defined.

To me, the simplest way to think about this is to imagine a similar example using a computer process (“what happens if I write this byte?”). Then it’s easy to see there are different results, depending on whether processes can be indistinguishably cloned, but it’s well defined in each case. It makes for good homework assignments (extend this to the case where processes can be cloned), and provides lots of fodder for the quantum-money folks, but it’s not a philosophical problem.

Comment #26 January 29th, 2013 at 9:09 am

Lou Scheffer #25: For me, the issue is that there seems to be no principled way known to *choose* a Bayesian prior for questions like “which copy am I?” For example, how much probability mass should you assign to a Boltzmann-brain version of yourself? A copy run backwards in time? In just one branch of a quantum computation? Homomorphically encrypted? Suppose the computer runs each step in your code 3 times, purely for fault-tolerance purposes: does that multiply the probability mass by 3? How similar to you does something have to be, anyway, before it counts as a “copy”? The only scientifically-respectable answer to these questions seems to be, as you put it, “cannot tell from the data provided.”

Now, you might reply that Bayesian priors are *always* subjective, by definition! But the problem seems worse with the examples above. For in “ordinary” Bayesian situations, at least people with widely differing priors can converge in posterior as more data is gathered. But with questions like the above, it’s not clear how we could ever reach intersubjective agreement. (Which, I suppose, is why many people have such a strong temptation to dismiss these questions as pseudoquestions, despite the apparent difficulty of avoiding them in a world with clones.)

Incidentally, I completely agree that we should always check our intuitions for possible human bias, by considering similar examples involving only (artificially-intelligent) computer programs being copied around in a memory. But when I’ve tried to do so, the conclusion I’ve come to is this:

I see no reason why the AI wouldn’t be just as “confused” about which Bayesian prior it should place over the various copies or near-copies of itself, as we would be in similar circumstances!

(At least, to whatever extent the AI is capable of “confusion” at all. If it were never confused, then its experience would be so different from mine that I couldn’t really say anything.)
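To make the fault-tolerance example above concrete, here is a toy Python sketch (both counting rules, and all names, are invented for illustration; neither is anyone's proposed theory): two seemingly principled self-location priors disagree about the 3×-replicated copy, and no observation available to a copy from the inside distinguishes between the rules.

```python
from fractions import Fraction

# Toy model of the fault-tolerance puzzle: one copy's code runs once,
# the other's runs 3 times for fault tolerance. Both counting rules
# below are invented for illustration.
copies = {"plain": 1, "fault_tolerant": 3}  # executions of each copy's code

def prior_per_program(copies):
    """Rule A: each distinct program counts once, however often it runs."""
    n = len(copies)
    return {k: Fraction(1, n) for k in copies}

def prior_per_execution(copies):
    """Rule B: each execution counts separately."""
    total = sum(copies.values())
    return {k: Fraction(r, total) for k, r in copies.items()}

# The two "principled" rules disagree, and nothing a copy can observe
# from the inside favors one rule over the other.
assert prior_per_program(copies)["fault_tolerant"] == Fraction(1, 2)
assert prior_per_execution(copies)["fault_tolerant"] == Fraction(3, 4)
```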

Comment #27 January 29th, 2013 at 10:22 am

Scott #26:

I think the answer will be analogous to QM. If the different possibilities have different physical outcomes, then all observers will converge on a description. If the outcomes are not distinguishable, then all observers will eventually agree that the question cannot be decided.

In classical mechanics, for example, you can ask “I put two electrons into a box, and one came out. Which one is it?”. In QM, you can ask what the probability is that an electron comes out at time T, but you can’t tell which one it was. More data does not help: it’s a question that can’t be answered within the QM framework, and eventually all observers (including all aliens, and all their copies) will converge on the interpretation that this is a question without a well-defined answer.

I think the answer is exactly the same for copies. Just like QM, you need to count the physically distinguishable results, and compute the chance of each. All observers will converge to this. For other questions, such as your 3-fold vs 1-fold processor, if the results never differ then we can never know, and all observers will converge on the answer that this is not knowable.

It is deeply unsatisfying to be unable to assign a Bayesian prior, but perhaps that’s just how it goes. In fact this has already happened – what’s the Bayesian prior for us living in a simulation? I suspect we both agree there is no way to tell. In a world with copies, Bayesian priors fall into the same category as halting problems and decidability problems – sometimes there is no answer. The good news is that when there are distinct physical outcomes, then the priors can be computed and we all agree. Perhaps that’s the best that can be hoped for.

Comment #28 January 29th, 2013 at 3:51 pm

Scott #26/27

Can you provide an explicit example of how the Bayesian computation would diverge, thus preventing intersubjective agreement? Let’s talk about a commercial computer program designed to conduct some experiment of your choice.

Comment #29 January 29th, 2013 at 6:45 pm

Lou, Scott, you have no problem processing and interpreting a video stream across a cut, where the image changes abruptly. So what is the problem with a sudden change to your copy? If this happens frequently enough, then an invariant perception will develop, with survival-insignificant details treated as irrelevant noise, or they will be dealt with through evolution. Even if different copies exist, they will be clustered around similar experiences to form an identity. In this respect, your copies are more similar to each other than any copy of you is to the copy of another identity. Just for fun: when your copies diverge too much, the identity disappears and only the physical body remains, for the experience of others. The same goes for species identity, where each exemplar is more similar to the others of its species than to the rest.

Comment #30 February 3rd, 2013 at 8:23 am

I’m a bit annoyed by this, another typical quantum-computing-oriented talk where the speaker uses their raw sex appeal to distract the audience from basic problems in their presentation.

On a serious note, I really liked the talk, speaking as physicist I think you have a very clear way of presenting quantum mechanics.

Comment #31 February 3rd, 2013 at 10:40 am

Scott #26/Jay #28,

It seems one of Scott’s objections to copies is that Bayesian priors may not converge in such a world.

So I was wondering, *are Bayesian priors guaranteed to converge even without copies*, in the limit of ever-increasing evidence? I don’t think they are. For example, consider the Riemann Hypothesis. Currently, exactly one of these is true:

(a) It’s true, and can be proved, but no-one has done so yet.

(b) It’s true, but cannot be proved.

(c) It’s false, but no-one has shown this yet.

Naturally, as hyper-rational Bayesians, we assign likelihoods to (a), (b), and (c) as we see fit. So, no convergence yet. Now suppose the real case is (b). Then no matter how much additional research is conducted (the additional data), we still have no proof and no counterexamples. In this case I see no way that three folks, initially assigning different odds to (a), (b), and (c), have any rational procedure for converging to a common answer.

In practice, this may not apply to the Riemann conjecture, which could be decided (either way) at any moment, causing the priors to converge. But the idea applies to any true statement that cannot be proved, and we know such statements exist.
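The formal core of this non-convergence is easy to exhibit in a toy Python sketch (my own, with the simplifying assumption that neither hypothesis predicts the timing of a proof, so both assign likelihood 1 to each year's "still no proof"): when two hypotheses give identical likelihoods to every observation, Bayes' rule leaves each observer's prior ratio untouched forever.

```python
from fractions import Fraction

# Toy illustration (invented here, not from the comment): hypotheses (a)
# "provable but not yet proved" and (b) "true but unprovable" both predict
# the same datum each year -- "still no proof" -- with probability 1 in
# this simplified model.

def update(prior_a, prior_b, lik_a, lik_b):
    """One step of Bayes' rule for two hypotheses."""
    z = prior_a * lik_a + prior_b * lik_b
    return prior_a * lik_a / z, prior_b * lik_b / z

# Two observers with very different priors on (a) vs (b).
obs1 = (Fraction(9, 10), Fraction(1, 10))
obs2 = (Fraction(1, 10), Fraction(9, 10))

for _ in range(100):  # a century of "still no proof"
    obs1 = update(*obs1, lik_a=1, lik_b=1)
    obs2 = update(*obs2, lik_a=1, lik_b=1)

# Identical likelihoods mean the posteriors never move: no convergence.
assert obs1 == (Fraction(9, 10), Fraction(1, 10))
assert obs2 == (Fraction(1, 10), Fraction(9, 10))
```

If the likelihoods ever differed, the same `update` would pull all observers in the same direction; it is exactly the observational equivalence of the hypotheses that freezes the disagreement.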

Comment #32 February 4th, 2013 at 12:00 pm

Lou #32/33

Look, I won’t bet large amount of cash that Scott’s objection to copies make any sense, but your demonstration has some problems too: if you can’t prove a true statement within a given system of axioms, then it must be true and provable within another system (think of your first system augmented with the statement that RH is true).

Comment #33 February 4th, 2013 at 12:45 pm

Jay #32:

Look, I won’t bet large amount of cash that Scott’s objection[s] to copies make any sense…

Would you bet a large amount of cash that they *don’t* make sense?

Comment #34 February 4th, 2013 at 1:08 pm

Scott #33/34

If that was my position, would I have asked for an explicit example?

I like that you corrected objection[s] rather than make[s]. You gotta show me this plural, Google Translate says.

Comment #35 February 4th, 2013 at 1:29 pm

Scott #33:

I’d be willing to bet large amounts of cash that Scott’s objections to copying make no sense (thus revealing my own estimates of priors). I see no problem with copying a large, complex, autonomous object (say Scott, or a Google self-driving car). The car would even share Scott’s confusion with priors (as in “Hey! What’s someone doing in *my* reserved parking place?”). But such confusion is just something you have to live with, like it or not, just as Einstein had to live with QM no matter how much it insulted his sensibilities.

I should add that I work in neuroscience, and see no evidence that QM is needed for the (large-scale) operation of a brain. It’s needed for the smaller parts, such as ion channels, but these seem to behave as independent actors, with no QM interactions or entanglements. And given the difficulty of preserving QM interactions over large distances, it’s hard to see how something the size of a brain could depend on these interactions and still be robust. So intuitively I feel that overall the brain is a very classical system, and can be copied freely.

Of course, perhaps I am wrong. Maybe consciousness *does* cause wave functions to collapse, so duplicating a consciousness is forbidden by some weird QM selection rule. That’s why, as a Bayesian, I’m willing to bet large amounts of my money, but not all of it…

Comment #36 February 4th, 2013 at 2:28 pm

Lou #35/36

Maybe you should distinguish the possibility that some objection is true from the possibility that it makes sense. As an example, I’d agree that Penrose is very likely wrong, but I’d not say his reasoning makes no sense at all.

[unrelated but I had a look at your current work, and as a neuroscientist I applaud. ;-)]

Comment #37 February 4th, 2013 at 8:46 pm

Jay #36,

I fully agree that “true” and “makes sense” are not always found together where QM is concerned, especially since “makes sense” seems to be time-varying.

For example, when Bell’s inequality was first introduced, folks said “That makes no sense!”, and did not believe it until it was measured. But now when people propose hidden variable theories, folks say “That makes no sense! It does not account for Bell’s inequality!”.

Furthermore, as a Bayesian, I must say that even with subjects less tricky than QM, and even when I’ve been certain, sometimes I’ve been confused, or wrong, or confused and wrong together. So I may be sure that something makes sense, and be willing to bet big bucks, but I’ll certainly agree this is not the same as “true”.

Comment #38 February 5th, 2013 at 7:56 am

Jay #32,

What you say about unprovable but true statements makes sense. However, I don’t think it changes the convergence problem. Suppose the statement they are arguing over is: “Can RH be proved without additional axioms?”. Then all three alternatives are still possible, and no amount of additional data will force the estimates of which one is most likely to converge.

[Unrelated – what kind of neuroscience do you do?]

Comment #39 February 5th, 2013 at 11:15 am

Lou #38/39

Formulated this way, the short answer is: there is no reason they would be limited by the set of axioms they discuss.

However, I think what you really meant is: “Suppose they can be described by a set of axioms [as computationalism seems to imply], and they discuss a statement that can’t be proved within this set of axioms.” In which case I’d ask back: how did they come to discuss this statement in the first place?

So my long answer is: if they were able to discuss the statement, then by definition they can use it to construct a system that is able to prove it, so the question is self-contradictory.

Of course, you could prove that there is always some statement that they can’t prove or even discuss, but since we can select axioms at random, this limitation is more theoretical than practical. Actually, in the context of what intelligence can and cannot do, I don’t think this line of thought adds anything to the simpler statement: “however long-lasting human civilization will be, there is some set of axioms too large to be reachable.” Of course, one won’t sell a lot of books with that.

[neurorehab]

PS: our priors on Scott being able to demonstrate that copies pose some additional problems for the convergence of Bayesian reasoning are starting to converge.

Comment #40 February 16th, 2013 at 5:01 pm

[Out of slow random thinking, here’s a reduction of Scott #15 from “consciousness” to “factorization”.]

Let X be your favorite definition of [factorization] in terms of observable characteristics. Then consider a physical system, Y, that satisfies X but “only as a robot or zombie”.

[As with philosophical zombies, a natural candidate is a lookup table which would happen to include enough Q&As to exhaust any realistic tester.]

Now, maybe the above can’t actually be done in our universe: [you bet] maybe [factorization] necessarily accompanies certain sorts of physical organization. But even if so, the very fact that it seems consistent to imagine it being done suggests that the observable characteristics X can’t possibly have been what we really meant by “[factorization]”: instead we must’ve meant some other thing that accompanies X.

[In a way that’s true: what we meant by “factorization” is the ability to factorize any number, not the ability to recall answers from a limited set. The only way to tell Y apart from this “mathematical zombie” is to look at how Y is constructed. That’s the obvious answer for factorization, and that’s my point for consciousness as well.]
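A toy Python version of that “mathematical zombie” (the trial-division factorizer, the test set, and all names are mine, just to make the point concrete): a genuine factorizer and a finite lookup table are indistinguishable to a black-box tester confined to the memorized set; only inspecting the construction, or stepping outside the set, tells them apart.

```python
# Toy "mathematical zombie": a genuine factorizer vs. a finite lookup
# table that merely memorizes the right answers for a fixed test set.
# All names and numbers here are invented for illustration.

def factorize(n):
    """Real factorization of n by trial division (works for any n >= 2)."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

TEST_SET = [15, 21, 91]
ZOMBIE_TABLE = {n: factorize(n) for n in TEST_SET}  # memorized answers

def zombie(n):
    return ZOMBIE_TABLE[n]  # only "knows" the memorized inputs

# Black-box testing restricted to the fixed set can't tell them apart...
assert all(zombie(n) == factorize(n) for n in TEST_SET)
# ...but the real factorizer also handles inputs the table never saw.
assert factorize(221) == [13, 17]
```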

Comment #41 July 1st, 2013 at 8:54 am

A fresh paper rather in this spirit:

Can quantum probability provide a new direction for cognitive modeling?, Emmanuel Pothos and Jerome Busemeyer, Behavioral and Brain Sciences

Andrew Gelman notes that it’s welcome:

http://andrewgelman.com/2013/05/15/does-quantum-uncertainty-have-a-place-in-everyday-applied-statistics/

Choice theory – in the vein of Kenneth Arrow – seems rife with dead ringers for quantum uncertainty. It’s trickier to imagine what each side might want from the other.