Quantum Computing Since Democritus Lecture 17: Fun With The Anthropic Principle

Here it is. There was already a big anthropic debate in the Lecture 16 comments — spurred by a “homework exercise” at the end of that lecture — so I feel absolutely certain that there’s nothing more to argue about. On the off chance I’m wrong, though, you’re welcome to restart the debate; maybe you’ll even tempt me to join in eventually.

The past couple weeks, I was at Foo Camp in Sebastopol, CA, where I had the opportunity to meet some wealthy venture capitalists, and tell them all about quantum computing and why not to invest in it hoping for any short-term payoff other than interesting science. Then I went to Reed College in Portland, OR, to teach a weeklong course on “The Complexity of Boolean Functions” at MathCamp’2008. MathCamp is (as the name might suggest) a math camp for high school students. I myself attended it way back in 1996, where some guy named Karp gave a talk about P and NP that may have changed the course of my life.

Alas, neither camp is the reason I haven’t posted anything for two weeks; for that I can only blame my inherent procrastination and laziness, as well as my steadily-increasing, eminently-justified fear of saying something stupid or needlessly offensive (i.e., the same fear that leads wiser colleagues not to start blogs in the first place).

40 Responses to “Quantum Computing Since Democritus Lecture 17: Fun With The Anthropic Principle”

  1. Ian Durham Says:

    You were in high school in 1996? Man that’s depressing. I better get my butt in gear and do something less than mediocre since I was already engaged to my wife in 1996.

  2. Ian Durham Says:

    OK, that came out kind of bad (good thing my wife doesn’t read this). I meant to say something to the effect that I’m older and yet have accomplished a minuscule amount in comparison.

  3. Scott Says:

    It’s funny, Ian — to me, 1996 already feels like the impossibly-remote past. A Democrat in the White House, the twin towers still there, AltaVista the #1 search engine, stegosauruses roaming the countryside … come to think of it, I can’t believe how little I’ve achieved since then. (Cf. this Onion article.)

    I just finished reading the book The Shock Doctrine by Naomi Klein, and I had a reaction probably very different from the one she wanted me to have: in less than a decade, this previously-unknown writer has mobilized millions of people to overthrow the capitalist world order (for better or worse), and what have I accomplished?

  4. Ian Durham Says:

    Yeah, 1996 does seem so very long ago. Time really started to move along once my kids hit school. It doesn’t help when I have students who accomplish more in their four years in college than I have since college.

  5. Chris Granade Says:

    Geez… 1996. That takes me back. Things have changed for me since then, that’s for sure, but I don’t know if that’s because of what I’ve accomplished, or if it’s the world around me changing. It’s always hard to measure one’s accomplishments from so close-up.

    BTW, how was Shock Doctrine? I’d been meaning to read it ever since I saw the promo video. I read No Logo way back when, and found it to be quite insightful. It’s why I don’t wear more advertising logos (well, one reason).

  6. John Sidles Says:

    Back in the early 80s, I remember being depressed by the Doomsday (dystopian) anthropic arguments … until I realized that all such arguments can very easily be converted into equally compelling Jubilee (utopian) anthropic arguments.

    All we need to do is remove the temporal argument from our reasoning (as David Deutsch would have us do anyway). Then we ask the atemporal question, are we more likely to be embedded in a universe with few anthropic reasoners, or a universe with many anthropic reasoners … the Jubilee answer being (obviously) the latter.

    It seems, therefore, that we can predict with confidence that every (rational) step we take to increase the number of people engaged in anthropic reasoning will succeed.

    Systems biology? You bet. Synthetic biology? Sure. Space colonization? Guaranteed. Time travel? Okey-Dokey.

    Good luck in the future? Heck, it’s anthropically guaranteed … just as surely as our good luck in the past.

    The point being … it’s all good … the fix is in … the odds are in our favor (even if we don’t know how or why) … and we’re guaranteed to win … provided we keep trying!

    Is the above reasoning tongue-in-cheek? Well, it started that way … but now I dunno … maybe the “Atemporal Anthropic Principle” compelled me to write it … and you to read it! :)

  7. Michael Brazier Says:

    I don’t think it makes sense to say “it’s more likely that I exist in a universe with more people like me, than in one with fewer”, even in a loose, qualitative way — never mind assigning numerical probabilities to the answers. That would imply that, for any natural number N, a population of N+1 is more likely than one of N, N+2 is more likely than N+1, N+3 more likely than N+2, and so on. But since there are an infinity of natural numbers, the expected population of people like me, given that I exist, is unbounded; an absurd conclusion.
    Since all the arguments Scott considers in Lecture 17 rely on the notion that a larger population is more probable than a smaller one, it’s not surprising that they all lead to peculiar conclusions …

  8. John Sidles Says:

    Michael Brazier Says: I don’t think it makes sense to say “It’s more likely that I exist in a universe with more people like me, than in one with fewer”

    Golly … that is indeed a slippery point.

    Because what sensible answers could purely anthropic reasoning give to the question “What is the most likely population of sentient beings in the observable universe?” … other than 0, 1, or infinity, of which only “infinity” is not directly contradicted by data?

    Since purely anthropic reasoning (seemingly) fails, and known physics gives no answer, this seems to imply that new physics remains to be discovered. Hurrah! :)

    Specifically, the physics that sets the population scale.

    And yet, one somehow hates to think that “mere physics” could determine such an important number … :)

  9. Scott Says:

    But since there are an infinity of natural numbers, the expected population of people like me, given that I exist, is unbounded; an absurd conclusion.

    Michael: Let p_n be the “prior probability” of there being n people like you (supposing n is finite). Then at a purely mathematical level, SIA is not absurd so long as

    Σ_{1≤n<∞} n·p_n

    is finite. For example, maybe you thought naïvely for some reason that there was a ~1/n^3 probability of n people. Then after applying SIA, you should expect that the probability of there being n people scales like ~1/n^2.
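
    To make the reweighting concrete, here is a minimal numerical sketch (in Python; the truncation at n = 10^6 is an arbitrary choice, since the tail contributes negligibly):

        import numpy as np

        # Minimal sketch of SIA reweighting: start from a naive prior p_n ~ 1/n^3
        # over the number n of people, truncated at N for computability.
        N = 10**6
        n = np.arange(1, N + 1)
        prior = 1.0 / n**3
        prior /= prior.sum()                # normalize: p_n = n^-3 / zeta(3)

        posterior = n * prior               # SIA: weight each world by its population
        posterior /= posterior.sum()        # renormalize; now posterior ~ 1/n^2

        print((n * prior).sum())            # sum of n*p_n = zeta(2)/zeta(3) ~ 1.37, finite
        print(posterior[1] / posterior[0])  # ~ 0.25 = (1/2)^2, i.e. 1/n^2 scaling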

    SIA is also not absurd at a mathematical level if p_∞ > 0 — that is, if there’s a nonzero probability of infinitely many people — for in that case, it simply tells you that infinitely many people is an absolute certainty.

    (Here I assume we’re talking about a countably infinite number of people! :-) If there are different infinite cardinalities, then SIA tells you that the population should certainly be C, where C is the largest infinite cardinality satisfying p_C > 0. If no such cardinality exists, then SIA again becomes mathematically absurd, unless of course the prior probabilities p_C can be surreal numbers…)

    Needless to say, our being able to make sense of something on a mathematical level, doesn’t preclude its being absurd on almost every other level.

  10. John Sidles Says:

    Scott, for summertime reading, an enjoyable mathematics-themed novel (there aren’t very many of them) about a universe containing an uncountable infinity of sentient beings is Rudy Rucker’s White Light.

    Rucker discusses all the higher cardinalities in detail, and the exposition includes lengthy quotes from Cantor himself.

    For some unfathomable reason, this novel never made the bestseller lists. :)

  11. Douglas Knight Says:

    John Sidles,
    You are tailoring your predictions by manipulating the bins (the universe vs. the universe now). That’s usually a warning sign that there’s something wrong. It’s certainly true that anthropic reasoning tells us that we live in a populous universe, but only because the universe right now has a big population, at least compared to the alternatives. If population were going to keep increasing, we should expect to find ourselves later, not now.

  12. Jonathan Vos Post Says:

    “… the probability of there being n people scales like ~1/n^2….”

    So I should expect that there are zeta(2) = pi^2 / 6 ~ 1.6449340668482264364724151666460251892189499012067984377355582293700074704032 people?

    That’s Adam and Lilith, who is roughly 64% human, before an anthropically unlikely Eve is created for the Garden of Eden configuration by MANifold surgery?

  13. John Sidles Says:

    Douglas Knight says: … if population were going to keep increasing, we should be later, not now …

    You are absolutely right … that is why discounting temporal information is the key move that converts dystopian anthropic reasoning (we’ve been lucky so far, but Doomsday is nigh) into utopian anthropic reasoning (good luck is our past history *and* our future destiny … yeah baby!).

    My original post was written tongue-in-cheek—the topic being ‘Fun With the Anthropic Principle’—but it does provide us with a strong motivation to analyze the peculiar role that temporal considerations play in anthropic reasoning.

  14. Eden Says:

    Scott,

    you write (about the bank robber thought experiment):

    Question: It’s not clear to me that you can just take that limit, and not worry about it. If your population is infinite, maybe the madman gets really unlucky and is just failing to roll snake-eyes for all eternity.
    Answer: OK, we can admit that the lack of an upper bound on the number of rolls could maybe complicate matters. However, one can certainly give variants of this thought experiment that don’t involve infinity.

    Could you give an example of such a variant?

  15. Scott Says:

    Eden: What I had in mind was just the Doom Soon / Doom Late scenario introduced afterward, where there are only two possibilities and not an infinite number.

  16. Patrick (orthonormal) Says:

    I find it telling that the finite-population variant of the Dice Room example gives you exactly a 1-in-36 chance of death on the night you’re kidnapped; it’s only by assuming an infinite supply of people that you can argue for 9/10.
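
    Here is a minimal Monte Carlo sketch of that finite-population variant (the pool size of 10^7 and the 200,000 runs are arbitrary choices), which exhibits both numbers at once:

        import random

        def dice_room(pool=10**7):
            """One run of the Dice Room with a finite pool of potential victims.
            Batches grow tenfold; snake eyes (probability 1/36) kills the batch."""
            batch, kidnapped = 1, 0
            while batch <= pool:
                pool -= batch
                kidnapped += batch
                if random.randint(1, 6) == 1 and random.randint(1, 6) == 1:
                    return kidnapped, batch, True   # snake eyes: current batch dies
                batch *= 10
            return kidnapped, 0, False              # pool exhausted; everyone released

        runs, tot_kidnapped, tot_deaths, snake_fracs = 200_000, 0, 0, []
        for _ in range(runs):
            k, d, snake = dice_room()
            tot_kidnapped += k
            tot_deaths += d
            if snake:
                snake_fracs.append(d / k)

        print(tot_deaths / tot_kidnapped)           # ~ 1/36 ~ 0.028: your risk when kidnapped
        print(sum(snake_fracs) / len(snake_fracs))  # ~ 0.9, but only given snake eyes occurred

    The 9/10 figure only appears after conditioning on the game actually ending in snake eyes; unconditionally, a kidnapped person dies with probability 1/36.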

    Since the total population capacity of the universe as we know it is finite, I feel that some consideration like this should apply to the Doomsday Argument too, although we have the additional information of the current and past population size to work with.

    Also, it seems that the Many-Worlds Interpretation of QM suggests the SIA; if all the branches exist and you’re as likely to be one conscious mind-pattern in the multiverse as another, then of course branches with more such patterns are going to be privileged above branches with fewer.

  17. Chris Granade Says:

    A lot of these anthropic assumptions like SIA seem to dodge a big question (one which I didn’t notice at first – thanks to Andy): what is an observer? As ridiculous as the “soul warehouse” is, how else is one supposed to interpret the use of “observer” in this kind of anthropic reasoning? Does an AI that passes the Turing Test count? How about a puppy? There’s some implicit anthropocentric bias here, but it just gets swept under the rug.

    From another perspective, consider the Doomsday Argument again. Which probability is more useful in an empirical sense: the naïve or the SIA probability? Whenever we use the naïve reasoning in real life, it seems to give better results (we don’t run around saying the world’s gonna end without a good reason), so why shouldn’t we prefer that sort of reasoning?

  18. Greg Egan Says:

    Chris wrote:

    Whenever we use the naïve reasoning in real life, it seems to give better results (we don’t run around saying the world’s gonna end without a good reason), so why shouldn’t we prefer that sort of reasoning?

    I think we should, though I’d substitute the word “causal” for “naïve”. There’s nothing naïve about insisting that we don’t have access to information about the future, or to the number of intelligent jellyfish living in the atmospheres of gas giants, merely by noticing that we exist.

    On the definition of “observer”, Bostrom talks about the “reference class” in these kinds of arguments, which IIUC means, roughly, the class of beings you’re being invited to imagine you “might have been”, and hence which you are invited to suppose you’ve been selected from at random. What I argued for in my comment here is that what you’re really being invited to do is to care about strategies that optimise something for an inappropriately large reference class. Why “inappropriately large”? Well, it’s not that I can’t imagine being a jellyfish in a gas giant’s atmosphere, or a person living in the far future, and it’s not that I don’t care about such beings … but I really don’t want to blur the distinction between averages taken over classes that include such beings, and accessible, contemporary knowledge.

    For example, in the Doomsday Fallacy we’re (implicitly) asked to employ a strategy that will yield the best guess for the length of time a civilisation endures, if the strategy is used uniformly by every member of that civilisation over time, and the aim is to cause the expectation value of the average guess (an average that includes the guesses of people yet to be born) to match the average age of civilisations drawn from some distribution. If someone comes clean and tells me “devise a strategy with these goals”, well, fine, I know how to do that, but then the limited usefulness of the result, here and now, will be out in the open — which is, not very useful at all.

    Similarly for Boltzmann brains and intelligent jellyfish. Strategies that yield cosmological insights when employed across reference classes that include those beings don’t yield cosmological insights when employed across vastly smaller reference classes, such as “All humans living on Earth on 26 July 2008”. The anthropic sleight of hand is getting you to think that you “might have been” a member of a huge reference class which, through its extensive nature, gets to sample more of spacetime than you have access to yourself. Because “the class of beings I might have been” sounds very profound and philosophical, people get confused and have long arguments about its members. But if you cut the metaphysical crap and look at the set of beings you are actually able to average over in practice, you end up with sensible strategies for things like gambling or public health, and you give up the illusion of having cosmological knowledge or insights into the future that you really don’t possess.

  19. Radford Neal Says:

    Scott,

    A couple comments.

    First, what you call the “Dice Room”, also known as the “Shooting Room”, is puzzling only because of a technical error – it’s just not possible to set up a probability space for this problem that’s consistent with the standard axioms of probability theory, notably “countable additivity”: that the probability of the union of a countably-infinite collection of disjoint events is the sum of their probabilities. The non-technical symptom of this is that the puzzle only works if the supply of people is truly infinite. Any finite limit, no matter how large, and the puzzle disappears. See the following reference:

    Paul Bartha and Christopher Hitchcock (1999), “The Shooting-Room Paradox and Conditionalizing on Measurably Challenged Sets,” Synthese 118(3): 403.

    Some people have advocated using probabilities that don’t obey countable additivity, but the strange results that come from this make it not a viable option as far as I’m concerned.

    Second, I wonder what you think of the resolution of these problems in my paper below:

    Radford Neal (2006) Puzzles of Anthropic Reasoning Resolved Using Full Non-indexical Conditioning, available from http://arxiv.org/abs/math.ST/0608592

    As I recall, you said you weren’t entirely convinced by my arguments, but I didn’t ever hear the reasons for this from you…

    For others, the brief summary of my paper is that real Bayesian reasoning always conditions on everything you know. So you shouldn’t condition on being a red-head, for instance, but on being a red-head with a mole on their left foot, who likes classical music, who saw a butterfly land on the railing of their doorstep yesterday, who…. etc. Once you realize this, the equivalent of SIA follows naturally, since the more people there are the greater the probability of someone with exactly your memories existing.
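
    As a toy, back-of-the-envelope version of that last step (the per-person probability q of matching your exact memories is a made-up number, purely for illustration):

        import math

        # If each person independently matches your exact memories with tiny
        # probability q, then given n people,
        #   P(someone with your memories exists) = 1 - (1 - q)^n ~ n*q  when n*q << 1,
        # so conditioning on your full data multiplies the prior over n by ~n,
        # which is just the SIA weighting.
        q = 1e-12
        for n in (10**6, 10**7, 10**8):
            exact = -math.expm1(n * math.log1p(-q))   # 1 - (1 - q)^n, computed stably
            print(n, exact, n * q)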

  20. Jonathan Vos Post Says:

    I agree with Greg Egan about the absurdity of what organism we COULD have been; that you could have been me; that there but for the grace of God go I; that I could have been dead; that I was dead but could have come back to life; that I could have been an intelligent jellyfish in Jovian ammonia-methane slush; and Bostrom’s unintentional parody of Bayesian logic squeezing the multiverse into a ball to roll it toward some overwhelming anthropic question.

    For that matter, another imaginative writer agrees:

    I should have been a pair of ragged claws
    Scuttling across the floors of silent seas…
    And would it have been worth it, after all,
    After the cups, the marmalade, the tea,
    Among the porcelain, among some talk of you and me,
    Would it have been worth while,
    To have bitten off the matter with a smile,
    To have squeezed the universe into a ball
    To roll it toward some overwhelming question,
    To say: “I am Lazarus, come from the dead,
    Come back to tell you all, I shall tell you all”—
    If one, settling a pillow by her head,
    Should say: “That is not what I meant at all.
    That is not it, at all.”

    [Excerpts from "The Love Song of J. Alfred Prufrock" by T.S. Eliot]

  21. Job Says:

    I’m not afraid of saying stupid things.

    Like everything else, it takes practice. For example, say something stupid intentionally a few times a day, during lunch or immediately after saying something smart.

    It goes a long way towards leaving the lines of stupidity open. If you’re confident in your own intelligence you can handle looking like an idiot now and then.

  22. Job Says:

    You know, I don’t know whether I was kidding in my previous message. I really do strive to keep the lines of stupidity open on a regular basis.

  23. Luke G Says:

    This lecture really got me thinking! Some comments:

    Here’s another way to think about the error of the “Dice Room” problem. When I try to translate this mathematically, I get these:

    1) 1/36: A person enters the room; you have no other information. There is a 1/36 chance they die.

    2) 9/10: The killer has finished (he rolled snake eyes). A random person who entered a room is selected. They die with probability 9/10.

    The key point in case (2) is that you have the information that the process has terminated. That information completely changes your probability space.

    Here’s a variation on the first problem that got me thinking.

    1) God flips a fair coin; heads, he creates a world with aleph_0 redheads; tails, he creates a world with aleph_0 redheads and aleph_0 brunettes. You observe you have red hair. What is the probability God flipped tails given this observation?

    You run into problems with a Bayesian argument when there are potentially infinitely many observers…

    2) God flips a fair coin; heads, he creates a world with 1 redhead; tails, he creates a world with 1 redhead and 1 brunette. It is known that brunettes cannot do mathematics. You observe you have red hair. What is the probability God flipped tails given this observation?

    Does the brunette count as a potential observer, since they could never formulate a Bayesian argument?

    I guess this anthropic reasoning is so tricky because it’s inherently self-referential, which of course is the source of many mathematical paradoxes. How do you write mathematically, property X was observed, and *I* am the observer? Maybe you can’t, and you should only use the information that X was observed.

    In the case of the redhead-brunette problem, it gives an answer of 1/2 for all cases, because knowing red hair exists adds no information. In the case of the Presumptuous Philosophers, you have the information “A universe exists, and intelligent life exists”. (Of course you can keep adding more properties of our universe to the things we’ve observed.) This would give an answer somewhere between 1/2 and 1/billion of being in the smaller universe, depending on how frequent you believe the occurrence of intelligent life is.
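
    A tiny enumeration of problem (2) under this “condition only on what was observed” rule:

        from fractions import Fraction

        # Worlds in problem (2): heads -> one redhead; tails -> redhead + brunette.
        worlds = [("heads", {"red"}, Fraction(1, 2)),
                  ("tails", {"red", "brunette"}, Fraction(1, 2))]

        # Condition only on the impersonal event "red hair was observed to exist":
        consistent = [(coin, p) for coin, hair, p in worlds if "red" in hair]
        total = sum(p for _, p in consistent)
        p_tails = sum(p for coin, p in consistent if coin == "tails") / total
        print(p_tails)   # 1/2 -- both worlds contain a redhead, so there is no update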

  24. anon Says:

    Let the games begin:

    http://tech.slashdot.org/article.pl?sid=08/07/27/125241

    http://www.tomshardware.com/reviews/super-cooled-quantum-computing,1976.html

  25. Bram Cohen Says:

    Funny thing, I’m at Foo camp more often than not, but wasn’t invited this year, so I missed you.

  26. Dan Says:

    On http://www.wavemodel.org, Tony Booth suggests a reason for the fine structure constant being precisely cuberoot( pi/2 ) / ( 16*pi^2 ), which is about 0.5% out from empirical data, apparently. Anyone know of a critique of this in more mainstream physics writing?

  27. cody Says:

    Dan, I’m no particle physics expert, but a quick glance at the site does not promote confidence. I’d prefer to scrutinize the claim rather than the presentation, and the claim is somewhat dubious too, though I don’t yet see his exact justification.

    What does come to mind though, is that the fine structure constant is much more accurately known than to within 0.5% (along the lines of a few parts in a billion), and so he would need some pretty convincing arguments to explain the “huge” discrepancy.

  28. Scott Says:

    Dan, I also wouldn’t pay attention unless he could reproduce the known value exactly (and probably wouldn’t pay attention even then). See Wikipedia for a numerological formula that gets much closer but still looks extremely contrived.

    Some historical perspective might be helpful. Apparently Eddington had an argument for why the fine structure constant had to be exactly 1/136. Then experiments found that it was closer to 1/137, so he revised his argument to show that it had to be exactly 1/137. Then experiments showed it wasn’t exactly 1/137 either…

    Maybe a physicist reader can correct me, but my understanding was that there’s now a sort-of-consensus that one shouldn’t expect any simple closed-form formula for the fine-structure constant, since it presumably emerges from some more fundamental theory in a complicated and not terribly enlightening way, like (say) the proton mass. (Which is not to say the constant might not be calculable from that theory, just that the calculation wouldn’t be simple.)

  29. wolfgang Says:

    > since it presumably emerges from some more fundamental theory in a complicated and not terribly enlightening way

    alpha is the QED coupling constant at low energies and this coupling increases for smaller distances (=higher energies) and actually diverges at the Landau pole (i.e. QED is probably not a fundamental theory).
    Since unification (and thus perhaps simple formulas) is expected at high energies (i.e., at the Planck scale), one would indeed be surprised if a simple formula existed for alpha, a low-energy quantity.

  30. schani Says:

    I have a (probably stupid) question about the coin toss paradox. In the Bayesian calculation you assume that P[R|H]=1/2, but, considering that you looked in the mirror and discovered that you do have red hair, shouldn’t it be 1? P[R|H]=1/2 if you don’t know what your hair color is yet. What am I missing?

  31. Scott Says:

    schani: P[R|H] is the probability before looking in the mirror. I agree that P[R|R]=1.

  32. schani Says:

    Ah, now I understand!

    Ok, but what if God proposes a bet to the red-haired person (no matter how the coin landed): the person gets to bet on which side the coin landed. If they’re right, they win $150; if not, they lose $100. If the Bayesian reasoning is correct, the person should only bet on tails. However, no matter what the red-haired person bets on, he’ll be right half the time (i.e., if God runs his experiment N times, the red-haired person will be right about N/2 times, no matter which side he bets on).

    So what difference does it make if I am that person? No matter which argument I can muster to the effect that in my particular case, it’s more likely that the coin landed tails, every other red-haired person can make the very same claim.

  33. Scott Says:

    schani, the fallacy in that argument is that it assumes you “must have been” the red-haired person. Yes, you are red-haired, but the very fact that you could have been green-haired might change your probability estimate, and therefore what bets you’re willing to make. In particular, if God runs his experiment N times, then a believer in SSA would expect a bet on heads to be right only about 1/3 of the time. The prior probability distribution looks like this:

    Tails,RedHair: 1/2
    Heads,RedHair: 1/4
    Heads,GreenHair: 1/4

    or after conditionalizing on red hair:

    Tails,RedHair: 2/3
    Heads,RedHair: 1/3

    I wish I had a clearer way to explain it.
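
    Maybe a small simulation is clearer. Here is a minimal sketch of the SSA sampling story above, applied to schani’s $150/$100 bet:

        import random

        def one_run():
            coin = random.choice(["heads", "tails"])
            # Tails: God creates one red-haired person. Heads: one red-haired and
            # one green-haired; under SSA, "you" are sampled uniformly from the
            # observers who actually exist in your world.
            people = ["red"] if coin == "tails" else ["red", "green"]
            return coin, random.choice(people)

        trials = [one_run() for _ in range(1_000_000)]
        red = [coin for coin, you in trials if you == "red"]
        p_tails = red.count("tails") / len(red)
        print(p_tails)                               # ~ 2/3, matching the table above

        # schani's bet under this posterior: win $150 if right, lose $100 if wrong.
        print(p_tails * 150 - (1 - p_tails) * 100)   # bet tails: ~ +$67 expected
        print((1 - p_tails) * 150 - p_tails * 100)   # bet heads: ~ -$17 expected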

  34. Hal Says:

    Max Tegmark has a theory that the multiverse might literally be made of mathematics. We, along with the universe around us, are mathematical structures. Certain mathematical objects have the right internal structure to be self-aware, and those are observers like ourselves.

    In this model, many of these thought experiments don’t work. God Himself cannot change mathematics, any more than he can make 2 plus 2 not equal four. So while He can maybe control some corner of the multiverse (if He is a Mathematical Object), He can’t control the total number of redheads. Likewise Adam and Eve cannot control the population of the multiverse.

  35. Scott Says:

    Hal, I don’t understand what it means for the multiverse to be “made of mathematics” (what else could it be made of? :-) ), but suppose for the sake of argument that that’s both true and non-vacuous. Why would that affect any of these thought experiments? Are you saying that my reference class (the set of observers that I consider myself as having been drawn from) would then have to include all sentient beings in the mathematical multiverse, of which there are self-evidently many more than the paltry few in some particular thought experiment? If that’s the argument, then I don’t see why it’s specific to Tegmark’s multiverse idea: the same argument could be made about a sufficiently large single universe with lots of old-fashioned aliens in it.

    For the purpose of these thought experiments, it seems fair to me to assume that the reference class consists only of those beings that have been mentioned explicitly. For the whole point of these thought experiments is to elucidate assumptions like SSA and SIA, which if valid, ought to apply regardless of what we take the reference class to be.

  36. schani Says:

    OK, I think I understand the reasoning now. I think it can be reworded by using a property I’ll call “youness” instead of the ill-defined “you”:

    The world (with one or two people in it) gets created and at most one of the people in it is blessed with the youness property, and only the bet of the person with that property (if such a person exists) counts. The difference between SSA and SIA now seems to be the probability of some person getting youness. SSA assumes that no matter how many people there are in the world, the chance of there being one that has youness is always the same, whereas SIA assumes that the chance of there being a person with youness is higher if there are more people in the world.

    What I’d like to object to is not one of those two alternatives, but the assumption of youness to begin with. Youness seems to be a completely supernatural property that has no physical consequences.

    Am I wrong?

  37. Scott Says:

    Youness seems to be a completely supernatural property that has no physical consequences. Am I wrong?

    Well, if you’re standing outside the universe looking in on all the various people in it, then youness indeed has no physical consequences. However, if you are one of those people in the universe, then what you believe about the youness function (i.e., “what barrel were you picked from?”) can affect what bets you’re willing to make, as well as, e.g., whether you believe the human race is likely to go extinct soon.

    If you accept the Bayesian concept of rationality, then it’s impossible to avoid having some belief about the youness function. If the conclusion you want to draw from that is “so much the worse for Bayesian rationality,” you have my permission.

  38. schani Says:

    However, if you are one of those people in the universe, then what you believe about the youness function (i.e., “what barrel were you picked from?”) can affect what bets you’re willing to make

    Well, what you believe about God can also affect how you act in the world (help the poor vs fly airplanes into office buildings), but that doesn’t mean that there’s such a thing as God in the first place.

    Please correct me if I am wrong (as I hope I am): For the youness function to exist at all, God (the one in the thought experiment) not only had to create the one or two physical people in the universe, but also had to decide which one (if any) is you. Which seems to imply some form of supernatural soul.

    Sorry I’m such a pain in the ass :-(

  39. Scott Says:

    schani: Yes, whether to believe in a supernatural being who will reward and punish your actions, and if so which actions are rewarded and punished, is one question that an aspiring rationalist has to answer. Whether to believe you’re more likely to exist in a universe with more observers is another such question. Personally, I find the second question the harder of the two.

  40. schani Says:

    Thanks. I’ll read up on it.