Quantum Computing Since Democritus Lecture 4: Minds and Machines

Bigger, longer, wackier. The topic: “Minds and Machines.”

49 Responses to “Quantum Computing Since Democritus Lecture 4: Minds and Machines”

  1. Nagesh Adluru Says:

    Very nice lecture! Based on the selfish motive of advertising my posts, and the popularity of your blog, I take this lecture of yours as an opportunity to present two links: Realistic perception, perceptual reality and Man, Machine and Math. I hope you don’t mind :)

  2. Cheshire Cat Says:

    “Yes, it’s mysterious, but one mystery doesn’t seem so different from the other.”

    LOL, that’s a great strategy. If you have nothing interesting to say about a question, deny that the question exists…

  3. Joseph Says:

    So for example: one popular argument is that, if a computer appears to be intelligent, that’s merely a reflection of the intelligence of the humans who programmed it.

    You can think of that as an argument against Intelligent Design. ID might imply that we’re God’s robots. OTOH, if we were produced by evolution, then we’re nobody’s robots.

  4. Braithwaite Prendergast Says:

    Thank you for posting your valuable lectures, Professor Aaronson. I was surprised by your argument about doubting whether anything that looks different from a human can think like a human. The difference between a metal robot and a human is qualitatively distinct from the difference between human racial groups.

  5. Anonymous Says:

    He says that if computers someday pass the Turing Test, then we’ll be compelled to regard them as conscious.

    I think that he is confused. I don’t regard consciousness and intelligence as the same. They seem to be two different things to me. Consciousness is an awareness of your environment, while intelligence is the ability to solve problems. I do not think that you need to be conscious in order to be intelligent, so I think it is fine to believe that a machine can be very intelligent without being aware at all of its environment.

  6. Anonymous Says:

    Are these lectures exact transcripts of the lecture or a summary of what happened during the lecture? I admire you for the amount of hard work you are putting into it. It is one thing to give wonderful lectures and another to type the entire lecture out. Thanks

  7. Douglas Knight Says:

    Thank you for the opportunity to comparison shop search engines. I’ve always meant to do it, but it wasn’t until Google failed me on “corporate business alliance with starbucks” that I made a systematic survey. Not that the particular query seems like a good test.

  8. Anonymous Says:

    Can we assume without loss of generality that a computer program has access to its own source code?

    I’ve tried very hard, but I can’t get two instances of gdb to debug each other.
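    For what it’s worth, the affirmative answer doesn’t require a debugger: by Kleene’s recursion theorem, any program can be transformed into an equivalent one that carries its own source code. A minimal sketch of the underlying trick, as a Python quine (the variable name `src` is just an illustrative choice):

```python
# The two lines below form a quine: run on their own, they print
# exactly themselves. This is the constructive trick behind Kleene's
# recursion theorem: store a template, then substitute it into itself.
src = 'src = {!r}\nprint(src.format(src))'
print(src.format(src))
```

    The same substitution trick is how one gives any program access to "its own source" in principle, whatever the operating system allows in practice.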

  9. Kurt Says:

    You will, I trust, be revisiting the question of consciousness later on, when you get to the question of interpretations of quantum mechanics?

    BTW, I agree with anonymous 10/3/06 10:42:06 PM. Chalmers has postulated that there is a correlation between consciousness and information processing, which might be taken to suggest that the more intelligent something is, the more “conscious” it will be. I’m sure Chalmers’ theory is more nuanced than that, but the fundamental idea seems wrong to me. A machine that can fully pass the Turing Test would certainly be intelligent (pretty much by definition), but whether or not it would be conscious would seem to depend on the details of how it is constructed. Conversely, it seems to me that it ought to be possible to design a machine that is “dumb” and yet has consciousness. But how would we know?

    Do you think there is anything in quantum theory that offers any genuine insight into any of this?

  10. Anonymous Says:

    Is the course going to tackle Newcomb’s Paradox? Worthy of a lecture on its own!

  11. Osias Says:

    If that which before the 1800’s was called water turned out to be CH4 instead of H2O, would it still be water, or would it be something else?

    The same thing, but the world would be different, like having a 4th primary color and a 5th sex.

  12. Niel Says:

    Anon 10:42 said:

    I do not think that you need to be conscious in order to be intelligent, so I think it is fine to believe that a machine can be very intelligent without being aware at all of its environment.

    To be intelligent without an awareness of its environment boils down to what “awareness” means. Does it mean having information about its environment? (Is an intelligent but unaware machine one which is able to pass the Turing test, but never actually does because it doesn’t interact with other computers or with people?) Or is “awareness” just a synonym for “consciousness”, bringing us back to square one?

  13. scott Says:

    cheshire cat:

    LOL, that’s a great strategy. If you have nothing interesting to say about a question, deny that the question exists…

    No, it’s emphatically not denying that the question exists. It’s reducing it to another unanswered question. There’s a big difference! 🙂

  14. scott Says:

    braithwaite:

    Thank you for posting your valuable lectures, Professor Aaronson.

    I’m not a professor yet, but I do appreciate your newfound cordiality.

    The difference between a metal robot and a human is qualitatively distinct from the difference between human racial groups.

    What about the difference between a latex robot and a human? (Imagine that the robot looks, moves, and sounds as much like a human as you want.)

  15. scott Says:

    anonymous:

    Is the course going to tackle Newcomb’s Paradox?

    You better believe it!

  16. scott Says:

    Are these lectures exact transcripts of the lecture or a summary of what happened during the lecture?

    No, they’re just my summary — partly written out beforehand, partly after, and in the future probably incorporating students’ lecture notes.

    I admire you for the amount of hard work you are putting into it. It is one thing to give wonderful lectures and another to type the entire lecture out.

    Thanks very much! The impetus was that I taught a course four years ago at Berkeley (called “Physics, Philosophy, Pizza”), and afterwards was frustrated at myself for not keeping notes. So I resolved not to make the same mistake a second time.

  17. HN Says:

    Problem is, Scott, by making your jokes public you’ll have to come up with other jokes the next time you teach another class :-).

    You are of course famous for your sarcasm, linguistically. Can you give an example of a sarcasm theorem? If there is a free-will theorem, there must be a sarcasm theorem.

  18. Anonymous Says:

    Scott said… [Is the course going to tackle Newcomb’s Paradox?] You better believe it!

    And surely, your students knew you were going to do that!

  19. Osias Says:

    I admire you for the amount of hard work you are putting into it. It is one thing to give wonderful lectures and another to type the entire lecture out. Thanks

    I second that!

  20. Scott Says:

    hn:

    Problem is, Scott, by making your jokes public you’ll have to come up with other jokes the next time you teach another class :-).

    On the contrary, think of how many comedians build their entire careers on one joke. If I’m to corner the quantum-complexity humor market (and decimate my rival), then clearly I need a recurring shtick.

    Can you give an example of a sarcasm theorem? If there is a free-will theorem, there must be a sarcasm theorem.

    You’re quite right: just as there’s a Free Will Theorem, there’s an even more important Sarcasm Theorem. The latter states that if experimenters are sarcastic, then every particle in the universe must also have a share of this valuable type of humor. The Sarcasm Theorem came as a complete shock to physicists, is not at all an obvious consequence of the Bell inequality, and ought to be covered in the popular press as often as humanly possible.

  21. Scott Says:

    kurt:

    You will, I trust, be revisiting the question of consciousness later on, when you get to the question of interpretations of quantum mechanics?

    Probably!

    Do you think there is anything in quantum theory that offers any genuine insight into any of this?

    Quantum theory certainly adds a new layer of complication to the debate. As for whether it adds any insight … well, hopefully writing and giving a lecture will force me to decide what I think!

  22. B. Prendergast Says:

    A robot that looks and acts exactly like a human would be indistinguishable from a human. For practical purposes, it would be considered a human. Therefore, no one would look at it and declare that it doesn’t think.

  23. Scott Says:

    OK then, now we’re on the slippery slope!

    How about a robot that looks and acts exactly like a human, except that it has three eyes and purple skin? And if that doesn’t matter, then why not make the skin out of silicon and the skeleton out of metal? How much of the robot do you need to replace with “inorganic” (but functionally identical) parts, before it no longer thinks?

    (Incidentally, for this discussion we’re less interested in how people would regard the robot than in how they should regard it.)

  24. Bram Cohen Says:

    You dodged the whole issue of how unreliable humans have historically been at noticing when foundational axioms are inconsistent, and the whole debate about whether any large cardinal axioms should be accepted.

  25. Cheshire Cat Says:

    OK, OK, let’s play the reduction game. I reduce your reduction (making the highly dubious assumption that it exists) to:

    “Is the universe infinite?”
    That’s like asking “Does God exist?”
    Yes, it’s mysterious, but one mystery is not so different from the other.

    Cosmologists, please stop wasting our time.

    One is used to postmodernist sages and mock social texters abusing (gleefully) scientific terminology but to find an actual scientist doing it in good faith is a little depressing…

  26. Chris Says:

    One is used to postmodernist sages and mock social texters abusing (gleefully) scientific terminology but to find an actual scientist doing it in good faith is a little depressing…

    The original question was (I think) “Can a sufficiently sophisticated computer be regarded as conscious?” Our best present criteria for consciousness are phenomenological, and if a computer exhibits all the phenomena we normally associate with consciousness, then we’re compelled to regard it as conscious. Otherwise we’re just being prejudiced, which is the last thing you want in a scientist.

    What question is this dodging exactly? The “hard problem of consciousness”? We can’t even answer the “hard problem of the electron”, but it hasn’t stopped us understanding how it behaves. It’s the postmodernist sages and social texters who waste their time with the “hard” problems.

  27. Braithwaite P. Says:

    I don’t want to slide down a slippery slope, but I wonder if a being that has three eyes and purple skin would be regarded as a human. I use the phrase “would be regarded,” because the word “should” is only used to signify the expectation of obedient response to an imperative command.

  28. Niel Says:

    I wonder if a being that has three eyes and purple skin would be regarded as a human.

    Probably not. However, whether they are regarded as human is irrelevant to whether they should be regarded as conscious.

    Making a robot look human is just a way to fool people into thinking of it as conscious, by making it easy to think of it as human. Functionally speaking, if a robot can fool you into thinking it is human under any superficial circumstances — behind a layer of latex, behind a terminal connection — then what do we know about it that would allow us to infer that it is not conscious?

  29. Br. Prendr. Says:

    In the matter of relevance, please note that Scott Aaronson’s example was not related to whether a robot is conscious or not. It is related to the problem that an observer would believe that other humans think because the human observer thinks. Other humans would include anything that looks and acts exactly like a human. This does not include beings with three eyes and purple skin.

  30. niel Says:

    Alright, then: what is the difference between “computation” and “thought”, and what is the difference between the problem of whether a human thinks and the problem of whether a robot thinks?

  31. Scott Says:

    You dodged the whole issue of how unreliable humans have historically been at noticing when foundational axioms are inconsistent, and the whole debate about whether any large cardinal axioms should be accepted.

    Give me a break — I can’t cover everything in one lecture! You could equally well say that I “dodged the whole issue” of quantum mechanics, or neurobiology.

    I did talk about consistency and large cardinals in Lecture 3, and I’ll certainly return to those things when we discuss Penrose’s views in more detail a few weeks from now.

  32. Bram Cohen Says:

    Oh yeah, I also wanted to point out how you dodged the whole issue of neurobiology 🙂 Inevitably, we will figure out how human brains work (although at the current rate of progress it may take a while) and once that happens we’ll no longer be able to hand-wavingly talk about the human brain being ‘mysterious’.

    The evidence indicates strongly (some might say overwhelmingly) that we’re basically monkeys with a little bit of symbolic manipulation ability tacked on, and that most of our self-awareness of how our own brains work is extremely misleading at best.

  33. Scott Says:

    cheshire cat:

    OK, OK, let’s play the reduction game. I reduce your reduction (making the highly dubious assumption that it exists) to:

    “Is the universe infinite?”
    That’s like asking “Does God exist?”
    Yes, it’s mysterious, but one mystery is not so different from the other.

    One is used to postmodernist sages and mock social texters abusing (gleefully) scientific terminology but to find an actual scientist doing it in good faith is a little depressing…

    Before getting depressed over what I said, you might try to understand what I said.

    The point is not that we have two things and they’re both mysterious; it’s that the mysteries are related, in the sense that if we could solve one then we could presumably solve the other.

    If we knew whether the universe was infinite, I don’t see how that would tell us anything about the existence of God or vice versa.

    A better analogy would be this: we don’t know if magnetic monopoles exist, and we don’t know if electric charge is always quantized, but we know (thanks to Dirac) that if magnetic monopoles exist then charge is always quantized. Or is that also postmodern BS?

    Chalmers is arguing that, if we understood how it is that a robot could think, then presumably we’d also understand how it is that we could think. Or taking the contrapositive, until we understand how it is that we can think, we can’t hope to understand how it is that a robot could think. You can agree or disagree with this view — obviously, it can no more be proved than anything else in this business — but it doesn’t strike me as ridiculous.
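    (As an aside, the Dirac result invoked above can be stated exactly. In Gaussian units, if even one magnetic monopole of charge $g$ exists, then single-valuedness of quantum wavefunctions forces every electric charge $e$ to satisfy

```latex
e\,g \;=\; \frac{n\hbar c}{2}, \qquad n \in \mathbb{Z},
```

    so the mere existence of one monopole anywhere would quantize all electric charge in units of $\hbar c/2g$. That's the sense of "reduction" intended: one open question rigidly constrains the other.)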

  34. Cheshire Cat Says:

    All I wanted was more of an argument for the “reduction”. I’m sure you’re aware that your rhetorical strategies, entertaining as they are, sometimes obscure the point you’re making 🙂

    My issue is with the statement that we will understand how robots are conscious just as well or poorly as we understand how humans are conscious. But these are two different things; moreover, both the human brain and the robot can be investigated empirically. These investigations may proceed at different rates and have different conclusions, or they may not. I see no reason to prejudge the issue by positing an arbitrary “reduction”.

  35. Cheshire Cat Says:

    Oh, and I also think the Dirac reference obscures the issue. I know what a reduction is, in a scientific sense. Which is why I get annoyed when a philosopher (or a scientist on behalf of a philosopher :)) borrows the term to give his speculation a spurious gravitas.

  36. brthwt prndrgst Says:

    I have to retract my former claim. The demarcation between human and non-human is vague in some cases. A purple human might be one with an extensive capillary hemangioma (strawberry birthmark). If a baby was born with three eyes as a result of pre-natal exposure to Phinolvex, it would still be regarded as human. I would assume that they have human thoughts and not robotic, programmed computations. Kasparov asserted that Deeper Blue seemed to have human characteristics which included a willingness to adopt unorthodox tactics.

  37. Scott Says:

    cheshire cat: The argument is just that the brain, so far as anyone can tell, looks like (as Marvin Minsky put it) a “computer made of meat”. Yet we believe, based on first-person experience, that this meat-computer somehow gives rise to consciousness. Since whether a computation runs in silicon or meat seems manifestly irrelevant to me, for me the easiest way out is to say that (1) certain kinds of computations (however they’re implemented) do give rise to consciousness, and (2) we don’t understand how or why. At least, that’s where my own ruminations ended up years ago; later I found out that Chalmers had developed the same ideas in more detail.

    Like most philosophical arguments, I would not present this one as having too much logical force. Rather, I’d present it as an intellectual option that’s “on the menu” for those who want to order it — one that not everyone might have realized was available to them. I’m sorry if that wasn’t clear.

  38. Anonymous Says:

    scott, why is your blog so popular?

  39. Douglas Knight Says:

    Bram Cohen:
    how unreliable humans have historically been at noticing when foundational axioms are inconsistent

    What are you referring to?

  40. Cheshire Cat Says:

    Douglas: Frege? And wasn’t there one of Quine’s that turned out to be inconsistent?

  41. Scott Says:

    scott, why is your blog so popular?

    I dunno — why are you reading it?

  42. Bram Cohen Says:

    doug knight: for starters, there’s Russell’s paradox. Frege was no fool, but he screwed up in a major way.

  43. Anonymous Says:

    scott, why is your blog so popular?

    I dunno — why are you reading it?

    i’m reading it because it is so popular.

  44. Douglas Knight Says:

    Bram Cohen:
    I knew you were talking about Russell’s paradox, but maybe I don’t know enough about what Frege did to appreciate it. Anyhow, now that you’ve confirmed that you have other examples, what are they?

    Cheshire Cat:
    Could you say more?

  45. Scott Says:

    scott, why is your blog so popular?

    I dunno — why are you reading it?

    i’m reading it because it is so popular.

    Just following the crowd, eh? Baah-a-a-a, baah-a-a-a…

  46. Anonymous Says:

    >>>scott, why is your blog so popular?

    >>I dunno — why are you reading it?

    >i’m reading it because it is so popular.

    Dude, the anthropomorphic principle competition is over.

  47. Cheshire Cat Says:

    Guess it’s now the ovimorphic principle competition…

  48. Cheshire Cat Says:

    Douglas, I’m not an expert on logic, but I believe Quine defined an extension of his New Foundations system (proposed as an alternative to ZF) to include classes, and this extension was proved inconsistent by Rosser. Quine later modified his axioms so that Rosser’s proof no longer held. However, I’m not sure how many people believe NF is consistent, let alone the extension.

  49. Douglas Knight Says:

    That would be:
    Rosser, Barkley. “The Burali-Forti paradox.” J. Symbolic Logic 7 (1942), 1–17.

    It is my understanding that the consistency of NF is open.

    Rosser wrote a book, “Logic for Mathematicians” (1953), using NF, so he seemed to have faith in it even after he found problems. But I don’t know what one would expect of him, or whether that’s evidence of anything.