Cerebrum-stuffer from Shtetl Claus

Ho³!  Home with family for the holidays and looking for something to do?  Then check out the archives of our 6.893 Philosophy and Theoretical Computer Science course blog.  The course just ended last week, so you can find discussions of everything from the interpretation of quantum mechanics to Occam’s Razor to the Church-Turing Thesis to strong AI, as well as links to student projects, including Criticisms of the Turing Test and Why You Should Ignore (Most of) Them, Barwise Inverse Relation Principle, Bayesian Surprise, Boosting, and Other Things that Begin with the Letter B, and an interactive demonstration of interactive proofs.  Thanks to my TA Andy Drucker, and especially to the students, for making this such an interesting course.

21 Responses to “Cerebrum-stuffer from Shtetl Claus”

  1. Timothy Gowers Says:

    I look forward to delving further into the archive, but here’s a preliminary reaction to one small point: the discussion of the criticism that a brute-force machine could in principle pass the Turing test. I accept (of course) that such a machine couldn’t exist in our universe. However, one could imagine a machine that was similar to a human brain but did certain tasks by brute force — a bit like a real human acting on autopilot but much more so. I can imagine machines of this type that I would not want to call fully intelligent, though I would certainly be very impressed by them. For example, if I found out that it did mental arithmetic in the way that a pocket calculator does arithmetic, with built-in delays to mimic human behaviour, then I’d want to say that it was missing something important.
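    A minimal Python sketch of the kind of machine described here, with an entirely invented delay model (the function name and the timing constants are illustrative assumptions, not anyone's proposal), might be:

        import random
        import time

        def calculator_with_human_delays(a: int, b: int) -> int:
            """Multiply at machine speed, then pause as a person might."""
            answer = a * b  # exact, calculator-style arithmetic
            effort = len(str(a)) + len(str(b))  # crude proxy for mental difficulty
            time.sleep(0.8 * effort + random.uniform(0.0, 2.0))  # built-in delay
            return answer

        print(calculator_with_human_delays(37, 46))  # 1702, after a pause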

  2. Arul Says:

    Wow, impressive archive. Going to read at least a few sections of interest!

  3. rrtucci Says:

    Wow, that course was “not even philosophy”.

  4. Radford Neal Says:

    I think the solution to the “brute force” objection to the Turing Test is that the Turing Test does not purport to identify the spatial and temporal location of the intelligent entity that it may indicate exists. If the machine interacting with the Turing Test interviewer works by looking up responses in a stupendously large look-up table, then how did that table get filled in? If it was filled in by an (impressively long-lived) human, or by an AI that doesn’t use such a look-up table, then there is indeed an intelligent entity producing the responses to the interviewer’s questions. It’s just not located where you might have assumed it is.

    On the other hand, suppose that the table was filled in at random, and the interviewer knows this. Will the interviewer then be convinced that an intelligent entity exists? No, they will attribute the apparent intelligence of the machine to chance. Although I’m not sure it’s ever been made explicit, it’s clear that the real criterion for success in the Turing Test is not that the machine has made a long series of human-sounding responses, but rather that you have inductively concluded that it will continue to make such human-sounding responses to future questions. You won’t draw such a conclusion if you know that the responses are random.
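    As a purely illustrative sketch of the look-up-table machine described above, one can key the table on the entire dialogue so far, so that any intelligence in the responses was exercised by whoever filled the table in (all names and entries here are hypothetical):

        # Toy lookup-table conversant; TABLE and respond are made-up names.
        TABLE = {
            (): "Hello! Ask me anything.",
            ("Hello! Ask me anything.", "What is 7 times 8?"): "56, of course.",
        }

        def respond(history: tuple) -> str:
            """Return the canned reply stored for this exact dialogue history."""
            return TABLE.get(history, "Hmm, let me think about that...")

        history = ()
        history += (respond(history), "What is 7 times 8?")
        print(respond(history))  # -> "56, of course."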

  5. Abel Says:

    Amazing archive. Stumbled upon it some time ago, actually, and wondered when you were going to publicize it.

    About the brute-force objection to the Turing test: well, we all know that there are properties of a process that depend on aspects beyond which inputs map to which outputs (from a complexity point of view, for example, we have time and space consumption). The way I view the brute-force objection to the Turing Test is as saying that whatever kind of intelligence we are looking for with the Turing Test, it is probably a property of this kind. That seems reasonable to me.

    (By the way, I would like to thank #4, the point in the first paragraph is very good!)

  6. Timothy Gowers Says:

    I agree with Abel. In particular, I think that restrictions on short-term space consumption are a very important aspect of human intelligence: a program that could mimic human behaviour but only by making huge space demands would be exhibiting something that one could choose to call intelligence, but I think it would be fundamentally different from human intelligence. Having said that, I’m not completely confident that such a program would pass the Turing test, since it would have to pretend not to be able to do all sorts of things that it could in fact do, and it would have to be able to tell what those things were.

  7. aris Says:

    Interesting piece by Colin Allen in yesterday’s NYT (“The Future of Moral Machines”).

    http://opinionator.blogs.nytimes.com/2011/12/25/the-future-of-moral-machines/

    Includes this:

    “The neuro- and cognitive sciences are presently in a state of rapid development in which alternatives to the metaphor of mind as computer have gained ground. Dynamical systems theory, network science, statistical learning theory, developmental psychobiology and molecular neuroscience all challenge some foundational assumptions of A.I., and the last 50 years of cognitive science more generally. These new approaches analyze and exploit the complex causal structure of physically embodied and environmentally embedded systems, at every level, from molecular to social. They demonstrate the inadequacy of highly abstract algorithms operating on discrete symbols with fixed meanings to capture the adaptive flexibility of intelligent behavior.”

  8. Noam Says:

    A related workshop: http://www.fil.lu.se/conferences/conference.asp?id=51&lang=se

  9. Tim Converse Says:

    I was interested by Timothy Gowers’ comment that he would want to ascribe less human-like intelligence to a machine that essentially had a “calculator co-processor” to do arithmetic. I’m wondering what the essential features of this objection are. Is it the speed, the automaticity, or the fact that the workings of the calculator are opaque to the rest of the agent?

    The human visual system has a lot of these properties – fast, dedicated to task, mostly opaque to conscious inspection. You could say that humans have a “vision co-processor”. One could imagine humans meeting an alien species for whom low-level visual processing is conscious and effortful – would it be reasonable for them to feel that something was missing in our intelligence? I fear that this kind of objection implicitly equates human intelligence with the conscious slice of intelligence.

    Finally I have to ask – as a human is it unfair to take the Turing test with recourse to a calculator on your desk? 🙂

  10. Abel Says:

    Also, a question somewhat related to the brute force criticism of the Turing Test that might be interesting is whether some parts or functions of the brain do work in a way similar to a brute force table (sorry if this is a stupid question, I am not that familiar with the brain). If the answer was affirmative, then it would be interesting to look at which kinds of intelligence are associated with these parts/functions of the brain.

  11. Querious Says:

    I guess it is self-evident that most reading this blog would disagree, but I was wondering if anyone was interested specifically in commenting on the last sentence of Aris’ NYTimes reference: “They [the newer approaches ‘beyond’ the computer] demonstrate the inadequacy of highly abstract algorithms operating on discrete symbols with fixed meanings to capture the adaptive flexibility of intelligent behavior.”

  12. Jiav Says:

    Abel, what feature(s) should let us say that some system does or does not work similarly to a brute-force table?

  13. Chris Says:

    I loved your paper on the possible applications of computational complexity theory to questions in philosophy, so I know I’m going to love this! Thanks for putting this up for us all to enjoy, Scott! :)

  14. John Sidles Says:

    I have to agree with (what I take to be) one thrust of Tim Gowers’ comments, namely that problematic issues are associated with “brute force” Turing Machines, even ones that pass the Turing Test.

    One possible path toward formalizing these issues would be to require (as part of the Turing Test) the feasible extraction of a witness from any purportedly sentient algorithm, with said witness amounting (in essence) to a short explanation of “How I work”.

    The intuition being that any “brute force” Turing Machine would thereby be obligated, as a required ingredient of sentience, to provide (in essence) a short algorithm for emulating itself … whereupon the “brute force” algorithm renders itself nugatory.

    In human terms, the required process of witness extraction would be, very broadly speaking, the TM equivalent of a psychoanalysis — which indeed is a quintessentially human activity.

    In accord with ordinary human understanding, we would scarcely call an entity that could not concisely rationalize its actions “sentient” in the common sense of the word.

  15. Abel Molina Says:

    @Jiav: Great question! I can give you a partial answer, using as a toy example systems that multiply two numbers m and n.

    In the non-brute-force case, we could have a system such that when n = 1, no activity is detected other than a copy of m to the output. Otherwise, the part of the system that holds the answer holds first m, then the answer to 2*m, then the answer to 3*m, and so on, until it finally holds the answer n*m. One could generalize this to other problems in terms of the system holding, during its computation, answers to partial subproblems which combine to give the final answer in a nontrivial way (I add the last requirement to try to avoid the case where the system could be seen as having different brute-force tables whose answers are easily combined; in the multiplication case, this could be different brute-force tables for different digits of the answer).

    In the brute-force case, we could be registering activity in a different spatial region of the system for each possible input, and after that a copy of the state of that region of the system to the part of the system holding the final answer. In the multiplication case, this would be even clearer if the system were laid out in a two-dimensional way: as we varied the m/n input, the activated region of the system would move along the x/y axis.
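    To make the contrast concrete, a small Python sketch of the two kinds of system might look as follows (the function names and the bound LIMIT are illustrative assumptions):

        def multiply_incremental(m: int, n: int) -> int:
            """Non-brute-force system: the answer register holds m, 2*m, ..., n*m."""
            if n == 1:
                return m  # nothing happens beyond copying m to the output
            answer = m
            for k in range(2, n + 1):
                answer += m  # the register now holds k*m, a partial subproblem
            return answer

        # Brute-force system: a separate table entry for every possible input pair,
        # like the two-dimensional layout described above, with the active region
        # moving along the x/y axis as m/n varies.
        LIMIT = 100
        BRUTE_TABLE = {(m, n): m * n
                       for m in range(1, LIMIT + 1) for n in range(1, LIMIT + 1)}

        def multiply_brute_force(m: int, n: int) -> int:
            return BRUTE_TABLE[(m, n)]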

  16. TonyK Says:

    Am I missing something, or is that interactive proof demo completely useless? The way it’s set up, it doesn’t convince me of anything at all, certainly not that the graph has a 3-colouring. Oh right, I just have to press Reveal… Surely there’s a better way to do this?

  17. Jiav Says:

    Abel, let’s consider multiplication using the digital equivalent of slide rules. This means looking up within three tables, or within a single table three times, plus a trivial combination. Should I understand that you would see this as a brute-force case?
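    (For concreteness, assuming a single table of common logarithms up to 10,000, a Python sketch of the method could be: two forward lookups, a trivial addition, and one reverse lookup in the same table.)

        import math

        # Toy "digital slide rule"; a real table would interpolate between entries.
        LOG_TABLE = {x: math.log10(x) for x in range(1, 10001)}

        def multiply_slide_rule(m: int, n: int) -> int:
            log_sum = LOG_TABLE[m] + LOG_TABLE[n]  # the trivial combination
            # Reverse lookup: the entry whose log is closest to the sum
            # (exact whenever m*n stays within the table's range).
            return min(LOG_TABLE, key=lambda x: abs(LOG_TABLE[x] - log_sum))

        print(multiply_slide_rule(20, 30))  # -> 600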

    Regarding the neurosciences, I would propose several answers:

    If you consider that something must remain fixed in order to count as a lookup table, then you’ll hardly find anything in our brains that fits this description.

    If what matters is a structure laid out in an n-dimensional matrix, then many neurological structures are known to be more or less organized this way – it’s a general rule, not an exception. The combination of the activities of these different maps is, however, unlikely to be trivial, to say the least.

    Finally, one may wish to consider that lookup tables are a special case of a deeper principle: save processing time at the expense of memory space. If this idea belongs to the Platonic heaven, then our brain is its incarnation. 😉
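    One standard embodiment of that trade-off is memoization, sketched below with Python’s functools.lru_cache: the table is not fixed in advance but grows as the system runs, spending memory to avoid repeated processing.

        from functools import lru_cache

        @lru_cache(maxsize=None)
        def fib(n: int) -> int:
            """Naive recursion, made fast by caching every answer once computed."""
            return n if n < 2 else fib(n - 1) + fib(n - 2)

        print(fib(200))  # instant; without the cache this recursion is hopeless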

  18. Abel Says:

    @Jiav: I am leaning towards seeing the slide rule method as a non-brute-force case. It would partially depend on how the sum of the logs of the two inputs is performed, though. Also, in the way I look at the slide rule method, the third and final lookup to determine the answer is a reverse lookup in the same table as the first lookup (although of course, the method can be described equivalently in the way you propose). This makes me not consider the method something I would call brute force.

    Thanks for the comments about the neurosciences! As I said, I don’t know that much about those, so it’s always fun to learn 🙂 I would be interested if you could give any examples of neurological structures laid out in a matrix (I did an online search to try to find those, but it failed, I suspect due to a lack of knowledge of the relevant words in the area).

  19. Jiav Says:

    Abel, sorry, my answer seems jammed until Scott publishes it manually.

  20. 2012 Survey Results « Gödel’s Lost Letter and P=NP Says:

    […] working on AI/robotics/brain science, and this gives me opportunity to note that Scott Aaronson has posted the culmination of his Philosophy and TCS course, including a student project showing the Turing […]

  21. rrtucci Says:

    I think I’m going to black out my blog for a day, in protest, unless Scott posts something in this blog soon.