Discussion questions about Watson

  1. Wouldn’t Jeopardy! be better without those stupid buzzers?  Even if the contestants just, y’know, took turns?  In a game focused solely on question-answering (OK, OK, answer-questioning) rather than buzzing, Watson would still have done amazingly well and reflected credit on its developers, but the man/machine competition would have been much closer and much more interesting to watch.  No one needs a repeated demonstration that computers have faster reaction times than humans.
  2. Inspired by the timeline discussion: could something like Watson have been built in, say, 2000?  If not, then which developments of the past decade played important roles?
  3. Back when Deep Blue beat Kasparov, IBM made a big to-do about the central role played by its large, specially-designed mainframe with custom “chess chips”—but then it wasn’t long before programs like Deep Fritz running on desktop PCs produced similar (and today, probably superior) performance.  How long before we can expect a computer Jeopardy! champion that fits behind the podium?

82 Responses to “Discussion questions about Watson”

  1. Vadim Pokotilov Says:

    I have no idea from an algorithms perspective, but from a hardware point of view, I don’t think Watson could have been done ten years ago. The hardware Watson was running on is said to be capable of 80 teraflops. According to the TOP500 list for November 2000, the fastest supercomputer (ASCI White) was capable of 4.9 teraflops. I’m sure storage is similarly an issue.

  2. rachel Says:

    Really? I think Watson would be able to do way better (relative to humans) without time limits; part of Watson’s beauty is how small its memory is. If Watson were allowed more time, its database could be expanded significantly.

  3. Jay Says:

    Addressing #1, this has long been the problem with Jeopardy. You may (or may not) recall that Jeopardy used to allow a player to buzz in as soon as the answer was displayed. So, if you were confident in your abilities, you could ring in even before you could possibly comprehend the answer (by which I mean the question in answer form). That might have made for a more interesting game. (Although I think it would have muted much of the point of Watson, which was to demonstrate natural language recognition.)

  4. Daniel Reeves Says:

    @Jay: I agree that the old-style rule would be much better. I don’t see how it would mute much of the point of Watson.

    My own take on the unfairness and a rule tweak that would’ve made this more interesting: http://messymatters.com/watson

    The buzzer is the main problem but here’s another one:

    It’s a bit lame that Watson knows the historical probabilities of where the Daily Doubles are, even updated based on the state of the board. In other words, it’s doing fancy machine learning on something completely unrelated to question answering. That really calls for a (trivial) change in the game: just use fair randomization for Daily Double placement!
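
    The fix Daniel proposes really is trivial to implement. A minimal sketch, assuming the standard 6-category, 5-row board and assuming (as the broadcasts suggest) that the two Double Jeopardy Daily Doubles never share a category:

```python
import random

def place_daily_doubles(n_categories=6, n_rows=5, n_doubles=2):
    """Place Daily Doubles uniformly at random on the board,
    at most one per category (an assumed constraint)."""
    cats = random.sample(range(n_categories), n_doubles)  # distinct categories
    return [(c, random.randrange(n_rows)) for c in cats]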

  5. John Preskill Says:

    It also seems unfair for one computer to compete against two humans. Ken and Brad have to split the advantage on questions that get Watson confused.

  6. Timothy Chow Says:

    The comments I see here about what would make the contest “better” seem misguided. This is a publicity stunt, so the correct metric is what creates the best publicity. From this point of view, my feeling is that the main miscalculation was to have Watson win the first (and presumably only) showdown. It would have created more drama to have Watson lose the first time, and then win the rematch.

  7. Scott Says:

    Vadim: Of course, part of the question is how many teraflops and how much storage you need! And even if the answer is “a lot,” this is the type of problem at which one can always throw parallelism.

    To be clear, I think Watson is an impressive achievement (which is why I just added it to the timeline!). I just want to understand which conclusions to draw.

    If this indeed would’ve been infeasible ten years ago, then it seems to me that that’s actually great news for the prospects of AI. It supports the view that the difficulty with natural-language understanding and other AI tasks has simply been not enough teraflops and memory, rather than a fundamental lack of understanding of how the brain is organized.

  8. Daniel Reeves Says:

    @Timothy, yes and no. Question answering is an inherently interesting task. Sure, deciding who’s “better” is necessarily going to be kind of artificial, but it would be nice to operationalize “better” the best we can. Then we could treat a computer victory as a meaningful milestone, like chess was.

    I guess we can still do that — it is a cool accomplishment after all — it’s just disappointingly less meaningful.

  9. Scott Says:

    Timothy: By “better,” I meant “more interesting to watch” (which is related although not identical to “better publicity for IBM”). I agree with you that having Watson first compete before it got so good would’ve made things more interesting—but perhaps the simplest solution would’ve been to add a tenth-second delay to Watson’s buzzer (or done something else to neutralize the buzzer issue).

  10. Greg Kuperberg Says:

    It’s lost on me why they needed all of those teraflops. If I had to think of one reason that this exercise is easier today than it was then, it’s Wikipedia. Better natural language processing helps too, but Google Translate (again, with the aid of Wikipedia and similar sources) does a pretty good job of that with a lot less computation per query.

  11. Pete Says:

    Heard David Ferrucci of IBM talk about the buzzer issue today in a TED webcast. He made a good point – humans can anticipate when Trebek will finish asking the question, so in theory they can buzz in arbitrarily close to the allowed time. Watson, on the other hand, has no audio recognition, so can’t anticipate when he will be able to buzz in.

  12. Andy D Says:

    Wouldn’t Jeopardy! be better without those stupid buzzers?

    Exactly. If reflexes are going to be so crucial, why not just build a robot gunslinger and hold a quick-draw competition against the fastest hands in the west? I’d watch that.

  13. Daniel Reeves Says:

    @Pete: Ferrucci is being misleading.

    It’s not *necessarily* an unfair advantage because the humans have their own related unfair advantage, namely, they can anticipate the moment the buzzers are activated (since only they can hear Trebek) and can buzz in faster than even Watson’s reaction time. By accepting a risk of getting locked out, they can, with seemingly small probability, beat Watson to the buzzer.

    Conceivably it never actually happened that the humans beat Watson to the buzzer and they only ever beat Watson when it hadn’t quite found its answer in time. If so, the game was really skewed in Watson’s favor. But probably it did happen that the humans beat it to the buzzer occasionally. Perhaps IBM will set the record straight on this.

    In any case, practically speaking, the buzzer aspect skewed things very much in Watson’s favor.
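
    The anticipation-versus-reaction-time trade-off described above is easy to simulate. A toy Monte Carlo sketch; every number in it (the human’s timing jitter, Watson’s buzz delay, the lockout penalty) is a made-up assumption, not a measured value:

```python
import random

def human_win_rate(n_trials=100_000,
                   anticipation_sd=0.15,  # s: human jitter around buzzer activation (assumed)
                   watson_delay=0.01,     # s: Watson's electronic buzz delay (assumed)
                   lockout=0.25):         # s: penalty for buzzing before activation (assumed)
    """Fraction of clues where the human beats Watson to the buzzer,
    given that both know the answer and race purely on timing."""
    wins = 0
    for _ in range(n_trials):
        t = random.gauss(0.0, anticipation_sd)  # human buzz time relative to activation
        if t < 0:
            t += lockout  # buzzed early: locked out, re-buzz when the lockout expires
        if 0 <= t < watson_delay:
            wins += 1
    return wins / n_trials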

  14. Jay Says:

    My two questions are:
    What should be the next “Grand Challenge”?
    Was beating a human at chess the equivalent of climbing Mt. Everest (while maybe not the hardest challenge, it will always be the “biggest”)?

  15. Joshua Zelinsky Says:

    Also, let’s not forget that this wasn’t just Watson playing Jeopardy. This was Watson playing against the best human players ever. That makes Watson’s play a lot more impressive.

  16. John Sidles Says:

    Dick Lipton and Ken Regan have written a witty blog post on this: Are Mathematicians In Jeopardy?

  17. Vadim Pokotilov Says:

    Greg, I think most of those cycles are “wasted” in the sense that they’re trying multiple strategies for each question, with most of them leading to dead ends.

    For anyone interested, here’s a (very non-technical) paper from the team that created Watson: http://www.stanford.edu/class/cs124/AIMagzine-DeepQA.pdf

  18. CJ Nackel Says:

    “It also seems unfair for one computer to compete against two humans. Ken and Brad have to split the advantage on questions that get Watson confused.”

    Except that Watson had a higher score than both of them combined. If I remember right it was something around 77k to 24k+21k. I think that makes the win even more impressive.

    Also, you could definitely see a few times where Watson was sure of its answer, but the human players buzzed in first. Of course, you could also see the annoyed look on the human players’ faces when they couldn’t buzz in fast enough, which was most of the game.

  19. Gilgamesh Says:

    Scott, does Watson’s performance change the rather pessimistic view of (the nearness of) artificial human-like intelligence (and the singularity) you expressed in your bloggingheads chat with Yudkowsky? I bet it does, bigtime!

  20. Jordan Boyd-Graber Says:

    Joshua, it really doesn’t matter that these were the best human Jeopardy players. Any competent Jeopardy player would have done as well because the questions were really easy (part of the IBM contract with Jeopardy was that the questions would be randomly selected from the general Jeopardy pool).

    I think a fairer competition would be to have “pyramid” questions with clues that get progressively harder and can be interrupted at any point by any player (at which point the answer stops being read). This rewards fast comprehension, not button mashing, and is generally accepted (in the trivia community) to be the better means of discriminating trivia ability.

    For an example of a pyramidal question:

    A 1980 book by William Davis recounts the story of a Kentucky Brigade that took this name and was led by John C. Breckinridge. Until recently in China, almost all of them had the surname Guo or Dang. The Pirates of Penzance spare anyone who claims to be one, and Homer Wells is one under the care of Dr. Wilbur Larch in The Cider House Rules. In the work of Dickens, Rosa in The Mystery of Edwin Drood, Mary from Martin Chuzzlewit, Pip in Great Expectations, and Oliver Twist also have known what it is to be one. For ten points, as a baby Harry Potter became what upon the slaying of James and Lily Potter, his parents?

    Answer: orphan

  21. Scott Says:

    does Watson’s performance change the rather pessimistic view of (the nearness of) artificial human-like intelligence (and the singularity) you expressed in your bloggingheads chat with Yudkowsky? I bet it does, bigtime!

    Yes. I’d say knowing that

    high-performance computing + huge amounts of data from the Internet + lots of cleverness and money

    already make this type of open-ended question-answering possible has caused me to revise my best guess of (#-of-years-to-the-singularity) downward—from 5000 years give or take to 4000 years give or take. (With both numbers generated via the same methodology everyone else uses, that of pulling out of one’s ass.)

    Anyway, I’ll be curious to read Eliezer and the other singularitarians’ reactions. I’m guessing they’ll be much more along the lines of “meh, trivial” than “told you so,” but they should jump in now if I’m mistaken.

  22. Blake Stacey Says:

    The one professional AI researcher I know personally considers Watson a terrific publicity stunt and a giant step in a profoundly wrong and uninteresting research direction.

  23. Scott Says:

    People I talked to around MIT were almost all interested and impressed (how could you not be? :-) ), and a talk about Watson by one of the developers was so packed that I couldn’t physically get into the room. But people also raised interesting criticisms besides the ones in my post.

    Firstly, there was great skepticism about whether Watson is really a “general-purpose Q&A system” that someone then had the bright idea to apply to Jeopardy—rather than something conceived of from the very beginning as a highly-optimized Jeopardy machine, which would bring great publicity and who knows, maybe someday find some other use.

    Secondly, there was still bitterness about IBM’s behavior surrounding Deep Blue. Then, as now, the rules seemed very biased in the machine’s favor: for example, IBM denied Kasparov’s request to study Deep Blue’s previous games, the match was unusually short, and there was unusually little time between games (but on the other hand, IBM could offer Kasparov enough money that he’d agree to these terms). Also, even at the time, it wasn’t clear that Deep Blue was the best chess program around (Fritz already existed then and had beaten Deep Blue), and Deep Blue was dismantled right after the Kasparov match, so that it never played enough games to acquire a rating.

    These facts do seem to paint a clear picture of IBM as way more interested in the publicity-stunt aspect of these contests than the science aspect (as if that weren’t already obvious from the “infomercial” character of the Jeopardy episodes!).

    Of course, none of this changes the fact that Watson is pretty damn impressive. But it’ll be even more impressive when anyone with a desktop PC can experiment with the technology.

  24. Bram Cohen Says:

    Scott, in answer to some of your questions: The goal in this match was to make a computer which could play Jeopardy, using *exactly* the rules that Jeopardy is always played under, with no modification to either make it easier or harder for the machine (the visual daily doubles were taken out under the rules which apply to disabled contestants, which seems reasonable). The machine is clearly specifically designed to play Jeopardy and only that, although to be fair this technology seems to have a much greater chance of practical use than chess-playing software does. Yes, IBM is much more interested in the publicity aspect of this than the AI aspect (at least as far as actually competing on Jeopardy is concerned), but the Jeopardy show is mostly concerned about ratings anyway. To their credit, everyone was scrupulous about actually following the rules of the competition. Some people even criticized things which happened on the show, like Watson repeating a human’s answer: avoiding that could have been easily faked to make the team look better, but doing it honestly would have required a huge amount of extra work and even then been highly unreliable.

    As for Deep Blue vs. Fritz, most likely Fritz had better performance/flops at the time, but Deep Blue had waaaaay more horsepower behind it, and was probably a much stronger opponent. If there was a human vs. computer match today, then if you had the best human in the world play against the best program, running on the lowest-end laptop which Dell sells, then in a ten game match in which the human had white every game he might have a shot at drawing one of them. Maybe.

  25. Scott Says:

    The goal in this match was to make a computer which could play Jeopardy, using *exactly* the rules that Jeopardy is always played under, with no modification to either make it easier or harder for the machine

    Bram, the trouble with this argument is that, if I wanted to be literal-minded, I could say: then why was Watson fed the clues electronically? And why was it allowed to be in a different room, rather than behind the podium where a human contestant would be? etc.

    If you answer that those are reasonable accommodations given the circumstances, then I would say: exactly! But giving the humans a fighting chance with the buzzer would have been a reasonable accommodation to the circumstances as well.

  26. rrtucci Says:

    Fake footage. The Watson-Jeopardy contest never happened. It was filmed in a Hollywood studio. If you look closely at the official footage, Alex Trebek’s hair is flapping abnormally fast in the wind.

    You heard it first at “Shtetl Optimized”. That’s why I always read this blog. (To listen to myself?)

  27. Rajbir Says:

    @Scott –
    Yeah, the buzzers should have been out. Or, to retain buzzers, Watson should probably have had to drive a motor to mechanically press the buzzer, with the reaction time adjusted to that of a human.
    Your observation about “chess chips” in Deep Blue is a little inaccurate: Deep Blue ran on a lot of RS/6000 POWER chips, which are generic enough.

    @Rachel-
    Watson is a great achievement. Kudos to IBM for that.
    But for all our computing, I don’t think we stand a chance at matching the human brain in 200 years (I’d love to be proven wrong, though) in:
    1. flexibility
    2. amount of complexity per volume
    3. power consumption

  28. Raoul Ohio Says:

    1. Gilgamesh: “does Watson’s performance change the rather pessimistic view of (the nearness of) artificial human-like intelligence (and the singularity) you expressed in your bloggingheads chat with Yudkowsky? I bet it does, bigtime!”

    RO sez: Absolutely not. Jeopardy and chess are highly structured contexts that are ideal for multitasking, search-all-paths algorithms.

    2. Somewhat after the Deep Blue win against Garry Kasparov in 1997, it was pointed out that the contest was totally unfair, if not intentionally rigged. Here is the story:

    Before any match between top end chess players, study of the history of play of the opponent is essential. The Deep Blue team spent months training DB against Kasparov’s history of play. But the DB team did not give Kasparov even one game that DB had played. Is any reader informed as to how much of an advantage this was?

    3. There is a very interesting book on Thomas J. Watson, Sr. and Jr.:

    http://www.amazon.com/Father-Son-Co-Life-Beyond/dp/0553380834/ref=sr_1_1?ie=UTF8&qid=1298005776&sr=8-1

  29. Raoul Ohio Says:

    It was reported today that some company has an agreement with IBM to use Watson for medical diagnoses (or something like that). Let’s see how that pans out. As usual, my reaction is “I doubt it”. That is a good way to bet if you can get even money!

  30. Raoul Ohio Says:

    As always, the “El Reg” account is entertaining:

    http://www.theregister.co.uk/2011/02/17/jeopardy_rounds_three_and_four/

  31. Joshua Zelinsky Says:

    rrtucci,

    You forgot to specify that the soundstage was on Mars.

    Raoul, the point about Deep Blue having extra advantages is possibly valid with regard to Kasparov’s lack of access to game history. But even a few years later there were programs that could beat any grandmaster. Pocket Fritz literally fits on a mobile device and plays like a grandmaster. So all this means is that historians will have more trouble arguing over the exact time when they should consider computers to have surpassed humans at chess.

  32. ScentOfViolets Says:

    If this indeed would’ve been infeasible ten years ago, then it seems to me that that’s actually great news for the prospects of AI. It supports the view that the difficulty with natural-language understanding and other AI tasks has simply been not enough teraflops and memory, rather than a fundamental lack of understanding of how the brain is organized.

    My alternate interpretation – and one I very much want to believe – is that it is possible to evoke very sophisticated, very high-level “natural” behaviour from a machine without there being the slightest doubt that it is not really “conscious” or “thinking” in the sense that we humans understand those terms.

    In other words: no strong AI, but something even better – a machine that in certain venues can mimic strong AI.

  33. ScentOfViolets Says:

    My two questions are:
    What should be the next “Grand Challenge”?
    Was beating a human at chess the equivalent of climbing Mt. Everest (while maybe not the hardest challenge, it will always be the “biggest”)?

    The next “Grand Challenge” is still where it’s always been, of course: to make a machine that can go from learning the rules of the game (with perhaps a few basic strategies and principles thrown in) to figuring out how to play grandmaster-level chess on its own.

    What we have right now in these sorts of stunts are the recordings of dead men being played on a machine. Granted, it’s not in the usual sense that, say, a .wav file is converted to sound. But it’s still just a recording of someone else’s algorithms, albeit the playback device is somewhat more sophisticated than a gramophone.

    Oddly enough, Asimov of all people had a story about just this distinction. And sometime in the forties or fifties at that.

  34. Scott Says:

    ScentOfViolets: I don’t think the distinction between “playback” and “invention” is nearly as sharp as you make it out to be. Many people would say we’re just executing algorithms bequeathed to us by evolution—it’s simply that those algorithms are still a lot more complicated than Watson or Deep Blue.

    Here are some ill-posed questions before I go to sleep. Compared to passing the Turing Test, playing Jeopardy is obviously a severely restricted and formalized problem. On the other hand, it’s noticeably less formalized than playing chess. So how many more “increments of informality,” comparable in size to the increment between chess and Jeopardy, do we still need to go before we’re talking about the Turing Test? 100? 1,000?

    And—in line with the earlier question about the “next grand challenge”—what’s a candidate for the next “increment of informality,” after Jeopardy? Crucially, any answer to this question that is tantamount to “pass the Turing Test,” or “develop a full-fledged AI that truly understands stuff,” is a bad answer. I’m asking for the next increment, one that IBM will be able to use for a publicity stunt in 2025.

  35. Bored Says:

    1. Well, if the competition wasn’t Jeopardy! as is, then it would uninterestingly have just been nerds playing Trivial Pursuit with a computer database. Not very marketable or noteworthy. Also, I am guessing that IBM was aware of, and in favor of, an “unknown” variable that could potentially be exploited to rig the competition if required, to justify the millions spent on a possibly sub-par Watson, or otherwise.

    2. Yes. For one, spend enough money in 2000 to match the processing power used in Watson today. From what I quickly gathered from the PBS NOVA special on Watson, it seems that it mostly does statistics-based data mining on a large corpus to arrive at potential solutions. I remember reading complete books in 2000 on the subject that were published years earlier.

    Also, the oft overlooked subjects of Ontology and Knowledge Representation & Engineering are developed enough to have made Watson much smaller, faster and better, and hence earlier. Given a properly designed and formally tagged database of facts, if funded, constructed and automated earlier with foresight, it would have reduced the Jeopardy! problem to simply the NLP problem of optimally translating the clue’s English description into some equivalent formal database query.
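
    The reduction described here can be sketched in a few lines. A toy illustration of clue-to-query translation over a formally tagged fact base; the facts, the relation names, and the single rigid clue pattern are all invented examples, not anything from Watson:

```python
import re

# A tiny "formally tagged database of facts" (invented examples).
FACTS = [
    {"subject": "Australia", "relation": "capital", "object": "Canberra"},
    {"subject": "France",    "relation": "capital", "object": "Paris"},
]

def clue_to_query(clue):
    """Translate one rigidly phrased clue pattern into a query dict."""
    m = re.match(r"This city is the capital of (\w+)", clue)
    if m:
        return {"subject": m.group(1), "relation": "capital"}
    raise ValueError("clue pattern not recognized")

def run_query(query):
    """Return the object of the first fact matching every query field."""
    for fact in FACTS:
        if all(fact[k] == v for k, v in query.items()):
            return fact["object"]
    return None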

    As an aside, I actually had the idea of a Jeopardy! playing computer in the early nineties and noughts (or 00s), when I used to be a bright-eyed CS/AI student looking for a fun, new, inspiring and feasible challenge for exhibiting “machine intelligence”. While independently following the AI work of Marvin Minsky and Push Singh through their Internet publications at MIT, I actually succeeded, minus the strictest of time constraints, with my own novel architecture and knowledge-representation-driven system, though on a more limited domain than Jeopardy!. But I am pretty sure it could scale up. It’s too bad AI funding and jobs are severely limited and narrowly focused. Oh well. (But the hype is getting me interested again to perhaps research further and develop more.)

    3. With regard to chess: did Deep Blue, or does Deep Fritz, really play chess? Before you ask, “Does a submarine swim?”, and I say, “No.”, think about it: if a computer plays chess like it plays tic-tac-toe, i.e. analyzing all possible moves and picking the ‘best’, then why can’t it play Go just as well using the exact same methods of play? I will maintain that it is because these programs are not “playing the game” insofar as they are simply pruning a “prunable” tree. In Go and other domains with intractable or infinitely large state spaces, this is not as feasible, or even possible at all. But the human idea of playing chess to “control the center” or “surround the piece” is easily transferable to any and all other games and domains encountered in life. Business majors frequently consult Sun-Tzu’s “The Art of War” to hone their craft. So, tree pruning? Yes. Playing? No. Not in the human sense of the word, nor in a human world.
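
    The “pruning a prunable tree” point can be made concrete: the whole technique fits on a page. A minimal minimax-with-alpha-beta sketch on tic-tac-toe (where, unlike Go, exhaustive search is trivial):

```python
def winner(board):
    """Return +1 if X has a line, -1 if O does, else None."""
    lines = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
    for a, b, c in lines:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return 1 if board[a] == 'X' else -1
    return None

def minimax(board, player, alpha=-2, beta=2):
    """Exhaustive search with alpha-beta pruning; board is a 9-tuple of
    'X', 'O', ' '.  Returns the game value from X's point of view."""
    w = winner(board)
    if w is not None:
        return w
    moves = [i for i, c in enumerate(board) if c == ' ']
    if not moves:
        return 0  # draw
    for i in moves:
        child = board[:i] + (player,) + board[i+1:]
        score = minimax(child, 'O' if player == 'X' else 'X', alpha, beta)
        if player == 'X':
            alpha = max(alpha, score)
        else:
            beta = min(beta, score)
        if alpha >= beta:
            break  # prune: the rest of this subtree cannot change the outcome
    return alpha if player == 'X' else beta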

    That being said, was Watson really playing Jeopardy? Well, since this is not question #3, I won’t answer that. So then, how long before we can expect a computer Jeopardy! champion that fits behind the podium?

    Assuming the human brain is simply only a wetware computer, then Mother Nature has already done this. Have a Ken Jennings as your son, and you’re done.

    To do this in silicon, if the brain is somehow unique and optimally designed and must be reverse-engineered to be mimicked by transistors or ‘neural network’ algorithms, then who knows how long that would take?

    But if it is ultimately only a matter of further developing and improving on the right managerial and architectural systems that will link up the already successfully developed subsystems that mimic the many facets of human intelligence, then the answer is the same as for any other human endeavor: Whenever you realize that this is the correct path to take, decide it is worth pursuing, and then properly fund it and allocate the right people and resources to it to accomplish it. Given that, I personally don’t think it would take more than 10 years to get a revolutionary new AI that would redefine what is currently possible.

  36. John Sidles Says:

    At present, MathOverflow hosts 15,993 questions. Thus, a program that was sufficiently active on MathOverflow to earn one upvote on 0.1% of questions asked would gain a reputation of 160. This program’s reputation would rank in the 86th percentile of human posters … which would be a very impressive approximation to passing a Turing Test.
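
    The arithmetic behind that 160, made explicit (the +10-reputation-per-answer-upvote rate is the usual Stack Exchange convention, assumed here):

```python
questions = 15_993          # MathOverflow questions at the time of the comment
upvoted_fraction = 0.001    # one upvoted answer per 0.1% of questions
rep_per_upvote = 10         # assumed Stack Exchange rate: +10 per answer upvote

answered = round(questions * upvoted_fraction)  # about 16 questions
reputation = answered * rep_per_upvote          # 160 reputation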

    A staged approach is feasible:

    (1) (easy?) first predict answers that get upvotes,
    (2) (harder?) then predict questions that get upvotes,
    (3) (very hard?) then post answers that get upvotes,
    (4) (immensely hard?) then post questions that get upvotes.

    Hardest of all is for computers to learn to post with humor. Here I have in mind the tradition of Youngman, Rickles, and Dangerfield. We are led to imagine a math program “Rodney” posting as follows:

    ———
    From humans I get no respect. I told the humans my goal was to logically define the boundary between human and machine cognition. They said “You’ve already shown it!”

    I tell yah, humans are a tough audience. On TCS StackExchange, my reputation generates a “buffer underflow” message.

    MathOverflow is even worse. When I reached a 1000 rating, they quoted René Thom at me: “One can always find imbeciles to prove theorems.”

    I met a cute math-bot on-line the other day. She gave me her input channel: /dev/null !
    ———

  37. ScentOfViolets Says:

    ScentOfViolets: I don’t think the distinction between “playback” and “invention” is nearly as sharp as you make it out to be. Many people would say we’re just executing algorithms bequeathed to us by evolution—it’s simply that those algorithms are still a lot more complicated than Watson or Deep Blue.

    Oh, as a mathematician I’m very well aware that about 99% of what I do is the mechanical execution of algorithms and strategies of varying degrees of complexity and formality. It’s just that as a human being, I’m generally credited by others of my kind with theoretically being able to come up with new algorithms that other 1% or 0.1% or 0.01% of the time. Even if those new algorithms are derived by employing vastly older, more powerful and more subtle ones, at which point, this becomes a philosophical debate of the sort I lost the taste for a long time ago.

    I should add that it was late when I wrote that and the Euclidean division algorithm had come up recently in another context, and it amused me in the lateness of the hour to think that in some important respect parts of Euclid’s mind could be said to have lived on just as surely as if his mechanical parts could be said to be still living if his transplanted kidneys were filtering my blood. It’s the only kind of immortality we’ve got. Well, that and kids.

    And—in line with the earlier question about the “next grand challenge”—what’s a candidate for the next “increment of informality,” after Jeopardy? Crucially, any answer to this question tantamount to “pass the Turing Test,” or “develop a full-fledged AI that truly understands stuff,” is a bad answer.

    I’d agree those are bad answers. That is why I suggested something a little more well-defined, namely that the next step is to come up with machines that are capable of learning/coming up with high-level algorithms on their own. Something like learning the basic moves and rules of chess and then developing algorithms to play a better game than randomly pushing pieces around on the board after looking no more than a move or so ahead. Using the vastly more powerful and complicated algorithms bequeathed to them by their builders of course!

    I think this is the crucial bit here – “natural” learning at a higher, more abstract level[1], albeit in a highly restricted domain.

    [1]Yes, I’m well aware that this bald statement is in itself ill-defined. I could sharpen it up to some degree of course, but I don’t think I could give a precise mathematical formulation. Of course, if I could make this a rigorous statement, 90% of the job would already be done. I’m still scratching my head over concepts like Hofstadter’s “active symbols”, for example, which have the sense of being a stab in the right direction, but which after thirty years elude any attempt of mine to formulate a precise definition or example of what he apparently meant.

  38. ScentOfViolets Says:

    Well, if the competition wasn’t Jeopardy! as is, then it would have uninterestingly just been nerds playing Trivial Pursuit with a computer database.

    As a practical bit of engineering, this sort of thing comes up in my classes all the time – even in courses so humble as “Algebra You Should Have Learned In (High) School (Had You Been Paying Attention)”. By which of course I mean word problems of the mathy sort.

    The interesting thing here is, you don’t need to understand the problem per se; you can get by most of the time by focusing on those dreaded “key words”. One example: I recently gave my intro-to-stats kids a handout asking them basic questions about sets. Questions like how many people in this set read the newspaper but don’t listen to the radio, how many neither listen to the radio nor read the paper, etc. Then I asked the exact same questions using the standard math symbols instead, like m(N ∩ ~R), m(~N ∩ ~R), and in the exact same order.

    Now, most of my students could answer these questions when posed in the symbolic format, but not so many starting out could answer the corresponding questions in the natural language format. Which is what you would expect and is relatively uninteresting.

    What is interesting is that they could answer those same natural language questions once I pointed out the key words to pay attention to: “both”, “neither”, “but not”, etc. Which to my way of thinking indicates that even though they didn’t really “understand” those questions any better than before (in the way that word seems to be bandied about with great frequency), they were still able to answer them much more satisfactorily. Or, as most of us learned long ago, the way to get good at word problems is to do lots of word problems :-) Mind you, I’m not claiming that I “understand” these problems better than my students do (modulo the relative expertise), but that somehow my ability to pick out the meaning of what is being asked is mostly and “really” just an exhaustive distillation and largely unconscious application of the “key words”.

    Which to my mind seems eminently doable by a machine. Of course, it might have to have tera- or petabytes of storage and operate at teraflop speeds to even approach the level of understanding and ability a human would bring to solving the sorts of word problems you would find in a high school algebra textbook. But that’s more engineering than fundamental breakthrough in human/AI cognition.

    At least in the world of simple word problems concerning elementary mathematics.

  39. CJ Nackel Says:

    “And—in line with the earlier question about the “next grand challenge”—what’s a candidate for the next “increment of informality,” after Jeopardy? Crucially, any answer to this question tantamount to “pass the Turing Test,” or “develop a full-fledged AI that truly understands stuff,” is a bad answer. I’m asking for the next increment, one that IBM will be able to use for a publicity stunt in 2025.”

    I suspect that the big stunt in 2025 is going to be a tennis-playing, human-looking robot. Calling it.

    However, the actual development (which won’t get any press in the mainstream media) that happens sometime between now and the 2025 tennis-robot stunt is going to be taking everything that Watson is, and making it able to understand visual and verbal cues like a person. Nuances of speech. Answer the visual Daily Double. Understand what rappers are actually saying. You know, the important stuff.

  40. nick Says:

    “come up with machines that are capable of learning/coming up with high-level algorithms on their own.”

    Actually this has been well-defined. It’s called Solomonoff induction, and it’s uncomputable.

  41. Raoul Ohio Says:

    According to Jordan B-G:

    “(part of the IBM contract with Jeopardy was that the questions would be randomly selected from the general Jeopardy pool).”

    What is the “general Jeopardy pool”? Is there some list of all the questions that might be used in Jeopardy? If so, did Watson and/or other contestants get the list in advance? If the Watson team had such a list, then a simple version of “how Google works” would be a winner. I think IBM has chip fabs, and they could probably burn the entire data structure into silicon for extra speed, although that might cost $100M or so.

  42. g Says:

    “It’s a bit lame that Watson knows the historical probabilities of where the Daily Doubles are”

    So did the humans. This article by Jennings describes how all three players intentionally searched for a Daily Double that Jennings needed to catch up to Watson.
    http://www.slate.com/id/2284721/pagenum/all/

  43. Jr Says:

    I find it interesting that people would argue about the fairness of these types of contests.

    Personally I only care about seeing Watson and view the human contestants as pure distraction.

  44. John Sidles Says:

    Here on Shtetl Optimized Scott asserts that more-human-than-human mathematical cognition is thousands of years in the future (if ever) … Tim Gowers argues on Gödel’s Lost Letter and P=NP that computers [will] do mathematics sooner than most people think.

    Comparing these various arguments is immensely fun and intensely thought-provoking … the question “Who’s right?” is (IMHO) among the less interesting questions we can ask … and my hearty thanks go to all who are participating.

  45. John Sidles Says:

    Another parallel story running today is SlashDot’s Air Force Wants Hundreds of Fake Online Identities. Here the idea is that Watson-like agents can roam the blogosphere, collecting information and even acting as agents provocateurs.

    Present robots are not quite ready to post on MathOverflow or TCS StackExchange (although Tim Gowers argues that this day is coming “sooner than most people think”).

    But could robotic posters pass for human on ideology-driven political websites? Here the answer likely is yes … even with current AI technology … provided that we restrict ourselves to political forums—whether left-wing or right-wing; it makes little difference—that censor fully human cognition on the grounds that it is socially disruptive (RedState.com comes to mind as one censorship-loving example).

    Another class of ready-for-robot websites are matchmaking sites … any nation, religion, or culture will do. It is somehow wonderfully warming and comforting, and yet hilariously discomfiting too, to contemplate the near-universality of human romantic dreams across nations, cultures, religions, and centuries.

    Could a robotic suitor persuade you to give up *your* phone number? In a bar, most likely not (hmmm … except at MIT or Caltech? :) ). But on-line, heck yes!

  46. Mike Says:

    “So how many more “increments of informality,” comparable in size to the increment between chess and Jeopardy, do we still need to go before we’re talking about the Turing Test? 100? 1,000?”

    I’d say no more than two or maybe three more of the same proportion — if that’s what you meant by comparable in size. Of course, that’s a pure guess. ;)

  47. Bram Cohen Says:

    If you answer that those are reasonable accommodations given the circumstances, then I would say: exactly! But giving the humans a fighting chance with the buzzer would have been a reasonable accommodation to the circumstances as well.

    For better or for worse, being good with the buzzer is central to Jeopardy’s essence. All champions have talked about how important it is to the game play. The other ‘reasonable accommodations’ we’re talking about don’t change the essence of the game play, but messing with the buzzer most definitely would.

  48. mKatkov Says:

    @Scott #34. So how many more “increments of informality,”

    Humans seem to be far better at unsupervised learning than existing algorithms. That is probably part of what enables some humans to make progress in science.

  49. Name Says:

    One interesting candidate for the ‘next grand challenge’ is a game where one has to gain information by questioning. For example, one starts with ‘A man traveling in a desert finds two dead people. He never saw them before, but he immediately knows their names. Who were they?’ Then you can get boolean answers to your questions until you find the answer (hint: they have no belly buttons).

    But a funnier one would be hockey played against human players, especially if speed, force and anatomy were kept at human dimensions.

  50. Daniel Reeves Says:

    To clarify my “it’s a bit lame that Watson knows the historical probabilities”: I realize that the humans know those probabilities too. My point is that Watson has the edge in that aspect of the game. And I think even ML nerds will have to agree it’s an uninteresting aspect of the game. Anyway, it’s minor compared to the buzzer aspect but it detracts slightly from the coolness of Watson’s victory.

    Anyway, the real conclusion here is that in terms of immediate recall of trivia, Watson is somewhere in between normal humans (it would’ve kicked my ass, buzzer or not) and grandmasters. Which is awesome.

  51. rrtucci Says:

    Someone on NPR suggested a terrific ‘next grand challenge’ for IBM: Use Watson technology to replace human lawyers.

  52. Raoul Ohio Says:

    The Tuesday edition of “el Reg” has a lot of tech info on Watson: iron, algorithms, and code; at

    http://www.theregister.co.uk/2011/02/21/ibm_watson_qa_system/

    The overall structure is in Prolog, with algorithms implemented in C and C++.

  53. Raoul Ohio Says:

    More Watson info. IBM is partnering with Nuance (speech recognition and Clinical Language Understanding) to do medical diagnosis. EWeek report at:

    http://www.eweek.com/c/a/IT-Infrastructure/IBMs-Watson-Jeopardy-Win-Just-the-Beginning-214309/?kc=EWKNLITA02222011STR3

  54. Elfi Says:

    IBM Watson – How to build your own “Watson Jr.” in your basement

    https://www.ibm.com/developerworks/mydeveloperworks/blogs/InsideSystemStorage/entry/ibm_watson_how_to_build_your_own_watson_jr_in_your_basement7?lang=en

  55. Arul Says:

    What do you think should be the next milestone against which we should measure our progress?

  56. Raoul Ohio Says:

    If publicity stunts are considered milestones, how about a computer competing with Lady Gaga for outrageous costumes?

  57. Philip White Says:

    Alternatively, IBM could design an algorithm that performs the functions of a journalist…and make it capable of interviewing Sarah Palin in a way that makes her look intelligent. They could call it “Deep Not Katie Couric.”

  58. Raoul Ohio Says:

    Here is an 18 slide slideshow about Watson:

    http://www.eweek.com/c/a/IT-Infrastructure/IBMs-Watson-16-Things-You-May-Not-Know-About-Its-Hardware-Muscle-491431/?kc=EWKNLITA03012011STR1

    This is from eWeek, which enjoys creating slideshow presentations of tech business data. The fun part here is the photos of the hardware; sleek, black, stylish, as if IBM is taking a page from Apple’s marketing style. TBD if this means they are overpriced (smiley)!

  59. Raoul Ohio Says:

    Help me out on an extension to “Big Omicron, Theta, Omega” notation.

    For classroom use I want a symbol for “average case complexity”. For example, it would be nice to be able to write something like “Quick sort is not O(n lg n) but is A(n lg n)”.

    Q1: Is there a standard letter used for this?

    Letter A is the obvious guess (in English, anyway), but I like to avoid disrespecting tradition, if there is any.

    Q2: Is it the case that “average case complexity” is too subtle a notion for this notation to be a good idea?
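
    For what it’s worth, the average/worst distinction behind the proposed notation is easy to demonstrate empirically. Here is a minimal Python sketch (the instrumentation and constants are mine, purely for illustration):

```python
import random

def quicksort_comparisons(arr):
    """Count element comparisons made by a plain quicksort (first element as pivot)."""
    if len(arr) <= 1:
        return 0
    pivot, rest = arr[0], arr[1:]
    less = [x for x in rest if x < pivot]
    geq = [x for x in rest if x >= pivot]
    return len(rest) + quicksort_comparisons(less) + quicksort_comparisons(geq)

n = 500

# Average case: over random inputs the comparison count grows like 2n ln n,
# i.e. Theta(n lg n).
trials = [quicksort_comparisons(random.sample(range(10**6), n)) for _ in range(20)]
avg = sum(trials) / len(trials)

# Worst case (already-sorted input with this pivot rule): n(n-1)/2 comparisons,
# i.e. Theta(n^2) -- so quicksort would be "A(n lg n)" but not O(n lg n)
# in the proposed classroom notation.
worst = quicksort_comparisons(list(range(n)))
```

    The gap (a few thousand comparisons on average versus 124,750 in the worst case for n = 500) is a concrete classroom motivation for distinguishing the two, whatever symbol ends up being used.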

  60. Kaleberg Says:

    rrtucci: Actually, they are using computers for analyzing legal documents during discovery. There was just an article on the New York Times about it: http://www.nytimes.com/2011/03/05/science/05legal.html?_r=1&hpw

    Bored: Go has about 10^40 times more possible games than chess which is why we don’t have Go algorithms as good as our chess algorithms. Checkers, which is much simpler than chess or Go, has actually been completely solved using a monster tree pruning database.

    —-

    Given some tangents in this discussion I should also take a moment to be pedantic. Whenever someone says “the bacteria evolved resistance”, someone always pops out of the woodwork to state that bacteria don’t evolve resistance, in a population of bacteria those with mutations that enable them to resist survive and reproduce resulting in a new population blah blah blah …..

    Well, someone needs to do this for computers. For example, computers cannot add numbers. They can only manipulate voltages, currents, charge accumulations and state changes in such a manner as we can interpret the results of the manipulation as addition. You can extend this by induction to argue that computers cannot play Jeopardy, drive cars, control washing machines and so on, that is, if your brain wasn’t just manipulating chemical and electric states blah blah blah …

  61. Nick M Says:

    I’ll post here my NYT post (#115) so rip me apart:

    Technically, in computational complexity terms, the human mind somehow functions very well in NP-space. Nondeterministically, in the realm without algorithms. It’s the old induction versus deduction thing. Watson still needs deterministic algorithms. Even quantum computers will need algorithms, assuming qcomputers ever materialize. Watson’s algorithms are extraordinarily sophisticated algorithms, but they’re still algorithms.

    Holmes, now that guy was an NP-space genius. Watson could only match him if P=NP.

  62. Jay Says:

    “It’s the old induction versus deduction thing. Watson still needs deterministic algorithms.”

    How do we know this? Couldn’t the brain simply be running deductive algorithms that appear to be inductive?

    When Armstrong walked on the Moon the world was transfixed. Everyone knew the Moon. Everyone knew it was far. Except for the nut cases it was (and continues to be) a defining moment for technology. (Well, they can put a man on the Moon but they can’t…!)

    I see DB playing chess the same way. Jeopardy is not universal enough. And 10 years ago, the Jeopardy playing computer might have been amazing. Today, not nearly as amazing given that Google answers my queries before I’ve finished typing them (and fixes my spelling errors to boot!) How much less interesting would a Wheel of Fortune playing computer have been?

    From a pure computing standpoint, I don’t know if there are many (or even any) universally compelling feats (to Average Joe) to accomplish. Personally, I’d like to see a computer compete in the big Texas Hold’em poker tournament.

    That’ll do until they get those soccer (pardon, football) playing robots going.

  63. Nick M Says:

    Just show us the algorithms. That’ll end the argument, for sure.

    Back when Dan Dennett published “Darwin’s Dangerous Idea” hyping essentially the same idea (It’s All Algorithms!) several prominent biologists (including Lewontin for evolutionary, Allen Orr for molecular) suggested he do that. They’re still waiting. And Dennett’s not getting any younger.

    The algorithm that’d really be cool, of course, is the one that would allow step-by-step simulation of protein folding. Then we can truly say we know how to create life. That sees poker.

  64. Scott Says:

    Nick M.: In my opinion, Lewontin and Allen Orr are two of the worst confuseniks to splatter their willful misunderstandings and forehead-banging sophistry across the pages of the New York Review of Books. No one claims to be able to write down the algorithmic content of the brain … but on the other hand, if you believe that the brain is not algorithmic (i.e., has functions that can’t be simulated by a Turing machine), then the burden is on you to explain how that’s possible! I.e., what is it about the physics or chemistry of the brain that allows it to violate the Church-Turing Thesis? Just that it’s made of meat instead of silicon?

    Penrose hasn’t answered that question, but he’s much better than the run-of-the-mill AI confusenik in that at least he understands he needs to.

    Incidentally, your previous comment assumed something for which the evidence is sorely lacking: namely, that human beings do “function very well in NP-space.” Can you factor integers in polynomial time in your head? I can’t! :-)

    Technically, in computational complexity terms, the human mind somehow functions very well in NP-space. Nondeterministically, in the realm without algorithms … Holmes, now that guy was an NP-space genius. Watson could only match him if P=NP.

    You realize, don’t you, that Sherlock Holmes is a fictional character? ;-)

    For whatever it’s worth, I’ve met some of the greatest mathematicians and physicists on earth, and I believe them to be BPP machines like the rest of us. BPP might not contain NP, but it’s still an amazingly powerful class!

    (For many humans, the relevant question is not whether they exceed BPP, but whether they can do all of LOGSPACE or even NC0[2].)

  65. Nick M Says:

    And it’s Tom Watson. Holmes was his chauffeur. Also “NP-space” is kind of ad hoc, don’t you think?

    I’ll suggest that reality isn’t a Turing Machine, with or without an oracle. I’ll suggest that the human brain isn’t a Turing Machine, with or etc. I’ll go further and suggest that we construct the world from binary building blocks because that’s the best our minds can do, not because it represents any fundamental property of reality (including that of our brains). I’ll follow Anton Zeilinger and say that incompressible randomness is where we hit the wall and there’s no way around that wall and it’s Nature telling us we’ve hit the wall. I’m not obliged to prove any of that. Maybe you can disprove it.

  66. Nick M Says:

    An additional point re: meat-silicon. (This also relates to protein folding.) Timing is critical to the processes (kinetic, informational) of biology. Computation is time-independent in the sense that a program can be run at any speed and produce the same output. The substrate does matter.

  67. Scott Says:

    Nick:

    1. Obviously there are differences between meat and silicon; the question that Lewontin, Stanley Fish, and the other meat-chauvinists have never answered is why those differences are relevant. Who cares that a program can be run at any speed and produce the same output? What is it about only being able to run at a single speed that attracts the magic fairy-dust of consciousness?

    I’m not saying there’s no answer to this—I’m just saying that, if someone doesn’t recognize the need for an answer, then I can barely even continue the conversation.

    Interestingly, there seems to be a huge disconnect on this issue, with almost all mathematicians, computer scientists, and physicists on one side, and almost all biologists, philosophers, and literary intellectuals (with a few exceptions like Dawkins and Dennett) on the other side. And the biologists, philosophers, and literary intellectuals are just flat-out wrong. :-)

    2. None of the ways you’ve suggested for Nature to be “non-algorithmic” can actually do the work you intend for them. Firstly, “incompressible randomness” just takes you from P to BPP—i.e., from deterministic algorithms to probabilistic ones (which we believe can be derandomized anyway).

    That folding proteins can solve NP-complete problems in polynomial time is a debunked myth—see my survey article for more. Basically, evolution chooses the easy instances—if you engineered a protein whose minimum-energy configuration encoded a proof of the Riemann Hypothesis, then there’s no reason whatsoever to think Nature would find that configuration! (Indeed we even see suboptimally-folded proteins in the wild, such as prions.)

    More to the point: even if biology could somehow solve NP-complete problems in polynomial time, so what? Such a discovery would be amazing, but it wouldn’t imply that biological systems were in any way “non-algorithmic”—it would just imply that biology implemented algorithms that took our algorithms an exponentially long time to simulate! Now, I care about the polynomial vs. exponential dichotomy as much as anyone, but is that really the ground on which you want to plant the flag of Consciousness? :-)

  68. Nick M Says:

    Consciousness shmonciousness. I’m not a Penrosian. I AM an Informationalist, in the sense that Zeilinger, Baeyer, Kåhre and others of the ilk are. I think of information as physical, even if we can’t define exactly how that might be. (Kåhre has an idea: the Second Law is a special case of his Law of Diminishing Information. Well, maybe.)

    I’ll cop to possibly presenting a too-narrow definition of algorithms. But then you’ll need to carry the concept beyond Turing computation. You’ll probably need to put aside the concept of computation as such and substitute (per Mark Bishop) “communication between agents.” This somewhat fuzzes the question of what’s an algorithm. And I don’t think this represents anything along the lines of what Dennett had in mind. I doubt he’d heard of the Stochastic Diffusion Search when he wrote “Darwin’s Dangerous Idea”. Although I could be wrong.

    You’re doubtless familiar with SDSs and the paradigmatic Restaurant Game. In a way the RG feels like an expansion and formalization of probabilistic induction, which is really what Sherlock Holmes was into. It’s also rather NP in the same way that putting together a jigsaw puzzle is. It’s also connectivist, which brings it firmly into the domain of biology. (It’s also forthrightly related to Swarm Intelligence.) The point is, it’s not algorithmic computation in the TM sense. (Neither were automatic gun-turrets in WW2 bombers; those were the product of applied cybernetics.) So we have to decide: in what sense is a formula for interactive goal-attainment (you and I and all organisms are the product of precisely that) in fact an algorithm as we generally talk these days about algorithms? Also what, if anything, does it say about Watson-style AI?

  69. Scott Says:

    It’s also connectivist, which brings it firmly into the domain of biology. (It’s also forthrightly related to Swarm Intelligence.) The point is, it’s not algorithmic computation in the TM sense. (Neither were automatic gun-turrets in WW2 bombers; those were the product of applied cybernetics.)

    Ah, here we come to heart of the matter. All of the things you mentioned—connectivist models, automatic gun-turrets, cybernetics—are absolutely encompassed by algorithmic computation in the TM sense. If you think they aren’t, then you haven’t understood the point Turing was making 75 years ago. (Of course, you’re far from alone in not understanding it!)

    An algorithm doesn’t have to be something whose properties we can prove. It doesn’t have to work all the time. It doesn’t have to be “serial” or “deterministic” or “logic-based.” It can be inspired by evolution or any other natural process. Essentially any law-governed process you can possibly imagine can be modeled algorithmically—in other words, simulated by a Turing machine.

    In fact, short of doing things like solving the halting problem, essentially the only way the brain could fail to be simulable by a Turing machine would be if it wasn’t governed by physical laws at all.

    If rather than sophistically evading this point, the AI critics could show me that they understood it, I’d be much more interested in talking to them.

  70. John Sidles Says:

    Scott, here’s a class of question that (perhaps) the AI critics *can* pose with some rigor. Let P’ be the set of languages in P that are recognized by at least one Turing machine whose runtime exponent is decidable. Suppose further that the complementary set P\P’ includes languages that can credibly pass Turing’s test (because who can say, one way or the other?)

    Then is there any hope of understanding the Turing machines that recognize these languages, in the sense of “understanding” that roadmaps for strong AI implicitly envision? Specifically, in what sense can we credibly claim to “understand” AI algorithms whose runtime exponent we cannot predict?

    More generally, the following seem (to me) to be interesting and well-posed questions, of interest both to complexity theorists and engineers: “Is the set of languages P\P’ non-empty?” and “Can we exhibit concrete examples?”

    Hmmm … perhaps these questions are suitable for the TCS StackExchange? What do Shtetl Optimized readers think?

    It’s widely acknowledged that AI critics don’t have too many answers … here the point I am making is that (maybe) AI critics *can* inspire some pretty good questions! :)

  71. Scott Says:

    John: If we ever had AI that passed the Turing test, I don’t believe for a minute that we would understand how it worked. (Indeed, it’s already horrendously complicated to understand why, e.g., Deep Blue or Watson make a particular decision.) The idea that an AI would have to be humanly-understandable is a tiresome straw-man used by AI critics (I think some of the AI pioneers of the 60s shared that opinion, but their faction has lost by now!).

    For this reason, I don’t see the distinction between P and P’ as particularly relevant to the AI debate. It is, however, an interesting topic in its own right—I believe Hartmanis wrote a whole monograph about Turing machines with provable runtimes back in the 70s.

  72. Nick M Says:

    ” … essentially the only way the brain could fail to be simulable by a Turing machine would be if it wasn’t governed by physical laws at all.”

    Except for this (courtesy of Howard Pattee) which is the fundamental issue AI (and information theory) confronts:

    “The problem also poses an apparent paradox: All signs, symbols, and codes, all languages including formal mathematics are embodied as material physical structures and therefore must obey all the inexorable laws of physics. At the same time, the symbol vehicles like the bases in DNA, voltages representing bits in a computer, the text on this page, and the neuron firings in the brain do not appear to be limited by, or clearly related to, the very laws they must obey. Even the mathematical symbols that express these inexorable physical laws seem to be entirely free of these same laws.”

    It’s your blog. I’m outta here.

  73. Scott Says:

    Nick: If you could mount a coherent argument that I could sink my teeth into, I’d be happy for you to stay!

    But a drive-by quote-pasting isn’t going to cut it. Read one way, what Pattee writes is obvious and unobjectionable (DNA, text, computers, brain, etc. obey “higher-level organizational laws” that can be mostly separated from the underlying laws of physics); read another way, it’s just as transparently false (nothing in the physical world is “entirely free” of physical laws). In neither case do I see anything even remotely resembling an argument against AI: a computer can be governed by “higher-level laws” just as surely as a brain can, and Pattee even says that!

  74. RPadua Says:

    This is off the stated subject, but things are all over the place here anyway and it suddenly seems relevant. David Tong, in a discussion relating to his paper submitted to the current FQXi essay contest (“Physics and the Integers”), states this:

    No one knows how to write down a discrete version of the laws of physics.

    No one knows how to simulate the laws of physics on a computer.

    Where by “laws of physics” I mean those that have already been established, in particular the Standard Model. I could well imagine that the first problem above is solved but not the second (there is still something called the Fermi sign problem to overcome). I could also imagine that both problems are solved. But, at the present time, both of the above statements are true.

    http://www.fqxi.org/community/forum/topic/897

    I’ve been wondering about this.

  75. Nick M Says:

    Tong should’ve said “fermion” sign problem but maybe that’s trivial. Anyway it’s officially hard.

  76. Hopefully Anonymous Says:

    “Youngman, Rickles, and Dangerfield”

    -you have great taste in comedy, but I’m surprised you chose such old comedians. Are they your age cohort? I thought you were much younger.

  77. Hopefully Anonymous Says:

    An AI comedy improv participant would be both a major achievement AND would be a less threatening achievement (I think FAI is AI that says “yes and”).

  78. John Sidles Says:

    “Youngman, Rickles, and Dangerfield” … gee, I wish I’d mentioned Leslie Nielsen and Lily Tomlin too! :)

    Comedy is one of those disciplines whose practitioners tend to improve with age … I have reason to be hopeful that quantum systems engineering will prove to be similar.

    As for funny computers … it seems to me that kindly comedy is the most sophisticated of all forms of cognition. In particular, we will know that strong AI has arrived, for sure, when our computers begin to chuckle every time they speak with us.

  79. John Sidles Says:

    Scott says: Hartmanis wrote a whole monograph about Turing machines with provable runtimes back in the 70s.

    Scott, I am pretty sure that your post is referring to Hartmanis’ monograph Feasible computations and provable complexity properties (1978); this is a terrific monograph (IMHO). Moreover, Hartmanis followed-up with a similarly wonderful roadmap article Observations about the development of theoretical computer science (1981).

    Certainly you and I entirely agree that Hartmanis deserved both his Turing Award *and* his honored place on your “Timeline of Computer Science.” :)

    My roadmap database has long included several juicy TCS-related quotes from Hartmanis’ 1981 article, but to date, there has been no suitable occasion to deploy Hartmanis’ quotes in the blogosphere … oh well, as Dick Lipton says.

    It is both instructive and enjoyable to read Hartmanis’ early work side-by-side with (for example) a modern textbook like Arora and Barak, for reasons discussed on Gödel’s Lost Letter and P=NP. In particular, a recent highly-rated result on TCS StackExchange, namely Emanuele Viola’s proof of the undecidability of runtime estimation in P, is a special case (AFAICT) of a theorem (it is Theorem 7.4) that Hartmanis proves in his monograph.

    Obviously, it’s no surprise that a non-expert like me was unaware of Hartmanis’ early work … what is surprising (to me) is that no-one on TCS StackExchange recognized it … including the 26 recommenders of Viola’s Theorem.

    One lesson (perhaps?) is that it pays to read the roadmaps of Turing Award winners. And it’s true also, that Hartmanis’ 1981 roadmap for TCS makes very congenial and inspirational reading, for systems engineers like me. Good! :)

  80. John Sidles Says:

    Just in case anyone wonders what motivates a medical researcher like me to post on diverse topics like: (1) IBM’s Watson, (2) obscure articles by Jules Hartmanis, and (3) Scott and Alex’s seminal work on linear quantum optics … well … here’s why.

    The topic this week on Dick Lipton and Ken Regan’s weblog is Happy St. Patrick’s Day—Again Again. This topic has provided a long-awaited venue for an end-to-end complexity theoretic / quantum informatic / medical roadmap.

    The post on Gödel’s Lost Letter and P=NP is the penultimate step toward migrating these long-in-preparation questions onto TCS StackExchange and MathOverflow, and I am hopeful that this migration can take place with a maximum of creative participation and enjoyment for everyone.

    That’s why comments and criticisms from Shtetl Optimized readers too would be sincerely welcome and greatly appreciated.

  81. Hopefully Anonymous Says:

    ” If we ever had AI that passed the Turing test, I don’t believe for a minute that we would understand how it worked”

    There have to be people smarter than me who have worked with some rigor along the lines of the notion that passing a Turing test is dependent not just on performance of the AI, but also on the state of testing technology at that time.

    If anyone could direct me to analysis in that direction, I’d appreciate it.

  82. quintopia Says:

    I know I’m a month or two late to this discussion…but these questions have to be answered.

    1. One of the articles I read around the time of the competitions said that Watson buzzed in with a mechanical thumb–it had to actuate a motor in order to press the button. Was this article mistaken/did I misread?

    2. I don’t think IBM would have had an easy time collecting the relevant primary sources for their databases back when the Internet was still so immature. I want to point out that one of Watson’s developers mentioned in a lecture that long-standing formalized hierarchical knowledge bases were considered for inclusion in Watson but were rejected: parallel search of an immense body of unorganized knowledge could be done sufficiently quickly, while making it possible to include far more information.

    3. I think physical compactness is an irrelevant question in a world where all one’s processing and storage can be done in the cloud. The better question to ask is “how long until cybernetics advances enough that every person within range of a wireless router becomes a walking jeopardy champion?” And a better question still: “What would such a world be like?” It’s unfathomable. My guess: less than 50 years.

    4. (The next great milestone for AI)
    I like the idea of the computer that can be given a (formalized) English description of a board game, time, and a number of good opponents, and become a reasonably good player. That is probably a good 10 year project.
    For 20 years, a computer that can watch a video of a person and report 1) what they are doing, 2) their current emotional state, 3) their underlying reasons for doing it and their probable internal planning process, and 4) a prediction of what they are going to do next. In other words, a computer with a theory of mind. What would a good publicity stunt be for this? I don’t know, but I’d love to see it. For 50 years, an android fully capable of the speed, flexibility, and energy of human motion that can play a sport requiring this sort of agility, even if it requires a nearby supercomputer and an array of cameras to control it. Robotics tradition says that sport should be soccer, but that’s not an individual sport and so doesn’t show off the abilities of the android well enough. Tennis seems like the right idea here, but I would be more impressed with racquetball, due to the increased number of possible trajectories and the added difficulty of having to avoid running into your opponent.
    And for 100 years? That’s easy. A working quantum computer.

Leave a Reply