“The Man Who Tried to Redeem the World with Logic”

No, I’m not talking about me!

Check out an amazing Nautilus article of that title by Amanda Gefter, a fine science writer of my acquaintance.  The article tells the story of Walter Pitts, who [spoiler alert] grew up on the mean streets of Prohibition-era Detroit, discovered Russell and Whitehead’s Principia Mathematica in the library at age 12 while hiding from bullies, corresponded with Russell about errors he’d found in the Principia, then ran away from home at age 15, co-invented neural networks with Warren McCulloch in 1943, became the protégé of Norbert Wiener at MIT, was disowned by Wiener because Wiener’s wife concocted a lie that Pitts and others who she hated had seduced Wiener’s daughter, and then became depressed and drank himself to death.  Interested yet?  It’s not often that I encounter a piece of nerd history that’s important and riveting and that had been totally unknown to me; this is one of the times.
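
(If you’ve never seen what McCulloch and Pitts actually proposed: their 1943 model neuron is just a binary threshold unit, which fires exactly when the weighted sum of its 0/1 inputs reaches a threshold. Here’s a minimal sketch in Python; the particular weights and examples are my own choices for illustration, not anything from the article.)

    # A McCulloch-Pitts neuron: a binary threshold unit that fires (outputs 1)
    # iff the weighted sum of its 0/1 inputs reaches the threshold. With
    # suitable weights and thresholds, such units compute AND, OR, and NOT,
    # which is what let McCulloch and Pitts connect brains to Boolean logic.

    def mp_neuron(inputs, weights, threshold):
        return int(sum(w * x for w, x in zip(weights, inputs)) >= threshold)

    AND = lambda x, y: mp_neuron([x, y], [1, 1], 2)  # fires only if both inputs fire
    OR  = lambda x, y: mp_neuron([x, y], [1, 1], 1)  # fires if either input fires
    NOT = lambda x:    mp_neuron([x], [-1], 0)       # an inhibitory input

    assert AND(1, 1) == 1 and AND(1, 0) == 0
    assert OR(0, 1) == 1 and OR(0, 0) == 0
    assert NOT(0) == 1 and NOT(1) == 0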

Update (Feb. 19): Also in Nautilus, you can check out a fun interview with me.

Update (Feb. 24): In loosely-related news, check out a riveting profile of Geoffrey Hinton (and more generally, of deep learning, a.k.a. re-branded neural networks) in the Chronicle of Higher Education.  I had the pleasure of meeting Hinton when he visited MIT a few months ago; he struck me as an extraordinary person.  Hat tip to commenter Chris W.

38 Responses to ““The Man Who Tried to Redeem the World with Logic””

  1. bloop Says:

    Wow

  2. wolfgang Says:

    This is the most uplifting story I’ve read in a long while, which then turns into the saddest tale …

    Did Wiener ever find out the truth?

  3. James Gallagher Says:

    Amanda Gefter is an engaging writer. I enjoyed her book The Age of Entanglement, where she creates some wonderful hypothetical scenarios, such as Bohm and Feynman discussing QM in a bar when they were both visiting the university in Rio de Janeiro (the book ends with a pretty amazing factual revelation re Schrödinger in Ireland).

    I did not know about Pitts’ work and this piece is compelling reading – what an incredible man he was.

  4. Raoul Ohio Says:

    Thanks for the pointer. Amazing story.

    The burned dissertation reminds me of Fermat’s scratch paper, where presumably he had what he thought was a proof of his Last Theorem. And Ramanujan’s chalkboard. There might have been ideas there that no one has ever recreated.

  5. asdf Says:

    The part about Wiener’s wife has unknown truth value. The Wiener biography “Dark Hero of the Information Age” said something similar and may be the source for the Nautilus article, but it was less than certain. I didn’t like that book very much and don’t remember it all that well. I think Wiener’s autobiography “I am a Mathematician” (which was weird but great) was vague on the issue.

  6. Michael Bacon Says:

    I re-posted this elsewhere. However, I agree with asdf@5: what’s the real story on this point? And regardless of the answer, the question remains of what impact it has on the tale overall. Still, an amazing story.

  7. Scott Says:

    James #3: You’ve confused Amanda Gefter with Louisa Gilder (author of The Age of Entanglement). I’ve talked to both of them, and they are not the same. 🙂

  8. Brian Rom Says:

    Another nit to pick, Scott: It’s ‘whom’. (You can run (write?), but you can’t hide from the Grammatical Curmudgeon!)

  9. William Hird Says:

    Wow, whiskey and ice cream, let’s take out the liver AND pancreas at the same time. Poor guy. Why didn’t Wiener check out his wife’s story before going ballistic? Isn’t that what great scientists are supposed to do, check their facts before they “publish”?

  10. Shubhendu Says:

    I picked up the book that makes this suggestion (about Wiener’s wife) after reading this excellent review by Freeman Dyson: http://www.nybooks.com/articles/archives/2005/jul/14/the-tragic-tale-of-a-genius/ I am sure there are copies available online. The book, too, is amazing!

  11. Job Says:

    Perhaps we are too bound by logic.

    I wonder if there are problems that would benefit from illogic, e.g. by giving up logical behavior or applying flawed logic, without resorting to randomization.

    It can’t be ruled out that the shortest path between two true statements is an illogical one.

    Imagine the performance of such a machine. 🙂

  12. Michele Says:

    Thank you for the pointer!

    “Dark Hero of the Information Age” is one of my favorite books. Every year, when I introduce autonomic computing to my students, I spend some time to summarize the incredible life of Norbert Wiener and his work on cybernetics.

    I will forward the article to my colleague who teaches A.I.

  13. Scott Says:

    Brian Rom #8:

      Another nit to pick, Scott: It’s ‘whom’.

    I’m not even sure which “who” is the offending one. But I have it on the authority of no less than Steven Pinker, whom is one of my intellectual heroes, and whom wrote The Sense of Style, that who/whom is one of many flavors of grammatical pedantry up with which I need not put.

  14. clayton Says:

    I found this article yesterday, coincidentally, and was so immensely impressed that I briefly considered getting a Nautilus subscription. I forgot about that impulse upon getting back to work but now the idea is in front of me again. Any further info on / familiarity with Nautilus? Have you seen other good writing from them?

  15. Diego Mesa Says:

    Another fantastic article by the Nautil.us team! This one was definitely one of my favorites. If you enjoyed this article, I can recommend the profile of John Wheeler [1], and Scott’s Ingenious Interview! [2]

    [1]: http://nautil.us/issue/9/time/haunted-by-his-brother-he-revolutionized-physics
    [2]: http://nautil.us/issue/21/information/ingenious-scott-aaronson

  16. luca turin Says:

    Great story, amazing guy. Thanks for pointing it out, Scott. Also had no idea that my old University College London prof Pat Wall had been there at the beginning of it all!

  17. Alex Says:

    Quoting from the article:

    “The results shook Pitts’ worldview to its core. Instead of the brain computing information digital neuron by digital neuron using the exacting implement of mathematical logic, messy, analog processes in the eye were doing at least part of the interpretive work.”

    I can understand why the level of sophistication of a frog’s eye might have been a surprise. However, I cannot understand the claim that this sort of higher-level signal processing in the eye equates to the existence of “messy” analogue processes, nor why it would have affected Pitts in the manner that the author of the article claims.

  18. James Gallagher Says:

    Scott #7

    oops, thanks for the correction.

  19. Sniffnoy Says:

    Oh, man! I wish I’d gotten paid that time my advisor had me go over a barely readable proof of the Collatz conjecture! (OK, I suppose the interview just says you have such people pay you, it doesn’t say that any of the money goes to the student… 🙂 )

  20. rrtucci Says:

    I had to look up the word teleology to understand your Nautilus interview. Wow-wee, now I know some philosophy. I shall endeavor to forget it as soon as possible.

  21. Scott Says:

    Sniffnoy #19: On the contrary, ALL the money goes to the student. (I don’t even handle the money; the student and the P≠NP prover arrange that themselves.)

  22. Serge Says:

    Very moving story about an almost-forgotten founder of computer science. He was even less fortunate than Grothendieck, who at least didn’t die prematurely as an alcoholic. Somehow he was at the mercy of his spiritual fathers. Had he been able to complete his work on probabilistic three-dimensional neural networks, we’d probably know more about how the brain works by now…

  23. Vadim Says:

    Great article & great interview of you, Scott. I’m really digging Nautilus.

  24. Job Says:

    Challenge: produce a deterministic, formally illogical algorithm for a non-trivial problem that gives the right answer with bounded error and has performance comparable to a deterministic logical algorithm.

    As building blocks, consider borrowing from this list of non-sequiturs (one is sufficient :)):
    http://en.wikipedia.org/wiki/Non_sequitur_(logic)

    Maybe that’s not so interesting – we can certainly craft problems for which non-sequiturs happen to work pretty well – but consider the hypothetical class Bounded-Error Illogical Polynomial Time. How does it relate to other classes?

    While it’s true that, unlike a randomized algorithm, a deterministic illogical algorithm will consistently fail or succeed on the same input, we can always just “shuffle” the input and retry.
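
    In code, the “shuffle the input and retry” idea might look something like this toy sketch (flawed_decider is a hypothetical stand-in for the deterministic illogical algorithm):

        # Amplification by deterministic "shuffling": run the flawed but
        # deterministic decider on several rotations of the input and take
        # a majority vote. No randomness is used, in keeping with comment
        # #11; this only helps if the decider is correct on most orderings
        # of any given input.

        def shuffle_and_retry(flawed_decider, items, trials=9):
            n = max(len(items), 1)
            votes = sum(flawed_decider(items[k % n:] + items[:k % n])
                        for k in range(trials))
            return votes > trials // 2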

  25. Scott Says:

    Alex #17: Yeah, I was also a bit skeptical about that part of the article.

  26. Raoul Ohio Says:

    Re #17 and #25:

    I can make a plausible guess about this. Perhaps he/they had an insight about the digital-logic aspects of the brain. Furthermore, this was early in the evolution of the concept. And it is very common to go “all in” on new ideas (compare the common observation that the recently converted are often the worst religious nuts). Thus, when the “messy analogue pre-conditioning” in the eye was found, it appeared to be at odds with the pure theory they had been developing, which made it a real bummer.

    From the Speed of Light Update desk: It is widely reported today that the speed of light has been demonstrated NOT to be a constant c, but to vary (for various reasons, including non-vacuum propagation) with an upper bound of c. Check it out online.
    Decades ago I read a related paper by Ernst Breitenberger arguing that relativity should be based on (what EB called) “the signal velocity” s, with the requirement that c ≤ s.
    I had not been aware that active “Speed of Light” work was ongoing. But when considering cosmology, it seems to me that there might be something we don’t get, so reexamining every step is important.

  27. anon Says:

    I didn’t know about this piece of history either. Thank you very much!
    K.

  28. Vijay D'Silva Says:

    The article notes that the logical calculus paper by McCulloch and Pitts inspired von Neumann. It was also noticed by Rabin and Scott, who cite it as an example of a graph-based representation of languages in their paper on finite automata.

  29. Thomas Flynn Says:

    It’s fascinating that von Neumann and the EDVAC architecture were so heavily influenced by McCulloch and Pitts’ calculus and neural networks.

    Personally I had the impression that the early history of computers involved people trying to make physical realizations of Turing machines. Was that at all true? If it was, it seems people changed course pretty quickly…

  30. Braithwaite Prendergast Says:

    Amanda Gefter demands my credulity when she writes that Russell and Whitehead’s book was available in a Detroit public library. Is it preposterous to also claim that a 12-year-old could understand it?

  31. Greg Kuperberg Says:

      Personally I had the impression that the early history of computers involved people trying to make physical realizations of Turing machines.

    That’s actually closer to the middle history of computers. Early microchip CPUs were arguably fairly similar to idealized RAM machines. A RAM machine is a lot like a Turing machine, except with an extra address tape that lets you teleport on the main tape. The address tape was implemented as the address register, and the main tape was implemented as the main memory. There were also a few data registers for arithmetic, but they were arguably only a moderate variation of the RAM machine idea.

    In fact a modern CPU still looks something like this, except with some amount of parallelism and a lot of caching.
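
    To make the register/tape analogy concrete, here is a toy sketch in Python of the RAM-machine idea (my own illustration of the skeleton, not any real instruction set):

        # Toy RAM machine: an accumulator, an address register, and main
        # memory. Setting the address register lets the next LOAD/STORE
        # "teleport" to any cell of main memory, rather than stepping one
        # cell at a time the way a Turing machine head does.

        def run(program, memory):
            acc, addr, pc = 0, 0, 0  # accumulator, address register, program counter
            while pc < len(program):
                op, arg = program[pc]
                if op == "SETADDR":
                    addr = arg              # point at a memory cell
                elif op == "LOAD":
                    acc = memory[addr]      # read the pointed-at cell
                elif op == "STORE":
                    memory[addr] = acc      # write the pointed-at cell
                elif op == "ADD":
                    acc += arg              # data-register-style arithmetic
                pc += 1
            return memory

        # Example: copy memory[0] into memory[5] in four instructions.
        mem = run([("SETADDR", 0), ("LOAD", None),
                   ("SETADDR", 5), ("STORE", None)],
                  [42, 0, 0, 0, 0, 0])
        assert mem[5] == 42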

  32. Raoul Ohio Says:

    BP#30,

    I think the point is that Pitts was NOT a typical 12-year-old.

    There are many accounts of very young, very smart people, including Gauss. As a child, Gauss actually did a much harder problem than the one in the usual story (summing 1 to 100) in about 1 second.

    Have you seen the picture of a roughly 8-year-old Terence Tao showing Paul Erdős some math?

  33. Chris W. Says:

    Loosely related: See The Believers, a fascinating new article in the Chronicle of Higher Education about the career of Geoffrey Hinton and his role in the recent applications of so-called deep learning being pursued at Google and elsewhere. The article emphasizes that much of this is due to the explosion in the availability of cheap graphics processors and massive parallelism.

  34. Chris W. Says:

    PS: See this exchange between a reader and the author (Paul Voosen) in the comments.

  35. Michele Says:

    RO #32:

    http://upload.wikimedia.org/wikipedia/commons/c/c4/Paul_Erdos_with_Terence_Tao.jpg

    Wow!

  36. aviti Says:

    I am happy this pointed me to a piece of nerd history I did not know about.

  37. Jim Cliborn Says:

    Good read on Hinton, thanks!

  38. Ben Standeven Says:

    @Job #24:

    It looks like your class is called HeurP on the Complexity Zoo.