Marvin Minsky

Yesterday brought the sad news that Marvin Minsky passed away at age 88.  I never met Minsky (I wish I had); I just had one email exchange with him back in 2002, about Stephen Wolfram’s book.  But Minsky was my academic great-grandfather (through Manuel Blum and Umesh Vazirani), and he influenced me in many other ways.  For example, in his and Papert’s 1969 book Perceptrons—notorious for “killing neural net research for a decade,” because of its mis- or over-interpreted theorems about the representational limitations of single-layer neural nets—the way Minsky and Papert proved those theorems was by translating questions about computation into questions about the existence or nonexistence of low-degree polynomials with various properties, and then answering the latter questions using MATH.  Their “polynomial method” is now a mainstay of quantum algorithms research (having been brought to the subject by Beals et al.), and in particular, has been a mainstay of my own career.  Hardly Minsky’s best-known contribution to human knowledge, but that even such a relatively minor part of his oeuvre could have legs half a century later is a testament to his impact.
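
For readers who haven’t seen it, here is the core statement behind the quantum version of the polynomial method (the standard Beals et al. observation, included purely as an illustration):

% If a quantum algorithm makes T queries to an input x in {0,1}^n, then its
% acceptance probability is a real multilinear polynomial in the bits of x
% of degree at most 2T:
\[
  \Pr[\text{the algorithm accepts } x] \;=\; p(x_1,\dots,x_n), \qquad \deg(p) \,\le\, 2T .
\]
% Consequently, computing f exactly requires at least \deg(f)/2 queries, and
% computing f with bounded error requires at least \widetilde{\deg}(f)/2 queries,
% where \widetilde{\deg}(f) is the minimum degree of any real polynomial that
% approximates f to within 1/3 at every Boolean input. Lower-bounding those
% degrees is then a problem about polynomials rather than about algorithms.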

I’m sure readers will have other thoughts to share about Minsky, so please do so in the comments section.  Personal reminiscences are especially welcome.

33 Responses to “Marvin Minsky”

  1. Gene Chase Says:

    As an MIT sophomore (“wise fool”) math major I started to take Minsky’s course in AI, cross-listed in math and electrical engineering. It was before there was a computer science major, and before there were AI textbooks, so we used his grad students’ dissertations, later to be published as Semantic Information Processing. (Names you would know like Patrick Winston, Bertram Raphael, Danny Bobrow, and John McCarthy.) There were “no prerequisites,” but I was glad to have had 18.03 Calculus III. The first assignment was “hill climbing.” I could do it. After that I was in over my head. It’s the only course I’ve ever dropped. Since then, I’ve taught AI a dozen times. Minsky was patient, but he did not suffer fools gladly — even “wise fools.”
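
    In case any readers have never seen it: generic hill climbing amounts to something like the sketch below (a minimal illustration of the idea in modern Python, certainly not the actual assignment from that course).

    import random

    def hill_climb(score, neighbors, start, max_steps=1000):
        # Generic hill climbing: repeatedly move to the best-scoring neighbor
        # until no neighbor improves on the current point (a local maximum).
        current = start
        for _ in range(max_steps):
            best = max(neighbors(current), key=score, default=None)
            if best is None or score(best) <= score(current):
                return current  # local maximum reached
            current = best
        return current

    # Toy usage: maximize f(x) = -(x - 7)**2 over the integers.
    f = lambda x: -(x - 7) ** 2
    step = lambda x: [x - 1, x + 1]
    print(hill_climb(f, step, start=random.randint(-100, 100)))  # prints 7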

  2. Mitchell Porter Says:

    At EXTRO-3 in 1997, Eric Drexler said he had a nightmare in which he spent centuries trying to explain to the AIs of the future why he had let his thesis supervisor die irreversibly, and then he gave Minsky a cryonics suspension contract. Minsky accepted bashfully, saying that perhaps he had been put off by the paperwork.

  3. mjgeddes Says:

    It’s interesting that Minsky was sceptical of Bayesianism and didn’t believe that statistical methods alone could get to AGI.

    Minsky emphasized the importance of a variety of techniques, rather than trying to rely on one grand idea:
    “You don’t understand anything until you learn it more than one way”.

    Here’s an interesting paper where he talks about different reasoning methods and suggests that different methods apply at different levels of abstraction:

    web.media.mit.edu/~push/StThomas-AIMag.pdf

    That’s my view of things also. It is just plain foolish to think that probability theory will keep applying at every level of epistemological abstraction; expecting ‘probability’ to work in epistemological domains far outside those where it has worked so far is as silly as expecting Newtonian physics to keep working inside black holes.

  4. fred Says:

    just when there are huge breakthroughs happening in practical AI
    http://spectrum.ieee.org/tech-talk/computing/software/monster-machine-defeats-prominent-pro-player?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed:+IeeeSpectrum+%28IEEE+Spectrum%29

  5. Michael Tsai - Blog - Marvin Minsky, RIP Says:

    […] Update (2016-01-30): Scott Aaronson: […]

  6. eli Says:

    Is it possible to start on a new project and do breakthrough work after 40 if you are a male scientist?

  7. jonas Says:

    Scott: It’s probably not connected directly to Minsky, but can you please comment on the recent advances in computers playing Go?

  8. Scott Says:

    jonas #7: It’s an amazing achievement!

    My Go knowledge is surely insufficient to get much out of them, but does anyone have a link to the actual games against the European champion that the machine won?

  9. Scott Says:

    eli #6: Yes, it’s possible (also if you’re a female scientist), though surely not easy. Exercise for the reader to give examples.

  10. fred Says:

    Scott #8

  11. fred Says:

    Scott,
    what is your opinion on the issue of AI one day becoming too advanced for our own good?

    People like Stephen Hawking and Elon Musk seem pretty concerned about it ( http://observer.com/2015/08/stephen-hawking-elon-musk-and-bill-gates-warn-about-artificial-intelligence/ )

  12. Bill Kaminsky Says:

    Eli #6 and Scott #9:

    As someone who recently turned 40, I too would like to know the answer to whether “old dogs…” (i.e., scientists past 40) “…can learn new tricks” (i.e., embark on new projects and make major contributions).

    As a theoretician myself, I’m happy to have merely a plausible intuition upon which to build and then some sort of existence proof, so here goes:

    Plausible Intuition: I imagine if one (A) has a solid foundation of knowledge in the field, (B) has developed a good nose for new fields with low-hanging fruit, and then (C) is lucky enough to live in a time when new academic fields and/or practical applications with lots of low-hanging fruit arise in her or his 40’s or 50’s, then the answer is “yes, old scientific dogs can most certainly learn new tricks.”

    Existence Proof: Max Born. For his PhD, he did work on good ol’ boring 19th Century physics — namely continuum mechanics. But it wasn’t ho-hum work. Born did it with such mathematical rigor and craft that he made good friends who’d greatly help his career, namely mathematicians of the epoch-making caliber of Hilbert, Klein, and Minkowski. Through Minkowski, he quickly developed a nose for the new and was an early worker in special relativity. Then Born was indeed in his early 40s when he worked with Werner Heisenberg and brought his straight-from-the-source Hilbert space skills to the creation of matrix mechanics. And then, of course, once quantum mechanics was formulated, there were tons of low-hanging fruit, and Born did excellent work in its applications to solid state physics, as well as doing magisterial work in the perhaps-boring-but-ever-more-technologically-useful 19th Century physics of his schoolboy youth (e.g., his famous _Principles of Optics_ textbook/reference with Wolf).

  13. Scott Says:

    fred #11: See my post The Singularity Is Far. Yes, if civilization lasts long enough, unfriendly superintelligent AI taking over might very well eventually become the biggest problem we have to face. But there are a lot of other problems we’ll need to solve first, even just to survive long enough to have such a problem.

  14. Sniffnoy Says:

    Here are the full game records, with also some links to commentary: https://www.reddit.com/r/baduk/comments/42yt5i/game_records_of_alphago_vs_fan_hui/

    (I don’t play Go myself so don’t ask me about this.)

  15. John Sidles Says:

    Hmmmm … here are four transformational STEM advances by workers over 40:

    Thermonuclear weapons  Stanislaw Ulam and Edward Teller, invented circa 1951, at ages 42 and 43 respectively.

    For details, see Randall Munroe’s wonderfully clear Thing Explainer essay “Machine For Burning Cities.”

    Magnetic resonance imaging  Raymond Damadian in 1972, Paul Lauterbur in 1973, Richard Ernst in 1975, Peter Mansfield in 1978, at ages 36, 44, 42, and 45.

    For details, see Randall Munroe’s Thing Explainer essay “Tiny Bags of Water You’re Made Of.”

    Single-molecule biomicroscopy  Stefan Hell in 2000, Eric Betzig in 2006, and William E. Moerner in 2006, at ages 38, 46, and 53.

    For details, see Randall Munroe’s Thing Explainer essays “Picture Taker” and “Colors of Light.”

    Scanning tunneling microscopy  Gerd Binnig and Heinrich Rohrer, in 1981, at ages 34 and 48.

    Hmmmm, wouldn’t scanning microscopes have made a TERRIFIC Thing Explainer essay? 🙂

    —-

    Conclusion  Transformational advances in systems engineering commonly have been simple enough to explain in the plainest of words, yet these “obvious” advances commonly have required decades of experience to conceive in integrated form; perhaps this explains the preponderance of STEM inventors who have been over the age of 40?

  16. ks Says:

    Yitang Zhang was over 50 when he made his breakthrough on the prime pairs problem (bounded gaps between primes).

  17. John Sidles Says:

    Hmmm … on the mathematical end, no one has mentioned two very prominent recent claims by over-40 mathematicians:

    abc conjecture  Shinichi Mochizuki, in 2012, at age 43.

    Graph isomorphism in quasipolynomial time  László Babai, in 2015, at age 65.

    Peer review is still pending in both cases.

    Conclusion  Just admiration for sustained commitment.

  18. Raoul Ohio Says:

    I believe Weierstrass did essentially all his famous work post 40 or so. Before that he was a schoolteacher. Legend has it that he later made Ph.D. students stand at the board and rewrite proofs until they had them exactly right.

    I think his approach (integral formulas + power series) is what transformed complex analysis into an easy standard tool that underlies almost all of advanced mathematics.
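
    For readers who have not seen them, the “integral formulas + power series” in question are the Cauchy integral formula and the local power-series expansion of an analytic function (standard statements, quoted here only for orientation):

    \[
      f(a) \;=\; \frac{1}{2\pi i} \oint_{\gamma} \frac{f(z)}{z - a}\, dz ,
      \qquad
      f(z) \;=\; \sum_{n=0}^{\infty} a_n\, (z - z_0)^n ,
    \]
    % the first for f holomorphic inside and on a simple closed contour \gamma
    % enclosing a, the second valid in a disk around z_0.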

  19. John Sidles Says:

    Another notable triumph for over-40 STEM workers in 2015 was the release by Paul Horowitz and Winfield Hill of the much-expanded and eagerly awaited third edition of The Art of Electronics. If quantum supremacy is ever demonstrated experimentally, it almost certainly will be at a lab that has one or more tattered copies of Horowitz and Hill on its bookshelf.

    Pretty good for two guys of age 73 and (can this be right?) 89.

    The Art of Electronics is a paradigmatic example of a STEM book that, by virtue of its immense scope and integral clarity, can itself be appreciated as a deliberate work of (cognitive) systems engineering. Perhaps in crafting such books, the more decades of experience, the better!

    Another such artful book (as it seems to me) is Alexander Grothendieck’s and Jean Dieudonné’s Éléments de Géométrie Algébrique (universally known as EGA). This work was published between 1960 and 1967, when Grothendieck was aged 32-39, and Dieudonné was aged 54-61.

    It’s easy to undervalue Dieudonné’s contribution to Grothendieck’s seminal work; survey articles like Dieudonné’s “The Historical Development of Algebraic Geometry” (1972) expose the broad-and-solid foundations of Dieudonné’s cognitive perspective.

    Conclusion  Perhaps young Shinichi Mochizuki’s great misfortune is that he has not found (yet) his mature Jean Dieudonné, and in consequence Mochizuki’s young ideas do not rest (yet) upon mature foundations of explanation and understanding.

  20. fred Says:

    Scott #13

    Right, it’s possible that humanity will be wiped out before AI is powerful enough to be a threat.
    But, on the other hand, we have to recognize that the pace of AI improvement is exponential and we’re finally seeing undeniable breakthroughs.
    Those breakthroughs are actually motivated by immediate practical applications ($$$) and aren’t about building AI as a theoretical/academic toy (beating humans at Go is meant to make the point that something new is going on here).
    http://preview.tinyurl.com/jskm7lx

    And this sort of “applied” AI will probably become an essential tool to help us solve all those other issues that threaten us, driving more and more investment, which itself would accelerate AI development even more.

  21. John Sidles Says:

    Fred foresees (#20) that  “‘Applied’ AI will probably become an essential tool to help us solve all those other issues that threaten us, driving more and more investment, which itself would accelerate AI development even more.”

    Mathematician (and Fields Medalist) Michael Harris’ two very recent weblog essays “Beschleunigung, perfectoid or otherwise” (February 3, 2016) and “More thoughts on acceleration” (February 4, 2016) are meditations upon the topic of cognitive acceleration in general.

    A survey of recent work motivating Harris’ essays is provided by a recent Scientific American weblog essay by mathematician Evelyn Lamb, titled “Thinking about How and Why We Prove” (January 21, 2016).

    Caveat  Michael Harris’ weblog Mathematics Without Apologies (and his 2015 book of the same title) asks plenty of tough questions, and presents plenty of challenging observations, without offering any easy answers or reassuring roadmaps. Harris relies upon the comments to his weblog for that. 🙂

    Working conclusion  The ongoing acceleration of global mathematical culture has produced a surplus of Grothendiecks and a paucity of Dieudonnés; in consequence “accelerated” intelligences already are walking among us; it is natural for STEM workers (young and old alike) to be simultaneously exhilarated and frightened by this reality.

  22. lotns Says:

    eli #6:
    refer to
    http://mathoverflow.net/questions/25630/major-mathematical-advances-past-age-fifty

  23. gasarch Says:

    Hilbert began working on Physics in his 50s.

    I’ve heard that the reason it appears that people stop doing work after age x is not that they are old, but that their field has dried up—all of the problems that can be solved have been, leaving only hard ones or ones that aren’t that interesting. So the key may be to change fields.

  24. joe Says:

    Given that this is Scott’s blog, I’m surprised no one has yet mentioned Feynman and his seminal work on quantum computing, which came after he had turned 60.

  25. eli Says:

    what is the polynomial method?

  26. Scott Says:

    eli #25: Here are PowerPoint slides from a tutorial talk about it that I gave at FOCS’2008.
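
    As a tiny, self-contained illustration (not from the slides): every Boolean function f has a unique multilinear real polynomial that agrees with it on {0,1}^n, and the method works by lower-bounding the degree of that polynomial (or of a polynomial approximating f). Here is a sketch of that degree computation by inclusion-exclusion:

    from itertools import product

    def multilinear_coefficients(f, n):
        # The unique multilinear polynomial agreeing with f on {0,1}^n is
        #   p(x) = sum over S of c[S] * prod_{i in S} x_i,
        # with c[S] obtained by Moebius inversion (inclusion-exclusion).
        coeffs = {}
        for S in product((0, 1), repeat=n):
            c = 0
            for T in product((0, 1), repeat=n):
                if all(t <= s for t, s in zip(T, S)):  # T is a subset of S
                    c += (-1) ** (sum(S) - sum(T)) * f(T)
            coeffs[S] = c
        return coeffs

    def degree(f, n):
        return max((sum(S) for S, c in multilinear_coefficients(f, n).items() if c != 0),
                   default=0)

    OR = lambda x: int(any(x))
    print(degree(OR, 3))  # prints 3: deg(OR_n) = n, so by Beals et al. an exact
                          # quantum algorithm for OR_n needs at least n/2 queries.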

  27. Eleanor Rieffel Says:

    The eminent knot theorist Joan Birman got her Ph.D. after age 40, though she must have started working in knot theory a little before she was 40.

  28. 5000 pages | Mathematics without Apologies, by Michael Harris Says:

    […] Sidles writes on Scott Aaronson’s […]

  29. John Sidles Says:

    The following passage augments in bold the Wikipedia article on the mathematician Amalie “Emmy” Noether:

    Noether’s mathematical work has been divided into three “epochs”.

    In the first epoch (1908–19, Noether age 26 to 37), she made contributions to the theories of algebraic invariants and number fields. Her work on differential invariants in the calculus of variations, Noether’s theorem, has been called “one of the most important mathematical theorems ever proved in guiding the development of modern physics”.

    In the second epoch (1920–26, Noether age 38 to 44), she began work that “changed the face of [abstract] algebra”. In her classic paper Idealtheorie in Ringbereichen (Theory of Ideals in Ring Domains, 1921) Noether developed the theory of ideals in commutative rings into a tool with wide-ranging applications. She made elegant use of the ascending chain condition, and objects satisfying it are named Noetherian in her honor.

    In the third epoch (1927–35, Noether age 45 to 53), she published works on noncommutative algebras and hypercomplex numbers and united the representation theory of groups with the theory of modules and ideals.

    In addition to her own publications, Noether was generous with her ideas and is credited with several lines of research published by other mathematicians, even in fields far removed from her main work, such as algebraic topology.

    How unfortunate it is that Noether’s tragically early death (at age 53) deprived humanity of a fourth “Emmy Epoch”. As Hermann Weyl said at her funeral: “Her heart knew no malice; she did not believe in evil.” … oy …  🙁

    Jean Dieudonné’s survey “The Historical Development of Algebraic Geometry” (1972, see comment #19) provides further details, and an extended mathematical context, regarding the transformational impact(s) of Emmy Noether’s work (and that of her mathematician-father Max Noether too).

  30. Attila Smith Says:

    John Sidles writes in his fourth (but not last) post “Mathematician (and Fields Medalist) Michael Harris …”. It is definitely false that Michael Harris is a Fields medalist. I suggest posters check these new gadgets, the Internet and Wikipedia, before writing misleading comments.

  31. John Sidles Says:

    That was an embarrassing goof: I referred to a Clay Research Awardee (Michael Harris, 2007) as a “Fields Medalist.”

    For details, interested Shtetl Optimized readers are referred to Chapter 2, “How I Acquired Charisma,” of Michael Harris’ Mathematics Without Apologies (2015).

    It’s notable (even surprising) that Harris’ book refers nowhere to his own Clay Research Award; Harris’ narrative is instead mainly concerned with (what Harris calls) “the relaxed field” that remains when peripheral considerations of personal honors, national interests, state secrets, and intellectual property are removed from centrality to mathematical practice.

    Needless to say, not everyone agrees on the merits of Harris’ view of mathematics as a practice that (ideally) is “not subject to the pressures of material gain and productivity.”

    Thank you, Attila Smith, for helping to inspire these corrections, reflections and citations.

  32. John Sidles Says:

    Let’s not let these fine Shtetl Optimized comments end without a tribute to yet another remarkable group of over-forty (in fact, over-sixty!) STEAM-workers.

    The  Turing Test  DEVO Test  From ordered foundations in rationality, self-construct/self-deconstruct self-realizing general intelligences. Because we’ll know, for sure, that strong AI has arrived, when its avatars start wearing Energy Domes. 🙂

  33. T. H. Ray Says:

    I attended the New England Complex Systems Institute’s International Conference on Complex Systems in 2006 (ICCS 2006), where Marvin Minsky was a plenary speaker. I was self-conscious because I requested an overhead projector for my session talk, while other participants (much younger than I) were skilled PowerPoint presenters.

    When Minsky kept the audience waiting while an overhead projector was wheeled into place and adjusted, he drily quipped: “I just work with computers; I don’t like them.” A forever endearing comment.