Not the critic who counts

There’s a website called Stop Timothy Gowers! !!! —yes, that’s the precise name, including the exclamation points.  The site is run by a mathematician who for years went under the pseudonym “owl / sowa,” but who’s since outed himself as Nikolai Ivanov.

For those who don’t know, Sir Timothy Gowers is a Fields Medalist, known for seminal contributions including the construction of Banach spaces with strange properties, the introduction of the Gowers norm, explicit bounds for the regularity lemma, and more—but who’s known at least as well for explaining math, in his blog, books, essays, MathOverflow, and elsewhere, in a remarkably clear, friendly, and accessible way.  He’s also been a leader in the fight to free academia from predatory publishers.

So why on earth would a person like that need to be stopped?  According to sowa, because Gowers, along with other disreputable characters like Terry Tao and Endre Szemerédi and the late Paul Erdős, represents a dangerous style of doing mathematics: a style that’s just as enamored of concrete problems as it is of abstract theory-building, and that doesn’t even mind connections to other fields like theoretical computer science.  If that style becomes popular with young people, it will prevent faculty positions and prestigious prizes from going to the only deserving kind of mathematics: the kind exemplified by Bourbaki and by Alexander Grothendieck, which builds up theoretical frameworks with principled disdain for the solving of simple-to-state problems.  Mathematical prizes going to the wrong people—or even going to the right people but presented by the wrong people—are constant preoccupations of sowa’s.  Read his blog and let me know if I’ve unfairly characterized it.


Now for something totally unrelated.  I recently discovered a forum on Reddit called SneerClub, which, as its name suggests, is devoted to sneering.  At whom?  Basically, at anyone who writes anything nice about nerds or Silicon Valley, or who’s associated with the “rationalist community,” or the Effective Altruist movement, or futurism or AI risk.  Typical targets include Scott Alexander, Eliezer Yudkowsky, Robin Hanson, Michael Vassar, Julia Galef, Paul Graham, Ray Kurzweil, Elon Musk … and with a list like that, I guess I should be honored to be a regular target too.

The basic SneerClub M.O. is to seize on a sentence that, when ripped from context and reflected through enough hermeneutic funhouse mirrors, can make nerds out to be right-wing villains, oppressing the downtrodden with rays of disgusting white maleness (even, it seems, ones who aren’t actually white or male).  So even if the nerd under discussion turns out to be, say, a leftist or a major donor to anti-Trump causes or malaria prevention or whatever, readers can feel reassured that their preexisting contempt was morally justified after all.

Thus: Eliezer Yudkowsky once wrote a piece of fiction in which a character, breaking the fourth wall, comments that another character seems to have no reason to be in the story.  This shows that Eliezer is a fascist who sees people unlike himself as having no reason to exist, and who’d probably exterminate them if he could.  Or: many rationalist nerds spend a lot of effort arguing against Trumpists, alt-righters, and neoreactionaries.  The fact that they interact with those people, in order to rebut them, shows that they’re probably closet neoreactionaries themselves.


When I browse sites like “Stop Timothy Gowers! !!!” or SneerClub, I tend to get depressed about the world—and yet I keep browsing, out of a fascination that I don’t fully understand.  I ask myself: how can a person read Gowers’s blog, or Slate Star Codex, without seeing what I see, which is basically luminous beacons of intellectual honesty and curiosity and clear thought and sparkling prose and charity to dissenting views, shining out far across the darkness of online discourse?

(Incidentally, Gowers lists “Stop Timothy Gowers! !!!” in his blogroll, and I likewise learned of SneerClub only because Scott Alexander linked to it.)

I’m well aware that this very question will only prompt more sneers.  From the sneerers’ perspective, they and their friends are the beacons, while Gowers or Scott Alexander are the darkness.  How could a neutral observer possibly decide who was right?

But then I reflect that there’s at least one glaring asymmetry between the sides.

If you read Timothy Gowers’s blog, one thing you’ll constantly notice is mathematics.  When he’s not weighing in on current events—for example, writing against Brexit, Elsevier, or the destruction of a math department by cost-cutting bureaucrats—Gowers is usually found delighting in exploring a new problem, or finding a new way to explain a known result.  Often, as with his dialogue with John Baez and others about the recent “p=t” breakthrough, Gowers is struggling to understand an unfamiliar piece of mathematics—and, completely unafraid of looking like an undergrad rather than a Fields Medalist, he simply shares each step of his journey, mistakes and all, inviting you to follow for as long as you can keep up.  Personally, I find it electrifying: why can’t all mathematicians write like that?

By contrast, when you read sowa’s blog, for all the anger about the sullying of mathematics by unworthy practitioners, there’s a striking absence of mathematical exposition.  Not once does sowa ever say: “OK, forget about the controversy.  Since you’re here, instead of just telling you about the epochal greatness of Grothendieck, let me walk you through an example.  Let me share a beautiful little insight that came out of his approach, in so self-contained a way that even a physicist or computer scientist will understand it.”  In other words, sowa never uses his blog to do what Gowers does every day.  Sowa might respond that that’s what papers are for—but the thing about a blog is that it gives you the chance to reach a much wider readership than your papers do.  If someone is already blogging anyway, why wouldn’t they seize that chance to share something they love?

Similar comments apply to Slate Star Codex versus r/SneerClub.  When I read an SSC post, even if I vehemently disagree with the central thesis (which, yes, happens sometimes), I always leave the diner intellectually sated.  For the rest of the day, my brain is bloated with new historical tidbits, or a deep-dive into the effects of a psychiatric drug I’d never heard of, or a jaw-dropping firsthand account of life as a medical resident, or a different way to think about a philosophical problem—or, if nothing else, some wicked puns and turns of phrase.

But when I visit r/SneerClub—well, I get exactly what’s advertised on the tin.  Once you’ve read a few, the sneers become pretty predictable.  I thought that for sure, I’d occasionally find something like: “look, we all agree that Eliezer Yudkowsky and Elon Musk and Nick Bostrom are talking out their asses about AI, and are coddled white male emotional toddlers to boot.  But even granting that, what do we think about AI?  Are intelligences vastly smarter than humans possible?  If not, then what principle rules them out?  What, if anything, can be said about what a superintelligent being would do, or want?  Just for fun, let’s explore this a little: I mean the actual questions themselves, not the psychological reasons why others explore them.”

That never happens.  Why not?


There’s another fascinating Reddit forum called “RoastMe”, where people submit a photo of themselves holding a sign expressing their desire to be “roasted”—and then hundreds of Redditors duly oblige, savagely mocking the person’s appearance and anything else they can learn about the person from their profile.  Many of the roasts are so merciless that one winces vicariously for the poor schmucks who signed up for this, and hopes that they won’t be driven to self-harm or suicide.  But browse enough roasts, and a realization starts to sink in: there’s no person, however beautiful or interesting they might’ve seemed a priori, for whom this roasting can’t be accomplished.  And that very generality makes the roasting lose much of its power—which maybe, optimistically, was the point of the whole exercise?

In the same way, spend a few days browsing SneerClub, and the truth hits you: once you’ve made their enemies list, there’s nothing you could possibly say or do that they wouldn’t sneer at.  Like, say it’s a nice day outside, and someone will reply:

“holy crap how much of an entitled nerdbro do you have to be, to erase all the marginalized people for whom the day is anything but ‘nice’—or who might be unable to go outside at all, because of limited mobility or other factors never even considered in these little rich white boys’ geek utopia?”

For me, this realization is liberating.  If appeasement of those who hate you is doomed to fail, why bother even embarking on it?


I’ve spent a lot of time on this blog criticizing D-Wave, and cringeworthy popular articles about quantum computing, and touted arXiv preprints that say wrong things.  But I hope regular readers feel like I’ve also tried to offer something positive: y’know, actual progress in quantum computing that actually excites me, or a talk about big numbers, or an explanation of the Bekenstein bound, or whatever.  My experience with sites like “Stop Timothy Gowers! !!!” and SneerClub makes me feel like I ought to be doing less criticizing and more positive stuff.

Why, because I fear turning into a sneerer myself?  No, it’s subtler than that: because reading the sneerers drives home for me that it’s a fool’s quest to try to become what Scott Alexander once called an “apex predator of the signalling world.”

At the risk of stating the obvious: if you write, for example, that Richard Feynman was a self-aggrandizing chauvinist showboater, then even if your remarks have a nonzero inner product with the truth, you don’t thereby “transcend” Feynman and stand above him, in the same way that set theory transcends and stands above arithmetic by constructing a model for it.  Feynman’s achievements don’t thereby become your achievements.

When I was in college, I devoured Ray Monk’s two-volume biography of Bertrand Russell.  This is a superb work of scholarship, which I warmly recommend to everyone.  But there’s one problem with it: Monk is constantly harping on his subject’s failures, and he has no sense of humor, whereas Russell does.  The result is that, whenever Monk quotes Russell’s personal letters at length to prove what a jerk Russell was, the quoted passages just leap off the page—as if old Bertie has come back from the dead to share a laugh with you, the reader, while his biographer looks on sternly and says, “you two think this is funny?”

For a writer, I can think of no higher aspiration than that: to write like Bertrand Russell or like Scott Alexander—in such a way that, even when people quote you to stand above you, your words break free of the imprisoning quotation marks, wiggle past the critics, and enter the minds of readers of your generation and of generations not yet born.


Update (Nov. 13): Since apparently some people didn’t know (?!), the title of this post comes from the famous Teddy Roosevelt quote:

It is not the critic who counts; not the man who points out how the strong man stumbles, or where the doer of deeds could have done them better. The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood; who strives valiantly; who errs, who comes short again and again, because there is no effort without error and shortcoming; but who does actually strive to do the deeds; who knows the great enthusiasms, the great devotions; who spends himself in a worthy cause; who at the best knows in the end the triumph of high achievement, and who at the worst, if he fails, at least fails while daring greatly, so that his place shall never be with those cold and timid souls who neither know victory nor defeat.

211 Responses to “Not the critic who counts”

  1. Grimy Says:

    “break free of the imprisoning quotation marks”? This sounds like a description of an injection attack¹ over plain-text. Really interesting mental image.

    ¹: https://en.wikipedia.org/wiki/Code_injection

  2. anon Says:

    I think you’re a bit unfair to owl/sowa. His blog is not quite all about complaining about ‘problem solvers’ and it seems to me that he was being polemical partly just to get a response from Gowers. If I ignore the polemics, I find some thoughts about philosophy and sociology of mathematics that I could even partly agree with.

    Besides, if one took all of Doron Zeilberger’s opinions 100 percent seriously, he would appear even more hateful than owl/sowa.

  3. Celestia Says:

    I believe this is what’s meant by the phrase “Haters gonna hate”

  4. Scott Says:

    anon #2: I agree, owl/sowa’s blog isn’t 100% bad, but represents a wasted opportunity to explain the kind of mathematics he likes.

    Regarding Zeilberger, the main difference is that owl/sowa is dead serious about purging the upper echelons of mathematics of unworthy Hungarian-style problem-solvers (and no doubt many others quietly agree with him), whereas Zeilberger’s essays are so over-the-top loony that they’re impossible for me to take seriously no matter how hard I try.

  5. Scott Says:

    Celestia #3: Thanks! I’ve always had difficulties with concision. 🙂

  6. anon Says:

    To me, too, owl/sowa is so over the top (in a different way…) that I find it impossible to believe that _he_ is dead serious.

  7. Cynicism Says:

    I too think that you’re being a bit unfair to owl/sowa. Writing about your own research mathematics is a bad way to stay anonymous online. As far as I can tell, he really did want to stay anonymous; then Gowers came to his blog to comment, and the blog essentially stopped.

    One wonders as well why you’re giving top billing on your blog to another blog by a much less famous academic whose last post was over a year and a half ago, and which has had a grand total of 3 posts since Thanksgiving of 2014.

  8. Ilyas Says:

    Hi there Scott, as you may recall, Tim also launched and runs the open-access online journal Discrete Analysis, which has been a major success in providing an alternative to the established publishers in peer-reviewed mathematics publishing. http://discreteanalysisjournal.com/

  9. Scott Says:

    Cynicism #7: To be honest, I didn’t even realize owl/sowa had stopped blogging. I’d simply had his disparaging comments about the kind of mathematics I love stuck in my craw since I encountered his blog several years ago—and after I encountered SneerClub, I realized that he and they would fit well together in a “haters gonna hate” post.

  10. CIP Says:

    I think that it’s that old green-eyed monster, jealousy, that motivates the sneerers of all flavors.

  11. Bram Cohen Says:

    Sometimes as a manager you’ll have two camps of employees, each accusing the other of being useless, evil, and conniving. There’s only one reasonable way forward in this scenario: figure out which camp has more work output that you can actually verify, declare them the winner, and purge the other side.

    I’ve pencilled in the meetup this saturday, what’s the best way to keep informed of when and where it is?

  12. Joshua Zelinsky Says:

    Scott #9,

    “To be honest, I didn’t even realize owl/sowa had stopped blogging. I’d simply had his disparaging comments about the kind of mathematics I love stuck in my craw”

    This is interesting. I’m curious if you identify yourself more as a theory builder or as a problem solver? A lot of your work (such as the algebrization barrier) made me think of you as more of the theory builder type. Do you not see yourself this way? (Of course, one can think of one’s own work as leaning one way or another and still like the other work. I’m very much on the problem solver end, but I find the theory building fascinating.)

  13. Eli Schiff Says:

    This is an incredibly anti-critical post. You claim to elevate the rigorous pursuit of knowledge, and yet you can’t see that harsh criticism is central to that effort. In fact, you explicitly trash the critical act in your title (an homage to a long lineage of anti-critical intellectuals).

    The people you condemn (though I’d never heard of them until your introduction) at least have a sense of humor about what they do. It’s clear in their name, “sneerclub.” But you seem humorless about the issue entirely.

    To wrap it all up, you engage in the armchair fallacy (http://www.elischiff.com/blog/2015/2/4/criticism-and-the-armchair-fallacy) that critics are not the equals of those they criticize, and more preposterously, you suspect them of wanting to subsume the essence of their targets into their own narcissistic project. What ever happened to steel-manning?

  14. Scott Says:

    Joshua #12: I have nothing whatsoever against theory-building, and I guess algebrization and even BosonSampling were sort of examples of it. I do have an allergy against anything that strikes me as generality or formalism for its own sake: I always want to know the motivation, what concrete patterns or examples your theory is trying to explain (and of course, in the case of algebrization I can tell you exactly what those were). Also, my own motivations tend to be things like P vs NP, or understanding the power of quantum computers, which I’m sure sowa sneers at (he once commented on his blog that he doesn’t care about CS, and that he thinks essentially everything useful in computing comes from faster hardware anyway, not from thinking about algorithms). I don’t return the favor: I’d love to someday understand some of the great achievements of Grothendieck-style algebraic geometry, maybe after I retire. 🙂 In the meantime, though, I also find great pleasure in the kind of math that tries to answer very concrete questions about graphs and Boolean functions and circuits and programs.

  15. The Problem With Gatekeepers Says:

    While I don’t know enough about the particulars of the beef Nikolai Ivanov has with Timothy Gowers, I think there is definitely truth to what you say he is claiming. Namely: if you look at the problems that not only mathematics but natural science in general has focused on of late, and the things both fields have rewarded, I generally agree with the notion that, for all the award mumbo-jumbo, a lot of it won’t stand the test of time the way, say, Newton’s Principia, Einstein’s general relativity, or Gödel’s and Turing’s work has.

    I put that critique in the larger context of physics having been hijacked by string theory (to the point that leading physicists are now questioning experimentation and falsification as the prime criteria by which theoretical physics should stand or fall); of Higgs saying that he would not have been able to come up with the boson that bears his name in the age of “publish or perish”; and of the fact that the three most celebrated results in mathematics of the last 20 years (the proof of Fermat’s Last Theorem, Perelman’s proof of the Geometrization Conjecture, and Yitang Zhang’s proof of bounded gaps between primes) were achieved by people not following the ways of the mathematical institutional system.

    The world of high tech is not any better. Today’s AI is exactly the same as the AI of the 1960s and 1970s, only with more data and more computing power, which makes AI essentially able to reproduce anything human beings have produced, but not to improve on human work. We are living the world that people in the 1960s and 1970s dreamed of at places like Bell Labs, IBM Research, or PARC.

    There is very little to be optimistic about in the way of “breakthroughs” when looking at the last 30-40 years. And yet there have never been more awards for scientific work, and never more people accumulating them.

    Something definitely doesn’t add up at all.

  16. Scott Says:

    Eli #13: I’m totally fine with harsh criticism of ideas, especially when it uses the thing being criticized as a jumping-off point to try to build something new and better. Scott Alexander constantly engages in that sort of criticism, and I try to as well. But “sneering,” almost by definition, is not the interesting and useful kind of criticism, but its opposite.

    Incidentally, I also found essentially nothing on SneerClub that made me laugh—which was disappointing, since I would’ve enjoyed some good zingers about people I know (or even me 🙂 )

  17. Anonymous Says:

    Great post! Two quick comments:

    1. I find that not “embarking on appeasement of those who hate you” (regardless of whether it has a chance of success) is one of the things I’ve learned best growing up (although I’m still learning it!).

    2. Another, possibly more general, criterion according to which “a neutral observer [can] decide who was right” is finding out which side accepts (even better, welcomes) criticism, or at least dissenting opinions. That’s the side that’s right.

  18. Eli Schiff Says:

    Scott #16:

    > especially when it uses the thing being criticized as a jumping-off point to try to build something new and better.

    Criticism needn’t be constructive, nor useful to the person being criticized. There is indeed a place for sneering in the same way as there’s a place for the iconoclastic mockery inherent to satire. Indeed, one could categorize this very post of yours as a sneer directed at your critics, for not rising to a standard of decorum you find appropriate.

    I didn’t particularly find anything funny on that site either. I mostly disagree with their framing. But I do think it’s good that there is a place for people who may yet challenge themselves to provide a substantive critique of your work. Perhaps one of them will use this very blog post as the impetus for it.

  19. The Pachyderminator Says:

    Regarding Ray Monk: I haven’t yet read his Russell biography, but I’ve heard that he came to dislike Russell by the time he was finished working on it. FWIW, his biographies of Wittgenstein and Robert Oppenheimer are quite sympathetic, while not ignoring the faults of either one. The Wittgenstein one is my favorite, but the Oppenheimer one is particularly interesting in that Monk, who’s basically a humanities scholar, made a serious attempt to understand all the physics that Oppenheimer researched.

  20. Romeo Stevens Says:

    Thank you for writing this. Resonates with C.S. Lewis’ The Inner Ring: http://www.lewissociety.org/innerring.php
    I find myself recommending it to people who are capable of Actually Doing the Thing, but who find themselves discouraged by the edifice of accomplishment that is incentivized over real, messy progress.

  21. Scott Says:

    Eli #18: Yes, if the sneerers ever graduate to more substantive critique, let me know; I’ll be happy to read that! In general, though, I find that the danger of providing serious critiques of field X, is that the thoughtful people in field X are then so interested to discuss with you that before long you’ve basically become a member of field X, albeit one with some heterodox opinions. That’s more-or-less what happened with me and the rationalist community, and is also what happened with some quantum computing skeptics and quantum computing…

  22. Eli Schiff Says:

    Scott #21 You become the outsider-insider in a way that seemed impossible!

  23. Scott Says:

    Romeo #20: Thanks; that’s one of the best things like a graduation speech I’ve ever read, up there with Feynman’s “Cargo Cult Science”! Who ever knew C.S. Lewis was a good writer? 😉

  24. venky Says:

    Scott: Can you write an expository post on Voevodsky’s univalent foundations? Thx!

  25. Scott Says:

    venky #24: No, because I don’t understand it. I don’t understand type theory, I don’t understand category theory, I don’t understand functional programming, I don’t understand lambda calculus, I don’t understand any part of math or CS that’s about developing higher-level formalisms for things that we already knew how to say in more concrete ways. This is not, in any way, a comment against these areas, which have many uses and are beloved by many of my friends and should live long and prosper. It’s purely a comment about me and my cognitive style.

  26. Atreat Says:

    “But “sneering,” almost by definition, is not the interesting and useful kind of criticism, but its opposite.”

    One could even call it bullying. Certainly reminds me of the “cool kids” in school or the mean girls. They should call it the MeanGirlsClub.

    “for things that we already knew how to say in more concrete ways”

    Whaaa!!! 🙂 I find the lambda calculus and SKI calculus much more intuitive to grok than Turing’s framework. Not sure in what way Turing is more concrete.

  27. Domotor Palvolgyi Says:

    Quite oddly, I once heard a Hungarian (who stopped doing research long ago) arguing that he didn’t like the way Gowers and some of his co-authors did math. While I didn’t completely understand the argument, I think it was more the opposite, i.e., that they are building a theory that is efficient at proving results, but that has no elegance / isn’t so intuitive.

  28. Noah Stephens-Davidowitz Says:

    There’s a sort of obvious trap that something like the rationalist movement/community (or any movement that claims to be built around a moral truth/set of truths) can fall into. “We” can start to feel superior to other people, which is probably never a good thing. The movement already walks some pretty fine lines in its relationship to, e.g., religion. (I’m the world’s most zealous atheist, so I’m definitely lumping myself in with “the movement” here.) I think it definitely has a narcissism of small differences problem, but I’m not sure there’s a group of people (or even a single person) that doesn’t have that problem.

    That said, as I understand it, the foundational principle of this brand of rationalism is doubt. That is a very good foundational principle, and one that should do something to keep us from becoming an insular group of self-righteous sneering nerdsplainers. But, we’re still a group of people with something like an ideology, and we’re therefore still at risk of sucking in roughly the same way that almost every other such group of people ends up sucking.

    Maybe being really reticent to criticize other people (and really active in criticizing ourselves) is a good way to avoid these traps. It’s easier said than done, though.

    Anyway, I mostly just want to say that I love the density of nerdiness in this sentence:

    “At the risk of stating the obvious: if you write, for example, that Richard Feynman was a self-aggrandizing chauvinist showboater, then even if your remarks have a nonzero inner product with the truth, you don’t thereby “transcend” Feynman and stand above him, in the same way that set theory transcends and stands above arithmetic by constructing a model for it.”

  29. J Says:

    I think you are oversimplifying Sowa’s views somewhat. He believes, for example, that Shannon’s work in information theory would have been Fields Medal worthy, so he isn’t opposed to all applied mathematics. He seems to dislike hard-analysis-type estimates which are far, far away from the predicted truth, and seems to want applications to “classical mathematics.” He seems, on the other hand, to really love powerful unifying definitions, and theories built around them, that explain a tremendous number of disparate-seeming phenomena (Shannon’s information theory, homology, even Schramm-Loewner evolution). This definitely cuts towards algebraic geometry and away from combinatorics, but that doesn’t seem to be his point.

  30. Scott Says:

    Noah #28: I agree with everything you say! But here was my experience:

    When I argued with rationalists about some of their central beliefs (the risk of a singularity soon, the universal applicability of Bayesianism…), they argued back in a substantive and collegial way. They seemed clearly happier to have me criticize their ideas than not to engage at all.

    By contrast, when I argued with SJWs about some of their central beliefs … well, everyone knows what happened. It was actually fine, exactly like with the rationalists, so long as the discussion was confined to the subset of SJWs who read Shtetl-Optimized. But when it got to the wider world, I became the embodiment of everything they hated, fit only for sneering rather than dialogue.

    Based on this experience, I’m currently more optimistic about the ability of the rationalist community to resist the suckifying forces that you write about, than I am about the ability of the SneerClub community to do the same.

  31. Sniffnoy Says:

    I don’t understand lambda calculus

    If you don’t mind me repeating my comments from earlier threads… lambda calculus is actually really simple. As I recall last time this came up your response was that you didn’t understand the three equivalence rules. But these are actually really simple, especially if you keep in mind what they mean. Remember, “λx.(etc)” just means the function “x↦(etc)”. So then the three rules are:

    α. Dummy variables can be renamed.

    Now, if you state this rule in full, it can look confusing, because you end up talking about free variables, bound variables, etc. But these are absolutely the same conditions that apply to renaming dummy variables in all of mathematics! One just doesn’t normally think about them explicitly. If you understand the concept of a dummy variable, you shouldn’t need to think about them explicitly here, either.

    β. Functions do what they say they do. (Or, to apply a function, you do what it says it does.)

    Not sure I have any further commentary on this one. This one has absolutely no restrictions on it.

    η. “x↦f(x)” is just a fancy way of saying “f”.

    This one can also look a bit confusing if you state it in full because there’s again a technical restriction; however, this technical restriction is just saying that f can’t have some secret dependence on x (or else the whole thing would make no sense, after all). So once again, if you just keep in mind what the rule means, the technical restriction should be something you don’t even really have to think about explicitly. Obviously you can’t replace “x↦f(x)” with f if f is hiding some secret dependence on x.

    They’re really that simple!
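    (If it helps, here is a minimal sketch of the three rules using Python lambdas as informal stand-ins; this is only an analogy, not a real term-rewriting implementation, and the function names are purely illustrative:)

```python
# α-conversion: renaming the dummy variable changes nothing.
g1 = lambda x: x + 1
g2 = lambda y: y + 1  # same function, different bound-variable name
assert all(g1(n) == g2(n) for n in range(10))

# β-reduction: applying a function just does what its body says.
assert (lambda x: x * 2)(21) == 42

# η-conversion: wrapping f as "x ↦ f(x)" behaves exactly like f,
# provided the wrapper hides no secret dependence on x.
f = lambda x: x * 2
eta_f = lambda x: f(x)
assert all(eta_f(n) == f(n) for n in range(10))
```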

    (The SKI calculus I truly don’t understand the point of. Why rephrase lambda calculus in such an opaque manner?)

  32. Sniffnoy Says:

    Darn it, ↦ showed up in the preview as the appropriate character, I don’t know why it suddenly failed when the comment was actually posted.

  33. Scott Says:

    Sniffnoy #31: I don’t really mean that I don’t understand those rules, as you’ve stated them. I mean that for any of the math or CS problems I’ve ever cared about, I don’t understand what the benefit would be of restating things in terms of those rules. However, this is not in any way to deny their usefulness for other parts of math and CS.

  34. Pavel Krapivsky Says:

    Scott, have you just discovered the *Stop Gowers! !!!* website? He was writing his posts 4-5 years ago; there has been essentially nothing since then. My old memories of his posts are that he was intentionally provocative, to emphasize a different vision of what he considers most important in math, but that he would basically agree with all the positive things you said about Gowers (an outstanding researcher, a brilliant expositor, and a good human being).

    Sowa (meaning *eagle-owl* in Russian) certainly was not writing numerous posts, so one should not expect to learn math from his blog. Still, he wrote (in answers to readers, if I remember correctly) about his efforts to study various parts of math very far from the *Grothendieck* math; he certainly knows a lot of the *Hungarian* math, and it felt that he genuinely loves a huge number of math subjects. Why was he so provocative and rude? My feeling was that it was due to oscillations in popularity. In the past, especially in France, the *Grothendieck* kind of math was considered the only worthwhile kind, and *Hungarian* math was treated as a curiosity rather than something deep. Then people got tired of that kind of indoctrination and the popularity reversed. Sowa was fighting against this fad, and perhaps consciously chose Gowers as an exemplary mathematician, liked by most, who inadvertently can play a not-always-positive role (according to sowa).

    I’ve just looked at recent arXiv submissions by Nikolai Ivanov. Two of the last four contribute to *Hungarian* math: they contain proofs of theorems of Erdős. The most recent such preprint has interesting bits of history; it mentions a problem attributed to Bourbaki (a kind of old collective *Grothendieck*) and notes that it actually comes from de Bruijn and Erdős. I also see a paper about the Tutte expansion. He is seemingly doing combinatorics more actively than many official combinatorialists!

  35. Sniffnoy Says:

    Scott #33:

    …oh. Yeah honestly I don’t understand that either. I just think lambda calculus is neat. 😛

  36. Atreat Says:

    “The SKI calculus I truly don’t understand the point of. Why rephrase lambda calculus in such an opaque manner?”

    SKI calculus came first: Schönfinkel developed combinatory logic before Church came up with the lambda calculus. In fact, all you need are the two combinators S and K, since the identity combinator I can be formed from them. That those two combinators can model a universal Turing machine is incredible to me. Those two combinators are all you need to perform any computation.
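    To make the "I is definable from S and K" point concrete, here is a minimal Python sketch, with combinators rendered as curried one-argument functions (the encoding is just an illustration, not anything from the thread):

```python
# S and K as curried Python functions.
S = lambda x: lambda y: lambda z: x(z)(y(z))   # S x y z = x z (y z)
K = lambda x: lambda y: x                      # K x y = x

# The identity combinator I need not be primitive: I = S K K,
# since S K K z = K z (K z) = z.
I = S(K)(K)
assert I(42) == 42
```

    Of course, real SKI programs encode data (numbers, booleans, pairs) as combinators too, which is where the opacity Sniffnoy complains about comes from.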

  37. Joshua Zelinsky Says:

    “When I argued with rationalists about some of their central beliefs (the risk of a singularity soon, the universal applicability of Bayesianism…), they argued back in a substantive and collegial way. They seemed clearly happier to have me criticize their ideas than not to engage at all.

    By contrast, when I argued with SJWs about some of their central beliefs … well, everyone knows what happened. It was actually fine, exactly like with the rationalists, so long as the discussion was confined to the subset of SJWs who read Shtetl-Optimized. But when it got to the wider world, I became the embodiment of everything they hated, fit only for sneering rather than dialogue.”

    How much of this is due to different community sizes? As the size of a community gets larger, sheer probability will lead to a higher percentage of jerks. It is true that some communities, based on their norms, will have different base percentages, but one can’t help but note that the rationalist community is pretty tiny as groups go. If in 40 years it turns out that we teach basic Bayesianism and AI risk in schools, there will be a heck of a lot of jerks who will respond very negatively to any questioning of those positions.

  38. Boaz Barak Says:

    Hi Scott,

    Given that I’m just teaching intro to TCS now (and in fact decided to teach lambda calculus as part of this course), let me tell you why I do think it’s interesting. (BTW I am going to refer to the busy beaver problem and your essay in one of the problem sets.)

    First of all, the definition of computation via Turing machines involves a number of arbitrary choices, or “free parameters,” while the lambda calculus does seem “purer” or more “determined” in some sense.

    Second, the lambda calculus might be the cleanest setting in which to see the “fixed point operator”: for every functional F we can find a “fixed point” function f s.t. Ff = f. The proof of that in the lambda calculus is really two lines with the Y combinator.

    In turn this notion is related to being able to assume that a function has access to its own code. You can think of it this way: since the lambda calculus doesn’t have recursion, we emulate it by printing our own code and then evaluating it (which means that when we evaluate this code, it also prints itself and evaluates that, and so on and so forth).
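    A sketch of the two-line fixed-point argument in Python: since Python is strict, we need the eta-expanded Z variant of the Y combinator rather than Y itself, and factorial below is my choice of illustration, not anything from the comment:

```python
# Z combinator: the call-by-value variant of Curry's Y fixed-point combinator.
# For any functional F, Z(F) is a fixed point: F(Z(F)) behaves like Z(F).
Z = lambda F: (lambda x: F(lambda v: x(x)(v)))(lambda x: F(lambda v: x(x)(v)))

# Recursion without any named self-reference: factorial as a fixed point.
fact = Z(lambda f: lambda n: 1 if n == 0 else n * f(n - 1))
assert fact(5) == 120
```

    Note that the lambda passed to Z never mentions its own name; the self-application x(x) inside Z is exactly the "evaluate your own code" trick described above.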

    In turn this is also related to the construction of the Gödel sentence

    x = “x does not have a proof in the system V”

    That is, we can think of x as a fixed point of such an equation.

    That said, I didn’t prove Gödel’s incompleteness theorem using the lambda calculus in class. (But this probably has to do with my own intellectual limitations: I’ve never been able to keep in my head the difference between the first and second incompleteness theorems, or why the difference is important.)

  39. murmur Says:

    Hi Scott, do you think given the constant hatred that the nerds face from the SJW-ers they will sour on the whole concept of social justice?

  40. John Says:

    I think you are misreading these sites. They are obviously meant to be taken at least somewhat humorously. “Sneer club” or “!!!” should be a tipoff. I think anything with three exclamation points is meant to be humorous, with the possible exception of Presidential Statements.

  41. Douglas Knight Says:

    You unfairly characterize Sowa’s blog.

    I think your characterization of the part you talk about is inaccurate, but, more importantly, that part is no more than 10% of his blog.

  42. Alex Zavoluk Says:

    I know this is almost completely irrelevant, but I get giddy whenever I see the p=t paper get attention because Professor Malliaris was one of the best professors I ever had.

  43. Vincent Luczkow Says:

    Oh, you would just *love* r/badphilosophy. /s

  44. Scott Says:

    Joshua Zelinsky #37:

      If in 40 years it turns out that we teach basic Bayesianism and AI risk in schools, there will be a heck of a lot of jerks who will respond very negatively to any questioning of those positions.

    Yes, that’s entirely possible. Not only because of sheer numbers, but also because I’ve never encountered any topic (Shakespeare, set theory, programming…) so interesting that it couldn’t be ruined by the American K-12 system.

    As it happens, I have lectured about Aumann’s theorem at two summer camps that were devoted to teaching high-school kids about basic Bayesianism and AI risk. And I found the students to be absolutely wonderful, and not at all dogmatic about those topics. But of course, they were an extremely self-selected group.

  45. Scott Says:

    Boaz #38: Thanks; that’s helpful! I’m glad you found a use for lambda-calculus in your course—though I would’ve been more impressed still had you said you’d found a use for it in your research. 😉

    I think I won’t incorporate it into my undergrad ToC course, but mostly just because, the way I teach it, I don’t much need any programming formalisms, not even imperative ones. We do mention the recursion theorem, but more in the spirit of “here’s a cool trick that lets you assume programs have access to their own code” than as something we fundamentally need. Of course it does tie in nicely with Gödel’s Theorem, which is something I spend a lecture or two on, but again not at a formalized level.

    Incidentally, the First Incompleteness Theorem is the thing that tells you that the following sentence is unprovable in formal system F, if F satisfies reasonable conditions:

      The following sentence repeated twice, the second time in quotes, is not provable in formal system F.
      “The following sentence repeated twice, the second time in quotes, is not provable in formal system F.”

    Whereas the Second Incompleteness Theorem, a corollary of the first, is the thing that tells you that

      F is consistent

    is unprovable in F—a conclusion that packs a bit more punch!

    (From the unsolvability of the halting problem, we can deduce that some true arithmetical statement is unprovable in F, which we could also call the First Incompleteness Theorem, but which then doesn’t yield the Second as an easy corollary, unless we come at it a different way. Maybe that’s the 0.5th Incompleteness Theorem.)
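    The “repeated twice, the second time in quotes” construction is exactly how a quine works; here is the standard two-line Python version of the trick (a generic illustration, not anything from the post):

```python
# The template s, printed with its own repr substituted in, reproduces
# the entire program: a string "repeated twice, the second time in quotes".
s = 's = %r\nprint(s %% s)'
print(s % s)
```

    Running this prints its own source code, which is the programming analogue of a sentence that talks about itself without ever naming itself.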

  46. Nick Teague Says:

    Regarding your comments about the excerpts from Russell in his biography: I remember feeling exactly the same way reading the book “Faraday, Maxwell, and the Electromagnetic Field: How Two Men Revolutionized Physics”. Although Maxwell’s and Faraday’s quotes are fairly sparse in the text, their crispness and clarity are, let’s face it, beyond the voice of most any biographer. (Don’t get me wrong: it was still a great book.)

  47. Scott Says:

    murmur #39:

      do you think given the constant hatred that the nerds face from the SJW-ers they will sour on the whole concept of social justice?

    I hope not! In that regard, I’m encouraged by the popularity among nerds of the Effective Altruist movement, which seems to be doing great so far at taking the good parts of social justice (i.e., actually helping disadvantaged people) while leaving out the bad parts (sneering at the outgroup).

  48. Scott Says:

    John #40:

      I think you are misreading these sites. They are obviously meant to be taken at least somewhat humorously.

    In that case, why aren’t they funny?

  49. Scott Says:

    Incidentally, anonymous #17:

      Another, possibly more general, criterion according to which “a neutral observer [can] decide who was right” is finding out which side accepts (even better, welcomes) criticism or at least dissenting opinions.

    I noticed that a thread appeared this morning in r/SneerClub to reply to this post, but then was quickly deleted before it attracted any comments. So it would indeed seem like, despite the protection of anonymity which I lack, the mods of SneerClub can dish it out but have trouble taking it.

  50. Scott Says:

    Douglas #41: I’m actually not sure about the percentages—but in any case, I feel like the argument that stopping Timothy Gowers is no more than 10% of sowa’s blog would be slightly more persuasive, if the blog’s title were something other than “Stop Timothy Gowers! !!!”

  51. Scott Says:

    Alex Zavoluk #42: It’s good to hear that she’s not only a great mathematician, but a great teacher as well!

  52. Douglas Knight Says:

    I mean 10% of the blog is about comparing those styles of mathematics. In particular, that has nothing to do with the title.

  53. Rational Feed – deluks917 Says:

    […] Its not the Critic who Counts by Scott Aaronson – Scott Alexander and the mathematician Timothy Gowers seem to genuinely love their subjects. Whenever you read their work, even their controversial posts, you learn new things. Some of their critics seem consumed by politics. […]

  54. John Baez Says:

    Eli Schiff #13 writes: “This is an incredibly anti-critical post. You claim to elevate the rigorous pursuit of knowledge, and yet you can’t see that harsh criticism is central to that effort.”

    You’re harshly criticizing Scott’s harsh criticism of harsh criticism? Cool! How meta! It makes me want to harshly criticize you, just to join the fun.

  55. Chinese Student Says:

    Good day.

    This is a reply to your comment https://www.scottaaronson.com/blog/?p=3167#comment-1732350 under my initial comment here: https://www.scottaaronson.com/blog/?p=3167#comment-1732343 Yes, it’s off-topic for this thread but I couldn’t reply there. Hope you don’t mind this and approve this comment.

    Also sorry for taking so long for catching up (I was detained for months by the Chinese authorities for activism related charges). Here’s my response: It’s extremely troubling to see that you allege with force that the Enlightenment is compatible with notions of moral duty towards the environment, and that modern capitalism merely tries to ignore this alleged Enlightenment doctrine. Something should be said of the profound disjunction between this thought-structure of domination, on the one hand, and morality and values, on the other. The disjunction was effected early on in the Enlightenment, when the so-called mechanical philosophers, such as Boyle and Newton, began to emerge. These philosophers, whose doctrine came to constitute a largely undisputed paradigm of the Enlightenment, argued that matter is “brute,” “inert,” and even “stupid.” [See: A. Bilgrami, “Gandhi, Newton, and the Enlightenment,” in I. A. Karawan, et al., eds., Values and Violence (New York: Springer, 2008), 15-29.] In this doctrine, all spiritual agencies in the *anima mundi* were banished from the universe, rendering matter spiritually meaningless, but still relevant in an anthropocentric, materialistic sense. If matter exists in a “brute” and “inert” form, then the only reason for its existence must be that of its subservience to man. Robert Boyle, a leading mechanical philosopher, represented his movement well when he elaborated the view “that man was created to possess and to rule over nature.” [Jacob, Radical Enlightenment, 6, as well as pp. xi, 3-4, 64-67 and passim.] Thus, if nature is “brute” and “inert,” then one can deal with it without any moral restraint, which is _precisely what has happened since the early nineteenth century_, if not long before. But since, as I have argued, the universalist-extensionist propensity is structurally tied to a domination-based thought-structure, the view of “brute” matter was also the basis on which modern colonialism was conducted and justified. 
[For the European encounter with the Indian conflation of value and matter, see the insightful analysis of Bernard Cohn, Colonialism and its Forms of Knowledge, 18-19. See also J. Hart, Empires and Colonies (Cambridge: Polity, 2008), 44-47, 79-82, 211-14.] This is not all, however. The more important point in the isolation of matter as “brute” and “inert” is the resultant crucial phenomenon of separating fact from value, which is yet another major and essential factor in the modern project (a factor that, as Charles Taylor once observed, “outrageously fix[es] the rules of discourse in the interests of one outlook, forcing rival views into incoherence”). [Taylor, “Justice After Virtue,” in J. Horton and S. Mendus, eds., After MacIntyre (Cambridge: Polity Press, 1994), 20.] If matter, in itself, is devoid of value, then we can treat it as an object. We can study it, and subject it to the entire range of our analytical apparatus, without it making any moral demands on us. [Bilgrami, “Gandhi, Newton,” 25 and passim.] More importantly, however, and as some social scientists have argued, the separation which denudes intellectual/scientific enquiry of value “is ethically untenable,” for it “disengages the observer from the social responsibility that should accompany his accounts, and it results in the status quo being presented as somehow natural and real, rather than as constructed and partisan.” [Pressler and Dasilva, Sociology, 102-03.] This ethical dimension, indeed moral accountability, can hardly be overemphasized.

    It’s a tragedy, indeed, if you on the one hand embrace the Enlightenment, counting yourself among the (in your own words) “defenders of the Enlightenment,” and on the other hand warn about global warming and the necessity of taking all precautions to diminish its ever-expanding impacts, while at the same time thinking that there are no contradictions between the two.

  56. BLANDCorporatio Says:

    I have a feeling SneerClub is in the same vein as the Flat Earth Society.

    Aka, it’s hard to recognize satire. And no, the satire targets aren’t Elon Musk or Scott Alt-A.

    Then again, I wouldn’t know. I haven’t been there. Only so much modern art I can make room for.

    Cheers.

  57. Scott Says:

    BLANDCorporatio #56: Regarding the idea that people who spend years attacking you, and people like you, are just engaging in a subtle form of satire that you don’t get—it reminds me of what Richard Dawkins wrote about postmodernists, in his unforgettable review of Alan Sokal and Jean Bricmont’s book:

      But don’t the postmodernists claim only to be ‘playing games’? Isn’t the whole point of their philosophy that anything goes, there is no absolute truth, anything written has the same status as anything else, and no point of view is privileged? Given their own standards of relative truth, isn’t it rather unfair to take them to task for fooling around with word games, and playing little jokes on readers? Perhaps, but one is then left wondering why their writings are so stupefyingly boring. Shouldn’t games at least be entertaining, not po-faced, solemn and pretentious? More tellingly, if they are only joking, why do they react with such shrieks of dismay when somebody plays a joke at their expense?

  58. BLANDCorporatio Says:

    Hey man, I’m just showing charity to dissenting opinions 😉

    You seem to assume that, like the Postmodernists, SneerClubbers actually mean what they say. One should be mindful that on the internet nobody knows when you’re a dog, or a secret MAGA-wearer posting nutty stuff on Jezebel.

    Cheers.

  59. Jeremy Says:

    Owl/sowa clearly has elitist and ugly opinions. In particular I find his opinions about computers in mathematics to be quite backward.

    I think his reasons for being miffed at Gowers’ presentation of Deligne’s award have some merit though. I wonder if you consider them at all reasonable.

    If you have a moment, look at Sections 5 & 6 of the following:
    http://www.abelprize.no/c57681/binfil/download.php?tid=57753

    Gowers says that things like “Leray Spectral Sequence arguments” are “just words to him, and pretty terrifying ones at that.” This seems like someone writing about your life’s work who is scared of the phrase “unitary transformation.”

    It would be a deep honor to have such a strong and respected mathematician as Gowers make an honest effort to understand your major work. But perhaps the presenter of the Abel prize should have the background necessary to actually understand the work.

    There is a terribly unfortunate situation in mathematics that different fields are unable to communicate. It takes years to learn how to speak a different language. Sometimes a field of math truly can be worthwhile, connected to concrete problems, and even foster a problem-solving culture, yet require so much background to enter into that it only attracts people who love learning theory as much as they love problem solving.

    As much as combinatorics and theoretical computer science have suffered from elitism, there is a present and real fear in some other subjects that they suffer from inaccessibility. Combinatorics and theoretical computer science truly are easier to enter, and increasingly a larger part of the mathematical landscape, even if it takes just as long to achieve true mastery of them as true mastery of Deligne’s work. The hope is that there remains room in the world for number theorists and algebraic geometers.

  60. fred Says:

    Reminds me of this article about an interesting classification of Futurists, based on belief in the Singularity and Optimism.
    But in the end it’s really about the white male nerds being the source of all evil…

    http://bostonreview.net/science-nature/cathy-oneil-know-thy-futurist

  61. Scott Says:

    fred #60: Yes, I of course read that piece, as well as Scott Alexander’s scathing review of it. The thing is, I had some correspondence with Cathy O’Neil in the wake of the comment-171 affair. I like her. I know that she’s capable of vastly better than that garbage piece, which criticizes hypotheses about the future not on the ground of being wrong, but on the ground of being popular among former “sex-starved teenagers.” (One wonders: how many of the important ideas in the history of humanity were not invented by former sex-starved teenagers?)

  62. aaaaaaaaaaa Says:

    Both Cathy O’Neil’s piece and Scott Alexander’s review made me feel that they were written without trying to understand the other side of the debate very much.

  63. Izaak Meckler Says:

    Hi Scott,

    On type theory: I have found a few instances in complexity/crypto-style TCS[1] where people make (to my ears) silly circumlocutions which could be avoided using the “types mindset”. And I don’t merely mean that things would sound simpler, I mean they would also be simpler to think about and prove things about.

    The most salient example is black-box separation arguments in crypto. Here the idea is to show that one cryptographic primitive B cannot be obtained in a generic way from a cryptographic primitive A. The usual technique here is to come up with a (randomized) oracle relative to which primitive A exists but primitive B does not. The way this shakes out in practice is typified by the following example.

    A common “randomized oracle” is the “generic group model”, in which one has oracle access to the multiplication and inversion functions for an abelian group, whose encoding as bitstrings is chosen randomly (specifically I mean we pick some arbitrary identification of the group with a set of bitstrings X, then sample uniformly a permutation \pi of X. If g \in G was previously identified with x \in X, it is now identified with \pi(x).)

    Now, the point is that picking encodings randomly forces any algorithm to only be able to multiply, invert, and test group elements for equality. Any algorithm which doesn’t restrict itself to dealing with group elements in this way cannot be doing anything useful because the randomization prevents you from doing anything else. Then, once you establish that any adversary is dealing with group elements in only this way, one can proceed with further analysis.

    This situation is, I think, more easily described and analyzed using a PL-style approach. One simply defines a “programming language” with a type for bitstrings and also a type G for group elements: start with the simply typed lambda calculus, add bitstrings, and then add terms “mul : G * G -> G”, “inv : G -> G”, and “eq : G -> G -> bool”. Now, an adversary is just an appropriately sized program in this language. One does not have to use the idea of a “randomized encoding” to model a generic adversary (i.e., an adversary that respects certain types); one can instead just describe the situation directly. By the way, this point of view should make subsequent analysis simpler as well, since one can just do standard “induction on terms” and “inversion” arguments.
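    A toy Python rendering of that typed interface (everything here, including the choice of Z_p under addition as the group and the particular adversary, is an illustrative assumption; Python can only encourage, not enforce, the typing discipline):

```python
import secrets

class G:
    """Opaque group-element handle: "well-typed" adversary code never inspects
    the representation, only calls mul/inv/eq (toy group: Z_p under addition)."""
    _p = 2**61 - 1  # a Mersenne prime

    def __init__(self, v):
        self._v = v % G._p          # private; generic code treats G as abstract

    def mul(self, other):           # mul : G * G -> G
        return G(self._v + other._v)

    def inv(self):                  # inv : G -> G
        return G(-self._v)

    def eq(self, other):            # eq : G -> G -> bool
        return self._v == other._v

# A "generic" adversary is any program built only from the typed operations:
def generic_adversary(g, h):
    # checks the identity (g * h) * g^{-1} == h, valid in any abelian group
    return g.mul(h).mul(g.inv()).eq(h)

g, h = G(secrets.randbelow(G._p)), G(secrets.randbelow(G._p))
assert generic_adversary(g, h)
```

    The point of the PL framing is that "adversaries are exactly the well-typed terms over mul/inv/eq" is a definition one can do induction over, rather than a property extracted indirectly from a random encoding.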

    [1]: I know this is a cumbersome turn of phrase but I dislike the “theory A/B” description.

  64. Kevin S Van Horn Says:

    Noah Stephens-Davidowitz #28: I heartily agree. This kind of writing is what keeps me coming back to this blog. 🙂

    “At the risk of stating the obvious: if you write, for example, that Richard Feynman was a self-aggrandizing chauvinist showboater, then even if your remarks have a nonzero inner product with the truth, you don’t thereby “transcend” Feynman and stand above him, in the same way that set theory transcends and stands above arithmetic by constructing a model for it.”

  65. Scott Says:

    aaaaaaaaaaa #62:

      Both Cathy O’Neil’s piece and Scott Alexander’s review made me feel that they were written without trying to understand the other side of the debate very much.

    OK, so let me extend to O’Neil’s piece the maximum charity that I can. I completely agree with her that we should consider, not only what futures are likely or unlikely, but also which ones we (“we” meaning all of humanity, or all who want to join the discussion) want or don’t want. I agree that, when we do this, we should weigh the benefits and harms for all humanity (and ideally animals, and any other sentient life we create)—not merely the wealthiest 0.1%, or white men, or whatever. So far, I don’t see how any sane person could disagree.

    But more than that, I’m actually with O’Neil on her “fourth quadrant” being (alas) the most likely future for our civilization. That is, I expect there to be no technological singularity (or certainly not in the next century or two), and I expect things simply to get worse and worse as the planet heats up, we run out of natural resources, and we descend into authoritarianism, religious fundamentalism, and possibly nuclear war.

    But I’m not certain about that. And I certainly don’t think it’s evidence of racism or sexism if someone is in one of the other three quadrants.

    Indeed, I think O’Neil destroys whatever value her piece might have had with the constant slurs against her fellow futurists—whose views, apparently, are not to be answered with counterarguments, but only with comments about how they were probably sex-starved white male teenagers. It’s as though I was reading an article on monetary policy that maybe contained some valuable points, but was also larded with remarks about the need to free the international banking system from the scheming hook-nosed Jews. Is it on me to ignore those comments?

    But it’s worse than that, because even if I do ignore the inflammatory comments, the rest of the piece never even reaches the point of asking whether visions of the future other than O’Neil’s might be true. They’re simply presumed to be false—and, for that reason, nothing more than windows into the (twisted, white, male) psychology of the individuals who advocate them. But this is profoundly lazy.

    If an infallible oracle told you that an AI would probably destroy the solar system in a few decades, surely you could be forgiven if your first followup question wasn’t about algorithmic biases, which might cause the AI to wipe out one ethnic group a half-second before it wiped out another one.

    Conversely, if the oracle told you that it was a realistic prospect to build nanotech that could cure all diseases and let everyone live for thousands of years, surely you could be forgiven for thinking that such a prospect might interest women, minorities, and jocks just as much as it would interest nerdy white men.

    Of course, we don’t have an oracle telling us those things—but nor do we have an oracle telling us their opposites. It’s up to all of us to think through the arguments and evidence, separating them from personalities as much as we can—just as O’Neil would’ve done with a problem in algebraic geometry.

  66. wolfgang Says:

    Reading (for the first time) about the owl/sowa vs Timothy Gowers feud (which seems to be quite one-sided) reminds me of the more famous Grundlagenstreit between Hilbert and Brouwer (which Einstein called a ‘war of mice’).

    I am not a mathematician and do not claim to understand what owl/sowa is complaining about, but I remember that during my times as student of physics, there was a bitter feud between competing math profs at my university and again nobody understood what it was all about outside of the two departments.

    It seems to me that the old saying applies in those cases: The main reason debates in academia are often so bitter is that the stakes are so low.

    Of course for the feuding parties this is far from the truth, because pride and ego are at stake …

  67. anon Says:

    Curiously, the field of genomics experienced a technological singularity of sorts in a 4.5-year interval from late 2004 through mid-2009. In that span, the capacity for DNA sequencing (measured in DNA molecules sequenced per machine per day) increased by a factor of 10^6 (from 10^2 to 10^8 molecules). By comparison, Moore’s law would yield ~32-fold. In fact, it took a couple of years for computational capacity to catch up. The rate of increase eventually tapered off, though that probably had more to do with changing economic incentives than with exhaustion of technological innovation. It was an exhilarating experience. Suddenly people were able to do experiments that weren’t conceivable a year before, etc.

  68. atreat Says:

    The most charitable reading I can come up with for Cathy is that she is essentially saying bah humbug to the whole enterprise of trying to predict future technological outcomes, and that we should instead focus on the current problems technology is causing for marginalized groups.

    That, and she is saying that it is largely white male privilege that allows the bad futurists to engage in speculative fancies about the future while paying insufficient attention to the problems of the present that technologies have caused for marginalized groups.

    She is clearly not really a futurist.

  69. aj Says:

    You should have sent this to Voevodsky when he was alive.

    The guy sowa does have a point, if the only point is the worry that Grothendieck-style mathematics will fall out of favor.

  70. Ryan O'Donnell Says:

    @Boaz: The Recursion/Fixed Point Theorem is the one *I* can never remember 🙂 Luckily, I’m not sure you ever need it for anything. You don’t need it for Godel.

    Here’s how I remember/think about Godel1 vs. Godel2:

    Turing’s 1st Theorem: There’s an undecidable language. (Proof: countability.)

    Turing’s 2nd Theorem: There’s an *interesting* undecidable statement, namely, the Halting Problem.

    Godel’s 1st Theorem: Assuming ZFC (or whatever) is consistent, there’s a statement that ZFC can’t prove.

    Godel’s 2nd Theorem: Assuming ZFC is consistent, there’s an *interesting* statement that ZFC can’t prove, namely, “ZFC is consistent”.

    As Scott mentioned, there is also —

    “Godel’s 0.5th Theorem”: Assuming ZFC is sound, there’s a statement ZFC can’t prove.

    — and this is an immediate consequence of Turing’s 2nd.

    Godel’s 1st takes slightly more work; 2 extra paragraphs. (Roughly, instead of “Now does D(D) halt or loop?” setting up the punchline, you use the setup “Now does ZFC have a proof of ‘D(D) halts’ or ‘D(D) loops’?”)

    The proof of Godel’s 1st also has an extra bonus over Godel’s 0.5th: it gives an *explicit* statement with no proof, whereas the proof of the 0.5th merely shows there *exists* a statement with no proof. (This is also like Turing1 vs. Turing2.)

    Also, this explicitness is useful: You get Godel’s 2nd Incompleteness Theorem as basically an immediate corollary.

    [Some of this “Easy proofs of Godel’s Theorems via Turing Machines” stuff was worked out by Scott, Amit Sahai, and others on this very blog 🙂 Also, for accuracy of credit, Godel’s 1st was only fully proved by Rosser, and Godel’s 2nd was independently proved by von Neumann.]
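    The D(D) diagonalization Ryan alludes to can even be phrased as running Python. Below is a sketch in which a claimed halting decider for zero-argument thunks is refuted (the particular candidate decider is just an illustration; any candidate fails the same way):

```python
def diagonalize(halts):
    """Given a claimed decider halts(f) -> bool for zero-argument thunks,
    build the thunk D on which halts must answer wrongly."""
    def D():
        if halts(D):
            while True:      # halts said "D halts", so D loops forever
                pass
        # halts said "D loops", so D halts (by returning)
    return D

# Refute, e.g., the decider that always answers "loops":
always_loops = lambda f: False
D = diagonalize(always_loops)
D()                          # returns at once, contradicting always_loops(D)
```

    A decider that answered "halts" on its own D would instead send D into the infinite loop, so every candidate is wrong on at least one input; the Gödel-flavored variant just replaces "does D(D) halt?" with "does ZFC prove 'D(D) halts'?".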

  71. quax Says:

    “If appeasement of those who hate you is doomed to fail, why bother even embarking on it?”

    Made my day. Your time is way too valuable to be wasted on this.

  72. A comment on the sowa versus Gowers affair | Noncommutative Analysis Says:

    […] This post is a reply to (part of) a post by Scott Aaronson. I got kind of heated up by his unfair portrayal of the blog “Stop Timothy Gowers!!!”, […]

  73. Orr Shalit Says:

    Dear Scott,
    I think you got the sowa/Gowers thing all wrong. I wrote a reply, which was ridiculously long, so I moved it to here:
    https://noncommutativeanalysis.wordpress.com/2017/10/13/a-comment-on-the-sowa-versus-gowers-affair/

    Here is one point important enough for me to paste it back into your comment section:

    “It is ironic that one of the good things that you (Scott) have to say about Gowers is that “He’s also been a leader in the fight to free academia from predatory publishers”. Google “predatory publishers”, I don’t think it means what you think it does. Indeed he played a creditable role as a leader in the boycott against Elsevier (about which I had doubts, I won’t go into that). But Gowers, in my opinion, abused his reputation and played a very dangerous role in actually vindicating predatory publishers, when he helped to set up Gold Open Access journals”.
    His virtues notwithstanding, for this he rightfully earned some criticism.

  74. Anonymous Says:

    #70:

    Godel’s 1st Theorem: Assuming ZFC (or whatever) is consistent, there’s a statement that ZFC can’t prove.

    Like 0=1? But that follows merely from applying the definition of consistency…

  75. Richard Gaylord Says:

    scott:
    i’ve been having an exchange with a friend of mine who is in philosophy of science concerning Godel’s Theorems. i sent him a quote “Goedel’s Incompleteness Theorems had the effect of nixing a logical/mathematical program of bootstrapping a foundation to mathematics by letting the system justify itself through its internal completeness and consistency. They don’t really have any practical applications outside the foundation of mathematics.” and i asked if that was true (i thought it was and said that it didn’t seem very useful since it didn’t tell you how to know in advance if you were trying to prove an unprovable statement). he responded that “I think they have applications to computer science.”. can you direct me to a blog entry of yours that addresses this issue or, if there isn’t one, could you address it? thanks.
    richard

  76. Scott Says:

    Anonymous #74: Ryan obviously meant a true statement! (He was just contrasting the different versions of Gödel’s Theorem, not trying to state any of them precisely.)

  77. Scott Says:

    Richard #75: Gödel’s Theorem was foundational for the entire field of CS—though less because of the statement itself, than because of the ideas that go into proving it. Gödel numbering anticipates programming, and the notion of data that can also be code. (When I teach Gödel’s theorem, I treat Gödel numbering as quaint and outdated, but that’s only because programming now already exists!) The self-reference anticipates recursion: programs able to call programs including themselves, and to treat their own code as yet more data. Finally, the notion of a minimum floor of expressive power (e.g., integer addition and multiplication, but not addition only) before a formal system starts being able to talk about itself, anticipates Turing-universality.
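
    To make the data-as-code point concrete, here is a toy Python sketch (my own illustration, not Gödel's exact scheme) of prime-power Gödel numbering: a finite sequence of symbol codes becomes a single integer, and is fully recoverable from it.

```python
def primes():
    """Yield 2, 3, 5, 7, ... by trial division (fine at toy sizes)."""
    n = 2
    while True:
        if all(n % p for p in range(2, int(n ** 0.5) + 1)):
            yield n
        n += 1

def godel_encode(symbols):
    """Encode a sequence of *positive* integers as the product p_i ** s_i."""
    n = 1
    for p, s in zip(primes(), symbols):
        n *= p ** s
    return n

def godel_decode(n):
    """Recover the exponent sequence by factoring out each prime in turn."""
    out, gen = [], primes()
    while n > 1:
        p, e = next(gen), 0
        while n % p == 0:
            n //= p
            e += 1
        out.append(e)
    return out

seq = [3, 1, 4, 1, 5]
assert godel_decode(godel_encode(seq)) == seq
```

    (Symbols must be coded as positive integers, since a zero exponent would end the decoding early.)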

    In terms of historical fact, we know that Turing came just a few years after Gödel and was directly inspired by the incompleteness theorem. I believe Church was as well. Post reached many of the same conclusions as both Gödel and Turing independently of them (while being too afraid to publish), which just further illustrates how entangled Gödel’s theorem is with computability.

    In terms of “direct applications”: you can, in fact, produce particular statements in almost any area of mathematics that are Gödel-undecidable, but you’re right that 85 years on, we still mostly lack the technology to prove Gödel-undecidability (if indeed it holds…) for the types of statements that practicing mathematicians would actually care about. The great logician Harvey Friedman has been laboring his whole career to try and fill this gap.

    And then, I dunno, Gödel’s theorem is always in the background when we’re discussing automated theorem-proving systems—though less as a “tool,” than as a sort of conceptual precondition for the entire discussion.

  78. Atreat Says:

    “And then, I dunno, Gödel’s theorem is always in the background when we’re discussing automated theorem-proving systems—though less as a “tool,” than as a sort of conceptual precondition for the entire discussion.”

    Are you talking about automated programs to search for proofs or programs to check proofs? Because in the latter, I don’t think Gödel’s theorem is all that relevant because the systems used to prove those theorems are not themselves Turing-complete. They are strongly normalizing == do not suffer from the halting problem.

  79. Scott Says:

    Jeremy #59 and Orr Shalit #72: OK, I finally read the infamous talk by Gowers about Deligne’s work—the talk that sowa found so world-historically offensive that it caused him to rename his blog “Stop Timothy Gowers! !!!”

    The talk is a masterpiece of clear mathematical exposition. Reading it, I—a mere theoretical computer science PhD—actually learned something about the context of Deligne’s great achievements, about the web of connections among the Ramanujan tau function, the Leech lattice, the number of integer points inside a hypersphere, topology over finite fields, the Riemann hypothesis over finite fields, etc. Had the talk instead been given by someone sowa approved of, I probably would’ve gotten nothing from it.

    Gowers makes it clear, at the beginning, that the task the prize committee tapped him for was not to justify Deligne’s prize to experts (which … don’t the experts already know he deserves it?), but instead to give a sense of Deligne’s achievements for a “general audience,” meaning mathematicians and scientists from other areas. Gowers was an inspired choice for that; there are so few other mathematicians on earth able to do this sort of thing, that one can pretty much name them (John Baez, Terry Tao…).

    There’s a particular irony for those who fret about whether Grothendieck-style algebraic geometry can “survive” the rising popularity of problem-centered mathematics, and of combinatorics and theoretical computer science: if you want your favorite kind of mathematics to remain popular, then an inspired popularizer like Gowers is one of the best allies you can hope for. You should find more like him.

    But I fear this will distract from a broader point: namely, even if Gowers had done a terrible job with this talk (which he didn’t, quite the opposite), it would still be hilariously nutty to base a whole paranoid ideology about “power structures in mathematics” on the question of who’d been chosen to popularize Deligne’s work on the occasion of the latter’s Abel prize. Deligne’s work is for eternity, and would survive an inadequate laudation. Does anyone remember, or care, who gave the speech presenting Einstein with his Nobel Prize?

  80. Scott Says:

    Atreat #78: Even for proof-checking software, it might be useful to understand why it is that adding more axioms will always allow more statements to be verified than you could verify before.

  81. The Problem With Gatekeepers Says:

    Nice seeing the issue of Godel’s incompleteness theorems -and their implications- discussed.

    The question was asked as to whether there are any interesting statements that are Godel-undecidable beyond “this statement cannot be proved” in second-order arithmetic. In case people missed it, two years ago the field of atomic physics produced an interesting one:

    http://www.nature.com/news/paradox-at-the-heart-of-mathematics-makes-physics-problem-unanswerable-1.18983

    I think we are just scratching the surface of the implications of these theorems. As far as I can tell, they are the first formal proofs that “metaphysics” is a real thing and that there are truths that will always be out of reach of “computing,” at least as we understand it today. For Godel to be overcome, a different type of mathematical proof/knowledge will have to be developed. So long as people go along with “start with a few axioms, then apply deduction rules,” undecidable Godel statements will exist.

    I always make a connection with Shannon’s capacity theorem. Shannon was able to show that no matter what you do in terms of encoding, there are limits to transmission rates in noisy channels that cannot be overcome by “tricky encoding”. He used the concept of typical sequences to show this.

    In general, I think both Godel and Shannon show that there are certain inalienable truths about the limits of human knowledge that cannot be “tricked away” by clever craftsmanship. That’s their true legacy as far as I am concerned.

  82. Jeremy Says:

    @Scott #79
    I am glad you liked Gowers’s talk, which as you say did a great job of setting out some of the prehistory around Deligne’s work. Of course Deligne’s work will “withstand” such laudation.

    Perhaps we can agree that something is lost when the popularizer does not understand the mathematics. It is, of course, a great credit to Gowers’ article that he knows and acknowledges what he does not know.

    Here’s to hoping that the Grothendieck school finds more people who can write articles like Carlos Simpson’s snippet from
    Alexandre Grothendieck: A Mathematical Portrait

    We need folks who can communicate to a broad mathematical audience without losing the substance of what was done.

  83. Scott Says:

    Gatekeepers #81: Yes, I blogged about that result two years ago; hope you enjoy my post.

    To clear up a possible confusion: their result was really about a problem—namely deciding whether a given translationally-invariant Hamiltonian with bounded local dimension is gapped or gapless—being uncomputable in Turing’s sense. It was only incidentally (i.e., as a corollary of that) about undecidability in Gödel’s sense. A lot of popular articles got this wrong.

    More to the point: if you wanted to get “Gödel-undecidability in atomic physics” through their result, you’d first need to build a Turing machine that searched for inconsistencies in set theory—and only then, define a hypothetical, extremely complicated new 2D material whose limiting behavior encoded the halting or non-halting of that Turing machine. I personally find it fascinating, but of course even a mathematical physicist could rightly shrug and say that’s not a question they would ever have come across in a billion years. We’re still a long way from showing that any “real” mathematical physics question (like, say, the Yang-Mills mass gap problem) is Gödel-undecidable.

  84. Boaz Barak Says:

    It might be worth noting that, as far as I know, the choice of the presenter for the Abel prize is made before the choice of who gets the prize. So it is inherently the case that the presenter is often not an expert in the area of the laureate, but rather someone who is broadly knowledgeable about mathematics at large.

    Ryan and Scott: would be interested in your comments on my presentation of Godel’s Incompleteness here http://www.introtcs.org/public/lec_09_godel.pdf

    I do make the effort to talk about statements involving integers and not programs and use the prime encoding, since I think it is a nice example of showing that uncomputability can pop up in unexpected areas. (It’s also a nice prelude to the Cook-Levin Theorem and all the NP completeness results where again we see we can encode computation in places where it does not immediately seem relevant.)

    However, if the proof is unnecessarily long then I would like to know about it.

  85. Scott Says:

    Incidentally, Jeremy #82: Deligne was around when I was a postdoc at IAS. I regret that I never really got to know him, but he happily interacted with Avi Wigderson and the other theoretical computer scientists about common interests like pseudorandomness, and came to CS theory talks. He struck me as an extremely kind and broad-minded guy, and it’s hard for me to imagine that he himself would’ve had a problem with Gowers’s exposition.

  86. Neel Krishnaswami Says:

    “In turn this notion is related to being able to assume that a function has access to its own code.”

    Surprisingly, this is false. I say “surprisingly”, because a lot of eminent complexity/algorithms people (not just Boaz, but also people like Lance Fortnow) seem to think this, and it’s wrong.

    The lesson of the lambda calculus is that “access to your own source code” is NOT necessary to compute a fixed point. This is because the lambda calculus can’t access the source code to its arguments, but it can compute fixed points!

    Concretely, given two lambda terms with the same IO behavior, it is impossible to write a third lambda term that can distinguish between them. This is a consequence of the fact that there are extensional models of the lambda calculus. If you could somehow distinguish extensionally equal terms, then the model construction would have to fail. So we know that the lambda calculus can’t tell apart programs which have different source code, but which are extensionally equal (for instance, λx.x and λx.(x+0)).

    So where does the perception that access to source code is somehow essential come from?

    The halting theorem and Goedel’s theorem (and even classical results like Cantor’s theorem asserting the non-isomorphism of a set and its powerset) are consequences of the Lawvere fixed point theorem, which says that for any cartesian closed category, if there is an epimorphic map e:A→(A⇒B) then every f:B→B has a fixed point. The proof is super slick in the lambda calculus, since the lambda calculus is the algebraic theory of Cartesian closed categories, and it lets you turn Lawvere’s original multi-page derivation of his theorem into something like five lines of routine calculation.

    So to apply this machinery to Turing machines, we just need to show that we can use them to form a category with the appropriate structure. The nice way to do this is to prove the snm and utm theorems, to show that Turing machines form a partial combinatory algebra (ie, you can implement the S and K combinators), and then use the bracket abstraction theorem to turn combinators into lambda terms. And now we’re off to the races!

    Note that the snm and utm theorems *do* talk about quotation and source code. Basically they are telling us that if we have a universal interpreter, we can simulate first-class functions by passing around source code and then interpreting it. And once you have first-class functions and self-application, you can implement fixed points.

    But you don’t NEED source code to implement functions! This is an important fact both theoretically and practically: the correctness of basically every compiler optimization ever rests upon the ability to change the source code of one part of a program without the rest of the program noticing.
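
    To make that last point vivid with a sketch of my own (nothing here is tied to any particular formal development): in Python, the Z combinator (the call-by-value cousin of Y) computes a recursive function with no self-reference and no quotation anywhere in sight.

```python
# Z combinator: a strict-evaluation fixed-point combinator.
# Z(f) behaves like f(Z(f)), yet nothing ever inspects source code.
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# A non-recursive "step" function: given a stand-in for the recursive
# call, it produces one layer of factorial.
fact_step = lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1)

fact = Z(fact_step)
assert fact(5) == 120
```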

  87. Scott Says:

    Neel #86: The lower part of my brain thinks of lambda calculus, type theory, category theory, and functional programming as all just different technologies for taking things that I understand, and restating them in a language where I no longer understand them.

    The higher part of my brain says that’s nonsense, that there’s obviously deep insight to be had in all these fields, given all the verifiably brilliant people who swear by them.

    But then I read something like your category-theoretic reformulation of Cantor’s diagonal proof, and the lower part of my brain just grins at the higher part and says “YOU SEE???” 😀

  88. The Problem With Gatekeepers Says:

    Scott #83

    I went through your entry -thanks for pointing it out- and I have to say I lack the expertise to understand the finest details of what you are saying.

    Setting that aside, doesn’t this paper deal a severe blow to https://en.wikipedia.org/wiki/Mechanism_(philosophy) ? The way I understand the overall paper is this: here is an example where being able to characterize the individual components of some material doesn’t let us make a prediction of a macro-property for the same material. It’s either that, or we don’t understand limits very well and taking a limit is not the right tool to compute macro-properties like the spectral gap.

  89. Alec Bondale Says:

    Prof. Ivanov essentially belongs to the Russian mathematical tradition, and you probably don’t understand its key point, which is the key point of the European academic tradition as a whole. It rests on disagreement and noncompliance with your opposer, as much as the academic tradition of the United States rests on agreement and compliance. The latter may seem nicer, but it ultimately leads to the degradation of knowledge. The very first sign of it is the ongoing upheaval towards content-free mathematics, manifested in the works of so-called ‘symplectic topologists’, ‘mirror symmetry’ theorists and, most remarkably, the adherents of the nLab and Jacob Lurie.

    One should not blame Americanism, of course, since the Americans are not guilty of European science having fallen upon their Princetons and Stanfords like a bolt from the blue after the Bolsheviks’ and Nazis’ rise to power. Of course they should have better partaken of its life-giving force, rather than trying to shove it into the Procrustean bed of their national mock civility, which goes back to the Robber Barons’ age. However, if they see no value in the scientific tradition, we the Europeans should not force them to keep it. To darkness comes, who comes from her.

  90. Ryan O'Donnell Says:

    Oh, whoops! Thanks Anonymous 74!! In Godel1 and Godel2 I meant to write “ZFC cannot prove *or disprove*”. In Godel0.5 I meant a “*true*” statement ZFC cannot prove, as Scott said.

  91. Scott Says:

    Alec #89: Strange as it feels to say anything “patriotic” in the age of Trump, I think the US has done pretty well in basic science for the last century or so, and disagreement and “noncompliance” are not unknown to us (have you been to, say, a physics seminar at Stanford or MIT? 🙂 ). And in any case, Timothy Gowers is not an American. And the combinatorics tradition that sowa so often denigrates is called “Hungarian.” And sowa actually held up Jacob Lurie, who is American, as a possible example of a theory-builder pursuing something deep and revolutionary (while averring that he didn’t know enough to judge, as I certainly don’t).

    So rather than indulging in national stereotypes, wouldn’t it be better to try to address things on their merits?

  92. Scott Says:

    Gatekeepers #88: No, the Cubitt et al. paper does not deal a severe blow, or any blow at all, to a mechanistic view of nature. That’s because you only see uncomputability when you look at the limit of an infinitely large lattice. With a finite lattice—i.e., the only kind that could fit in our universe—everything remains computable. Now, letting L→∞ is often a good approximation technique in condensed matter physics, but that’s all it is—and with the strange “computational matter” that Cubitt et al. construct, there’s an excellent and well-understood reason for the approximation to break down. And as I pointed out in my blog post, if you just wanted an example where something becomes uncomputable in an infinite limit, then we already had many, many examples simpler than Cubitt et al.’s: they just didn’t involve spectral gaps of translationally-invariant Hamiltonians.

    Incidentally, I’ll sometimes be talking to a journalist about quantum computing, and they’ll put forward some cool-sounding but wrong philosophical view about how QC works, and I’ll say, “no, that’s totally wrong—let me explain why.” And then after 10 minutes of happily nodding along, the journalist will say, “well, I didn’t understand the fine technical details of any of that, but it sounds like the high-level point that I should take away, as a layperson, is [my original wrong view].” So then I have to try and figure out a diplomatic way to say, “no, I wasn’t rattling off random technical details for no reason, I was trying to tell you why your original view was wrong—you just didn’t want to hear it!”

  93. wolfgang Says:

    @Scott

    Meanwhile I read a bit of owl/sowa and it seems to me that you have not addressed the main issue: you understand “explaining math … in a remarkably clear, friendly, and accessible way” as positive, while owl/sowa sees (applied) math as largely negative: it only helps the wealthy to become even more wealthy (stochastic calculus); encryption is bad, because it “prevent[s] access of the general public to all sorts of artistic and intellectual goods”; and of course nuclear weapons, etc.
    Explaining and spreading this kind of math is therefore immoral behavior.

    Only abstract mathematics with no application is morally justified, as long as the barbarian masses do not get it.
    Therefore Tim Gowers et al. are so dangerous and need to be stopped, because they will destroy what is pure and good about mathematics.

    Even you [an evil computer (!) scientist, contributing to the destruction of society with all that internet stuff] have to admit that his point of view (assuming I understood it correctly) is consistent …

  94. The Problem With Gatekeepers Says:

    Scott #92,

    Now it is clear. Your point is that there is no material in the universe with an infinite number of “cells” like the one they are proposing, and therefore any finite version of what they propose -even if it were to exist- would have a computable spectral gap. It is mostly that the approximation technique they are using breaks down. So in your view, correct me if I am wrong, it is the technique itself that is not appropriate for computing the spectral gap, not that there is anything inherently wrong -at least not for now- with quantum physics that would imply that it is not possible to compute macro properties of materials from analysis of the individual entities.

    So then my next point is this. Do you consider general relativity to be a theory that is somehow the “limit” of some other theory that explains the physics of the atom? We know quantum mechanics explains the physics of the atom, but it doesn’t explain the physics at the macro level. We know that general relativity explains things at the macro level but not at the micro level. Does the Cubitt et al. paper say anything about the possibility / impossibility for such a theory to exist, namely, a theory that while modeling well the micro level is able to model the macro level as a limit of itself when the number of atoms goes to infinity?

  95. Scott Says:

    wolfgang #93: But there’s plenty of concrete, problem-oriented math that’s arbitrarily useless, including for example almost all of what Erdös worked on, and even most of theoretical computer science (just don’t tell the funding agencies 🙂 ). So “the danger of being useful to someone” couldn’t possibly be sowa’s main objection to this kind of math.

    Having said that, I completely agree that if clarity of exposition is evil, then Tim Gowers is extremely evil, and I root for the triumph of evil as well.

  96. Scott Says:

    Gatekeepers #94: Yes, the modern view holds that GR is certainly a low-energy limit of something deeper, which would resolve (e.g.) the black hole singularities—presumably, a quantum theory of gravity. And to be consistent with the Bekenstein bound, that deeper theory might well replace continuous measurements of length and time with quantities that are discretized at the Planck scale.

  97. Neel Krishnaswami Says:

    But then I read something like your category-theoretic reformulation of Cantor’s diagonal proof, and the lower part of my brain just grins at the higher part and says “YOU SEE???”

    🙂 You do get something from this, though: Lawvere’s theorem is phrased as a positive existence statement asserting a fixed point exists (indeed, it explicitly constructs the fixed point), rather than a negative nonexistence one. This makes Cantor’s argument much less brain bending, IMO. It reduces it to the observation that the boolean not function doesn’t have a fixed point.

    Anyway, the CS-critical point is that the fixed point theorem *doesn’t* rely on quotation or access to source code — it relies on functional abstraction. This is good, because we know much more efficient ways to implement functions than passing around ASCII strings of program text.
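
    The same observation in a few lines of Python (a toy of my own, with a finite carrier set for concreteness): the diagonal built from “not” escapes any attempted enumeration, which is exactly Cantor.

```python
# Cantor via fixed points: for any f mapping elements of A to
# boolean-valued functions on A, the diagonal g(a) = not f(a)(a)
# disagrees with f(a) at the point a itself, so f can't be surjective.
def diagonal(f):
    return lambda a: not f(a)(a)

A = range(8)
# An arbitrary attempted enumeration of subsets, as indicator functions.
f = lambda a: (lambda x: (x + a) % 3 == 0)

g = diagonal(f)
assert all(g(a) != f(a)(a) for a in A)  # g differs from every f(a) at a
```

    The construction works precisely because Boolean “not” has no fixed point; swap in a function that does have one, and the argument collapses.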

  98. The Problem With Gatekeepers Says:

    Scott #96

    Thanks for clarifying.

    Scott #91

    I think Alec has a valid point. At the risk of giving out more about me than I want to, I have been educated in both traditions -I did what Americans call undergraduate in Europe and my graduate school/PhD in the US at one of the country’s academic powerhouses- and I have to agree that the approach to academics is somewhat different in the US vs Europe.

    Nobody disputes that what we could call “Big Science” had a better run in the US during the XX-th century, particularly during and after World War II. Yet, despite the rankings that put American universities at the top in 2017 using different ranking criteria -the Shanghai Ranking and the Times Higher Education’s are quite different in methodology but still put American universities on top- American academia has yet to produce something akin to Einstein’s General Relativity, Godel’s Incompleteness theorems or Turing’s theory of computation. I could hardly label John von Neumann’s work “American”. The closest I can think of is Shannon’s Information Theory, but as useful as the theory has been for designing communications systems, it seems to me it doesn’t belong in the same league as the others in the grand scheme of things.

    If string theory were to ever produce falsifiable predictions that could be verified via the LHC or similar experiments, American academia would have proved itself to be on par with the kind of theories produced by European academics. But we are not there. And I agree that there is an insidious “agreement and compliance” bias -even at the places you mention- that negatively impacts the production of new revolutionary knowledge.

    One of the United States’ most glaring contradictions is that on the one hand it has the strongest protections in the free world for free speech against government intervention, while on the other hand -something that Alexis de Tocqueville already noticed almost 200 years ago- outside the action of the government, there is a strong bias for compliance and suppression of dissent. That’s how you end up, for example, with academia that is overwhelmingly liberal, a Silicon Valley that is overwhelmingly leftist libertarian, and industries like oil or defense that are overwhelmingly right wing. Americans seem to have little tolerance for dissent in their professional and private realms.

  99. Scott Says:

    Gatekeepers #98: In the last (say) 70 years, would you say that Russia or Europe produced any scientific discoveries comparable to general relativity or quantum mechanics or Turing computation? If not, then consider the possibility that we’re not talking about a European vs. American phenomenon at all.

    E.g., maybe it’s simply a case of a huge amount of low-hanging fruit in the early 20th century, which got picked, forcing the equally-talented scientists of today to jump higher. (The case for that seems especially strong in fundamental physics and logic, which are the two main fields we’ve been talking about.) Or maybe we’re really not as smart as our forbears, or we don’t think as boldly, but it’s because of similar forces acting everywhere in the world. Or maybe there are recent discoveries that future generations will rank alongside relativity and Turing machines; we just don’t realize yet how important they are. It’s a huge question, worthy of many blog posts of its own—I don’t think the answer is obvious.

  100. The Problem With Gatekeepers Says:

    Scott #99

    I agree in part and disagree in part. I agree that the Europeans have not produced anything either of that magnitude in the last 70-80 years so we could be under a worldwide phenomenon of lack of creativity.

    Where I disagree is with the notion that Einstein’s, Godel’s and Turing’s contributions are low-hanging fruit (I would add Heisenberg’s, Schrodinger’s and Planck’s to these to make sure I include quantum physics in the same bag of revolutionary discoveries). To me these contributions are of the highest order. Take GR for example. Its verification has kept physicists busy for 100 years and counting -this year’s Nobel prize to the LIGO experiment is proof. I am more of the opinion that the reward system of “publish or perish”, “get famous by the age of 40” and the like discourages revolutionary work and encourages what I would call true “low-hanging fruit”: make sure you can list in your resumé a series of incremental contributions, since that is how hiring and promotion decisions are made. Higgs made an observation along these lines. Feynman -to mention the only American physicist who regularly makes it to the top 10 when the best physicists are asked to name the top physicists of all time- also had a sane distaste for that system. I don’t know anybody in today’s academia who would give up their membership in the National Academy of Sciences to make a moral point as Feynman did. So in my opinion, the situation in this regard is the opposite: we reward low-hanging fruit at the expense of the real deal. I agree, though, that it affects Europeans and Americans alike.

  101. mjgeddes Says:

    But Scott,

    You must admit that some of these ‘pop-rationalists’ can be very, very annoying. If someone has either real-world expertise or real-world achievements (Hanson, for instance), then that’s fine. But when others with no expertise or real-world achievements come along, set themselves up as gurus on social media, parrot back ideas they read in pop-sci books (which they probably don’t even understand on anything but the most superficial level) and make themselves out to be awesome polymaths, can you see how that could be so very annoying?

    Folks with both real-world expertise AND real-world achievements have already informed the pop-rationalists that Bayesianism has serious limits. Both David Deutsch and yourself (Scott) have clearly explained the limits and problems of Bayesianism.

    Numerous other top-class cog-sci researchers have expressed serious reservations that the ‘statistical’/Bayesian/machine learning) approach can ever get to real AGI. For example Josh Tenenbaum, Douglas Hofstadter and many more.

    The ideas behind machine learning are 30-50 years old, and all that changed was a lot of big data and big hardware. Whilst the pattern-recognition/statistical approach is clearly a major component of effective cognition, it’s not the holy grail that’s going to get us to AGI. It’s knowledge representation and upper ontology that need to be investigated here to crack AGI, not stats.

    As a demonstration of my super-clicking prowess, I will finally explain consciousness in plain English:

    What is consciousness? *super-click* it’s a ubiquitous thermodynamic property, the flow of time itself, defined in terms of entropy. There are two types of entropy, the usual informational kind, and a new as yet unidentified kind. So there are *two* arrows of time. Then the ratio of the two types of entropy dissipation yields the rate of ‘time-flow’ , which defines the level of consciousness present. High time-flow, high consciousness.

    The IIT theory was half-right- consciousness involves something getting integrated, but it’s not *information* per se that’s getting integrated, but rather *dynamical processes* (and that needs the *new* generalized form of entropy to properly define).

    Consciousness is temporal coordination. There are many periodic processes coordinating their activity in a certain way. Which way? The answer is that the temporal pattern has to form a *temporal fractal*, such that the pattern of events looks the same (or similar) across all time-scales. So consciousness is a *quasi-abstract* property – it’s extended in time (it has temporal extension), but not space (it has no spatial extension).

    I eat PhD’s for breakfast. Pop-rationalists are little match for the power of genuine super-clicking insight 😀

  102. AJ Says:

    So ask sowa if he dislikes diophantine geometry which has both abstract stuff and analysis.

  103. Eric Van Nevel Says:

    Man, things are getting rather meta over on SneerClub with a couple people sort of agreeing with Scott, and then some kind of crazy criticism-off of people criticizing each other without contributing anything positive and new for criticizing each other without contributing anything positive and new.
    https://www.reddit.com/r/SneerClub/comments/7633el/not_the_critic_who_counts/

  104. Scott Says:

    mjgeddes #101: Here’s what I think is the key. Rather than asking myself whether people are being annoying by speaking above their station or whatever, I try to focus on whether the ideas they’re offering are good or bad, and presented well or poorly.

    Bayes’ Theorem is ridiculously simple. If you were to make a YouTube video explaining how to use it in everyday life, any mathematical knowledge you possessed beyond a smart 8th-grader’s would be completely wasted, and might even get in the way. What you’d mostly need is presentation skill, pizzazz, and ability to think and speak clearly.

    So if you told me people with no “real-world qualifications” were making YouTube videos about Bayes’ theorem, my only question would be: “so, for the target audience, are these videos good or bad, helpful or not?” (I honestly don’t know; watching lectures on YouTube is not my thing.)

    Your comment also seems to conflate extremely different questions. A person could believe, as I do, that Bayes’ Theorem is not some universal magic key to rationality or AI, that it’s just one tool among others—but also believe that it’s a tool that would probably help the billions of people who make decisions about life and death, guilt and innocence, medical treatments, career choices, etc. etc. without even an intuitive understanding of base rates (as conclusively shown by, e.g., the Kahneman-Tversky experiments).
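
    To illustrate with numbers I’m making up purely for the example: a test with 99% sensitivity and a 5% false-positive rate, for a condition with a 1-in-1000 base rate, leaves a positive result meaning only about a 2% chance of actually having the condition.

```python
def posterior(prior, sensitivity, false_positive_rate):
    """P(condition | positive test), straight from Bayes' theorem."""
    p_positive = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / p_positive

# 1-in-1000 base rate, 99% sensitivity, 5% false-positive rate:
print(f"{posterior(0.001, 0.99, 0.05):.1%}")  # prints 1.9%
```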

  105. atreat Says:

    mjgeddes, you baffle me. Are your consciousness insights and your insights into “reality has three levels blah blah” offered then as some sort of highly advanced alien sarcasm technology? Like if aliens landed and introduced some new technology based on sarcasm that we hopelessly primitive apes could only understand as “sarcastic magic”, I’d imagine your prose is what this could look like…

  106. atreat Says:

    What this all amounts to (SneerClub) is whether criticism or conversation is offered in good faith. The people writing on SneerClub are not interested in a meeting of the minds or an actual transmission of information between the rationalist community and their leftist/postmodernist community.

    When you don’t have good faith, then beneficial communication is not really possible. I’m sure it is cathartic for them to sneer and know that others agree. Malevolent tribalist instincts are powerfully hard to fight.

  107. atreat Says:

    More interesting than SneerClub: https://arxiv.org/pdf/1709.02874.pdf

  108. NN Says:

    Ryan #90: Gödel2 doesn’t rule out the possibility that ZFC is consistent and proves its own inconsistency.

  109. JimV Says:

    If you tend to believe as I do, that the real answer to life, the universe, and everything is evolution: that human thinking itself consists of trial and error, selection criteria, and memory, as does biological evolution; then “not the critic who counts” seems an overstatement. You need good selection criteria – good critics – to make evolutionary progress. I would agree that the hardest work is done by those who do the trials, but “critics don’t count” goes too far. Some critics don’t count, perhaps.

    Which of course mainly just illustrates that the title, or headline, of an essay may not accurately sum up the essay.

  110. Scott Says:

    atreat #106: You hit it on the head.

    I confess that I went back to SneerClub, wondering what counterarguments they would offer to this post. I now feel dirty for having done so.

    The height of “intellectual discourse” on the thread is: when a CS student attempts to argue with the sneerers, one of them (upvoted by many others) repeats back the student’s comments in a “nerd voice,” or rather uses text markers to simulate having done so.

    The user named “queerbees” is clearly the “thought leader” among the sneerers, a depressingly low distinction. This user’s response to a request for examples of nontrivial insights from the “science studies” scholars was simply, “I don’t need to justify myself to you,” and to wonder whether I was the asker (no).

    User “queerbees” also asserts that the use of the phrase “ideological Turing Test” is proof that I have “zero commitment to the conceptual rigors of CS.” Well, firstly, I don’t think I’ve ever used that phrase in my life, but secondly, what’s wrong with it? The Turing Test, unlike the Turing Machine, isn’t a concept of “rigorous” CS anyhow, and why shouldn’t one transplant that well-known philosophical concept to other domains?

    But I’d like to make a broader point about SneerClub. In his famous “Untitled” post, Scott Alexander mused:

      …much the same people who called us “gross” and “fat” and “loser” in high school are calling us “gross” and “misogynist” and “entitled” now, and for much the same reasons.

    I’ve made the same observation: that, despite its veneer of social justice and concern for the downtrodden, the bullying of nerds that occurs online bears striking similarities to the bullying that occurs in high school, and is probably even carried out in many cases by the same people.

    Some would no doubt dismiss that as a paranoid fantasy: clearly a hypermasculine quarterback, stuffing the math club kids into lockers, is universes away from some blue-haired gender studies scholar sitting in a cafe, even if they both happen to hate STEM nerds? 🙂 Well, if that’s what you think, then read SneerClub and see for yourself.

    This is a forum where at least some of the sneerers let their hair down—where, under cloak of anonymity, they abandon the pretense of anything even vaguely resembling thoughtful discourse, and let themselves sink to the “I know you are but what am I?” level of argument. But in so doing, they incriminate themselves, as hating nerds in much the same terms and language with which the most brutal thug hates them. It reminds me of how the anti-Semitism of the far left is not a fundamentally different phenomenon from the anti-Semitism of the far right: even those who go off in opposite directions can meet at infinity, united by a common hatred.

  111. Scott Says:

    atreat #107: Alas, ‘t Hooft still doesn’t understand the point of Bell’s theorem. Can’t get around it, no matter how many pages he writes. (For more, see his lengthy recent Facebook exchange with Tim Maudlin.)

  112. gentzen Says:

    I don’t understand type theory, I don’t understand category theory, I don’t understand functional programming, I don’t understand lambda calculus, I don’t understand any part of math or CS that’s about developing higher-level formalisms for things that we already knew how to say in more concrete ways.

    Higher order logic and type theory are significantly older than category theory, functional programming, and lambda calculus. The problem with them is that they can mean many different things, not that they are inherently difficult to understand (as category theory is). Neither problem affects functional programming or lambda calculus. Additionally, both are really useful, especially given the limited effort required to understand them. Other commenters already defended lambda calculus, so let me give an exposition of functional programming (with links to additional resources).

    Around the year 2002, I came across The Language Guide, which had been created as part of a CIS 400 Programming Languages course. It contained Lisp, Miranda, and Scheme as examples of functional programming languages. Especially the Miranda example programs looked just beautiful:

    sort [] = []
    sort (a:x) = sort [b | b <- x; b <= a]
                 ++ [a] ++ sort [b | b <- x; b > a]

    fac 0 = 1
    fac (n+1) = (n+1)*fac n

    ack 0 n = n+1
    ack (m+1) 0 = ack m 1
    ack (m+1) (n+1) = ack m (ack (m+1) n)

    My appetite was whetted, even though Miranda was dead, and I didn’t know about Haskell yet. Five years later, I found out that FFTW used OCaml to generate the fast part of its source code. OCaml seemed similar to Miranda, but fully alive, and it was a pleasant experience to work through its tutorial. Let me quote from the chapter Functional Programming:

    We’ve got quite far into the tutorial, yet we haven’t really considered functional programming. All of the features given so far – rich data types, pattern matching, type inference, nested functions – you could imagine could exist in a kind of “super C” language. … In fact my argument is going to be that the reason that functional languages are so great is not because of functional programming, but because we’ve been stuck with C-like languages for years and in the meantime the cutting edge of programming has moved on considerably. … While we were being careful to free all our mallocs, there have been languages with garbage collectors able to outperform hand-coding since the 80s.

    Well, after that I’d better tell you what functional programming is anyhow.

    The basic, and not very enlightening definition is this: in a functional language, functions are first-class citizens.

    So we finally learned the definition of functional programming. But what is the crucial feature missing from C? After all, even in C you can pass functions as arguments to other functions.

    Closures are functions which carry around some of the “environment” in which they were defined. In particular, a closure can reference variables which were available at the point of its definition.
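
    The capture the quote describes is not specific to OCaml; here is a minimal illustration in Python (chosen only for brevity; `make_counter` is a made-up example, not from the tutorial):

```python
# A closure carries part of the environment in which it was defined:
# each counter returned below captures its own `count` variable.

def make_counter(start=0):
    count = start
    def counter():
        nonlocal count
        count += 1
        return count
    return counter

c1 = make_counter()
c2 = make_counter(100)
print(c1(), c1(), c2())  # prints: 1 2 101
```

    This is exactly what a plain C function pointer cannot do: it has no private, persistent environment attached to it.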

    But what has any of this to do with type theory and higher-level formalisms? There are polymorphic types in OCaml like

    type 'a binary_tree =
      | Leaf of 'a
      | Tree of 'a binary_tree * 'a binary_tree

    taken from the chapter Data Types and Matching. The 'a here is a type parameter. It should be obvious why it is desirable to declare a data structure like binary_tree independently of the type of objects which will be stored in that structure. We actually also see sum types (A | B) and product types (A * B) in the declaration of binary_tree.
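
    For comparison, the same sum-of-products shape can be approximated in Python with dataclasses and a Union; this is only an illustrative sketch (the `leaves` helper is made up), not anything from the OCaml tutorial:

```python
# A polymorphic binary tree as a sum type (Leaf | Tree), where
# Tree is a product of two subtrees; mirrors the OCaml declaration.
from dataclasses import dataclass
from typing import Generic, TypeVar, Union

A = TypeVar("A")

@dataclass
class Leaf(Generic[A]):
    value: A

@dataclass
class Tree(Generic[A]):
    left: "BinaryTree"
    right: "BinaryTree"

BinaryTree = Union[Leaf[A], Tree[A]]

def leaves(t):
    """Collect leaf values by matching on the two constructors."""
    if isinstance(t, Leaf):
        return [t.value]
    return leaves(t.left) + leaves(t.right)

print(leaves(Tree(Leaf(1), Tree(Leaf(2), Leaf(3)))))  # prints [1, 2, 3]
```

    The isinstance dispatch is a poor man's pattern matching, but the sum/product structure carries over directly.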

    But for diving deeper into the mysteries of polymorphic and algebraic data types, let us continue with the chapter Making Our Own Types and Typeclasses from the Learn You a Haskell for Great Good! tutorial. If you don’t want to install Haskell on your computer just for working through that chapter, here is a website running GHCi online.

    data Maybe a = Nothing | Just a

    Inspecting the value constructors Nothing and Just via :type:

    ghci> :type Just "Haha"
    Just "Haha" :: Maybe [Char]
    ghci> :type Just 84
    Just 84 :: Num a => Maybe a
    ghci> :type Nothing
    Nothing :: Maybe a
    ghci> :type Just
    Just :: a -> Maybe a

    But Maybe itself is a type constructor: it takes a type and produces another type. The last section Kinds and some type-foo of that chapter talks about that stuff. Use :kind:

    ghci> :kind Int
    Int :: *
    ghci> :kind Maybe
    Maybe :: * -> *

    Don’t worry if this is confusing. The section also says

    You don’t really have to read this section to continue on your magical Haskell quest and if you don’t understand it, don’t worry about it. However, getting this will give you a very thorough understanding of the type system.

    If you want even more (type) theory than that, then the website of Robert Harper should be a treasure trove (without getting lost in higher-order abstractions for their own sake). Especially the abridged preview edition of his Practical Foundations for Programming Languages available there contains deep material, like System T (well known as Gödel’s T), System F (a variant of Girard’s System F), and much more.

    But let me finish my appetiser on a more practical note with a link to The Programming Languages Zoo by Andrej Bauer and Matija Pretnar. Here you can see OCaml in normal use. It demonstrates various concepts and techniques used in programming language design and implementation.

  113. The Problem With Gatekeepers Says:

    Scott,

    First of all good luck with today’s meetup. I found the reference to the Alexis de Tocqueville warning I had referred to earlier,

    http://www.learnliberty.org/blog/resist-the-pressure-to-conform-tocquevilles-warning/

    “In the United States, the majority takes charge of furnishing individuals with a host of ready-made opinions, and it thus relieves them of the obligation to form their own. There are a great number of theories on matters of philosophy, morality, or politics that everyone thus adopts without examination, on the faith of the public”

    Thankfully, the United States is not only that. I believe that what keeps the country going, despite that insidious phenomenon, is another of America’s features since the time of the Mayflower: people vote with their feet free from government “interference”. So if a particular entity becomes “intellectually and morally intolerable” people form their own entities. A while back you discussed the phenomenon of conservatives creating their own parallel institutions to those that are currently dominated by liberals. We are very early in that process and it is very difficult to predict whether conservatives will succeed in all areas. In some areas, I can definitely see already some success. Professional schools (law, medicine and business) with a conservative bent -think for instance Pepperdine- are already producing professionals that give those educated at the liberal schools a run for their money.

    It’s the United States dynamism that keeps the phenomenon of “bias towards compliance and conformity” at bay.

  114. Joshua Zelinsky Says:

    Completely off topic,

    I was recently at a talk by Jack Lutz about using notions from Kolmogorov complexity to prove statements about the Hausdorff dimensions of fractals. One of the questions I asked was how his constructions require at certain points looking at certain things with respect to all oracles, and whether anyone had looked at the constructions restricted to some set of natural computable oracles (say PSPACE, or NP oracles) and then at the resources involved in some way. He said that it was further from what he thought about, but that he had not too long ago talked to you and Lance Fortnow about some aspects of what he was doing. So, is anyone trying to move this sort of thing from a more computability/Kolmogorov approach to a version that cares about computational complexity?

    (And if anyone cares when reading this, I have a stereotypical nerd voice, so you don’t even need someone to repeat it separately.)

  115. Joshua Zelinsky Says:

    Scott #110,

    The “nerd voice” thing is interesting at a different level, in that it does say something when one thinks that that is apparently a reasonable or productive reply, although one should note that Elon Musk just called someone a nerd https://www.reddit.com/r/space/comments/76e79c/i_am_elon_musk_ask_me_anything_about_bfr/#dodcpv7 so maybe sometimes it is?

    More seriously, there are a handful of comments in the SneerClub post where I’d be really interested in reading more: the comment that the other Scott A doesn’t understand Marxism and related ideas seems interesting and I’d love to see it expanded on, but I doubt it will be. In that regard, it is very similar to the name-dropping of the science studies people (which is incidentally an area where I could imagine there might be some very interesting work, but I haven’t seen any evidence for that).

  116. Attila Smith Says:

    @Scott #99: “In the last (say) 70 years, would you say that Russia or Europe produced any scientific discoveries comparable to general relativity or quantum mechanics or Turing computation?”
    Yes, certainly: Cartan/Serre sheaf theory in analytic and algebraic geometry, Grothendieck’s scheme theory+étale cohomology+algebraic fundamental group+motives, Deligne’s mixed Hodge theory and solution of Weil’s conjecture, Thom’s cobordism, Hörmander/Lagrange linear partial differential equation theory, Atiyah/Grothendieck/Hirzebruch K-theory,…
    Shall I mention Faltings, Merkurjev, Perelman, Suslin and Wiles?

    As a controversial statement, I think Grothendieck’s contribution is more original, difficult and impressive than Einstein’s.
    One indication is that many graduate students have a pretty good understanding of Special or General Relativity (even I who am not a physicist feel that I have a tolerable understanding of these fields, but maybe I suffer from delusions) whereas many professional specialists in algebraic geometry definitely don’t know the entire content of Grothendieck’s Éléments de Géométrie Algébrique (EGA), let alone that of his Séminaire de Géométrie Algébrique (SGA). This can be empirically checked by browsing the questions on MathOverflow.

  117. GASARCH Says:

    Response to Stop Timothy Gowers: The history of math shows that pure math, applied math, theory-builders, problem-solvers, problem-makers, all contribute in a great nexus without anyone having the high moral ground. And recently
    a) Lutz’s work, as mentioned above by Joshua (it’s Jack and Neil Lutz actually; there will be an open problems column by Neil on this in SIGACT News soon, and I am the SIGACT News open problems column editor), is applying `computer science’ to `pure math’
    b) Work on the Unique Games Conjecture has led to solutions of problems in `pure math’. See

    https://www.simonsfoundation.org/2011/10/06/approximately-hard-the-unique-games-conjecture/

  118. Scott Says:

    Attila #116: So the impressiveness of a scientific contribution can be measured by how few people understand it? If so, then yes, I agree, K-theory and sheaves clobber Newton, Darwin, and Einstein.

  119. The Problem With Gatekeepers Says:

    Attila Smith #116

    I am totally ignorant about algebraic geometry so I am unable to comment on how Grothendieck’s work stacks up in the grand scheme of things of revolutionary knowledge.

    Since you mention Wiles and Perelman -two of the people I have mentioned a few times who did ground-breaking work by not following the institutional rules in mathematics- I have to say that Fermat’s Last Theorem is a conjecture of the XVII-th century, whereas the Poincaré Conjecture -the predecessor of Thurston’s geometrization conjecture that Perelman proved- is from the early XX-th century. Both of them did work that I consider a breakthrough, making connections from XX-th century mathematics -such as the theory of modular forms and the Ricci flow- to those problems, but they were still working on problems stated in a different time.

    Let me offer a couple of possible explanations for what I agree to be a creativity crisis that affects both the West and, by extension, the world -given that the West is still the world leader in mathematics and natural science. One possible explanation is that the horrors of WWII caused a crisis of confidence in the West as a whole, and that the educational effort that followed, aimed at preventing anything like that from ever happening again, also triggered, as a side effect, a lack of appetite for thinking about big things, thus putting in place systems that reward the immediate and incremental at the expense of the big and revolutionary.

    Another theory, that I heard a while ago but I am not sure how valid it is, is that it is all Enrico Fermi’s fault. The reason being this thing Wikipedia states about him https://en.wikipedia.org/wiki/Enrico_Fermi#Impact_and_legacy

    “He disliked complicated theories, and while he had great mathematical ability, he would never use it when the job could be done much more simply. He was famous for getting quick and accurate answers to problems that would stump other people. Later on, his method of getting approximate and quick answers through back-of-the-envelope calculations became informally known as the “Fermi method”, and is widely taught.”

    When I heard about this, it was also stated that prior to Fermi’s influence, leading thinkers were more prone to write each other letters for a long time arguing their proposed theories whereas Fermi focused on the immediate and pragmatic.

    Not sure which one it is, but it is very obvious to me that if we compare the last 70-80 years to the preceding 70-80 years there is a clear difference in the magnitude of the theories produced by each period.

  120. Gil Kalai Says:

    Teddy Roosevelt was apparently a great guy with a lot of achievements that no critic can take away. But I hope that people who read this particular quote realize that it is a sort of context-free generic pseudo-romantic anti-critical bullshit that both failed and successful leaders often make (and usually the faces marred by dust and sweat and blood, are faces of others).

  121. Attila Smith Says:

    Dear Scott, I’m sorry that you are caricaturing my position by your sarcasm: this is not in line with the scientific, tolerant, open to dialogue persona you are trying to project. And it is particularly ironic after your rant contrasting the negativity of sowa with the positivity of Gowers: who is which in our exchange? That said, revolutionary new scientific contributions are necessarily difficult to understand, else many would have thought of them before. And it is obvious that you can’t compare the difficulty of theories separated by several centuries. Arguably (but not provably) the number of people who understood Newton’s or Darwin’s theories in their time was not larger than the number understanding étale cohomology when it was produced, even correcting for the very different structure of demography, education and dissemination of knowledge in the related eras. Finally: even if you find my position laughable what are your arguments for claiming that Grothendieck’s (European, less than 70 years ago) contributions are not “comparable to general relativity or quantum theory or Turing computation” ?

  122. Scott Says:

    Attila #121: No, you’re still being more negative than I am!

    What was funny about your comment was that, of your long list of “important scientific discoveries of the last 70-80 years,” every last one of them was not only in pure math, but the kinds of pure math that sowa likes—as if to say that biology, chemistry, physics, astronomy, CS, etc., or even applied math like information theory, aren’t even worth talking about.

    More importantly: you confuse the difficulty of discovering a revolutionary theory, with the difficulty of recognizing it once discovered (in other words, “P” and “NP”). Newton’s and Darwin’s discoveries were immediately recognized as important by many smart people, but that doesn’t mean that any of those other smart people could’ve made the discoveries themselves.

    Grothendieck’s work, from what little I know, was clearly one of the great achievements of 20th-century mathematics; I don’t think there’s any dispute on that point.

    For science as a whole, though, one typically asks for something more: a change to our vision of what the physical universe consists of, of basic concepts like space and time and chance and causation, of humankind’s place in the universe, of the limits of human knowledge, or other things that a child might ask about; or the enabling of a new technology that changes the basic conditions of civilization. In 20th-century mathematics, arguably only the work of Gödel, Turing, and Shannon really rises to that level. Or, I’m curious: do you think Grothendieck’s work, or any of the other work you mentioned, does as well? If so, then on what ground?

  123. Aspect Says:

    I found this blog a few years ago. It was my second undergrad year, I think. One of the things that impressed me the most is how you were willing to address pretty much anyone who had something to ask you. Even if the person was misguided or ignorant you always seemed honest and polite when dealing with them.
    “This guy is so successful at what he does and yet he is still humble enough and willing to engage random people on the internet.”
    I’ve always appreciated that. You’ve probably thought about these things already, but cases like the SneerClub are the ones where your way of dealing with people might not work so well. That is, you’re giving them too much credit/attention. As you already observed, you don’t see people like them posting interesting blog articles or engaging in thoughtful discussion about subjects they are interested in. Their time is usually spent on finding ways to criticize others or on being mad at people on the internet in general. Good for them. Maybe that’s how they let off steam if they’re stressed out. However, since they haven’t demonstrated the ability to say anything interesting nor the “good faith” to understand your positions and/or challenge them on their own merits, you can safely ignore them and 99% of the time you’re not going to be losing much.

  124. Gil Kalai Says:

    Fortunately, there is a nice article by Terry Tao called “What is good mathematics” which famously contains the following footnote: “A related difficulty is that, with the notable exception of mathematical rigour, most of the above qualities are somewhat subjective, and contain some inherent imprecision or uncertainty. We thank Gil Kalai for emphasizing this point.” Indeed while perhaps I was not the first to note that judgements of this kind are subjective, in the land of the nerds it is useful to emphasize it from time to time.

    Having said that, I would suppose that most mathematicians would regard the developments of algebraic topology, 20th century geometry, representation theory, and algebraic geometry (and other things) as rising at least to the level of the works of Gödel/Turing/Shannon (or above them), both in terms of intrinsic importance and also in terms of our physical universe (although different mathematicians will assign a different role to our peculiar, arbitrary universe).

  125. gentzen Says:

    The Problem With Gatekeepers #119: Are you really sure that it is not rather a perception bias, as indicated by Scott #99?

    Or maybe there are recent discoveries that future generations will rank alongside relativity and Turing machines; we just don’t realize yet how important they are.

    Have you ever heard of the term second-price sealed-bid auction or of reverse game theory? I learned about it only recently from a lecture by Christos Papadimitriou. The 2007 Nobel prize in economic sciences was awarded for having laid the foundations of that theory, so it was my own fault that I didn’t know about it before. I knew about some impossibility results similar in spirit to Arrow’s impossibility theorem, but this was the first positive result in the area that I ever heard of.

    However, the reason why I watched that lecture in the first place gives even more evidence for the possibility of perception bias. I personally never understood how evolution is able to achieve the successes of nature (from a computational complexity point of view). Additionally, in my personal experience, simulated annealing was a nice and easy-to-use optimiser, while genetic algorithms were not. Hence I was naturally interested to watch a lecture about computation and evolution by Christos Papadimitriou. My joy was unbelievable when he said at 28:58 that “simulated annealing works fine” and shortly later “however, genetic algorithms do not work”! And he spelled out exactly my personal observation, as a practical observation and not as some theoretical construction. And later he came up with some real explanation for the role of sex and why evolution might actually work. The important thing for me was that he dared to admit that we might not understand evolution after all. This is what motivated me to watch more of his lectures.
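
    Since the contrast with genetic algorithms comes up here, a minimal simulated-annealing sketch in Python may show why the method counts as nice and easy to use; the objective, parameters, and cooling schedule are all invented for illustration:

```python
# Minimal simulated annealing: one loop, one cooling schedule.
# Objective and all parameters are invented for illustration.
import math, random

def anneal(f, x0, steps=20000, t0=1.0, seed=0):
    rng = random.Random(seed)
    x = best = x0
    for k in range(steps):
        t = t0 * (1 - k / steps) + 1e-9   # linear cooling
        cand = x + rng.gauss(0, 0.5)      # random local proposal
        delta = f(cand) - f(x)
        # Always accept improvements; accept uphill moves with
        # probability exp(-delta / t), which shrinks as t cools.
        if delta < 0 or rng.random() < math.exp(-delta / t):
            x = cand
        if f(x) < f(best):
            best = x
    return best

# A bumpy objective: a parabola plus ripples, global minimum near x = 2.2.
f = lambda x: (x - 2) ** 2 + math.sin(5 * x)
best = anneal(f, x0=-5.0)
print("best x found:", round(best, 2))
```

    Escaping the local ripples here takes essentially no tuning, which matches the “easy to use” experience described above.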

  126. Scott Says:

    Gil #120: I don’t endorse everything Teddy Roosevelt said or did, and I probably don’t even fully endorse that particular quote. But the quote is perfectly clear about the “dust and sweat and blood” being on the doer’s own face, not other people’s.

  127. Scott Says:

    Aspect #123: Thanks so much. To quote the early xkcd—in a different but very likely related context—I indeed feel like “my normal approach is useless here.”

  128. jonas Says:

    Gatekeepers #100: props for mentioning this year’s Nobel Prize in Physics, by the way. It went to a very worthy place. https://www.nobelprize.org/nobel_prizes/physics/laureates/2017/press.html Rainer Weiss, Barry C. Barish, and Kip S. Thorne won the 2017 prize for their work on gravitational-wave detection.

  129. The Problem With Gatekeepers Says:

    gentzen #125:

    In science it is always a bad idea to say “no way” in cases like this, so I will say instead “unlikely”. If you were to look at the content of what the best high schools taught as it pertains to mathematics and physics in the 1950s, and you were to compare that with what is being taught today in the best high schools on both topics -the qualification “best” is to make sure I remove other effects like the general deterioration of teaching quality in the West during the same time as it pertains to K-12 levels- I bet that there wouldn’t be much difference: students would be taught mostly classical mechanics, with the promise, for those interested in specializing in physics, of learning “general relativity” in college. The only meaningful change I can imagine is that today’s students use calculators/computers instead of slide rules. The fact that I can only point to this change also says something about the last 70-80 years. There have been vast improvements in technology but it seems we are working under the same scientific paradigms.

    I agree with Scott #122 as to what constitutes revolutionary knowledge. As somebody said, once invented, almost anybody with the right training and attitude can do calculus. However, it takes a genius like Newton -or Leibniz- to invent calculus. The same can be said about classical mechanics, general relativity or the results due to Godel or Turing.

    The only theory I can see as having the potential to belong to that league is string theory, but it would need to be empirically verified in the way the LIGO experiment verified the existence of gravitational waves, for example.

  130. asdf Says:

    I remember looking at Sowa’s blog and thinking it was interesting and that its name and some of its contents were basically an affectionate joke. I took Gowers’ own link to it as a sign that Gowers had taken it in stride.

    Re #122 “In 20th-century mathematics, arguably only the work of Gödel, Turing, and Shannon really rises to that level”, I think you have to include von Neumann (foundations of QM etc), the discovery of chaos (hard to put a single name to it, but the foundational paper was written by Edward Lorenz, a meteorologist who discovered the butterfly effect while debugging his numerical weather prediction code, and realized it meant that quite simple ODEs had chaotic solutions, which he wrote up), Wiener (control theory / cybernetics), and maybe I’ll call the Feynman path integral a mathematical contribution. Heck, why not add Cook and Levin, since “it’s not the critic who counts” can be explained by Impagliazzo’s Five Worlds paper, where P!=NP shows why Gauss (the doer) really does have greater capability than Grouse (the critic).

    I’ll also claim that type theory is relevant to the nature of knowledge, though not at the 5-year-old level. As an applied example, see compcert.inria.fr which is about the only practical compiler in existence which experimentally appears to be practically free of bugs (it’s all done with type theory). It’s particularly important if one is interested in proofs, beyond knowing that some things are provable and others aren’t.

  131. middle aged veterinarian Says:

    Interesting post. Einstein disappoints because he spent his last decades looking for a world that could be understood by an ‘Einstein’ instead of looking for the real world, the world with things like higgs fields and cosmological constants in it. Poor little fool! Feynman liked to play easy games too much: sort of like when you take your gramma to the casino and she is all like screw the roulette wheel the slots are shiny, and they got pictures of John Wayne and Harrison Ford and Gandalf on them! Terry Tao – spent the first 30 years of his life chasing approbation for getting test questions right – like some nightmare where everybody is in an “elite” high school math class until they are middle aged, taking one test after another and putting down ‘brilliant’ solutions simply to impress Teacher … Stephen Hawking – just as bad, but not test questions, answers to the mystery of the universe, as long as they could be narrated in a way that would make the chicks at the BBC ooh and aah over him… Eliezer Y. has the benjamins and the free time to do something with his life and he just refuses to grapple with anything but his parallel-to-the-world-religions view of “rationality”. SlateStarCodex is People Magazine (by the way, the best magazine covering celebrity rehabs – an important subject if one runs a rehab center or invests in one) for monolingual technological people who get paid to assess people in trouble, using the latest coolest statistical props … Shakespeare could have been a great playwright but he got bogged down in the minutiae of Elizabethan words, plus he did not really like people who were not young or rich …. 
    Homer’s Iliad is a little better than Hemingway, but not much … And nothing is more overrated than High Renaissance Italian Art – Michelangelo with his endless variations on “human” “proportions”, Raphael with his sad little attempts at showing what good-souled people would look like under eternal blue and true skies, and Leonardo’s crafty little versions of some future AI’s second-rate pastiche of painting. Well everyone can be criticized. I once helped a couple of guys push their car out of a snowy ditch and they did not even say thanks! Not that I mind. I pushed the car, they drove it. I remember.

  132. Shmi Says:

    Scott:

    > It’s purely a comment about me and my cognitive style.

    and

    > The higher part of my brain says that’s nonsense, that there’s obviously deep insight to be had in all these fields, given all the verifiably brilliant people who swear by them.

    A lot of this is actually about the differences in the way human brains work. Including what kind of abstractions seem natural to different people.

    The Typical Mind fallacy/Mind Projection fallacy is one of the most pervasive ones, and the hardest to accept as one’s own blind spot (in nerd fights it’s often something like “But this is so clear! How can anyone not get that!” and “This is super convoluted, there are far simpler ways to explain it!”).

  133. Shmi Says:

    The Problem With Gatekeepers:

    String Theory has made multiple testable predictions, and every single one of them has been successfully falsified. 10/11 dimensions? Check. Superpartners? Check. You can continue the list, starting from the original failed string model for the strong interaction.

    The reason String Theory is still alive is because of the dire state of competing models, and because of the tantalizing glimpses of the potential advances that it offers, like the AdS/CFT correspondence and the holographic principle in general. That’s why string theorists shrug and say “well, the dimensions could be curled on themselves so we can’t see them yet”, “The superpartners could be too heavy for our instruments to see them” and so on.

    It is becoming more and more likely that any significant advances in unifying QM and GR will be heavily based on information-theoretic ideas, and Scott’s talk last week, which I hope to understand at least 10% of once it’s posted online and maybe explained by the author on this blog, might be highlighting some baby steps in that direction.

  134. mjgeddes Says:

    #129:
    “There have been vast improvements in technology but it seems we are working under the same scientific paradigms.”

    Well, in terms of mathematics and the physical sciences, most of the low-hanging fruit has probably been picked, with physics being by far the most advanced of the human sciences.

    But other areas have been neglected. Specifically, I think the really big revolutionary advances of the future will be mainly in the social and cognitive sciences.

    In terms of AI-related stuff, we’ve got to understand concepts and knowledge representation! Probability theory can’t handle concepts and planning (memory and imagination). And this is all tied up with the ‘control problem’, I believe. Understand concepts and knowledge representation, and the answers to the control problem will fall out naturally, I’m claiming!

    Go to my wiki-book here and carefully read through my A–Z list of central concepts in the field of knowledge representation (I’ve now greatly improved and expanded the lists; this one’s now up to 144 entries):

    https://en.wikipedia.org/wiki/User:Zarzuelazen/Books/Reality_Theory:_Ontology%26Concepts

    Too many of these entries are only at the level of philosophy rather than proper science.

    Here is where AI researchers should be directing their point-of-attack!

  135. tb Says:

    Scott #110:

    I am surprised at your assessment of the level of intellectual discourse on r/sneerclub. I do not post there myself, but I have been exposed to new ideas as a result of browsing that subreddit, as well as related anti-rationalist forums. I’d say that I used to be someone who agreed with most of what you and Scott Alexander had to say about the topic, but now I am not: and the reasons I had for changing my mind were (at least by my own lights) principled and informed. One might even say that it was “rational” to change my mind! For instance, I actually found the discussion of Ron’s role in HPMOR that you alluded to in your original post really insightful. The fact is, part of the reason that I liked HPMOR in an earlier time in my life was because I had the same toxic mindset that permeates that entire work regarding the importance of intelligence and the role of “ordinary” people. Reading that post about Ron helped me identify and characterize that mindset both in myself and in the work. There’s a thread about IQ that directly criticizes you on the front page, and inside, there’s a discussion of Kropotkin that left me (as you put it) “intellectually sated.” There’s also links on that subreddit to criticisms of predictions re: the singularity and superintelligence that changed my mind on the subject and were quite interesting in their own right. I could go on.

    Of course, I also think that “sneering” is often more worthwhile than attempting to engage with certain people earnestly. But the two are not mutually exclusive: you can sneer, and you can offer cogent criticisms of your opponent’s position. And I find your dismissal of what you’ve found in r/sneerclub wanting: there’s a lot of content there that you either have dismissed or failed to notice. Which makes me think that your central thesis — that it’s easier to criticize than to build something up — is not really true, at least not in this case (I hesitate to speak more generally). The only asymmetry I see between your post and some of the posts on r/sneerclub is that I am broadly convinced by the latter and I find the former frustratingly uncharitable.

  136. gentzen Says:

    Gatekeepers #98: In the last (say) 70 years, would you say that Russia or Europe produced any scientific discoveries comparable to general relativity or quantum mechanics or Turing computation? If not, then consider the possibility that we’re not talking about a European vs. American phenomenon at all.

    The unbelievable unification of biology, chemistry, material science, and physics in the last 70 years qualifies as a scientific achievement surpassing both general relativity and Turing computation in how it affects our daily lives and in how both scientists and normal people perceive the world. It does not surpass quantum mechanics, since quantum mechanics enabled that achievement in the first place. And just like quantum mechanics, it was not the work of a single person (like general relativity and Turing computation), but the collaborative effort of many different people over a long period of time. I have no idea of the relative contributions of America, Russia, or Europe to that achievement. However, I think that this achievement provides strong evidence that current academic traditions of “agreement and compliance” are no worse than previous academic traditions of “disagreement and noncompliance” (with your opposer) when it comes to advancing science and embracing new paradigms.

    I am not a physicist, and I don’t know how closely theoretical physicists still associate themselves with mathematics. But for bleeding-edge experimental physics, chemistry and material science are no less important than mathematics. Even astrophysics cannot afford to neglect astrochemistry. And it goes the other way too: bleeding-edge chemistry and material science are no longer possible without advanced physical imaging methods like nuclear magnetic resonance tomography or electron microscopy.

    Even though the mathematics produced by attempts to unify QM and GR is deep and satisfying, it pales compared to the real contributions of physics to our understanding of the world and to enabling advances in chemistry and related sciences. And many applications of the (probabilistic or) stochastic paradigm embraced by quantum mechanics and thermodynamics are still out there waiting for scientists to discover and elaborate them, independent of whether mathematics follows that stochastic paradigm or not.

  137. Jeremy Says:

    I may not have thick enough skin for this kind of forum. As someone who sometimes finds himself nodding along with both Zeilberger and sowa, I may be too impressionable. But as the kind of amateur who enjoyed Quantum Computing Since Democritus and gave a logic course for undergrads based on who could name the bigger number, I continue to hope that computer scientists will make more honest efforts to learn and contribute to algebra.

    I feel compelled to advertise once again on this blog that I do not know the computational complexity of some of the most basic algebro-topological invariants, such as the topological K-theory of a finite simplicial complex. All work on complexity in algebraic topology that I am aware of seems to either reduce to Postnikov tower ideas from the 50s, or be spun off from Anick’s work on unstable rational homotopy theory in the late 80s. It depresses me that even people as strong as Gowers seem all too willing to be scared off by words like spectral sequence. I have often literally heard algebraic topologists say things like “I have known since the 70s that this computation is hard in general”, but there is no market for the unproven theorems behind their statements.

  138. TheoryA Says:

    Scott #33

    I honestly think the whole CS theory community would benefit from an extended question and answer session entitled “What’s the point of Theory B?”. The questions would hopefully be answered by the leading lights in their respective areas.

    I am not sure I know any serious Theory A researchers who can point to specific examples of why current Theory B research is important and/or exciting. This is a massive gap in mutual understanding between neighboring research fields.

    And yes, I am volunteering you to instigate this.

  139. Atreat Says:

    Regarding modern achievements along the lines of Godel, Turing, Einstein: what about Game Theory by von Neumann and Nash?

  140. Joshua Zelinsky Says:

    tb #135,

    “One might even say that it was “rational” to change my mind! For instance, I actually found the discussion of Ron’s role in HPMOR that you alluded to in your original post really insightful. The fact is, part of the reason that I liked HPMOR in an earlier time in my life was because I had the same toxic mindset that permeates that entire work regarding the importance of intelligence and the role of “ordinary” people. Reading that post about Ron helped me identify and characterize that mindset both in myself and in the work.”

    The entire discussion seemed to me to be a strawman of what Eliezer was doing, by focusing essentially on the literary necessity of Ron’s role, which is a justification for Ron to exist at a level outside the story in the original material, but not much of a reason for him to be an interesting or major character in a rationalist-focused fic. (That said, at least one HPMOR-inspired fanfic, Arithmancer, did do interesting things with Ron.)

    “There’s a thread about IQ that directly criticizes you on the front page, and inside, there’s a discussion of Kropotkin that left me (as you put it) “intellectually sated.””

    Where are you looking? The vast majority of that thread is more sneering, accusing Scott of having a persecution complex and all sorts of very strange things that seem even stranger when one realizes that one of the primary points of both his post and the Scott A post was that IQ isn’t by itself a useful individual metric for most purposes. Frankly, it seems like the entire thread is missing the point, almost deliberately. And the only “discussion” of Kropotkin is two long quotes, which also seem to be more of a strawman of what anyone has been arguing. For example, it starts off with “yeah the fact that a lot of these guys can’t articulate or full-throatedly support a defense of inherent human value that isn’t conditioned on intelligence is so frustrating to me.” This seems ridiculous at multiple levels: First, Scott Alexander and Yudkowsky have discussed a lot what they think of in terms of human values. Second, it is pretty easy to determine simply from what he does for a living that Scott Alexander has a pretty damn high opinion of the inherent human value of people who aren’t that intelligent. Third, most people don’t bother going around giving “a defense of inherent human value that isn’t conditioned on intelligence” because there’s generally no need to: people in our society (including the dreaded rationalists) generally take this for granted; apparently the readers of Sneerclub are reading into things a belief that isn’t remotely present.

    “There’s also links on that subreddit to criticisms of predictions re: the singularity and superintelligence that changed my mind on the subject and were quite interesting in their own right”

    Which specific links did you find to fit into this category?

  141. Scott Says:

    tb #135: Your polite brief for SneerClub reminds me of a “good cop / bad cop” routine.

    One cop says to me, “you vain fuck, you mediocrity with a shitty blog, you dickhead,” while the other one says, “I’m genuinely surprised that you don’t listen more to my colleague over here; he makes some insightful points that changed the way I think about these questions…” 🙂

    The “fascinating” threads you mention distort my views to where I no longer recognize them. E.g., they somehow took a blog post in which I criticized the design of IQ tests, and argued for IQ’s basic irrelevance for individuals (as opposed to population-level research)—and then force-fit it into the frame that I’m yet another IQ elitist who thinks people with low IQs don’t deserve to live, or something? I found nothing objectionable about the quoted Kropotkin passage, except for the un-argued insinuation that the passage somehow refuted something I had said (what, exactly?). How can I engage with people whose view of me is determined like ~98% by hostile preconceptions, rather than by what I actually write?

    Incidentally, I, too, am extremely skeptical of predictions about a singularity in this century, and often argue with my friends in the rationalist community about it—but to me, the very fact that I argue puts me closer to them than to the sneerers. Contrary to what you say, there’s really no place whatsoever for sneering, if you think (as I do) that there’s any non-negligible chance that your opponents might turn out to be right, or even if there’s anything to be learned from them—and if your opponents have extended you the courtesy of not sneering at you.

  142. Scott Says:

    Jeremy #137:

      I feel compelled to advertise once again on this blog that I do not know the computational complexity of some of the most basic algebro-topological invariants, such as the topological K-theory of a finite simplicial complex. All work on complexity in algebraic topology that I am aware of seems to either reduce to Postnikov tower ideas from the 50s, or be spun off from Anick’s work on unstable rational homotopy theory in the late 80s. It depresses me that even people as strong as Gowers seem all too willing to be scared off by words like spectral sequence.

    You’re a grad student? It sounds to me like, in the above passage, you’ve just carved out an extremely promising research agenda for yourself! The people on earth who both (1) understand algebraic topology, and (2) understand (and are non-dismissive of) computational complexity, can indeed probably be counted on the fingers of one hand. Rather than decrying the situation, I encourage you to see it as an opportunity.

    You worry that there might be no “market” for complexity results about topology—but I can tell you that, e.g., the work by Hass, Lagarias, and Pippenger, and subsequent work by Kuperberg and others, on the computational complexity of unknottedness was well-received in the TCS community (or certainly by me 🙂 ). As was the work by Babai, Seress, et al. that applied the classification of finite simple groups to computational group theory, and pretty much any other work that uses deep math to answer questions that the TCS community is able to understand.

  143. Scott Says:

    TheoryA #138: Sorry, too much on my plate right now! I volunteer you to organize and host that Q&A session. 🙂

  144. Atreat Says:

    Scott #141, “Contrary to what you say, there’s really no place whatsoever for sneering, if you think (as I do) that there’s any non-negligible chance that your opponents might turn out to be right, or even if there’s anything to be learned from them—and if your opponents have extended you the courtesy of not sneering at you.”

    +1

    Scott is articulating the ideal and one I wholeheartedly agree with. Now, humans being humans, we all fall short of this ideal at times, but (*crucially*) it is the ideal and one we should strive for.

    Folks who do not share belief in this ideal are very hard for me to communicate with even though I also fall short of this ideal many times.

  145. TheoryA Says:

    Scott #138

    One option would just be to pose the question as a blog post and see what happens (notice how I returned the workload to you 🙂 ).

    But a guest post by a Theory B expert might be even better.

  146. Atreat Says:

    As an example: the recent Facebook discussion between ’t Hooft and Tim Maudlin was an excellent example of two people genuinely trying to communicate even though they were sneering at times. Frustration happens. Human communication is messy and full of emotion. But, crucially, both sides were investing some amount of good faith in trying to come to a meeting of the minds.

    SneerClub is almost by definition all about erasing any good faith from the discussion. To ask Scott and others to try and navigate the morass of bad faith offered in the hope of finding some good ideas or worthy criticism is perhaps a bit too much??!! <– trying really hard not to sneer 🙂

  147. eitan bachmat Says:

    Hi Scott
    I might agree with you about Godel/Turing/Shannon if you think about scientific/technological/societal impact, but pure math, which is what sowa is interested in, is neither about science (math that nature happens to use) nor about technology/engineering.
    In fact, if that is the criterion, I don’t think that Gauss/Riemann/Euler would count as greats from your point of view (Newton would). Yet mathematicians would count them as greats and try to continue their legacies.
    Regarding the last 70 years in terms of pure math, I think there are three major conceptual advances. The first is the Grothendieck program (a.k.a. arithmetic geometry), which helped solve major (and concrete) number-theoretic problems using geometric means.
    The second is the Langlands program, which relates number theory to representation theory; the third is the influence of string theory on geometry, giving fantastic new insights and introducing amazing dualities. Moreover, there are very strong connections between all three. I would also mention the Thurston program in low-dimensional geometry, which has been the most successful in terms of implementation; the related Schramm program, a major new insight in probability that is strongly related to conformal field theory and has revolutionized the field; and the work of Gromov, again in geometry. Europeans, and definitely Russians, played a major role in all these developments.
    All these have had, up to now, essentially zero influence outside math, and they are all really difficult, so few mathematicians, even in number theory, representation theory, geometry, and probability, actively work on them, much as most computer scientists don’t try to prove lower bounds.

    I think what worries sowa and others is that even fewer people will want to work in these challenging areas in the future.
    I don’t support sowa otherwise; I think Gowers and Tao are awesome.

  148. eitan bachmat Says:

    Oh, and the rise of ergodic theory: Furstenberg, Margulis, Ratner, Eskin–Mirzakhani, McMullen… (closer to Gowers and Tao)

  149. Gil Kalai Says:

    To add to #142, there are several important research agendas relating topology and complexity; let me mention one. Two papers by Gregory R. Chambers, Dominic Dotterrer, Fedor Manin, and Shmuel Weinberger, representing a major breakthrough on a conjecture of Gromov (in fact a whole array of conjectures), are https://arxiv.org/abs/1610.04888 and https://arxiv.org/abs/1611.03513 . Complexity and topology has been a main research agenda for Weinberger for over a decade, and it is related to a lot of very deep and difficult mathematics.

  150. AJ Says:

    @GilKalai #149 do you think QC is impossible unlike what Scott believes?

  151. Paul Gustafson Says:

    Jeremy 137:

    Re: computational complexity of topological K-theory:

    Have you seen this paper:

    Computing Simplicial Homology Based on Efficient Smith Normal Form Algorithms (http://ljk.imag.fr/membres/Jean-Guillaume.Dumas/Publications/DHSW.pdf ) ?

    As they note at the end, you can compute rational cohomology the same way (or use the universal coefficient theorem). Then you can apply the isomorphism (the Chern character) between rationalized topological K-theory and rational cohomology (presumably this isn’t too computationally costly). I don’t think they look at the cup product structure, so that might be a problem worth considering.
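    The rank computation behind this can be sketched directly (my own illustration, not code from the linked paper; all names are mine): over Q, Betti numbers need only the ranks of the boundary matrices, and Smith normal form is required only to recover torsion over Z.

```python
# Sketch: rational Betti numbers from boundary-matrix ranks,
# b_k = dim C_k - rank d_k - rank d_{k+1}. (Illustration only.)
import numpy as np

def boundary_matrix(k_simplices, km1_simplices):
    """Matrix of d_k : C_k -> C_{k-1}; simplices are sorted vertex tuples."""
    row = {s: i for i, s in enumerate(km1_simplices)}
    D = np.zeros((len(km1_simplices), len(k_simplices)))
    for j, s in enumerate(k_simplices):
        for i in range(len(s)):
            D[row[s[:i] + s[i + 1:]], j] = (-1) ** i  # alternating-sign faces
    return D

def betti_numbers(skeleta):
    """skeleta[k] = list of k-simplices; returns rational Betti numbers."""
    ranks = [0]  # d_0 is the zero map
    for k in range(1, len(skeleta)):
        ranks.append(np.linalg.matrix_rank(boundary_matrix(skeleta[k], skeleta[k - 1])))
    ranks.append(0)  # nothing above the top dimension
    return [int(len(skeleta[k]) - ranks[k] - ranks[k + 1]) for k in range(len(skeleta))]

# The hollow triangle is a circle: one component, one loop.
circle = [[(0,), (1,), (2,)], [(0, 1), (0, 2), (1, 2)]]
print(betti_numbers(circle))  # -> [1, 1]
```

    Over Z one would replace the ranks by Smith normal form of the same matrices, which also yields the torsion coefficients.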

  152. gentzen Says:

    TheoryA #138: What do you mean by “Theory B”? Just anything which is not covered by “Algorithms and Complexity”? Or specifically “Formal Models and Semantics”? Or just any topic which feels too abstract, unclear, or unmotivated from a list like the following:

    finite automata, context-free languages, formal languages and power series, automata on infinite objects, graph rewriting: an algebraic and logic approach, rewrite systems, functional programming and lambda calculus, type systems for programming languages, recursive applicative program schemes, logic programming, denotational semantics, semantic domains, algebraic specification, logics of programs, methods and logics for proving programs, temporal and modal logic, elements of relational database theory, distributed computing: models and methods, operational and algebraic semantics of concurrent processes

  153. tb Says:

    Joshua Zelinsky #140 Scott #141:

    I would disagree that most people “take for granted” a view of human value that isn’t conditioned on intelligence. As that thread pointed out, the fact that a bunch of young people were sending Scott Alexander worried messages about whether their IQ was high enough shows that they have a hard time conceiving of what their own human value would be if they had a low IQ.

    And the criticism of Scott Aaronson in that post did not amount to calling him an IQ elitist. It made the point that by criticizing the methodology of IQ research, both Aaronson and Alexander can avoid discussing the bigger question of why it is that someone would be anxious or worried about whether their IQ is high enough. Saying, “you’re probably still smart, even if you have a low IQ” is still compatible with a “view of human value that is conditioned on intelligence.” What they were questioning was the choice to avoid talking about that broader view of human value, rather than the narrower question of whether IQ tests accurately measure intelligence. Nothing in that thread, as far as I can tell, said that Scott Aaronson was an “IQ elitist” who thinks that people with low IQs deserve to die.

    I don’t intend to speak for or defend to the death every single thread on r/sneerclub, especially because I basically believe that focusing on this subreddit when there are places that are specifically for more full-throated and well-reasoned critiques of rationalist ideology is a way to avoid having to engage with those critiques. r/sneerclub is basically a place to shoot the shit. But this thread in particular I want to talk a little more about. I get from Scott #141 that you think I am being disingenuous somehow: that I am some sort of sneering sjw who is putting on a polite facade in order to attack you from another front. I am not. I am someone who read and still reads your blog and Scott Alexander’s blog, and would identify as a “nerd” in all the relevant ways. And the fact is, if it weren’t for discovering criticisms of the ways of thinking that you two exhibit on the internet, I would have been one of those worried teenagers sending Scott Alexander e-mails about whether their IQ is high enough. So, frankly, I find it plausible that the overall rationalist ideology, which I was engaged with for quite a while, does in fact have a hard time articulating an understanding of human value that is not conditioned on intelligence. Because when I was consuming a lot of “rationalist” work, I myself could not articulate an understanding of human value that was not conditioned on human value.

    Hence this was an insightful thread for me. Reading it helped me change my mind about that particular belief, and identify where it came from (it came from a lot of rationalist writings, among other places).

    Which brings me to the Ron thing. Joshua Zelinsky points out that there’s some room for interpretation regarding why exactly Ron was excluded from HPMOR, and only according to some of those interpretations is it related to some sort of deeper lack of understanding regarding the role of Ron in the original Harry Potter’s moral universe. Frankly, I find the r/sneerclub poster’s interpretation plausible given the context of the rest of the story (the constant denigration of those who aren’t super-duper smart, Harry’s comments about “NPCs,” etc.). And frankly, when I used to be someone who could read that story wholeheartedly and without cringing (or, dare I say it, sneering) at its implicit value system, that was what I got from it: Ron was excluded from the story because only smart people matter or really have rich internal lives.

    Also, here are a couple of threads about r/sneerclub’s views on AI: https://www.reddit.com/r/SneerClub/comments/68sta8/that_thing_that_had_yudkowsky_superworried_about/
    https://www.reddit.com/r/SneerClub/comments/6gz2jr/superintelligence_the_idea_that_eats_smart_people/
    I think they posted a Yoshua Bengio interview a while back where he criticized singularitarian ideas, but I can’t find it, so I could be misremembering. I hesitate to show you all these threads, because I don’t really believe that it makes sense to hold r/sneerclub to the standards that you are holding them to. Part of that is because they are a place primarily for making fun of people and shooting the shit (I greatly appreciate that type of place in certain contexts). But the other part of it is that they criticize Yudkowsky et al. (not you, Scott Aaronson, but people more directly associated with singularitarian ideas) for being dilettantes who don’t know what they’re talking about. Since most of the people in that group are probably not PhDs in computer science, there aren’t a whole lot of people who would feel qualified to develop their own ideas about AI: the group mainly operates by citing non-singularitarian experts. So expecting them to develop their own ideas about AI is holding them to a standard that they explicitly reject. For all of these reasons, I don’t think the standards that you (Scott and Joshua) are holding r/sneerclub to really make sense. What I am trying to do is show that even by your own standards, your criticism of them mostly fails. Like I said, I genuinely believe that it is “rational” (in any reasonable sense of the word you like) to listen to what they have to say, and I think that I am better off for it.

    Anyway: I get the sense that you (Scott Aaronson) are unhappy with the sense that these people on r/sneerclub are harshly criticizing your character based on a sentence you wrote in a blog once. I am not a public figure, so I don’t really know what it’s like to feel internet hatred in that way. I think it’s totally understandable if you don’t want to engage with it. But if you do engage with it, engage with it fairly: read what the people criticizing you are actually saying, and don’t strawman them by saying that they’re strawmanning you. Live up to your own standards of discourse.

  154. tb Says:

    My apologies: I made a potentially very confusing typo in my post above. The last sentence of the third paragraph is meant to say “Because when I was consuming a lot of “rationalist” work, I myself could not articulate an understanding of human value that was not conditioned on human intelligence,” not “Because when I was consuming a lot of “rationalist” work, I myself could not articulate an understanding of human value that was not conditioned on human value.”

  155. Joshua Zelinsky Says:

    tb #153,

    “I would disagree that most people “take for granted” a view of human value that isn’t conditioned on intelligence. As that thread pointed out, the fact that a bunch of young people were sending Scott Alexander worried messages about whether their IQ was high enough shows that they have a hard time conceiving of what their own human value would be if they had a low IQ.”

    I don’t think this is what they were worried about *at all.* They were worried about whether they’d be successful in general (since IQ is correlated with general life success) and whether they’d be successful in their specific fields (since many were interested in fields where that correlation is strong). The idea that anyone here was making any claims that people might be missing value is simply not what either Scott or the people writing to Scott were talking about, as far as I can tell.

    “Which brings me to the Ron thing. Joshua Zelinsky points out that there’s some room for interpretation regarding why exactly Ron was excluded from HPMOR, and only according to some of those interpretations is it related to some sort of deeper lack of understanding regarding the role of Ron in the original Harry Potter’s moral universe. Frankly, I find the r/sneerclub poster’s interpretation plausible given the context of the rest of the story, (the constant denigration of those who aren’t super-duper smart, Harry’s comments about “NPCs,” etc.)”

    Harry is an 11-year-old, immature boy who happens to have the mind shard of a hyperintelligent evil overlord in his head. Not everything he does is ideal, and that’s true for most protagonists. No one seems to think that Brandon Sanderson endorses every single thing Vin does in the Mistborn books, or that Christopher Nuttall thinks that everything Emily does in Schooled in Magic is perfect. I find it interesting that people have trouble with this idea with a character by Yudkowsky. This is especially interesting given that at the same time they complain that the character is a Mary Sue. Maybe it should occur to them that the things they are identifying as bad values and attitudes in Harry really are intended as character flaws? I suspect that the difficulty people have in realizing this may be connected to Eliezer not being as good a writer as, say, Sanderson, or not engaging in as much drafting and feedback, but the basic onus at the end of the day is not on Eliezer.

    “Since most of the people in that group are probably not PHDs in computer science, there’s not a whole lot of people who would feel qualified to develop their own ideas about AI: the group mainly operates by citing non-singulatarian experts. So expecting them to develop their own ideas about AI is holding them to a standard that they explicitly reject. So, for all of these reasons, I don’t think the standards that you (Scott and Joshua) are holding r/sneerclub to really make sense.”

    Lots of people reject standards they should be held to. That one rejects a standard doesn’t say much. Look, I’ll be blunt: I’m not a singularitarian by almost any notion of the term. I think there are massive problems with pretty much all of what Eliezer dubs the major schools http://yudkowsky.net/singularity/schools/ , and I’m not seeing almost anything on SneerClub that, even at a lay level, does a decent job explaining the problems. For example, if one wishes to criticize Kurzweil’s exponential-growth claims (what Yudkowsky calls Accelerating Change), one could note that by many metrics, many types of technology have hit plateaus. Both the Kurzweil-type and Vinge-type (what Eliezer calls the Event Horizon) claims are vulnerable to the fact that there have been multiple points in time where many new technologies showed up, and then things slowed down (something I was able to discuss, for example, on a pretty pro-singularity subreddit here https://www.reddit.com/r/Futurology/comments/428uma/the_year_2100_is_about_ten_years_away/cz8wnfi/ ). For the Yudkowsky/Good type of Intelligence Explosion, one can discuss a whole bunch of criticisms, including, for example, potential computational-complexity barriers to recursive self-improvement and problems of value stability. Some of these things are more technical than others, but none of them requires a PhD to discuss (I know because I’ve discussed all of them with undergrads). I suspect that if I made a macro to find random blogs that include the word singularity, I’d see more cogent arguments against singularitarianism than I’m seeing there.

    Moreover, even coming from a position generally pretty skeptical of a Singularity, I can’t see, even with a very low bar, anything they are doing other than just sneering. Honestly, this sort of lack of positivity, or even of attempts at productivity, is part of why I’m not as involved with the organized skeptical movement as I used to be; but at least there they are often dealing with genuine con artists and the like, and at least they tried to choose a word with a positive connotation. Here the sneering is directed at people who are genuinely trying to operate in good faith.

    And I have to ask: if, as you claim, you are a “nerd,” what do you think the people who repeat things in “nerd voice” would possibly think of you?

    “But if you do engage with it, engage with it fairly: read what the people criticizing you are actually saying, and don’t strawman them by saying that they’re strawmanning you.”

    I’m neither Scott, nor, as far as I’m aware, have I ever been a target of SneerClub. I think I can say pretty confidently that close to everything in that IQ thread was a strawman of both Scotts’ positions.

  156. Joshua Zelinsky Says:

    AJ #150,

    Gil is one of the most active and serious skeptics of quantum computing. So, yes, he and Scott do definitely disagree on that.

  157. Atreat Says:

    tb #153,

    I get the sense you want Scott to just come out and say that he does not believe a human’s value is conditioned on their IQ. To me it seems remarkable that you could have read this blog and still believe he needs to say this explicitly, but…

    Should he do this, would you stop hounding him for not gleaning some deep insight from a small corner of the interwebs devoted to sneering and bullying?

  158. Scott Says:

    Atreat #157: Indeed, I do not believe that a human’s value is conditioned on their IQ.

    Or was it a strategic blunder for me to say that, without first extracting a concession from tb? 🙂

  159. Scott Says:

    Joshua #155: I just spent the whole afternoon talking with businesspeople about the realities of quantum computing, in the evil place called Silicon Valley. So, THANK YOU for saying everything I wanted to, except that I didn’t have time today and also you said it much better.

  160. Michael Says:

    Arguably, the sneer club did have a point about Scott Alexander’s “the average guy with a low IQ beats up nerds” remark: there are plenty of unintelligent people who wouldn’t hurt a fly, and it’s not fair to tar everyone with the same brush. But their criticism of Scott Aaronson’s article was basically that he linked to it.

  161. Raoul Ohio Says:

    The revolution in communication the internet has caused is changing everything (and who knows what is next). One aspect is that people can “find their tribe” of like-minded people. This is probably mostly a good thing, but it is also leading to a resurgence of fascism and racism. And also to goofy groups like the sneer club. I make it a point not to read their stuff, so I have no idea if they are having a big laugh, are seriously disturbed, or who knows what. I also don’t care.

    Anyone can obviously find their way into weird corners of the internet with modest effort. I recommend that everyone not do this. Stumbling onto the sneer club, discussing the issue on SO, and debating them on SC only brings publicity to them. That is likely to re-energize them.

  162. Raoul Ohio Says:

    BTW, while everyone was busy debating the SC, Grothendieck fan boys, GIC 1+2, etc., the biggest event in physics so far this millennium was announced today. Check it out: https://phys.org/news/2017-10-neutron-star-smash-up-discovery-lifetime.html

  163. tb Says:

    Joshua #155 Atreat #157 Scott #158:

    Scott, thanks for saying that! I really meant it when I said I was surprised that you thought there was so much wrong with r/sneerclub, because their usual targets are, I think, a lot more problematic on that front than you usually are.

    Now, I do take issue with Atreat’s notion that I am “hounding” you. I posted two comments on your blog, neither of which was abusive or even really especially sneering. They were critical, but criticizing your views is not the same as “hounding” you. If you genuinely feel that what I’m doing rises to the level of harassment, then I will stop, but I (again) would be surprised if you shared Atreat’s opinion on this point.

    Joshua: the Ron scene is more in the voice of the narrator or the world than in the voice of Harry. This isn’t to say that Harry is a self-insert, but the Harry of the story has not read the original series of Harry Potter books by J.K. Rowling, so his finding Ron “unnecessary” is more of a knowing wink from the author (Yudkowsky) to the audience than an opinion about the original books expressed by the character (Harry). I agree that Harry is meant to be a flawed protagonist and that it would be silly and anti-critical to take everything he says directly to be the author’s opinion, but this case in particular is, I think, supposed to be breaking the fourth wall.

    Also, Joshua, isn’t the first r/sneerclub thread I linked pretty much giving reasons for why the exponential growth claim in the case of AI is probably false? If not, why not, and if so, what’s wrong with those claims? And what’s wrong with the second article? I took both of those to be “cogent” arguments against singulatarianism. And even if you disagree with them, I don’t see why they are any less sophisticated than the arguments you see here or on r/futurology, on average.

    Also, Joshua and Scott: you all are taking the *nerd voice* thing way too seriously. Again, you seem to suspect that I am not a “real nerd” or something like that. Let me just say that I am not sure what else I can do to tell you that I am in fact closer to someone who would be made fun of with a *nerd voice* than a “blue-haired SJW in a coffee shop.” I mean, I read blogs like this for fun. I like math. I spend a lot of time studying and am often literal-minded and socially awkward? Literally all of my friends and family would describe me as a “nerd” to some degree. I think that if someone made fun of me with a *nerd voice* I would probably be mad for a second, then I would take another second to reflect on why they were reacting that way. As many nerds can attest to, I have a tendency to be pedantic and overly earnest when it comes to arguing with people (you probably think that me posting here is an example of that, in which case your annoyance is probably the same annoyance that would motivate someone to make fun of someone else with a *nerd voice* on the internet). This is mostly in jest, because it’s not like I can prove any of these facts about myself. More to the point is that it’s bad manners to assume that someone must be a certain kind of person in order to hold the opinions they hold. I mean, isn’t assuming that I’m some sort of sneering hipster in a coffee shop (or whatever it is you think I am) because I hold these opinions just as bad as assuming that someone is a “misogynistic neckbeard” for holding certain opinions? Please do me the courtesy of taking my word for it when I describe what kind of person I am.

    Which brings me to another point. Making fun of somebody for being a “nerd” is not morally equivalent to anti-semitism. I do think that sometimes people on the left trying to criticize misogynistic “nerd culture” can end up slipping into bigotry against non-neurotypical people, so we probably have more common ground on this front than a lot of people. But I am not a member of a marginalized group because I am a “nerd.”

    Which brings me further to why I did not think that the r/sneerclub thread on IQ was an egregious misreading of Scott (Aaronson’s) views. As far as I could tell, in that thread, there were two direct criticisms of Scott Aaronson and two more general frustrations with his views overall:
    1) He stated that the reason people were interested in studying emotional intelligence, grit, etc. was due to a “barely-concealed impulse to lower the social status of STEM nerds even further, or to enforce a world where the things nerds are good at don’t matter.” The regulars on r/sneerclub think that this is a silly view. One might disagree and think that Scott’s view makes sense, but it’s hard to say that he’s being strawmanned here, because he literally said that.
    2) Scott has expressed the view elsewhere that modern art is a bunch of baloney. The regulars on r/sneerclub think that this is a silly view, and frankly a very “sneering” view towards those people who practice modern art. Again, this is a view that Scott has explicitly endorsed, so I do not think that them ascribing it to you is a strawman.
    3) There is a more general frustration, present in this thread and in the comment from u/queerbees referred to earlier, that Scott seems to dismiss a lot of work in the arts, humanities, and social studies in a “sneering” fashion. Again, I do not think this is a strawman, because Scott has wholeheartedly dismissed modern art, and wholeheartedly dismissed the entire field of science studies due to the Sokal affair.
    4) There is a general frustration that Scott Aaronson did not condemn Scott Alexander’s piece for the weird caricatures of people with low IQs presented therein, and that he presented merely a reason to think that IQ tests are not good measures of human intelligence, rather than a view of human value that is not conditioned on intelligence at all. You might take issue with their assessment of Alexander’s work, and thus think that Aaronson was right not to do these things, but to say that he didn’t do them isn’t a strawman of Aaronson.

    I do not think that any of the above points are strawmen of Scott’s views. If they are, I would like to be informed as to why. If I misread the r/sneerclub thread, I again would like to be informed as to how.

    Again, Joshua and Scott, if you are using “negativity” as a criterion to dismiss something, you should take pains to make sure that you are applying that criterion in a principled way. As u/queerbees pointed out, Scott has been negative and “sneering” toward science studies and modern art several times. So have the people in the reddit thread to whom u/queerbees was directly responding. But again, I think that even a principled application of “negativity” as a criterion for dismissal is not really correct. Joshua, I’m sure you would attest to the fact that you learned a thing or two from the “skeptic” communities you ended up leaving: I imagine that their criticisms marshalled a lot of interesting and valuable knowledge, and so reading those criticisms taught you a lot, even if teaching wasn’t their main goal. Reading r/sneerclub has been like that for me, and I think it’s unfortunate that you and Scott dismiss it so readily. With Scott, it’s more understandable, because they are directly critical of him, and learning from them would be a bit of a bitter pill to swallow; there are less upsetting ways for him to learn the same things he could learn by taking r/sneerclub seriously. But everyone on this thread who isn’t Scott has no such excuse, I think.

  164. Gil Kalai Says:

    Eitan, also in terms of scientific/technological/societal impact: while the impact of logic (Gödel/Turing) on computers and the impact of information theory on various areas are certainly amazing, there are other prominent examples of the same caliber, e.g., the impact of representation theory (and related mathematics) on physics, and the impact of statistics on essentially all areas of science. Of course, it is always hard to say how pivotal the mathematical theory was by itself. As I said, there are different views among mathematicians about the significance within mathematics of connections to other areas. Oded Schramm, whom you mentioned, used to say that like the other sciences, mathematics reveals the reality of our world: the logical reality. Of course, even in revealing the mathematical reality, some mathematicians view some directions or questions as too obscure.

  165. TheoryA Says:

    gentzen #152,

    I of course don’t have a definitive definition, but this list from the ICALP call for papers seems reasonable.


    Algebraic and Categorical Models
    Automata, Games, and Formal Languages
    Emerging and Non-standard Models of Computation
    Databases, Semi-Structured Data and Finite Model Theory
    Principles and Semantics of Programming Languages
    Logic in Computer Science, Theorem Proving and Model Checking
    Models of Concurrent, Distributed, and Mobile Systems
    Models of Reactive, Hybrid and Stochastic Systems
    Program Analysis and Transformation
    Specification, Refinement, Verification and Synthesis
    Type Systems and Theory, Typed Calculi

    In crude terms, I feel many (most?) Theory A people have no idea why we should care about *current* research in any of these areas (apart from databases maybe).

    Here is a possible blog post title.

    “Why should we be excited by the last decade of Theory B research?”.

  166. Scott Says:

    TheoryA #165: While I’m not the person to write that post, here’s something I can tell you. When I visited Oxford a year and a half ago, I met Joël Ouaknine, a Theory B researcher, and got super excited and enthusiastic about what he was working on, and wondered why I hadn’t heard about it before. However, that might have been because the research he told me about was basically to take Theory B problems and prove complexity and computability results about them! And, one of my favorite things to have happen: in the course of proving those results, strange concrete numbers and open problems appeared—e.g., such-and-such Theory B problem is decidable so long as the matrices are only 5×5, but with larger matrices no one knows.

  167. Atreat Says:

    tb #163,

    Criticizing the target of a bully for extending an insufficient amount of charity towards said bully. This is what I characterized as “hounding”. Perhaps that was an inapt word. I hereby withdraw that word and leave only the previous sentence in its place. I think that fairly characterizes what you have done with your previous comments.

    On taking the “nerd voice” thing too seriously: you seem to be implicitly admitting this was bullying, but again you criticize the target of the bullying for “taking it too seriously.” When this is explicitly pointed out, you turn defensive and feel compelled to demonstrate your nerd bona fides. You offer this as motivation: “Again, you seem to suspect that I am not a “real nerd” or something like that.” Pray tell, what led you to believe this? In examining this thread I can find little that would support this belief. The only thing I can find is Joshua stipulating your claim to be a nerd and asking that you consider what people using “nerd voice” must think of you. I wonder if your over-the-top defensiveness over this might be telling. You are a nerd, and yet you think other nerds who are the targets of this bullying are being “too serious.” When asked to empathize with your fellow nerds, you become defensive and insist on defending your nerd bona fides even though they have not been called into question. Were you bullied in the past for your nerdishness?

    You state that nerds are not a marginalized group or more accurately you insist *you* are not marginalized just because you identify as a nerd. Again, we see a tendency to deny empathy or charity towards your own self-identified group. Do you deny that nerds are frequently a target of ridicule and bullying in our society?

    You say, “He stated that the reason people were interested in studying emotional intelligence, grit, etc. was due…” when in fact Scott never said anything of the sort. Pedantically, he used the phrase “many of those writers’,” which is a far cry from saying that the only motivation of *all* people studying those qualities was a “barely-concealed impulse…”

    Again, we find you demanding more charity from Scott towards his bullies than you extend to him for his actual writing. Why is that?

    Your points 2, 3, and 4 are also less than charitable and I’ll bet he’d find little to recognize in your paraphrasing of his views.

    Finally, regarding your contention that only Scott should be dismissed from the rationalist community’s supposed duty to take these sneering bullies seriously… I reject that completely. Neither the target of bullies nor the community in general is under any obligation to take bad faith offered in some dark corner of the interwebs and turn it into wine or to find hidden pearls within. If SneerClub wishes to communicate in earnest and thereby come to a meeting of the minds, then let them drop the bad faith, sneering, and bullying. If not, I feel completely exonerated in my decision to ignore them and feel not one iota the poorer for doing so.

  168. Ingslot Vonnesline Says:

    Sneering relates to an interesting cluster of traits that over the eons have all managed to get themselves wired into the intermediate (amber) second line of regulatory control, the one that gets triggered when routine regulation (say, in respiration) has for some reason begun hemorrhaging efficiency, with critical bodily functions now hurtling toward red-line, full-blown emergency breakdown thresholds, at which point you could definitely lay claim to having ‘come over all funny-like…’ Perhaps you are on vacation.

  169. BLANDCorporatio Says:

    irt. Atreat #167:

    Are you talking about bullying in general or sneerclub in particular now?

    Bullying requires the bully to seek some kind of confrontation. A bully goes to the victim and makes sure the victim knows it. Private manifestations of derision behind the target’s back do not qualify as bullying. As long as sneerclub is “bad faith offered –in some dark corner of the interwebs–“, it is not bullying.

    Let’s not bleed meaning out of words please, or we’ll end up with the likes of people jaded by racism, because everyone is one anyway.

    Cheers.

  170. Joshua Zelinsky Says:

    tb #163,

    “…the Ron scene is more in the voice of the narrator or the world than in the voice of Harry. This isn’t to say that Harry is a self-insert, but the Harry of the story has not read the original series of Harry Potter books by J.K. Rowling, so his finding Ron “unnecessary” is more of a knowing wink from the author (Yudkowsky) to the audience than an opinion about the original books expressed by the character (Harry). I agree that Harry is meant to be a flawed protagonist and that it would be silly and anti-critical to take everything he says directly to be the author’s opinion, but this case in particular is, I think, supposed to be breaking the fourth wall.”

    Can you expand on why you have this opinion, or moreover, that even given that there’s a fourth wall issue here, that it matters? I’m willing to buy into the idea that there’s a lot of knocking on the fourth wall (albeit not breaking the fourth wall) in HPMOR, but I’m not sure why that means that Yudkowsky therefore has to think this way. That seems like a pretty big leap.

    “Also, Joshua, isn’t the first r/sneerclub thread I linked pretty much giving reasons for why the exponential growth claim in the case of AI is probably false? If not, why not, and if so, what’s wrong with those claims? And what’s wrong with the second article? I took both of those to be “cogent” arguments against singulatarianism. And even if you disagree with them, I don’t see why they are any less sophisticated than the arguments you see here or on r/futurology, on average.”

    I’m going to respond to this comment in reverse order. First, the signal-to-noise ratio on /r/futurology is not great (and frankly has been getting worse over time), but that subreddit doesn’t have nearly as much of the toxic vibe one sees on SneerClub, and it still has the better signal-to-noise ratio. The signal-to-noise ratio on Scott’s blog is in fact excellent, and far better than futurology’s. Scott’s own blog posts are in general very helpful for understanding difficult aspects of computational complexity (or at least, aspects difficult enough for people like me), and the comment threads have frequently allowed people to improve or even develop somewhat new ideas about interesting math.

    Now, let’s discuss the two articles. They do have some redeeming aspects. I’m going to focus on David Gerard’s post, since the second is at best showing that they can link to things, while David’s post is at least actually original content. First, I don’t think one should be surprised that he’s one of the better critics, in the same way that he’s one of the more cogent writers on RationalWiki and is generally a bright person (disclaimer: I’ve internet-known him for about a decade now); but even this post is pretty flawed. First of all, nothing in the Yudkowsky/Good type of intelligence explosion requires a general notion of g; an AI could be terrible at poetry and play a completely awful game of Go and still be a threat. An AI with a high equivalent g would be more likely to be a problem, but one doesn’t need to be able to write poetry or play Go in order to manipulate humans, or release a virus that wipes out humans, or some sort of fun nanotech or the like.

    The comment also focuses on “unresearched propositions,” which is wrong (insofar as people like Yudkowsky himself, but also others like Nick Bostrom and Roman Yampolskiy, have thought a lot about these issues), but it also misses the point. That something is “unresearched” doesn’t make it wrong; it makes it at best an unknown. This piece seems to essentially assume that if a proposition is “unresearched,” one should assign it a low probability.

    It is very easy to write down unresearched questions where one can reasonably be confident about their answers. For example, right now, I conjecture that every sufficiently large positive integer is expressible as the sum of a prime, a Fibonacci number, a power of 2, a power of 3, a power of 5, and a prime which is a palindrome in base 2. Now, I’ve thought about this question for about 15 seconds while writing this down. That’s about as unresearched as you can get, but I’m pretty confident this is true.
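    As a sanity check on that 15-second conjecture, here is a quick brute-force sketch (my own addition, not part of the thread). The search bound, and the conventions that 1 counts as a Fibonacci number and that b^0 = 1 counts as a power, are my assumptions:

```python
# Brute-force check for small n: can n be written as
# prime + Fibonacci + 2^a + 3^b + 5^c + (prime that is a palindrome in base 2)?

def primes_upto(limit):
    """Sieve of Eratosthenes."""
    sieve = [True] * (limit + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(limit ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return [i for i, is_p in enumerate(sieve) if is_p]

LIMIT = 300
PRIMES = set(primes_upto(LIMIT))

fibs = [1, 2]                       # Fibonacci numbers up to LIMIT
while fibs[-1] + fibs[-2] <= LIMIT:
    fibs.append(fibs[-1] + fibs[-2])

def powers(base):                   # 1, base, base^2, ... up to LIMIT
    out, x = [], 1
    while x <= LIMIT:
        out.append(x)
        x *= base
    return out

# Primes whose binary expansion is a palindrome: 3 = 0b11, 5 = 0b101, 7 = 0b111, ...
pal_primes = [p for p in sorted(PRIMES) if (s := bin(p)[2:]) == s[::-1]]

def expressible(n):
    for f in fibs:
        for p2 in powers(2):
            for p3 in powers(3):
                for p5 in powers(5):
                    for q in pal_primes:
                        if (n - f - p2 - p3 - p5 - q) in PRIMES:
                            return True
    return False

# Print any n below 200 with no such representation (the smallest
# representable value under these conventions is 2+1+1+1+1+3 = 9).
print([n for n in range(9, 200) if not expressible(n)])
```

    With six fairly dense summands to play with, it would be surprising to see failures much above the trivial small cases, which is presumably what makes the conjecture feel safe after 15 seconds.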

    More to the point, if one takes a hypothesis one has never heard before and immediately assigns it a low probability, then there’s an inconsistency based on the order in which you hear hypotheses: depending on whether you hear A or ~A first, one of them gets a pretty high probability and the other a low one.

    David Gerard goes on to discuss:

    “He believes that, while optimizing its own g factor, the intelligent system in question will have a high rate of return on improvements, that one unit of increased g factor will unlock cascading insights that contribute to the development of more than one additional unit of increased g. ”

    So, again, we have this focus on g, which isn’t exactly what is relevant. But more to the point, this seems to be a very weak version of what Yudkowsky is arguing. What matters is not just that there is an increase (again not in g, but in the ability to solve a broad variety of relevant types of problems), but how fast it goes. David’s model seems to imagine that an intelligence increase is a one-and-done thing, but that’s not how things would work: if one gets “stuck” at a specific intelligence level after one improvement, one might simply use more resources and more time to find further improvements.

    To use a very rough analogy, let f(x) be a positive increasing function, and consider the sequence defined by a_1 = 1 and, for n > 1,
    a_n = a_{n-1} + 1/f(a_{n-1}).
    For any choice of f(x), this will be an increasing sequence, and in fact a_n goes to infinity as n goes to infinity. Now, if 1/f(x) goes to 0 fast, the sequence will grow slowly relative to n; but if 1/f(x) doesn’t go to 0, or goes to 0 not too fast, it will still increase pretty rapidly. Note that our unit of time, n, isn’t pre-given: if from our perspective a unit of n is a fraction of a second, then even the slow rate of growth won’t make much of a difference.
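    To make the analogy concrete, here is a minimal numerical sketch (my addition, with f(x) = 1 and f(x) = 2x as illustrative choices): even when 1/f(x) shrinks toward 0, the sequence is unbounded; it just grows like the square root of n rather than linearly.

```python
def run(f, steps):
    """Iterate a_n = a_{n-1} + 1/f(a_{n-1}) starting from a_1 = 1."""
    a = 1.0
    for _ in range(steps):
        a += 1.0 / f(a)
    return a

# f(x) = 1: each step adds exactly 1, so growth is linear in n.
print(run(lambda x: 1.0, 100))          # 101.0

# f(x) = 2x: each step adds 1/(2a). Since (a + 1/(2a))^2 = a^2 + 1 + 1/(4a^2),
# the square of a grows by roughly 1 per step, so a_n ~ sqrt(n): slow, but unbounded.
print(run(lambda x: 2.0 * x, 10**6))    # ~1000.002
```

    The second run illustrates the point about units of time: a million steps only get a_n to about 1000, but if each step takes a fraction of a second, that is still fast in wall-clock terms.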

    Moving on, David writes:

    “Deep neural networks are entirely inscrutable. No one anywhere can tell you why a deep neural network does what it does, so there’s no reason to suspect that they will spontaneously evolve a capability that have proven to be beyond the very best AI experts in the world out of nowhere. ”

    This is completely backwards! If we have a mysterious method that works really, strangely well, and we don’t understand it much, then that’s more, not less, reason to be concerned about it, since we can’t predict its behavior and can’t quantify how powerful it is. Now, as it happens, David appears to be wrong here: we’ve got a decent understanding of deep neural networks, so this is less of a danger; but that’s because David’s argument should be reversed and his premise is false.

    David does bring up some valid points in the essay (for example, noting that a CPU engineer explicitly said that Eliezer’s comparison involving CPU construction was off base), but one could get that from reading the original Less Wrong comment thread; David hasn’t added anything new there.

    There are some aspects of David’s post that are potentially valid criticisms, but their novelty is low, and it takes some work to expand them to a useful level. But here’s the really sad thing: this is the highlight, a somewhat thought-out post by one person, and it gets 4 comments, of which one has some content and three have little to do with anything (one is literally a personal attack via a link to a YouTube video). So the best threads on the subreddit are those with the fewest comments (which I suppose shouldn’t be surprising, since if one looks at some of the other threads, it seems clear that David is frequently the only person consistently at least trying to make substantial points in that subreddit).

    “…you all are taking the *nerd voice* thing way too seriously. Again, you seem to suspect that I am not a “real nerd” or something like that. Let me just say that I am not sure what else I can do to tell you that I am in fact closer to someone who would be made fun of with a *nerd voice* than a “blue-haired SJW in a coffee shop.” I mean, I read blogs like this for fun.”

    Yes, the nerd-voice thing is worth taking seriously, not just because of the comments themselves, but because the comments are getting *highly upvoted*. What does that say about the people in question?

    I’m not sure why you keep thinking that we must somehow think you aren’t a real nerd. Neither Scott nor anyone else in this thread has said anything of the sort (and for that matter, I suspect that if one did have a decent definition, one would find a fair bit of overlap between your imagined blue-haired SJWs in coffeeshops and nerds, although this may be something where at least Scott Alexander would vehemently disagree). I don’t know why you keep going back to this point; no one has questioned any sort of nerd bona fides, and the only point I made in that context was the exact opposite.

    Now, as to the issue of strawmen: if one had simply argued points 1, 2 and 3, then maybe you’d have a point. But let’s be clear: a strawman is not less of a strawman because it happens in part to connect to some points someone has actually made. Moreover, the vast majority of the post was focused on this idea about human value (what you are labeling as part of point 4), and at no point has either Scott made any sort of assertion that people who are less intelligent somehow have less human value. This repeated demand that they proclaim they don’t support a position which they obviously don’t support, followed by spending most of the thread attacking them for apparently supporting that position, is a complete waste.

    “Joshua, I’m sure you would attest to the fact that you learned a thing or two from the “skeptic” communities you ended up leaving: I imagine that their criticisms marshalled a lot of interesting and valuable knowledge, and so reading those criticisms taught you a lot, even if teaching wasn’t their main goal. ”

    I didn’t say that I had left; I said I was less involved. And in fact, a big part of why I’m still involved is that a big part of their goals is teaching and outreach. And frankly, if it weren’t, and it really were completely negative, I would leave.

  171. Scott Says:

    BLANDCorporatio #169:

      Private manifestations of derision behind the target’s back do not qualify as bullying.

    If we’re talking about the Internet, then I reject that view with the most extreme possible prejudice. If you smear someone online, and it’s not someone vastly above you in status (Bill Gates, Stephen Hawking, Nicole Kidman), it’s a safe bet that your target will learn of your attack, so that you could fairly be said to have invited a (verbal) confrontation. People care about their reputations at least as much as they care about their physical health; this is why libel law exists. If you manage to change someone’s Google search results, you can change their employment and relationship prospects and the other basic conditions of their life.

  172. Scott Says:

    Joshua #170: Suppose f(n)=2n; then how does your sequence increase to infinity???

    Also, I would say that it’s still a major open problem to explain the success of neural nets and deep learning (though I have the recent proposed explanation by Tishby and Zaslavsky, involving the “information bottleneck principle,” on my to-read stack).

  173. Atreat Says:

    BLANDCorporatio #169,

    “Are you talking about bullying in general or sneerclub in particular now?”

    I’m talking about responding with “nerd voice” to make fun of and belittle nerds. About people who’ve expressly said they have no intention of trying to communicate in any kind of mutually beneficial way.

    “A bully goes to the victim and makes sure the victim knows it.”

    Oh? So spreading malicious lies in high school about the sexual proclivities of another behind their back is not bullying? Is “bullying” only a word to be used for direct physical confrontations? It can never be done in a passive aggressive manner?

  174. Joshua Zelinsky Says:

    Scott #172

    I haven’t worked out the rate of increase for that sequence, but it isn’t hard to see that it goes to infinity in general for any positive increasing f(x). Suppose that we had a_n bounded by k for all n. Then f(a_n) is bounded by f(k), and so 1/f(a_n) ≥ 1/f(k) for all n, and so a_n is at least a_1 + (n-1)/f(k), which eventually exceeds k, a contradiction. The key issue here is that we have 1/f(a_{n-1}) in the definition for how much we are adding at each stage, not 1/f(n).
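    Joshua’s argument can be sanity-checked numerically. The sketch below (my own illustration, not part of the thread) simulates the sequence a_{n+1} = a_n + 1/f(a_n) with f(x) = 2x, the case from Scott’s question; squaring both sides shows a_{n+1}² > a_n² + 1, so a_n grows roughly like √n — slowly, but without bound:

    ```python
    # Illustrative sketch: iterate a_{n+1} = a_n + 1/f(a_n) and watch it
    # creep toward infinity.  With f(x) = 2x we have
    # a_{n+1}^2 = a_n^2 + 1 + 1/(4 a_n^2) > a_n^2 + 1, so a_n ~ sqrt(n).

    def simulate(f, a1=1.0, steps=1_000_000):
        a = a1
        for _ in range(steps):
            a += 1.0 / f(a)
        return a

    a = simulate(lambda x: 2 * x)
    print(a)  # ≈ 1000, since a_n^2 ≈ n after a million steps
    ```

    The same loop with any positive increasing f shows the same behavior, just at different rates, matching the boundedness argument above.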

    I’m not an expert on neural nets, so if you think that our understanding is still pretty poor, I’ll defer to you (and, per the argument I made earlier, update to conclude that such nets are more of a potential risk).

  175. BLANDCorporatio Says:

    irt. Scott #171:

    Reject it with all the vehemence you can, you’re still wrong.

    Whether or not this happens on the internet is irrelevant. One can find out about malicious gossip offline too and subject it to a libel lawsuit if necessary. A LIBEL lawsuit. As in, the charge would be LIBEL.

    We’re not talking about people smearing your name on /r/all either. We’re talking about a subreddit one other poster referred to as a dark corner of the interwebs. It takes going out of your way to find out it exists. You’re the one seeking the confrontation here, so whatever you may wish to accuse sneerclub of doing, bullying is not it.

    Cheers.

  176. Scott Says:

    Joshua #174: Yes, good, thanks! It’s early in the morning here… 🙂

  177. BLANDCorporatio Says:

    irt. Atreat:

    “So spreading malicious lies in highschool [] is not bullying?”

    No. Calling someone a slut to their face is bullying. Calling someone a slut behind their back is malicious gossip.

    The word bully has a terrible meaning. Let’s not trivialize it by reducing the bar in an attempt to condemn more people. It’s a transparently manipulative tactic that only cheapens the notion.

    Cheers.

  178. Atreat Says:

    BLANDCorporatio #177:

    “The word bully has a terrible meaning. Let’s not trivialize it by reducing the bar in an attempt to condemn more people.”

    Implicit in your response is that you believe calling someone a slut to their face to be more harmful than saying the same behind their back.

    I disagree and I think many others who’ve experienced both will disagree as well. The malice is the same. The harm done can be the same.

    Regardless, SneerClub has been doing the “nerd voice” thing in direct replies, with the community upvoting it.

  179. Jr Says:

    I can’t help myself: What is the merit of the accusation that Feynman was chauvinistic? (I presume male chauvinistic is meant, not that he was super-patriotic.) I have read only his discussion of art and I found nothing disrespectful of women in it, but it was also obvious that it would trigger many SJWs.

  180. jonathan Says:

    For a writer, I can think of no higher aspiration than that: to write like Bertrand Russell or like Scott Alexander—in such a way that, even when people quote you to stand above you, your words break free of the imprisoning quotation marks, wiggle past the critics, and enter the minds of readers of your generation and of generations not yet born.

    Incidentally, I consider you a very good writer. I put you in the same equivalence class as Russell and Scott Alexander.

  181. BLANDCorporatio Says:

    irt. Atreat #178:

    Correct me if I’m wrong here, but many legal systems make a distinction between stealing someone’s wallet when they’re away, and mugging them for it. Assuming the victim cooperates, both crimes leave the victim in the same worse-off state (a wallet less). It is acknowledged however that what makes the mugging worse and the mugging perpetrator more dangerous is their willingness to violently confront/threaten the victim.

    And let’s not lose sight of the true topic here. You yourself deemed sneerclub a dark corner of the internet. The entire first page of sneerclub, as of this posting, has 315 comments. It includes several threads submitted two months ago. After a single week, this blog post alone already has more than half that number of replies.

    So no, a handful of people saying mean things about someone else between themselves in a place few are likely to hear or care about is not bullying.

    Cheers.

  182. tb Says:

    There’s a lot that’s been said here in response to my post. I mostly want to ignore the discussion regarding “bullying” and “nerd bona-fides” because I find it really stupid, so I’m just going to say two things about that:
    1) I agree with everything BLANDCorporatio says re: bullying and sneerclub. Furthermore, I think that calling something “bullying” or “overly negative” and using that as a justification to ignore it is an extremely common way to ignore criticism.
    2) The fact that Atreat feels the need to psychoanalyze me and ask whether I was bullied for being a nerd as a child is part of a general trend that may explain why I seem “defensive” about the issue. Many people in this thread seem to believe that I am being disingenuous, or somehow trying to trap scott into saying something or other for the sake of scoring some rhetorical points for the r/sneerclub crowd. I am not: that was the whole point of saying what I was saying.

    Now, to Joshua’s post. My point about HPMOR and Ron was that Harry could not be thinking “Ron, as a character, has no reason to exist in this story and/or in J.K. Rowling’s original series of books,” because he is a character in the story where J.K. Rowling’s original series of books does not exist. But the intent of having Harry think that was clearly to communicate that message: but it is not Harry who is communicating that message, it is the author. Like, if I had a character in a book lampshade a certain plot contrivance by thinking something like “geez, if this were a story, the author would have really pulled that one out of nowhere!” you would not be correct to ascribe the thought “this plot element is contrived” to the character, although that thought is what the author is communicating by having that character think it.

    I’m not really interested in whether David Gerard’s article is a great criticism of AI risk, more whether it’s a criticism worth responding to. The fact that you had to respond to it in such detail kind of proves my point re: that. Your points re: upvotes and the relative frequency of these articles just shows what I’ve admitted all along, which is that r/sneerclub is primarily a place for shooting the shit and making fun of people. My point is more that it’s pretty anti-intellectual and ad hoc to ignore every criticism that comes from a place like that just because it isn’t primarily a place for that sort of thing, especially when (as u/queerbees pointed out) tons of sneering goes on here all the time.

    Also, as to your point about strawmen: nobody in the r/sneerclub thread said that Scott Aaronson had that view of IQ. They ascribed that view to Scott Alexander, and presumed that Scott Aaronson was at least sympathetic to it given his failure to condemn it in his post: but they were mostly frustrated with him for other reasons (see points 1 and 2). Most of the thread isn’t about Scott Aaronson, it’s about Scott Alexander. Whether what they say is a strawman of Scott Alexander I don’t know, but I’m inclined to believe that it isn’t given my previous negative experience with the rationalist community.

    Finally, the *nerd voice* thing. I actually find that to be a pretty funny joke, and I’m sort of flabbergasted that you seem to think it equivalent to a cartoon middle school bully actually making fun of someone with a real live nerd voice. First of all, writing *nerd voice* to imitate mocking someone with a nerd voice in real life is obviously supposed to be sort of humorous in a way that I have a hard time believing you don’t get. Second, it’s always made in response to people who tenaciously bother the r/sneerclub regulars by demanding they engage in a kind of internet argument where if the other person doesn’t want to continue for whatever reason, the initiator of the argument can claim victory. The only appropriate response to that sort of person is to refuse to engage with them entirely. We see something like this in the u/queerbees post that Scott discussed: u/queerbees’ interlocutor was placing the burden of proof re: the value of science studies in such a manner that no matter what u/queerbees said, the interlocutor would always be able to declare victory. All you can do is disengage with them in such a way that makes it clear that you’re not accepting their standards. The fact that I have to explain this kind of flabbergasts me: frankly, I have a lot more respect for the person doing the *nerd voice* than the person trying to beat him in an internet argument.

    In fact, I’m getting the sense that somebody should take me aside and mock me with a *nerd voice* right about now, to be completely honest. I’m clearly not making any headway with you all, so the only reason for me to continue is to be the person who lasted the longest with an internet argument, which is a “depressingly low distinction” if there ever was one. So I think I’ll peace out for now: thanks to Joshua for being so game to discuss this stuff with me.

  183. Atreat Says:

    tb #182

    “Many people in this thread seem to believe that I am being disingenuous, or somehow trying to trap scott into saying something or other for the sake of scoring some rhetorical points for the r/sneerclub crowd.”

    No, I think there is nothing in this thread including my previous comment that would warrant this belief. That’s all internal to you man.

    BLANDCorporatio #181, Fine. You think I’m overemphasizing the malice or harmful effects of SneerClub. I’ll accept that. I can’t speak for others, but they certainly have bothered me little.

    But all this is in response to tb lamenting that Scott and others were insufficiently charitable or open to “criticism” that amounts to little more than mocking of nerds. I find that chutzpah remarkable. So I’ll put “bullying” down and pick up “mocking” and repeat, I find it remarkable that others (tb) would sincerely chastise Scott and others for being insufficiently charitable to it.

  184. sevenfive Says:

    For those curious about the ringleader of SneerClub’s day job, I think this comment is a good taste:

    https://www.reddit.com/r/SneerClub/comments/7633el/not_the_critic_who_counts/doic134/

  185. BLANDCorporatio Says:

    irt. tb #182:

    To be clear, just because I tone-police Scott and Atreat about the use of “bullying” doesn’t mean I take sneerclub seriously. I don’t. It seems to be a place to “kick the shit”, vent steam, or play contrarian for contrarianness’ sake, much like the Flat Earth Society. If there is anything valuable on sneerclub, it’s a case of a stopped clock finding a nut once in a while.

    Just to be clear.

    irt. Atreat #183:

    Fair enough.

    Cheers.

  186. Queerbees' boyfriend Says:

    If /r/sneerclub linking to this blog counts as bullying, wouldn’t the comments and links here about /u/queerbees also count as bullying?

  187. Scott Says:

    Queerbees’ boyfriend #186: Isn’t the term you’re looking for “self-defense”? I’d never heard of queerbees or the other sneerers, and had never said a word against them before they lobbed a large number of ad-hominems at me and some of my friends.

  188. Joshua Zelinsky Says:

    tb # 182 “I mostly want to ignore the discussion regarding “bullying” and “nerd bona-fides” because I find it really stupid”

    The second of these topics is a topic which only came up because you brought it up. I for one will be perfectly happy to drop it.

    “Now, to Joshua’s post. My point about HPMOR and Ron was that Harry could not be thinking “Ron, as a character, has no reason to exist in this story and/or in J.K. Rowling’s original series of books,” because he is a character in the story where J.K. Rowling’s original series of books does not exist. But the intent of having Harry think that was clearly to communicate that message: but it is not Harry who is communicating that message, it is the author. Like, if I had a character in a book lampshade a certain plot contrivance by thinking something like “geez, if this were a story, the author would have really pulled that one out of nowhere!” you would not be correct to ascribe the thought “this plot element is contrived” to the character, although that thought is what the author is communicating by having that character think it.”

    So, I strongly disagree here, and I think part of the problem is that you are not distinguishing between breaking the fourth wall (e.g. Rocky and Bullwinkle turning to talk to the audience) and knocking on the fourth wall, where a line can be reasonably interpreted in character but also may have a meaning to an audience.

    “I’m not really interested in whether David Gerard’s article is a great criticism of AI risk, more whether it’s a criticism worth responding to. The fact that you had to respond to it in such detail kind of proves my point re: that. Your points re: upvotes and the relative frequency of these articles just shows what I’ve admitted all along, which is that r/sneerclub is primarily a place for shooting the shit and making fun of people. My point is more that it’s pretty anti-intellectual and ad hoc to ignore every criticism that comes from a place like that just because it isn’t primarily a place for that sort of thing, especially when (as u/queerbees pointed out) tons of sneering goes on here all the time.”

    So here’s part of what you may be missing: we all have a limited amount of time and resources, and there are a lot more sources out there. It is possible that some interview with a random celebrity in People Magazine will happen to include a highly profound statement, but it would be a waste of my time to read People Magazine. Sure, sneerclub probably has a better useful content density than People and if I restricted to just the posts by David, it would probably be higher, but that doesn’t make it useful. And that’s before we get to the fundamentally, emotionally toxic nature of their entire approach.

    “Also, as to your point about strawmen: nobody in the r/sneerclub thread said that Scott Aaronson had that view of IQ. They ascribed that view to Scott Alexander, and presumed that Scott Aaronson was at least sympathetic to it given his failure to condemn it in his post: but they were mostly frustrated with him for other reasons (see points 1 and 2). Most of the thread isn’t about Scott Aaronson, it’s about Scott Alexander. Whether what they say is a strawman of Scott Alexander I don’t know, but I’m inclined to believe that it isn’t given my previous negative experience with the rationalist community.”

    So, I don’t know what your previous experiences have been like, although I’m not sure how much personal anecdote should matter here, but I have to say this is almost the exact opposite of my own experience with rationalists. Many rationalists are involved in the EA movement, one of whose major goals is maximizing the number of human lives saved per dollar donated. That’s something which by nature focuses primarily on saving lives in the developing world, many of whom will be, by many metrics, substantially less intelligent due to disease burden growing up, lack of nutrition, and other issues. The sheer fact that this overlap exists completely undermines any claim about the rationalist community being people who somehow think that only intelligent people have value. It is a particularly awful claim to make in the case of Scott Alexander given that what he does in practice for work puts him every single day helping all sorts of people throughout the intelligence spectrum.

    “Finally, the *nerd voice* thing. I actually find that to be a pretty funny joke, and I’m sort of flabbergasted that you seem to think it equivalent to a cartoon middle school bully actually making fun of someone with a real live nerd voice. First of all, writing *nerd voice* to imitate mocking someone with a nerd voice in real life is obviously supposed to be sort of humorous in a way that I have a hard time believing you don’t get. Second, it’s always made in response to people who tenaciously bother the r/sneerclub regulars by demanding they engage in a kind of internet argument where if the other person doesn’t want to continue for whatever reason, the initiator of the argument can claim victory. ”

    So, it is possible that we have different norms about how to interpret things. The only humor value I see in this is in the complete let’s-laugh-at-the-outgroup fashion. As a rule of thumb, if you are mocking yourself in a conversation there may be some sort of genuine humor with your interlocutor, but if your humor is directed solely at the statements made by someone you disagree with, your humor is at best wildly unproductive. Moreover, and this is important, the failure to be willing to engage in serious conversations with people is a failure. If people can’t spend a little time being willing to engage with people they disagree with (as you have been very willing to do), then that’s a problem on their end. And if they lack the time to substantially do so, they should just say so outright. To respond in a completely mocking fashion doesn’t help anyone whatsoever.

  189. Sniffnoy Says:

    My response to TB (and those arguing with him), though they may have left:

    In all the argument with TB, I feel like perhaps the most important point has been missed here. And that point is that, as Eliezer Yudkowsky would say, you can’t learn physics by studying psychology.

    That is to say: The genetic fallacy… is a fallacy. Ad hominem… is not valid reasoning. Bulverism, psychologizing — these things are not argument; these things are failure modes of argument.

    If, in an argument, you start talking about the other person’s motivation for arguing what they do, you are making a big mistake. Remember the actual question at hand! Hug the query!

    Now, OK, in truth I overstated my point above… I don’t actually want to say that this truly has no place in argument. But it’s just such a tempting failure mode for people that I would advise you really, really, know what you’re doing before engaging in such a thing. This should not be something you’re doing anywhere near routinely. These are things that need to have giant flashing red warning lights on them because they’re almost always a mistake — a substitution of a different question for the one at hand — but people so often think they’re relevant.

    (Like, TB… I notice you’ve objected here when other commenters have made the argument about you rather than about the original point! So, y’know… generalize that attitude. 😛 )

    In addition, as Josh has mentioned, this sort of thing poisons discussion, and reduces your own ability to think about such things. If you truly must — which, again, you probably don’t — at least do it politely. Yes, this is possible. (If you thought someone worth talking to was being irrational about a matter, how would you put it to them?)

    And yes, asking why people don’t endorse particular statements definitely falls under this. More generally, it is unreasonable to demand that people make your particular favorite point. People will argue how they want to. Psychologizing based on what they omit is not only getting away from the actual point but, also, can be used to show just about anything about the person in question.

    (Again — if you’re interested in honest discussion and you think someone’s omitted something, how would you mention that? Here’s a phrase you may find useful: “I’d like to expand on the point you’ve just made…”)

    Finally… I feel dirty answering this, since this is a dumb discussion that should never have come up and a response to an unreasonable demand, but since in this case there’s an easy answer that everyone else has missed, I’ll do it… TB, if you want an example of a post where Scott Alexander explicitly repudiates the view you attribute to him, here’s one, and here’s another post that’s also related. I hope we can now put that stupid question, which was never relevant to anything in the first place, to rest.

    (Yeah, I guess I’m not exactly being polite here. I’m not seriously worried about that causing problems in this case. 😛 But once again — politeness is helpful, but it’s not nearly as important as hugging the query. It just happens that people often fail at both together and so sometimes these get mixed up a bit.)

    (Also, unrelatedly, Josh, your comments would really be more readable if you used the blockquote tag…)

  190. TheoryA Says:

    Scott #166

    Should I infer anything from the fact that you now can’t remember what the problem was or why anyone would care about it? 😉

  191. gentzen Says:

    TheoryA #165: Thanks, this makes it sufficiently clear what you have in mind. Your proposed blog title “Why should we be excited by the last decade of Theory B research” reminds me of Scott’s important “Why philosophers should care about computational complexity” essay.

    That title is nice, because it focuses on “philosophers” as a specific target audience (instead of a generic “we”), and succinctly names its topic as “computational complexity”. The topic “last decade of theory B research” is slightly too focused on communicating to people like me (who know the basics of those subjects, but are by no means involved in bleeding edge research) that the question should “be answered by the leading lights” and specifically address the question why “we should care about *current* research” in those topics (with a focus on *current*).

    (There is actually more I want to write, but I have no time left now. I have the books “Codes and Automata” (2010) by Berstel, Perrin, and Reutenauer and “Automatentheorie und Logik” (2011) by Hofmann and Lange, the top three Google results for “DAG automata” (2015, 2004, 2017), a related “Structurally cyclic petri nets” (2015) article, and wonder how to formulate my question whether understanding the relevance of recent advances for problems like the road coloring problem, the modal mu calculus and parity games, or DAG automata and Petri nets would not still point back to questions about the basics, rather than about *current* breakthroughs or open issues.)

  192. Atreat Says:

    Sniffnoy #189,

    What you describe is the ideal. It is a very pretty ideal and I agree it is a worthy goal. The big benefit is higher signal to noise ratio when two interlocutors are striving for mutually beneficial two way communication.

    However, real human communication (especially on the internet among anonymous persons) rarely works this way. Especially when the *motivation* of one or more of the interlocutors is orthogonal to beneficial two way communication. It would be so much easier if we could instantly divine the motivation and good faith investment of our communication partners. Unfortunately, we can’t.

    I submit that any person arguing on the internet who continues to offer good faith no matter the behavior of his communication partner is headed for ruin.

    In the case of SneerClub, the absence of good faith is a fact defined by the very premise of the club. Probably better to just leave them be with nary a mention. My friend Scott, flummoxed by the pathology of such a group, responded. tb took him to task for being unable to find a worthy criticism in a group that offered none and whose answers amount to juvenile mocking when asked to proffer such.

    I can’t divine the motivation of tb, but I assumed good faith and responded with what I deem the obvious asymmetry of taking someone to task for giving insufficient charity to the people mocking him. People acting in what for me seems obvious bad faith.

    Please note: it was tb who brought the question of his own motivation into this conversation. Not anyone else. This is an important point. To be honest, it does increase my wariness of offering more good faith as does the continued silence in response to the point I’ve made in regard to the asymmetry of charity he is asking of Scott.

    In summary, deciphering the motivation of one’s communication partners is not a priori a bad idea or a mistake. In fact, it can be essential to understanding the parameters of the communication.

    When a bully hits you in the face it is a direct form of human communication. Motivation is important. You could respond by pontificating about game theory and tit for tat in the hopes of trying to convince your communication partner to change their strategy. Or you could hit back. Or you could walk away. Being deliberately obtuse, in such a situation, about the motivations of your communication partner so as to uphold the ideal of beneficial two-way communication would be rather self-defeating.

  193. Scott Says:

    TheoryA #190:

      Should I infer anything from the fact that you now can’t remember what the problem was or why anyone would care about it?

    Yes, you should infer that my memory is getting worse and worse. 🙂

    The problem was something like: given a matrix A and a vector v, to decide whether A^t v belongs to a particular halfspace for all positive integers t. I don’t remember what Theory B thing motivated it. But I feel good that, unlike in many cases, I at least remember more than enough to reacquire my previous knowledge if I ever needed it.
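    To make the problem concrete: for a finite bound on t it is just iterated matrix-vector multiplication, though deciding it for *all* t is far harder (it is closely related to the Positivity problem for linear recurrences). The following sketch, with a toy rotation matrix and halfspace of my own choosing, only checks up to a finite horizon:

    ```python
    # Illustrative (bounded) check of the question Scott describes:
    # does A^t v lie in the halfspace {x : <c, x> >= 0} for t = 1..T?
    # This is NOT a decision procedure for all t, just a finite test.
    import numpy as np

    def in_halfspace_up_to(A, v, c, T):
        """Return (True, None) if <c, A^t v> >= 0 for all t = 1..T,
        else (False, t) for the first violating t."""
        A = np.array(A, dtype=float)
        x = np.array(v, dtype=float)
        c = np.array(c, dtype=float)
        for t in range(1, T + 1):
            x = A @ x                 # advance the orbit: x = A^t v
            if c @ x < 0:
                return False, t       # orbit leaves the halfspace at time t
        return True, None

    # Rotation by 1 radian: <c, A^t v> = cos(t), which first goes
    # negative at t = 2, so the orbit escapes the halfspace quickly.
    theta = 1.0
    A = [[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]]
    print(in_halfspace_up_to(A, [1.0, 0.0], [1.0, 0.0], 100))
    ```

    For a rotation the escape is easy to see; the subtlety in the general problem is that for other matrices the orbit can hug the boundary of the halfspace in ways no finite simulation settles.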

  194. BLANDCorporatio Says:

    irt. gentzen #191:

    “[I] wonder how to formulate my question whether understanding the relevance of recent advances for problems like […] the modal mu calculus and parity games […] would not still point back to questions about the basics, rather than about *current* breakthroughs or open issues”

    I’m not sure I understand the difficulty here. On the one hand it seems as if you and TheoryA aren’t convinced about the relevance of fields such as, say, mu-calculus/parity games to whatever Theory A might mean (algorithmic stuff). If that is indeed the difficulty, then indeed any relevance needs to be argued from the basics, imo.

    But on the other hand, as an interested layman, for whatever it’s worth I would suggest a couple avenues of finding interest in new recent developments in parity games/mu calculus.

    The biggest recent thing is the proof that parity games can be solved in quasi-polynomial time (there’s a paper by Calude about this, followed by one from Jurdzinski which updates his progress measures approach, followed by one from Schewe that puts Calude’s work in a progress-measure form). This may be interesting for theoretical computer science because it’s yet another potentially NP-intermediate problem that we now know is in quasipolynomial time (just like graph isomorphism).

    On the more algorithmic side, there’s work from Krishnendu Chatterjee about more efficiently solving parity games with few colors; for example https://arxiv.org/abs/1410.0833. This may be algorithmically interesting because it makes use of/expands on some graph decomposition techniques from Henzinger (a summary I found readable is this). Judging by Chatterjee and Henzinger’s output, those techniques have proven very productive in verification and may be of more general interest.

    Cheers.

  195. Stella Says:

    I’m a bit late to this post and don’t really have any input on the topic, but I just wanted to echo Alex Z #42. Maryanthe was probably the best teacher I ever had (decent competition from Prof Babai and Prof Razborov though, who are both incredible as well), and I recommend her course to anyone in mathematics, regardless of their personal interests. She’s also, on a personal level, just really incredibly nice.

    The model theory course I took with her changed my life by changing the way I think, on a rather fundamental level. She also taught me far more topology than my actual topology course.

  196. Sniffnoy Says:

    Atreat: I don’t think we’re really disagreeing so much! That was my point, that this is what /r/SneerClub seems to be primarily doing and that this is the fundamental problem with what they do. Others may have been baited into it here but fundamentally once that sort of thing starts you’re just not going to have a useful discussion. The thing to do is to not do it in the first place. And if someone else starts doing it they’re probably not worth arguing with. And if you do decide to argue with them, point out the problem directly, rather than letting such things stand.

  197. grrrr Says:

    @ Atreat #167 “You state that nerds are not a marginalized group or more accurately you insist *you* are not marginalized just because you identify as a nerd. Again, we see a tendency to deny empathy or charity towards your own self-identified group. Do you deny that nerds are frequently a target of ridicule and bullying in our society?”

    I mean, members of r/incels believe themselves to be frequent targets of ridicule and bullying by the rest of society for being ugly and failing to get laid, but should we define them as marginalized?

  198. Atreat Says:

    grrrr #197, what does *defining* have to do with it? In fact, *are* they frequent targets of ridicule? In fact, *do* they frequently get bullied by the rest of society? I know nothing about this reddit group so why ask me? Moreover, what illumination do you imagine an answer would give?

    Maybe you think using the word ‘marginalized’ confers a moral ticket to be cashed in or something? Did you happen to see a recent South Park where a character learns that he shares some tiny amount of neanderthal DNA? Might want to check it out.

  199. Jon K. Says:

    Important conversation going on here on the difference between sneering, bullying, satire, constructive criticism, etc.

    Imagine if humans/scientists could strip out the emotion and ego when debating/discussing science… maybe psychedelics would help?

  200. Scott Says:

    Jon K.: Err, ego and emotion aren’t exactly unique to conversations about science.

    Having said that, I would be extremely interested in a firsthand report from anyone who found that psychedelics helped them do or discuss science.

  201. grrrr Says:

    @ Atreat #198 defining matters because a marginalized or oppressed group is one that is, as a whole, considered separate from the rest of society and socially/economically/politically disadvantaged. Nerds do not make up a coherent group that is systemically disenfranchised. Self-proclaimed “nerds” believing themselves to be members of a societally oppressed group is often concerning because:
    1) “Nerd” has the implications of white, straight, and male. (While not all self-proclaimed ‘nerds’ are all of these things, this is the vast majority of what is considered “nerd culture,” and the nerd community has often shown itself to be exclusionary towards women.)
    2) many of the grievances of such self-proclaimed nerds (those who believe that nerds are an oppressed group) seem to overlap with r/incels — mostly, that they are underprivileged because women refuse to give them the time of day whilst ogling and responding to the advances of “jocks”. (e.g. see scott aaronson referring to other men scoring with women as “neanderthals” on the Walter Lewin comment thread) Obviously this is bad because it perpetuates male entitlement to sex and women’s attention (idk if you need me to explain this further or if it’s worth my time).
    3) “Nerd” implies an above average level of expertise concerning a subject or academic ability, ignoring the idea that people with disabilities that prevent them from achieving academically may also be bullied for the same reasons nerds are.
    4) It’s really immature to break the world down into “smart nerds” and “dumb jocks” as if we live in some really poorly written movie about middle school. And, again implies that smart people are picked on and dumb people are not. (Whilst also implying that women are dumb and shallow and therefore will always choose the dumb jock over the smart nerd.)

    tldr: stating that nerds are not marginalized does not equal not having empathy for victims of bullies. Especially when “nerd” connotes victims of bullies who are often exclusively white, straight males who are academically or intellectually gifted (or above average, anyway).

  202. O. S. Dawg Says:

    Scott #4: Here is an indirect link to one of Zeilberger’s essays you might find a bit less “loony”:

    http://experimentalmath.info/blog/2013/12/doron-zeilberger-comments-on-experimental-mathematics-in-ams-notices/

  203. mjgeddes Says:

    Scott,

    Have you seen the latest from DeepMind? This is really objectively ‘incredible’! AlphaGo Zero started with ZERO knowledge of Go and *in just a matter of days* has surpassed the best previous version of their Go-playing program, entirely teaching itself the game:

    https://deepmind.com/blog/alphago-zero-learning-scratch/

  204. Scott Says:

    grrrr #201: I agree, it’s extremely important that we don’t define nerds as being “marginalized.”

    If we did, we might have to pretend to tolerate their hilarious whining about bullies and suicidal depression and the culture redefining them as scary creeps and whatever else they falsely imagine is making their lives miserable.

    We might even need to let nerds get more than two words in, before we interjected to explain why their alleged “suffering” is Problematic and doesn’t count, unless it can be recategorized as a different form of suffering that does count. (E.g., are the nerds in question also trans, female, disabled, or poor? OK, then we could run with that. But give us some reason not to throw these people on the trash heap.)

    And needless to say, if we started down this road, letting our humanity get in the way of our politics, it could take away attention and resources from the officially oppressed groups, in the strictly and relentlessly zero-sum power struggle of progressive politics—the struggle that reflects such glorious credit on our shared cause, and that’s done so much to help us win elections and defeat right-wing nationalists.

    More broadly, I believe that if history teaches us anything, it’s that morality consists of standing up for those who are suffering, only after their particular form of suffering has been recognized and sanctioned by the trend-setting authorities. If you stand up too early—e.g., for women pursuing academic careers in 1910, or gays in 1950, or kulaks in the 1930s USSR—that just makes you weird and gross and unpopular, am I right?

  205. gentzen Says:

    BLANDCorporatio #194: Those results are nice indeed and have applications beyond verification. However, you wrote:

    Judging by Chatterjee and Henzinger’s output, those techniques have proven very productive in verification and may be of more general interest.

    And “verification” here is exactly what I mean by “basics”. In order to explain the relevance of results about the modal mu calculus and parity games, you need to explain why automata on infinite structures are important for verification, and how the various acceptance conditions for those automata are related to parity games.

  206. BLANDCorporatio Says:

    irt. gentzen #205:

    As I’ve said, I’m not sure I understand the question.

    One potential meaning was that “parity games/modal mu-calculus are useless [to whatever field I’m interested in]”, to which my reaction is to reach for the metaphorical gun 😛 Anyway, counting to ten and calming down: of course they won’t be interesting to every field out there, but I can think of a few recent results that could transfer to other domains (and I listed those).

    If however the difficulty is explaining the general relevance of, say, parity games to some other field, then yes this is a question about the basics (something I also mentioned in my post #194).

    To answer this second difficulty in a blog comment I can only offer a quick and dirty intuitive sketch.

    Systems verification, in this sketch, is about checking whether a system whose lifetime is infinite eventually provides appropriate responses to the challenges the environment throws at it. (We also allow the system to “miss” a finite number of those challenges.)

    If that’s the problem you are after, then parity games fit very well, if the environment challenges are odd numbers, and appropriate responses are higher even numbers.

    That intuition breaks down a little here, as challenges and appropriateness of responses don’t usually fit on a linear hierarchy, so one has to get down and dirty with the math to show that this time we can get away with it.

    And of course one can look at how to convert combinations of Büchi/co-Büchi objectives to parity games, at how the modal mu-calculus is at least as expressive as temporal logics such as LTL or CTL* used in verification, and at the easy equivalence between model checking in the modal mu-calculus and solving parity games. Together these give an idea that, for a decently large class of formulas you’d want to check about a system, there exists a parity game formulation such that solving the game is checking the formula.

    In general, of course, system verification may have stricter requirements. In particular, the lifetime of the system may be bounded, and the time it is allowed to take before providing an appropriate response may be limited. Budget/resource constraints may also appear, as well as non-determinism. All of these can be, and have been, modelled by various extensions of parity games (mean-payoff games, energy games, stochastic games).

    Parity games are the simplest of these and in that sense a test case of the most we can expect to get with the least effort; however hard they may be, the others are harder. BUT, some techniques from parity games do apply to at least some of the other verification games, such as, say, the strategy improvement algorithms.
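    To make the parity-game sketch above concrete, here is a minimal illustration (in Python, with made-up node names) of Zielonka’s classical recursive algorithm for computing the two players’ winning regions. This is an editorial sketch, not code from the comment thread; it assumes the standard convention that player 0 wins a play iff the highest priority occurring infinitely often is even, and that every node has at least one outgoing edge.

    ```python
    def attractor(nodes, edges, owner, player, target):
        """Nodes from which `player` can force the play into `target`,
        within the subgame induced by `nodes`."""
        attr = set(target) & nodes
        changed = True
        while changed:
            changed = False
            for v in nodes - attr:
                succs = edges.get(v, set()) & nodes
                if owner[v] == player:
                    # player chooses the move: one successor in attr suffices
                    if succs & attr:
                        attr.add(v)
                        changed = True
                elif succs and succs <= attr:
                    # opponent chooses: every successor must lead into attr
                    attr.add(v)
                    changed = True
        return attr

    def zielonka(nodes, edges, owner, priority):
        """Return (W0, W1): the sets of nodes from which player 0,
        respectively player 1, has a winning strategy."""
        if not nodes:
            return set(), set()
        p = max(priority[v] for v in nodes)
        i = p % 2                         # the player who "likes" priority p
        A = attractor(nodes, edges, owner, i,
                      {v for v in nodes if priority[v] == p})
        W = list(zielonka(nodes - A, edges, owner, priority))
        if not W[1 - i]:                  # opponent wins nothing in the subgame
            W[i], W[1 - i] = set(nodes), set()
            return W[0], W[1]
        B = attractor(nodes, edges, owner, 1 - i, W[1 - i])
        W = list(zielonka(nodes - B, edges, owner, priority))
        W[1 - i] |= B
        return W[0], W[1]

    # Toy game: a <-> b. The only play cycles through priorities {2, 1};
    # the highest priority seen infinitely often is 2 (even), so player 0
    # wins from everywhere, regardless of who owns which node.
    nodes = {"a", "b"}
    edges = {"a": {"b"}, "b": {"a"}}
    owner = {"a": 0, "b": 1}
    priority = {"a": 2, "b": 1}
    print(zielonka(nodes, edges, owner, priority))
    ```

    The recursion mirrors the intuition in the comment: peel off the highest priority, see who can force the play to keep visiting it, and recurse on what remains. Its worst case is exponential, yet it tends to be one of the fastest solvers in practice.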

    Cheers.

  207. fred Says:

    Scott #204

    A bit of a joke, but I do think that the solution to some “identity politics” will be a typical trick of CS: introduce an extra level of indirection to human interaction.
    Once we’re all meeting only through VR and made up avatars, color and gender will truly become relative!

  208. grrrr Says:

    @Scott #204 “We might even need to let nerds get more than two words in, before we interjected to explain why their alleged “suffering” is Problematic and doesn’t count, unless it can be recategorized as a different form of suffering that does count.”

    Not all nerds, not all victims of bullies, but misogynistic assholes who believe that women not responding to them is suffering and believe their inability to get laid is due to discrimination against nerds, their “suffering” is problematic and no, it doesn’t f***king count.

  209. Scott Says:

    grrrr #208: Alas, I don’t have time for another debate about this—particularly since the last time I commented, it took me months to sort through all the responses. And I have research to do, students to mentor, and two kids to raise.

    But since you keep bringing up “nerds who can’t get laid” in comment after comment, let me simply remark: it seems to me that, any time whatsoever you look at thousands or millions of people who tell you that they’re suffering from the same problem, and that problem is something you wouldn’t volunteer to suffer from yourself, and you’re tempted to say something like “their ‘suffering’ is problematic and no, it doesn’t f***king count”—and I don’t care whether we’re talking about

    – Occupy Wall Street protesters complaining that they can’t afford rent or find decent jobs
    – Black Lives Matter protesters
    – Rural American whites afraid of automation, free trade, and demographic changes destroying their communities
    – Palestinians living in fear that their houses will get demolished
    – Israelis living in fear of terrorism
    – Germans in 1932 living in fear of poverty and hyperinflation
    – People battling inborn pedophilic urges
    – Insert whatever, for you, is the most politically explosive example you can think of

    —and yes, this is an ideal to which I’ve failed to live up many times, but against which I continue to measure myself: again I say, any time you declare that the suffering in question is “problematic and doesn’t f***king count,” that’s the time to step back and examine whether your ideology has overtaken your humanity. This is probably my most fundamental moral belief.

  210. Atreat Says:

    Scott #209,

    “Insert whatever, for you, is the most politically explosive example you can think of”

    – Adam Lanza, the Sandy Hook killer, who undoubtedly suffered (and, if you believe in karma and rebirth, continues to suffer)

    – Osama bin Laden and ISIS killers, who must have experienced and are continuing to experience monstrous suffering

    ……………

    I know those don’t qualify as millions and millions, but I think the point remains: all sentient beings suffer and someone striving for the bodhisattva ideal should not discriminate in exercising their compassion.

    No doubt, people will look at the examples above and recoil in horror and incredible anger at the mere suggestion that compassion should be offered to the likes of the above. And woe be to anyone who would suggest something so monstrous!!! for the wrath of the intellectually bankrupt SJW’s is vast.

    Alas, His Holiness the Dalai Lama, the poor guy, has suggested and believes the above. Which for some intellectually bankrupt SJW’s who purport to admire HHDL might cause a short circuit in their limited anger wiring.

    Scott, I’ll bet you didn’t know that your fundamental moral belief is very close to a declaration of striving for the Bodhisattva ideal. My heart is bursting with admiration and celebration that you reached this without much of any training. I think you should read Shantideva if you ever find a bit of time. You’ll find there are many other brothers and sisters who think as you and devote their life to strengthening this muscle for compassion in the effort to dispel the miseries of the world.

  211. Scott Says:

    Atreat #210: Thank you.

    One can, of course, acknowledge the internal torment that led a mass shooter or a terrorist to his crime, while still condemning the crime itself in the strongest imaginable terms. Indeed, what else can we do?

    Here’s the central point on which I part ways from “grrr”: simply talking online about how you, or others like you, have suffered or are suffering is not problematic and not a crime. I find it monstrous even to place it in the same moral universe as Adam Lanza or Elliot Rodger or whatever. Yes, if you demonize an entire demographic group who you blame for causing your suffering, or express a wish for that group to come to harm, of course mere talking can become bad. In general, though, suffering people trying to comfort each other online, or help each other find solutions, strikes me as not merely not bad but praiseworthy.

    And as for the widespread impulse to condemn people, for bringing up politically inconvenient suffering? Steven Weinberg famously wrote, “with or without religion, good people will do good and evil people will do evil. But for good people to do evil, that takes religion.” Inspired by him, I would say: for good people to condemn suffering people for trying to talk about their suffering … that takes an ideology.

Leave a Reply