Archive for the ‘Nerd Interest’ Category

The 8000th Busy Beaver number eludes ZF set theory: new paper by Adam Yedidia and me

Tuesday, May 3rd, 2016

I’ve supervised a lot of great student projects in my nine years at MIT, but my inner nerdy teenager has never been as personally delighted by a project as it is right now.  Today, I’m proud to announce that Adam Yedidia, a PhD student at MIT (but an MEng student when he did most of this work), has explicitly constructed a one-tape, two-symbol Turing machine with 7,918 states, whose behavior (when run on a blank tape) can never be proven from the usual axioms of set theory, under reasonable consistency hypotheses.  Adam has also constructed a 4,888-state Turing machine that halts iff there’s a counterexample to Goldbach’s Conjecture, and a 5,372-state machine that halts iff there’s a counterexample to the Riemann Hypothesis.  In all three cases, this is the first time we’ve had a reasonable explicit upper bound on how many states you need in a Turing machine before you can see the behavior in question.
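
To make “halts iff there’s a counterexample” concrete, here’s a minimal Python sketch of the Goldbach machine’s logic (mine, purely for illustration; Adam’s actual 4,888-state machine implements this same loop directly in Turing-machine states, via Laconic):

```python
def is_prime(n):
    """Trial division; speed is beside the point here."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

n = 4
while True:
    # Goldbach's Conjecture: every even n >= 4 is a sum of two primes.
    if not any(is_prime(p) and is_prime(n - p) for p in range(2, n + 1)):
        break  # found an even number that isn't a sum of two primes: halt
    n += 2

# Control reaches this line iff Goldbach's Conjecture is false.
print(f"Counterexample to Goldbach: {n}")
```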

Here’s our research paper, on which Adam generously included me as a coauthor, even though he did the heavy lifting.  Also, here’s a github repository where you can download all the code Adam used to generate these Turing machines, and even use it to build your own small Turing machines that encode interesting mathematical statements.  Finally, here’s a YouTube video where Adam walks you through how to use his tools.

A more precise statement of our main result is this: we give a 7,918-state Turing machine, called Z (and actually explicitly listed in our paper!), such that:

  1. Z runs forever, assuming the consistency of a large-cardinal theory called SRP (Stationary Ramsey Property), but
  2. Z can’t be proved to run forever in ZFC (Zermelo-Fraenkel set theory with the Axiom of Choice, the usual foundation for mathematics), assuming that ZFC is consistent.

A bit of background: it follows, as an immediate consequence of Gödel’s Incompleteness Theorem, that there’s some computer program, of some length, that eludes the power of ordinary mathematics to prove what it does, when it’s run with an unlimited amount of memory.  So for example, such a program could simply enumerate all the possible consequences of the ZFC axioms, one after another, and halt if it ever found a contradiction (e.g., a proof of 1+1=3).  Assuming ZFC is consistent, this program must run forever.  But again assuming ZFC is consistent, ZFC can’t prove that the program runs forever, since if it did, then it would prove its own consistency, thereby violating the Second Incompleteness Theorem!
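
In code, the program in that argument would look something like the following sketch, where the stub stands in for a genuine proof checker for first-order logic plus the ZFC axiom schemas; writing that checker is exactly the thousands-of-lines part, so this is a schematic, not an implementation:

```python
from itertools import count, product

# Some finite alphabet for encoding proofs; the exact choice is immaterial.
ALPHABET = "0123456789()=,&|~AEx"

def is_zfc_proof_of_contradiction(s: str) -> bool:
    """Stub.  A real version would check whether s encodes a valid
    ZFC proof of a contradiction (say, of 0 = 1).  Implementing this
    checker is the hard part whose size the state counts measure."""
    return False

# Enumerate every finite string in length-lexicographic order.
for length in count(1):
    for chars in product(ALPHABET, repeat=length):
        if is_zfc_proof_of_contradiction("".join(chars)):
            raise SystemExit("ZFC is inconsistent!")  # halt
# If ZFC is consistent, the loops above never terminate.
```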

Alas, this argument still leaves us in the dark about where, in the space of computer programs, the “Gödelian gremlin” rears its undecidable head.  A program that searches for an inconsistency in ZFC is a fairly complicated animal: it needs to encode not only the ZFC axiom schema, but also the language and inference rules of first-order logic.  Such a program might be thousands of lines long if written in a standard programming language like C, or millions of instructions if compiled down to bare-bones machine code.  You’d certainly never run across such a program by chance—not even if you had a computer the size of the observable universe, trying one random program after another for billions of years in a “primordial soup”!

So the question stands—a question that strikes me as obviously important, even though as far as I know, only one or two people ever asked the question before us; see here for example.  Namely: do the axioms of set theory suffice to analyze the behavior of every computer program that’s at most, let’s say, 50 machine instructions long?  Or are there super-short programs that already exhibit “Gödelian behavior”?

Theoretical computer scientists might object that this is “merely a question of constants.”  Well yes, OK, but the origin of life in our universe—a not entirely unrelated puzzle—is also “merely a question of constants”!  In more detail, we know that it’s possible with our laws of physics to build a self-replicating machine: say, DNA or RNA and their associated paraphernalia.  We also know that tiny molecules like H2O and CO2 are not self-replicating.  But we don’t know how small the smallest self-replicating molecule can be—and that’s an issue that influences whether we should expect to find ourselves alone in the universe or find it teeming with life.

Some people might also object that what we’re asking about has already been studied, in the half-century quest to design the smallest universal Turing machine (the subject of Stephen Wolfram’s $25,000 prize in 2007, to which I responded with my own $25.00 prize).  But I see that as fundamentally different, for the following reason.  A universal Turing machine—that is, a machine that simulates any other machine that’s described to it on its input tape—has the privilege of offloading almost all of its complexity onto the description format for the input machine.  So indeed, that’s exactly what all known tiny universal machines do!  But a program that checks (say) Goldbach’s Conjecture, or the Riemann Hypothesis, or the consistency of set theory, on an initially blank tape, has no such liberty.  For such machines, the number of states really does seem like an intrinsic measure of complexity, because the complexity can’t be shoehorned anywhere else.

One can also phrase what we’re asking in terms of the infamous Busy Beaver function.  Recall that BB(n), or the nth Busy Beaver number, is defined to be the maximum number of steps that any n-state Turing machine takes when run on an initially blank tape, assuming that the machine eventually halts. The Busy Beaver function was the centerpiece of my 1998 essay Who Can Name the Bigger Number?, which might still attract more readers than anything else I’ve written since. As I stressed there, if you’re in a biggest-number-naming contest, and you write “BB(10000),” you’ll destroy any opponent—however otherwise mathematically literate they are—who’s innocent of computability theory.  For BB(n) grows faster than any computable sequence of integers: indeed, if it didn’t, then one could use that fact to solve the halting problem, contradicting Turing’s theorem.
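
For readers who want to experiment, here’s a short Python simulator for one-tape, two-symbol machines, preloaded with the transition table usually quoted as Brady’s 4-state champion.  (Treat the table as an assumption copied from the standard references; if it’s right, the machine halts after exactly 107 steps, matching BB(4)=107.)

```python
from collections import defaultdict

def run(machine, max_steps=10**6):
    """Run a one-tape, two-symbol Turing machine on a blank tape.
    Returns the number of steps before halting, or None if the step
    cap is exceeded (we can't decide halting in general!)."""
    tape = defaultdict(int)            # two-way infinite tape of 0s
    state, pos = "A", 0
    for step in range(1, max_steps + 1):
        write, move, nxt = machine[(state, tape[pos])]
        tape[pos] = write
        pos += 1 if move == "R" else -1
        if nxt == "H":                 # the halting transition counts as a step
            return step
        state = nxt
    return None

# Brady's 4-state champion: (state, symbol) -> (write, move, next state).
brady = {
    ("A", 0): (1, "R", "B"), ("A", 1): (1, "L", "B"),
    ("B", 0): (1, "L", "A"), ("B", 1): (0, "L", "C"),
    ("C", 0): (1, "R", "H"), ("C", 1): (1, "L", "D"),
    ("D", 0): (1, "R", "D"), ("D", 1): (0, "R", "A"),
}
print(run(brady))  # expected output: 107
```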

But the BB function has a second amazing property: namely, it’s a perfectly well-defined integer function, and yet once you fix the axioms of mathematics, only finitely many values of the function can ever be proved, even in principle.  To see why, consider again a Turing machine M that halts if and only if there’s a contradiction in ZF set theory.  Clearly such a machine could be built, with some finite number of states k.  But then ZF set theory can’t possibly determine the value of BB(k) (or BB(k+1), BB(k+2), etc.), unless ZF is inconsistent!  For to do so, ZF would need to prove that M ran forever, and therefore prove its own consistency, and therefore be inconsistent by Gödel’s Theorem.

OK, but we can now ask a quantitative question: how many values of the BB function is it possible for us to know?  Where exactly is the precipice at which this function “departs the realm of mortals and enters the realm of God”: is it closer to n=10 or to n=10,000,000?  In practice, four values of BB have been determined so far:

  • BB(1)=1
  • BB(2)=6
  • BB(3)=21 (Lin and Rado 1965)
  • BB(4)=107 (Brady 1975)

We also know some lower bounds:

  • BB(5) ≥ 47,176,870 (Marxen and Buntrock 1990)
  • BB(6) > 7.4 × 10^36,534 (Kropitz 2010)

See Heiner Marxen’s page or the Googology Wiki (which somehow I only learned about today) for more information.

Some Busy Beaver enthusiasts have opined that even BB(6) will never be known exactly.  On the other hand, the abstract argument from before tells us only that, if we confine ourselves to (say) ZF set theory, then there’s some k—possibly in the tens of millions or higher—such that the values of BB(k), BB(k+1), BB(k+2), and so on can never be proven.  So again: is the number of knowable values of the BB function more like 10, or more like a million?

This is the question that Adam and I (but mostly Adam) have finally addressed.

It’s hopeless to design a Turing machine by hand for all but the simplest tasks, so as a first step, Adam created a new programming language, called Laconic, specifically for writing programs that compile down to small Turing machines.  Laconic programs actually compile to an intermediary language called TMD (Turing Machine Descriptor), and from there to Turing machines.

Even then, we estimate that a direct attempt to write a Laconic program that searched for a contradiction in ZFC would lead to a Turing machine with millions of states.  There were three ideas needed to get the state count down to something reasonable.

The first was to take advantage of the work of Harvey Friedman, who’s one of the one or two people I mentioned earlier who’s written about these problems before.  In particular, Friedman has been laboring since the 1960s to find “natural” arithmetical statements that are provably independent of ZFC or other strong set theories.  (See this AMS Notices piece by Martin Davis for a discussion of Friedman’s progress as of 2006.)  Not only does Friedman’s quest continue, but some of his most important progress has come only within the last year.  His statements—typically involving objects called “order-invariant graphs”—strike me as alien, and as far removed from anything I’d personally have independent reasons to think about (but is that just a sign of my limited perspective?).  Be that as it may, Friedman’s statements still seem a lot easier to encode as short computer programs than the full apparatus of first-order logic and set theory!  So that’s what we started with; our work wouldn’t have been possible without Friedman (who we consulted by email throughout the project).

The second idea was something we called “on-tape processing.”  Basically, instead of compiling directly from Laconic down to a Turing machine, Adam wrote an interpreter directly in Turing-machine states (a fixed, one-time cost of about 4,000 states), and then had the final Turing machine first write a higher-level program onto its tape and then interpret that program.  Instead of the compilation process producing a huge multiplicative overhead in the number of Turing machine states (and a repetitive machine), this approach gives us only an additive overhead.  We found that this one idea decreased the number of states by roughly an order of magnitude.
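
In symbols (with c and ℓ as illustrative stand-ins, not measured constants from the paper): if direct compilation turned each of the ℓ instructions of a program into c states, the interpreter approach replaces

$$\underbrace{c\,\ell}_{\text{direct compilation}} \quad \text{with} \quad \underbrace{\sim\!4000}_{\text{fixed interpreter}} \;+\; \underbrace{O(\ell)}_{\text{program stored on tape}},$$

since the interpreter is paid for once, and the program itself lives on the tape rather than in the state graph.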

The third idea was first suggested in 2002 by Ben-Amram and Petersen (and refined for us by Luke Schaeffer); we call it “introspective encoding.”  When we write the program to be interpreted onto the Turing machine tape, the naïve approach would use one Turing machine state per bit.  But that’s clearly wasteful, since in an n-state Turing machine, every state contains ~log(n) bits of information (because of the other states it needs to point to).  A better approach tries to exploit as many of those bits as it can; doing that gave us up to a factor-of-5 additional savings in the number of states.
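
To spell out where the log(n) comes from (this is my back-of-the-envelope version of the point, not a calculation from the paper): each state of an n-state, two-symbol machine specifies, for each of the two symbols it might read, a bit to write, a direction to move, and one of roughly n successor states, for a total of about

$$2\left(1+1+\log_2 n\right) \approx 2\log_2 n \text{ bits per state.}$$

So the naive one-state-per-bit encoding of an m-bit program squanders a log factor, and an encoding that harvests those bits should need only on the order of m/log m states (in practice the harvesting is imperfect, hence the factor of 5 rather than the full log).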

For Goldbach’s Conjecture and the Riemann Hypothesis, we paid the same 4000-state overhead for the interpreter, but then the program to be interpreted was simpler, giving a smaller overall machine.  Incidentally, it’s not intuitively obvious that the Riemann Hypothesis is equivalent to the statement that some particular computer program runs forever, but it is—that follows, for example, from work by Lagarias and by Davis, Matijasevich, and Robinson (we used the latter; an earlier version of this post incorrectly stated that we used the Lagarias result).

To preempt the inevitable question in the comments section: yes, we did run these Turing machines for a while, and no, none of them had halted after a day or so.  But before you interpret that as evidence in favor of Goldbach, Riemann, and the consistency of ZFC, you should probably know that a Turing machine to test whether all perfect squares are less than 5, produced using Laconic, needed to run for more than an hour before it found the first counterexample (namely, 3²=9) and halted.  Laconic Turing machines are optimized only for the number of states, not for speed, to put it mildly.

Of course, three orders of magnitude still remain between the largest value of n (namely, 4) for which BB(n) is known to be knowable in ZFC-based mathematics, and the smallest value of n (namely, 7,918) for which BB(n) is known to be unknowable.  I’m optimistic that further improvements are possible to the machine Z—whether that means simplifications to Friedman’s statement, a redesigned interpreter (possibly using lambda calculus?), or a “multi-stage rocket model” where a bare-bones interpreter would be used to unpack a second, richer interpreter which would be used to unpack a third, etc., until you got to the actual program you cared about.  But I’d be shocked if anyone in my lifetime determined the value of BB(10), for example, or proved the value independent of set theory.  Even after the Singularity happens, I imagine that our robot overlords would find the determination of BB(10) quite a challenge.

In an early Shtetl-Optimized post, I described theoretical computer science as “quantitative epistemology.”  Constructing small Turing machines whose behavior eludes set theory is not conventional theoretical computer science by any stretch of the imagination: it’s closer in practice to programming languages or computer architecture, or even the recreational practice known as code-golfing.  On the other hand, I’ve never been involved with any other project that was so clearly, explicitly about pinning down the quantitative boundary between the knowable and the unknowable.

Comments on our paper are welcome.

Addendum: Some people might wonder “why Turing machines,” as opposed to a more reasonable programming language like C or Python.  Well, first of all, we needed a language that could address an unlimited amount of memory.  Also, the BB function is traditionally defined in terms of Turing machines.  But the most important issue is that we wanted there to be no suspicion whatsoever that our choice of programming language was artificially helping to make our machine small.  And hopefully everyone can agree that one-tape, two-symbol Turing machines aren’t designed for anyone’s convenience!

“Largely just men doing sums”: My review of the excellent Ramanujan film

Sunday, May 1st, 2016

[Warning: This movie review contains spoilers, as well as a continued fraction expansion.]

These days, it takes an extraordinary occasion for me and Dana to arrange the complicated, rocket-launch-like babysitting logistics involved in going out for a night at the movies.  One such occasion was an opening-weekend screening of The Man Who Knew Infinity—the new movie about Srinivasa Ramanujan and his relationship with G. H. Hardy—followed by a Q&A with Matthew Brown (who wrote and directed the film), Robert Kanigel (who wrote the biography on which the film was based), and Fields Medalist Manjul Bhargava (who consulted on the film).

I read Kanigel’s The Man Who Knew Infinity in the early nineties; it was a major influence on my life.  There were equations in that book to stop a nerdy 13-year-old’s pulse, like

$$1+9\left(\frac{1}{4}\right)^{4}+17\left(\frac{1\cdot5}{4\cdot8}\right)^{4}+25\left(\frac{1\cdot5\cdot9}{4\cdot8\cdot12}\right)^{4}+\cdots=\frac{2^{3/2}}{\pi^{1/2}\Gamma\left(3/4\right)^{2}}$$

$$\frac{1}{1+\frac{e^{-2\pi}}{1+\frac{e^{-4\pi}}{1+\frac{e^{-6\pi}}{1+\cdots}}}}=\left(\sqrt{\frac{5+\sqrt{5}}{2}}-\frac{\sqrt{5}+1}{2}\right)\sqrt[5]{e^{2\pi}}$$

A thousand pages of exposition about Ramanujan’s mysterious self-taught mathematical style, the effect his work had on Hardy and Littlewood, his impact on the later development of analysis, etc., could never replace the experience of just staring at these things!  Popularizers are constantly trying to “explain” mathematical beauty by comparing it to art, music, or poetry, but I can best understand art, music, and poetry if I assume other people experience them like the above identities.  Across all the years and cultures and continents, can’t you feel Ramanujan himself leaping off your screen, still trying to make you see this bizarre aspect of the architecture of reality that the goddess Namagiri showed him in a dream?

Reading Kanigel’s book, I was also entranced by the culture of early-twentieth-century Cambridge mathematics: the Tripos, Wranglers, High Table.  I asked, why was I here and not there?  And even though I was (and remain) at most $1729^{-1729}$ of a Ramanujan, I could strongly identify with his story, because I knew that I, too, was about to embark on the journey from total scientific nobody to someone who the experts might at least take seriously enough to try to prove him wrong.

Anyway, a couple years after reading Kanigel’s biography, I went to the wonderful Canada/USA MathCamp, and there met Richard K. Guy, who’d actually known Hardy.  I couldn’t have been more impressed had Guy visited Platonic heaven and met π and e there.  To put it mildly, no one in my high school had known G. H. Hardy.

I often fantasized—this was the nineties—about writing the screenplay myself for a Ramanujan movie, so that millions of moviegoers could experience the story as I did.  Incidentally, I also fantasized about writing screenplays for Alan Turing and John Nash movies.  I do have a few mathematical biopic ideas that haven’t yet been taken, and for which any potential buyers should get in touch with me:

  • Radical: The Story of Évariste Galois
  • Give Me a Place to Stand: Archimedes’ Final Days
  • Mathématicienne: Sophie Germain In Her Prime
  • The Prime Power of Ludwig Sylow
    (OK, this last one would be more of a limited-market release)

But enough digressions; how was the Ramanujan movie?

Just as Ramanujan himself wasn’t an infallible oracle (many of his claims, e.g. his formula for the prime counting function, turned out to be wrong), so The Man Who Knew Infinity isn’t a perfect movie.  Even so, there’s no question that this is one of the best and truest movies ever made about mathematics and mathematicians, if not the best and truest.  If you’re the kind of person who reads this blog, go see it now.  Don’t wait!  As they stressed at the Q&A, the number of tickets sold in the first couple weeks is what determines whether or not the movie will see a wider release.

More than A Beautiful Mind or Good Will Hunting or The Imitation Game, or the play Proof, or the TV series NUMB3RS, the Ramanujan movie seems to me to respect math as a thing-in-itself, rather than just a tool or symbol for something else that interests the director much more.  The background to the opening credits—and what better choice could there be?—is just page after page from Ramanujan’s notebooks.  Later in the film, there’s a correct explanation of what the partition function P(n) is, and of one of Ramanujan’s and Hardy’s central achievements, which was to give an asymptotic formula for P(n), namely $$P(n) \approx \frac{e^{\pi\sqrt{2n/3}}}{4\sqrt{3}\,n},$$ and to prove the formula’s correctness.
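
As a sanity check on that formula, here’s a small Python sketch (mine, just for illustration) that computes P(n) exactly via Euler’s pentagonal-number recurrence and compares it to the Hardy–Ramanujan estimate; already at n=100 the two agree to within about 5%:

```python
import math
from functools import lru_cache

@lru_cache(maxsize=None)
def partitions(n):
    """Exact P(n) via Euler's pentagonal-number recurrence."""
    if n < 0:
        return 0
    if n == 0:
        return 1
    total, k = 0, 1
    while k * (3 * k - 1) // 2 <= n:
        sign = 1 if k % 2 == 1 else -1
        total += sign * partitions(n - k * (3 * k - 1) // 2)
        total += sign * partitions(n - k * (3 * k + 1) // 2)
        k += 1
    return total

def hardy_ramanujan(n):
    """Leading-order asymptotic: e^(pi*sqrt(2n/3)) / (4*sqrt(3)*n)."""
    return math.exp(math.pi * math.sqrt(2 * n / 3)) / (4 * math.sqrt(3) * n)

print(partitions(100))        # 190569292
print(hardy_ramanujan(100))   # roughly 1.99e8, about 5% high
```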

The film also makes crystal-clear that pure mathematicians do what they do not because of applications to physics or anything else, but simply because they feel compelled to: for the devout Ramanujan, math was literally about writing down “the thoughts of God,” while for the atheist Hardy, math was a religion-substitute.  Notably, the movie explores the tension between Ramanujan’s untrained intuition and Hardy’s demands for rigor in a way that does them both justice, resisting the Hollywood urge to make intuition 100% victorious and rigor just a stodgy punching bag to be defeated.

For my taste, the movie could’ve gone even further in the direction of “letting the math speak”: for example, it could’ve explained just one of Ramanujan’s infinite series.  Audiences might even have liked some more T&A (theorems and asymptotic bounds).  During the Q&A that I attended, I was impressed to see moviegoers repeatedly pressing a somewhat-coy Manjul Bhargava to explain Ramanujan’s actual mathematics (e.g., what exactly were the discoveries in his first letter to Hardy?  what was in Ramanujan’s Lost Notebook that turned out to be so important?).  Then again, this was Cambridge, MA, so the possibility should at least be entertained that what I witnessed was unrepresentative of American ticket-buyers.

From what I’ve read, the movie is also true to South Indian dress, music, religion, and culture.  Yes, the Indian characters speak to each other in English rather than Tamil, but Brown explained that as a necessary compromise (not only for the audience’s sake, but also because Dev Patel and the other Indian actors didn’t speak Tamil).

Some reviews have mentioned issues with casting and characterization.  For example, Hardy is portrayed by Jeremy Irons, who’s superb but also decades older than Hardy was at the time he knew Ramanujan.  Meanwhile Ramanujan’s wife, Janaki, is played by a fully-grown Devika Bhise; the real Janaki was nine (!) when she married Ramanujan, and fourteen when Ramanujan left for England.  J. E. Littlewood is played as almost a comic-relief buffoon, so much so that it feels incongruous when, near the end of the film, Irons-as-Hardy utters the following real-life line:

I still say to myself when I am depressed and find myself forced to listen to pompous and tiresome people, “Well, I have done one thing you could never have done, and that is to have collaborated with Littlewood and Ramanujan on something like equal terms.”

Finally, a young, mustachioed Bertrand Russell is a recurring character.  Russell and Hardy really were friends and fellow WWI pacifists, but Hardy seeking out Bertie’s advice about each Ramanujan-related development seems almost certain to be just an irresistible plot device.

But none of that matters.  What bothered me more were the dramatizations of the prejudice Ramanujan endured in England.  Ramanujan is shown getting knocked to the ground, punched, and kicked by British soldiers barking anti-Indian slurs at him; he then shows up for his next meeting with Hardy covered in bruises, which Hardy (being aloof) neglects to ask about.  Ramanujan is also depicted getting shoved, screamed at, and told never to return by a math professor who he humiliates during a lecture.  I understand why Brown made these cinematic choices: there’s no question that Ramanujan experienced prejudice and snobbery in Cambridge, and that he often felt lonely and unwelcome there.  And it’s surely easier to show Ramanujan literally getting beaten up by racist bigots, than to depict his alienation from Cambridge society as the subtler matter that it most likely was.  To me, though, that’s precisely why the latter choice would’ve been even more impressive, had the film managed to pull it off.

Similarly, during World War I, the film shows not only Trinity College converted into a military hospital, and many promising students marched off to their deaths (all true), but also a shell exploding on campus near Ramanujan, after which Ramanujan gazes in horror at the bleeding dead bodies.  Like, isn’t the truth here dramatic enough?

One other thing: the movie leaves you with the impression that Ramanujan died of tuberculosis.  More recent analysis concluded that it was probably hepatic amoebiasis that he brought with him from India—something that could’ve been cured with the medicine of the time, had anyone correctly diagnosed it.  (Incidentally, the film completely omits Ramanujan’s final year, back in India, when he suffered a relapse of his illness and slowly withered away, yet with Janaki by his side, continued to do world-class research and exchanged letters with Hardy until the very last days.  Everyone I read commented that this was “the right dramatic choice,” but … I dunno, I would’ve shown it!)

But enough!  I fear that to harp on these defects is to hold the film to impossibly-high, Platonic standards, rather than standards that engage with the reality of Hollywood.  An anecdote that Brown related at the end of the Q&A session brought this point home for me.  Apparently, Brown struggled for an entire decade to attract funding for a film about a turn-of-the-century South Indian mathematician visiting Trinity College, Cambridge, whose work had no commercial or military value whatsoever.  At one point, Brown was actually told that he could get the movie funded, if he’d agree to make Ramanujan fall in love with a white nurse, so that a British starlet who would sell tickets could be cast as his love interest.  One can only imagine what a battle it must have been to get a correct explanation of the partition function onto the screen.

In the end, though, nothing made me appreciate The Man Who Knew Infinity more than reading negative reviews of it, like this one by Olly Richards:

Watching someone balancing algorithms or messing about with multivariate polynomials just isn’t conducive to urgently shovelling popcorn into your face.  Difficult to dislike, given its unwavering affection for its subject, The Man Who Knew Infinity is nevertheless hamstrung by the dryness of its subject … Sturdy performances and lovely scenery abound, but it’s still largely just men doing sums; important sums as it turns out, but that isn’t conveyed to the audience until the coda [which mentions black holes] tells us of the major scientific advances they aided.

On behalf of mathematics, on behalf of my childhood self, I’m grateful that Brown fought this fight, and that he won as much as he did.  Whether you walk, run, board a steamship, or take taxi #1729, go see this film.

Addendum: See also this review by Peter Woit, and this in Notices of the AMS by Ramanujan expert George Andrews.

Me interviewed by John Horgan (the author of “The End of Science”)

Thursday, April 21st, 2016

You can read it here.

It’s long (~12,000 words).  Rather than listing what this interview covers, it would be easier to list what it doesn’t cover.  (My favorite soda flavors?)

If you read this blog, much of what I say there will be old hat, but some of it will be new.  I predict that you’ll enjoy the interview iff you enjoy the blog.  Comments welcome.

Grading Trudeau on quantum computing

Sunday, April 17th, 2016

Update (4/19): Inspired by Trudeau’s performance (which they clocked at 35 seconds), Maclean’s magazine asked seven quantum computing researchers—me, Krysta Svore, Aephraim Steinberg, Barry Sanders, Davide Venturelli, Martin Laforest, and Murray Thom—to also explain quantum computing in 35 seconds or fewer.  You can see all the results here (here’s the audio from my entry).


The emails started hitting me like … a hail of maple syrup from the icy north.  Had I seen the news?  Justin Trudeau, the dreamy young Prime Minister of Canada, visited the Perimeter Institute for Theoretical Physics in Waterloo, one of my favorite old haunts.  At a news conference at PI, as Trudeau stood in front of a math-filled blackboard, a reporter said to him: “I was going to ask you to explain quantum computing, but — when do you expect Canada’s ISIL mission to begin again, and are we not doing anything in the interim?”

Rather than answering immediately about ISIL, Trudeau took the opportunity to explain quantum computing:

“Okay, very simply, normal computers work, uh, by [laughter, applause] … no no no, don’t interrupt me.  When you walk out of here, you will know more … no, some of you will know far less about quantum computing, but most of you … normal computers work, either there’s power going through a wire, or not.  It’s 1, or a 0, they’re binary systems.  Uh, what quantum states allow for is much more complex information to be encoded into a single bit.  Regular computer bit is either a 1 or a 0, on or off.  A quantum state can be much more complex than that, because as we know [speeding up dramatically] things can be both particle and wave at the same times and the uncertainty around quantum states [laughter] allows us to encode more information into a much smaller computer.  So, that’s what exciting about quantum computing and that’s… [huge applause] don’t get me going on this or we’ll be here all day, trust me.”

What marks does Trudeau get for this?  On the one hand, the widespread praise for this reply surely says more about how low the usual standards for politicians are, and about Trudeau’s fine comic delivery, than about anything intrinsic to what he said.  Trudeau doesn’t really assert much here: basically, he just says that normal computers work using 1’s and 0’s, and that quantum computers are more complicated than that in some hard-to-explain way.  He gestures toward the uncertainty principle and wave/particle duality, but he doesn’t say anything about the aspects of QM most directly relevant to quantum computing—superposition or interference or the exponential size of Hilbert space—nor does he mention what quantum computers would or wouldn’t be used for.

On the other hand, I’d grade Trudeau’s explanation as substantially more accurate than what you’d get from a typical popular article.  For pay close attention to what the Prime Minister never says: he never says that a qubit would be “both 0 and 1 at the same time,” or any equivalent formulation.  (He does say that quantum states would let us “encode more information into a much smaller computer,” but while Holevo’s Theorem says that’s false for a common interpretation of “information,” it’s true for other reasonable interpretations.)  The humorous speeding up as he mentions particle/wave duality and the uncertainty principle clearly suggests that he knows it’s more subtle than just “0 and 1 at the same time,” and he also knows that he doesn’t really get it and that the journalists in the audience don’t either.  When I’m grading exams, I always give generous partial credit for honest admissions of ignorance.  B+.
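
(For reference, the version of Holevo’s Theorem I have in mind: if Alice encodes a classical random variable X into n qubits, then for any measurement Bob makes, with outcome Y,

$$I(X\!:\!Y) \;\le\; \chi \;\le\; \log_2 \dim\mathcal{H} \;=\; n,$$

where χ is the Holevo information.  So “more information in a smaller computer” is false if “information” means classical bits you can reliably extract, though it can be defended under other readings.)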

Anyway, I’d be curious to know who at PI prepped Trudeau for this, and what they said.  Those with inside info, feel free to share in the comments (anonymously if you want!).

(One could also compare Obama’s 2008 answer about bubblesort, which was little more than the mention of a keyword.)

Update: See also a Motherboard article where Romain Alléaume, Amr Helmy, Michele Mosca, and Aephraim Steinberg rate Trudeau’s answer, giving it 7/10, no score, 9/10, and 7/10 respectively.

Here’s some video of me spouting about Deep Questions

Thursday, February 4th, 2016

In January 2014, I attended an FQXi conference on Vieques island in Puerto Rico.  While there, Robert Lawrence Kuhn interviewed me for his TV program Closer to Truth, which deals with science and religion and philosophy and you get the idea.  Alas, my interview was at the very end of the conference, and we lost track of the time—so unbeknownst to me, a plane full of theorists was literally sitting on the runway waiting for me to finish philosophizing!  This was the second time Kuhn interviewed me for his show; the first time was on a cruise ship near Norway in 2011.  (Thankless hero that I am, there’s nowhere I won’t travel for the sake of truth.)

Anyway, after a two-year wait, the videos from Puerto Rico are finally available online.  While my vignettes cover what, for most readers of this blog, will be very basic stuff, I’m sort of happy with how they turned out: I still stutter and rock back and forth, but not as much as usual.  For your viewing convenience, here are the new videos:

I had one other vignette, about why the universe exists, but they seem to have cut that one.  Alas, if I knew why the universe existed in January 2014, I can’t remember any more.

One embarrassing goof: I referred to the inventor of Newcomb’s Paradox as “Simon Newcomb.”  Actually it was William Newcomb: a distant relative of Simon Newcomb, the 19th-century astronomer who measured the speed of light.

At their website, you can also see my older 2011 videos, and videos from others who might be known to readers of this blog, like Marvin Minsky, Roger Penrose, Rebecca Newberger Goldstein, David Chalmers, Sean Carroll, Max Tegmark, David Deutsch, Raphael Bousso, Freeman Dyson, Nick Bostrom, Ray Kurzweil, Rodney Brooks, Stephen Wolfram, Greg Chaitin, Garrett Lisi, Seth Lloyd, Lenny Susskind, Lee Smolin, Steven Weinberg, Wojciech Zurek, Fotini Markopoulou, Juan Maldacena, Don Page, and David Albert.  (No, I haven’t yet watched most of these, but now that I linked to them, maybe I will!)

Thanks very much to Robert Lawrence Kuhn and Closer to Truth (and my previous self, I guess?) for providing Shtetl-Optimized content so I don’t have to.


Update: Andrew Critch of CFAR asked me to post the following announcement.

We’re seeking a full time salesperson for the Center for Applied Rationality in Berkeley, California. We’ve streamlined operations to handle large volume in workshop admissions, and now we need that volume to pour in. Your role would be to fill our workshops, events, and alumni community with people. Last year we had 167 total new alumni. This year we want 120 per month. Click here to find out more.

Marvin Minsky

Tuesday, January 26th, 2016

Yesterday brought the sad news that Marvin Minsky passed away at age 88.  I never met Minsky (I wish I had); I just had one email exchange with him back in 2002, about Stephen Wolfram’s book.  But Minsky was my academic great-grandfather (through Manuel Blum and Umesh Vazirani), and he influenced me in many other ways.  For example, in his and Papert’s 1969 book Perceptrons—notorious for “killing neural net research for a decade,” because of its mis- or over-interpreted theorems about the representational limitations of single-layer neural nets—the way Minsky and Papert proved those theorems was by translating questions about computation into questions about the existence or nonexistence of low-degree polynomials with various properties, and then answering the latter questions using MATH.  Their “polynomial method” is now a mainstay of quantum algorithms research (having been brought to the subject by Beals et al.), and in particular, has been a mainstay of my own career.  Hardly Minsky’s best-known contribution to human knowledge, but that even such a relatively minor part of his oeuvre could have legs half a century later is a testament to his impact.

I’m sure readers will have other thoughts to share about Minsky, so please do so in the comments section.  Personal reminiscences are especially welcome.

Edging in: the biggest science news of 2015

Sunday, January 3rd, 2016

For years, I was forced to endure life with my nose up against the glass of the Annual Edge Question.  What are you optimistic about?  Ooh! ooh! Call on me!  I’m optimistic about someday being able to prove my pessimistic beliefs (like P≠NP).  How is the Internet changing the way you think?  Ooh, ooh! I know! Google and MathOverflow are saving me from having to think at all!  So then why are they only asking Steven Pinker, Freeman Dyson, Richard Dawkins, David Deutsch, some random other people like that?

But all that has changed.  This year, I was invited to participate in Edge for the first time.  So, OK, here’s the question:

What do you consider the most interesting recent [scientific] news?  What makes it important?

My response is here.  I wasn’t in love with the question, because of what I saw as an inherent ambiguity in it: the news that’s most interesting to me, that I have a comparative advantage in talking about, and that people probably want to hear me talk about (e.g., progress in quantum computing), is not necessarily what I’d regard as the most important in any objective sense (e.g., climate change).  So, I decided to write my answer precisely about my internal tension in what I should consider most interesting: should it be the recent progress by John Martinis and others toward building a quantum computer?  Or should it be the melting glaciers, or something else that I’m confident will affect the future of the world?  Or possibly the mainstream attention now being paid to the AI-risk movement?  But if I really want to nerd out, then why not Babai’s graph isomorphism algorithm?  Or if I actually want to be honest about what excited me, then why not the superquadratic separations between classical and quantum query complexities for a total Boolean function, by Ambainis et al. and my student Shalev Ben-David?  On the other hand, how can I justify even caring about such things while the glaciers are melting?

So, yeah, my response tries to meditate on all those things.  My original title was “How nerdy do you want it?,” but John Brockman of Edge had me change it to something blander (“How widely should we draw the circle?”), and made a bunch of other changes from my usual style.  Initially I chafed at having an editor for what basically amounted to a blog post; on the other hand, I’m sure I would’ve gotten in trouble much less often on this blog had I had someone to filter my words for me.

Anyway, of course I wasn’t the only person to write about the climate crisis.  Robert Trivers, Laurence Smith, and Milford Wolpoff all wrote about it as well (Trivers most chillingly and concisely), while Max Tegmark wrote about the mainstreaming of AI risk.  John Naughton even wrote about Babai’s graph isomorphism breakthrough (though he seems unaware that the existing GI algorithms were already extremely fast in practice, and therefore makes misleading claims about the new algorithm’s practical applications).  Unsurprisingly, no one else wrote about breakthroughs in quantum query complexity: you’ll need to go to my essay for that!  A bit more surprisingly, no one besides me wrote about progress in quantum computing at all (if we don’t count the loophole-free Bell test).

Anyway, on reflection, 2015 actually was a pretty awesome year for science, no matter how nerdy you want it or how widely you draw the circle.  Here are other advances that I easily could’ve written about but didn’t:

I’ve now read all (more or less) of this year’s Edge responses.  Even though some of the respondents pushed personal hobbyhorses like I’d feared, I was impressed by how easy it was to discern themes: advances that kept cropping up in one answer after another and that one might therefore guess are actually important (or at least, are currently perceived to be important).

Probably at the top of the list was a new gene-editing technique called CRISPR: Randolph Nesse, Paul Dolan, Eric Topol, Mark Pagel, and Stuart Firestein among others all wrote about this, and about its implications for creating designer humans.

Also widely-discussed was the discovery that most psychology studies fail to replicate (I’d long assumed as much, but apparently this was big news in psychology!): Nicholas Humphrey, Stephen Kosslyn, Jonathan Schooler, Ellen Winner, Judith Rich Harris, and Philip Tetlock all wrote about that.

Then there was the Pluto flyby, which Juan Enriquez, Roger Highfield, and Nicholas Christakis all wrote about.  (As Christakis, Master of Silliman College at Yale, was so recently a victim of a social-justice mob, I found it moving how he simply ignored those baying for his head and turned his attention heavenward in his Edge answer.)

Then there was progress in deep learning, including Google’s Deep Dream (those images of dogs in nebulae that filled your Facebook wall) and DeepMind (the program that taught itself how to play dozens of classic video games).  Steve Omohundro, Andy Clark, Jamshed Bharucha, Kevin Kelly, David Dalrymple, and Alexander Wissner-Gross all wrote about different aspects of this story.

And recent progress in SETI, which Yuri Milner (who’s given $100 million for it) and Mario Livio wrote about.

Unsurprisingly, a bunch of high-energy physicists wrote about high-energy physics at the LHC: how the Higgs boson was found (still news?), how nothing other than the Higgs boson was found (the biggest news?), but how there’s now the slightest hint of a new particle at 750 GeV.  See Lee Smolin, Garrett Lisi, Sean Carroll, and Sarah Demers.

Finally, way out on the Pareto frontier of importance and disgustingness was the recently-discovered therapeutic value of transplanting one person’s poop into another person’s intestines, which Joichi Ito, Pamela Rosenkranz, and Alan Alda all wrote about (it also, predictably, featured in a recent South Park episode).

Without further ado, here are 27 other answers that struck me in one way or another:

  • Steven Pinker on how happy happy things are getting better (and we can measure it)
  • Freeman Dyson on the Dragonfly astronomical observatory
  • Jonathan Haidt on how prejudice against people of differing political opinions was discovered to have surpassed racial, gender, and religious prejudice
  • S. Abbas Raza on Piketty’s r>g
  • Rebecca Newberger Goldstein, thoughtful as usual, on the recent study that said it’s too simple to say female participation is lower in STEM fields—rather, female participation is lower in all and only those fields, STEM or non-STEM, whose participants believe (rightly or wrongly) that “genius” is required rather than just conscientious effort
  • Bill Joy on recent advances on reducing CO2 emissions
  • Paul Steinhardt on recent observations saying that, not only were the previous “B-modes from inflation” just galactic dust, but there are no real B-modes to within the current detection limits, and this poses a problem for inflation (I hadn’t heard about this last part)
  • Aubrey de Grey on new antibiotics that are grown in the soil rather than in lab cultures
  • John Tooby on the evolutionary rationale for germline engineering
  • W. Tecumseh Fitch on the coming reality of the “Jurassic Park program” (bringing back extinct species through DNA splicing—though probably not dinosaurs, whose DNA is too degraded)
  • Keith Devlin on the new prospect of using massive datasets (from MOOCs, for example) to actually figure out how students learn
  • Richard Muller on how air pollution in China has become one of the world’s worst problems (imagine every child in Beijing being force-fed two packs of cigarettes per day)
  • Ara Norenzayan on the demographic trends in religious belief
  • James Croak on amazing advances in battery technology (which were news to me)
  • Buddhini Samarasinghe on (among other things) the power of aspirin to possibly prevent cancer
  • Todd Sacktor on a new treatment for Parkinson’s
  • Charles Seife on the imminent availability of data about pretty much everything in our lives
  • Susan Blackmore on “that dress” and what it revealed about the human visual system
  • Brian Keating on experiments that should soon tell us the neutrinos’ masses (again, I hadn’t heard about these)
  • Michael McCullough on something called “reproductive religiosity theory,” which posits that the central purpose of religions is to enforce social norms around mating and reproduction (for what it’s worth, I’d always regarded that as obvious; it’s even expounded in the last chapter of Quantum Computing Since Democritus)
  • Greg Cochran on the origin of Europeans
  • David Buss on the “mating crisis among educated women”
  • Ed Regis on how high-fat diets are better (except, isn’t this the principle behind Atkins, and isn’t this pretty old news by now?)
  • Melanie Swan on blockchain-based cryptography, such as Bitcoin (though it wasn’t entirely clear to me what point Swan was making about it)
  • Paul Davies on LIGO getting ready to detect its first gravitational waves
  • Samuel Arbesman on how weather prediction has gotten steadily better (rendering our culture’s jokes about the perpetually-wrong weatherman outdated, with hardly anyone noticing)
  • Alison Gopnik on how the ubiquity of touchscreen devices like the iPad means that toddlers can now master computers, and this is something genuinely new under the sun (I can testify from personal experience that she’s onto something)

Then there were three answers for which the “progress” being celebrated, seemed to me to be progress racing faster into WrongVille:

  • Frank Tipler on how one can conclude a priori that there must be a Big Crunch to our future (and hence, the arena for Tiplerian theology) in order to prevent the black hole information paradox from arising, all recent cosmological evidence to the contrary be damned.
  • Ross Anderson on an exciting conference whose participants aim to replace quantum mechanics with local realistic theories.  (Anderson, in particular, is totally wrong that you can get Bell inequality violation from “a combination of local action and global correlation,” unless the global correlation goes as far as a ’t Hooft-like superdeterministic conspiracy.)
  • Gordon Kane on how the big news is that the LHC should soon see superparticles.  (This would actually be fine except that Kane omits the crucial context, that he’s been predicting superparticles just around the corner again and again for the past twenty years and they’ve never shown up)

Finally, two responses by old friends that amused me.  The science-fiction writer Rudy Rucker just became aware of the discovery of dark energy back in 1998, and considers that to be exciting scientific news (yes, Rudy, so it was!).  And Michael Vassar—the Kevin Bacon or Paul Erdős of the rationalist world, the guy who everyone’s connected to somehow—writes something about a global breakdown of economic rationality, $20 bills on the sidewalk getting ignored, that I had trouble understanding (though the fault is probably mine).

If I can’t do math, I don’t want to be part of your revolution

Thursday, December 3rd, 2015

1. Emma Goldman, the fiery early-20th-century anarchist, is credited with giving the world the immortal refrain “if I can’t dance, I don’t want to be part of your revolution” (actually it’s not clear that she ever said it so pithily, but she did express such a thought).  Admittedly, no one would mistake me for either a dancer or an anarchist, but I’ve always felt a kinship with Goldman over her terpsichorean line in the sand.  The other day, it occurred to me that there’s a parallel sentence that sums up my entire political philosophy—on the one hand, my default instinct to side with the downtrodden and with the progressive left, but on the other, my dissent from any even vaguely anti-STEM, anti-rationality, or anti-nerd undercurrents, and my refusal to join any popular uprising that seems liable (for example) to delay the discovery of a P≠NP proof, by inconveniencing the people working on one.

So, here’s my sentence, which you should feel free to reprint on t-shirts and coffee mugs as desired:

If I can’t do math, I don’t want to be part of your revolution.

2. Over at Scientific American’s website, John Horgan posted an account of a workshop on Integrated Information Theory, which I attended a couple weeks ago at NYU (along with David Chalmers, Giulio Tononi, Christof Koch, Max Tegmark, and a dozen or so others).  I was the “official skeptic” of the workshop, and gave a talk based on my blog post The Unconscious Expander.  I don’t really agree with what Horgan says about physics and information in general, but I do (of course) join him in his skepticism of IIT, and he gives a pretty accurate summary of what people said at the workshop.  (Alas, my joke about my lunch not being poisoned completely bombed with the IIT crowd … as I should’ve predicted!)  The workshop itself was lots of fun; thanks so much to David, Giulio, and Hedda Hassel Morch for organizing it.

3. As you might have noticed, I’ve created a new category on this blog: “Obviously I’m Not Defending Aaronson.”  This category—reserved for posts that caused at least a hundred people to hate me—refers to a peculiar phrase I encountered over and over, in the social media threads denouncing me as a horrible person.  The phrase tends to occur in passages like: “look, obviously I’m not defending Aaronson, but it’s worth pointing out that, if you carefully reread everything he wrote, he never actually said that war orphans should be roasted alive and then eaten for fun.  That’s just something we all know that a clueless, arrogant nerd like him would think.”

4. Right now I’m at the “ThinkQ” conference at IBM in Yorktown Heights.  Here are the PowerPoint slides from my talk yesterday, entitled “The Largest Possible Quantum Speedups.”  Regular readers of this blog will find a lot that’s old and a little that’s new.

Talk, be merry, and be rational

Monday, November 23rd, 2015

Yesterday I wrote a statement on behalf of a Scott Alexander/SlateStarCodex rationalist meetup, which happened last night at MIT (in the same room where I teach my graduate class), and which I’d really wanted to attend but couldn’t.  I figured I’d share the statement here:

I had been looking forward to attending tonight’s MIT SlateStarCodex meetup as I hardly ever look forward to anything. Alas, I’m now stuck in Chicago, with my flight cancelled due to snow, and with all flights for the next day booked up. But instead of continuing to be depressed about it, I’ve decided to be happy that this meetup is even happening at all—that there’s a community of people who can read, let’s say, a hypothetical debate moderator questioning Ben Carson about what it’s like to be a severed half-brain, and simply be amused, instead of silently trying to figure out who benefits from the post and which tribe the writer belongs to. (And yes, I know: the answer is the gray tribe.) And you can find this community anywhere—even in Cambridge, Massachusetts! Look, I spend a lot of time online, just getting more and more upset reading social justice debates that are full of people calling each other douchebags without even being able to state anything in the same galactic supercluster as the other side’s case. And then what gives me hope for humanity is to click over to the slatestarcodex tab, and to see all the hundreds of comments (way more than my blog gets) by people who disagree with each other but who all basically get it, who all have minds that don’t make me despair. And to realize that, when Scott Alexander calls an SSC meetup, he can fill a room just about anywhere … well, at least anywhere I would visit. So talk, be merry, and be rational.

I’m now back in town, and told by people who attended the meetup that it was crowded, disorganized, and great.  And now I’m off to Harvard, to attend the other Scott A.’s talk “How To Ruin A Perfectly Good Randomized Controlled Trial.”


Update (Nov. 24): Scott Alexander’s talk at Harvard last night was one of the finest talks I’ve ever attended. He was introduced to rapturous applause as simply “the best blogger on the Internet,” and as finally an important speaker, in a talk series that had previously wasted everyone’s time with the likes of Steven Pinker and Peter Singer. (Scott demurred that his most notable accomplishment in life was giving the talk at Harvard that he was just now giving.) The actual content, as Scott warned from the outset, was “just” a small subset of a basic statistics course, but Scott brought each point alive with numerous recent examples, from psychiatry, pharmacology, and social sciences, where bad statistics or misinterpretations of statistics were accepted by nearly everyone and used to set policy. (E.g., Alcoholics Anonymous groups that claimed an “over 95%” success rate, because the people who relapsed were kicked out partway through and not counted toward the total.) Most impressively, Scott leapt immediately into the meat, ended after 20 minutes, and then spent the next two hours just taking questions. Scott is publicity-shy, but I hope for others’ sake that video of the talk will eventually make its way online.

Then, after the talk, I had the honor of meeting two fellow Boston-area rationalist bloggers, Kate Donovan and Jesse Galef. Yes, I said “fellow”: for almost a decade, I’ve considered myself on the fringes of the “rationalist movement.” I’d hang out a lot with skeptic/effective-altruist/transhumanist/LessWrong/OvercomingBias people (who are increasingly now SlateStarCodex people), read their blogs, listen and respond to their arguments, answer their CS theory questions. But I was always vaguely uncomfortable identifying myself with any group that even seemed to define itself by how rational it was compared to everyone else (even if the rationalists constantly qualified their self-designation with “aspiring”!). Also, my rationalist friends seemed overly interested in questions like how to prevent malevolent AIs from taking over the world, which I tend to think we lack the tools to make much progress on right now (though, like with many other remote possibilities, I’m happy for some people to work on them and see if they find anything interesting).

So, what changed? Well, in the debates about social justice, public shaming, etc. that have swept across the Internet these past few years, it seems to me that my rationalist friends have proven themselves able to weigh opposing arguments, examine their own shortcomings, resist groupthink and hysteria from both sides, and attack ideas rather than people, in a way that the wider society—and most depressingly to me, the “enlightened, liberal” part of society—has often failed. In a real-world test (“real-world,” in this context, meaning social media…), the rationalists have walked the walk and rationaled the rational, and thus they’ve given me no choice but to stand up and be counted as one of them.

Have a great Thanksgiving, those of you in the US!


Another Update: Dana, Lily, and I had the honor of having Scott Alexander over for dinner tonight. I found this genius of human nature, who took so much flak last year for defending me, to be completely uninterested in discussing anything related to social justice or online shaming. Instead, his gaze was fixed on the eternal: he just wanted to grill me all evening about physics and math and epistemology. Having recently read this Nature News article by Ron Cowen, he kept asking me things like: “you say that in quantum gravity, spacetime itself is supposed to dissolve into some sort of network of qubits. Well then, how does each qubit know which other qubits it’s supposed to be connected to? Are there additional qubits to specify the connectivity pattern? If so, then doesn’t that cause an infinite regress?” I handwaved something about AdS/CFT, where a dynamic spacetime is supposed to emerge from an ordinary quantum theory on a fixed background specified in advance. But I added that, in some sense, he had rediscovered the whole problem of quantum gravity that’s confused everyone for almost a century: if quantum mechanics presupposes a causal structure on the qubits or whatever other objects it talks about, then how do you write down a quantum theory of the causal structures themselves?

I’m sure there’s a lesson in here somewhere about what I should spend my time on.

Ordinary Words Will Do

Sunday, October 18th, 2015

Izabella Laba, a noted mathematician at the University of British Columbia, recently posted some tweets that used me as a bad, cautionary example for how “STEM faculty should be less contemptuous of social sciences.”  Here was the offending comment of mine, from the epic Walter Lewin thread last fall:

[W]hy not dispense with the empirically-empty notion of “privilege,” and just talk directly about the actual well-being of actual people, or groups of people?  If men are doing horrific things to women—for example, lashing them for driving cars, like in Saudi Arabia—then surely we can just say so in plain language.  Stipulating that the torturers are “exercising their male privilege” with every lash adds nothing to anyone’s understanding of the evil.  It’s bad writing.  More broadly, it seems to me that the entire apparatus of “privilege,” “delegitimation,” etc. etc. can simply be tossed overboard, to rust on the ocean floor alongside dialectical materialism and other theoretical superstructures that were once pompously insisted upon as preconditions of enlightened social discourse.  This isn’t quantum field theory.  Ordinary words will do.

Prof. Laba derisively commented:

Might as well ask you to explain calculus without using fancy words like “derivative” or “continuous.”  Simple number arithmetic will do.

Prof. Laba’s tweets were favorited by Jordan Ellenberg, a mathematician who wrote the excellent popular book How Not to Be Wrong.  (Ellenberg had also criticized me last year for my strange, naïve idea that human relations can be thought about using logic.)

Given my respect for the critics, I guess I’m honor-bound to respond.

For the record, I tend not to think about the social sciences—or for that matter, the natural sciences—as monolithic entities at all.  I admire any honest attempt to discover the truth about anything.  And not being a postmodern relativist, I believe there are deep truths worth discovering in history, psychology, economics, linguistics, possibly even sociology.  Reading the books of Steven Pinker underscored for me how much is actually understood nowadays about human nature—much of it only figured out within the last half-century.  Likewise, reading the voluminous profundities of Scott Alexander taught me that even in psychiatry, there are truths (and even a few definite cures) to be had for those who seek.

I also believe that the social sciences are harder—way harder—than math or physics or CS.  They’re harder because of the tenuousness of the correlations, because of the complexity of each individual human brain (let alone 7 billion of them on the same planet), but most of all, because politics and ideology and the scientist’s own biases place such powerful thumbs on the scale.  This makes it all the more impressive when a social scientist, like (say) Stanley Milgram or Judith Rich Harris or Napoleon Chagnon, teaches the world something important and new.

I will confess to contempt for anything that I regard as pompous obscurantism—for self-referential systems of jargon whose main purposes are to bar outsiders, to mask a lack of actual understanding, and to confer power on certain favored groups.  And I regard the need to be alert to such systems, to nip them in the bud before they grow into Lysenkoism, as in some sense the problem of intellectual life.  Which brings me to the most fundamental asymmetry between the hard and soft sciences.  Namely, the very fact that it’s so much harder to nurture new truths to maturity in the social sciences than it is in math or physics means that, in the former, the jargon-weeds have an easier time filling the void—and we know they’ve done it again and again, even in the post-Enlightenment West.

Time for a thought experiment.  Suppose you showed up at a university anytime between, let’s say, 1910 and 1970, and went from department to department asking (in so many words): what are you excited about this century?  Where are your new continents, what’s the future of your field?  Who should I read to learn about that future?

In physics, the consensus answer would’ve been something like: Planck, Einstein, Bohr, Schrödinger, Dirac.

In psychology, it would’ve been: Freud and Jung (with another faction for B. F. Skinner).

In politics and social sciences, over an enormous swath of academia (including in the West), it would’ve been: Marx, Engels, Trotsky, Lenin.

With hindsight, we now know that the physics advice would’ve been absolute perfection, the psychology and politics advice an unmitigated disaster.  Yes, physicists today know more than Einstein, can even correct him on some points, but the continents he revealed to us actually existed—indeed, have only become more important since Einstein’s time.

But Marx and Freud?  You would’ve done better to leave the campus, and ask a random person on the street what she or he thought about economics and psychology.  In high school, I remember cringing through a unit on the 1920s, when we learned about how “two European professors upset a war-weary civilization’s established certainties—with Einstein overturning received wisdom about space and time, and Freud doing just the same for the world of the mind.”  It was never thought important to add that Einstein’s theories turned out to be true while Freud’s turned out to be false.  Still, at least Freud’s ideas led “only” to decades of bad psychology and hundreds of innocent people sent to jail because of testimony procured through hypnosis, rather than to tens of millions of dead, as with the other social-scientific theory that reigned supreme among 20th-century academics.

Marx and Freud built impressive intellectual edifices—sufficiently impressive for a large fraction of intellectuals to have accepted those men as gurus on par with Darwin and Einstein for almost a century.  Yet on nearly every topic they wrote about, we now know that Marx and Freud couldn’t have been any more catastrophically wrong.  Moreover, their wrongness was knowable at the time—and was known to many, though the ones who knew were typically the ones whom the intellectual leaders sneered at as deluded reactionaries.

Which raises a question: suppose that, in the 1920s, I’d taken the social experts’ advice to study Marx and Freud, didn’t understand much of what they said (and found nonsensical much of what I did understand), and eventually rejected them as pretentious charlatans.  Then why wouldn’t I have been just like Prof. Laba’s ignorant rube, who dismisses calculus because he doesn’t understand technical terms like “continuous” and “derivative”?

On reflection, I don’t think that the two cases are comparable at all.

The hard sciences need technical vocabularies for a simple reason: because they’re about things that normal people don’t spend their hours obsessively worrying about.  Yes, I’d have a hard time understanding organic chemists or differential geometers, but largely for the same reasons I’d have a hard time understanding football fans or pirates.  It’s not just that I don’t understand the arguments; it’s that the arguments are about a world that’s alien to me (and that, to be honest, I don’t care about as much as I do my world).

Suppose, by contrast, that you’re writing about the topics everyone spends their time obsessively worrying about: politics, society, the human mind, the relations between the races and sexes.  In other words, suppose you’re writing about precisely the topics for which the ordinary English language has been honed over centuries—for which Shakespeare and Twain and Dr. King and so many others deployed the language to such spectacular effect.  In that case, what excuse could you possibly have to write in academese, to pepper your prose with undefined in-group neologisms?

Well, let’s be charitable; maybe you have a reason.  For example, maybe you’re doing a complicated meta-analysis of psychology papers, so you need to talk about r-values and kurtosis and heteroskedasticity.  Or maybe you’re putting people in an fMRI machine while you ask them questions, so you need to talk about the temporal resolution in the anterior cingulate cortex.  Or maybe you’re analyzing sibling rivalries using game theory, so you need Nash equilibria.  Or you’re picking apart sentences using Chomskyan formal grammar.  In all these cases, armchair language doesn’t suffice because you’re not just sitting in your armchair: you’re using a new tool to examine the everyday from a different perspective.  For present purposes, you might as well be doing algebraic geometry.
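
To make the game-theory example concrete, here is a minimal sketch in Python (my own toy illustration, not anything from an actual study of sibling rivalry) of the kind of computation where a term like “Nash equilibrium” earns its keep.  The payoff numbers are invented purely for the example.

# A toy 2x2 "sibling rivalry" game: each sibling chooses to Share or Grab.
# The payoffs below are made up purely for illustration.

import itertools

# payoffs[(row_action, col_action)] = (row_player_payoff, col_player_payoff)
payoffs = {
    ("Share", "Share"): (3, 3),
    ("Share", "Grab"):  (1, 4),
    ("Grab",  "Share"): (4, 1),
    ("Grab",  "Grab"):  (2, 2),
}
actions = ["Share", "Grab"]

def is_nash(a_row, a_col):
    # A profile is a pure-strategy Nash equilibrium if neither player
    # can improve their own payoff by unilaterally deviating.
    u_row, u_col = payoffs[(a_row, a_col)]
    row_ok = all(payoffs[(d, a_col)][0] <= u_row for d in actions)
    col_ok = all(payoffs[(a_row, d)][1] <= u_col for d in actions)
    return row_ok and col_ok

for profile in itertools.product(actions, actions):
    if is_nash(*profile):
        print(profile, "is a pure-strategy Nash equilibrium")

On these made-up numbers, the unique equilibrium is (Grab, Grab), even though (Share, Share) leaves both siblings better off.  That gap between “equilibrium” and “better for everyone” is precisely the sort of thing that’s hard even to state in armchair language, which is why the jargon earns its keep here.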

The Freudians and Marxists would, of course, claim that they’re doing the exact same thing.  Yes, they’d say, you thought you had the words to discuss your own mind or the power structure of society, but really you didn’t, because you lacked the revolutionary theoretical framework that we now possess.  (Trotsky’s writings are suffused with this brand of arrogance in nearly every sentence: for example, when he ridicules the bourgeois liberals who whine about “human rights violations” in the early USSR, yet who are too dense to phrase their objections within the framework of dialectical materialism.)

I submit that, even without the hindsight of 2015, there would’ve been excellent reasons to be skeptical of these claims.  Has it ever happened, you might ask yourself, that someone sat in their study and mused about the same human questions that occupied Plato and Shakespeare and Hume, in the same human way they did, and then came up with a new, scientific conclusion that was as rigorous and secure as relativity or evolution?

Let me know if I missed something, but I can’t think of a single example.  Sure, it seems to me, there have been geniuses of human nature, who enlarged our vision without any recourse to the quantitative methods of science.  But even those geniuses “only” contributed melodies for other geniuses to answer in counterpoint, rather than stones for everyone who came later to build upon.  Also, the geniuses usually wrote well.

Am I claiming that progress is impossible in the social realm?  Not at all.  The emancipation of slaves, the end of dueling and blasphemy laws and the divine right of kings, women’s suffrage and participation in the workforce, gay marriage—all these strike me as crystal-clear examples of moral progress, as advances that will still be considered progress a thousand years from now, if there’s anyone around then to discuss such things.  Evolutionary psychology, heuristics and biases, reciprocal altruism, and countless other developments likewise strike me as intellectual progress within the sciences of human nature.  But none of these advances needed recondite language!  Ordinary words sufficed for Thomas Paine and Frederick Douglass and John Stuart Mill, as they sufficed for Robert Axelrod and for Kahneman and Tversky.  So forgive me for thinking that whatever is true and important in the social world today, should likewise be defensible to every smart person in ordinary words, and that this represents a genuine difference between the social sciences and physics.

Which brings us to the central point that Prof. Laba disputed in that comment of mine.  I believe there are countless moral heroes in our time, as well as social scientists who struggle heroically to get the right answers.  But as far as I can tell, the people who build complex intellectual edifices around words like “privilege” and “delegitimation” and “entitlement” and “marginalized” are very much the same sort of people who, a few generations ago, built similar edifices around “bourgeoisie” and “dialectical” and “false consciousness.”  In both cases, there’s an impressive body of theory that’s held up as the equivalent in its domain of relativity, quantum mechanics, and Darwinism, with any skeptics denounced as science-deniers.  In both cases, enlightened liberals are tempted to side with the theorists, since the theorists believe in so many of the same causes that the enlightened liberals believe in, and hate so many of the same people who the enlightened liberals hate.  But in both cases, the theorists’ language seems to alternate between incomprehensible word-salad and fervid, often profanity-laced denunciations, skipping entirely over calm clarity.  And in both cases, the only thing that the impressive theoretical edifice ever seems to get used for, is to prove over and over that certain favored groups should get more power while disfavored ones should get less.

So I’m led to the view that, if you want to rouse people’s anger about injustice or their pity about preventable suffering, or end arbitrary discrimination codified into law, or give individuals more freedom to pursue their own happiness, or come up with a new insight about human nature, or simply describe the human realities that you see around you—for all these purposes, the words that sufficed for every previous generation’s great humanists will also suffice for you.

On the other hand, to restrict freedom and invent new forms of discrimination—and to do it in the name of equality and justice—that takes theory.  You’ll need a sophisticated framework, for example, to prove that even if two adults both insist they’re consenting to a relationship, really they might not be, because of power structures in the wider society that your superior insight lets you see.  You’ll need advanced discourse to assure you that, even though your gut reaction might be horror at (say) someone who misspoke once and then had their life gleefully destroyed on social media, your gut is not to be trusted, because it’s poisoned by the same imperialist, patriarchal biases as everything else—and because what looks like a cruel lynching needs to be understood in a broader social context (did the victim belong to a dominant group, or to a marginalized one?).  Finally, you’ll need oodles of theory (bring out the Marcuse) to explain why the neoliberal fanaticism about “free speech” and “tolerance” and “due process” and “the presumption of innocence” is too abstract and simplistic—for those concepts, too, fail to distinguish between a marginalized group that deserves society’s protection and a dominant group that doesn’t.

So I concede to Prof. Laba that the complicated discourse of privilege, hegemony, etc. serves a definite purpose for the people who wield it, just as much as the complicated discourse of quantum field theory serves a purpose for physicists.  It’s just that the purposes of the privilege-warriors aren’t my purposes.  For my purposes—which include fighting injustice, advancing every social and natural science as quickly as possible, and helping all well-meaning women and men see each other’s common humanity—I said last year and I say again that ordinary words will do.


Update (Oct. 26): Izabella Laba has written a response to this post, for which I’m extremely grateful. Her reply reveals that she and I have a great deal of common ground, and also a few clear areas of disagreement (e.g., what’s wrong with Steven Pinker?). But my most important objection is simply that, the first time I loaded her blog, the text went directly over the rock image in the background, making it impossible to read without highlighting it.