The stupidest story I ever wrote (it was a long flight)

May 18th, 2018

All the legal maneuvers, the decades of recriminations, came down in the end to two ambiguous syllables.  No one knew why old man Memeson had named his two kids “Laurel” and “Yanny,” or why his late wife had gone along with it.  Not Laura, not Lauren, but Laurel—like, the leaves that the complacent rest on?  Poor girl.  And yet she lucked out compared to her younger brother. “Yanny”? Rhymes with fanny, seriously?  If you got picked on in school half as much as Yanny did, you too might grow up angry enough to spend half your life locked in an inheritance fight.

But people mostly tolerated the old man’s eccentricities, because he clearly knew something. All through the 1930s, Memeson Audio was building the highest-end radios and record players that money could buy.  And long after he’d outdone the competition, Memeson continued to outdo himself. At the 1939 New York World’s Fair, he proudly unveiled a prototype of his finest record player yet, the one he’d been tinkering with in his personal workshop for a decade: the Unmistakable.  Interviewed about it later, people who attended the demo swore that you couldn’t mishear a single syllable that came out of the thing, even if you were 99% deaf. No one had ever heard a machine like it—or would, perhaps, until the advent of digital audio.  On Internet forums, audiophiles still debate how exactly Memeson managed to do it with the technology of the time.  Alas, just like the other Memeson debate—about which more shortly—this one might continue indefinitely, since only one Unmistakable was ever built, and that World’s Fair was the last time anyone heard it.

The day after the triumphant demonstration, a crowd cheered as Memeson boarded a train in Grand Central Station to return to his factory near Chicago, there to supervise the mass production of Unmistakables. Meanwhile Laurel and Yanny, now both in their thirties and helping to run the family firm, stood on the platform and beamed. It hadn’t been easy to grow up with such a single-minded father, one who seemed to love his radios a million times more than them, but at a moment like this, it almost felt worth it.  When Laurel and Yanny returned to the Fair to continue overseeing the Memeson Audio exhibition, they’d be the highest-ranking representatives of the company, and would bask in their old man’s reflected glory.

In biographies, Memeson is described as a pathological recluse, who’d hole himself up in his workshop for days at a time, with strict orders not to be disturbed by anyone.  But on this one occasion—as it turned out, the last time he’d ever be seen in public—Memeson was as hammy as could be.  As the train pulled out of Grand Central, he leaned out of an open window in his private car and grinned for the cameras, waving with one arm and holding up the Unmistakable with the other.

Every schoolchild knows what happened next: the train derailed an hour later.  Along with twenty other passengers, Memeson was killed, while all that remained of his Unmistakable was a mess of wires and splintered wood.

Famously, there was one last exchange. As the train began moving, a journalist waved his hat at Memeson and called out “safe travels, sir!”

Memeson smiled and tipped his hat.

Then, noticing Laurel and Yanny on the platform, the journalist yelled to Memeson, in jest (or so he thought): “if something happens, which of these two is next in line to run the business?”

The old man had never been known for his sense of humor, and seemed from his facial expression (or so witnesses would later say) to treat the question with utmost seriousness. As the train receded into the distance, he shouted—well, everyone agrees that it was two syllables. But which? With no written will to consult—one of Memeson’s many idiosyncrasies was his defiance of legal advice—it all came down to what people heard, or believed, or believed they heard.

On the one hand, it would of course be extremely unusual back then for a woman to lead a major technology firm. And Memeson had never shown the slightest interest in social causes: not women’s suffrage, not the New Deal, nothing. In court, Yanny’s lawyers would press these points, arguing that the old man couldn’t possibly have intended to pass on his empire to a daughter.

On the other hand, Laurel was his first-born child.  And some people said that, if Memeson had ever had a human connection with anyone, it was with her.  There were even employees who swore that, once in a while, Laurel was seen entering and leaving her dad’s workshop—a privilege the old man never extended to Yanny or anyone else. Years later, Laurel would go so far as to claim that, during these visits, she’d contributed crucial ideas to the design of the Unmistakable. Most commentators dismiss this claim as bluster: why would she wait to drop such a bombshell until she and Yanny had severed their last ties, until both siblings’ only passion in life was to destroy the other, to make the world unable to hear the other’s name?

At any rate, neither Laurel nor anyone else was ever able to build another Unmistakable, or to give a comprehensible account of how it worked.  But Laurel certainly has die-hard defenders to this day—and while I’ve tried to be evenhanded in this account, I confess to being one of them.

In the end, who people believed about this affair seemed to come down to where they stood—literally. Among the passengers in the train cars adjoining Memeson’s, the ones who heard him are generally adamant that they heard “Laurel”; while most who stood on the platform are equally insistent about “Yanny.”  Today, some Memeson scholars theorize that this discrepancy is due to the Doppler effect.  People on the platform would’ve heard a lower pitch than people comoving with Memeson, and modern reconstructions raise the possibility, however far-fetched, that this alone could “morph” one name into the other.  If we accept this, then it suggests that Memeson himself would have intended “Laurel”—but a pitch shift morphing one word into another?  Really?
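
For the skeptics among us, here’s a back-of-the-envelope check, with numbers I can only assume (say, a train receding at 30 m/s, about 65 mph, and sound traveling at 343 m/s):

    # Back-of-the-envelope Doppler check.  Assumed numbers: a train receding
    # at ~30 m/s (about 65 mph) and a speed of sound of 343 m/s.
    import math

    SPEED_OF_SOUND = 343.0   # m/s, dry air at ~20 C
    train_speed = 30.0       # m/s -- an assumed speed for a 1939 express

    # For a source receding from a stationary listener:
    #   f_observed = f_source * c / (c + v_source)
    ratio = SPEED_OF_SOUND / (SPEED_OF_SOUND + train_speed)
    semitones = 12 * math.log2(ratio)

    print(f"Frequency ratio on the platform: {ratio:.3f}")   # ~0.920
    print(f"Pitch shift in semitones: {semitones:.2f}")      # ~-1.45

Less than a whole tone of downward shift, on these assumptions: enough to color a voice, maybe, but a strange hook on which to hang an inheritance.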

Today, Laurel and Yanny are both gone, like their father and his company, but their dispute is carried on by their children and grandchildren, with several claims still winding their way through the courts.

Are there any recordings from the platform?  There is one, which was lost for generations before it unexpectedly turned up again. Alas, any hopes that this recording would definitively resolve the matter were … well, just listen to the thing.  Maybe the audio quality isn’t good enough.  Maybe an Unmistakable recording, had it existed, would’ve revealed the observer-independent truth, given us a unique map from the sensory world to the world of meaning.

The Zeroth Commandment

May 6th, 2018

“I call heaven and earth to witness against you this day, that I have set before thee life and death, the blessing and the curse: therefore choose life, that thou mayest live, thou and thy seed.” –Deuteronomy 30:19

“Remember your humanity, and forget the rest.” –Bertrand Russell and Albert Einstein, 1955


I first met Robin Hanson, professor of economics at George Mason University, in 2005, after he and I had exchanged emails about Aumann’s agreement theorem.  I’d previously read Robin’s paper about that theorem with Tyler Cowen, which is called Are Disagreements Honest?, and which stands today as one of the most worldview-destabilizing documents I’ve ever read.  In it, Robin and Tyler develop the argument that you can’t (for example) assert that

  1. you believe that extraterrestrial life probably exists,
  2. your best friend believes it probably doesn’t, and
  3. you and your friend are both honest, rational people who understand Bayes’ Theorem; you just have a reasonable difference of opinion about the alien question, presumably rooted in differing life experiences or temperaments.

For if, to borrow a phrase from Carl Sagan, you “wish to pursue the question courageously,” then you need to consider “indexical hypotheticals”: possible worlds where you and your friend swapped identities.  As far as the Bayesian math is concerned, the fact that you’re you, and your friend is your friend, is just one more contingent fact to conditionalize on: something that might affect what private knowledge you have, but that has no bearing on whether extraterrestrial life exists or doesn’t.  Once you grasp this point, so the argument goes, you should be just as troubled by the fact that your friend disagrees with you, as you would be were the disagreement between two different aspects of your self.  To put it differently: there might be a billion flavors of irrationality, but insofar as people can talk to each other and are honest and rational, they should converge on exactly the same conclusions about every matter of fact, even ones as remote-sounding as the existence of extraterrestrial life.

When I read this, my first reaction was that it was absurdly wrong and laughable.  I confess that I was even angry, to see something so counter to everything I knew asserted with such blithe professorial confidence.  Yet, in a theme that will surely be familiar to anyone who’s engaged with Robin or his writing, I struggled to articulate exactly why the argument was wrong.  My first guess was that, just like typical straitjacketed economists, Robin and Tyler had simply forgotten that real humans lack unlimited time to think and converse with each other.  Putting those obvious limitations back into the theory, I felt, would surely reinstate the verdict of common sense: that of course two people can agree to disagree without violating any dictates of rationality.

Now, if only I’d had the benefit of a modern education on Twitter and Facebook, I would’ve known that I could’ve stopped right there, with the first counterargument that popped into my head.  I could’ve posted something like the following on all my social media accounts:

“Hanson and Cowen, typical narrow-minded economists, ludicrously claim that rational agents with common priors can’t agree to disagree. They stupidly ignore the immense communication and computation that reaching agreement would take.  Why are these clowns allowed to teach?  SAD!”

Alas, back in 2003, I hadn’t yet been exposed to the epistemological revolution wrought by the 280-character smackdown, so I got the idea into my head that I actually needed to prove my objection was as devastating as I thought.  So I sat down with pen and paper for some hours—and discovered, to my astonishment, that my objection didn’t work at all.  According to my complexity-theoretic refinement of Aumann’s agreement theorem, which I later published in STOC’2005, two Bayesian agents with a common prior can ensure that they agree to within ±ε about the value of a [0,1]-valued random variable, with probability at least 1-δ over their shared prior, by exchanging only O(1/(δε²)) bits of information—completely independent of how much knowledge the agents have.  My conclusion was that, if Aumann’s Nobel-prizewinning theorem fails to demonstrate the irrationality of real-life disagreements, then it’s not for reasons of computational or communication efficiency; it has to be for other reasons instead.  (See also my talk on this at the SPARC summer camp.)
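
To make the flavor of such agreement results concrete, here’s a toy simulation (my own illustration of Bayesian agreement in an especially simple conjugate-prior setting, not the actual protocol from the STOC paper).  Two agents share a Beta(1,1) prior on a coin’s bias, each privately observes some flips, and the sample sizes are common knowledge; announcing a posterior mean then reveals one’s private head-count, so a single exchange suffices for exact agreement:

    # Toy illustration of Bayesian agreement under a common prior -- my own
    # example, not the protocol from the STOC'2005 paper.  Two agents share
    # a Beta(1,1) prior on a coin's unknown bias; sample sizes are common
    # knowledge, so announcing a posterior mean reveals one's head-count.
    import random

    random.seed(2005)
    TRUE_BIAS = 0.7

    def observe(n):
        """Privately observe n coin flips; return the number of heads."""
        return sum(random.random() < TRUE_BIAS for _ in range(n))

    n_alice, n_bob = 40, 25                  # publicly known sample sizes
    k_alice, k_bob = observe(n_alice), observe(n_bob)

    # Each agent's private posterior mean under the shared Beta(1,1) prior:
    alice = (1 + k_alice) / (2 + n_alice)
    bob = (1 + k_bob) / (2 + n_bob)
    print(f"Before exchange: Alice {alice:.3f}, Bob {bob:.3f}")

    # Alice announces her estimate; Bob inverts it to recover k_alice (and
    # vice versa).  Both then condition on the pooled data -- and agree.
    pooled = (1 + k_alice + k_bob) / (2 + n_alice + n_bob)
    print(f"After exchange:  both say {pooled:.3f}")

In this toy case one round suffices because each announcement is fully informative; the content of the O(1/(δε²)) bound is that even without such lucky structure, approximate agreement requires an amount of communication independent of how much the agents know.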

In my and Robin’s conversations—first about Aumann’s theorem, then later about the foundations of quantum mechanics and AI and politics and everything else you can imagine—Robin was unbelievably generous with his time and insights, willing to spend days with me, then a totally unknown postdoc, to get to the bottom of whatever dispute was at hand.  When I visited Robin at George Mason, I got to meet his wife and kids, and see for myself the almost comical contrast between the conventional nature of his family life and the destabilizing radicalism (some would say near-insanity) of his thinking.  But I’ll say this for Robin: I’ve met many eccentric intellectuals in my life, but I have yet to meet anyone whose curiosity is more genuine than Robin’s, or whose doggedness in following a chain of reasoning is more untouched by considerations of what all the cool people will say about him at the other end.

So if you believe that the life of the mind benefits from a true diversity of opinions, from thinkers who defend positions that actually differ in novel and interesting ways from what everyone else is saying—then no matter how vehemently you disagree with any of his views, Robin seems like the prototype of what you want more of in academia.  To anyone who claims that Robin’s apparent incomprehension of moral taboos, his puzzlement about social norms, are mere affectations masking some sinister Koch-brothers agenda, I reply: I’ve known Robin for years, and while I might be ignorant of many things, on this I know you’re mistaken.  Call him wrongheaded, naïve, tone-deaf, insensitive, even an asshole, but don’t ever accuse him of insincerity or hidden agendas.  Are his open, stated agendas not wild enough for you??

In my view, any assessment of Robin’s abrasive, tone-deaf, and sometimes even offensive intellectual style has to grapple with the fact that, over his career, Robin has originated not one but several hugely important ideas—and his ability to do so strikes me as clearly related to his style, not easily detachable from it.  Most famously, Robin is one of the major developers of prediction markets, and also the inventor of futarchy—a proposed system of government that would harness prediction markets to get well-calibrated assessments of the effects of various policies.  Robin also first articulated the concept of the Great Filter in the evolution of life in our universe.  It’s Great Filter reasoning that tells us, for example, that if we ever discover fossil microbial life on Mars (or worse yet, simple plants and animals on extrasolar planets), then we should be terrified, because it would mean that several solutions to the Fermi paradox that don’t involve civilizations like ours killing themselves off would have been eliminated.  Sure, once you say it, it sounds pretty obvious … but did you think of it?

Earlier this year, Robin published a book together with Kevin Simler, entitled The Elephant In The Brain: Hidden Motives In Everyday Life.  I was happy to provide feedback on the manuscript and then to offer a jacket blurb (though the publisher cut nearly everything I wrote, leaving only that I considered the book “a masterpiece”).  The book’s basic thesis is that a huge fraction of human behavior, possibly the majority of it, is less about its ostensible purpose than about signaling what kind of people we are—and that this has implications for healthcare and education spending, among many other topics.  (Thus, the book covers some of the same ground as The Case Against Education, by Robin’s GMU colleague Bryan Caplan, which I reviewed here.)

I view The Elephant In The Brain as Robin’s finest work so far, though a huge part of the credit surely goes to Kevin Simler.  Robin’s writing style tends to be … spare.  Telegraphic.  He gives you the skeleton of an argument, but leaves it to you to add the flesh: the historical context, the real-world examples, the caveats.  And he never holds your hand by saying anything like: “I know this is going to sound weird, but…”  Robin doesn’t care how weird it sounds.  With EITB, you get the best of both worlds: Robin’s unique-on-this-planet trains of logic, and Kevin’s considerable gifts at engaging prose.  It’s a powerful combination.

I’m by no means an unqualified Hanson fan.  If you’ve ever felt completely infuriated by Robin—if you’ve ever thought, fine, maybe this guy turned out to be unpopularly right some other times, but this time he’s really just being willfully and even dangerously obtuse—then know that I’ve shared that feeling more than most over the past decade.  I recall in particular a lecture that Robin gave years ago in which he argued—and I apologize to Robin if I mangle a detail, but this was definitely the essence—that even if you grant that anthropogenic climate change will destroy human civilization and most complex ecosystems hundreds of years from now, that’s not necessarily something you should worry about, because if you apply the standard exponential time-discounting that economists apply to everything else, along with reasonable estimates for the monetary value of everything on earth, you discover that all life on earth centuries from now just isn’t worth very much in today’s dollars.

On hearing this, the familiar Hanson-emotions filled me: White-hot, righteous rage.  Zeal to cut Robin down, put him in his place, for the sake of all that’s decent in humanity.  And then … confusion about where exactly his argument fails.

For whatever it’s worth, I’d probably say today that Robin is wrong on this, because economists’ exponential discounting implicitly assumes that civilization’s remarkable progress of the last few centuries will continue unabated, which is the very point that the premise of the exercise denies.  But notice what I can’t say: “shut up Robin, we’ve all heard this right-wing libertarian nonsense before.”  Even when Robin spouts nonsense, it’s often nonsense that no one has heard before, brought back from intellectual continents that wouldn’t be on the map had Robin not existed.


So why am I writing about Robin now?  If you haven’t been living in a non-wifi-equipped cave, you probably know the answer.

A week ago, alas, Robin blogged his confusion about why the people most concerned about inequalities of wealth never seem to be concerned about inequalities of romantic and sexual fulfillment—even though, in other contexts, those same people would probably affirm that relationships are much more important to their personal happiness than wealth is.  As a predictable result of his prodding this angriest hornet’s-nest on the planet, Robin has now been pilloried all over the Internet, in terms that make the attacks on me three years ago over the comment-171 affair look tender and kind by comparison.  The attacks included a Slate hit-piece entitled “Is Robin Hanson America’s Creepiest Economist?” (though see also this in-depth followup interview), a Wonkette post entitled “This Week In Garbage Men: Incels Sympathizers [sic] Make Case for Redistribution of Vaginas,” and much more.  Particularly on Twitter, Robin’s attackers have tended to use floridly profane language, and to target his physical appearance and assumed sexual proclivities and frustrations; some call for his firing or death.  I won’t link to the stuff; you can find it.

Interestingly, many of the Twitter attacks assume that Robin himself must be an angry “incel” (short for “involuntary celibate”), since who else could treat that particular form of human suffering as worthy of reply?  Few seem to have done the 10-second research to learn that, in reality, Robin is a happily married father of two.

I noticed the same strange phenomenon during the comment-171 affair: commentators on both left and right wanted to make me the poster child for “incels,” with a few offering me advice, many swearing they would’ve guessed it immediately from my photograph.  People apparently didn’t read even a few paragraphs into my story—to the part where, once I finally acquired some of the norms that mainstream culture refuses to tell people, I enjoyed a normal or even good dating life, eventually marrying a brilliant fellow theoretical computer scientist, with whom I started raising a rambunctious daughter (who’s now 5, and who’s been joined by our 1-year-old son).  If not for this happy ending, I too might have entertained my critics’ elaborate theories about my refusal to accept my biological inferiority, my simply having lost the genetic lottery (ability to do quantum computing research notwithstanding).  But what can one do, faced with the facts?


For the record: I think that Robin should never, ever have made this comparison, and I wish he’d apologize for it now.  Had he asked my advice, I would’ve screamed “DON’T DO IT” at the top of my lungs.  I once contemplated such a comparison myself—and even though it was many years ago, in the depths of a terrifying relapse of the suicidal depression that had characterized much of my life, I still count it among my greatest regrets.  I hereby renounce and disown the comparison forever.  And I beg forgiveness from anyone who was hurt or offended by it—or for that matter, by anything else I ever said, on this blog or elsewhere.

Indeed, let me go further: if you were ever hurt or offended by anything I said, and if I can make partial restitution to you by taking some time to field your questions about quantum computing and information, or math, CS, and physics more generally, or academic career advice, or anything else where I’m said to know something, please shoot me an email.  I’m also open to donating to your favorite charity.

My view is this: the world in which a comparison between the sufferings of the romantically and the monetarily impoverished could increase normal people’s understanding of the former, is so different from our world as to be nearly unrecognizable.  To say that this comparison is outside the Overton window is a comic understatement: it’s outside the Overton galaxy.  Trying to have the conversation that Robin wanted to have on social media, is a little like trying to have a conversation about microaggressions in 1830s Alabama.  At first, your listeners will simply be confused—but their confusion will be highly unstable, like a Higgs boson, and will decay in about 10⁻²² seconds into righteous rage.

For experience shows that, if you even breathe a phrase like “the inequality of romantic and sexual fulfillment,” no one who isn’t weird in certain ways common in the hard sciences (e.g., being on the autism spectrum) will be able to parse you as saying anything other than that sex ought to be “redistributed” by the government in the same way that money is redistributed, which in turn suggests a dystopian horror scenario where women are treated like property, married against their will, and raped.  And it won’t help if you shout from the rooftops that you want nothing of this kind, oppose it as vehemently as your listeners do.  For, not knowing what else you could mean, the average person will continue to impose the nightmare scenario on anything you say, and will add evasiveness and dishonesty to the already severe charges against you.

Before going any further in this post, let me now say that any male who wants to call himself my ideological ally ought to agree to the following statement.

I hold the bodily autonomy of women—the principle that women are freely-willed agents rather than the chattel they were treated as for too much of human history; that they, not their fathers or husbands or anyone else, are the sole rulers of their bodies; and that they must never under any circumstances be touched without their consent—to be my Zeroth Commandment, the foundation-stone of my moral worldview, the starting point of every action I take and every thought I think.  This principle of female bodily autonomy, for me, deserves to be chiseled onto tablets of sapphire, placed in a golden ark adorned with winged cherubim sitting atop a pedestal inside the Holy of Holies in a temple on Mount Moriah.

This, or something close to it, is really what I believe.  And I advise any lonely young male nerd who might be reading this blog to commit to the Zeroth Commandment as well, and to the precepts of feminism more broadly.

To such a nerd, I say: yes, throughout your life you’ll encounter many men and women who will despise you for being different, in ways that you’re either powerless to change, or could change only at the cost of renouncing everything you are.  Yet, far from excusing any moral lapses on your part, this hatred simply means that you need to adhere to a higher moral standard than most people.  For whenever you stray even slightly from the path of righteousness, the people who detest nerds will leap on it excitedly, seeing irrefutable proof of all their prejudices.  Do not grant them that victory.  Do not create a Shanda fur die Normies.

I wish I believed in a God who could grant you some kind of eternal salvation, in return for adhering to a higher moral standard throughout your life, and getting in return at best grudging toleration, as well as lectures about your feminist failings by guys who’ve obeyed the Zeroth Commandment about a thousandth as scrupulously as you have.  As an atheist, though, the most I can offer you is that you can probably understand the proof of Cantor’s theorem, while most of those who despise you probably can’t.  And also: as impossible as it might seem right now, there are ways that even you can pursue the ordinary, non-intellectual kinds of happiness in life, and there will be many individuals along the way ready to help you: the ones who remember their humanity and forget their ideology.  I wish you the best.


Amid the many vitriolic responses to Robin—fanned, it must be admitted, by Robin’s own refusal to cede any ground to his critics, or to modulate his style or tone in the slightest—the one striking outlier was a New York Times essay by Ross Douthat.  This essay, which has itself now been widely panned, uses Robin as an example of how, in Douthat’s words, “[s]ometimes the extremists and radicals and weirdos see the world more clearly than the respectable and moderate and sane.”  Douthat draws an interesting parallel between Robin and the leftist feminist philosopher Amia Srinivasan, who recently published a beautifully-written essay in the London Review of Books entitled Does anyone have the right to sex?  In analyzing that question, Srinivasan begins by discussing male “incels,” but then shifts her attention to far more sympathetic cases: women and men suffering severe physical or mental disabilities (and who, in some countries, can already hire sexual surrogates with government support); who were disfigured by accidents; who are treated as undesirable for racist reasons.  Let me quote from her conclusion:

The question, then, is how to dwell in the ambivalent place where we acknowledge that no one is obligated to desire anyone else, that no one has a right to be desired, but also that who is desired and who isn’t is a political question, a question usually answered by more general patterns of domination and exclusion … the radical self-love movements among black, fat and disabled women do ask us to treat our sexual preferences as less than perfectly fixed. ‘Black is beautiful’ and ‘Big is beautiful’ are not just slogans of empowerment, but proposals for a revaluation of our values … The question posed by radical self-love movements is not whether there is a right to sex (there isn’t), but whether there is a duty to transfigure, as best we can, our desires.

All over social media, there are howls of outrage that Douthat would dare to mention Srinivasan’s essay, which is wise and nuanced and humane, in the same breath as the gross, creepy, entitled rantings of Robin Hanson.  I would say: grant that Srinivasan and Hanson express themselves extremely differently, and also that Srinivasan is a trillion times better than Hanson at anticipating and managing her readers’ reactions.  Still, on the merits, is there any relevant difference between the two cases beyond: “undesirability” of the disabled, fat, and trans should be critically examined and interrogated, because those people are objects of progressive sympathy; whereas “undesirability” of nerdy white and Asian males should be taken as a brute fact or even celebrated, because those people are objects of progressive contempt?

To be fair, a Google search also turns up progressives who, dissenting from the above consensus, excoriate Srinivasan for her foray, however thoughtful, into taboo territory.  As best I can tell, the dissenters’ argument runs like so: as much as it might pain us, we must not show any compassion to women and men who are suicidally lonely and celibate by virtue of being severely disabled, disfigured, trans, or victims of racism.  For if we did, then consistency might eventually force us to show compassion to white male nerds as well.


Here’s the central point that I think Robin failed to understand: society, today, is not on board even with the minimal claim that the suicidal suffering of men left behind by the sexual revolution really exists—or, if it does, that it matters in the slightest or deserves any sympathy or acknowledgment whatsoever.  Indeed, the men in question pretty much need to be demonized as entitled losers and creeps, because if they weren’t, then sympathy for them—at least, for those among them who are our friends, coworkers, children, siblings—might become hard to prevent.  In any event, it seems to me that until we as a society resolve the preliminary question of whether to recognize a certain category of suffering as real, there’s no point even discussing how policy or culture might help to address the suffering, consistently with the Zeroth Commandment.

Seen in this light, Robin is a bit like the people who email me every week imagining they can prove P≠NP, yet who can’t even prove astronomically easier statements, even ones that are already known.  When trying to scale an intellectual Everest, you might as well start with the weakest statement that’s already unproven or non-obvious or controversial.

So where are we today?  Within the current Overton window, a perfectly appropriate response to suicidal loneliness and depression among the “privileged” (i.e., straight, able-bodied, well-educated white or Asian men) seems to be: “just kill yourselves already, you worthless cishet scum, and remove your garbage DNA from the gene pool.”  If you think I’m exaggerating, I beseech you to check for yourself on Twitter.  I predict you’ll find that and much worse, wildly upvoted, by people who probably go to sleep every night congratulating themselves for their progressivism, their egalitarianism, and—of course—their burning hatred for anything that smacks of eugenics.

A few days ago, Ellen Pao, the influential former CEO of Reddit, tweeted:

CEOs of big tech companies: You almost certainly have incels as employees. What are you going to do about it?

Thankfully, even many leftists reacted with horror to Pao’s profoundly illiberal question.  They wondered about the logistics she had in mind: does she want tech companies to spy on their (straight, male) employees’ sex lives, or lack thereof?  If any are discovered who are (1) celibate and (2) bitter at the universe about it, then will it be an adequate defense against firing if they’re also feminists, who condemn misogyny and violence and affirm the Zeroth Commandment?  Is it not enough that these men were permanently denied the third level of Maslow’s hierarchy of needs (the one right above physical safety); must they also be denied careers as a result?  And is this supposed to prevent their radicalization?

For me, the scariest part of Pao’s proposal is that, whatever in this field is on the leftmost fringe of the Overton window today, experience suggests we’ll find it smack in the center a decade from now.  So picture a future wherein, if you don’t support rounding up and firing your company’s romantically frustrated—i.e., the policy of “if you don’t get laid, you don’t get paid”—then that itself is a shockingly reactionary attitude, and grounds for your own dismissal.

Some people might defend Pao by pointing out that she was only asking a question, not proposing a specific policy.  But then, the same is true of Robin Hanson.


Why is it so politically difficult even to show empathy toward socially awkward, romantically challenged men—to say to them, “look, I don’t know what if anything can be done about your problem, but yeah, the sheer cosmic arbitrariness of it kind of sucks, and I sympathize with you”?  Why do enlightened progressives, if they do offer such words of comfort to their “incel” friends, seem to feel about it the same way Huck Finn did, at the pivotal moment in Western literature when he decides to help his friend Jim escape from slavery—i.e., not beaming with pride over his own moral courage, but ashamed of himself, and resigned that he’ll burn in hell for the sake of a mere personal friendship?

This is a puzzle, but I think I might know the answer.  We begin with the observation that virtually every news article, every thinkpiece, every blog post about “incels” leads with contemptible mass murderers like Elliot Rodger and Alek Minassian, who sought bloody revenge on a world that failed to provide them the women to whom they felt entitled; as well as various Internet forums (many recently shut down) where this subhuman scum was celebrated by other scum.

The question is: why don’t people look at the broader picture, as they’ve learned to do in so many other cases?  In other words, why don’t they say:

  • There really do exist extremist Muslims, who bomb schools and buses, or cheer and pass out candies when that happens, and who wish to put the entire world under Sharia at the point of the sword.  Fortunately, the extremists are outnumbered by hundreds of millions of reasonable Muslims, with whom anyone, even a Zionist Jew like me, can have a friendly conversation in which we discuss our respective cultures’ grievances and how they might be addressed in a win-win manner.  (My conversations with Iranian friends sometimes end with us musing that, if only they made them Ayatollah and me Israeli Prime Minister, we could sign a peace accord next week, then go out for kebabs and babaganoush.)
  • There really are extremist leftists—Marxist-Leninist-Maoist-whateverists—who smash store windows, kill people (or did, in the 60s), and won’t be satisfied by anything short of the total abolition of private property and the heads of the capitalists lining the streets on pikes.  But they’re vastly outnumbered by the moderate progressives, like me, who are less about proletarian revolution than they are about universal healthcare, federal investment in science and technology, a carbon tax, separation of church and state, and stronger protection of national parks.
  • In exactly the same way, there are “incel extremists,” like Rodger or Minassian, spiteful losers who go on killing sprees because society didn’t give them the sex they were “owed.”  But they’re outnumbered by tens of millions of decent, peaceful people who could reasonably be called “incels”—those who desperately want romantic relationships but are unable to achieve them, because of extreme shyness, poor social skills, tics, autism-spectrum traits, lack of conventional attractiveness, bullying, childhood traumas, etc.—yet who’d never hurt a fly.  These moderates need not be “losers” in all aspects of life: many have fulfilling careers and volunteer and give to charity and love their nieces and nephews, some are world-renowned scientists and writers.  For many of the moderates, it might be true that recent cultural shifts exacerbated their problems; that an unlucky genetic dice-roll “optimized” them for a world that no longer exists.  These people deserve the sympathy and support of the more fortunate among us; they constitute a political bloc entitled to advocate for its interests, as other blocs do; and all decent people should care about how we might help them, consistently with the Zeroth Commandment.

The puzzle, again, is: why doesn’t anyone say this?

And I think the answer is simply that no one ever hears from “moderate incels.”  And the reason, in turn, becomes obvious the instant you think about it.  Would you volunteer to march at the front of the Lifelong Celibacy Awareness Parade?  Or to be identified by name as the Vice President of the League of Peaceful and Moderate Incels?  Would you accept such a social death warrant?  It takes an individual with extraordinary moral courage, such as Scott Alexander, even to write anything whatsoever about this issue that tries to understand or help the sufferers rather than condemn them.  For this reason—i.e., purely, 100% a selection effect, nothing more—the only times the wider world ever hears anything about “incels” is when some despicable lunatic like Rodger or Minassian snaps and murders the innocent.  You might call this the worst PR problem in the history of the world.


So what’s the solution?  While I’m not a Christian, I find that Jesus’ prescription of universal compassion has a great deal to recommend it here—applied liberally, like suntan lotion, to every corner of the bitter “SJW vs. incel” online debate.

The usual stereotype of nerds is that, while we might be good at memorizing facts or proving theorems or coding up filesystems, we’re horrendously deficient in empathy and compassion, constantly wanting to reduce human emotions to numbers in spreadsheets or something.  As I’ve remarked elsewhere, I’ve scarcely encountered any stereotype that rings falser to my experience.  In my younger, depressed phase, when I was metaphorically hanging on to life by my fingernails, it was nerds and social misfits who reached down to offer me a hand up, while many of the “normal, well-adjusted, socially competent” people gleefully stepped on my fingers.

But my aspiration is not merely that we nerds can do just as well at compassion as those who hate us.  Rather, I hope we can do better.  This isn’t actually such an ambitious goal.  To achieve it, all we need to do is show universal, Jesus-style compassion, to politically favored and disfavored groups alike.

To me that means: compassion for the woman facing sexual harassment, or simply quizzical glances that wonder what she thinks she’s doing pursuing a PhD in physics.  Compassion for the cancer patient, for the bereaved parent, for the victim of famine.  Compassion for the undocumented immigrant facing deportation.  Compassion for the LGBT man or woman dealing with self-doubts, ridicule, and abuse.  Compassion for the nerdy male facing suicidal depression because modern dating norms, combined with his own shyness and fear of rule-breaking, have left him unable to pursue romance or love.  Compassion for the woman who feels like an ugly, overweight, unlovable freak who no one will ask on dates.  Compassion for the African-American victim of police brutality.  Compassion even for the pedophile who’d sooner kill himself than hurt a child, but who’s been given no support for curing or managing his condition.  This is what I advocate.  This is my platform.

If I ever decided to believe the portrait of me painted by Arthur Chu, or the other anti-Aaronson Twitter warriors, then I hope I’d have the moral courage to complete their unstated modus ponens, by quietly swallowing a bottle of sleeping pills.  After all, Chu’s vision of the ideal future seems to have no more room for me in it than Eichmann’s did.  But the paradoxical corollary is that, every time I remind myself why I think Chu is wrong, it feels like a splendorous affirmation of life itself.  I affirm my love for my wife and children and parents and brother, my bonds with my friends around the world, the thrill of tackling a new research problem and sharing my progress with colleagues, the joy of mentoring students of every background and religion and gender identity, the smell of fresh-baked soft pretzels and the beauty of the full moon over the Mediterranean.  If I had to find pearls in manure, I’d say: with their every attack, the people who hate me give me a brand-new opportunity to choose life over death, and better yet to choose compassion over hatred—even compassion for the haters themselves.

(Far be it from me to psychoanalyze him, as he constantly does to me, but Chu’s unremitting viciousness doesn’t strike me as coming from a place of any great happiness with his life.  So I say: may even Mr. Chu find whatever he’s looking for.  And while his utopia might have no place for me, I’m determined that mine should have a place for him—even if it’s just playing Jeopardy! and jumping around to find the Daily Doubles.)

It’s a commonplace that sometimes, the only way you can get a transformative emotional experience—like awe at watching the first humans walk on the moon, or joy at reuniting with a loved one after a transatlantic flight—is on top of a mountain of coldly rational engineering and planning.  But the current Robin Hanson affair reminds us that the converse is true as well.  I.e., the only way we can have the sort of austere, logical, norm-flouting conversations about the social world that Robin has been seeking to have for decades, without the whole thing exploding in thermonuclear anger, is on top of a mountain of empathy and compassion.  So let’s start building that mountain.


Endnotes. Already, in my mind’s eye, I can see the Twitter warriors copying and sharing whichever sentence of this post angered them the most, using it as proof that I’m some lunatic who should never be listened to about anything. I’m practically on my hands and knees begging you here: show that my fears are unjustified.  Respond, by all means, but respond to the entirety of what I had to say.

I welcome comments, so long as they’re written in a spirit of kindness and mutual respect. But because writing this post was emotionally and spiritually draining for me—not to mention draining in, you know, time—I hope readers won’t mind if I spend a day or two away, with my wife and kids and my research, before participating in the comments myself.


Update (May 7). Numerous commenters have successfully convinced me that the word “incel,” though it literally just means “involuntary celibate,” and was in fact coined by a woman to describe her own experience, has been permanently disgraced by its association with violent misogynists and their online fan clubs.  It will never again regain its original meaning, any more than “Adolf” will ever again be just a name; nor will one be able to discuss “moderate incels” as distinct from the extremist kind.  People of conscience will need to be extremely vigilant against motte-and-bailey tactics—wherein society’s opinion-makers will express their desire for all “incels” to be silenced or fired or removed from the gene pool or whatever, obviously having in mind all romantically frustrated male nerds (all of whom they despise), and will fall back when challenged (and only when challenged) on the defense that they only meant the violence-loving misogynists.  For those of us motivated by compassion rather than hatred, though, we need another word.  I suggest the older term “love-shy,” coined by Brian Gilmartin in his book on the subject.

Meanwhile, be sure to check out this comment by “Sniffnoy” for many insightful criticisms of this post, most of which I endorse.

Review of Bryan Caplan’s The Case Against Education

April 26th, 2018

If ever a book existed that I’d judge harshly by its cover—and for which nothing inside could possibly make me reverse my harsh judgment—Bryan Caplan’s The Case Against Education would seem like it.  The title is not a gimmick; the book’s argument is exactly what it says on the tin.  Caplan—an economist at George Mason University, home of perhaps the most notoriously libertarian economics department on the planet—holds that most of the benefit of education to students (he estimates around 80%, but certainly more than half) is about signaling the students’ preexisting abilities, rather than teaching or improving the students in any way.  He includes the entire educational spectrum in his indictment, from elementary school all the way through college and graduate programs.  He does have a soft spot for education that can be shown empirically to improve worker productivity, such as technical and vocational training and apprenticeships.  In other words, precisely the kind of education that many readers of this blog may have spent their lives trying to avoid.

I’ve spent almost my whole conscious existence in academia, as a student and postdoc and then as a computer science professor.  CS is spared the full wrath that Caplan unleashes on majors like English and history: it does, after all, impart some undeniable real-world skills.  Alas, I’m not one of the CS professors who teaches anything obviously useful, like how to code or manage a project.  When I teach undergrads headed for industry, my only role is to help them understand concepts that they probably won’t need in their day jobs, such as which problems are impossible or intractable for today’s computers; among those, which might be efficiently solved by quantum computers decades in the future; and which parts of our understanding of all this can be mathematically proven.

Granted, my teaching evaluations have been [clears throat] consistently excellent.  And the courses I teach aren’t major requirements, so the students come—presumably?—because they actually want to know the stuff.  And my former students who went into industry have emailed me, or cornered me, to tell me how much my courses helped them with their careers.  OK, but how?  Often, it’s something about my class having helped them land their dream job, by impressing the recruiters with their depth of theoretical understanding.  As we’ll see, this is an “application” that would make Caplan smile knowingly.

If Caplan were to get his way, the world I love would be decimated.  Indeed, Caplan muses toward the end of the book that the world he loves would be decimated too: in a world where educational investment no longer exceeded what was economically rational, he might no longer get to sit around with other economics professors discussing what he finds interesting.  But he consoles himself with the thought that decisionmakers won’t listen to him anyway, so it won’t happen.

It’s tempting to reply to Caplan: “now now, your pessimism about anybody heeding your message seems unwarranted.  Have anti-intellectual zealots not just taken control of the United States, with an explicit platform of sticking it to the educated elites, and restoring the primacy of lower-education jobs like coal mining, no matter the long-term costs to the economy or the planet?  So cheer up, they might listen to you!”

Indeed, given the current stakes, one might simply say: Caplan has set himself against the values that are the incredibly fragile preconditions for all academic debate—even, ironically, debate about the value of academia, like the one we’re now having.  So if we want such debate to continue, then we have no choice but to treat Caplan as an enemy, and frame the discussion around how best to frustrate his goals.

In response to an excerpt of Caplan’s book in The Atlantic, my friend Sean Carroll tweeted:

It makes me deeply sad that a tenured university professor could write something like this about higher education.  There is more to learning than the labor market.

Why should anyone with my basic values, or Sean’s, give Caplan’s thesis any further consideration?  As far as I can tell, there are only two reasons: (1) common sense, and (2) the data.

In his book, Caplan presents dozens of tables and graphs, but he also repeatedly asks his readers to consult their own memories—exploiting the fact that we all have firsthand experience of school.  He asks: if education is about improving students’ “human capital,” then why are students so thrilled when class gets cancelled for a snowstorm?  Why aren’t students upset to be cheated out of some of the career-enhancing training that they, or their parents, are paying so much for?  Why, more generally, do most students do everything in their power—in some cases, outright cheating—to minimize the work they put in for the grade they receive?  Is there any product besides higher education, Caplan asks, that people pay hundreds of thousands of dollars for, and then try to consume as little of as they can get away with?  Also, why don’t more students save hundreds of thousands of dollars by simply showing up at a university and sitting in on classes without paying—something that universities make zero effort to stop?  (Many professors would be flattered, and would even add you to the course email list, entertain your questions, and give you access to the assignments—though they wouldn’t grade them.)

And: if the value of education comes from what it teaches you, how do we explain the fact that students forget almost everything so soon after the final exam, as attested by both experience and the data?  Why are employers satisfied with a years-ago degree; why don’t they test applicants to see how much understanding they’ve retained?

Or if education isn’t about any of the specific facts being imparted, but about “learning how to learn” or “learning how to think creatively”—then how is it that studies find academic coursework has so little effect on students’ general learning and reasoning abilities either?  That, when there is an improvement in reasoning ability, it’s tightly concentrated on the subject matter of the course, and even then it quickly fades away after the course is over?

More broadly, if the value of mass education derives from making people more educated, how do we explain the fact that high-school and college graduates, most of them, remain so abysmally ignorant?  After 12-16 years in something called “school,” large percentages of Americans still don’t know that the earth orbits the sun; believe that heavier objects fall faster than lighter ones and that only genetically modified organisms contain genes; and can’t locate the US or China on a map.  Are we really to believe, asks Caplan, that these apparent dunces have nevertheless become “deeper thinkers” by virtue of their schooling, in some holistic, impossible-to-measure way?  Or that they would’ve been even more ignorant without school?  But how much more ignorant can you be?  They could be illiterate, yes: Caplan grants the utility of teaching reading, writing, and arithmetic.  But how much beyond the three R’s (if those) do typical students retain, let alone use?

Caplan also poses the usual questions: if you’re not a scientist, engineer, or academic (or even if you are), how much of your undergraduate education do you use in your day job?  How well did the course content match what, in retrospect, you feel someone starting your job really needs to know?  Could your professors do your job?  If not, then how were they able to teach you to do it better?

Caplan acknowledges the existence of inspiring teachers who transform their students’ lives, in ways that need not be reflected in their paychecks: he mentions Robin Williams’s character in Dead Poets Society.  But he asks: how many such teachers did you have?  If the Robin Williamses are vastly outnumbered by the drudges, then wouldn’t it make more sense for students to stream the former directly into their homes via the Internet—as they can now do for free?

OK, but if school teaches so little, then how do we explain the fact that, at least for those students who are actually able to complete good degrees, research confirms that (on average) having gone to school really does pay, exactly as advertised?  Employers do pay more for a college graduate—yes, even an English or art history major—than for a dropout.  More generally, starting salary rises monotonically with level of education completed.  Employers aren’t known for a self-sacrificing eagerness to overpay.  Are they systematically mistaken about the value of school?

Synthesizing decades of work by other economists, Caplan defends the view that the main economic function of school is to give students a way to signal their preexisting qualities, ones that correlate with being competent workers in a modern economy.  I.e., that school is tacitly a huge system for winnowing and certifying young people, which also fulfills various subsidiary functions, like keeping said young people off the street, socializing them, maybe occasionally even teaching them something.  Caplan holds that, judged as a certification system, school actually works—well enough to justify graduates’ higher starting salaries, without needing to postulate any altruistic conspiracy on the part of employers.

For Caplan, a smoking gun for the signaling theory is the huge salary premium of an actual degree, compared to the relatively tiny premium for each additional year of schooling other than the degree year—even when we hold everything else constant, like the students’ academic performance.  In Caplan’s view, this “sheepskin effect” even lets us quantitatively estimate how much of the salary premium on education reflects actual student learning, as opposed to the students signaling their suitability to be hired in a socially approved way (namely, with a diploma or “sheepskin”).
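
To see the logic of the estimate, here’s a sketch with made-up numbers (Caplan’s own figures come from richer data): suppose each of the first three years of college adds 4% to earnings, while the degree-granting fourth year adds 18%.  If genuine learning accrues at the ordinary per-year rate, the spike at graduation gets attributed to signaling:

    # Sheepskin-effect decomposition with made-up illustrative numbers;
    # Caplan's actual estimates draw on richer data, but the logic is similar.
    per_year_premium = [0.04, 0.04, 0.04, 0.18]   # earnings bump per year;
                                                  # the spike is the degree year

    total = sum(per_year_premium)
    # Credit the typical non-degree-year return to learning in all four years...
    learning = (sum(per_year_premium[:-1]) / 3) * len(per_year_premium)
    # ...and the excess return of the degree year to pure signaling.
    signaling = total - learning

    print(f"Total premium:   {total:.0%}")              # 30%
    print(f"Learning share:  {learning / total:.0%}")   # ~53%
    print(f"Signaling share: {signaling / total:.0%}")  # ~47%

Push the degree-year spike higher, or the other years’ returns lower, and the signaling share climbs toward Caplan’s own estimate of around 80%.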

Caplan knows that the signaling story raises an immediate problem: namely, if employers just want the most capable workers, then knowing everything above, why don’t they eagerly recruit teenagers who score highly on the SAT or IQ tests?  (Or why don’t they make job offers to high-school seniors with Harvard acceptance letters, skipping the part where the seniors have to actually go to Harvard?)

Some people think the answer is that employers fear getting sued: in the 1971 Griggs v. Duke Power case, the US Supreme Court placed restrictions on the use of intelligence tests in hiring, because of their disparate impact on minorities.  Caplan, however, rejects this explanation, pointing out that it would be child’s play for employers to design interview processes that functioned as proxy IQ tests, were that what the employers wanted.

Caplan’s theory is instead that employers don’t value intelligence alone: they care about the conjunction of intelligence with two other traits, conscientiousness and conformity.  They want smart workers who will also show up on time, reliably turn in the work they’re supposed to, and jump through whatever hoops authorities put in front of them.  The main purpose of school, over and above certifying intelligence, is to serve as a hugely costly and time-consuming—and therefore reliable—signal that the graduates are indeed conscientious conformists.  The sheer game-theoretic wastefulness of the whole enterprise rivals the peacock’s tail or the bowerbird’s ornate bower.

But if true, this raises yet another question.  In the signaling story, graduating students (and their parents) are happy that the students’ degrees land them good jobs.  Employers are happy that the education system supplies them with valuable workers, pre-screened for intelligence, conscientiousness, and conformity.  Even professors are happy that they get paid to do research and teach about topics that interest them, however irrelevant those topics might be to the workplace.  So if so many people are happy, who cares if, from an economic standpoint, it’s all a big signaling charade, with very little learning taking place?

For Caplan, the problem is this: because we’ve all labored under the mistaken theory that education imparts vital skills for a modern economy, there are trillions of dollars of government funding for every level of education—and that, in turn, removes the only obstacle to a credentialing arms race.  The equilibrium keeps moving over the decades, with more and more years of mostly-pointless schooling required to prove the same level of conscientiousness and conformity as before.  Jobs that used to require only a high-school diploma now require a bachelor’s; jobs that used to require only a bachelor’s now require a master’s, and so on—despite the fact that the jobs themselves don’t seem to have changed appreciably.

For Caplan, a thoroughgoing libertarian, the solution is as obvious as it is radical: abolish government funding for education.  (Yes, he explicitly advocates a complete “separation of school and state.”)  Or if some state role in education must be retained, then let it concentrate on the three R’s and on practical job skills.  But what should teenagers do, if we’re no longer urging them to finish high school?  Apparently worried that he hasn’t yet outraged liberals enough, Caplan helpfully suggests that we relax the laws around child labor.  After all, he says, if we’ve decided anyway that teenagers who aren’t academically inclined should suffer through years of drudgery, then instead of warming a classroom seat, why shouldn’t they apprentice themselves to a carpenter or a roofer?  That way they could contribute to the economy, and gain the independence from their parents that most of them covet, and learn skills that they’d be much more likely to remember and use than the dissection of owl pellets.  Even if working a real job involved drudgery, at least it wouldn’t be as pointless as the drudgery of school.

Given his conclusions, and the way he arrives at them, Caplan realizes that he’ll come across to many as a cartoon stereotype of a narrow-minded economist, who “knows the price of everything but the value of nothing.”  So he includes some final chapters in which, setting aside the charts and graphs, he explains how he really feels about education.  This is the context for what I found to be the most striking passages in the book:

I am an economist and a cynic, but I’m not a typical cynical economist.  I’m a cynical idealist.  I embrace the ideal of transformative education.  I believe wholeheartedly in the life of the mind.  What I’m cynical about is people … I don’t hate education.  Rather I love education too much to accept our Orwellian substitute.  What’s Orwellian about the status quo?  Most fundamentally, the idea of compulsory enlightenment … Many idealists object that the Internet provides enlightenment only for those who seek it.  They’re right, but petulant to ask for more.  Enlightenment is a state of mind, not a skill—and state of mind, unlike skill, is easily faked.  When schools require enlightenment, students predictably respond by feigning interest in ideas and culture, giving educators a false sense of accomplishment. (p. 259-261)

OK, but if one embraces the ideal, then rather than dynamiting the education system, why not work to improve it?  According to Caplan, the answer is that we don’t know whether it’s even possible to build a mass education system that actually works (by his lights).  He says that, if we discover that we’re wasting trillions of dollars on some sector, the first order of business is simply to stop the waste.  Only later should we entertain arguments about whether we should restart the spending in some new, better way, and we shouldn’t presuppose that the sector in question will win out over others.


Above, I took pains to set out Caplan’s argument as faithfully as I could, before trying to pass judgment on it.  At some point in a review, though, the hour of judgment arrives.

I think Caplan gets many things right—even unpopular things that are difficult for academics to admit.  It’s true that a large fraction of what passes for education doesn’t deserve the name—even if, as a practical matter, it’s far from obvious how to cut that fraction without also destroying what’s precious and irreplaceable.  He’s right that there’s no sense in badgering weak students to go to college if those students are just going to struggle and drop out and then be saddled with debt.  He’s right that we should support vocational education and other non-traditional options to serve the needs of all students.  Nor am I scandalized by the thought of teenagers apprenticing themselves to craftspeople, learning skills that they’ll actually value while gaining independence and starting to contribute to society.  This, it seems to me, is a system that worked for most of human history, and it would have to fail pretty badly in order to do worse than, let’s say, the average American high school.  And in the wake of the disastrous political upheavals of the last few years, I guess the entire world now knows that, when people complain that the economy isn’t working well enough for non-college-graduates, we “technocratic elites” had better have a better answer ready than “well then go to college, like we did.”

Yes, probably the state has a compelling interest in trying to make sure nearly everyone is literate, and probably most 8-year-olds have no clue what’s best for themselves.  But at least from adolescence onward, I think that enormous deference ought to be given to students’ choices.  The idea that “free will” (in the practical rather than metaphysical sense) descends on us like a halo on our 18th birthdays, having been absent beforehand, is an obvious fiction.  And we all know it’s fiction—but it strikes me as often a destructive fiction, when law and tradition force us to pretend that we believe it.

Some of Caplan’s ideas dovetail with the thoughts I’ve had myself since childhood on how to make the school experience less horrible—though I never framed my own thoughts as “against education.”  Make middle and high schools more like universities, with freedom of movement and a wide range of offerings for students to choose from.  Abolish hall passes and detentions for lateness: just like in college, the teacher is offering a resource to students, not imprisoning them in a dungeon.  Don’t segregate by age; just offer a course or activity, and let kids of any age who are interested show up.  And let kids learn at their own pace.  Don’t force them to learn things they aren’t ready for: let them love Shakespeare because they came to him out of interest, rather than loathing him because he was forced down their throats.  Never, ever try to prevent kids from learning material they are ready for: instead of telling an 11-year-old teaching herself calculus to go back to long division until she’s the right age (does that happen? ask how I know…), say: “OK hotshot, so you can differentiate a few functions, but can you handle these here books on linear algebra and group theory, like Terry Tao could have when he was your age?”

Caplan mentions preschool as the one part of the educational system that strikes him as least broken.  Not because it has any long-term effects on kids’ mental development (it might not), just because the tots enjoy it at the time.  They get introduced to a wide range of fun activities.  They’re given ample free time, whether for playing with friends or for building or drawing by themselves.  They’re usually happy to be dropped off.  And we could add: no one normally minds if parents drop their kids off late, or pick them up early, or take them out for a few days.  The preschool is just a resource for the kids’ benefit, not a never-ending conformity test.  As a father who’s now seen his daughter through three preschools, I can report that this matches my experience.

Having said all this, I’m not sure I want to live in the world of Caplan’s “complete separation of school and state.”  And I’m not using “I’m not sure” only as a euphemism for “I don’t.”  Caplan is proposing a radical change that would take civilization into uncharted territory: as he himself notes, there’s not a single advanced country on earth that’s done what he advocates.  The trend has everywhere been in the opposite direction, to invest more in education as countries get richer and more technology-based.  Where there have been massive cutbacks to education, the causes have usually been things like famine or war.

So I have the same skepticism of Caplan’s project that I’d have (ironically) of Bolshevism or any other revolutionary project.  I say to him: don’t just persuade me, show me.  Show me a case where this has worked.  In the social world, unlike the mathematical world, I put little stock in long chains of reasoning unchecked by experience.

Caplan explicitly invites his readers to test his assertions against their own lives.  When I do so, I come back with a mixed verdict.  Before college, as you may have gathered, I find much to be said for Caplan’s thesis that the majority of school is makework, the main purposes of which are to keep the students out of trouble and on the premises, and to certify their conscientiousness and conformity.  There are inspiring teachers here and there, but they’re usually swimming against the tide.  I still feel lucky that I was able to finagle my way out by age 15, and enter Clarkson University and then Cornell with only a G.E.D.

In undergrad, on the other hand, and later in grad school at Berkeley, my experience was nothing like what Caplan describes.  The professors were actual experts: people who I looked up to or even idolized.  I wanted to learn what they wanted to teach.  (And if that ever wasn’t the case, I could switch to a different class, excepting some major requirements.)  But was it useful?

As I look back, many of my math and CS classes were grueling bootcamps on how to prove theorems, how to design algorithms, how to code.  Most of the learning took place not in the classroom but alone, in my dorm, as I struggled with the assignments—having signed up for the most advanced classes that would allow me in, and thereby left myself no escape except to prove to the professor that I belonged there.  In principle, perhaps, I could have learned the material on my own, but in reality I wouldn’t have.  I don’t still use all of the specific tools I acquired, though I do still use a great many of them, from the Gram-Schmidt procedure to Gaussian integrals to finding my way around a finite group or field.  Even if I didn’t use any of the tools, though, this gauntlet is what upgraded me from another math-competition punk to someone who could actually write research papers with long proofs.  For better or worse, it made me what I am.

Just as useful as the math and CS courses were the writing seminars—places where I had to write, and where my every word got critiqued by the professor and my fellow students, so I had to do a passable job.  Again: intensive forced practice in what I now do every day.  And the fact that it was forced was now fine, because, like some leather-bound masochist, I’d asked to be forced.

On hearing my story, Caplan would be unfazed.  Of course college is immensely useful, he’d say … for those who go on to become professors, like me or him.  He “merely” questions the value of higher education for almost everyone else.

OK, but if professors are at least good at producing more people like themselves, able to teach and do research, isn’t that something, a base we can build on that isn’t all about signaling?  And more pointedly: if this system is how the basic research enterprise perpetuates itself, then shouldn’t we be really damned careful with it, lest we slaughter the golden goose?

Except that Caplan is skeptical of the entire enterprise of basic research.  He writes:

Researchers who specifically test whether education accelerates progress have little to show for their efforts.  One could reply that, given all the flaws of long-run macroeconomic data, we should ignore academic research in favor of common sense.  But what does common sense really say? … True, ivory tower self-indulgence occasionally revolutionizes an industry.  Yet common sense insists the best way to discover useful ideas is to search for useful ideas—not to search for whatever fascinates you and pray it turns out to be useful (p. 175).

I don’t know if common sense insists that, but if it does, then I feel on firm ground to say that common sense is woefully inadequate.  It’s easy to look at most basic research, and say: this will probably never be useful for anything.  But then if you survey the inventions that did change the world over the past century—the transistor, the laser, the Web, Google—you find that almost none would have happened without what Caplan calls “ivory tower self-indulgence.”  What didn’t come directly from universities came from entities (Bell Labs, DARPA, CERN) that wouldn’t have been thinkable without universities, and that themselves were largely freed from short-term market pressures by governments, like universities are.

Caplan’s skepticism of basic research reminded me of a comment in Nick Bostrom’s book Superintelligence:

A colleague of mine likes to point out that a Fields Medal (the highest honor in mathematics) indicates two things about the recipient: that he was capable of accomplishing something important, and that he didn’t.  Though harsh, the remark hints at a truth. (p. 314)

I work in theoretical computer science: a field that doesn’t itself win Fields Medals (at least not yet), but that has occasions to use parts of math that have won Fields Medals.  Of course, the stuff we use cutting-edge math for might itself be dismissed as “ivory tower self-indulgence.”  Except then the cryptographers building the successors to Bitcoin, or the big-data or machine-learning people, turn out to want the stuff we were talking about at conferences 15 years ago—and we discover to our surprise that, just as the mathematicians gave us a higher platform to stand on, so we seem to have built a higher platform for the practitioners.  The long road from Hilbert to Gödel to Turing and von Neumann to Eckert and Mauchly to Gates and Jobs is still open for traffic today.

Yes, there’s plenty of math that strikes even me as boutique scholasticism: a way to signal the brilliance of the people doing it, by solving problems that require years just to understand their statements, and whose “motivations” are about 5,000 steps removed from anything Caplan or Bostrom would recognize as motivation.  But where I part ways is that there’s also math that looked to me like boutique scholasticism, until Greg Kuperberg or Ketan Mulmuley or someone else finally managed to explain it to me, and I said: “ah, so that’s why Mumford or Connes or Witten cared so much about this.  It seems … almost like an ordinary applied engineering question, albeit one from the year 2130 or something, being impatiently studied by people a few moves ahead of everyone else in humanity’s chess game against reality.  It will be pretty sweet once the rest of the world catches up to this.”


I have a more prosaic worry about Caplan’s program.  If the world he advocates were actually brought into being, I suspect the people responsible wouldn’t be nerdy economics professors like himself, who have principled objections to “forced enlightenment” and to signaling charades, yet still maintain warm fuzzies for the ideals of learning.  Rather, the “reformers” would be more on the model of, say, Steve Bannon or Scott Pruitt or Alex Jones: people who’d gleefully take a torch to the universities, fortresses of the despised intellectual elite, not in the conviction that this wouldn’t plunge humanity back into the Dark Ages, but in the hope that it would.

When the US Congress was debating whether to cancel the Superconducting Supercollider, a few condensed-matter physicists famously testified against the project.  They thought that $10-$20 billion for a single experiment was excessive, and that they could provide way more societal value with that kind of money were it reallocated to them.  We all know what happened: the SSC was cancelled, and of the money that was freed up, 0%—absolutely none of it—went to any of the other research favored by the SSC’s opponents.

If Caplan were to get his way, I fear that the story would be similar.  Caplan talks about all the other priorities—from feeding the world’s poor to curing diseases to fixing crumbling infrastructure—that could be funded using the trillions currently wasted on runaway credential signaling.  But in any future I can plausibly imagine where the government actually axes education, the savings go to things like enriching the leaders’ cronies and launching vanity wars.

My preferences for American politics have two tiers.  In the first tier, I simply want the Democrats to vanquish the Republicans, in every office from president down to dogcatcher, in order to prevent further spiraling into nihilistic quasi-fascism, and to restore the baseline non-horribleness that we know is possible for rich liberal democracies.  Then, in the second tier, I want the libertarians and rationalists and nerdy economists and Slate Star Codex readers to be able to experiment—that’s a key word here—with whether they can use futarchy and prediction markets and pricing-in-lieu-of-regulation and other nifty ideas to improve dramatically over the baseline liberal order.  I don’t expect that I’ll ever get what I want; I’ll be extremely lucky even to get the first half of it.  But I find that my desires regarding Caplan’s program fit into the same mold.  First and foremost, save education from those who’d destroy it because they hate the life of the mind.  Then and only then, let people experiment with taking a surgical scalpel to education, removing from it the tumor of forced enlightenment, because they love the life of the mind.

Summer of the Shark

April 19th, 2018

Sometimes a single word or phrase is enough to expand your mental toolkit across almost every subject.  “Averaging argument.”  “Motte and bailey.”  “Empirically indistinguishable.”  “Overfitting.”  Yesterday I learned another such phrase: “Summer of the Shark.”

This, apparently, was the summer of 2001, when, lacking more exciting news, the media gave massive coverage to every single shark attack it could find, creating the widespread impression of an epidemic—albeit one that everyone forgot about after 9/11.  In reality, depending on what you compare it to, the rate of shark attacks was either normal or unusually low in the summer of 2001.  As far as I can tell, the situation is that the absolute number of shark attacks has been increasing over the decades, but the increase is entirely attributable to human population growth (and to way more surfers and scuba divers).  The risk per person, always minuscule (cows apparently kill five times more people), appears to have been going down.  This might or might not be related to the fact that shark populations are precipitously declining all over the world, due mostly to overfishing and finning, but also to the destruction of habitat.

There’s a tendency—I notice it in myself—to say, “fine, news outlets have overhyped this trend; that’s what they do.  But still, there must be something going on, since otherwise you wouldn’t see everyone talking about it.”

The point of the phrase “Summer of the Shark” is to remind yourself that a “trend” can be, and often is, entirely a product of people energetically looking for a certain thing, even while the actual rate of the thing is unremarkable, abnormally low, or declining.  Of course this has been a favorite theme of Steven Pinker, but I don’t know if even reading his recent books, Better Angels and Enlightenment Now, fully brought home the problem’s pervasiveness for me.  If a self-sustaining hype bubble can form even over something as relatively easy to measure as the number of shark attacks, imagine how common it must be with more nebulous social phenomena.

Without passing judgment—I’m unsure about many of them myself—how many of the following have you figured, based on the news or your Facebook or Twitter feeds, are probably some sort of epidemic?

  • Crime by illegal immigrants
  • Fraudulent voting by non-citizens
  • SJWs silencing free speech on campus
  • Unemployment in heartland America
  • Outrageous treatment of customers by airlines
  • Mass school shootings
  • Sexism in Silicon Valley
  • Racism at Starbucks

Now be honest: for how many of these do you have any real idea whether the problem is anomalously frequent relative to its historical rate, or to the analogous problems in other sectors of society?  How many seem to be epidemics that require special explanations (“the dysfunctional culture of X”), but only because millions of people started worrying about these particular problems and discussing them—in many cases, thankfully so?  How many seem to be epidemics, but only because people can now record outrageous instances with their smartphones, then make them viral on social media?

Needless to say, the discovery that a problem is no worse in domain X than it is in Y, or is better, doesn’t mean we shouldn’t fight hard to solve it in X—especially if X happens to be our business.  Set thy own house in order.  But it does mean that, if we see X but not Y attacked for its deeply entrenched, screwed-up culture, a culture that lets these things happen over and over, then we’re seeing a mistake at best, and the workings of prejudice at worst.

I’m not saying anything the slightest bit original here.  But my personal interest is less in the “Summer of the Shark” phenomenon itself than in its psychology.  Somehow, we need to figure out a trick to move this cognitive error from the periphery of consciousness to center stage.  I mustn’t treat it as just a 10% correction: something to acknowledge intellectually, before I go on to share a rage-inducing headline on Facebook anyway, once I’ve hit on a suitable reason why my initial feelings of anger were basically justified after all.  Sometimes it’s a 100% correction.  I’ve been guilty, I’m sure, of helping to spread SotS-type narratives.  And I’ve laughed when SotS narratives were uncritically wielded by others, for example in The Onion.  I should do better.

I can’t resist sharing one of history’s most famous Jewish jokes, with apologies to those who know it.  In the shtetl, a horrible rumor spreads: a Jewish man raped and murdered a beautiful little Christian girl in the forest.  Terrified, the Jews gather in the synagogue and debate what to do.  They know that the Cossacks won’t ask: “OK, but before we do anything rash, what’s the rate of Jewish perpetration of this sort of crime?  How does it compare to the Gentile rate, after normalizing by the populations’ sizes?  Also, what about Jewish victims of Gentile crimes?  Is the presence of Jews causally related to more of our children being murdered than would otherwise be?”  Instead, a mob will simply slaughter every Jew it can find.  But then, just when it seems all is lost, the rabbi runs into the synagogue and jubilantly declares: “wonderful news, everyone!  It turns out the murdered girl was Jewish!”

And now I should end this post, before it jumps the shark.


Update: This post by Scott Alexander, which I’d somehow forgotten about, makes exactly the same point, but better and more memorably. Oh well, one could do worse than to serve as a Cliff Notes and link farm for Slate Star Codex.

How to upper-bound the probability of something bad

April 13th, 2018

Scott Alexander has a new post decrying how rarely experts encode their knowledge in the form of detailed guidelines with conditional statements and loops—or what one could also call flowcharts or expert systems—rather than just blanket recommendations.  He gives, as an illustration of what he’s looking for, an algorithm that a psychiatrist might use to figure out which antidepressants or other treatments will work for a specific patient—with the huge proviso that you shouldn’t try his algorithm at home, or (most importantly) sue him if it doesn’t work.

Compared to a psychiatrist, I have the huge advantage that if my professional advice fails, normally no one gets hurt or gets sued for malpractice or commits suicide or anything like that.  OK, but what do I actually know that can be encoded in if-thens?

Well, one of the commonest tasks in the day-to-day life of any theoretical computer scientist, or mathematician of the Erdős flavor, is to upper-bound the probability that something bad will happen: for example, that your randomized algorithm or protocol will fail, or that your randomly constructed graph or code or whatever it is won’t have the properties needed for your proof.

So without further ado, here are my secrets revealed, my ten-step plan to probability-bounding and computer-science-theorizing success.

Step 1. “1” is definitely an upper bound on the probability of your bad event happening.  Check whether that upper bound is good enough.  (Sometimes, as when this is an inner step in a larger summation over probabilities, the answer will actually be yes.)

Step 2. Try using Markov’s inequality (a nonnegative random variable exceeds its mean by a factor of k at most a 1/k fraction of the time), combined with its close cousin in indispensable obviousness, the union bound (the probability that any of several bad events will happen, is at most the sum of the probabilities of each bad event individually).  About half the time, you can stop right here.
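
To make Step 2 concrete, here’s a minimal Python sketch; the toy random variable and the made-up failure probabilities are mine, purely for illustration, not part of any canonical recipe:

    import random

    # Markov's inequality: for nonnegative X, Pr[X >= k * E[X]] <= 1/k.
    # Toy X: the number of heads in 100 fair coin flips (nonnegative, mean 50).
    trials, k, mean = 100_000, 1.2, 50.0
    exceed = sum(
        sum(random.random() < 0.5 for _ in range(100)) >= k * mean
        for _ in range(trials)
    )
    print(f"empirical Pr[X >= {k}*E[X]] = {exceed / trials:.4f}  (Markov: {1 / k:.4f})")

    # Union bound: Pr[A1 or ... or Am] <= Pr[A1] + ... + Pr[Am].
    # If each of 20 bad events has probability at most 0.001, the probability
    # that ANY of them happens is at most 0.02, with no independence needed.
    print("union bound on any bad event happening:", sum([0.001] * 20))

(Notice how loose Markov is here; the empirical tail comes out far below the 1/k bound.  That looseness is exactly why the later steps exist.)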

Step 3. See if the bad event you’re worried about involves a sum of independent random variables exceeding some threshold. If it does, hit that sucker with a Chernoff or Hoeffding bound.
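
Here’s the same kind of sanity check for Step 3, again with toy parameters of my own choosing.  It compares the empirical tail of a sum of independent coin flips against the Hoeffding bound exp(-2t^2/n) for n independent variables lying in [0, 1]:

    import math
    import random

    # Hoeffding: if X1, ..., Xn are independent and each lies in [0, 1], then
    #   Pr[ X1 + ... + Xn - E[X1 + ... + Xn] >= t ] <= exp(-2 t^2 / n).
    n, t, trials = 200, 20, 50_000
    mean = n * 0.5  # each Xi is a fair coin flip
    hits = sum(
        sum(random.random() < 0.5 for _ in range(n)) - mean >= t
        for _ in range(trials)
    )
    print(f"empirical tail:  {hits / trials:.5f}")
    print(f"Hoeffding bound: {math.exp(-2 * t * t / n):.5f}")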

Step 4. If your random variables aren’t independent, see if they at least form a martingale: a fancy word for a sum of terms, each of which has a mean of 0 conditioned on all the earlier terms, even though it might depend on the earlier terms in subtler ways.  If so, Azuma your problem into submission.
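
A sketch of Step 4, using a toy martingale I made up for the occasion.  The step sizes depend on history, so the increments aren’t independent; but each increment has conditional mean 0 and magnitude at most c = 1, which is all Azuma asks for, giving Pr[Xn - X0 >= t] <= exp(-t^2 / (2 n c^2)):

    import math
    import random

    # Toy martingale: each step is +s or -s with equal probability, where the
    # step size s depends on history (1 after an up-step, 0.5 after a
    # down-step).  The increments aren't i.i.d., but each has conditional
    # mean 0 and magnitude at most c = 1, so Azuma applies.
    n, t, c, trials = 400, 40, 1.0, 20_000
    hits = 0
    for _ in range(trials):
        x, s = 0.0, 1.0
        for _ in range(n):
            up = random.random() < 0.5
            x += s if up else -s
            s = 1.0 if up else 0.5
        hits += x >= t
    print(f"empirical tail: {hits / trials:.5f}")
    print(f"Azuma bound:    {math.exp(-t * t / (2 * n * c * c)):.5f}")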

Step 5. If you don’t have a martingale, but you still feel like your random variables are only weakly correlated, try calculating the variance of whatever combination of variables you care about, and then using Chebyshev’s inequality: the probability that a random variable differs from its mean by at least k times the standard deviation (i.e., the square root of the variance) is at most 1/k^2.  If the variance doesn’t work, you can try calculating some higher moments too—just beware that, around the 6th or 8th moment, you and your notebook paper will likely both be exhausted.
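
And a sketch of Step 5, on a correlated toy process of my own devising.  (In a real proof you’d bound the variance analytically rather than estimating it from samples; the simulation is just to watch the inequality hold.)

    import math
    import random

    # Chebyshev: Pr[ |X - mu| >= k * sigma ] <= 1 / k^2.
    def sample(n=100):
        # Sum of mildly correlated bits: each bit copies the previous one
        # with probability 0.3, and is a fresh fair coin flip otherwise.
        total, prev = 0, random.random() < 0.5
        for _ in range(n):
            bit = prev if random.random() < 0.3 else (random.random() < 0.5)
            total += bit
            prev = bit
        return total

    trials, k = 50_000, 2.0
    xs = [sample() for _ in range(trials)]
    mu = sum(xs) / trials
    sigma = math.sqrt(sum((x - mu) ** 2 for x in xs) / trials)
    tail = sum(abs(x - mu) >= k * sigma for x in xs) / trials
    print(f"empirical Pr[|X - mu| >= {k}*sigma] = {tail:.4f}  (Chebyshev: {1 / k**2:.4f})")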

Step 6. OK, umm … see if you can upper-bound the variation distance between your probability distribution and a different distribution under which the bad event is already known (or easily seen) to be unlikely.  A good example of a tool you can use to upper-bound variation distance is Pinsker’s inequality.
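
To illustrate with made-up numbers: Pinsker’s inequality says TV(P, Q) <= sqrt(KL(P||Q) / 2), with KL measured in nats; and since Pr_P[bad] <= Pr_Q[bad] + TV(P, Q), a small KL divergence from a well-understood Q transfers Q’s guarantees to your distribution P.  A minimal sketch:

    import math

    # Pinsker's inequality: TV(P, Q) <= sqrt( KL(P || Q) / 2 ), KL in nats.
    # Since Pr_P[bad] <= Pr_Q[bad] + TV(P, Q), bounding the variation
    # distance to a well-understood Q bounds bad events under P too.
    def kl(p, q):
        return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

    def tv(p, q):
        return 0.5 * sum(abs(pi - qi) for pi, qi in zip(p, q))

    p = [0.5, 0.3, 0.2]  # the distribution you're stuck analyzing
    q = [0.4, 0.4, 0.2]  # a nicer distribution you already understand
    print(f"TV(P, Q)      = {tv(p, q):.4f}")
    print(f"Pinsker bound = {math.sqrt(kl(p, q) / 2):.4f}")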

Step 7. Now is the time when you start ransacking Google and Wikipedia for things like the Lovász Local Lemma, and concentration bounds for low-degree polynomials, and Hölder’s inequality, and Talagrand’s inequality, and other isoperimetric-type inequalities, and hypercontractive inequalities, and other stuff that you’ve heard your friends rave about, and have even seen successfully used at least twice, but there’s no way you’d remember off the top of your head under what conditions any of this stuff applies, or whether any of it is good enough for your application. (Just between you and me: you may have already visited Wikipedia to refresh your memory about the earlier items in this list, like the Chernoff bound.) “Try a hypercontractive inequality” is surely the analogue of the psychiatrist’s “try electroconvulsive therapy,” for a patient on whom all milder treatments have failed.

Step 8. So, these bad events … how bad are they, anyway? Any chance you can live with them?  (See also: Step 1.)

Step 9. You can’t live with them? Then back up in your proof search tree, and look for a whole different approach or algorithm, which would make the bad events less likely or even kill them off altogether.

Step 10. Consider the possibility that the statement you’re trying to prove is false—or if true, is far beyond any existing tools.  (This might be the analogue of the psychiatrist’s: consider the possibility that evil conspirators really are out to get your patient.)

Amazing progress on longstanding open problems

April 11th, 2018

For those who haven’t seen it:

  1. Aubrey de Grey, better known to the world as a radical life extension researcher, on Sunday posted a preprint on the arXiv claiming to prove that the chromatic number of the plane is at least 5—the first significant progress on the Hadwiger-Nelson problem since 1950.  If you’re tuning in from home, the Hadwiger-Nelson problem asks: what’s the minimum number of colors that you need to color the Euclidean plane, in order to ensure that every two points at distance exactly 1 from each other are colored differently?  It’s not hard to show that at least 4 colors are necessary (the 7-vertex Moser spindle is the classic witness), and that 7 colors suffice (tile the plane by hexagons slightly less than 1 across, colored with 7 colors so that same-colored hexagons are always more than unit distance apart).  Until a few days ago, nothing better was known.
    This is a problem that’s intrigued me ever since I learned about it at a math camp in 1996, and that I spent at least a day of my teenagerhood trying to solve.
    De Grey constructs an explicit graph with unit distances—originally with 1567 vertices, now with 1585 vertices after a bug was fixed—and then verifies by computer search (which takes a few hours) that 5 colors are needed for it.  (For a toy version of this kind of check, see the sketch after this list.)  Update: My good friend Marijn Heule, at UT Austin, has now apparently found a smaller such graph, with “only” 874 vertices.  See here.
    So, can we be confident that the proof will stand—i.e., that there are no further bugs?  See the comments of Gil Kalai’s post for discussion.  Briefly, though, it’s now been independently verified, using different SAT-solvers, that the chromatic number of de Grey’s corrected graph is indeed 5.  Paul Phillips emailed to tell me that he’s now independently verified that the graph is unit distance as well.  So I think it’s time to declare the result correct.
    Question for experts: is there a general principle by which we can show that, if the chromatic number of the plane is at least 6, or is 7, then there exists a finite subgraph that witnesses it?  (This is closely related to asking: what’s the logical complexity of the Hadwiger-Nelson problem?  Is it Π1?)  Update: As de Grey and a commenter pointed out to me, this is the de Bruijn-Erdős Theorem from 1951.  But the proofs inherently require the Axiom of Choice.  Assuming AC, this also gives you that Hadwiger-Nelson is a Π1 statement, since the coordinates of the points in any finite counterexample can be assumed to be algebraic.  However, this also raises the strange possibility that the chromatic number of the plane could be smaller assuming AC than not assuming it.
  2. Last week, Urmila Mahadev, a student (as was I, oh so many years ago) of Umesh Vazirani at Berkeley, posted a preprint on the arXiv giving a protocol for a quantum computer to prove the results of any computation it performs to a classical skeptic—assuming a relatively standard cryptographic assumption, namely the quantum hardness of the Learning With Errors (LWE) problem, and requiring only classical communication between the skeptic and the QC.  I don’t know how many readers remember, but way back in 2006, inspired by a $25,000 prize offered by Stephen Wolfram, I decided to offer a $25 prize to anyone who could solve the problem of proving the results of an arbitrary quantum computation to a classical skeptic, or who could give oracle evidence that a solution was impossible.  I had first learned this fundamental problem from Daniel Gottesman.
    Just a year or two later, independent work of Aharonov, Ben-Or, and Eban, and of Broadbent, Fitzsimons, and Kashefi made a major advance on the problem, by giving protocols that were information-theoretically secure.  The downside was that, in contrast to Mahadev’s new protocol, these earlier protocols required the verifier to be a little bit quantum: in particular, to exchange individual unentangled qubits with the QC.  Or, as shown by later work, the verifier could be completely classical, but only if it could send challenges to two or more quantum computers that were entangled but unable to communicate with each other.  In light of these achievements, I decided to award both groups their own checks for half the prize amount ($12.50), to be split among themselves however they chose.
    Neither with Broadbent et al.’s nor Aharonov et al.’s earlier work, nor with Mahadev’s new work, is it immediately clear whether the protocols relativize (that is, whether they work relative to an arbitrary oracle), but it’s plausible that they don’t.
    Anyway, assuming that her breakthrough result stands, I look forward to awarding Urmila the full $25 prize when I see her at the Simons Institute in Berkeley this June.
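
As promised, here’s a toy version of the kind of computer check in item 1.  This isn’t de Grey’s actual 1585-vertex computation, which needs a serious SAT solver; it’s a brute-force Python verification that the classic 7-vertex Moser spindle, the standard witness that the plane needs at least 4 colors, admits no proper 3-coloring.  The vertex numbering is mine; only the graph is standard:

    from itertools import product

    # The Moser spindle: two rhombi (each a pair of equilateral triangles)
    # sharing vertex 0, with their far tips (3 and 6) joined by a unit edge.
    edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3),   # first rhombus
             (0, 4), (0, 5), (4, 5), (4, 6), (5, 6),   # second rhombus
             (3, 6)]                                   # edge joining the tips

    def colorable(num_vertices, edges, k):
        # Brute force: does any assignment of k colors avoid every edge conflict?
        return any(
            all(col[u] != col[v] for u, v in edges)
            for col in product(range(k), repeat=num_vertices)
        )

    print("3-colorable?", colorable(7, edges, 3))  # False: 4 colors are needed
    print("4-colorable?", colorable(7, edges, 4))  # True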

Huge congratulations to Aubrey and Urmila for their achievements!


Update (April 12): My friend Virgi Vassilevska Williams asked me to announce a theoretical computer science women’s event, which will take place during the upcoming STOC in LA.


Another Update: Another friend, Holden Karnofsky of the Open Philanthropy Project, asked me to advertise that OpenPhil is looking to hire a Research Analyst and Senior Research Analyst. See also this Medium piece (“Hiring Analytical Thinkers to Help Give Away Billions”) to learn more about what the job would involve.

Two announcements

April 7th, 2018

Before my next main course comes out of the oven, I bring you two palate-cleansing appetizers:

  1. My childhood best friend Alex Halderman, whose heroic exploits helping to secure the world’s voting systems have often been featured on this blog, now has a beautifully produced video for the New York Times, entitled “I Hacked An Election.  So Can The Russians.”  Here Alex lays out the case for an audited paper trail—i.e., for what the world’s cybersecurity experts have been unanimously flailing their arms about for two decades—in terms so simple and vivid that even Congresspeople should be able to understand them.  Please consider sharing the video if you support this important cause.
  2. Jakob Nordstrom asked me to advertise the 5th Swedish Summer School in Computer Science, to be held August 5-11, 2018, in the beautiful Stockholm archipelago at Djuronaset.  This year the focus is on quantum computing, and the lecturers are two of my favorite people in the entire field: Ronald de Wolf (giving a broad intro to QC) and Oded Regev (lecturing on post-quantum cryptography).  The school is mainly for PhD students, but is also open to masters students, postdocs, and faculty.  If you wanted to spend one week getting up to speed on quantum, it’s hard for me to imagine that you’d find any opportunity more excellent.  The application deadline is April 20, so apply now if you’re interested!

30 of my favorite books

March 28th, 2018

A reader named Shozab writes:

Scott, if you had to make a list of your favourite books, which ones would you include?
And yes, you can put in quantum computing since Democritus!

Since I’ve gotten the same request before, I guess this is as good a time as any.  My ground rules:

  • I’ll only include works that I actually read and that had a big impact on me at some point in my life—not works that I feel abstractly are important or that others should read, or that I’d list just to be seen as the kind of person who recommends them.
  • But not works that impacted me before the age of about 10, since my memory of childhood reading habits is too hazy.
  • To keep things manageable, I’ll include at most one work per author.  My choices will often be idiosyncratic—i.e., not that author’s “best” work.  However, it’s usually fair to assume that if I include something by X, then I’ve also read and enjoyed other works by X, and that I might be including this work partly just as an entry point into X’s oeuvre.
  • In any case where the same author has both “deeper” and more “accessible” works, both of which I loved, I’ll choose the more accessible.  But rest assured that I also read the deeper work. 🙂
  • This shouldn’t need to be said, but since I know it does: listing a work by author X does not imply my agreement with everything X has ever said about every topic.
  • The Bible, the Homeric epics, Plato, and Shakespeare are excluded by fiat.  They’re all pretty important (or so one hears…), and you should probably read them all, but I don’t want the responsibility of picking and choosing from among them.
  • No books about the Holocaust, or other unremittingly depressing works like 1984.  Those are a special category to themselves: I’m glad that I read them, but would never read them twice.
  • The works are in order of publication date, with a single exception (see if you can spot it!).

Without further ado:

Quantum Computing Since Democritus by Scott Aaronson

Dialogue Concerning the Two Chief World Systems by Galileo Galilei

Dialogues Concerning Natural Religion by David Hume

Narrative of the Life of Frederick Douglass by himself

The Adventures of Huckleberry Finn by Mark Twain

The Subjection of Women by John Stuart Mill

The Autobiography of Charles Darwin by himself

Altneuland by Theodor Herzl

The Practice and Theory of Bolshevism by Bertrand Russell

What Is Life?: With Mind and Matter and Autobiographical Sketches by Erwin Schrödinger

Fads and Fallacies in the Name of Science by Martin Gardner

How Children Fail by John Holt

Set Theory and the Continuum Hypothesis by Paul Cohen

The Gods Themselves by Isaac Asimov (specifically, the middle third)

A History of Pi by Petr Beckmann

The Selfish Gene by Richard Dawkins

The Mind-Body Problem by Rebecca Goldstein

Alan Turing: The Enigma by Andrew Hodges

Surely You’re Joking Mr. Feynman by Richard Feynman

The Book of Numbers by John Conway and Richard Guy

The Demon-Haunted World by Carl Sagan

Gems of Theoretical Computer Science by Uwe Schöning and Randall Pruim

Fashionable Nonsense by Alan Sokal and Jean Bricmont

Our Dumb Century by The Onion

Quantum Computation and Quantum Information by Michael Nielsen and Isaac Chuang

The Blank Slate by Steven Pinker

Field Notes from a Catastrophe by Elizabeth Kolbert

Infidel by Ayaan Hirsi Ali

Logicomix by Apostolos Doxiadis and Christos Papadimitriou

The Beginning of Infinity by David Deutsch

You’re welcome to argue with me in the comments, e.g., by presenting evidence that I didn’t actually like these books. 🙂  More seriously: list your own favorites, discuss your reactions to these books, be a “human recommendation engine” by listing books that “those who liked the above would also enjoy,” whatever.

Addendum: Here’s another bonus twenty books, added as I remember more, and as commenters remind me of more, that I liked just as much as the thirty above.

The Man Who Knew Infinity by Robert Kanigel

A Mathematician’s Apology by G. H. Hardy

A Confederacy of Dunces by John Kennedy Toole

The First Three Minutes by Steven Weinberg

Breaking the Code by Hugh Whitemore

Arcadia by Tom Stoppard

Adventures of a Mathematician by Stanislaw Ulam

The Man Who Loved Only Numbers by Paul Hoffman

Mathematical Writing by Donald Knuth, Tracy Larabee, and Paul Roberts

A Beautiful Mind by Sylvia Nasar

An Introduction to Computational Learning Theory by Michael Kearns and Umesh Vazirani

The Road to Reality by Roger Penrose

The Nili Spies by Anita Engle (about the real-life heroic exploits of the Aaronsohn family)

Artificial Intelligence: A Modern Approach by Stuart Russell and Peter Norvig

The Princeton Companion to Mathematics edited by Timothy Gowers

The Making of the Atomic Bomb by Richard Rhodes

Fear No Evil by Natan Sharansky

The Mind’s I by Douglas Hofstadter and Daniel Dennett

Disturbing the Universe by Freeman Dyson

Unsong by Scott Alexander

Review of Steven Pinker’s Enlightenment Now

March 22nd, 2018

It’s not every day that I check my office mailbox and, amid the junk brochures, find 500 pages on the biggest questions facing civilization—all of them, basically—by possibly the single person on earth most qualified to tackle those questions.  That’s what happened when, on a trip back to Austin from my sabbatical, I found a review copy of Steven Pinker’s Enlightenment Now: The Case for Reason, Science, Humanism, and Progress.

I met with Steve while he was writing this book, and fielded his probing questions about the relationships among the concepts of information, entropy, randomness, Kolmogorov complexity, and coarse graining, in a way that might have affected a few paragraphs in Chapter 2.  I’m proud to be thanked in the preface—well, as “Scott Aronson.”  I have a lot of praise for the book, but let’s start with this: the omission of the second “a” from my surname was the worst factual error that I found.

If you’ve read anything else by Pinker, then you more-or-less know what to expect: an intellectual buffet that’s pure joy to devour, even if many of the dishes are ones you’ve tasted before.  For me, the writing alone is worth the admission price: Pinker is, among many other distinctions, the English language’s master of the comma-separated list.  I can see why Bill Gates recently called Enlightenment Now his “new favorite book of all time”—displacing his previous favorite, Pinker’s earlier book The Better Angels of Our Nature.  If you’ve read Better Angels, to which Enlightenment Now functions as a sort of sequel, then you know even more specifically what to expect: a saturation bombing of line graphs showing you how, despite the headlines, the world has been getting better in almost every imaginable way—graphs so thorough that they’ll eventually drag the most dedicated pessimist, kicking and screaming, into sharing Pinker’s sunny disposition, at least temporarily (but more about that later).

The other book to which Enlightenment Now bears comparison is David Deutsch’s The Beginning of Infinity.  The book opens with one of Deutsch’s maxims—“Everything that is not forbidden by laws of nature is achievable, given the right knowledge”—and Deutsch’s influence can be seen throughout Pinker’s new work, as when Pinker repeats the Deutschian mantra that “problems are solvable.”  Certainly Deutsch and Pinker have a huge amount in common: classical liberalism, admiration for the Enlightenment as perhaps the best thing that ever happened to the human species, and barely-perturbable optimism.

Pinker’s stated aim is to make an updated case for the Enlightenment—and specifically, for the historically unprecedented “ratchet of progress” that humankind has been on for the last few hundred years—using the language and concepts of the 21st century.  Some of his chapter titles give a sense of the scope of the undertaking:

  • Life
  • Health
  • Wealth
  • Inequality
  • The Environment
  • Peace
  • Safety
  • Terrorism
  • Equal Rights
  • Knowledge
  • Happiness
  • Reason
  • Science

When I read these chapter titles aloud to my wife, she laughed, as if to say: how could anyone have the audacity to write a book on just one of these enormities, let alone all of them?  But you can almost hear the gears turning in Pinker’s head as he decided to do it: well, someone ought to take stock in a single volume of where the human race is and where it’s going.  And if, with the rise of thuggish autocrats all over the world, the principles of modernity laid down by Locke, Spinoza, Kant, Jefferson, Hamilton, and Mill are under attack, then someone ought to rise to those principles’ unironic defense.  And if no one else will do it, it might as well be me!  If that’s how Pinker thought, then I agree: it might as well have been him.

I also think Pinker is correct that Enlightenment values are not so anodyne that they don’t need a defense.  Indeed, nothing demonstrates the case for Pinker’s book, the non-obviousness of his thesis, more clearly than the vitriolic reviews the book has been getting in literary venues.  Take this, for example, from John Gray in The New Statesman: “Steven Pinker’s embarrassing new book is a feeble sermon for rattled liberals.”

Pinker is an ardent enthusiast for free-market capitalism, which he believes produced most of the advance in living standards over the past few centuries. Unlike [Herbert Spencer, the founder of Social Darwinism], he seems ready to accept that some provision should be made for those who have been left behind. Why he makes this concession is unclear. Nothing is said about human kindness, or fairness, in his formula. Indeed, the logic of his dictum points the other way.

Many early-20th-century Enlightenment thinkers supported eugenic policies because they believed “improving the quality of the population” – weeding out human beings they deemed unproductive or undesirable – would accelerate the course of human evolution…

Exponents of scientism in the past have used it to promote Fabian socialism, Marxism-Leninism, Nazism and more interventionist varieties of liberalism. In doing so, they were invoking the authority of science to legitimise the values of their time and place. Deploying his cod-scientific formula to bolster market liberalism, Pinker does the same.

You see, when Pinker says he supports Enlightenment norms of reason and humanism, he really means to say that he supports unbridled capitalism and possibly even eugenics.  As I read this sort of critique, the hair stands on my neck, because the basic technique of hostile mistranslation is so familiar to me.  It’s the technique that once took a comment in which I pled for shy nerdy males and feminist women to try to understand each other’s suffering, as both navigate a mating market unlike anything in previous human experience—and somehow managed to come away with the take-home message, “so this entitled techbro wants to return to a past when society would just grant him a female sex slave.”

I’ve noticed that everything Pinker writes bears the scars of the hostile mistranslation tactic.  Scarcely does he say anything before he turns around and says, “and here’s what I’m not saying”—and then proceeds to ward off five different misreadings so wild they wouldn’t have occurred to me, but then if you read Leon Wieseltier or John Gray or his other critics, there the misreadings are, trotted out triumphantly; it doesn’t even matter how much time Pinker spent trying to prevent them.


OK, but what of the truth or falsehood of Pinker’s central claims?

I share Pinker’s sense that the Enlightenment may be the best thing that ever happened in our species’ sorry history.  I agree with his facts, and with his interpretations of the facts.  We rarely pause to consider just how astounding it is—how astounding it would be to anyone who lived before modernity—that child mortality, hunger, and disease have plunged as far as they have, and we show colossal ingratitude toward the scientists and inventors and reformers who made it possible.  (Pinker lists the following medical researchers and public health crusaders as having saved more than 100 million lives each: Karl Landsteiner, Abel Wolman, Linn Enslow, William Foege, Maurice Hilleman, John Enders.  How many of them had you heard of?  I’d heard of none.)  This is, just as Pinker says, “the greatest story seldom told.”

Beyond the facts, I almost always share Pinker’s moral intuitions and policy preferences.  He’s right that, whether we’re discussing nuclear power, terrorism, or GMOs, going on gut feelings like disgust and anger, or on vivid and memorable incidents, is a terrible way to run a civilization.  Instead we constantly need to count: how many would be helped by this course of action, how many would be harmed?  As Pinker points out, that doesn’t mean we need to become thoroughgoing utilitarians, and start fretting about whether the microscopic proto-suffering of a bacterium, multiplied by the ~10^31 bacteria that there are, outweighs every human concern.  It just means that we should heed the utilitarian impulse to quantify way more than is normally done—at the least, in every case where we’ve already implicitly accepted the underlying values, but might be off by orders of magnitude in guessing what they imply about our choices.

The one aspect of Pinker’s worldview that I don’t share—and it’s a central one—is his optimism.  My philosophical temperament, you might say, is closer to that of Rebecca Newberger Goldstein, the brilliant novelist and philosopher (and Pinker’s wife), who titled a lecture given shortly after Trump’s election “Plato’s Despair.”

Somehow, I look at the world from more-or-less the same vantage point as Pinker, yet am terrified rather than hopeful.  I’m depressed that Enlightenment values have made it so far, and yet there’s an excellent chance (it seems to me) that it will all be for naught, as civilization slides back into authoritarianism, and climate change and deforestation and ocean acidification make the one known planet fit for human habitation increasingly unlivable.

I’m even depressed that Pinker’s book has gotten such hostile reviews.  I’m depressed, more broadly, that for centuries, the Enlightenment has been met by its beneficiaries with such colossal incomprehension and ingratitude.  Save 300 million people from smallpox, and you can expect in return a lecture about your naïve and arrogant scientistic reductionism.  Or, electronically connect billions of people to each other and to the world’s knowledge, in a way beyond the imaginings of science fiction half a century ago, and people will use the new medium to rail against the gross, basement-dwelling nerdbros who made it possible, then upvote and Like each other for their moral courage in doing so.

I’m depressed by the questions: how can a human race that reacts in that way to the gifts of modernity possibly be trusted to use those gifts responsibly?  Does it even “deserve” the gifts?

As I read Pinker, I sometimes imagined a book published in 1923 about the astonishing improvements in the condition of Europe’s Jews following their emancipation.  Such a book might argue: look, obviously past results don’t guarantee future returns; all this progress could be wiped out by some freak future event.  But for that to happen, an insane number of things would need to go wrong simultaneously: not just one European country but pretty much all of them would need to be taken over by antisemitic lunatics who were somehow also hyper-competent, and who wouldn’t just harass a few Jews here and there until the lunatics lost power, but would systematically hunt down and exterminate all of them with an efficiency the world had never before seen.  Also, for some reason the Jews would need to be unable to escape to Palestine or the US or anywhere else.  So the sane, sober prediction is that things will just continue to improve, of course with occasional hiccups (but problems are solvable).

Or I thought back to just a few years ago, to the wise people who explained that, sure, for the United States to fall under the control of a racist megalomaniac like Trump would be a catastrophe beyond imagining.  Were such a comic-book absurdity realized, there’d be no point even discussing “how to get democracy back on track”; it would already have suffered its extinction-level event.  But the good news is that it will never happen, because the voters won’t allow it: a white nationalist authoritarian could never even get nominated, and if he did, he’d lose in a landslide.  What did Pat Buchanan get, less than 1% of the vote?

I don’t believe in a traditional God, but if I did, the God who I’d believe in is one who’s constantly tipping the scales of fate toward horribleness—a God who regularly causes catastrophes to happen, even when all the rational signs point toward their not happening—basically, the God who I blogged about here.  The one positive thing to be said about my God is that, unlike the just and merciful kind, I find that mine rarely lets me down.

Pinker is not blind.  Again and again, he acknowledges the depths of human evil and idiocy, the forces that even now look to many of us like they’re leaping up at Pinker’s exponential improvement curves with bared fangs.  It’s just that each time, he recommends putting an optimistic spin on the situation, because what’s the alternative?  Just to get all, like, depressed?  That would be unproductive!  As Deutsch says, problems will always arise, but problems are solvable, so let’s focus on what it would take to solve them, and on the hopeful signs that they’re already being solved.

With climate change, Pinker gives an eloquent account of the enormity of the crisis, echoing the mainstream scientific consensus in almost every particular.  But he cautions that, if we tell people this is plausibly the end of civilization, they’ll just get fatalistic and paralyzed, so it’s better to talk about solutions.  He recommends an aggressive program of carbon pricing, carbon capture and storage, nuclear power, research into new technologies, and possibly geoengineering, guided by strong international cooperation—all things I’d recommend as well.  OK, but what are the indications that anything even close to what’s needed will get done?  The right time to get started, it seems to me, was over 40 years ago.  Since then, the political forces that now control the world’s largest economy have spiraled into ever more vitriolic denial, the more urgent the crisis has gotten and the more irrefutable the evidence.  Pinker writes:

“We cannot be complacently optimistic about climate change, but we can be conditionally optimistic.  We have some practicable ways to prevent the harms and we have the means to learn more.  Problems are solvable.  That does not mean that they will solve themselves, but it does mean that we can solve them if we sustain the benevolent forces of modernity that have allowed us to solve problems so far…” (p. 154-155)

I have no doubt that conditional optimism is a useful stance to adopt, in this case as in many others.  The trouble, for me, is the gap between the usefulness of a view and its probable truth—a gap that Pinker would be quick to remind me about in other contexts.  Even if a placebo works for those who believe in it, how do you make yourself believe in what you understand to be a placebo?  Even if all it would take, for the inmates to escape a prison, is simultaneous optimism that they’ll succeed if they work together—still, how can an individual inmate be optimistic, if he sees that the others aren’t, and rationally concludes that dying in prison is his probable fate?  For me, the very thought of the earth gone desolate—its remaining land barely habitable, its oceans a sewer, its radio beacons to other worlds fallen silent—all for want of ability to coordinate a game-theoretic equilibrium, just depresses me even more.

Likewise with thermonuclear war: Pinker knows, of course, that even if there were “only” a 0.5% chance of one per year, then multiplied across the decades of the nuclear era, that’s enormously, catastrophically too high, and there have already been too many close calls.  But look on the bright side: the US and Russia have already reduced their arsenals dramatically from their Cold War highs.  There’d be every reason for optimism about continued progress, if we weren’t in this freak branch of the wavefunction where the US and Russia (not to mention North Korea and other nuclear states) were now controlled by authoritarian strongmen.

With Trump—for how could anyone avoid him in a book like this?—Pinker spends several pages reviewing the damage he’s inflicted on democratic norms, the international order, the environment, and the ideal of truth itself:

“Trump’s barefaced assertion of canards that can instantly be debunked … shows that he sees public discourse not as a means of finding common ground based on objective reality but as a weapon with which to project dominance and humiliate rivals” (p. 336).

Pinker then writes a sentence that made me smile ruefully: “Not even a congenital optimist can see a pony in this Christmas stocking” (p. 337).  Again, though, Pinker looks at poll data suggesting that Trump and the world’s other resurgent quasi-fascists are not the wave of the future, but the desperate rearguard actions of a dwindling and aging minority that feels itself increasingly marginalized by the modern world (and accurately so).  The trouble is, Nazism could also be seen as “just” a desperate, failed attempt to turn back the ratchet of cosmopolitanism and moral progress, by people who viscerally understood that time and history were against them.  Yet even though Nazism ultimately lost (which was far from inevitable, I think), the damage it inflicted on its way out was enough, you might say, to vindicate the shrillest pessimist of the 1930s.

Then there’s the matter of takeover by superintelligent AI.  I’ve now spent years hanging around communities where it’s widely accepted that “AI value alignment” is the most pressing problem facing humanity.  I strongly disagree with this view—but on reflection, not because I don’t think AI could be a threat; only because I think other, more prosaic things are much more imminent threats!  I feel the urge to invent a new, 21st-century Yiddish-style proverb: “oy, that we should only survive so long to see the AI-bots become our worst problem!”

Pinker’s view is different: he’s dismissive of the fear (even putting it in the context of the Y2K bug, and people marching around sidewalks with sandwich boards that say “REPENT”), and thinks the AI-risk folks are simply making elementary mistakes about the nature of intelligence.  Pinker’s arguments are as follows: first, intelligence is not some magic, all-purpose pixie dust, which humans have more of than animals, and which a hypothetical future AI would have more of than humans.  Instead, the brain is a bundle of special-purpose modules that evolved for particular reasons, so “the concept [of artificial general intelligence] is barely coherent” (p. 298).  Second, it’s only humans’ specific history that causes them to think immediately about conquering and taking over, as goals to which superintelligence would be applied.  An AI could have different motivations entirely—and it will, if its programmers have any sense.  Third, any AI would be constrained by the resource limits of the physical world.  For example, just because an AI hatched a brilliant plan to recursively improve itself, doesn’t mean it could execute that plan without (say) building a new microchip fab, acquiring the necessary raw materials, and procuring the cooperation of humans.  Fourth, it’s absurd to imagine a superintelligence converting the universe into paperclips because of some simple programming flaw or overliteral interpretation of human commands, since understanding nuances is what intelligence is all about:

“The ability to choose an action that best satisfies conflicting goals is not an add-on to intelligence that engineers might slap themselves in the forehead for forgetting to install; it is intelligence.  So is the ability to interpret the intentions of a language user in context” (p. 300).

I’ll leave it to those who’ve spent more time thinking about these issues to examine these arguments in detail (in the comments of this post, if they like).  But let me indicate briefly why I don’t think they fare too well under scrutiny.

For one thing, notice that the fourth argument is in fundamental tension with the first and second.  If intelligence is not an all-purpose elixir but a bundle of special-purpose tools, and if those tools can be wholly uncoupled from motivation, then why couldn’t we easily get vast intelligence expended toward goals that looked insane from our perspective?  Have humans never been known to put great intelligence in the service of ends that strike many of us as base, evil, simpleminded, or bizarre?  Consider the phrase often applied to men: “thinking with their dicks.”  Is there any sub-Einsteinian upper bound on the intelligence of the men who’ve been guilty of that?

Second, while it seems clear that there are many special-purpose mental modules—the hunting instincts of a cat, the mating calls of a bird, the pincer-grasping or language-acquisition skills of a human—it seems equally clear that there is some such thing as “general problem-solving ability,” which Newton had more of than Roofus McDoofus, and which even Roofus has more of than a chicken.  But whatever we take that ability to consist of, and whether we measure it by a scalar or a vector, it’s hard to imagine that Newton was anywhere near whatever limits on it are imposed by physics.  His brain was subject to all sorts of archaic evolutionary constraints, from the width of the birth canal to the amount of food available in the ancestral environment, and possibly also to diminishing returns on intelligence in humans’ social environment (Newton did, after all, die a virgin).  But if so, then given the impact that Newton, and others near the ceiling of known human problem-solving ability, managed to achieve even with their biology-constrained brains, how could we possibly see the prospect of removing those constraints as just a narrow technological matter, like building a faster calculator or a more precise clock?

Third, the argument about intelligence being constrained by physical limits would seem to work equally well for a mammoth or cheetah scoping out the early hominids.  The mammoth might say: yes, these funny new hairless apes are smarter than me, but intelligence is just one factor among many, and often not the decisive one.  I’m much bigger and stronger, and the cheetah is faster.  (If the mammoth did say that, it would be an unusually smart mammoth as well, but never mind.)  Of course we know what happened: from wild animals’ perspective, the arrival of humans really was a catastrophic singularity, comparable to the Chicxulub asteroid (and far from over), albeit one that took between 10^4 and 10^6 years depending on when we start the clock.  Over the short term, the optimistic mammoths would be right: pure, disembodied intelligence can’t just magically transform itself into spears and poisoned arrows that render you extinct.  Over the long term, the most paranoid mammoth on the tundra couldn’t imagine the half of what the new “superintelligence” would do.

Finally, any argument that relies on human programmers choosing not to build an AI with destructive potential has to contend with the fact that humans did invent, among other things, nuclear weapons—and moreover, for what seemed like morally impeccable reasons at the time.  And a dangerous AI would be a lot harder to keep from proliferating, since it would consist of copyable code.  And it would only take one.  You could, of course, imagine building a good AI to neutralize the bad AIs, but by that point there’s not much daylight left between you and the AI-risk people.


As you’ve probably gathered, I’m a worrywart by temperament (and, I like to think, experience), and I’ve now spent a good deal of space on my disagreements with Pinker that flow from that.  But the funny part is, even though I consistently see clouds where he sees sunshine, we’re otherwise looking at much the same scene, and our shared view also makes us want the same things for the world.  I find myself in overwhelming, nontrivial agreement with Pinker about the value of science, reason, humanism, and Enlightenment; about who and what deserves credit for the stunning progress humans have made; about which tendencies of civilization to nurture and which to recoil in horror from; about how to think and write about any of those questions; and about a huge number of more specific issues.

So my advice is this: buy Pinker’s book and read it.  Then work for a future where the book’s optimism is justified.

Hawking

March 16th, 2018

A long post is brewing (breaking my month-long silence), but as I was working on it, the sad news arrived that Stephen Hawking had passed away. There’s little I can add to the tributes that poured in from around the world: like chocolate or pizza, Hawking was beloved everywhere and actually deserved to be. Like, probably, millions of other nerds of my generation, I read A Brief History of Time as a kid and was inspired by it (though I remember being confused back then about the operational meaning of imaginary time, and am still confused about it almost 30 years later).  In terms of a scientist capturing the public imagination, through a combination of genuine conceptual breakthroughs, an enthralling personal story, an instantly recognizable countenance, and oracular pronouncements on issues of the day, the only one in the same league was Einstein. I didn’t agree with all of Hawking’s pronouncements, but the quibbles paled beside the enormous areas of agreement.  Hawking was a force for good in the world, and for the values of science, reason, and Enlightenment (to anticipate the subject of my next post).

I’m sorry that I never really met Hawking, though I did participate in two conferences that he also attended, and got to watch him slowly form sentences on his computer. At one conference in 2011, he attended my talk—this one—and I was told by mutual acquaintances that he liked it.  That meant more to me than it probably should have: who cares if some random commenters on YouTube dissed your talk, if the Hawk-Man himself approved?

As for Hawking’s talks—well, there’s a reason why they filled giant auditoriums all over the world.  Any of us in the business of science popularization would do well to study them and take lessons.

If you want a real obituary of Hawking, by someone who knew him well—one that, moreover, actually explains his main scientific contributions (including the singularity theorems, Hawking radiation, and the no-boundary proposal)—you won’t do any better than this by Roger Penrose. Also don’t miss this remembrance in Time by Hawking’s friend and betting partner, and friend-of-the-blog, John Preskill. (Added: and this by Sean Carroll.)