Archive for the ‘Nerd Interest’ Category

How can we fight online shaming campaigns?

Wednesday, February 25th, 2015

Longtime friend and colleague Boaz Barak sent me a fascinating New York Times Magazine article that profiles people who lost their jobs or otherwise had their lives ruined, because of a single remark that then got amplified a trillionfold in importance by social media.  (The author, Jon Ronson, also has a forthcoming book on the topic.)  The article opens with Justine Sacco: a woman who, about to board a flight to Cape Town, tweeted “Going to Africa.  Hope I don’t get AIDS.  Just kidding.  I’m white!”

To the few friends who read Sacco’s Twitter feed, it would’ve been obvious that she was trying to mock the belief of many well-off white people that they live in a bubble, insulated from the problems of the Third World; she wasn’t actually mocking black Africans who suffer from AIDS.  In a just world, maybe Sacco deserved to have someone take her aside and quietly explain that her tweet might be read the wrong way, that she should be more careful next time.  Instead, by the time she landed in Cape Town, she learned that she’d become the #1 worldwide Twitter trend and a global symbol of racism.  She lost her career, she lost her entire previous life, and tens of thousands of people expressed glee about it.  The article rather heartbreakingly describes Sacco’s attempts to start over.

There are many more stories like the above.  Some I’d already heard about: the father of three who lost his job after he whispered a silly joke involving “dongles” to the person next to him at a conference, whereupon Adria Richards, a woman in front of him, snapped his photo and posted it to social media, to make an example of him as a sexist pig.  (Afterwards, a counter-reaction formed, which successfully got Richards fired from her job: justice??)  Other stories I hadn’t heard.

Reading this article made it clear to me just how easily I got off, in my own recent brush with the online shaming-mobs.  Yes, I made the ‘mistake’ of writing too openly about my experiences as a nerdy male teenager, and the impact that one specific aspect of feminist thought (not all of feminism!) had had on me.  Within the context of the conversation that a few nerdy men and women were having on this blog, my opening up led to exactly the results I was hoping for: readers thoughtfully sharing their own experiences, a meaningful exchange of ideas, even (dare I say it?) glimmers of understanding and empathy.

Alas, once the comment was wrested from its original setting into the clickbait bazaar, the story became “MIT professor explains: the real oppression is having to learn to talk to women” (the title of Amanda Marcotte’s hit-piece, something even some in Marcotte’s ideological camp called sickeningly cruel).  My photo was on the front page of Salon, next to the headline “The plight of the bitter nerd.”  I was subjected to hostile psychoanalysis not once but twice on ‘Dr. Nerdlove,’ a nerd-bashing site whose very name drips with irony, rather like the ‘Democratic People’s Republic of Korea.’  There were tweets and blog comments that urged MIT to fire me, that compared me to a mass-murderer, and that “deduced” (from first principles!) all the ways in which my parents screwed up in raising me, and that my female students cower in fear of me.  And yes, when you Google me, this affair now more-or-less overshadows everything else I’ve done in my life.

But then … there were also hundreds of men and women who rose to my defense, and they were heavily concentrated among the people I most admire and respect.  My supporters ranged from the actual female students who took my classes or worked with me or who I encouraged in their careers, from whom there was only kindness, not a single negative word; to the shy nerds who thanked me for being one of the only people to acknowledge their reality; to the lesbians and bisexual women who told me my experience also resonated with them; to the female friends and colleagues who sent me notes urging me to ignore the nonsense.  In the end, not only have I not lost any friends over this, I’ve gained new ones, and I’ve learned new sides of the friends I had.

Oh, and I didn’t get any death threats: I guess that’s good!  (Once in my life I did get death threats—graphic, explicit threats, about which I had to contact the police—but it was because I refused to publicize someone’s P=NP proof.)

Since I was away from campus when this blew up, I did feel some fear about the professional backlash that would await me on my return.  Would my office be vandalized?  Would activist groups be protesting my classes?  Would MIT police be there to escort me from campus?

Well, you want to know what happened instead?  Students and colleagues have stopped me in the hall, or come by my office, just to say they support me.  My class has record enrollment this term.  I was invited to participate in MIT’s Diversity Summit, since the organizers felt it would mean a lot to the students to see someone there who had opened up about diversity issues in STEM in such a powerful way.  (I regretfully had to decline, since the summit conflicted with a trip to Stanford.)  And an MIT graduate women’s reading group invited me for a dinner discussion (at my suggestion, Laurie Penny participated as well).  Imagine that: not only are MIT’s women’s groups not picketing me, they’re inviting me over for dinner!  Is there any better answer to the claim, urged on me by some of my overzealous supporters, that the bile of Amanda Marcotte represents all of feminism these days?

Speaking of which, I met Laurie Penny for coffee last month, and she and I quickly hit it off.  We’ve even agreed to write a joint blog post about our advice for shy nerds.  (In my What I Believe post, I had promised a post of advice for shy female nerds—but at Laurie’s urging, we’re broadening the focus to shy nerds of both sexes.)  Even though Laurie’s essay is the thing that brought me to the attention of the Twitter-mobs (which wasn’t Laurie’s intent!), and even though I disagreed with several points in her essay, I knew on reading it that Laurie was someone I’d enjoy talking to.  Unlike so much writing by online social justice activists, which tends to be encrusted with the specialized technical terms of that field—you know, terms like “asshat,” “shitlord,” “douchecanoe,” and “precious feefees of entitled white dudes”—Laurie’s prose shone with humanity and vulnerability: her own, which she freely shared, and mine, which she generously acknowledged.

Overall, the response to my comment has never made me happier or more grateful to be part of the STEM community (I never liked the bureaucratic acronym “STEM,” but fine, I’ll own it).  To many outsiders, we STEM nerds are a sorry lot: we’re “sperglords” (yes, slurs are fine, as long as they’re directed against the right targets!) who might be competent in certain narrow domains, but who lack empathy and emotional depth, and are basically narcissistic children.  Yet somehow when the chips were down, it’s my fellow STEM nerds, and people who hang out with STEM nerds a lot, who showed me far more empathy and compassion than many of the “normals” did.  So if STEM nerds are psychologically broken, then I say: may I surround myself, for the rest of my life, with men and women who are psychologically broken like I am.  May I raise Lily, and any future children I have, to be as psychologically broken as they can be.  And may I stay as far as possible from anyone who’s too well-adjusted.

I reserve my ultimate gratitude for the many women in STEM, friends and strangers alike, who sent me messages of support these past two months.  I’m not ashamed to say it: witnessing how so many STEM women stood up for me has made me want to stand up for them, even more than I did before.  If they’re not called on often enough in class, I’ll call on them more.  If they’re subtly discouraged from careers in science, I’ll blatantly encourage them back.  If they’re sexually harassed, I’ll confront their harassers myself (well, if asked to).  I will listen to them, and I will try to improve.

Is it selfish that I want to help female STEM nerds partly because they helped me?  Here’s the thing: one of my deepest moral beliefs is in the obligation to fight for those among the disadvantaged who don’t despise you, and who wouldn’t gladly rid the planet of everyone like you if they could.  (As I’ve written before, on issue after issue, this belief makes me a left-winger by American standards, and a right-winger by academic ones.)  In the present context, I’d say I have a massive moral obligation toward female STEM nerds and toward Laurie Penny’s version of feminism, and none at all toward Marcotte’s version.

All this is just to say that I’m unbelievably lucky—privileged (!)—to have had so many at MIT and elsewhere willing to stand up for me, and to have reached a stage in life where I’m strong enough to say what I think and to weather anything the Internet says back.  What worries me is that others, more vulnerable, didn’t and won’t have it as easy when the Twitter hate-machine turns its barrel on them.  So in the rest of this post, I’d like to discuss the problem of what to do about social-media shaming campaigns that aim to, and do, destroy the lives of individuals.  I’m convinced that this is a phenomenon that’s only going to get more and more common: something sprung on us faster than our social norms have evolved to deal with it.  And it would be nice if we could solve it without having to wait for a few high-profile suicides.

But first, let me address a few obvious questions about why this problem is even a problem at all.

Isn’t social shaming as old as society itself—and permanent records of the shaming as old as print media?

Yes, but there’s also something fundamentally new about the problem of the Twitter-mobs.  Before, it would take someone—say, a newspaper editor—to make a conscious decision to the effect, “this comment is worth destroying someone’s life over.”  Today, there might be such an individual, but it’s also possible for lives to be destroyed in a decentralized, distributed fashion, with thousands of Twitterers collaborating to push a non-story past the point of no return.  And among the people who “break” the story, not one has to intend to ruin the victim’s life, or accept responsibility for it afterward: after all, each one made the story only ε bigger than it already was.  (Incidentally, this is one reason why I haven’t gotten a Twitter account: while it has many worthwhile uses, it’s also a medium that might as well have been designed for mobs, for ganging up, for status-seeking among allies stripped of rational arguments.  It’s like the world’s biggest high school.)

Don’t some targets of online shaming campaigns, y’know, deserve it?

Of course!  Some are genuine racists or misogynists or homophobes, who once would’ve been able to inflict hatred their entire lives without consequence, and were only brought down thanks to social media.  The trouble is, the participants in online shaming campaigns will always think they’re meting out righteous justice, whether they are or aren’t.  But there’s an excellent reason why we’ve learned in modern societies not to avenge even the worst crimes via lynch mobs.  There’s a reason why we have trials and lawyers and the opportunity for the accused to show their innocence.

Some might say that no safeguards are possible or necessary here, since we’re not talking about state violence, just individuals exercising their free speech right to vilify someone, demand their firing, that sort of thing.  Yet in today’s world, trial-by-Internet can be more consequential than the old kind of trial: would you rather spend a year in jail, but then be free to move to another town where no one knew about it, or have your Google search results tarnished with lurid accusations (let’s say, that you molested children) for the rest of your life—to have that forever prevent you from getting a job or a relationship, and have no way to correct the record?  With trial by Twitter, there’s no presumption of innocence, no requirement to prove that any other party was harmed, just the law of the schoolyard.

Whether shaming is justified in a particular case is a complicated question, but for whatever it’s worth, here are a few of the questions I would ask:

  • Did the person express a wish for anyone (or any group of people) to come to harm, or for anyone’s rights to be infringed?
  • Did the person express glee or mockery about anyone else’s suffering?
  • Did the person perpetrate a grievous factual falsehood—like, something one could prove was a falsehood in a court of law?
  • Did the person violate anyone else’s confidence?
  • How much does the speaker’s identity matter?  If it had been a man rather than a woman (or vice versa) saying parallel things, would we have taken equal offense?
  • Does the comment have what obscenity law calls “redeeming social value”?  E.g., does it express an unusual viewpoint, or lead to an interesting discussion?

Of course, even in those cases where shaming campaigns are justified, they’ll sometimes be unproductive and ill-advised.

Aren’t society’s most powerful fair targets for public criticism, even mocking or vicious criticism?

Of course.  Few would claim, for example, that we have an ethical obligation to ease up on Todd Akin over his “legitimate rape” remarks, since all the rage might give Akin an anxiety attack.  Completely apart from the (de)merits of the remarks, we accept that, when you become (let’s say) an elected official, a CEO, or a university president, part of the bargain is that you no longer get to complain if people organize to express their hatred of you.

But what’s striking about the cases in the NYT article is that it’s not public figures being gleefully destroyed: just ordinary people who, in most cases, made one ill-advised joke or tweet, no worse than countless things you or I have probably said in private among friends.  The social justice warriors try to justify what would otherwise look like bullying by shifting attention away from individuals: sure, Justine Sacco might be a decent person, but she stands for the entire category of upper-middle-class, entitled white women, a powerful structural force against whom the underclass is engaged in a righteous struggle.  Like in a war, the enemy must be fought by any means necessary, even if it means picking off one hapless enemy foot-soldier to make an example of him to the rest.  And anyway, why do you care more about this one professional white woman, than about the millions of victims of racism?  Is it because you’re a racist yourself?

I find this line of thinking repugnant.  For it perverts worthy struggles for social equality into something callous and inhuman, and thereby undermines the struggles themselves.  It seems to me to have roughly the same relation to real human rights activism as the Inquisition did to the ethical teachings of Jesus.  It’s also repugnant because of its massive chilling effect: watching a few shaming campaigns is enough to make even the most well-intentioned writer want to hide behind a pseudonym, or only offer those ideas and experiences that are sure to win approval.  And the chilling effect is not some accidental byproduct; it’s the goal.  This negates what, for me, is a large part of the promise of the Internet: that if people from all walks of life can just communicate openly, everything made common knowledge, nothing whispered or secondhand, then all the well-intentioned people will eventually come to understand each other.


If I’m right that online shaming of decent people is a real problem that’s only going to get worse, what’s the solution?  Let’s examine five possibilities.

(1) Libel law.  For generations, libel has been recognized as one of the rare types of speech that even a liberal, democratic society can legitimately censor (along with fraud, incitement to imminent violence, national secrets, child porn, and a few others).  That libel is illegal reflects a realistic understanding of the importance of reputation: if, for example, CNN falsely reports that you raped your children, then it doesn’t really matter if MSNBC later corrects the record; your life as you knew it is done.

The trouble is, it’s not clear how to apply libel law in the age of social media.  In the cases we’re talking about, an innocent person’s life gets ruined because of the collective effect of thousands of people piling on to make nasty comments, and it’s neither possible nor desirable to prosecute all of them.  Furthermore, in many cases the problem is not that the shamers said anything untrue: rather, it’s that they “merely” took something true and spitefully misunderstood it, or blew it wildly, viciously, astronomically out of proportion.  I don’t see any legal remedies here.

(2) “Shame the shamers.”  Some people will say the only answer is to hit the shamers with their own weapons.  If an overzealous activist gets an innocent jokester fired from his job, shame the activist until she’s fired from her job.  If vigilantes post the jokester’s home address on the Internet with crosshairs overlaid, find the vigilantes’ home addresses and post those.  It probably won’t surprise many people that I’m not a fan of this solution.  For it only exacerbates the real problem: that of mob justice overwhelming reasoned debate.  The most I can say in favor of vigilantism is this: you probably don’t get to complain about online shaming, if what you’re being shamed for is itself a shaming campaign that you prosecuted against a specific person.

(In a decade writing this blog, I can think of exactly one case where I engaged in what might be called a shaming campaign: namely, against the Bell’s inequality denier Joy Christian.  Christian had provoked me over six years, not merely by being forehead-bangingly wrong about Bell’s theorem, but by insulting me and others when we tried to reason with him, and by demanding prize money from me because he had ‘proved’ that quantum computing was a fraud.  Despite that, I still regret the shaming aspects of my Joy Christian posts, and will strive not to repeat them.)

(3) Technological solutions.  We could try to change the functioning of the Internet, to make it harder to use it to ruin people’s lives.  This, more-or-less, is what the European Court of Justice was going for, with its much-discussed recent ruling upholding a “right to be forgotten” (more precisely, a right for individuals to petition for embarrassing information about them to be de-listed from search engines).  Alas, I fear that the Streisand effect, the Internet’s eternal memory, and the existence of different countries with different legal systems will forever make a mockery of all such technological solutions.  But, OK, given that Google is constantly tweaking its ranking algorithms anyway, maybe it could give less weight to cruel attacks against non-public-figures?  Or more weight (or even special placement) to sites explaining how the individual was cleared of the accusations?  There might be scope for such things, but I have the strong feeling that they should be done, if at all, on a voluntary basis.

(4) Self-censorship.  We could simply train people not to express any views online that might jeopardize their lives or careers, or at any rate, not to express those views under their real names.  Many people I’ve talked to seem to favor this solution, but I can’t get behind it.  For it effectively cedes to the most militant activists the right to decide what is or isn’t acceptable online discourse.  It tells them that they can use social shame as a weapon to get what they want.  When women are ridiculed for sharing stories of anorexia or being sexually assaulted or being discouraged from careers in science, it’s reprehensible to say that the solution is to teach those women to shut up about it.  I not only agree with that but go further: privacy is sometimes important, but is also an overrated value.  The respect that one rational person affords another for openly sharing the truth (or his or her understanding of the truth), in a spirit of sympathy and goodwill, is a higher value than privacy.  And the Internet’s ability to foster that respect (sometimes!) is worth defending.

(5) Standing up.  And so we come to the only solution that I can wholeheartedly stand behind.  This is for people who abhor shaming campaigns to speak out, loudly, for those who are unfairly shamed.

At the nadir of my own Twitter episode, when it felt like my life was finished and it was time to throw in the towel, the psychiatrist Scott Alexander wrote a 10,000-word essay in my defense, which also ranged controversially into numerous other issues.  In a comment on his girlfriend Ozy’s blog, Alexander now says that he regrets aspects of Untitled (then again, it was already tagged “Things I Will Regret Writing” when he posted it!).  In particular, he now feels that the piece was too broad in its critique of feminism.  However, he then explains as follows what motivated him to write it:

Scott Aaronson is one of the nicest and most decent people in the world, who does nothing but try to expand human knowledge and support and mentor other people working on the same in a bunch of incredible ways. After a lot of prompting he exposed his deepest personal insecurities, something I as a psychiatrist have to really respect. Amanda Marcotte tried to use that to make mincemeat of him, casually, as if destroying him was barely worth her time. She did it on a site where she gets more pageviews than he ever will, among people who don’t know him, and probably stained his reputation among nonphysicists permanently. I know I have weird moral intuitions, but this is about as close to pure evil punching pure good in the face just because it can as I’ve ever seen in my life. It made me physically ill, and I mentioned [in] the comments of the post that I lost a couple pounds pacing back and forth and shaking and not sleeping after I read it. That was the place I was writing from. And it was part of what seemed to me to be an obvious trend, and although “feminists vs. nerds” is a really crude way of framing it, I couldn’t think of a better one in that mental state and I couldn’t let it pass.

I had three reactions on reading this.  First, if there is a Scott in this discussion who’s “pure good,” then it’s not I.  Second, maybe the ultimate solution to the problem of online shaming mobs is to make a thousand copies of Alexander, and give each one a laptop with an Internet connection.  But third, as long as we have only one of him, the rest of us have a lot of work cut out for us.  I know, without having to ask, that the only real way I can thank Alexander for coming to my defense, is to use this blog to defend other people (anywhere on the ideological spectrum) who are attacked online for sharing in a spirit of honesty and goodwill.  So if you encounter such a person, let me know—I’d much prefer that to letting me know about the latest attempt to solve NP-complete problems in polynomial time with some analog contraption.


Unrelated Update: Since I started this post with Boaz Barak, let me also point to his recent blog post on why theoretical computer scientists care so much about asymptotics, despite understanding full well that the constants can overwhelm them in practice.  Boaz articulates something that I’ve tried to say many times, but he’s crisper and more eloquent.


Update (Feb. 27): Since a couple people asked, I explain here what I see as the basic problems with the “Dr. Nerdlove” site.


Update (Feb. 28): In the middle of this affair, perhaps the one thing that depressed me the most was Salon’s “Plight of the bitter nerd” headline. Random idiots on the Internet were one thing, but how could a “serious,” “respectable” magazine lend its legitimacy to such casual meanness? I’ve now figured out the answer: I used to read Salon sometimes in the late 90s and early 2000s, but not since then, and I simply hadn’t appreciated how far the magazine had descended into clickbait trash. There’s an amusing fake Salon Twitter account that skewers the magazine with made-up headlines (“Ten signs your cat might be racist” / “Nerd supremacism: should we have affirmative action to get cool people into engineering?”), mixed with actual Salon headlines, in such a way that it would be difficult to tell many of them apart were they not marked. (Indeed, someone should write a web app where you get quizzed to see how well you can distinguish them.) “The plight of the bitter nerd” is offered there as one of the real headlines that’s indistinguishable from the parodies.

“The Man Who Tried to Redeem the World with Logic”

Wednesday, February 18th, 2015

No, I’m not talking about me!

Check out an amazing Nautilus article of that title by Amanda Gefter, a fine science writer of my acquaintance.  The article tells the story of Walter Pitts, who [spoiler alert] grew up on the mean streets of Prohibition-era Detroit, discovered Russell and Whitehead’s Principia Mathematica in the library at age 12 while hiding from bullies, corresponded with Russell about errors he’d found in the Principia, then ran away from home at age 15, co-invented neural networks with Warren McCulloch in 1943, became the protégé of Norbert Wiener at MIT, was disowned by Wiener because Wiener’s wife concocted a lie that Pitts and others who she hated had seduced Wiener’s daughter, and then became depressed and drank himself to death.  Interested yet?  It’s not often that I encounter a piece of nerd history that’s important and riveting and that had been totally unknown to me; this is one of the times.

Update (Feb. 19): Also in Nautilus, you can check out a fun interview with me.

Update (Feb. 24): In loosely-related news, check out a riveting profile of Geoffrey Hinton (and more generally, of deep learning, a.k.a. re-branded neural networks) in the Chronicle of Higher Education.  I had the pleasure of meeting Hinton when he visited MIT a few months ago; he struck me as an extraordinary person.  Hat tip to commenter Chris W.

Happy Second Birthday Lily

Wednesday, January 21st, 2015

[Photo: Lily with a cat]

Two years ago, I blogged when Lily was born.  Today I can blog that she runs, climbs, swims (sort of), constructs 3-word sentences, demands chocolate cake, counts to 10 in both English and Hebrew, and knows colors, letters, shapes, animals, friends, relatives, the sun, and the moon.  To all external appearances she’s now as conscious as you and I are (and considerably more so than the cat in the photo).

But the most impressive thing Lily does—the thing that puts her far beyond where her parents were at the same age, in a few areas—is her use of the iPad.  There she does phonics exercises, plays puzzle games that aren’t always trivial for me to win, and watches educational videos on YouTube (skipping past the ads, and complaining if the Internet connection goes down).  She chooses the apps and videos herself, easily switching between them when she gets bored.  It’s a sight to behold, and definitely something to try with your own toddler if you have one.  (There’s a movement these days that encourages parents to ban kids from using touch-screen devices, fearful that too much screen time will distract them from the real world.  To which I reply: for better or worse, this is the real world that our kids will grow up into.)

People often ask whether Dana and I will steer Lily into becoming a theoretical computer scientist like us.  My answer is “hell no”: I’ll support Lily in whatever she wants to do, whether that means logic, combinatorics, algebraic geometry, or even something further afield like theoretical neuroscience or physics.

As recent events illustrated, the world is not always the kindest place for nerds (male or female), with our normal ways of thinking, talking, and interacting sometimes misunderstood by others in the cruelest ways imaginable.  Yet despite everything, nerds do sometimes manage to meet, get married, and even produce offspring with nerd potential of their own.  We’re here, we’re sometimes inappropriately clear, and we’re not going anywhere.

So to life!  And happy birthday Lily!

What I believe

Tuesday, December 30th, 2014

Two weeks ago, prompted by a commenter named Amy, I wrote by far the most personal thing I’ve ever made public—what’s now being referred to in some places as just “comment 171.”  My thinking was: I’m giving up a privacy that I won’t regain for as long as I live, opening myself to ridicule, doing the blog equivalent of a queen-and-two-rook sacrifice.  But at least—and this is what matters—no one will ever again be able to question the depth of my feminist ideals.  Not after they understand how I clung to those ideals through a decade when I wanted to die.  And any teenage male nerds who read this blog, and who find themselves in a similar hole, will know that they too can get out without giving up on feminism. Surely that’s a message any decent person could get behind?

Alas, I was overoptimistic.  Twitter is now abuzz with people accusing me of holding precisely the barbaric attitudes that my story was all about resisting, defeating, and escaping, even when life throws you into those nasty attitudes’ gravity well, even when it tests you as most of your critics will never be tested.  Many of the tweets are full of the courageous clucks of those who speak for justice as long as they’re pretty sure their friends will agree with them: wow just wow, so sad how he totes doesn’t get it, expletives in place of arguments.  This whole affair makes me despair of the power of language to convey human reality—or at least, of my own ability to use language for that end.  I took the most dramatic, almost self-immolating step I could to get people to see me as I was, rather than according to some preexisting mental template of a “privileged, entitled, elite male scientist.”  And many responded by pressing down the template all the more firmly, twisting my words until they fit, and then congratulating each other for their bravery in doing so.

Here, of course, these twitterers (and redditors and facebookers) inadvertently helped make my argument for me.  Does anyone still not understand the sort of paralyzing fear that I endured as a teenager, that millions of other nerds endure, and that I tried to explain in the comment—the fear that civilized people will condemn you as soon as they find out who you really are (even if the truth seems far from uncommonly bad), that your only escape is to hide or lie?

Thankfully, not everyone responded with snarls.  Throughout the past two weeks, I’ve been getting regular emails from shy nerds who thanked me profusely for sharing as I did, for giving them hope for their own lives, and for articulating a life-crushing problem that anyone who’s spent a day among STEM nerds knows perfectly well, but that no one acknowledges in polite company.  I owe the writers of those emails more than they owe me, since they’re the ones who convinced me that on balance, I did the right thing.

I’m equally grateful to have gotten some interesting, compassionate responses from feminist women.  The most striking was that of Laurie Penny in the New Statesman—a response that others of Penny’s views should study, if they want to understand how to win hearts and change minds.

I do not intend for a moment to minimise Aaronson’s suffering. Having been a lonely, anxious, horny young person who hated herself and was bullied I can categorically say that it is an awful place to be. I have seen responses to nerd anti-feminism along the lines of ‘being bullied at school doesn’t make you oppressed.’ Maybe it’s not a vector of oppression in the same way, but it’s not nothing. It burns. It takes a long time to heal.

Feminism, however, is not to blame for making life hell for ‘shy, nerdy men.’ Patriarchy is to blame for that. It is a real shame that Aaronson picked up Dworkin rather than any of the many feminist theorists and writers who manage to combine raw rage with refusal to resort to sexual shame as an instructive tool. Weaponised shame- male, female or other- has no place in any feminism I subscribe to. Ironically, Aronson [sic] actually writes a lot like Dworkin- he writes from pain felt and relived and wrenched from the intimate core of himself, and because of that his writing is powerfully honest, but also flawed …

What fascinates me about Aaronson’s piece, in which there was such raw, honest suffering, was that there was not one mention of women in any respect other than how they might relieve him from his pain by taking pity, or educating him differently. And Aaronson is not a misogynist. Aaronson is obviously a compassionate, well-meaning and highly intelligent man [damn straight—SA]

I’ll have more to say about Penny’s arguments in a later post—where I agree and where I part ways from her—but there’s one factual point I should clear up now.  When I started writing comment 171, I filled it with anecdotes from the happier part of my life (roughly, from age 24 onward): the part where I finally became able to ask; where women, with a frequency that I couldn’t have imagined as a teenager, actually answered ‘yes’; and where I got to learn about their own fears and insecurities and quirks.  In the earlier draft, I also wrote about my wife’s experiences as a woman in computer science, which differed from Amy’s in some crucial ways.  But then I removed it all, for a simple reason: because while I have the right to bare my own soul on my blog, I don’t have the right to bare other people’s unless they want me to.

Without further ado, and for the benefit of the world’s Twitterariat, I’m now just going to state nine of my core beliefs.

1. I believe that women are authors of their own stories, that they don’t exist merely to please men, that they are not homogeneous, that they’re not slot machines that ‘pay out’ but only if you say the right things.  I don’t want my two-year-old daughter to grow up to be anyone else’s property, and I’m happy that she won’t.  And I’d hope all this would no more need to be said, than (say) that Gentiles shouldn’t be slaughtered to use their blood in making matzo.

2. I believe everyone’s story should be listened to—and concretely, that everyone should feel 300% welcome to participate in my comments section.  I don’t promise to agree with you, but I promise to try to engage your ideas thoughtfully, whether you’re a man, woman, child, AI-bot, or unusually-bright keyboard-pecking chicken.  Indeed, I spend a nontrivial fraction of my life doing exactly that (well, not so much with chickens).

3. I believe no one has the right to anyone else’s sexual affections.  I believe establishing this principle was one of the triumphs of modern civilization.

4. I believe women who go into male-dominated fields like math, CS, and physics deserve praise, encouragement, and support.  But that’s putting the point too tepidly: if I get to pick 100 people (unrelated to me) to put onto a spaceship as the earth is being destroyed, I start thinking immediately about six or seven of my female colleagues in complexity and quantum computing.  And no, Twitter: not because being female, they could help repopulate the species.  Just because they’re great people.

5. I believe there still exist men who think women are inferior, that they have no business in science, that they’re good only for sandwich-making and sex.  Though I don’t consider it legally practicable, as a moral matter I’d be fine if every such man were thrown in prison for life.

6. I believe that even if they don’t hold views anything like the above (as, overwhelmingly, they don’t), there might be nerdy males who unintentionally behave in ways that tend to drive some women away from science.  I believe this is a complicated problem best approached with charity: we want win-win solutions, where no one is made to feel despised because of who they are.  Toward that end, I believe open, honest communication (as I’ve been trying to foster on this blog) is essential.

7. I believe that no one should be ashamed of inborn sexual desires: not straight men, not straight women, not gays, not lesbians, not even pedophiles (though in the last case, there might really be no moral solution other than a lifetime of unfulfilled longing).  Indeed, I’ve always felt a special kinship with gays and lesbians, precisely because the sense of having to hide from the world, of being hissed at for a sexual makeup that you never chose, is one that I can relate to on a visceral level.  This is one reason why I’ve staunchly supported gay marriage since adolescence, when it was still radical.  It’s also why the tragedy of Alan Turing, of his court-ordered chemical castration and subsequent suicide, was one of the formative influences of my life.

8. I believe that “the problem of the nerdy heterosexual male” is surely one of the worst social problems today that you can’t even acknowledge as being a problem—the more so, if you weight the problems by how likely academics like me are to know the sufferers and to feel a personal stake in helping them. How to help all the young male nerds I meet who suffer from this problem, in a way that passes feminist muster, and that triggers the world’s sympathy rather than outrage, is a problem that interests me as much as P vs. NP, and that right now seems about equally hard.

9. I believe that, just as there are shy, nerdy men, there are also shy, nerdy women, who likewise suffer from feeling unwanted, sexually invisible, or ashamed to express their desires.  On top of that, these women also have additional difficulties that come with being women!  At the same time, I also think there are crucial differences between the two cases—at least in the world as it currently exists—which might make the shy-nerdy-male problem vastly harder to solve than the shy-nerdy-female one.  Those differences, and my advice for shy nerdy females, will be the subject of another post.  (That’s the thing about blogging: in for a penny, in for a post.)


Update (Dec. 31): I struggle always to be ready to change my views in light of new arguments and evidence. After reflecting on the many thoughtful comments here, there are two concessions that I’m now willing to make.

The first concession is that, as Laurie Penny maintained, my problems weren’t caused by feminism, but rather by the Patriarchy. One thing I’ve learned these last few days is that, as many people use it, the notion of “Patriarchy” is sufficiently elastic as to encompass almost anything about the relations between the sexes that is, or has ever been, bad or messed up—regardless of who benefits, who’s hurt, or who instigated it. So if you tell such a person that your problem was not caused by the Patriarchy, it’s as if you’ve told a pious person that a certain evil wasn’t the Devil’s handiwork: the person has trouble even parsing what you said, since within her framework, “evil” and “Devil-caused” are close to synonymous. If you want to be understood, far better just to agree that it was Beelzebub and be done with it. This might sound facetious, but it’s really not: I believe in the principle of always adopting the other side’s terms of reference, whenever doing so will facilitate understanding and not sacrifice what actually matters to you.

Smash the Patriarchy!

The second concession is that, all my life, I’ve benefited from male privilege, white privilege, and straight privilege. I would only add that, for some time, I was about as miserable as it’s possible for a person to be, so that in an instant, I would’ve traded all three privileges for the privilege of not being miserable. And if, as some suggested, there are many women, blacks, and gays who would’ve gladly accepted the other side of that trade—well then, so much the better for all of us, I guess. “Privilege” simply struck me as a pompous, cumbersome way to describe such situations: why not just say that person A’s life stinks in this way, and person B’s stinks in that way? If they’re not actively bothering each other, then why do we also need to spread person A’s stink over to person B and vice versa, by claiming they’re each “privileged” by not having the other one’s?

However, I now understand why so many people became so attached to that word: if I won’t use it, they think it means I think that sexism, racism, and homophobia don’t exist, rather than just that I think people fixated on a really bad way to talk about these problems.


Update (Jan. 1): Yesterday I gave a seminar at the Hebrew University of Jerusalem. Since I’d been spending all my time dealing with comment-171-gate, I showed up with no slides, no notes, no anything—just me and the whiteboard. But for an hour and a half, I got to forget entirely about the thousands of people on the Internet I’d never met who were now calling me an asshole because of wild, “postmodernist” misreadings of a blog comment, which twisted what I said (and meant) into its exact opposite, building up a fake-Scott-Aaronson onto whom the ax-grinders could project all of their own bogeymen. For 90 minutes I got to forget all that, and just throw myself into separations between randomized and quantum query complexity. It was the most cathartic lecture of my life. And in the near future, I’d like more such catharses. Someday I’ll say more about the inexhaustibly-fascinating topic of nerds and sex—and in particular, I’ll write the promised post about shy female nerds—but not now. This will be my last post on the subject for a while.

On balance, I don’t regret having shared my story—because it prompted an epic discussion; because I learned so much from the dozens of other nerd coming-of-age stories that it drew out, similar to mine but also different; because what I learned will change the way I talk about these issues in the future; and most of all, because so many people, men and also some women, emailed me to say how my speaking out gave them hope for their own lives. But I do regret a few rhetorical flourishes, which I should have known might be misread maliciously, though I could never have guessed how maliciously. I never meant to minimize the suffering of other people, nor to deny that many others have had things as bad or worse than I did (again, how does one even compare?). I meant only that, if we’re going to discuss how to change the culture of STEM fields, or design sexual-conduct policies to minimize suffering, then I request a seat at the table not as the “white male powerful oppressor figure,” but as someone who also suffered something atypically extreme, overcame it, and gained relevant knowledge that way. I never meant to suggest that anyone else should leave the table.

To the people who tweeted that female MIT students should now be afraid to take classes with me: please check out the beautiful blog post by Yan, a female student who did take 6.045 with me. See also this by Lisa Danz and this by Chelsea Voss.

More broadly: thank you to everyone who sent me messages of support, but especially to all the female mathematicians and scientists who did so.  I take great solace from the fact that, of all the women and men whose contributions to the world I had respected beforehand, not one (to my knowledge) reacted to this affair in a mean-spirited way.

Happy New Year, everyone. May 2015 be a year of compassion and understanding.


Update (Jan. 2): If you’ve been following this at all, then please, please, please read Scott Alexander’s tour-de-force post. To understand what it was like for me to read this, after all I’ve been through the past few days, try to imagine Galileo’s Dialogue Concerning the Two Chief World Systems, the American Declaration of Independence, John Stuart Mill’s The Subjection of Women, and Clarence Darrow’s closing arguments in the Scopes trial all rolled into one, except with you as the protagonist. Reason and emotion are traditionally imagined as opposites, but that’s never seemed entirely right to me: while, yes, part of reason is learning how to separate out emotion, I never experience such intense emotion as when, like with Alexander’s piece, I see reason finally taking a stand, reason used to face down a thousand bullies and as a fulcrum to move the world.


Update (Jan. 13): Please check out this beautiful Quora answer by Jean Yang, a PhD student in MIT CSAIL. She’s answering the question: “What do you think of Scott Aaronson’s comment #171 and the subsequent posts?”

More generally, I’ve been thrilled by the almost-unanimously positive reactions that I’ve been getting these past two weeks from women in STEM fields, even as so many people outside STEM have responded with incomprehension and cruelty.  Witnessing that pattern has—if possible—made me even more of a supporter and admirer of STEM women than I was before this thing started.


Update (Jan. 17): See this comment on Lavinia Collins’s blog for my final response to the various criticisms that have been leveled at me.

The Turing movie

Tuesday, December 16th, 2014

Last week I finally saw The Imitation Game, the movie with Benedict Cumberbatch as Alan Turing.

OK, so for those who haven’t yet seen it: should you?  Here’s my one paragraph summary: imagine that you told the story of Alan Turing—one of the greatest triumphs and tragedies of human history, needing no embellishment whatsoever—to someone who only sort-of understood it, and who filled in the gaps with weird fabrications and Hollywood clichés.  And imagine that person retold the story to a second person, who understood even less, and that that person retold it to a third, who understood least of all, but who was charged with making the movie that would bring Turing’s story before the largest audience it’s ever had.  And yet, imagine that enough of the enormity of the original story made it through this noisy channel, that the final product was still pretty good.  (Except, imagine how much better it could’ve been!)

The fabrications were especially frustrating to me, because we know it’s possible to bring Alan Turing’s story to life in a way that fully honors the true science and history.  We know that, because Hugh Whitemore’s 1986 play Breaking the Code did it.  The producers of The Imitation Game would’ve done better just to junk their script, and remake Breaking the Code into a Hollywood blockbuster.  (Note that there is a 1996 BBC adaptation of Breaking the Code, with Derek Jacobi as Turing.)

Anyway, the movie focuses mostly on Turing’s codebreaking work at Bletchley Park, but also jumps around in time to his childhood at Sherborne School, and to his arrest for “homosexual indecency” and its aftermath.  Turing’s two world-changing papers—On Computable Numbers and Computing Machinery and Intelligence—are both mentioned, though strangely, his paper about computing zeroes of the Riemann zeta function is entirely overlooked.

Here are my miscellaneous comments:

  • The boastful, trash-talking, humor-impaired badass-nerd of the movie seems a lot closer to The Big Bang Theory‘s Sheldon Cooper, or to some other Hollywood concept of “why smart people are so annoying,” than to the historical Alan Turing.  (At least in Sheldon’s case, the archetype is used for laughs, not drama or veracity.)  As portrayed in the definitive biography (Andrew Hodges’ Alan Turing: The Enigma), Turing was eccentric, sure, and fiercely individualistic (e.g., holding up his pants with pieces of string), but he didn’t get off on insulting the intelligence of the people around him.
  • In the movie, Turing is pretty much singlehandedly responsible for designing, building, and operating the Bombes (the codebreaking machines), which he does over the strenuous objections of his superiors.  This, of course, is absurd: Bletchley employed about 10,000 people at its height.  Turing may have been the single most important cog in the operation, but he was still a cog.  And by November 1942, the operation was already running smoothly enough that Turing could set sail for the US (in waters that were now much safer, thanks to Bletchley!), to consult on other cryptographic projects at Bell Labs.
  • But perhaps the movie’s zaniest conceit is that Turing was also in charge of deciding what to do with Bletchley’s intelligence (!).  In the movie, it falls to him, not the military, to decide which ship convoys will be saved, and which sacrificed to prevent spilling Bletchley’s secret.  If that had any historicity to it, it would surely be the most military and political power ever entrusted to a mathematician (update: see the comments section for potential counterexamples).
  • It’s true that Turing (along with three other codebreakers) wrote a letter directly to Winston Churchill, pleading for more funding for Bletchley Park—and that Churchill saw the letter, and ordered “Action this day! Make sure they have all they want on extreme priority.”  However, the letter was not a power play to elevate Turing over Hugh Alexander and his other colleagues: in fact, Alexander co-signed the letter.  More broadly, the fierce infighting between Turing and everyone else at Bletchley Park, central to the movie’s plot, seems to have been almost entirely invented for dramatic purposes.
  • The movie actually deserves a lot of credit for getting right that the major technical problem of Bletchley Park was how to get the Bombes to search through keys fast enough—and that speeding things up is where Turing made a central contribution.  As a result, The Imitation Game might be the first Hollywood movie ever made whose plot revolves around computational efficiency.  (Counterexamples, anyone?)  Unfortunately, the movie presents Turing’s great insight as being that one can speed up the search by guessing common phrases, like “HEIL HITLER,” that are likely to be in the plaintext.  That was, I believe, obvious to everyone from the beginning.  (For a toy illustration of how a guessed phrase can prune a key search, see the sketch after this list.)
  • Turing never built a computer in his own home, and he never named a computer “Christopher,” after his childhood crush Christopher Morcom.  (On the other hand, Christopher Morcom existed, and his early death from tuberculosis really did devastate Turing, sending him into morbid-yet-prescient ruminations about whether a mind could exist separately from a brain.)
  • I found it ironic that The Imitation Game, produced in 2014, is far more squeamish about on-screen homosexuality than Breaking the Code, produced in 1986.  Turing talks about being gay (which is an improvement over 2001’s Enigma, which made Turing straight!), but is never shown embracing another man.  However, the more important problem is that the movie botches the story of the burglary of Turing’s house (i.e., the event that led to Turing’s arrest and conviction for homosexual indecency), omitting the role of Turing’s own naiveté in revealing his homosexuality to the police, and substituting some cloak-and-dagger spy stuff.  Once again, Breaking the Code handled this perfectly.
  • In one scene, Euler is pronounced “Yooler.”
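
To make the crib idea in the computational-efficiency bullet above concrete, here’s a toy sketch in Python (my own illustration, nothing like the movie’s machine or Bletchley Park’s actual methods): it brute-forces a short Vigenère key, but rejects almost every candidate after checking just the few letters covered by a guessed phrase, instead of decrypting and scoring the whole message. The key, message, and crib below are all made up; the real Bombes exploited the Enigma’s rotor structure in far cleverer ways.

```python
# Toy crib-based key search (illustration only -- not the actual Bombe procedure).
import itertools
import string

ALPHABET = string.ascii_uppercase

def vigenere(text, key, sign):
    # sign=+1 encrypts, sign=-1 decrypts
    return "".join(
        ALPHABET[(ALPHABET.index(ch) + sign * ALPHABET.index(key[i % len(key)])) % 26]
        for i, ch in enumerate(text)
    )

def matches_crib(ciphertext, key, crib, offset=0):
    # Check only the positions the crib covers -- a few letters per candidate key.
    for j, want in enumerate(crib):
        got = ALPHABET[(ALPHABET.index(ciphertext[offset + j])
                        - ALPHABET.index(key[(offset + j) % len(key)])) % 26]
        if got != want:
            return False
    return True

def crib_search(ciphertext, key_length, crib, offset=0):
    # Brute force over all keys, but discard most candidates after a few letters.
    for letters in itertools.product(ALPHABET, repeat=key_length):
        key = "".join(letters)
        if matches_crib(ciphertext, key, crib, offset):
            yield key, vigenere(ciphertext, key, -1)

if __name__ == "__main__":
    secret_key = "DOG"                    # made-up key
    message = "WEATHERREPORTALLQUIET"     # made-up plaintext starting with a guessable phrase
    ciphertext = vigenere(message, secret_key, +1)
    for key, plaintext in crib_search(ciphertext, key_length=3, crib="WEATHERREPORT"):
        print(key, plaintext)             # prints: DOG WEATHERREPORTALLQUIET
```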

For more, see an excellent piece in Slate, How Accurate Is The Imitation Game?.  And for other science bloggers’ reactions, see this review by Christos Papadimitriou (which I thought was extremely kind, though it focuses more on Turing himself than on the movie), this reaction by Peter Woit, which largely echoes mine, and this by Clifford Johnson.

Walter Lewin

Wednesday, December 10th, 2014

Yesterday I heard the sad news that Prof. Walter Lewin, age 78—perhaps the most celebrated physics teacher in MIT’s history—has been stripped of his emeritus status and barred from campus, and all of his physics lectures removed from OpenCourseWare, because an internal investigation found that he had been sexually harassing students online.  I don’t know anything about what happened beyond the terse public announcements, but those who do know tell me that the charges were extremely serious, and that “this wasn’t a borderline case.”

I’m someone who feels that sexual harassment must never be tolerated, neither here nor anywhere else.  But I also feel that, if a public figure is going to be publicly brought down like this (yes, even by a private university), then the detailed findings of the investigation should likewise be made public, regardless of how embarrassing they are.  I know others differ, but I think the need of the world to see that justice was done overrides MIT’s internal administrative needs, and even Prof. Lewin’s privacy (the names of any victims could, of course, be kept secret).

More importantly, I wish to register that I disagree in the strongest possible terms with MIT’s decision to remove Prof. Lewin’s lectures from OpenCourseWare—thereby forcing the tens of thousands of students around the world who were watching these legendary lectures to hunt for ripped copies on BitTorrent.  (Imagine that: physics lectures as prized contraband!)  By all means, punish Prof. Lewin as harshly as he deserves, but—as students have been pleading on Reddit, in the MIT Tech comments section, and elsewhere—don’t also punish the countless students of both sexes who continue to benefit from his work.  (For godsakes, I’d regard taking down the lectures as a tough call if Prof. Lewin had gone on a murder spree.)  Doing this sends the wrong message about MIT’s values, and is a gift to those who like to compare modern American college campuses to the Soviet Union.

Update: For those who are interested, while the comment section starts out with a discussion of whether Walter Lewin’s physics lectures should’ve been removed from OCW, it’s now broadened to include essentially all aspects of the human condition.

What does the NSA think of academic cryptographers? Recently-declassified document provides clues

Sunday, November 16th, 2014

Brighten Godfrey was one of my officemates when we were grad students at Berkeley.  He’s now a highly successful computer networking professor at the University of Illinois Urbana-Champaign, where he studies the wonderful question of how we could get the latency of the Internet down to the physical limit imposed by the finiteness of the speed of light.  (Right now, we’re away from that limit by a factor of about 50.)
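
To make “the physical limit imposed by the finiteness of the speed of light” concrete, here’s a back-of-the-envelope sketch in Python. The 5,600 km distance is my own illustrative assumption (roughly New York to London), and the last line simply applies the factor of 50 quoted above to that one path; none of this is Godfrey’s actual methodology or data.

```python
# Back-of-the-envelope: how fast could a round trip possibly be, if the
# signal traveled the great-circle path at the vacuum speed of light?
C_KM_PER_S = 299_792.458      # speed of light in vacuum, km/s

def lightspeed_rtt_ms(distance_km):
    """Round-trip time, in milliseconds, at the speed of light in vacuum."""
    return 2 * distance_km / C_KM_PER_S * 1000

ideal = lightspeed_rtt_ms(5_600)                           # ~37 ms for an assumed NY-to-London path
print(f"c-limited round trip: {ideal:.1f} ms")
print(f"50x above that limit: {ideal * 50 / 1000:.2f} s")  # roughly 1.9 s
```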

Last week, Brighten brought to my attention a remarkable document: a 1994 issue of CryptoLog, an NSA internal newsletter, which was recently declassified with a few redactions.  The most interesting thing in the newsletter is a trip report (pages 12-19 in the newsletter, 15-22 in the PDF file) by an unnamed NSA cryptographer, who attended the 1992 EuroCrypt conference, and who details his opinions on just about every talk.  If you’re interested in crypto, you really need to read this thing all the way through, but here’s a small sampling of the zingers:

  • Three of the last four sessions were of no value whatever, and indeed there was almost nothing at Eurocrypt to interest us (this is good news!). The scholarship was actually extremely good; it’s just that the directions which external cryptologic researchers have taken are remarkably far from our own lines of interest.
  • There were no proposals of cryptosystems, no novel cryptanalysis of old designs, even very little on hardware design. I really don’t see how things could have been any better for our purposes. We can hope that the absentee cryptologists stayed away because they had no new ideas, or even that they’ve taken an interest in other areas of research.
  • Alfredo DeSantis … spoke on “Graph decompositions and secret-sharing schemes,” a silly topic which brings joy to combinatorists and yawns to everyone else.
  • Perhaps it is beneficial to be attacked, for you can easily augment your publication list by offering a modification.
  • This result has no cryptanalytic application, but it serves to answer a question which someone with nothing else to think about might have asked.
  • I think I have hammered home my point often enough that I shall regard it as proved (by emphatic enunciation): the tendency at IACR meetings is for academic scientists (mathematicians, computer scientists, engineers, and philosophers masquerading as theoretical computer scientists) to present commendable research papers (in their own areas) which might affect cryptology at some future time or (more likely) in some other world. Naturally this is not anathema to us.
  • The next four sessions were given over to philosophical matters. Complexity theorists are quite happy to define concepts and then to discuss them even though they have no examples of them.
  • Don Beaver (Penn State), in another era, would have been a spellbinding charismatic preacher; young, dashing (he still wears a pony-tail), self-confident and glib, he has captured from Silvio Micali the leadership of the philosophic wing of the U.S. East Coast cryptanalytic community.
  • Those of you who know my prejudice against the “zero-knowledge” wing of the philosophical camp will be surprised to hear that I enjoyed the three talks of the session better than any of that ilk that I had previously endured. The reason is simple: I took along some interesting reading material and ignored the speakers. That technique served to advantage again for three more snoozers, Thursday’s “digital signature and electronic cash” session, but the final session, also on complexity theory, provided some sensible listening.
  • But it is refreshing to find a complexity theory talk which actually addresses an important problem!
  • The other two talks again avoided anything of substance.  [The authors of one paper] thought it worthwhile, in dealing [with] the general discrete logarithm problem, to prove that the problem is contained in the complexity classes NP and co-AM, but is unlikely to be in co-NP.
  • And Ueli Maurer, again dazzling us with his brilliance, felt compelled, in “Factoring with an Oracle” to arm himself with an Oracle (essentially an Omniscient Being that complexity theorists like to turn to when they can’t solve a problem) while factoring. He’s calculating the time it would take him (and his Friend) to factor, and would like also to demonstrate his independence by consulting his Partner as seldom as possible. The next time you find yourself similarly equipped, you will perhaps want to refer to his paper.
  • The conference again offered an interesting view into the thought processes of the world’s leading “cryptologists.” It is indeed remarkable how far the Agency has strayed from the True Path.

Of course, it would be wise not to read too much into this: it’s not some official NSA policy statement, but the griping of a single, opinionated individual somewhere within the NSA, who was probably bored and trying to amuse his colleagues.  All the same, it’s a fascinating document, not only for its zingers about people who are still very much active on the cryptographic scene, but also for its candid insights into what the NSA cares about and why, and for its look into the subculture within cryptography that would lead, years later, to Neal Koblitz’s widely-discussed anti-provable-security manifestos.

Reading this document drove home for me that the “provable security wars” are a very simple matter of the collision of two communities with different intellectual goals, not of one being right and the other being wrong.  Here’s a fun exercise: try reading this trip report while remembering that, in the 1980s—i.e., the decade immediately preceding the maligned EuroCrypt conference—the “philosophic wing” of cryptography that the writer lampoons actually succeeded in introducing revolutionary concepts (interactive proofs, zero-knowledge, cryptographic pseudorandomness, etc.) that transformed the field, concepts that have now been recognized with no fewer than three Turing Awards (to Yao, Goldwasser, and Micali).  On the other hand, it’s undoubtedly true that this progress was of no immediate interest to the NSA.  On the third hand, the “philosophers” might reply that helping the NSA wasn’t their goal.  The best interests of the NSA don’t necessarily coincide with the best interests of scientific advancement (not to mention the best interests of humanity—but that’s a separate debate).

Interstellar’s dangling wormholes

Monday, November 10th, 2014

Update (Nov. 15): A third of my confusions addressed by reading Kip Thorne’s book! Details at the bottom of this post.


On Saturday Dana and I saw Interstellar, the sci-fi blockbuster co-produced by the famous theoretical physicist Kip Thorne (who told me about his work on this movie when I met him eight years ago).  We had the rare privilege of seeing the movie on the same day that we got to hang out with a real astronaut, Dan Barry, who flew three shuttle missions and did four spacewalks in the 1990s.  (As the end result of a project that Dan’s roboticist daughter, Jenny Barry, did for my graduate course on quantum complexity theory, I’m now the coauthor with both Barrys on a paper in Physical Review A, about uncomputability in quantum partially-observable Markov decision processes.)

Before talking about the movie, let me say a little about the astronaut.  Besides being an inspirational example of someone who’s achieved more dreams in life than most of us—seeing the curvature of the earth while floating in orbit around it, appearing on Survivor, and publishing a Phys. Rev. A paper—Dan is also a passionate advocate of humanity’s colonizing other worlds.  When I asked him whether there was any future for humans in space, he answered firmly that the only future for humans was in space, and then proceeded to tell me about the technical viability of getting humans to Mars with limited radiation exposure, the abundant water there, the romantic appeal that would inspire people to sign up for the one-way trip, and the extinction risk for any species confined to a single planet.  Hearing all this from someone who’d actually been to space gave Interstellar, with its theme of humans needing to leave Earth to survive (and its subsidiary theme of the death of NASA’s manned space program meaning the death of humanity), a special vividness for me.  Granted, I remain skeptical about several points: the feasibility of a human colony on Mars in the foreseeable future (a self-sufficient human colony in Antarctica, or under the ocean, strikes me as plenty hard enough for the next few centuries); whether a space colony, even if feasible, cracks the list of the top twenty things we ought to be doing to mitigate the risk of human extinction; and whether there’s anything more to be learned, at this point in history, by sending humans to space that couldn’t be learned a hundred times more cheaply by sending robots.  On the other hand, if there is a case for continuing to send humans to space, then I’d say it’s certainly the case that Dan Barry makes.

OK, but enough about the real-life space traveler: what did I think about the movie?  Interstellar is a work of staggering ambition, grappling with some of the grandest themes of which sci-fi is capable: the deterioration of the earth’s climate; the future of life in the universe; the emotional consequences of extreme relativistic time dilation; whether “our” survival would be ensured by hatching human embryos in a faraway world, while sacrificing almost all the humans currently alive; to what extent humans can place the good of the species above family and self; the malleability of space and time; the paradoxes of time travel.  It’s also an imperfect movie, one with many “dangling wormholes” and unbalanced parentheses that are still generating compile-time errors in my brain.  And it’s full of stilted dialogue that made me giggle—particularly when the characters discussed jumping into a black hole to retrieve its “quantum data.”  Also, despite Kip Thorne’s involvement, I didn’t find the movie’s science spectacularly plausible or coherent (more about that below).  On the other hand, if you just wanted a movie that scrupulously obeyed the laws of physics, rather than intelligently probing their implications and limits, you could watch any romantic comedy.  So sure, Interstellar might make you cringe, but if you like science fiction at all, then it will also make you ponder, stare awestruck, and argue with friends for days afterward—and enough of the latter to make it more than worth your while.  Just one tip: if you’re prone to headaches, do not sit near the front of the theater, especially if you’re seeing it in IMAX.

For other science bloggers’ takes, see John Preskill (who was at a meeting with Steven Spielberg to brainstorm the movie in 2006), Sean Carroll, Clifford Johnson, and Peter Woit.

In the rest of this post, I’m going to list the questions about Interstellar that I still don’t understand the answers to (yes, the ones still not answered by the Interstellar FAQ).  No doubt some of these are answered by Thorne’s book The Science of Interstellar, which I’ve ordered (it hasn’t arrived yet), but since my confusions are more about plot than science, I’m guessing that others are not.

SPOILER ALERT: My questions give away basically the entire plot—so if you’re planning to see the movie, please don’t read any further.  After you’ve seen it, though, come back and see if you can help with any of my questions.


1. What’s causing the blight, and the poisoning of the earth’s atmosphere?  The movie is never clear about this.  Is it a freak occurrence, or is it human-caused climate change?  If the latter, then wouldn’t it be worth some effort to try to reverse the damage and salvage the earth, rather than escaping through a wormhole to another galaxy?

2. What’s with the drone?  Who sent it?  Why are Cooper and Murph able to control it with their laptop?  Most important of all, what does it have to do with the rest of the movie?

3. If NASA wanted Cooper that badly—if he was the best pilot they’d ever had and NASA knew it—then why couldn’t they just call him up?  Why did they have to wait for beings from the fifth dimension to send a coded message to his daughter revealing their coordinates?  Once he did show up, did they just kind of decide opportunistically that it would be a good idea to recruit him?

4. What was with Cooper’s crash in his previous NASA career?  If he was their best pilot, how and why did the crash happen?  If this was such a defining, traumatic incident in his life, why is it never brought up for the rest of the movie?

5. How is NASA funded in this dystopian future?  If official ideology holds that the Apollo missions were faked, and that growing crops is the only thing that matters, then why have the craven politicians been secretly funneling what must be trillions of dollars to a shadow-NASA, over a period of fifty years?

6. Why couldn’t NASA have reconnoitered the planets using robots—especially since this is a future where very impressive robots exist?  Yes, yes, I know, Matt Damon explains in the movie that humans remain more versatile than robots, because of their “survival instinct.”  But the crew arrives at the planets missing extremely basic information about them, like whether they’re inhospitable to human life because of freezing temperatures or mile-high tidal waves.  This is information that robotic probes, even of the sort we have today, could have easily provided.

7. Why are the people who scouted out the 12 planets so limited in the data they can send back?  If they can send anything, then why not data that would make Cooper’s mission completely redundant (excepting, of course, the case of the lying Dr. Mann)?  Does the wormhole limit their transmissions to 1 bit per decade or something?

8. Rather than wasting precious decades waiting for Cooper’s mission to return, while (presumably) billions of people die of starvation on a fading earth, wouldn’t it make more sense for NASA to start colonizing the planets now?  They could simply start trial colonies on all the planets, even if they think most of the colonies will fail.  Yes, this plan involves sacrificing individuals for the greater good of humanity, but NASA is already doing that anyway, with its slower, riskier, stupider reconnaissance plan.  The point becomes even stronger when we remember that, in Professor Brand’s mind, the only feasible plan is “Plan B” (the one involving the frozen human embryos).  Frozen embryos are (relatively) cheap: why not just spray them all over the place?  And why wait for “Plan A” to fail before starting that?

9. The movie involves a planet, Miller, that’s so close to the black hole Gargantua that every hour spent there corresponds to seven years on earth.  There was an amusing exchange on Slate, where Phil Plait made the commonsense point that a planet that deep in a black hole’s gravity well would presumably get ripped apart by tidal forces.  Plait later had to issue an apology, since, in conceiving this movie, Kip Thorne had made sure that Gargantua was a rapidly rotating black hole—and it turns out that the physics of rotating black holes is sufficiently different from that of non-rotating ones to allow such a planet in principle.  Alas, this clever explanation still leaves me unsatisfied.  Physicists, please help: even if such a planet existed, wouldn’t safely landing a spacecraft on it, and getting it out again, require a staggering amount of energy—well beyond what the humans shown in the movie can produce?  (If they could produce that much acceleration and deceleration, then why couldn’t they have traveled from Earth to Saturn in days rather than years?)  If one could land on Miller and then get off of it using the relatively conventional spacecraft shown in the movie, then the amusing thought suggests itself that one could get factor-of-60,000 computational speedups, “free of charge,” by simply leaving one’s computer in space while one spent some time on the planet (see the arithmetic sketch just after this list).  (And indeed, something like that happens in the movie: after Cooper and Anne Hathaway return from Miller, Romilly—the character who stayed behind—has had 23 years to think about physics.)

10. Why does Cooper decide to go into the black hole?  Surely he could jettison enough weight to escape the black hole’s gravity by sending his capsule into the hole, while he himself shared Anne Hathaway’s capsule?

11. Speaking of which, does Cooper go into the black hole?  I.e., is the “tesseract” something he encounters before or after he crosses the event horizon?  (Or maybe it should be thought of as at the event horizon—like a friendlier version of the AMPS firewall?)

12. Why is Cooper able to send messages back in time—but only by jostling books around, moving the hands of a watch, and creating patterns of dust in one particular room of one particular house?  (Does this have something to do with love and gravity being the only two forces in the universe that transcend space and time?)

13. Why does Cooper desperately send the message “STAY” to his former self?  By this point in the movie, isn’t it clear that staying on Earth means the death of all humans, including Murph?  If Cooper thought that a message could get through at all, then why not a message like: “go, and go directly to Edmunds’ planet, since that’s the best one”?  Also, given that Cooper now exists outside of time, why does he feel such desperate urgency?  Doesn’t he get, like, infinitely many chances?

14. Why is Cooper only able to send “quantum data” that saves the world to the older Murph—the one who lives when (presumably) billions of people are already dying of starvation?  Why can’t he send the “quantum data” back to the 10-year-old Murph, for example?  Even if she can’t yet understand it, surely she could hand it over to Professor Brand.  And even if this plan would be unlikely to succeed: again, Cooper now exists outside of time.  So can’t he just keep going back to the 10-year-old Murph, rattling those books over and over until the message gets through?

15. What exactly is the “quantum data” needed for, anyway?  I gather it has something to do with building a propulsion system that can get the entire human population out of the earth’s gravity well at a reasonable cost?  (Incidentally, what about all the animals?  If the writers of the Old Testament noticed that issue, surely the writers of Interstellar could.)

16. How does Cooper ever make it out of the black hole?  (Maybe it was explained and I missed it: once he entered the black hole, things got extremely confusing.)  Do the fifth-dimensional beings create a new copy of Cooper outside the black hole?  Do they postselect on a branch of the wavefunction where he never entered the black hole in the first place?  Does Murph use the “quantum data” to get him out?

17. At his tearful reunion with the elderly Murph, why is Cooper totally uninterested in meeting his grandchildren and great-grandchildren, who are in the same room?  And why are they uninterested in meeting him?  I mean, seeing Murph again has been Cooper’s overriding motivation during his journey across the universe, and has repeatedly been weighed against the survival of the entire human race, including Murph herself.  But seeing Murph’s kids—his grandkids—isn’t even worth five minutes?

18. Speaking of which, when did Murph ever find time to get married and have kids?  Since she’s such a major character, why don’t we learn anything about this?

19. Also, why is Murph an old woman by the time Cooper gets back?  Yes, Cooper lost a few decades because of the time dilation on Miller’s planet.  I guess he lost the additional decades while entering and leaving Gargantua?  If the five-dimensional beings were able to use their time-travel / causality-warping powers to get Cooper out of the black hole, couldn’t they have re-synced his clock with Murph’s while they were at it?

20. Why does Cooper need to steal a spaceship to get to Anne Hathaway’s planet?  Isn’t Murph, like, the one in charge?  Can’t she order that a spaceship be provided for Cooper?

21. Astute readers will note that I haven’t yet said anything about the movie’s central paradox, the one that dwarfs all the others.  Namely, if humans were going to go extinct without a “wormhole assist” from the humans of the far future, then how were there any humans in the far future to provide the wormhole assist?  And conversely, if the humans of the far future find themselves already existing, then why do they go to the trouble to put the wormhole in their past (which now seems superfluous, except maybe for tidying up the story of their own origins)?  The reason I didn’t ask about this is that I realize it’s supposed to be paradoxical; we’re supposed to feel vertigo thinking about it.  (And also, it’s not entirely unrelated to how PSPACE-complete problems get solved with polynomial resources, in my and John Watrous’s paper on computation with closed timelike curves.)  My problem is a different one: if the fifth-dimensional, far-future humans have the power to mold their own past to make sure everything turned out OK, then what they actually do seems pathetic compared to what they could do.  For example, why don’t they send a coded message to the 21st-century humans (similar to the coded messages that Cooper sends to Murph), telling them how to avoid the blight that destroys their crops?  Or just telling them that Edmunds’ planet is the right one to colonize?  Like the God of theodicy arguments, do the future humans want to use their superpowers only to give us a little boost here and there, while still leaving us a character-forming struggle?  Even if this reticence means that billions of innocent people—ones who had nothing to do with the character-forming struggle—will die horrible deaths?  If so, then I don’t understand these supposedly transcendently-evolved humans any better than I understand the theodical God.
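
As promised in question 9, here’s the arithmetic behind the “factor-of-60,000” figure, as a minimal Python sketch.  The only input is the movie’s own premise that one hour on Miller’s planet corresponds to seven Earth years.

    # Sanity check of the time-dilation factor on Miller's planet.
    # Premise taken from the movie: 1 hour on the planet = 7 years on Earth.

    HOURS_PER_YEAR = 365.25 * 24          # about 8,766 hours in a year
    factor = 7 * HOURS_PER_YEAR           # Earth hours elapsed per Miller hour

    print(f"time-dilation factor: {factor:,.0f}")   # about 61,362

So a computer left in orbit would experience roughly 60,000 hours for every hour its owner spent on the surface, which is the “free of charge” speedup mentioned in question 9.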


Anyway, rather than ending on that note of cosmic pessimism, I guess I could rejoice that we’re living through what must be the single biggest month in the history of nerd cinema—what with a sci-fi film co-produced by a great theoretical physicist, a Stephen Hawking biopic, and the Alan Turing movie coming out in a few weeks.  I haven’t yet seen the latter two.  But it looks like the time might be ripe to pitch my own decades-old film ideas, like “Radical: The Story of Évariste Galois.”


Update (Nov. 15): I just finished reading Kip Thorne’s interesting book The Science of Interstellar.  I’d say that it addresses (doesn’t always clear up, but at least addresses) 7 of my 21 confusions: 1, 4, 9, 10, 11, 15, and 19.  Briefly:

1. Thorne correctly notes that the movie is vague about what’s causing the blight and the change to the earth’s atmosphere, but he discusses a bunch of possibilities, which are more in the “freak disaster” than the “manmade” category.

4. Cooper’s crash was supposed to have been caused by a gravitational anomaly, as the bulk beings of the far future were figuring out how to communicate with 21st-century humans.  It was another foreshadowing of those bulk beings.

9. Thorne notices the problem of the astronomical amount of energy needed to safely land on Miller’s planet and then get off of it—given that this planet is deep inside the gravity well of the black hole Gargantua, and orbiting Gargantua at a large fraction of the speed of light.  Thorne offers a solution that can only be called creative: namely, while nothing about this was said in the movie (since Christopher Nolan thought it would confuse people), it turns out that the crew accelerated to relativistic speed and then decelerated using a gravitational slingshot around a second, intermediate-mass black hole, which just happened to be in the vicinity of Gargantua at precisely the right times for this.  Thorne again appeals to slingshots around unmentioned but strategically-placed intermediate-mass black holes several more times in the book, to explain other implausible accelerations and decelerations that I hadn’t even noticed.

10. Thorne acknowledges that Cooper didn’t really need to jump into Gargantua in order to jettison the mass of his body (which is trivial compared to the mass of the spacecraft).  Cooper’s real reason for jumping, he says, was the desperate hope that he could somehow find the quantum data there needed to save the humans on Earth, and then somehow get it out of the black hole and back to the humans.  (This being a movie, it of course turns out that Cooper was right.)

11. Yes, Cooper encounters the tesseract while inside the black hole.  Indeed, he hits it while flying into a singularity that’s behind the event horizon, but that isn’t the black hole’s “main” singularity—it’s a different, milder singularity.

15. While this wasn’t made clear in the movie, the purpose of the quantum data was indeed to learn how to manipulate the gravitational anomalies in order to decrease Newton’s constant G in the vicinity of the earth—destroying the earth but also allowing all the humans to escape its gravity with the rocket fuel that’s available.  (Again, nothing said about the poor animals.)

19. Yes, Cooper lost the additional decades while entering Gargantua.  (Furthermore, while Thorne doesn’t discuss this, I guess he must have lost them only when he was still with Anne Hathaway, not after he separates from her.  For otherwise, Anne Hathaway would also be an old woman by the time Cooper reaches her on Edmunds’ planet, contrary to what’s shown in the movie.)

Microsoft SVC

Tuesday, September 23rd, 2014

By now, the news that Microsoft abruptly closed its Silicon Valley research lab—leaving dozens of stellar computer scientists jobless—has already been all over the theoretical computer science blogosphere: see, e.g., Lance, Luca, Omer Reingold, Michael Mitzenmacher.  I never made a real visit to Microsoft SVC (only went there once IIRC, for a workshop, while a grad student at Berkeley); now of course I won’t have the chance.

The theoretical computer science community, in the Bay Area and elsewhere, is now mobilizing to offer visiting positions to the “refugees” from Microsoft SVC, until they’re able to find more permanent employment.  I was happy to learn, this week, that MIT’s theory group will likely play a small part in that effort.

Like many others, I confess to bafflement about Microsoft’s reasons for doing this.  Won’t the severe damage to MSR’s painstakingly-built reputation, to its hiring and retention of the best people, outweigh the comparatively small amount of money Microsoft will save?  Did they at least ask Mr. Gates, to see whether he’d chip in the proverbial change under his couch cushions to keep the lab open?  Most of all, why the suddenness?  Why not wind the lab down over a year, giving the scientists time to apply for new jobs in the academic hiring cycle?  It’s not like Microsoft is in a financial crisis, lacking the cash to keep the lights on.

Yet one could also view this announcement as a lesson in why academia exists and is necessary.  Yes, one should applaud those companies that choose to invest a portion of their revenue in basic research—like IBM, the old AT&T, or Microsoft itself (which continues to operate great research outfits in Redmond, Santa Barbara, both Cambridges, Beijing, Bangalore, Munich, Cairo, and Herzliya).  And yes, one should acknowledge the countless times when academia falls short of its ideals, when it too places the short term above the long.  All the same, it seems essential that our civilization maintain institutions for which the pursuit and dissemination of knowledge are not just accoutrements for when financial times are good and the Board of Directors is sympathetic, but are the institution’s entire reasons for being: those activities that the institution has explicitly committed to support for as long as it exists.

Steven Pinker’s inflammatory proposal: universities should prioritize academics

Thursday, September 11th, 2014

If you haven’t yet, I urge you to read Steven Pinker’s brilliant piece in The New Republic about what’s broken with America’s “elite” colleges and how to fix it.  The piece starts out as an evisceration of an earlier New Republic article on the same subject by William Deresiewicz.  Pinker agrees with Deresiewicz that something is wrong, but finds Deresiewicz’s diagnosis of what’s wrong to be lacking.  The rest of Pinker’s article sets out his own vision, which involves America’s top universities taking the radical step of focusing on academics, and returning extracurricular activities like sports to their rightful place as extras: ways for students to unwind, rather than a university’s primary reason for existing, or a central criterion for undergraduate admissions.  Most controversially, this would mean that the admissions process at US universities would become more like that in virtually every other advanced country: a relatively-straightforward matter of academic performance, rather than an exercise in peering into the applicants’ souls to find out whether they have a special je ne sais quoi, with the students (and their parents) desperately gaming the intentionally-opaque system by paying consultants tens of thousands of dollars to develop souls for them.

(Incidentally, readers who haven’t experienced it firsthand might not be able to understand, or believe, just how strange the undergraduate admissions process in the US has become, although Pinker’s anecdotes give some idea.  I imagine anthropologists centuries from now studying American elite university admissions, and the parenting practices that have grown up around them, alongside cannibalism, kamikaze piloting, and other historical extremes of the human condition.)

Pinker points out that a way to assess students’ ability to do college coursework—much more quickly and accurately than by relying on the soul-detecting skills of admissions officers—has existed for a century.  It’s called the standardized test.  But unlike in the rest of the world (even in ultraliberal Western Europe), standardized tests are politically toxic in the US, seen as instruments of racism, classism, and oppression.  Pinker reminds us of the immense irony here: standardized tests were invented as a radical democratizing tool, as a way to give kids from poor and immigrant families the chance to attend colleges that had previously only been open to the children of the elite.  They succeeded at that goal—too well for some people’s comfort.

We now know that the Ivies’ current emphasis on sports, “character,” “well-roundedness,” and geographic diversity in undergraduate admissions was consciously designed (read that again) in the 1920s, by the presidents of Harvard, Princeton, and Yale, as a tactic to limit the enrollment of Jews.  Nowadays, of course, the Ivies’ “holistic” admissions process no longer fulfills that original purpose, in part because American Jews learned to play the “well-roundedness” game as well as anyone, shuttling their teenage kids between sports, band practice, and faux charity work, while hiring professionals to ghostwrite application essays that speak searingly from the heart.  Today, a major effect of “holistic” admissions is instead to limit the enrollment of Asian-Americans (especially recent immigrants), who tend disproportionately to have superb SAT scores, but to be deficient in life’s more meaningful dimensions, such as lacrosse, student government, and marching band.  More generally—again, pause to wallow in the irony—our “progressive” admissions process works strongly in favor of the upper-middle-class families who know how to navigate it, and against the poor and working-class families who don’t.

Defenders of the status quo have missed this reality on the ground, it seems to me, because they’re obsessed with the notion that standardized tests are “reductive”: that is, that they reduce a human being to a number.  Aren’t there geniuses who bomb standardized tests, they ask, as well as unimaginative grinds who ace them?  And if you make test scores a major factor in admissions, then won’t students and teachers train for the tests, and won’t that pervert open-ended intellectual curiosity?  The answer to both questions, I think, is clearly “yes.”  But the status-quo-defenders never seem to take the next step, of examining the alternatives to standardized testing, to see whether they’re even worse.

I’d say the truth is this: spots at the top universities are so coveted, and so much scarcer than the demand for them, that no matter what you use as your admissions criterion, that thing will instantly get fetishized and turned into a commodity by students, parents, and companies eager to profit from their anxiety.  If it’s grades, you’ll get a grades fetish; if sports, you’ll get a sports fetish; if community involvement, you’ll get soup kitchens sprouting up for the sole purpose of giving ambitious 17-year-olds something to write about in their application essays.  If Harvard and Princeton announced that from now on, they only wanted the most laid-back, unambitious kids, the ones who spent their summers lazily skipping stones in a lake, rather than organizing their whole lives around getting into Harvard and Princeton, tens of thousands of parents in the New York metropolitan area would immediately enroll their kids in relaxation and stone-skipping prep courses.  So, given that reality, why not at least make the fetishized criterion one that’s uniform, explicit, predictively valid, relatively hard to game, and relevant to universities’ core intellectual mission?

(Here, I’m ignoring criticisms specific to the SAT: for example, that it fails to differentiate students at the extreme right end of the bell curve, thereby forcing the top schools to use other criteria.  Even if those criticisms are true, they could easily be fixed by switching to other tests.)

I admit that my views on this matter might be colored by my strange (though as I’ve learned, not at all unique) experience, of getting rejected from almost every “top” college in the United States, and then, ten years later, getting recruited for faculty jobs by the very same institutions that had rejected me as a teenager.  Once you understand how undergraduate admissions work, the rejections were unsurprising: I was a 15-year-old with perfect SATs and a published research paper, but not only was I young and immature, with spotty grades and a weird academic trajectory, I had no sports, no music, no diverse leadership experiences.  I was a narrow, linear, A-to-B thinker who lacked depth and emotional intelligence: the exact opposite of what Harvard and Princeton were looking for in every way.  The real miracle is that despite these massive strikes against me, two schools—Cornell and Carnegie Mellon—were nice enough to give me a chance.  (I ended up going to Cornell, where I got a great education.)

Some people would say: so then what’s the big deal?  If Harvard or MIT reject some students that maybe they should have admitted, those students will simply go elsewhere, where—if they’re really that good—they’ll do every bit as well as they would’ve done at the so-called “top” schools.  But to me, that’s uncomfortably close to saying: there are millions of people who go on to succeed in life despite childhoods of neglect and poverty.  Indeed, some of those people succeed partly because of their rough childhoods, which served as the crucibles of their character and resolve.  Ergo, let’s neglect our own children, so that they too can have the privilege of learning from the school of hard knocks just like we did.  The fact that many people turn out fine despite unfairness and adversity doesn’t mean that we should inflict unfairness if we can avoid it.

Let me end with an important clarification.  Am I saying that, if I had dictatorial control over a university (ha!), I would base undergraduate admissions solely on standardized test scores?  Actually, no.  Here’s what I would do: I would admit the majority of students mostly based on test scores.  A minority, I would admit because of something special about them that wasn’t captured by test scores, whether that something was musical or artistic talent, volunteer work in Africa, a bestselling smartphone app they’d written, a childhood as an orphaned war refugee, or membership in an underrepresented minority.  Crucially, though, the special something would need to be special.  What I wouldn’t do is what’s done today: namely, to turn “specialness” and “well-roundedness” into commodities that the great mass of applicants have to manufacture before they can even be considered.

Other than that, I would barely look at high-school grades, regarding them as too variable from one school to another.  And, while conceding it might be impossible, I would try hard to keep my university in good enough financial shape that it didn’t need any legacy or development admits at all.


Update (Sep. 14): For those who feel I’m exaggerating the situation, please read the story of commenter Jon, about a homeschooled 15-year-old doing graduate-level work in math who, three years ago, was refused undergraduate admission to both Berkeley and Caltech, with the math faculty powerless to influence the admissions officers. See also my response.