Archive for the ‘Obviously I’m Not Defending Aaronson’ Category

My new motto

Sunday, August 30th, 2020

Update (Sep 1): Thanks for the comments, everyone! As you can see, I further revised this blog’s header based on the feedback and on further reflection.

The Right could only kill me and everyone I know.
The Left is scarier; it could convince me that it was my fault!

(In case you missed it on the blog’s revised header, right below “Quantum computers aren’t just nondeterministic Turing machines” and “Hold the November US election by mail.” I added an exclamation point at the end to suggest a slightly comic delivery.)

Update: A friend expressed concern that, because my new motto appears to “blame both sides,” it might generate confusion about my sympathies or what I want to happen in November. So to eliminate all ambiguity: I hereby announce that I will match all reader donations made in the next 72 hours to either the Biden-Harris campaign or the Lincoln Project, up to a limit of $2,000. Honor system; just tell me in the comments what you donated.

Justice has no faction

Thursday, June 18th, 2020

(1) To start with some rare good news: I was delighted that the US Supreme Court, in a 5-4 holding led by Chief Justice Roberts (!), struck down the Trump administration’s plan to end DACA (Deferred Action for Childhood Arrivals). Dismantling DACA would’ve been a first step toward deporting 700,000 overwhelmingly blameless and peaceful people from, in many cases, the only homes they remember, for no particular reason other than to slake the resentment of Trump’s base. Better still was the majority’s argument: that when, by law, a federal agency has to supply a reason for a policy change (in this case, ending DACA), its reason can’t just be blatantly invented post facto.

To connect to my last post: I hope this gives some evidence that, if Trump refuses to accept an electoral loss in November, and if it ends up in the Supreme Court as Bush v. Gore did, then Roberts might once again break from the Court’s other four rightists, in favor of the continued survival of the Republic.

(2) Along with Steven Pinker, Scott Alexander, Sam Altman, Jonathan Haidt, Robert Solovay, and others who might be known to this blog’s readership, I decided after reflection to sign a petition in support of Steve Hsu, a theoretical physicist turned genomics researcher, and the Senior Vice President for Research and Innovation at Michigan State University.

[Photo via the Information Processing blog, “Hail to the Chief”: Hsu is the one on the right.]

Hsu now faces possible firing, because of a social media campaign apparently started by an MSU grad student and SneerClub poster named Kevin Bird. What are the charges? Hsu appeared in 2017 on an alt-right podcast (albeit one that Noam Chomsky has also appeared on). On Hsu’s own podcast, he interviewed Ron Unz, who despite Jewish birth has become a nutcase Holocaust denier—yet somehow that topic never came up on the podcast. Hsu said that, as a scientist, he doesn’t know whether group differences in average IQ have a genetic component, but that our commitment to anti-racism should never hinge on questions of biology (a view also espoused by Peter Singer, perhaps the leading liberal moral philosopher of our time). Hsu has championed genomics research that, in addition to medical uses, might someday help enable embryo screening for traits like IQ. Finally, Hsu supports the continued use of standardized tests in university admissions (yes, that’s one of the listed charges).

Crucially, it doesn’t matter for present purposes if you disagree with many of Hsu’s views. The question is more like: is agreement with Steven Pinker, Jonathan Haidt, and other mild-mannered, Obama-supporting thinkers featured in your local airport bookstore now a firing offense in academia? And will those who affirm that it is, claim in the next breath to be oppressed, marginalized, the Rebel Alliance?

To be fair to the cancelers, I think they have two reasonable arguments in their favor.

The first that they’re “merely” asking for Hsu to step down as vice president, not for him to lose his tenured professorship in physics. Only professors, say the activists, enjoy academic freedom; administrators need to uphold the values and public image of their university, as Larry Summers learned fifteen years ago. (And besides, we might add, what intellectual iconoclast in their right mind would ever become a university VP, or want to stay one??) I’d actually be fine with this if I had any confidence that it was going to end here. But I don’t. Given the now-enshrined standards—e.g., that professors hold positions of power, and that the powerful can oppress the powerless, or even do violence to them, just by expressing or entertaining thoughts outside an ever-shrinking range—why should Hsu trust any assurances that he’ll be left alone, if he does go back to being a physics professor? If the SneerClubbers can cancel him, then how long until they cancel Pinker, or Haidt, or me? (I hope the SneerClubbers enthusiastically embrace those ideas! If they do, then no one ever again gets to call me paranoid about Red Guards behind every bush.)

The second reasonable argument is that, as far as I can tell, Hsu really did grant undeserved legitimacy to a Holocaust denier, via a friendly interview about other topics on his podcast. I think it would help if, without ceding a word that he doesn’t believe, Hsu were now to denounce racism, Holocaust denial, and specifically Ron Unz’s flirtation with Holocaust denial in the strongest possible terms, and explain why he didn’t bring the topic up with his guest (e.g., did he not know Unz’s views?).

If I used Twitter…

Saturday, April 4th, 2020

I’m thinking of writing a novel where human civilization is threatened by a global pandemic, and is then almost singlehandedly rescued by one man … a man who reigned for decades as the world’s prototypical ruthless and arrogant tech billionaire, but who was then transformed by the love of his wife. That is, if the billionaire can make it past government regulators as evil as they are stupid. I need some advice: how can I make my storyline a bit subtler, so critics don’t laugh it off as some immature nerd fantasy?

Updates (April 5): Thanks to several commenters for emphasizing that the wife needs to be a central character here: I agree! The other thing is, I don’t want Fox News cheering my novel for its Atlas Shrugged vibe. So maybe the pandemic is only surging out of control in the US because of the incompetence of a Republican president? I don’t want to go ridiculously overboard, but like, maybe the president is some thuggish conman with the diction of a 5-year-old, who the deluded Republicans cheer anyway? And maybe he’s also a Bible-thumping fundamentalist? OK, that’s too much, so maybe the fundamentalist is like the vice president or something, and he gets put in charge of the pandemic response and then sets about muzzling the scientists? As I said, I really need advice on making the messages subtler.

An alternative argument for why women leave STEM: Guest post by Karen Morenz

Thursday, January 16th, 2020

Scott’s preface: Imagine that every time you turned your blog over to a certain topic, you got denounced on Twitter and Reddit as a privileged douchebro, entitled STEMlord, counterrevolutionary bourgeoisie, etc. etc. The sane response would simply be to quit blogging about that topic. But there’s also an insane (or masochistic?) response: the response that says, “but if everyone like me stopped talking, we’d cede the field by default to the loudest, angriest voices on all sides—thereby giving those voices exactly what they wanted. To hell with that!”

A few weeks ago, while I was being attacked for sharing Steven Pinker’s guest post about NIPS vs. NeurIPS, I received a beautiful message of support from a PhD student in physical chemistry and quantum computing named Karen Morenz. Besides her strong words of encouragement, Karen wanted to share with me an essay she had written on Medium about why too many women leave STEM.

Karen’s essay, I found, marshaled data, logic, and her own experience in support of an insight that strikes me as true and important and underappreciated—one that dovetails with what I’ve heard from many other women in STEM fields, including my wife Dana. So I asked Karen for permission to reprint her essay on this blog, and she graciously agreed.

Briefly: anyone with a brain and a soul wants there to be many more women in STEM. Karen outlines a realistic way to achieve this shared goal. Crucially, Karen’s way is not about shaming male STEM nerds for their deep-seated misogyny, their arrogant mansplaining, or their gross, creepy, predatory sexual desires. Yes, you can go the shaming route (God knows it’s being tried). If you do, you’ll probably snare many guys who really do deserve to be shamed as creeps or misogynists, along with many more who don’t. Yet for all your efforts, Karen predicts, you’ll no more solve the original problem of too few women in STEM, than arresting the kulaks solved the problem of lifting the masses out of poverty.

For you still won’t have made a dent in the real issue: namely that, the way we’ve set things up, pursuing an academic STEM career demands fanatical devotion, to the exclusion of nearly everything else in life, between the ages of roughly 18 and 35. And as long as that’s true, Karen says, the majority of talented women are going to look at academic STEM, in light of all the other great options available to them, and say “no thanks.” Solving this problem might look like more money for maternity leave and childcare. It might also look like re-imagining the academic career trajectory itself, to make it easier to rejoin it after five or ten years away. Way back in 2006, I tried to make this point in a blog post called Nerdify the world, and the women will follow. I’m grateful to Karen for making it more cogently than I did.

Without further ado, here’s Karen’s essay. –SA

Is it really just sexism? An alternative argument for why women leave STEM

by Karen Morenz

Everyone knows that you’re not supposed to start your argument with ‘everyone knows,’ but in this case, I think we ought to make an exception:

Everyone knows that STEM (Science, Technology, Engineering and Mathematics) has a problem retaining women (see, for example Jean, Payne, and Thompson 2015). We pour money into attracting girls and women to STEM fields. We pour money into recruiting women, training women, and addressing sexism, both overt and subconscious. In 2011, the United States spent nearly $3 billion tax dollars on STEM education, of which roughly one third was spent supporting and encouraging underrepresented groups to enter STEM (including women). And yet, women are still leaving at alarming rates.

Alarming? Isn’t that a little, I don’t know, alarmist? Well, let’s look at some stats.

A recent report by the National Science Foundation (2011) found that women received 20.3% of the bachelor’s degrees and 18.6% of the PhD degrees in physics in 2008. In chemistry, women earned 49.95% of the bachelor’s degrees but only 36.1% of the doctoral degrees. By comparison, in biology women received 59.8% of the bachelor’s degrees and 50.6% of the doctoral degrees. A recent article in Chemical and Engineering News showed a chart, based on a survey of life-sciences workers by Liftstream and MassBio, demonstrating how women are vastly underrepresented in science leadership despite earning degrees at similar rates. The story is the same in academia: from comparable or even larger numbers of women at the student level, we move towards a significantly larger proportion of men at more and more advanced stages of an academic career.

Although 74% of women in STEM report “loving their work,” more than half (56%, in fact) leave over the course of their career — largely at the “mid-level” point, when the loss of their talent is most costly, as they have just completed training and begun to contribute maximally to the workforce.

A study by Dr. Flaherty found that women who obtained faculty positions in astronomy spent on average one year less than their male counterparts between completing their PhD and obtaining their position — but he concluded that this is because women leave the field at a rate 3 to 4 times greater than men; in particular, if they do not obtain a faculty position quickly, they will simply move to another career. So, women and men are hired at about the same rate during the early years of their postdocs, but women stop applying to academic positions and drop out of the field as time goes on, pulling down the average time to hiring for women.

There are many more studies to this effect. At this point, the assertion that women leave STEM at an alarming rate after obtaining PhDs is nothing short of an established fact. In fact, it’s actually a problem across all academic disciplines, as you can see in this matching chart showing the same phenomenon in humanities, social sciences, and education. The phenomenon has been affectionately dubbed the “leaky pipeline.”

But hang on a second, maybe there just aren’t enough women qualified for the top levels of STEM? Maybe it’ll all get better in a few years if we just wait around doing nothing?

Nope, sorry. This study says that 41% of highly qualified STEM people are female. And it’s clear from the previous charts and stats that a significantly larger number of women are getting PhDs than going on to be professors, in comparison to their male counterparts. Dr. Laurie Glimcher, when she started her professorship at Harvard University in the early 1980s, remembers seeing very few women in leadership positions. “I thought, ‘Oh, this is really going to change dramatically,’ ” she says. But 30 years later, “it’s not where I expected it to be.” Her experiences are similar to those of other leading female faculty.

So what gives? Why are all the STEM women leaving?

It is widely believed that sexism is the leading problem. A quick Google search of “sexism in STEM” will turn up a veritable cornucopia of articles to that effect. And indeed, around 60% of women report experiencing some form of sexism in the last year (Robnett 2016). So, that’s clearly not good.

And yet, if you ask leading women researchers like Professor Donna Strickland, the 2018 Nobel Laureate in Physics, or Professor Eugenia Kumacheva, Canada Research Chair in Advanced Functional Materials (Chemistry), they say that sexism was not a barrier in their careers. Moreover, extensive research has shown that sexism has decreased overall since Professors Strickland and Kumacheva (for example) were starting their careers. Even more interestingly, Dr. Rachael Robnett showed that more mathematical fields such as Physics have a greater problem with sexism than less mathematical fields such as Chemistry, a finding which rings true with the subjective experience of many women I know in Chemistry and Physics. However, as we saw above, women leave the field of Chemistry in greater proportions following their BSc than they leave Physics. On top of that, although 22% of women report experiencing sexual harassment at work, the proportion is the same in STEM and non-STEM careers — and yet women leave STEM careers at a much higher rate than non-STEM careers.

So, it seems that sexism cannot fully explain why women with STEM PhDs are leaving STEM. By the time women have earned a PhD, for the most part they have already survived the worst of the sexism. They’ve already proven themselves to be generally thick-skinned and, as anyone with a PhD can attest, very stubborn in the face of overwhelming difficulties. Sexism is frustrating, and it can limit advancement, but it doesn’t fully explain why we have so many women obtaining PhDs in STEM, and then leaving. In fact, at least in the U of T chemistry department, faculty hires are directly proportional to the applicant pool — although the exact number of applicants is not made public, from public information we can see that approximately one in four interview invitees are women, and approximately one in four hires are women. Our hiring committees have received bias training, and it seems that it has been largely successful. That’s not to say that we’re done, but it’s time to start looking elsewhere to explain why there are so few women sticking around.

So why don’t more women apply?

Well, one truly brilliant researcher had the groundbreaking idea of asking women why they left the field. When you ask women why they left, the number one reason they cite is balancing work/life responsibilities — which as far as I can tell is a euphemism for family concerns.

The research is in on this. Women who stay in academia expect to marry later, and to delay or completely forego having children, and if they do have children, they plan to have fewer than their non-STEM counterparts (Sassler et al. 2016; Owens 2012). Men in STEM show no such difference compared to their non-STEM counterparts; they marry and have children at about the same ages and rates as their non-STEM counterparts (Sassler et al. 2016). Women leave STEM in droves in their early to mid thirties (Funk and Parker 2018) — the time when women’s fertility begins to decrease and the risks of childbirth complications begin to skyrocket for both mother and child. Men don’t see an effect on their fertility until their mid forties. Of the 56% of women who leave STEM, 50% wind up self-employed or using their training in a not-for-profit or government role, 30% leave for a more ‘family friendly’ non-STEM career, and 20% leave to be stay-at-home moms (Ashcraft and Blithe 2002). Meanwhile, institutions with better childcare and maternity leave policies have twice(!) the number of female faculty in STEM (Troeger 2018). In analogy to the affectionately named “leaky pipeline,” the challenge of balancing motherhood and career has been dubbed the “maternal wall.”

To understand the so-called maternal wall better, let’s take a quick look at the sketch of a typical academic career.

For the sake of this exercise, let’s all pretend to be me. I’m a talented 25-year-old PhD candidate studying Physical Chemistry — I use laser spectroscopy to try to understand atypical energy transfer processes in innovative materials that I hope will one day be used to make vastly more efficient solar panels. I got my BSc in Chemistry and Mathematics at the age of 22, and have already published 4 scientific papers in two different fields (Astrophysics and Environmental Chemistry). I’ve got a big scholarship, and a lot of people supporting me to give me the best shot at an academic career — a career I dearly want. But I also want a family — maybe two or three kids. Here’s what I can expect if I pursue an academic career:

With any luck, 2–3 years from now I’ll graduate with a PhD, at the age of 27. Academics are expected to travel a lot, and to move a lot, especially in their 20s and early 30s — all of the key childbearing years. I’m planning to go on exchange next year, and then the year after that I’ll need to work hard to wrap up research, write a thesis, and travel to several conferences to showcase my work. After I finish my PhD, I’ll need to undertake one or two postdoctoral fellowships, lasting one or two years each, probably in completely different places. During that time, I’ll start to apply for professorships. In order to do this, I’ll travel around to conferences to advertise my work and to meet important leaders in my field, and then, if I am invited for interviews, I’ll travel around to different universities for two or three days at a time to undertake these interviews. This usually occurs in a person’s early 30s — our helpful astronomy guy, Dr. Flaherty, found the average time to hiring was 5 years, so let’s say I’m 32 at this point. If offered a position, I’ll spend the next year or two renovating and building a lab, buying equipment, recruiting talented graduate students, and designing and teaching courses. People work really, really hard during this time and have essentially no leisure time. Now I’m 34. Usually within 5 years, I’ll need to apply for tenure. This means that by the time I’m 36, I’ll need to be making significant contributions in my field, and then in the final year before applying for tenure, I will once more need to travel to many conferences to promote my work, in order to secure tenure — if I fail to do so, my position at the university will probably be terminated. Although many universities offer a “tenure extension” in cases where an assistant professor has had a child, this does not solve all of the problems. Taking a year off during that critical 5- or 6-year period often means that the research “goes bad” — students flounder, projects that were promising get “scooped” by competitors at other institutions, and sometimes, in biology and chemistry especially, experiments literally go bad. You wind up needing to rebuild much more than just a year’s worth of effort.

At no point during this time do I appear stable enough, career-wise, to take even six months off to be pregnant and care for a newborn. Hypothetical future-me is travelling around, or even moving, while conducting and promoting my own independent research and training students. As you’re likely aware, very pregnant people and newborns don’t travel well. And academia has a very individualistic and meritocratic culture. Starting at the graduate level, huge emphasis is placed on independent research and independent contributions, rather than on valuing team efforts. This feature of academia is both a blessing and a curse. The individualistic culture means that people have the independence and the freedom to pursue whatever research interests them — in fact this is the main draw for me personally. But it also means that there is often no one to fall back on when you need extra support, and because of biological constraints, this winds up impacting women more than men.

At this point, I need to make sure that you’re aware of some basics of female reproductive biology. According to Wikipedia, the unquestionable source of all reliable knowledge, at age 25, my risk of conceiving a baby with chromosomal abnormalities (including Down syndrome) is about 1 in 1,400. By 35, that risk more than quadruples to 1 in 340. At 30, I have a 75% chance of a successful birth in one year, but by 35 it has dropped to 66%, and by 40 it’s down to 44%. Meanwhile, 87 to 94% of women report at least one health problem immediately after birth, and 1.5% of mothers have a severe health problem, while 31% have long-term persistent health problems as a result of pregnancy (defined as lasting more than six months after delivery). Furthermore, mothers over the age of 35 are at higher risk for pregnancy complications like preterm delivery, hypertension, superimposed preeclampsia, and severe preeclampsia (Cavazos-Rehg et al. 2016). Because of factors like these, pregnancies in women over 35 are known as “geriatric pregnancies,” due to the drastically increased risk of complications. This tight timeline for births is often called the “biological clock” — if women want a family, they basically need to start before 35. Now, that’s not to say it’s impossible to have a child later on, and in fact some studies show that doing so has positive impacts on the child’s mental health. But it is riskier.

So, women with a PhD in STEM know that they have the capability to make interesting contributions to STEM, and to make plenty of money doing it. They usually marry someone who also has or expects to make a high salary as well. But this isn’t the only consideration. Such highly educated women are usually aware of the biological clock and the risks associated with pregnancy, and are confident in their understanding of statistical risks.

The Irish say, “The common challenge facing young women is achieving a satisfactory work-life balance, especially when children are small. From a career perspective, this period of parenthood (which after all is relatively short compared to an entire working life) tends to coincide exactly with the critical point at which an individual’s career may or may not take off. […] All the evidence shows that it is at this point that women either drop out of the workforce altogether, switch to part-time working or move to more family-friendly jobs, which may be less demanding and which do not always utilise their full skillset.”

And in the Netherlands: “The research project in Tilburg also showed that women academics more often have no children, or fewer children, than women outside academia.” Meanwhile, in Italy: “On a personal level, the data show that for a significant number of women there is a trade-off between family and work: a large share of female economists in Italy do not live with a partner and do not have children.”

Most jobs available to women with STEM PhDs offer greater stability and a larger salary earlier in the career. Moreover, most non-academic careers have less emphasis on independent research, meaning that employees usually work within the scope of a larger team, and so if a person has to take some time off, there are others who can help cover their workload. By and large, women leave to go to a career where they will be stable, well funded, and well supported, even if it doesn’t fulfill their passion for STEM — or they leave to be stay-at-home moms or self-employed.

I would presume that if we made academia a more feasible place for a woman with a family to work, we could keep almost all of the 20% of leavers who leave to stay at home, almost all of the 50% who wind up self-employed or in not-for-profit or government work, and all of the 30% who leave for more family-friendly careers (after all, if academia were made as family-friendly as other careers, there would be no incentive to leave). Of course, there is nothing wrong with being a stay-at-home parent — it’s an admirable choice and contributes greatly to our society. One estimate valued the equivalent salary benefit of stay-at-home parenthood at about $160,000/year. Moreover, children with a stay-at-home parent show long-term benefits such as better school performance — something that most academic women would want for their children. But a lot of people only choose it out of necessity — about half of stay-at-home moms would prefer to be working (Ciciolla, Curlee, & Luthar 2017). When the reality is that your salary is barely more than the cost of daycare, a lot of people wind up giving up and staying home with their kids rather than paying for daycare. In a heterosexual couple it will usually be the woman who winds up staying home, since she is the one who needs to do things like breastfeed anyway. And so we lose these women from the workforce.

And yet, somehow, during this informal research adventure of mine, most scholars and policy makers seem to be advising that we try to encourage young girls to be interested in STEM, and that we address sexism in the workplace, with the implication that this will fix the high attrition rate among women in STEM. But from what I’ve found, the stats don’t back up sexism as the main reason women leave. There is sexism, and that is a problem, and women do leave STEM because of it — but it’s a problem that we’re already dealing with pretty successfully, and it’s not why the majority of women who have already obtained STEM PhDs opt to leave the field. The whole family-planning thing is huge and, for some reason, almost totally swept under the rug — mostly because we’re too shy to talk about it, I think.

In fact, I think that the plethora of articles suggesting that the problem is sexism actually contributes to our unwillingness to talk about the family-planning problem, because it reinforces the perception that men in power will not hire a woman for fear that she’ll get pregnant and take time off. Why would anyone talk about how they want to have a family when they keep hearing that even the mere suggestion of such a thing will limit their chances of being hired? I personally know women who have avoided bringing up the topic with colleagues or supervisors for fear of professional repercussions. So we spend all this time and energy talking about how sexism is really bad, and very little time trying to address the family-planning challenge, because, I guess, as the stats show, if women are serious enough about science then they just give up on the family (except for the really, really exceptional ones who can handle the stresses of both simultaneously).

To be very clear, I’m not saying that sexism is not a problem. What I am saying is that, thanks to the sustained efforts of a large number of people over a long period of time, we’ve reduced the sexism problem to the point where, at least at the graduate level, it is no longer the largest barrier to women’s advancement in STEM. Hurray! That does not mean that we should stop paying attention to the issue of sexism, but it does mean that it’s time to start paying more attention to other issues, like how to properly support women who want to raise a family while also maintaining a career in STEM.

So what can we do to better support STEM women who want families?

A couple of solutions have been tentatively tested. From a study mentioned above, it’s clear that providing free and conveniently located childcare makes a colossal difference to women’s choices of whether or not to stay in STEM, alongside extended and paid maternity leave. Another popular and successful strategy was implemented by a leading woman in STEM, Laurie Glimcher, a past Harvard Professor in Immunology and now CEO of Dana-Farber Cancer Institute. While working at NIH, Dr. Glimcher designed a program to provide primary caregivers (usually women) with an assistant or lab technician to help manage their laboratories while they cared for children. Now, at Dana-Farber Cancer Institute, she has created a similar program to pay for a technician or postdoctoral researcher for assistant professors. In the academic setting, Dr. Glimcher’s strategies are key for helping to alleviate the challenges associated with the individualistic culture of academia without compromising women’s research and leadership potential.

For me personally, I’m in the ideal situation for an academic woman. I finished my BSc with high honours in four years, and with many awards. I’ve already had success in research and have published several peer-reviewed papers. I’ve faced some mild sexism from peers and a couple of TAs, but nothing that’s seriously held me back. My supervisors have all been extremely supportive and feminist, and all of the people I work with on a daily basis are equally wonderful. Despite all of this support, I’m looking at the timelines of an academic career, and the time constraints of female reproduction, and honestly, I don’t see how I can feasibly expect to stay in academia and have the family life I want. And since I’m in the privileged position of being surrounded by supportive and feminist colleagues, I can say it: I’m considering leaving academia, if something doesn’t change, because even though I love it, I don’t see how it can fit into my family plans.

But wait! All of these interventions are really expensive. Money doesn’t just grow on trees, you know!

It doesn’t in general, but in this case it kind of does — well, actually, we already grew it. We spend billions of dollars training women in STEM. By not making full use of their skills, if we look at only the american economy, we are wasting about $1.5 billion USD per year in economic benefits they would have produced if they stayed in STEM. So here’s a business proposal: let’s spend half of that on better family support and scientific assistants for primary caregivers, and keep the other half in profit. Heck, let’s spend 99% — $1.485 billion (in the states alone) on better support. That should put a dent in the support bill, and I’d sure pick up $15 million if I saw it lying around. Wouldn’t you?

By demonstrating that we will support women in STEM who choose to have a family, we will encourage more women with PhDs to apply for the academic positions that they are eminently qualified for. Our institutions will benefit from the wider applicant pool, and our whole society will benefit from having the skills of these highly trained and intelligent women put to use innovating new solutions to our modern day challenges.

Quantum Dominance, Hegemony, and Superiority

Thursday, December 19th, 2019

Yay! I’m now a Fellow of the ACM. Along with my fellow new inductee Peter Shor, who I hear is a real up-and-comer in the quantum computing field. I will seek to use this awesome responsibility to steer the ACM along the path of good rather than evil.

Also, last week, I attended the Q2B conference in San Jose, where a central theme was the outlook for practical quantum computing in the wake of the first clear demonstration of quantum computational supremacy. Thanks to the folks at QC Ware for organizing a fun conference (full disclosure: I’m QC Ware’s Chief Scientific Advisor). I’ll have more to say about the actual scientific things discussed at Q2B in future posts.

None of that is why you’re here, though. You’re here because of the battle over “quantum supremacy.”

A week ago, my good friend and collaborator Zach Weinersmith, of SMBC Comics, put out a cartoon with a dark-curly-haired scientist named “Dr. Aaronson,” who’s revealed on a hot mic to be an evil “quantum supremacist.” Apparently a rush job, this cartoon is far from Zach’s finest work. For one thing, if the character is supposed to be me, why not draw him as me, and if he isn’t, why call him “Dr. Aaronson”? In any case, I learned from talking to Zach that the cartoon’s timing was purely coincidental: Zach didn’t even realize what a hornet’s nest he was poking with this.

Ever since John Preskill coined it in 2012, “quantum supremacy” has been an awkward term. Much as I admire John Preskill’s wisdom, brilliance, generosity, and good sense, in physics as in everything else—yeah, “quantum supremacy” is not a term I would’ve coined, and it’s certainly not a hill I’d choose to die on. Once it had gained common currency, though, I sort of took a liking to it, mostly because I realized that I could mine it for dark one-liners in my talks.

The thinking was: even as white supremacy was making its horrific resurgence in the US and around the world, here we were, physicists and computer scientists and mathematicians of varied skin tones and accents and genders, coming together to pursue a different and better kind of supremacy—a small reflection of the better world that we still believed was possible. You might say that we were reclaiming the word “supremacy”—which, after all, just means a state of being supreme—for something non-sexist and non-racist and inclusive and good.

In the world of 2019, alas, perhaps it was inevitable that people wouldn’t leave things there.

My first intimation came a month ago, when Leonie Mueck—someone who I’d gotten to know and like when she was an editor at Nature handling quantum information papers—emailed me about her view that our community should abandon the term “quantum supremacy,” because of its potential to make women and minorities uncomfortable in our field. She advocated using “quantum advantage” instead.

So I sent Leonie back a friendly reply, explaining that, as the father of a math-loving 6-year-old girl, I understood and shared her concerns—but also, that I didn’t know an alternative term that really worked.

See, it’s like this. Preskill meant “quantum supremacy” to refer to a momentous event that seemed likely to arrive in a matter of years: namely, the moment when programmable quantum computers would first outpace the ability of the fastest classical supercomputers on earth, running the fastest algorithms known by humans, to simulate what the quantum computers were doing (at least on special, contrived problems). And … “the historic milestone of quantum advantage”? It just doesn’t sound right. Plus, as many others pointed out, the term “quantum advantage” is already used to refer to … well, quantum advantages, which might fall well short of supremacy.

But one could go further. Suppose we did switch to “quantum advantage.” Couldn’t that term, too, remind vulnerable people about the unfair advantages that some groups have over others? Indeed, while “advantage” is certainly subtler than “supremacy,” couldn’t that make it all the more insidious, and therefore dangerous?

Oblivious though I sometimes am, I realized Leonie would be unhappy if I offered that, because of my wholehearted agreement, I would henceforth never again call it “quantum supremacy,” but only “quantum superiority,” “quantum dominance,” or “quantum hegemony.”

But maybe you now see the problem. What word does the English language provide to describe one thing decisively beating or being better than a different thing for some purpose, and which doesn’t have unsavory connotations?

I’ve heard “quantum ascendancy,” but that makes it sound like we’re a UFO cult—waiting to ascend, like ytterbium ions caught in a laser beam, to a vast quantum computer in the sky.

I’ve heard “quantum inimitability” (that is, inability to imitate using a classical computer), but who can pronounce that?

Yesterday, my brilliant former student Ewin Tang (yes, that one) relayed to me a suggestion by Kevin Tian: “quantum eclipse” (that is, the moment when quantum computers first eclipse classical ones for some task). But would one want to speak of a “quantum eclipse experiment”? And shouldn’t we expect that, the cuter and cleverer the term, the harder it will be to use unironically?

In summary, while someone might think of a term so inspired that it immediately supplants “quantum supremacy” (and while I welcome suggestions), I currently regard it as an open problem.

Anyway, evidently dissatisfied with my response, last week Leonie teamed up with 13 others to publish a letter in Nature, which was originally entitled “Supremacy is for racists—use ‘quantum advantage,'” but whose title I see has now been changed to the less inflammatory “Instead of ‘supremacy’ use ‘quantum advantage.'” Leonie’s co-signatories included four of my good friends and colleagues: Alan Aspuru-Guzik, Helmut Katzgraber, Anne Broadbent, and Chris Granade (the last of whom got started in the field by helping me edit Quantum Computing Since Democritus).

(Update: Leonie pointed me to a longer list of signatories here, at their website called “quantumresponsibility.org.” A few names that might be known to Shtetl-Optimized readers are Andrew White, David Yonge-Mallo, Debbie Leung, Matt Leifer, Matthias Troyer.)

Their letter says:

The community claims that quantum supremacy is a technical term with a specified meaning. However, any technical justification for this descriptor could get swamped as it enters the public arena after the intense media coverage of the past few months.

In our view, ‘supremacy’ has overtones of violence, neocolonialism and racism through its association with ‘white supremacy’. Inherently violent language has crept into other branches of science as well — in human and robotic spaceflight, for example, terms such as ‘conquest’, ‘colonization’ and ‘settlement’ evoke the terra nullius arguments of settler colonialism and must be contextualized against ongoing issues of neocolonialism.

Instead, quantum computing should be an open arena and an inspiration for a new generation of scientists.

When I did an “Ask Me Anything” session as the closing event at Q2B, Sarah Kaiser asked me to comment on the Nature petition. So I repeated what I’d said in my emailed response to Leonie—running through the problems with each proposed alternative term, talking about the value of reclaiming the word “supremacy,” and mostly just trying to defuse the tension by getting everyone laughing together. Sarah later tweeted that she was “really disappointed” in my response.

Then the Wall Street Journal got in on the action, with a brief editorial (warning: paywalled) mocking the Nature petition:

There it is, folks: Mankind has hit quantum wokeness. Our species, akin to Schrödinger’s cat, is simultaneously brilliant and brain-dead. We built a quantum computer and then argued about whether the write-up was linguistically racist.

Taken seriously, the renaming game will never end. First put a Sharpie to the Supremacy Clause of the U.S. Constitution, which says federal laws trump state laws. Cancel Matt Damon for his 2004 role in “The Bourne Supremacy.” Make the Air Force give up the term “air supremacy.” Tell lovers of supreme pizza to quit being so chauvinistic about their toppings. Please inform Motown legend Diana Ross that the Supremes are problematic.

The quirks of quantum mechanics, some people argue, are explained by the existence of many universes. How did we get stuck in this one?

Steven Pinker also weighed in, with a linguistically-informed tweetstorm:

This sounds like something from The Onion but actually appeared in Nature … It follows the wokified stigmatization of other innocent words, like “House Master” (now, at Harvard, Residential Dean) and “NIPS” (Neural Information Processing Systems, now NeurIPS). It’s a familiar linguistic phenomenon, a lexical version of Gresham’s Law: bad meanings drive good ones out of circulation. Examples: the doomed “niggardly” (no relation to the n-word) and the original senses of “cock,” “ass,” “prick,” “pussy,” and “booty.” Still, the prissy banning of words by academics should be resisted. It dumbs down understanding of language: word meanings are conventions, not spells with magical powers, and all words have multiple senses, which are distinguished in context. Also, it makes academia a laughingstock, tars the innocent, and does nothing to combat actual racism & sexism.

Others had a stronger reaction. Curtis Yarvin, better known as Mencius Moldbug, is one of the founders of “neoreaction” (and a significant influence on Steve Bannon, Michael Anton, and other Trumpists). Regulars might remember that Yarvin argued with me in Shtetl-Optimized‘s comment section, under a post in which I denounced Trump’s travel ban and its effects on my Iranian PhD student. Since then, Yarvin has sent me many emails, which have ranged from long to extremely long, and whose message could be summarized as: “[labored breathing] Abandon your liberal Enlightenment pretensions, young Nerdwalker. Come over the Dark Side.”

After the “supremacy is for racists” letter came out in Nature, though, Yarvin sent me his shortest email ever. It was simply a link to the letter, along with the comment “I knew it would come to this.”

He meant: “What more proof do you need, young Nerdawan, that this performative wokeness is a cancer that will eventually infect everything you value—even totally apolitical research in quantum information? And by extension, that my whole worldview, which warned of this, is fundamentally correct, while your faith in liberal academia is naïve, and will be repaid only with backstabbing?”

In a subsequent email, Yarvin predicted that in two years, the whole community will be saying “quantum advantage” instead of “quantum supremacy,” and in five years I’ll be saying “quantum advantage” too. As Yarvin famously wrote: “Cthulhu may swim slowly. But he only swims left.”

So what do I really think about this epic battle for (and against) supremacy?

Truthfully, half of me just wants to switch to “quantum advantage” right now and be done with it. As I said, I know some of the signatories of the Nature letter to be smart and reasonable and kind. They don’t wish to rid the planet of everyone like me. They’re not Amanda Marcottes or Arthur Chus. Furthermore, there’s little I despise more than a meaty scientific debate devolving into a pointless semantic one, with brilliant friend after brilliant friend getting sucked into the vortex (“you too?”). I’m strongly in the Pinkerian camp, which holds that words are just arbitrary designators, devoid of the totemic power to dictate thoughts. So if friends and colleagues—even just a few of them—tell me that they find some word I use to be offensive, why not just be a mensch, apologize for any unintended hurt, switch words midsentence, and continue discussing the matter at hand?

But then the other half of me wonders: once we’ve ceded an open-ended veto over technical terms that remind anyone of anything bad, where does it stop? How do we ever certify a word as kosher? At what point do we all get to stop arguing and laugh together?

To make this worry concrete, look back at Sarah Kaiser’s Twitter thread—the one where she expresses disappointment in me. Below her tweet, someone remarks that, besides “quantum supremacy,” the word “ancilla” (as in ancilla qubit, a qubit used for intermediate computation or other auxiliary purposes) is problematic as well. Here’s Sarah’s response:

I agree, but I wanted to start by focusing on the obvious one, Its harder for them to object to just one to start with, then once they admit the logic, we can expand the list

(What would Curtis Yarvin say about that?)

You’re probably now wondering: what’s wrong with “ancilla”? Apparently, in ancient Rome, an “ancilla” was a female slave, and indeed that’s the Latin root of the English adjective “ancillary” (as in “providing support to”). I confess that I hadn’t known that—had you? Admittedly, once you do know, you might never again look at a Controlled-NOT gate—pitilessly flipping an ancilla qubit, subject only to the whims of a nearby control qubit—in quite the same way.

(Ah, but the ancilla can fight back against her controller! And she does—in the Hadamard basis.)
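(For anyone who wants the physics behind that quip: conjugating a CNOT by Hadamards on both qubits is a textbook identity that literally swaps control and target,

$$(H \otimes H)\,\mathrm{CNOT}_{1\to 2}\,(H \otimes H) = \mathrm{CNOT}_{2\to 1},$$

so in the Hadamard basis it really is the “ancilla” that flips the “control.”)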

The thing is, if we’re gonna play this game: what about annihilation operators? Won’t those need to be … annihilated from physics?

And what about unitary matrices? Doesn’t their very name negate the multiplicity of perspectives and cultures?

What about Dirac’s oddly-named bra/ket notation, with its limitless potential for puerile jokes, about the “bra” vectors displaying their contents horizontally and so forth? (Did you smile at that, you hateful pig?)

What about daggers? Don’t we need a less violent conjugate transpose?

Not to beat a dead horse, but once you hunt for examples, you realize that the whole dictionary is shot through with domination and brutality—that you’d have to massacre the English language to take it out. There’s nothing special about math or physics in this respect.

The same half of me also thinks about my friends and colleagues who oppose claims of quantum supremacy, or even the quest for quantum supremacy, on various scientific grounds. I.e., either they don’t think that the Google team achieved what it said, or they think that the task wasn’t hard enough for classical computers, or they think that the entire goal is misguided or irrelevant or uninteresting.

Which is fine—these are precisely the arguments we should be having—except that I’ve personally seen some of my respected colleagues, while arguing for these positions, opportunistically tack on ideological objections to the term “quantum supremacy.” Just to goose up their case, I guess. And I confess that every time they did this, it made me want to keep saying “quantum supremacy” from now till the end of time—solely to deny these colleagues a cheap and unearned “victory,” one they apparently felt they couldn’t obtain on the merits alone. I realize that this is childish and irrational.

Most of all, though, the half of me that I’m talking about thinks about Curtis Yarvin and the Wall Street Journal editorial board, cackling with glee to see their worldview so dramatically confirmed—as theatrical wokeness, that self-parodying modern monstrosity, turns its gaze on (of all things) quantum computing research. More red meat to fire up the base—or at least that sliver of the base nerdy enough to care. And the left, as usual, walks right into the trap, sacrificing its credibility with the outside world to pursue a runaway virtue-signaling spiral.

The same half of me thinks: do we really want to fight racism and sexism? Then let’s work together to assemble a broad coalition that can defeat Trump. And Jair Bolsonaro, and Viktor Orbán, and all the other ghastly manifestations of humanity’s collective lizard-brain. Then, if we’re really fantasizing, we could liberalize the drug laws, and get contraception and loans and education to women in the Third World, and stop the systematic disenfranchisement of black voters, and open up the world’s richer, whiter, and higher-elevation countries to climate refugees, and protect the world’s remaining indigenous lands (those that didn’t burn to the ground this year).

In this context, the trouble with obsessing over terms like “quantum supremacy” is not merely that it diverts attention, while contributing nothing to fighting the world’s actual racism and sexism. The trouble is that the obsessions are actually harmful. For they make academics—along with progressive activists—look silly. They make people think that we must not have meant it when we talked about the existential urgency of climate change and the world’s other crises. They pump oxygen into right-wing echo chambers.

But it’s worse than ridiculous, because of the message that I fear is received by many outside the activists’ bubble. When you say stuff like “[quantum] supremacy is for racists,” what’s heard might be something more like:

“Watch your back, you disgusting supremacist. Yes, you. You claim that you mentor women and minorities, donate to good causes, try hard to confront the demons in your own character? Ha! None of that counts for anything with us. You’ll never be with-it enough to be our ally, so don’t bother trying. We’ll see to it that you’re never safe, not even in the most abstruse and apolitical fields. We’ll comb through your words—even words like ‘ancilla qubit’—looking for any that we can cast as offensive by our opaque and ever-shifting standards. And once we find some, we’ll have it within our power to end your career, and you’ll be reduced to groveling that we don’t. Remember those popular kids who bullied you in second grade, giving you nightmares of social ostracism that persist to this day? We plan to achieve what even those bullies couldn’t: to shame you with the full backing of the modern world’s moral code. See, we’re the good guys of this story. It’s goodness itself that’s branding you as racist scum.”

In short, I claim that the message—not the message intended, of course, by anyone other than a Chu or a Marcotte or a SneerClubber, but the message received—is basically a Trump campaign ad. I claim further that our civilization’s current self-inflicted catastrophe will end—i.e., the believers in science and reason and progress and rule of law will claw their way back to power—when, and only when, a generation of activists emerges that understands these dynamics as well as Barack Obama did.

Wouldn’t it be awesome if, five years from now, I could say to Curtis Yarvin: you were wrong? If I could say to him: my colleagues and I still use the term ‘quantum supremacy’ whenever we care to, and none of us have been cancelled or ostracized for it—so maybe you should revisit your paranoid theories about Cthulhu and the Cathedral and so forth? If I could say: quantum computing researchers now have bigger fish to fry than arguments over words—like moving beyond quantum supremacy to the first useful quantum simulations, as well as the race for scalability and fault-tolerance? And even: progressive activists now have bigger fish to fry too—like retaking actual power all over the world?

Anyway, as I said, that’s how half of me feels. The other half is ready to switch to “quantum advantage” or any other serviceable term and get back to doing science.

On two blog posts of Jerry Coyne

Saturday, July 13th, 2019

A few months ago, I got to know Jerry Coyne, the recently-retired biologist at the University of Chicago who writes the blog “Why Evolution Is True.” The interaction started when Jerry put up a bemused post about my thoughts on predictability and free will, and I pointed out that if he wanted to engage me on those topics, there was more to go on than an 8-minute YouTube video. I told Coyne that it would be a shame to get off on the wrong foot with him, since perusal of his blog made it obvious that whatever he and I disputed, it was dwarfed by our areas of agreement. He and I exchanged more emails and had lunch in Chicago.

By way of explaining how he hadn’t read “The Ghost in the Quantum Turing Machine,” Coyne emphasized the difference between his turnaround time and mine: while these days I update my blog only a couple of times per month, Coyne often updates multiple times per day. Indeed, the sheer volume of material he posts, on subjects from biology to culture wars to Chicago hot dogs, would take months to absorb.

Today, though, I want to comment on just two posts of Jerry’s.

The first post, from back in May, concerns David Gelernter, the computer science professor at Yale who was infamously injured in a 1993 attack by the Unabomber, and who’s now mainly known as a right-wing commentator. I don’t know Gelernter, though I did once attend a small interdisciplinary workshop in the south of France that Gelernter also attended, wherein I gave a talk about quantum computing and computational complexity in which Gelernter showed no interest. Anyway, Gelernter, in an essay in May for the Claremont Review of Books, argued that recent work has definitively disproved Darwinism as a mechanism for generating new species, and until something better comes along, Intelligent Design is the best available alternative.

Curiously, I think that Gelernter’s argument falls flat not for detailed reasons of biology, but mostly just because it indulges in bad math and computer science—in fact, in precisely the sorts of arguments that I was trying to answer in my segment on Morgan Freeman’s Through the Wormhole (see also Section 3.2 of Why Philosophers Should Care About Computational Complexity). Gelernter says that

  1. a random change to an amino acid sequence will pretty much always make it worse,
  2. the probability of finding a useful new such sequence by picking one at random is at most ~1 in 10^77, and
  3. there have only been maybe ~10^40 organisms in earth’s history.

Since 10^77 >> 10^40, Darwinism is thereby refuted—not in principle, but as an explanation for life on earth. QED.

The most glaring hole in the above argument, it seems to me, is that it simply ignores intermediate possible numbers of mutations. How hard would it be to change, not 1 or 100, but 5 amino acids in a given protein to get a usefully different one—as might happen, for example, with local optimization methods like simulated annealing run at nonzero temperature? And how many chances were there for that kind of mutation in the earth’s history?
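To put rough numbers on this (a back-of-envelope illustration, assuming a 150-residue protein, the length behind estimates like the 1-in-10^77 figure): the space of all 5-point mutants of a single such protein has size only about

$$\binom{150}{5} \cdot 19^{5} \;\approx\; \left(5.9 \times 10^{8}\right) \times \left(2.5 \times 10^{6}\right) \;\approx\; 1.5 \times 10^{15},$$

which is dwarfed by ~10^40 organisms. A search that explores small neighborhoods of already-functional sequences faces odds nothing like the ones Gelernter quotes for finding a functional sequence from scratch.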

Gelernter can’t personally see how a path could cut through the exponentially large solution space in a polynomial amount of time, so he asserts that it’s impossible. Many of the would-be P≠NP provers who email me every week do the same. But this particular kind of “argument from incredulity” has an abysmal track record: it would’ve applied equally well, for example, to problems like maximum matching that turned out to have efficient algorithms. This is why, in CS, we demand better evidence of hardness—like completeness results or black-box lower bounds—neither of which seem however to apply to the case at hand. Surely Gelernter understands all this, but had he not, he could’ve learned it from my lecture at the workshop in France!

Alas, online debate, as it’s wont to do, focused less on Gelernter’s actual arguments and the problems with them, than on the tiresome questions of “standing” and “status.” In particular: does Gelernter’s authority, as a noted computer science professor, somehow lend new weight to Intelligent Design? Or conversely: does the very fact that a computer scientist endorsed ID prove that computer science itself isn’t a real science at all, and that its practitioners should never be taken seriously in any statements about the real world?

It’s hard to say which of these two questions makes me want to bury my face deeper into my hands. Serge Lang, the famous mathematician and textbook author, spent much of his later life fervently denying the connection between HIV and AIDS. Lynn Margulis, the discoverer of the origin of mitochondria (and Carl Sagan’s first wife), died a 9/11 truther. What broader lesson should we draw from any of this? And anyway, what percentage of computer scientists actually do doubt evolution, and how does it compare to the percentage in other academic fields and other professions? Isn’t the question of how divorced we computer scientists are from the real world an … ahem … empirical matter, one hard to answer on the basis of armchair certainties and anecdotes?

Speaking of empiricism, if you check Gelernter’s publication list on DBLP and his Google Scholar page, you’ll find that he did influential work in programming languages, parallel computing, and other areas from 1981 through 1997, and then in the past 22 years published a grand total of … two papers in computer science. One with four coauthors, the other a review/perspective piece about his earlier work. So it seems fair to say that, some time after receiving tenure in a CS department, Gelernter pivoted (to put it mildly) away from CS and toward conservative punditry. His recent offerings, in case you’re curious, include the book America-Lite: How Imperial Academia Dismantled Our Culture (and Ushered In the Obamacrats).

Some will claim that this case underscores what’s wrong with the tenure system itself, while others will reply that it’s precisely what tenure was designed for, even if in this instance you happen to disagree with what Gelernter uses his tenured freedom to say. The point I wanted to make is different, though. It’s that the question “what kind of a field is computer science, anyway, that a guy can do high-level CS research on Monday, and then on Tuesday reject Darwinism and unironically use the word ‘Obamacrat’?”—well, even if I accepted the immense weight this question places on one atypical example (which I don’t), and even if I dismissed the power of compartmentalization (which I again don’t), the question still wouldn’t arise in Gelernter’s case, since getting from “Monday” to “Tuesday” seems to have taken him 15+ years.

Anyway, the second post of Coyne’s that I wanted to talk about is from just yesterday, and is about Jeffrey Epstein—the financier, science philanthropist, and confessed sex offender, whose appalling crimes you’ll have read all about this week if you weren’t on a long sea voyage without Internet or something.

For the benefit of my many fair-minded friends on Twitter, I should clarify that I’ve never met Jeffrey Epstein, let alone accepted any private flights to his sex island or whatever. I doubt he has any clue who I am either—even if he did once claim to be “intrigued” by quantum information.

I do know a few of the scientists who Epstein once hung out with, including Seth Lloyd and Steven Pinker. Pinker, in particular, is now facing vociferous attacks on Twitter, similar in magnitude perhaps to what I faced in the comment-171 affair, for having been photographed next to Epstein at a 2014 luncheon that was hosted by Lawrence Krauss (a physicist who later faced sexual harassment allegations of his own). By the evidentiary standards of social media, this photo suffices to convict Pinker as basically a child molester himself, and is also a devastating refutation of any data that Pinker might have adduced in his books about the Enlightenment’s contributions to human flourishing.

From my standpoint, what’s surprising is not that Pinker is up against this, but that it took this long to happen, given that Pinker’s pro-Enlightenment, anti-blank-slate views have had the effect of painting a giant red target on his back. Despite the near-inevitability, though, you can’t blame Pinker for wanting to defend himself, as I did when it was my turn for the struggle session.

Thus, in response to an emailed inquiry by Jerry Coyne, Pinker shared some detailed reflections about Epstein; Pinker then gave Coyne permission to post those reflections on his blog (though they were originally meant for Coyne only). Like everything Pinker writes, they’re worth reading in full. Here’s the opening paragraph:

The annoying irony is that I could never stand the guy [Epstein], never took research funding from him, and always tried to keep my distance. Friends and colleagues described him to me as a quantitative genius and a scientific sophisticate, and they invited me to salons and coffee klatches at which he held court. But I found him to be a kibitzer and a dilettante — he would abruptly change the subject ADD style, dismiss an observation with an adolescent wisecrack, and privilege his own intuitions over systematic data.

Pinker goes on to discuss his record of celebrating, and extensively documenting, the forces of modernity that led to dramatic reductions in violence against women and that have the power to continue doing so. On Twitter, Pinker had already written: “Needless to say I condemn Epstein’s crimes in the strongest terms.”

I probably should’ve predicted that Pinker would then be attacked again—this time, for having prefaced his condemnation with the phrase “needless to say.” The argument, as best I can follow, runs like this: given all the isms of which woke Twitter has already convicted Pinker—scientism, neoliberalism, biological determinism, etc.—how could Pinker’s being against Epstein’s crimes (which we recently learned probably include the rape, and not only statutorily, of a 15-year-old) possibly be assumed as a given?

For the record, just as Epstein’s friends and enablers weren’t confined to one party or ideology, so the public condemnation of Epstein strikes me as a matter that is (or should be) beyond ideology, with all reasonable dispute now confined to the space between “very bad” and “extremely bad,” between “lock away for years” and “lock away for life.”

While I didn’t need Pinker to tell me that, one reason I personally appreciated his comments is that they helped to answer a question that had bugged me, and that none of the mountains of other condemnations of Epstein had given me a clear sense about. Namely: supposing, hypothetically, that I’d met Epstein around 2002 or so—without, of course, knowing about his crimes—would I have been as taken with him as many other academics seem to have been? (Would you have been? How sure are you?)

Over the last decade, I’ve had the opportunity to meet some titans and semi-titans of finance and business, to discuss quantum computing and other nerdy topics. For a few (by no means all) of these titans, my overriding impression was precisely their unwillingness to concentrate on any one point for more than about 20 seconds—as though they wanted the crust of a deep intellectual exchange without the meat filling. My experience with them fit Pinker’s description of Epstein to a T (though I hasten to add that, as far as I know, none of these others ran teenage sex rings).

Anyway, given all the anger at Pinker for having intersected with Epstein, it’s ironic that I could easily imagine Pinker’s comments rattling Epstein the most of anyone’s, if Epstein hears of them from his prison cell. It’s like: Epstein must have developed a skin like a rhinoceros’s by this point about being called a child abuser, a creep, and a thousand similar (and similarly deserved) epithets. But “a kibitzer and a dilettante” who merely lured famous intellectuals into his living room, with wads of cash not entirely unlike the ones used to lure teenage girls to his massage table? Ouch!

OK, but what about Alan Dershowitz—the man who apparently used to be Epstein’s close friend, who still is Pinker’s friend, and who played a crucial role in securing Epstein’s 2008 plea bargain, the one now condemned as a travesty of justice? I’m not sure how I feel about Dershowitz.  It’s like: I understand that our system requires attorneys willing to mount a vociferous defense even for clients who they privately know or believe to be guilty—and even to get those clients off on technicalities or plea bargains whenever they can.  I’m also incredibly grateful that I chose CS rather than law school, because I don’t think I could last an hour advocating causes that I knew to be unjust. Just like my fellow CS professor, the Intelligent Design advocate David Gelernter, I have the privilege and the burden of speaking only for myself.

The Zeroth Commandment

Sunday, May 6th, 2018

“I call heaven and earth to witness against you this day, that I have set before thee life and death, the blessing and the curse: therefore choose life, that thou mayest live, thou and thy seed.” –Deuteronomy 30:19

“Remember your humanity, and forget the rest.” –Bertrand Russell and Albert Einstein, 1955


I first met Robin Hanson, professor of economics at George Mason University, in 2005, after he and I had exchanged emails about Aumann’s agreement theorem.  I’d previously read Robin’s paper about that theorem with Tyler Cowen, which is called Are Disagreements Honest?, and which stands today as one of the most worldview-destabilizing documents I’ve ever read.  In it, Robin and Tyler develop the argument that you can’t (for example) assert that

  1. you believe that extraterrestrial life probably exists,
  2. your best friend believes it probably doesn’t, and
  3. you and your friend are both honest, rational people who understand Bayes’ Theorem; you just have a reasonable difference of opinion about the alien question, presumably rooted in differing life experiences or temperaments.

For if, to borrow a phrase from Carl Sagan, you “wish to pursue the question courageously,” then you need to consider “indexical hypotheticals”: possible worlds where you and your friend swapped identities.  As far as the Bayesian math is concerned, the fact that you’re you, and your friend is your friend, is just one more contingent fact to conditionalize on: something that might affect what private knowledge you have, but that has no bearing on whether extraterrestrial life exists or doesn’t.  Once you grasp this point, so the argument goes, you should be just as troubled by the fact that your friend disagrees with you, as you would be were the disagreement between two different aspects of your self.  To put it differently: there might be a billion flavors of irrationality, but insofar as people can talk to each other and are honest and rational, they should converge on exactly the same conclusions about every matter of fact, even ones as remote-sounding as the existence of extraterrestrial life.

When I read this, my first reaction was that it was absurdly wrong and laughable.  I confess that I was even angry, to see something so counter to everything I knew asserted with such blithe professorial confidence.  Yet, in a theme that will surely be familiar to anyone who’s engaged with Robin or his writing, I struggled to articulate exactly why the argument was wrong.  My first guess was that, just like typical straitjacketed economists, Robin and Tyler had simply forgotten that real humans lack unlimited time to think and converse with each other.  Putting those obvious limitations back into the theory, I felt, would surely reinstate the verdict of common sense, that of course two people can agree to disagree without violating any dictates of rationality.

Now, if only I’d had the benefit of a modern education on Twitter and Facebook, I would’ve known that I could’ve stopped right there, with the first counterargument that popped into my head.  I could’ve posted something like the following on all my social media accounts:

“Hanson and Cowen, typical narrow-minded economists, ludicrously claim that rational agents with common priors can’t agree to disagree. They stupidly ignore the immense communication and computation that reaching agreement would take.  Why are these clowns allowed to teach?  SAD!”

Alas, back in 2003, I hadn’t yet been exposed to the epistemological revolution wrought by the 280-character smackdown, so I got the idea into my head that I actually needed to prove my objection was as devastating as I thought.  So I sat down with pen and paper for some hours—and discovered, to my astonishment, that my objection didn’t work at all.  According to my complexity-theoretic refinement of Aumann’s agreement theorem, which I later published in STOC’2005, two Bayesian agents with a common prior can ensure that they agree to within ±ε about the value of a [0,1]-valued random variable, with probability at least 1-δ over their shared prior, by exchanging only O(1/(δε²)) bits of information—completely independent of how much knowledge the agents have.  My conclusion was that, if Aumann’s Nobel-prizewinning theorem fails to demonstrate the irrationality of real-life disagreements, then it’s not for reasons of computational or communication efficiency; it has to be for other reasons instead.  (See also my talk on this at the SPARC summer camp.)
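
If you’d like to watch the convergence happen, below is a minimal Python sketch of the classic Geanakoplos–Polemarchakis “we can’t disagree forever” dynamic that underlies Aumann’s theorem. To be clear, the setup (a uniform prior over a finite state space, with arbitrary partitions as the agents’ private information) is my own toy example, not the query-efficient protocol from the STOC paper. The agents alternately announce conditional expectations; each announcement becomes common knowledge and shrinks the set of states everyone still considers possible; and after finitely many rounds the estimates coincide exactly:

    import random

    random.seed(0)
    N = 1000
    X = [random.random() for _ in range(N)]  # a [0,1]-valued random variable
    # Arbitrary partitions encoding each agent's private information.
    alice_cells = [set(range(i, min(i + 40, N))) for i in range(0, N, 40)]
    bob_cells = [set(range(i, min(i + 27, N))) for i in range(0, N, 27)]

    def cell_of(cells, w):
        return next(c for c in cells if w in c)

    def expectation(states):
        states = sorted(states)  # fixed order: equal sets give equal floats
        return sum(X[w] for w in states) / len(states)

    true_state = random.randrange(N)
    S = set(range(N))  # states everyone still considers possible
    prev, turn = None, 0
    while True:
        cells = alice_cells if turn % 2 == 0 else bob_cells
        est = expectation(cell_of(cells, true_state) & S)
        print("Alice" if turn % 2 == 0 else "Bob", "announces", round(est, 4))
        if est == prev:
            break  # the agents now agree exactly
        # The announcement is public: discard the states in which the
        # announcer would have said something different.
        S = {w for w in S if expectation(cell_of(cells, w) & S) == est}
        prev, turn = est, turn + 1

With these parameters the two agree after a handful of announcements; the content of the STOC result is, roughly, that a smarter protocol gets the agents to within ±ε of each other while exchanging only O(1/(δε²)) bits, no matter how much they privately know.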

In my and Robin’s conversations—first about Aumann’s theorem, then later about the foundations of quantum mechanics and AI and politics and everything else you can imagine—Robin was unbelievably generous with his time and insights, willing to spend days with me, then a totally unknown postdoc, to get to the bottom of whatever was the dispute at hand.  When I visited Robin at George Mason, I got to meet his wife and kids, and see for myself the almost comical contrast between the conventional nature of his family life and the destabilizing radicalism (some would say near-insanity) of his thinking.  But I’ll say this for Robin: I’ve met many eccentric intellectuals in my life, but I have yet to meet anyone whose curiosity is more genuine than Robin’s, or whose doggedness in following a chain of reasoning is more untouched by considerations of what all the cool people will say about him at the other end.

So if you believe that the life of the mind benefits from a true diversity of opinions, from thinkers who defend positions that actually differ in novel and interesting ways from what everyone else is saying—then no matter how vehemently you disagree with any of his views, Robin seems like the prototype of what you want more of in academia.  To anyone who claims that Robin’s apparent incomprehension of moral taboos, his puzzlement about social norms, are mere affectations masking some sinister Koch-brothers agenda, I reply: I’ve known Robin for years, and while I might be ignorant of many things, on this I know you’re mistaken.  Call him wrongheaded, naïve, tone-deaf, insensitive, even an asshole, but don’t ever accuse him of insincerity or hidden agendas.  Are his open, stated agendas not wild enough for you??

In my view, any assessment of Robin’s abrasive, tone-deaf, and sometimes even offensive intellectual style has to grapple with the fact that, over his career, Robin has originated not one but several hugely important ideas—and his ability to do so strikes me as clearly related to his style, not easily detachable from it.  Most famously, Robin is one of the major developers of prediction markets, and also the inventor of futarchy—a proposed system of government that would harness prediction markets to get well-calibrated assessments of the effects of various policies.  Robin also first articulated the concept of the Great Filter in the evolution of life in our universe.  It’s Great Filter reasoning that tells us, for example, that if we ever discover fossil microbial life on Mars (or worse yet, simple plants and animals on extrasolar planets), then we should be terrified, because it would mean that several solutions to the Fermi paradox that don’t involve civilizations like ours killing themselves off would have been eliminated.  Sure, once you say it, it sounds pretty obvious … but did you think of it?

Earlier this year, Robin published a book together with Kevin Simler, entitled The Elephant In The Brain: Hidden Motives In Everyday Life.  I was happy to provide feedback on the manuscript and then to offer a jacket blurb (though the publisher cut nearly everything I wrote, leaving only that I considered the book “a masterpiece”).  The book’s basic thesis is that a huge fraction of human behavior, possibly the majority of it, is less about its ostensible purpose than about signalling what kind of people we are—and that this has implications for healthcare and education spending, among many other topics.  (Thus, the book covers some of the same ground as The Case Against Education, by Robin’s GMU colleague Bryan Caplan, which I reviewed here.)

I view The Elephant In The Brain as Robin’s finest work so far, though a huge part of the credit surely goes to Kevin Simler.  Robin’s writing style tends to be … spare.  Telegraphic.  He gives you the skeleton of an argument, but leaves it to you to add the flesh, the historical context and real-world examples and caveats.  And he never holds your hand by saying anything like: “I know this is going to sound weird, but…”  Robin doesn’t care how weird it sounds.  With EITB, you get the best of both worlds: Robin’s unique-on-this-planet trains of logic, and Kevin’s considerable gifts at engaging prose.  It’s a powerful combination.

I’m by no means an unqualified Hanson fan.  If you’ve ever felt completely infuriated by Robin—if you’ve ever thought, fine, maybe this guy turned out to be unpopularly right some other times, but this time he’s really just being willfully and even dangerously obtuse—then know that I’ve shared that feeling more than most over the past decade.  I recall in particular a lecture that Robin gave years ago in which he argued—and I apologize to Robin if I mangle a detail, but this was definitely the essence—that even if you grant that anthropogenic climate change will destroy human civilization and most complex ecosystems hundreds of years from now, that’s not necessarily something you should worry about, because if you apply the standard exponential time-discounting that economists apply to everything else, along with reasonable estimates for the monetary value of everything on earth, you discover that all life on earth centuries from now just isn’t worth very much in today’s dollars.

On hearing this, the familiar Hanson-emotions filled me: White-hot, righteous rage.  Zeal to cut Robin down, put him in his place, for the sake of all that’s decent in humanity.  And then … confusion about where exactly his argument fails.

For whatever it’s worth, I’d probably say today that Robin is wrong on this, because economists’ exponential discounting implicitly assumes that civilization’s remarkable progress of the last few centuries will continue unabated, which is the very point that the premise of the exercise denies.  But notice what I can’t say: “shut up Robin, we’ve all heard this right-wing libertarian nonsense before.”  Even when Robin spouts nonsense, it’s often nonsense that no one has heard before, brought back from intellectual continents that wouldn’t be on the map had Robin not existed.


So why am I writing about Robin now?  If you haven’t been living in a non-wifi-equipped cave, you probably know the answer.

A week ago, alas, Robin blogged his confusion about why the people most concerned about inequalities of wealth never seem to be concerned about inequalities of romantic and sexual fulfillment—even though, in other contexts, those same people would probably affirm that relationships are much more important to their personal happiness than wealth is.  As a predictable result of his prodding this angriest hornet’s-nest on the planet, Robin has now been pilloried all over the Internet, in terms that make the attacks on me three years ago over the comment-171 affair look tender and kind by comparison.  The attacks included a Slate hit-piece entitled “Is Robin Hanson America’s Creepiest Economist?” (though see also this in-depth followup interview), a Wonkette post entitled “This Week In Garbage Men: Incels Sympathizers [sic] Make Case for Redistribution of Vaginas,” and much more.  Particularly on Twitter, Robin’s attackers have tended to use floridly profane language, and to target his physical appearance and assumed sexual proclivities and frustrations; some call for his firing or death.  I won’t link to the stuff; you can find it.

Interestingly, many of the Twitter attacks assume that Robin himself must be an angry “incel” (short for “involuntary celibate”), since who else could treat that particular form of human suffering as worthy of reply?  Few seem to have done the 10-second research to learn that, in reality, Robin is a happily married father of two.

I noticed the same strange phenomenon during the comment-171 affair: commentators on both left and right wanted to make me the poster child for “incels,” with a few offering me advice, many swearing they would’ve guessed it immediately from my photograph.  People apparently didn’t read even a few paragraphs into my story—to the part where, once I finally acquired some of the norms that mainstream culture refuses to tell people, I enjoyed a normal or even good dating life, eventually marrying a brilliant fellow theoretical computer scientist, with whom I started raising a rambunctious daughter (who’s now 5, and who’s been joined by our 1-year-old son).  If not for this happy ending, I too might have entertained my critics’ elaborate theories about my refusal to accept my biological inferiority, my simply having lost the genetic lottery (ability to do quantum computing research notwithstanding).  But what can one do faced with the facts?


For the record: I think that Robin should never, ever have made this comparison, and I wish he’d apologize for it now.  Had he asked my advice, I would’ve screamed “DON’T DO IT” at the top of my lungs.  I once contemplated such a comparison myself—and even though it was many years ago, in the depths of a terrifying relapse of the suicidal depression that had characterized much of my life, I still count it among my greatest regrets.  I hereby renounce and disown the comparison forever.  And I beg forgiveness from anyone who was hurt or offended by it—or for that matter, by anything else I ever said, on this blog or elsewhere.

Indeed, let me go further: if you were ever hurt or offended by anything I said, and if I can make partial restitution to you by taking some time to field your questions about quantum computing and information, or math, CS, and physics more generally, or academic career advice, or anything else where I’m said to know something, please shoot me an email.  I’m also open to donating to your favorite charity.

My view is this: the world in which a comparison between the sufferings of the romantically and the monetarily impoverished could increase normal people’s understanding of the former, is so different from our world as to be nearly unrecognizable.  To say that this comparison is outside the Overton window is a comic understatement: it’s outside the Overton galaxy.  Trying to have the conversation that Robin wanted to have on social media, is a little like trying to have a conversation about microaggressions in 1830s Alabama.  At first, your listeners will simply be confused—but their confusion will be highly unstable, like a Higgs boson, and will decay in about 10^-22 seconds into righteous rage.

For experience shows that, if you even breathe a phrase like “the inequality of romantic and sexual fulfillment,” no one who isn’t weird in certain ways common in the hard sciences (e.g., being on the autism spectrum) will be able to parse you as saying anything other than that sex ought to be “redistributed” by the government in the same way that money is redistributed, which in turn suggests a dystopian horror scenario where women are treated like property, married against their will, and raped.  And it won’t help if you shout from the rooftops that you want nothing of this kind, oppose it as vehemently as your listeners do.  For, not knowing what else you could mean, the average person will continue to impose the nightmare scenario on anything you say, and will add evasiveness and dishonesty to the already severe charges against you.

Before going any further in this post, let me now say that any male who wants to call himself my ideological ally ought to agree to the following statement.

I hold the bodily autonomy of women—the principle that women are freely-willed agents rather than the chattel they were treated as for too much of human history; that they, not their fathers or husbands or anyone else, are the sole rulers of their bodies; and that they must never under any circumstances be touched without their consent—to be my Zeroth Commandment, the foundation-stone of my moral worldview, the starting point of every action I take and every thought I think.  This principle of female bodily autonomy, for me, deserves to be chiseled onto tablets of sapphire, placed in a golden ark adorned with winged cherubim sitting atop a pedestal inside the Holy of Holies in a temple on Mount Moriah.

This, or something close to it, is really what I believe.  And I advise any lonely young male nerd who might be reading this blog to commit to the Zeroth Commandment as well, and to the precepts of feminism more broadly.

To such a nerd, I say: yes, throughout your life you’ll encounter many men and women who will despise you for being different, in ways that you’re either powerless to change, or could change only at the cost of renouncing everything you are.  Yet, far from excusing any moral lapses on your part, this hatred simply means that you need to adhere to a higher moral standard than most people.  For whenever you stray even slightly from the path of righteousness, the people who detest nerds will leap excitedly, seeing irrefutable proof of all their prejudices.  Do not grant them that victory.  Do not create a Shanda fur die Normies.

I wish I believed in a God who could grant you some kind of eternal salvation, in return for adhering to a higher moral standard throughout your life, and getting in return at best grudging toleration, as well as lectures about your feminist failings by guys who’ve obeyed the Zeroth Commandment about a thousandth as scrupulously as you have.  As an atheist, though, the most I can offer you is that you can probably understand the proof of Cantor’s theorem, while most of those who despise you probably can’t.  And also: as impossible as it might seem right now, there are ways that even you can pursue the ordinary, non-intellectual kinds of happiness in life, and there will be many individuals along the way ready to help you: the ones who remember their humanity and forget their ideology.  I wish you the best.


Amid the many vitriolic responses to Robin—fanned, it must be admitted, by Robin’s own refusal to cede any ground to his critics, or to modulate his style or tone in the slightest—the one striking outlier was a New York Times essay by Ross Douthat.  This essay, which has itself now been widely panned, uses Robin as an example of how, in Douthat’s words, “[s]ometimes the extremists and radicals and weirdos see the world more clearly than the respectable and moderate and sane.”  Douthat draws an interesting parallel between Robin and the leftist feminist philosopher Amia Srinivasan, who recently published a beautifully-written essay in the London Review of Books entitled Does anyone have the right to sex?  In analyzing that question, Srinivasan begins by discussing male “incels,” but then shifts her attention to far more sympathetic cases: women and men suffering severe physical or mental disabilities (and who, in some countries, can already hire sexual surrogates with government support); who were disfigured by accidents; who are treated as undesirable for racist reasons.  Let me quote from her conclusion:

The question, then, is how to dwell in the ambivalent place where we acknowledge that no one is obligated to desire anyone else, that no one has a right to be desired, but also that who is desired and who isn’t is a political question, a question usually answered by more general patterns of domination and exclusion … the radical self-love movements among black, fat and disabled women do ask us to treat our sexual preferences as less than perfectly fixed. ‘Black is beautiful’ and ‘Big is beautiful’ are not just slogans of empowerment, but proposals for a revaluation of our values … The question posed by radical self-love movements is not whether there is a right to sex (there isn’t), but whether there is a duty to transfigure, as best we can, our desires.

All over social media, there are howls of outrage that Douthat would dare to mention Srinivasan’s essay, which is wise and nuanced and humane, in the same breath as the gross, creepy, entitled rantings of Robin Hanson.  I would say: grant that Srinivasan and Hanson express themselves extremely differently, and also that Srinivasan is a trillion times better than Hanson at anticipating and managing her readers’ reactions.  Still, on the merits, is there any relevant difference between the two cases beyond: “undesirability” of the disabled, fat, and trans should be critically examined and interrogated, because those people are objects of progressive sympathy; whereas “undesirability” of nerdy white and Asian males should be taken as a brute fact or even celebrated, because those people are objects of progressive contempt?

To be fair, a Google search also turns up progressives who, dissenting from the above consensus, excoriate Srinivasan for her foray, however thoughtful, into taboo territory.  As best I can tell, the dissenters’ argument runs like so: as much as it might pain us, we must not show any compassion to women and men who are suicidally lonely and celibate by virtue of being severely disabled, disfigured, trans, or victims of racism.  For if we did, then consistency might eventually force us to show compassion to white male nerds as well.


Here’s the central point that I think Robin failed to understand: society, today, is not on board even with the minimal claim that the suicidal suffering of men left behind by the sexual revolution really exists—or, if it does, that it matters in the slightest or deserves any sympathy or acknowledgment whatsoever.  Indeed, the men in question pretty much need to be demonized as entitled losers and creeps, because if they weren’t, then sympathy for them—at least, for those among them who are friends, coworkers, children, siblings—might become hard to prevent.  In any event, it seems to me that until we as a society resolve the preliminary question, of whether to recognize a certain category of suffering as real, there’s no point even discussing how policy or culture might help to address the suffering, consistently with the Zeroth Commandment.

Seen in this light, Robin is a bit like the people who email me every week imagining they can prove P≠NP, yet who can’t even prove astronomically easier statements, even ones that are already known.  When trying to scale an intellectual Everest, you might as well start with the weakest statement that’s already unproven or non-obvious or controversial.

So where are we today?  Within the current Overton window, a perfectly appropriate response to suicidal loneliness and depression among the “privileged” (i.e., straight, able-bodied, well-educated white or Asian men) seems to be: “just kill yourselves already, you worthless cishet scum, and remove your garbage DNA from the gene pool.”  If you think I’m exaggerating, I beseech you to check for yourself on Twitter.  I predict you’ll find that and much worse, wildly upvoted, by people who probably go to sleep every night congratulating themselves for their progressivism, their egalitarianism, and—of course—their burning hatred for anything that smacks of eugenics.

A few days ago, Ellen Pao, the influential former CEO of Reddit, tweeted:

CEOs of big tech companies: You almost certainly have incels as employees. What are you going to do about it?

Thankfully, even many leftists reacted with horror to Pao’s profoundly illiberal question.  They wondered about the logistics she had in mind: does she want tech companies to spy on their (straight, male) employees’ sex lives, or lack thereof?  If any are discovered who are (1) celibate and (2) bitter at the universe about it, then will it be an adequate defense against firing if they’re also feminists, who condemn misogyny and violence and affirm the Zeroth Commandment?  Is it not enough that these men were permanently denied the third level of Maslow’s hierarchy of needs (the one right above physical safety); must they also be denied careers as a result?  And is this supposed to prevent their radicalization?

For me, the scariest part of Pao’s proposal is that, whatever in this field is on the leftmost fringe of the Overton window today, experience suggests we’ll find it smack in the center a decade from now.  So picture a future wherein, if you don’t support rounding up and firing your company’s romantically frustrated—i.e., the policy of “if you don’t get laid, you don’t get paid”—then that itself is a shockingly reactionary attitude, and grounds for your own dismissal.

Some people might defend Pao by pointing out that she was only asking a question, not proposing a specific policy.  But then, the same is true of Robin Hanson.


Why is it so politically difficult even to show empathy toward socially awkward, romantically challenged men—to say to them, “look, I don’t know what if anything can be done about your problem, but yeah, the sheer cosmic arbitrariness of it kind of sucks, and I sympathize with you”?  Why do enlightened progressives, if they do offer such words of comfort to their “incel” friends, seem to feel about it the same way Huck Finn did, at the pivotal moment in Western literature when he decides to help his friend Jim escape from slavery—i.e., not beaming with pride over his own moral courage, but ashamed of himself, and resigned that he’ll burn in hell for the sake of a mere personal friendship?

This is a puzzle, but I think I might know the answer.  We begin with the observation that virtually every news article, every thinkpiece, every blog post about “incels,” fronts contemptible mass murderers like Elliot Rodger and Alek Minassian, who sought bloody revenge on a world that failed to provide them the women to whom they felt entitled; as well as various Internet forums (many recently shut down) where this subhuman scum was celebrated by other scum.

The question is: why don’t people look at the broader picture, as they’ve learned to do in so many other cases?  In other words, why don’t they say:

  • There really do exist extremist Muslims, who bomb schools and buses, or cheer and pass out candies when that happens, and who wish to put the entire world under Sharia at the point of the sword.  Fortunately, the extremists are outnumbered by hundreds of millions of reasonable Muslims, with whom anyone, even a Zionist Jew like me, can have a friendly conversation in which we discuss our respective cultures’ grievances and how they might be addressed in a win-win manner.  (My conversations with Iranian friends sometimes end with us musing that, if only they made them Ayatollah and me Israeli Prime Minister, we could sign a peace accord next week, then go out for kebabs and babaganoush.)
  • There really are extremist leftists—Marxist-Leninist-Maoist-whateverists—who smash store windows, kill people (or did, in the 60s), and won’t be satisfied by anything short of the total abolition of private property and the heads of the capitalists lining the streets on pikes.  But they’re vastly outnumbered by the moderate progressives, like me, who are less about proletarian revolution than they are about universal healthcare, federal investment in science and technology, a carbon tax, separation of church and state, and stronger protection of national parks.
  • In exactly the same way, there are “incel extremists,” like Rodger or Minassian, spiteful losers who go on killing sprees because society didn’t give them the sex they were “owed.”  But they’re outnumbered by tens of millions of decent, peaceful people who could reasonably be called “incels”—those who desperately want romantic relationships but are unable to achieve them, because of extreme shyness, poor social skills, tics, autism-spectrum traits, lack of conventional attractiveness, bullying, childhood traumas, etc.—yet who’d never hurt a fly.  These moderates need not be “losers” in all aspects of life: many have fulfilling careers and volunteer and give to charity and love their nieces and nephews, some are world-renowned scientists and writers.  For many of the moderates, it might be true that recent cultural shifts exacerbated their problems; that an unlucky genetic dice-roll “optimized” them for a world that no longer exists.  These people deserve the sympathy and support of the more fortunate among us; they constitute a political bloc entitled to advocate for its interests, as other blocs do; and all decent people should care about how we might help them, consistently with the Zeroth Commandment.

The puzzle, again, is: why doesn’t anyone say this?

And I think the answer is simply that no one ever hears from “moderate incels.”  And the reason, in turn, becomes obvious the instant you think about it.  Would you volunteer to march at the front of the Lifelong Celibacy Awareness Parade?  Or to be identified by name as the Vice President of the League of Peaceful and Moderate Incels?  Would you accept such a social death warrant?  It takes an individual with extraordinary moral courage, such as Scott Alexander, even to write anything whatsoever about this issue that tries to understand or help the sufferers rather than condemn them.  For this reason—i.e., purely, 100% a selection effect, nothing more—the only times the wider world ever hears anything about “incels” is when some despicable lunatic like Rodger or Minassian snaps and murders the innocent.  You might call this the worst PR problem in the history of the world.


So what’s the solution?  While I’m not a Christian, I find that Jesus’ prescription of universal compassion has a great deal to recommend it here—applied liberally, like suntan lotion, to every corner of the bitter “SJW vs. incel” online debate.

The usual stereotype of nerds is that, while we might be good at memorizing facts or proving theorems or coding up filesystems, we’re horrendously deficient in empathy and compassion, constantly wanting to reduce human emotions to numbers in spreadsheets or something.  As I’ve remarked elsewhere, I’ve scarcely encountered any stereotype that rings falser to my experience.  In my younger, depressed phase, when I was metaphorically hanging on to life by my fingernails, it was nerds and social misfits who offered me their hands up, while many of the “normal, well-adjusted, socially competent” people gleefully stepped on my fingers.

But my aspiration is not merely that we nerds can do just as well at compassion as those who hate us.  Rather, I hope we can do better.  This isn’t actually such an ambitious goal.  To achieve it, all we need to do is show universal, Jesus-style compassion, to politically favored and disfavored groups alike.

To me that means: compassion for the woman facing sexual harassment, or simply quizzical glances that wonder what she thinks she’s doing pursuing a PhD in physics.  Compassion for the cancer patient, for the bereaved parent, for the victim of famine.  Compassion for the undocumented immigrant facing deportation.  Compassion for the LGBT man or woman dealing with self-doubts, ridicule, and abuse.  Compassion for the nerdy male facing suicidal depression because modern dating norms, combined with his own shyness and fear of rule-breaking, have left him unable to pursue romance or love.  Compassion for the woman who feels like an ugly, overweight, unlovable freak who no one will ask on dates.  Compassion for the African-American victim of police brutality.  Compassion even for the pedophile who’d sooner kill himself than hurt a child, but who’s been given no support for curing or managing his condition.  This is what I advocate.  This is my platform.

If I ever decided to believe the portrait of me painted by Arthur Chu, or the other anti-Aaronson Twitter warriors, then I hope I’d have the moral courage to complete their unstated modus ponens, by quietly swallowing a bottle of sleeping pills.  After all, Chu’s vision of the ideal future seems to have no more room for me in it than Eichmann’s did.  But the paradoxical corollary is that, every time I remind myself why I think Chu is wrong, it feels like a splendorous affirmation of life itself.  I affirm my love for my wife and children and parents and brother, my bonds with my friends around the world, the thrill of tackling a new research problem and sharing my progress with colleagues, the joy of mentoring students of every background and religion and gender identity, the smell of fresh-baked soft pretzels and the beauty of the full moon over the Mediterranean.  If I had to find pearls in manure, I’d say: with their every attack, the people who hate me give me a brand-new opportunity to choose life over death, and better yet to choose compassion over hatred—even compassion for the haters themselves.

(Far be it from me to psychoanalyze him, as he constantly does to me, but Chu’s unremitting viciousness doesn’t strike me as coming from a place of any great happiness with his life.  So I say: may even Mr. Chu find whatever he’s looking for.  And while his utopia might have no place for me, I’m determined that mine should have a place for him—even if it’s just playing Jeopardy! and jumping around to find the Daily Doubles.)

It’s a commonplace that sometimes, the only way you can get a transformative emotional experience—like awe at watching the first humans walk on the moon, or joy at reuniting with a loved one after a transatlantic flight—is on top of a mountain of coldly rational engineering and planning.  But the current Robin Hanson affair reminds us that the converse is true as well.  I.e., the only way we can have the sort of austere, logical, norm-flouting conversations about the social world that Robin has been seeking to have for decades, without the whole thing exploding in thermonuclear anger, is on top of a mountain of empathy and compassion.  So let’s start building that mountain.


Endnotes. Already, in my mind’s eye, I can see the Twitter warriors copying and sharing whichever sentence of this post angered them the most, using it as proof that I’m some lunatic who should never be listened to about anything. I’m practically on my hands and knees begging you here: show that my fears are unjustified.  Respond, by all means, but respond to the entirety of what I had to say.

I welcome comments, so long as they’re written in a spirit of kindness and mutual respect. But because writing this post was emotionally and spiritually draining for me–not to mention draining in, you know, time—I hope readers won’t mind if I spend a day or two away, with my wife and kids and my research, before participating in the comments myself.


Update (May 7). Numerous commenters have successfully convinced me that the word “incel,” though it literally just means “involuntary celibate,” and was in fact coined by a woman to describe her own experience, has been permanently disgraced by its association with violent misogynists and their online fan clubs.  It will never again regain its original meaning, any more than “Adolf” will ever again be just a name; nor will one be able to discuss “moderate incels” as distinct from the extremist kind.  People of conscience will need to be extremely vigilant against motte-and-bailey tactics—wherein society’s opinion-makers will express their desire for all “incels” to be silenced or fired or removed from the gene pool or whatever, obviously having in mind all romantically frustrated male nerds (all of whom they despise), and will fall back when challenged (and only when challenged) on the defense that they only meant the violence-loving misogynists.  For those of us motivated by compassion rather than hatred, though, we need another word.  I suggest the older term “love-shy,” coined by Brian Gilmartin in his book on the subject.

Meanwhile, be sure to check out this comment by “Sniffnoy” for many insightful criticisms of this post, most of which I endorse.

What I believe II (ft. Sarah Constantin and Stacey Jeffery)

Tuesday, August 15th, 2017

Unrelated Update: To everyone who keeps asking me about the “new” P≠NP proof: I’d again bet $200,000 that the paper won’t stand, except that the last time I tried that, it didn’t achieve its purpose, which was to get people to stop asking me about it. So: please stop asking, and if the thing hasn’t been refuted by the end of the week, you can come back and tell me I was a closed-minded fool.


In my post “The Kolmogorov Option,” I tried to step back from current controversies, and use history to reflect on the broader question of how nerds should behave when their penchant for speaking unpopular truths collides head-on with their desire to be kind and decent and charitable, and to be judged as such by their culture.  I was gratified to get positive feedback about this approach from men and women all over the ideological spectrum.

However, a few people who I like and respect accused me of “dogwhistling.” They warned, in particular, that if I wouldn’t just come out and say what I thought about the James Damore Google memo thing, then people would assume the very worst—even though, of course, my friends themselves knew better.

So in this post, I’ll come out and say what I think.  But first, I’ll do something even better: I’ll hand the podium over to two friends, Sarah Constantin and Stacey Jeffery, both of whom were kind enough to email me detailed thoughts in response to my Kolmogorov post.


Sarah Constantin completed her PhD in math at Yale. I don’t think I’ve met her in person yet, but we have a huge number of mutual friends in the so-called “rationalist community.”  Whenever Sarah emails me about something I’ve written, I pay extremely close attention, because I have yet to read a single thing by her that wasn’t full of insight and good sense.  I strongly urge anyone who likes her beautiful essay below to check out her blog, which is called Otium.

Sarah Constantin’s Commentary:

I’ve had a women-in-STEM essay brewing in me for years, but I’ve been reluctant to actually write publicly on the topic for fear of stirring up a firestorm of controversy.  On the other hand, we seem to be at a cultural inflection point on the issue, especially in the wake of the leaked Google memo, and other people are already scared to speak out, so I think it’s past time for me to put my name on the line, and Scott has graciously provided me a platform to do so.

I’m a woman in tech myself. I’m a data scientist doing machine learning for drug discovery at Recursion Pharmaceuticals, and before that I was a data scientist at Palantir. Before that I was a woman in math — I got my PhD from Yale, studying applied harmonic analysis. I’ve been in this world all my adult life, and I obviously don’t believe my gender makes me unfit to do the work.

I’m also not under any misapprehension that I’m some sort of exception. I’ve been mentored by Ingrid Daubechies and Maryam Mirzakhani (the first female Fields Medalist, who died tragically young last month).  I’ve been lucky enough to work with women who are far, far better than me.  There are a lot of remarkable women in math and computer science — women just aren’t the majority in those fields. But “not the majority” doesn’t mean “rare” or “unknown.”

I even think diversity programs can be worthwhile. I went to the Institute for Advanced Study’s Women and Math Program, which would be an excellent graduate summer school even if it weren’t all-female, and taught at its sister program for high school girls, which likewise is a great math camp independent of the gender angle. There’s a certain magic, if you’re in a male-dominated field, of once in a while being in a room full of women doing math, and I hope that everybody gets to have that experience once.

But (you knew the “but” was coming), I think the Google memo was largely correct, and the way people conventionally talk about women in tech is wrong.

Let’s look at some of his claims. From the beginning of the memo:

  • Google’s political bias has equated the freedom from offense with psychological safety, but shaming into silence is the antithesis of psychological safety.
  • This silencing has created an ideological echo chamber where some ideas are too sacred to be honestly discussed.
  • The lack of discussion fosters the most extreme and authoritarian elements of this ideology.
  • Extreme: all disparities in representation are due to oppression
  • Authoritarian: we should discriminate to correct for this oppression

Okay, so there’s a pervasive assumption that any deviation from 50% representation of women in technical jobs is a.) due to oppression, and b.) ought to be corrected by differential hiring practices. I think it is basically true that people widely believe this, and that people can lose their jobs for openly contradicting it (as James Damore, the author of the memo, did).  I have heard people I work with advocating hiring quotas for women (i.e. explicitly earmarking a number of jobs for women candidates only).  It’s not a strawman.

Then, Damore disagrees with this assumption:

  • Differences in distributions of traits between men and women may in part explain why we don’t have 50% representation of women in tech and leadership. Discrimination to reach equal representation is unfair, divisive, and bad for business.

Again, I agree with Damore. Note that this doesn’t mean that I must believe that sexism against women isn’t real and important (I’ve heard enough horror stories to be confident that some work environments are toxic to women).  It doesn’t even mean that I must be certain that the different rates of men and women in technical fields are due to genetics.  I’m very far from certain, and I’m not an expert in psychology. I don’t think I can do justice to the science in this post, so I’m not going to cover the research literature.

But I do think it’s irresponsible to assume a priori that there are no innate sex differences that might explain what we see.  It’s an empirical matter, and a topic for research, not dogma.

Moreover, I think discrimination on the basis of sex to reach equal representation is unfair and unproductive.  It’s unfair, because it’s not meritocratic.  You’re not choosing the best human for the job regardless of gender.

I think women might actually benefit from companies giving genuine meritocracy a chance. “Blind” auditions (in which the evaluator doesn’t see the performer) gave women a better chance of landing orchestra jobs; apparently, orchestras were prejudiced against female musicians, and the blinding canceled out that prejudice. Google’s own research has actually shown that the single best predictor of work performance is a work sample — testing candidates with a small project similar to what they’d do on the job. Work samples are easy to anonymize to reduce gender bias, and they’re more effective than traditional interviews, where split-second first impressions usually decide who gets hired, but don’t correlate at all with job performance. A number of tech companies have switched to work samples as part of their interview process.  I used work samples myself when I was hiring for a startup, just because they seemed more accurate at predicting who’d be good at the job; entirely without intending to, I got a 50% gender ratio.  If you want to reduce gender bias in tech, it’s worth at least considering blinded hiring via work samples.

Moreover, thinking about “representation” in science and technology reflects underlying assumptions that I think are quite dangerous.

You expect interest groups to squabble over who gets a piece of the federal budget. In politics, people will band together in blocs, and try to get the biggest piece of the spoils they can.  “Women should get such-and-such a percent of tech jobs” sounds precisely like this kind of politicking; women are assumed to be a unified bloc who will vote together, and the focus is on what size chunk they can negotiate for themselves. If a tech job (or a university position) were a cushy sinecure, a ticket to privilege, and nothing more, you might reasonably ask “how come some people get more goodies than others? Isn’t meritocracy just an excuse to restrict the goodies to your preferred group?”

Again, this is not a strawman. Here’s one Vox response to the memo stating explicitly that she believes women are a unified bloc:

The manifesto’s sleight-of-hand delineation between “women, on average” and the actual living, breathing women who have had to work alongside this guy failed to reassure many of those women — and failed to reassure me. That’s because the manifesto’s author overestimated the extent to which women are willing to be turned against their own gender.

Speaking for myself, it doesn’t matter to me how soothingly a man coos that I’m not like most women, when those coos are accompanied by misogyny against most women. I am a woman. I do not stop being one during the parts of the day when I am practicing my craft. There can be no realistic chance of individual comfort for me in an environment where others in my demographic categories (or, really, any protected demographic categories) are subjected to skepticism and condescension.

She can’t be comfortable unless everybody in any protected demographic category — note that this is a legal, governmental category — is given the benefit of the doubt?  That’s a pretty collectivist commitment!

Or, look at Piper Harron, an assistant professor in math who blogged on the American Mathematical Society’s website that universities should simply “stop hiring white cis men”, and explicitly says “If you are on a hiring committee, and you are looking at applicants and you see a stellar white male applicant, think long and hard about whether your department needs another white man. You are not hiring a researching robot who will output papers from a dark closet. You are hiring an educator, a role model, a spokesperson, an advisor, a committee person … There is no objectivity. There is no meritocracy.”

Piper Harron reflects an extreme, of course, but she’s explicitly saying, on America’s major communication channel for and by mathematicians, that whether you get to work in math should not be based on whether you’re actually good at math. For her, it’s all politics.  Life itself is political, and therefore a zero-sum power struggle between groups.  

But most of us, male or female, didn’t fall in love with science and technology for that. Science is the mission to explore and understand our universe. Technology is the project of expanding human power to shape that universe. What we do towards those goals will live longer than any “protected demographic category”, any nation, any civilization.  We know how the Babylonians mapped the stars.

Women deserve an equal chance at a berth on the journey of exploration not because they form a political bloc but because some of them are discoverers and can contribute to the human mission.

Maybe, in a world corrupted by rent-seeking, the majority of well-paying jobs have some element of unearned privilege; perhaps almost all of us got at least part of our salaries by indirectly expropriating someone who had as good a right to it as us.

But that’s not a good thing, and that’s not what we hope for science and engineering to be, and I truly believe that this is not the inevitable fate of the human race — that we can only squabble over scraps, and never create.  

I’ve seen creation, and I’ve seen discovery. I know they’re real.

I care a lot more about whether my company achieves its goal of curing 100 rare diseases in 10 years than about the demographic makeup of our team.  We have an actual mission; we are trying to do something beyond collecting spoils.  

Do I rely on brilliant work by other women every day? I do. My respect for myself and my female colleagues is not incompatible with primarily caring about the mission.

Am I “turning against my own gender” because I see women as individuals first? I don’t think so. We’re half the human race, for Pete’s sake! We’re diverse. We disagree. We’re human.

When you think of “women-in-STEM” as a talking point on a political agenda, you mention Ada Lovelace and Grace Hopper in passing, and move on to talking about quotas.  When you think of women as individuals, you start to notice how many genuinely foundational advances were made by women — just in my own field of machine learning, Adele Cutler co-invented random forests, Corinna Cortes co-invented support vector machines, and Fei-Fei Li created the famous ImageNet benchmark dataset that started a revolution in image recognition.

As a child, my favorite book was Carl Sagan’s Contact, a novel about Ellie Arroway, an astronomer loosely based on his wife Ann Druyan. The name is not an accident; like the title character in Sinclair Lewis’ Arrowsmith, Ellie is a truth-seeking scientist who battles corruption, anti-intellectualism, and blind prejudice.  Sexism is one of the challenges she faces, but the essence of her life is about wonder and curiosity. She’s what I’ve always tried to become.

I hope that, in seeking to encourage the world’s Ellies in science and technology, we remember why we’re doing that in the first place. I hope we remember humans are explorers.


Now let’s hear from another friend who wrote to me recently, and who has a slightly different take.  Stacey Jeffery is a quantum computing theorist at one of my favorite research centers, CWI in Amsterdam.  She completed her PhD at the University of Waterloo, and has done wonderful work on quantum query complexity and other topics close to my heart.  When I was being viciously attacked in the comment-171 affair, Stacey was one of the first people to send me a note of support, and I’ve never forgotten it.

Stacey Jeffery’s Commentary

I don’t think Google was right to fire Damore. This makes me a minority among the people with whom I’ve discussed this issue.  Hopefully some people will come out in the comments in support of the other position, so that it’s not just me presenting that view; but the main argument I encountered was that what he said just sounded way too sexist for Google to put up with.  I agree with part of that: it did sound sexist to me.  In fact, it also sounded racist to me. But that’s not because he necessarily said anything actually sexist or actually racist, but because he said the kinds of things that you usually only hear from sexist people—and in particular, the kind of sexist people who are also racist.  For those reasons, I’m very unlikely to pursue further interaction with a person who says these kinds of things; but I think firing him for what he said between the lines sets a very bad precedent.  It seems to me he was fired for associating himself with the wrong ideas, and it does feel a bit like certain subjects are not up for rational discussion.  If Google wants an open environment, where employees can feel safe discussing company policy, I don’t think this contributes to that.  And if they want their employees, and the world, to think that they aim for diversity because it’s the most rational course of action to achieve their overall objectives—rather than because it serves some secret agenda, like maintaining a PC public image—then I don’t think they’ve served that cause either.  Personally, this last part irritates me the most, because I feel they’ve damaged the image of a cause I feel strongly about.

My position is independent of the validity of Damore’s attempt at scientific argument, which is outside my area of expertise.  I personally don’t think it’s very productive for non-social-scientists to take authoritative positions on social science questions, especially ones that appear to be controversial within the field (though I say this as a layperson).  That may include some of the other commentary in this blog post, which I have not yet read, and might even extend to Scott’s decision to comment on this issue at all (but that bridge was crossed in the previous blog post).  However, I think one reason many of us do this is that the burden of solving the problem of too few women in STEM is often placed on us.  Some people in STEM feel they are blamed for not being welcoming enough to women (in fact, in my specific field, my experience is that the majority of people are very sympathetic).  Many scientific funding applications even ask applicants how they plan to address the issue of diversity—as if they should be the ones to come up with a solution to a difficult problem that nobody knows the answer to, and that isn’t even within their expertise.  So it’s not surprising when these same people start to think about, and form opinions on, these social science questions.  Obviously, those of us working in STEM have valuable insight into how we might encourage women to pursue STEM careers, and we should be pushed to think about this; but we don’t have all the answers (and maybe we should remember that the next time we consider authoring an authoritative memo on the subject).


Scott’s Mansplaining Commentary

I’m incredibly grateful to Sarah and Stacey for sharing their views.  Now it’s time for me to mansplain my own thoughts in light of what they said.  Let me start with a seven-point creed.

1. I believe that science and engineering, both in academia and in industry, benefit enormously from contributions from people of every ethnic background and gender identity.  This sort of university-president-style banality shouldn’t even need to be said, but in a world where the President of the US criticizes neo-Nazis only under extreme pressure from his own party, I suppose it does.

2. I believe that there’s no noticeable difference in average ability between men and women in STEM fields—or if there’s some small disparity, for all I know the advantage goes to women. I have enough Sheldon Cooper in me that, if this hadn’t been my experience, I’d probably let it slip that it hadn’t been, but it has been.  When I taught 6.045 (undergrad computability and complexity) at MIT, women were only 20% or so of the students, but for whatever reasons they were wildly overrepresented among the top students.

3. I believe that women in STEM face obstacles that men don’t.  These range from the sheer awkwardness of sometimes being the only woman in a room full of guys, to challenges related to pregnancy and childcare, to actual belittlement and harassment.  Note that, even if men in STEM fields are no more sexist on average than men in other fields—or are less sexist, as one might expect from their generally socially liberal views and attitudes—the mere fact of the gender imbalance means that women in STEM will have many more opportunities to be exposed to whatever sexists there are.  This puts a special burden on us to create a welcoming environment for women.

4. Given that we know that gender gaps in interest and inclination appear early in life, I believe in doing anything we can to encourage girls’ interest in STEM fields.  Trust me, my four-year-old daughter Lily wishes I didn’t believe so fervently in working with her every day on her math skills.

5. I believe that gender diversity is valuable in itself.  It’s just nicer, for men and women alike, to have a work environment with many people of both sexes—especially if (as is often the case in STEM) so much of our lives revolves around our work.  I think that affirmative action for women, women-only scholarships and conferences, and other current efforts to improve gender diversity can all be defended and supported on that ground alone.

6. I believe that John Stuart Mill’s The Subjection of Women is one of the masterpieces of history, possibly the highest pinnacle that moral philosophy has ever reached.  Everyone should read it carefully and reflect on it if they haven’t already.

7. I believe it’s a tragedy that the current holder of the US presidency is a confessed sexual predator, who’s full of contempt not merely for feminism, but for essentially every worthwhile human value. I believe those of us on the “pro-Enlightenment side” now face the historic burden of banding together to stop this thug by every legal and peaceful means available. I believe that, whenever the “good guys” tear each other down in internecine warfare—e.g. “nerds vs. feminists”—it represents a wasted opportunity and an unearned victory for the enemies of progress.

OK, now for the part that might blow some people’s minds.  I hold that every single belief above is compatible with what James Damore wrote in his now-infamous memo—at least, if we’re talking about the actual words in it.  In some cases, Damore even makes the above points himself.  In particular, there’s nothing in what he wrote about female Googlers being less qualified on average than male Googlers, or being too neurotic to code, or anything like that: the question at hand is just why there are fewer women in these positions, and that in turn becomes a question about why there are fewer women earlier in the CS pipeline.  Reasonable people need not agree about the answers to those questions, or regard them as known or obvious, to see that the failure to make this one elementary distinction, between quality and quantity, already condemns 95% of Damore’s attackers as not having read or understood what he wrote.

Let that be the measure of just how terrifyingly efficient the social-media outrage machine has become at twisting its victims’ words to fit a clickbait narrative—a phenomenon with which I happen to be personally acquainted.  Strikingly, it seems not to make the slightest difference if (as in this case) the original source text is easily available to everyone.

Still, while most coverage of Damore’s memo was depressing in its monotonous incomprehension, dissent was by no means confined to the right-wingers eager to recruit Damore to their side.  Peter Singer—the legendary leftist moral philosopher, and someone whose fearlessness and consistency I’ve always admired whether I’ve agreed with him or not—wrote a powerful condemnation of Google’s decision to fire Damore.  Scott Alexander was brilliant as usual in picking apart bad arguments.  Megan McArdle drew on her experiences to illustrate some of Damore’s contentions.  Steven Pinker tweeted that Damore’s firing “makes [the] job of anti-Trumpists harder.”

Like Peter Singer, and also like Sarah Constantin and Stacey Jeffery above, I have no plans to take any position on biological differences in male and female inclinations and cognitive styles, and what role (if any) such differences might play in 80% of Google engineers being male—or, for that matter, what role they might play in 80% of graduating veterinarians now being female, or other striking gender gaps.  I decline to take a position not only because I’m not an expert, but also because, as Singer says, doing so isn’t necessary to reach the right verdict about Damore’s firing.  It suffices to note that the basic thesis being discussed—namely, that natural selection doesn’t stop at the neck, and that it’s perfectly plausible that it acted differently on women and men in ways that might help explain many of the population-level differences that we see today—can also be found in, for example, The Blank Slate by Steven Pinker, and other mainstream works by some of the greatest thinkers alive.

And therefore I say: if James Damore deserves to be fired from Google, for treating evolutionary psychology as potentially relevant to social issues, then Steven Pinker deserves to be fired from Harvard for the same offense.

Yes, I realize that an employee of a private company is different from a tenured professor.  But I don’t see why that difference is relevant here.  For if someone really believes that mooting the hypothesis of an evolutionary reason for average differences in cognitive styles between men and women is enough by itself to create a hostile environment for women—well then, why should tenure be a bar to firing, any more than it is in cases of sexual harassment?

But the reductio needn’t stop there.  It seems to me that, if Damore deserves to be fired, then so do the 56% of Googlers who said in a poll that they opposed his firing.  For isn’t that 56% just as responsible for maintaining a hostile environment as Damore himself was? (And how would Google find out which employees opposed the firing? Well, if there’s any company on earth that could…)  Furthermore, after those 56% of Googlers are fired, any of the remaining 44% who think the 56% shouldn’t have been fired should be fired as well!  And so on iteratively, until only an ideologically reliable core remains, which might or might not be the empty set.
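Since we’re already deep in reductio territory, a minimal tongue-in-cheek Python sketch of the purge just described may be forgivable: fire everyone who objects to the last round of firings, and iterate to a fixed point.  The roster and the `objects_to` predicate are of course hypothetical, purely for illustration.

```python
def iterated_purge(employees, objects_to):
    """Repeatedly fire everyone who objects to the previous round of
    firings, until nobody left objects -- a fixed-point iteration.
    `objects_to(employee, fired)` is a hypothetical predicate: True if
    `employee` thinks the people in `fired` shouldn't have been fired."""
    fired = {"Damore"}  # round zero
    while True:
        dissenters = {e for e in employees - fired if objects_to(e, fired)}
        if not dissenters:
            # Nobody remaining objects: the "ideologically reliable core",
            # which might or might not be the empty set.
            return employees - fired
        fired |= dissenters  # the next round of firings

# Example: suppose everyone objects whenever any colleague is fired.
staff = {"Damore", "Alice", "Bob", "Carol"}
print(iterated_purge(staff, lambda e, fired: bool(fired)))  # set()
```

Because `fired` can only grow and the roster is finite, the loop always terminates; whether anyone survives depends entirely on the predicate.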

OK, but while the wider implications of Damore’s firing have frightened and depressed me all week, as I said, I depart from Damore on the question of affirmative action and other diversity policies.  Fundamentally, what I want is a sort of negotiated agreement or bargain, between STEM nerds and the wider culture in which they live.  The agreement would work like this: STEM nerds do everything they can to foster diversity, including by creating environments that are welcoming for women, and by supporting affirmative action, women-only scholarships and conferences, and other diversity policies.  The STEM nerds also agree never to talk in public about possible cognitive-science explanations for gender disparities in which careers people choose, or overlapping bell curves,  or anything else potentially inflammatory.  In return, just two things:

  1. Male STEM nerds don’t regularly get libelled as misogynist monsters, who must be scaring all the women away with their inherently gross, icky, creepy, discriminatory brogrammer maleness.
  2. The fields beloved by STEM nerds are suffered to continue to exist, rather than getting destroyed and rebuilt along explicitly ideological lines, as already happened with many humanities and social science fields.

So in summary, neither side advances its theories about the causes of gender gaps; both sides simply agree that there are more interesting topics to explore.  In concrete terms, the social-justice side gets to retain 100% of what it has now, or maybe even expand it.  And all it has to offer in exchange is “R-E-S-P-E-C-T“!  Like, don’t smear and shame male nerds as a class, or nerdy disciplines themselves, for gender gaps that the male nerds would be as happy as anybody to see eradicated.

The trouble is that, fueled by outrage-fests on social media, I think the social-justice side is currently failing to uphold its end of this imagined bargain.  Nearly every day the sun rises on yet another thinkpiece about the toxic “bro culture” of Silicon Valley: a culture so uniquely and incorrigibly misogynist, it seems, that it still intentionally keeps women out, even after law and biology and most other white-collar fields have achieved or exceeded gender parity, their own “bro cultures” notwithstanding.  The trouble with this slander against male STEM nerds, besides its fundamental falsity (which Scott Alexander documented), is that it puts the male nerds into an impossible position.  For how can they refute the slander without talking about other possible explanations for fields like CS being 80% male, which is the very thing we all know they’re not supposed to talk about?

In medieval Europe, the Church sometimes took pleasure in forcing the local Jews into “disputations” about whose religion was the true one.  At these events, a popular tactic on the Church’s side was to make statements that the Jews couldn’t possibly answer without blaspheming the name of Christ—which, of course, could lead to the Jews’ expulsion or execution if they dared it.

Maybe I have weird moral intuitions, but it’s hard for me to imagine a more contemptible act of intellectual treason, than deliberately trapping your opponents between surrender and blasphemy.  I’d actually rather have someone force me into one or the other, than make me choose, and thereby make me responsible for whichever choice I made.  So I believe the social-justice left would do well to forswear this trapping tactic forever.

Ironically, I suspect that in the long term, doing so would benefit no entity more than the social-justice left itself.  If I had to steelman, in one sentence, the argument that in the space of one year propelled the “alt-right” from obscurity in dark and hateful corners of the Internet, to the improbable and ghastly ascent of Donald Trump and his white-nationalist brigade to the most powerful office on earth, the argument would be this:

If the elites, the technocrats, the “Cathedral”-dwellers, were willing to lie to the masses about humans being blank slates—and they obviously were—then why shouldn’t we assume that they also lied to us about healthcare and free trade and guns and climate change and everything else?

We progressives deluded ourselves that we could permanently shame our enemies into silence, on pain of sexism, racism, xenophobia, and other blasphemies.  But the “victories” won that way were hollow and illusory, and the crumbling of the illusion brings us to where we are now: with a vindictive, delusional madman in the White House who has a non-negligible chance of starting a nuclear war this week.

The Enlightenment was a specific historical period in 18th-century Europe.  But the term can also be used much more broadly, to refer to every trend in human history that’s other than horrible.  Seen that way, the Enlightenment encompasses the scientific revolution, the abolition of slavery, the decline of all forms of violence, the spread of democracy and literacy, and the liberation of women from domestic drudgery to careers of their own choosing.  The invention of Google, which made the entire world’s knowledge just a search bar away, is now also a permanent part of the story of the Enlightenment.

I fantasize that, within my lifetime, the Enlightenment will expand further to tolerate a diversity of cognitive styles—including people on the Asperger’s and autism spectrum, with their penchant for speaking uncomfortable truths—as well as a diversity of natural abilities and inclinations.  Society might or might not get the “demographically correct” percentage of Ellie Arroways—Ellie might decide to become a doctor or musician rather than an astronomer, and that’s fine too—but most important, it will nurture all the Ellie Arroways that it gets, all the misfits and explorers of every background.  I wonder whether, while disagreeing on exactly what’s meant by it, all parties to this debate could agree that diversity represents a next frontier for the Enlightenment.


Comment Policy: Any comment, from any side, that attacks people rather than propositions will be deleted.  I don’t care if the comment also makes useful points: if it contains a single ad hominem, it’s out.

As it happens, I’m at a quantum supremacy workshop in Bristol, UK right now—yeah, yeah, I’m a closet supremacist after all, hur hur—so I probably won’t participate in the comments until later.

The Social Justice Warriors are right

Monday, May 29th, 2017

As you might know, I haven’t been exactly the world’s most consistent fan of the Social Justice movement, nor has it been the most consistent fan of me.

I cringe when I read about yet another conservative college lecture shut down by mob violence; or student protesters demanding the firing of a professor for trying gently to argue and reason with them; or an editor forced from his position for writing a (progressive) defense of “cultural appropriation”—a practice that I take to have been ubiquitous for all of recorded history, and without which there wouldn’t be any culture at all.  I cringe not only because I know that I was in the crosshairs once before and could easily be again, but also because, it seems to me, the Social Justice scalp-hunters are so astoundingly oblivious to the misdirection of their energies, to the power of their message for losing elections and neutering the progressive cause, to the massive gift their every absurdity provides to the world’s Fox Newses and Breitbarts and Trumps.

Yet there’s at least one issue where it seems to me that the Social Justice Warriors are 100% right, and their opponents 100% wrong. This is the moral imperative to take down every monument to Confederate “war heroes,” and to rename every street and school and college named after individuals whose primary contribution to the world was to defend chattel slavery.  As a now-Southerner, I have a greater personal stake here than I did before: UT Austin just recently removed its statue of Jefferson Davis, while keeping up its statue of Robert E. Lee.  My kids will likely attend what until very recently was called Robert E. Lee Elementary—this summer renamed Russell Lee Elementary.  (My suggestion, that the school be called T. D. Lee Parity Violation Elementary, was sadly never considered.)

So I was gratified that last week, New Orleans finally took down its monuments to slavers.  Mayor Mitch Landrieu’s speech, setting out the reasons for the removal, is worth reading.

I used to have little patience for “merely symbolic” issues: would that offensive statues and flags were the worst problems!  But it now seems to me that the fight over Confederate symbols is just a thinly-veiled proxy for the biggest moral question that’s faced the United States through its history, and also the most urgent question facing it in 2017.  Namely: Did the Union actually win the Civil War? Were the anti-Enlightenment forces—the slavers, the worshippers of blood and land and race and hierarchy—truly defeated? Do those forces acknowledge the finality and the rightness of their defeat?

For those who say that, sure, slavery was bad and all, but we need to keep statues to slavers up so as not to “erase history,” we need only change the example. Would we similarly defend statues of Hitler, Himmler, and Goebbels, looming over Berlin in heroic poses?  Yes, let Germans reflect somberly and often on this aspect of their heritage—but not by hoisting a swastika over City Hall.

For those who say the Civil War wasn’t “really” about slavery, I reply: this is the canonical example of a “Mount Stupid” belief, the sort of thing you can say only if you’ve learned enough to be wrong but not enough to be unwrong.  In 1861, the Confederate ringleaders themselves loudly proclaimed to future generations that, indeed, their desire to preserve slavery was their overriding reason to secede. Here’s CSA Vice-President Alexander Stephens, in his famous Cornerstone Speech:

Our new government is founded upon exactly the opposite ideas; its foundations are laid, its cornerstone rests, upon the great truth that the negro is not equal to the white man; that slavery, subordination to the superior race, is his natural and normal condition. This, our new government, is the first, in the history of the world, based upon this great physical, philosophical, and moral truth.

Here’s Texas’ Declaration of Secession:

We hold as undeniable truths that the governments of the various States, and of the confederacy itself, were established exclusively by the white race, for themselves and their posterity; that the African race had no agency in their establishment; that they were rightfully held and regarded as an inferior and dependent race, and in that condition only could their existence in this country be rendered beneficial or tolerable. That in this free government all white men are and of right ought to be entitled to equal civil and political rights; that the servitude of the African race, as existing in these States, is mutually beneficial to both bond and free, and is abundantly authorized and justified by the experience of mankind, and the revealed will of the Almighty Creator, as recognized by all Christian nations; while the destruction of the existing relations between the two races, as advocated by our sectional enemies, would bring inevitable calamities upon both and desolation upon the fifteen slave-holding states.

It was only when defeat looked inevitable that the slavers started changing their story, claiming that their real grievance was never about slavery per se, but only “states’ rights” (states’ right to do what, exactly?). So again, why should we take the slavers’ rationalizations any more seriously than we take the postwar epiphanies of jailed Nazis that actually, they’d never felt any personal animus toward Jews, that the Final Solution was just the world’s biggest bureaucratic mishap?  Of course there’s a difference: when the Allies occupied Germany, they insisted on de-Nazification.  They didn’t suffer streets to be named after Hitler. And today, incredibly, fascism and white nationalism are greater threats here in the US than they are in Germany.  One reads about the historic irony of some American Jews, who are eligible for German citizenship because of grandparents expelled from there, now seeking to move there because they’re terrified about Trump.

By contrast, after a brief Reconstruction, the United States lost its will to continue de-Confederatizing the South.  The leaders were left free to write book after book whitewashing their cause, even to hold political office again.  And probably not by coincidence, we then got nearly a hundred years of Jim Crow—and still today, a half-century after the civil rights movement, southern governors and legislatures that do everything in their power to disenfranchise black voters.

For those who ask: but wasn’t Robert E. Lee a great general who was admired by millions? Didn’t he fight bravely for a cause he believed in?  Maybe it’s just me, but I’m allergic to granting undue respect to history’s villains just because they managed to amass power and get others to go along with them.  I remember reading once in some magazine that, yes, Genghis Khan might have raped thousands and murdered millions, but since DNA tests suggest that ~1% of humanity is now descended from him, we should also celebrate Khan’s positive contribution to “peopling the world.” Likewise, Hegel and Marx and Freud and Heidegger might have been wrong in nearly everything they said, sometimes with horrific consequences, but their ideas still need to be studied reverently, because of the number of other intellectuals who took them seriously.  As I reject those special pleas, so I reject the analogous ones for Jefferson Davis, Alexander Stephens, and Robert E. Lee, who, as far as I can tell, should all (along with the rest of the Confederate leadership) have been sentenced for treason.

This has nothing to do with judging the past by standards of the present. By all means, build statues to Washington and Jefferson even though they held slaves, to Lincoln even though he called blacks inferior even while he freed them, to Churchill even though he fought the independence of India.  But don’t look for moral complexity where there isn’t any.  Don’t celebrate people who were terrible even for their own time, whose public life was devoted entirely to what we now know to be evil.

And if, after the last Confederate general comes down, the public spaces are too empty, fill them with monuments to Alan Turing, Marian Rejewski, Bertrand Russell, Hypatia of Alexandria, Emmy Noether, Lise Meitner, Mark Twain, Srinivasa Ramanujan, Frederick Douglass, Vasili Arkhipov, Stanislav Petrov, Raoul Wallenberg, even the inventors of saltwater taffy or Gatorade or the intermittent windshield wiper.  There are, I think, enough people who added value to the world to fill every city square and street sign.

May reason trump the Trump in all of us

Wednesday, October 19th, 2016

Two years ago, when I was the target of an online shaming campaign, what helped me through it were hundreds of messages of support from friends, slight acquaintances, and strangers of every background.  I vowed then to return the favor, by standing up when I saw decent people unfairly shamed.  Today I have an opportunity to make good.

Some time ago I had the privilege of interacting a bit with Sam Altman, president of the famed startup incubator Y Combinator (and a guy who’s thanked in pretty much everything Paul Graham writes).  By way of our mutual friend, the renowned former quantum computing researcher Michael Nielsen, Sam got in touch with me to solicit suggestions for “outside-the-box” scientists and writers, for a new grant program that Y Combinator was starting. I found Sam eager to delve into the merits of any suggestion, however outlandish, and was delighted to be able to make a difference for a few talented people who needed support.

Sam has also been one of the Silicon Valley leaders who’s written most clearly and openly about the threat to America posed by Donald Trump and the need to stop him, and he’s donated tens of thousands of dollars to anti-Trump causes.  Needless to say, I supported Sam on that as well.

Now Sam is under attack on social media, and there are even calls for him to resign as the president of Y Combinator.  Like me two years ago, Sam has instantly become the corporeal embodiment of the “nerd privilege” that keeps the marginalized out of Silicon Valley.

Why? Because, despite his own emphatic anti-Trump views, Sam rejected demands to fire Peter Thiel (who has an advisory role at Y Combinator) because of Thiel’s support for Trump.  Sam explained his reasoning at some length:

[A]s repugnant as Trump is to many of us, we are not going to fire someone over his or her support of a political candidate.  As far as we know, that would be unprecedented for supporting a major party nominee, and a dangerous path to start down (of course, if Peter said some of the things Trump says himself, he would no longer be part of Y Combinator) … The way we got into a situation with Trump as a major party nominee in the first place was by not talking to people who are very different than we are … I don’t understand how 43% of the country supports Trump.  But I’d like to find out, because we have to include everyone in our path forward.  If our best ideas are to stop talking to or fire anyone who disagrees with us, we’ll be facing this whole situation again in 2020.

The usual criticism of nerds is that we might have narrow technical abilities, but we lack wisdom about human affairs.  It’s ironic, then, that it appears to have fallen to Silicon Valley nerds to guard some of the most important human wisdom our sorry species ever came across—namely, the liberal ideals of the Enlightenment.  Like Sam, I despise pretty much everything Trump stands for, and I’ve been far from silent about it: I’ve blogged, donated money, advocated vote swapping, endured anonymous comments like “kill yourself kike”—whatever seemed like it might help even infinitesimally to ensure the richly-deserved electoral thrashing that Trump mercifully seems to be headed for in a few weeks.

But I also, I confess, oppose the forces that apparently see Trump less as a global calamity to be averted, than as a golden opportunity to take down anything they don’t like that’s ever been spotted within a thousand-mile radius of Trump Tower.  (Where does this Kevin Bacon game end, anyway?  Do “six degrees of Trump” suffice to contaminate you?)

And not only do I not feel a shadow of a hint of a moral conflict here, but it seems to me that precisely the same liberal Enlightenment principles are behind both of these stances.

But I’d go yet further.  It sort of flabbergasts me when social-justice activists don’t understand that, if we condemn not only Trump, not only his supporters, but even vociferous Trump opponents who associate with Trump supporters (!), all we’ll do is feed the narrative that got Trumpism as far as it has—namely, that of a smug, bubble-encased, virtue-signalling leftist elite subject to runaway political correctness spirals.  Like, a hundred million Americans’ worldviews revolve around the fear of liberal persecution, and we’re going to change their minds by firing anyone who refuses to fire them?  As a recent Washington Post story illustrates, the opposite approach is harder but can bear spectacular results.
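The “six degrees of Trump” question above is, after all, literally one of graph distance, so here’s a minimal, tongue-in-cheek sketch of how a “Trump number” would be computed, by breadth-first search.  The association graph is entirely hypothetical, purely to illustrate the arithmetic of guilt-by-association.

```python
from collections import deque

def trump_number(graph, person, source="Trump"):
    """Breadth-first search for the number of who-associates-with-whom
    hops separating `person` from `source`.  `graph` maps each name to
    a set of associates -- hypothetical data, for illustration only."""
    if person == source:
        return 0
    seen, queue = {source}, deque([(source, 0)])
    while queue:
        current, dist = queue.popleft()
        for neighbor in graph.get(current, set()):
            if neighbor == person:
                return dist + 1
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append((neighbor, dist + 1))
    return None  # no chain of association at all

# Thiel supports Trump; Altman defends Thiel; I defend Altman:
graph = {"Trump": {"Thiel"}, "Thiel": {"Altman"}, "Altman": {"Scott"}}
print(trump_number(graph, "Scott"))  # 3
```

Since breadth-first search explores the graph level by level, the first chain it finds is guaranteed to be a shortest one—so at least the contamination count would be computed correctly.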

Now, as for Peter Thiel: three years ago, he funded a small interdisciplinary workshop on the coast of France that I attended.  With me there were a bunch of honest-to-goodness conservative Christians, a Freudian psychoanalyst, a novelist, a right-wing radio host, some scientists and Silicon Valley executives, and of course Thiel himself.  Each, I found, offered tons to disagree about but also some morsels to learn.

Thiel’s worldview, focused on the technological and organizational greatness that (in his view) Western civilization used to have and has subsequently lost, was a bit too dark and pessimistic for me, and I’m a pretty dark and pessimistic person.  Thiel gave a complicated, meandering lecture that involved comparing modern narratives about Silicon Valley entrepreneurs against myths of gods, heroes, and martyrs throughout history, such as Romulus and Remus (the legendary founders of Rome).  The talk might have made more sense to Thiel than to his listeners.

At the same time, Thiel’s range of knowledge and curiosity was pretty awesome.  He avidly followed all the talks (including mine, on P vs. NP and quantum complexity theory) and asked pertinent questions. When the conversation turned to D-Wave, and Thiel’s own decision not to invest in it, he laid out the conclusions he’d come to from an extremely quick look at the question, then quizzed me as to whether he’d gotten anything wrong.  He hadn’t.

From that conversation among others, I formed the impression that Thiel’s success as an investor is, at least in part, down neither to luck nor to connections, but to a module in his brain that most people lack, which makes blazingly fast and accurate judgments about tech startups.  No wonder Y Combinator would want to keep him as an adviser.

But, OK, I’m so used to the same person being spectacularly right on some things and spectacularly wrong on others, that it no longer causes even slight cognitive dissonance.  You just take the issues one by one.

I was happy, on balance, when it came out that Thiel had financed the lawsuit that brought down Gawker Media.  Gawker really had used its power to bully the innocent, and it had broken the law to do it.  And if it’s an unaccountable, anti-egalitarian, billionaire Godzilla against a vicious, privacy-violating, nerd-baiting King Kong—well then, I guess I’m with Godzilla.

More recently, I was appalled when Thiel spoke at the Republican convention, pandering to the crowd with Fox-News-style attack lines that were unworthy of a mind of his caliber.  I lost a lot of respect for Thiel that day.  But that’s the thing: unlike with literally every other speaker at the GOP convention, my respect for Thiel had started from a point that made a decrease possible.

I reject huge parts of Thiel’s worldview.  I also reject any worldview that would threaten me with ostracism for talking to Thiel, attending a workshop he sponsors, or saying anything good about him.  This is not actually a difficult balance.

Today, when it sometimes seems like much of the world has united in salivating for a cataclysmic showdown between whites and non-whites, Christians and Muslims, “dudebros” and feminists, etc., and that the salivators differ mostly just in who they want to see victorious in the coming battle and who humiliated, it can feel lonely to stick up for naïve, outdated values like the free exchange of ideas, friendly disagreement, the presumption of innocence, and the primacy of the individual over the tribe.  But those are the values that took us all the way from a bronze spear through the enemy’s heart to a snarky rebuttal on the arXiv, and they’ll continue to build anything worth building.

And now to watch the third debate (I’ll check the comments afterward)…


Update (Oct. 20): See also this post from a blog called TheMoneyIllusion. My favorite excerpt:

So let’s see. Not only should Trump be shunned for his appalling political views, an otherwise highly respected Silicon Valley entrepreneur who just happens to support Trump (along with 80 million other Americans) should also be shunned. And a person who despises Trump and works against him but who defends Thiel’s right to his own political views should also resign. Does that mean I should be shunned too? After all, I’m a guy who hates Trump, writing a post that defends a guy who hates Trump, who wrote a post defending a guy’s freedom to support Trump, who in turn supports Trump. And suppose my mother sticks up for me? Should she also be shunned?

It’s almost enough to make me vote . . . no, just kidding.

Question … Which people on the left are beyond the pale? Suppose Thiel had supported Hugo Chavez? How about Castro? Mao? Pol Pot? Perhaps the degrees of separation could be calibrated to the awfulness of the left-winger:

Chavez: One degree of separation. (Corbyn, Sean Penn, etc.)

Castro: Two degrees of separation is still toxic.

Lenin: Three degrees of separation.

Mao: Four degrees of separation.

Pol Pot: Five degrees of separation.