Archive for the ‘The Fate of Humanity’ Category

The morality of quantum computing

Thursday, November 7th, 2019

This morning a humanities teacher named Richard Horan, having read my NYT op-ed on quantum supremacy, emailed me the following question about it:

Is this pursuit [of scalable quantum computation] just an arms race? A race to see who can achieve it first? To what end? Will this achievement yield advances in medical science and human quality of life, or will it threaten us even more than we are threatened presently by our technologies? You seem rather sanguine about its possible development and uses. But how close does the hand on that doomsday clock move to midnight once we “can harness an exponential number of amplitudes for computation”?

I thought this question might be of broader interest, so here’s my response (with some light edits).

Dear Richard,

A radio interviewer asked me a similar question a couple weeks ago—whether there’s an ethical dimension to quantum computing research.  I replied that there’s an ethical dimension to everything that humans do.

A quantum computer is not like a nuclear weapon: it’s not going to directly kill anybody (unless the dilution refrigerator falls on them or something?).  It’s true that a full, scalable QC, if and when it’s achieved, will give a temporary advantage to people who want to break certain cryptographic codes.  The morality of that, of course, could strongly depend on whether the codebreakers are working for the “good guys” (like the Allies during WWII) or the “bad guys” (like, perhaps, Trump or Vladimir Putin or Xi Jinping).

But in any case, there’s already a push to switch to new cryptographic codes—ones that already exist, and that we think are quantum-resistant.  An actual scalable QC on the horizon would of course massively accelerate that push.  And once people make the switch, we expect that security on the Internet will be more-or-less back where it started.
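To make the codebreaking threat concrete, here’s a toy sketch of my own (purely an illustration, not anything from the op-ed, and certainly not how a real attack would be engineered): the heart of Shor’s algorithm is that factoring N, and hence breaking RSA, reduces to finding the period of the sequence a^x mod N.  The brute-force period search below takes exponential time in the number of digits of N; that search is precisely the step a quantum computer could do efficiently.

```python
from math import gcd

def find_period(a, N):
    """Brute-force the order r of a modulo N, i.e. the least r with
    a^r = 1 (mod N).  This exhaustive search, exponential in the number
    of digits of N, is the step that Shor's algorithm replaces with a
    polynomial-time quantum period-finding subroutine."""
    x, r = a % N, 1
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def factor_from_period(N, a):
    """Recover a nontrivial factor of N from the period of a, as in
    Shor's classical reduction."""
    if gcd(a, N) != 1:
        return gcd(a, N)              # lucky: a already shares a factor with N
    r = find_period(a, N)
    if r % 2 == 1:
        return None                   # odd period: retry with a different a
    f = gcd(pow(a, r // 2, N) - 1, N)
    return f if 1 < f < N else None

print(factor_from_period(15, 7))      # prints 3, since 7 has period 4 mod 15
```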

Meanwhile, the big upside potential from QCs is that they’ll provide an unprecedented ability to simulate physics and chemistry at the molecular level.  That could at least potentially help with designing new medicines, as well as new batteries and solar cells and carbon capture technologies—all things that the world desperately needs right now.
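To give a sense of why simulation is the natural killer app, here’s a minimal sketch (my own toy example, using a spin chain rather than an actual molecule): merely writing down the Hamiltonian of n interacting quantum particles takes a 2^n-by-2^n matrix, so the cost of exact classical simulation doubles with every particle added.

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])    # Pauli X
Z = np.array([[1., 0.], [0., -1.]])   # Pauli Z

def on_site(op, site, n):
    """Tensor `op` acting on one site of an n-spin chain, identity elsewhere."""
    out = np.array([[1.]])
    for k in range(n):
        out = np.kron(out, op if k == site else I2)
    return out

def ising_hamiltonian(n, J=1.0, h=0.5):
    """Transverse-field Ising chain H = -J*sum(Z_k Z_{k+1}) - h*sum(X_k),
    stored as a dense 2^n-by-2^n matrix."""
    H = np.zeros((2**n, 2**n))
    for k in range(n - 1):
        H -= J * on_site(Z, k, n) @ on_site(Z, k + 1, n)
    for k in range(n):
        H -= h * on_site(X, k, n)
    return H

for n in [2, 6, 10]:
    E0 = np.linalg.eigvalsh(ising_hamiltonian(n))[0]
    print(f"n={n:2d} spins -> matrix dimension {2**n:5d}, ground energy {E0:.4f}")
```

By n = 50 or so, the state vector alone has ~10^15 entries; a quantum computer stores such states natively.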

Also, the theory developed around QC has already led to many new and profound insights about physics and computation.  Some of us regard that as an inherent good, in the same way that art and music and literature are.

Now, one could argue that the climate crisis, or various other crises that our civilization faces, are so desperate that instead of working to build QCs, we should all just abandon our normal work and directly confront the crises, as (for example) Greta Thunberg is doing.  On some days I share that position.  But of course, if that position were upheld consistently, it would have implications not just for quantum computing researchers but for almost everyone on earth—including humanities teachers like yourself.

Best,
Scott

Book Review: ‘The AI Does Not Hate You’ by Tom Chivers

Sunday, October 6th, 2019

A couple weeks ago I read The AI Does Not Hate You: Superintelligence, Rationality, and the Race to Save the World, the first-ever book-length examination of the modern rationalist community, by British journalist Tom Chivers. I was planning to review it here, before it got preempted by the news of quantum supremacy (and subsequent news of classical non-supremacy). Now I can get back to the rationalists.

Briefly, I think the book is a triumph. It’s based around in-person conversations with many of the notable figures in and around the rationalist community, in its Bay Area epicenter and beyond (although apparently Eliezer Yudkowsky only agreed to answer technical questions by Skype), together of course with the voluminous material available online. There’s a good deal about the 1990s origins of the community that I hadn’t previously known.

The title is taken from Eliezer’s aphorism, “The AI does not hate you, nor does it love you, but you are made of atoms which it can use for something else.” In other words: as soon as anyone succeeds in building a superhuman AI, if we don’t take extreme care that the AI’s values are “aligned” with human ones, the AI might be expected to obliterate humans almost instantly as a byproduct of pursuing whatever it does value, more-or-less as we humans did with woolly mammoths, moas, and now gorillas, rhinos, and thousands of other species.

Much of the book relates Chivers’s personal quest to figure out how seriously he should take this scenario. Are the rationalists just an unusually nerdy doomsday cult? Is there some non-negligible chance that they’re actually right about the AI thing? If so, how much more time do we have—and is there even anything meaningful that can be done today? Do the dramatic advances in machine learning over the past decade change the outlook? Should Chivers be worried about his own two children? How does this risk compare to the more “prosaic” civilizational risks, like climate change or nuclear war? I suspect that Chivers’s exploration will be most interesting to readers who, like me, regard the answers to none of these questions as obvious.

While it sounds extremely basic, what makes The AI Does Not Hate You so valuable to my mind is that, as far as I know, it’s nearly the only examination of the rationalists ever written by an outsider that tries to assess the ideas on a scale from true to false, rather than from quirky to offensive. Chivers’s own training in academic philosophy seems to have been crucial here. He’s not put off by people who act weirdly around him, even needlessly cold or aloof, nor by utilitarian thought experiments involving death or torture or weighing the value of human lives. He just cares, relentlessly, about the ideas—and about remaining a basically grounded and decent person while engaging them. Most strikingly, Chivers clearly feels a need—anachronistic though it seems in 2019—actually to understand complicated arguments, and to be able to repeat them back correctly, before he attacks them.

Indeed, far from failing to understand the rationalists, it occurs to me that the central criticism of Chivers’s book is likely to be just the opposite: he understands the rationalists so well, extends them so much sympathy, and ends up endorsing so many aspects of their worldview, that he must simply be a closet rationalist himself, and therefore can’t write about them with any pretense of journalistic or anthropological detachment. For my part, I’d say: it’s true that The AI Does Not Hate You is what you get if you treat rationalists as extremely smart (if unusual) people from whom you might learn something of consequence, rather than as monkeys in a zoo. On the other hand, Chivers does perform the journalist’s task of constantly challenging the rationalists he meets, often with points that (if upheld) would be fatal to their worldview. One of the rationalists’ best features—and this precisely matches my own experience—is that, far from clamming up or storming off when faced with such challenges (“lo! the visitor is not one of us!”), the rationalists positively relish them.

It occurred to me the other day that we’ll never know how the rationalists’ ideas would’ve developed, had they continued to do so in a cultural background like that of the late 20th century. As Chivers points out, the rationalists today are effectively caught in the crossfire of a much larger cultural war—between, to their right, the recrudescent know-nothing authoritarians, and to their left, what one could variously describe as woke culture, call-out culture, or sneer culture. On its face, it might seem laughable to conflate the rationalists with today’s resurgent fascists: many rationalists are driven by their utilitarianism to advocate open borders and massive aid to the Third World; the rationalist community is about as welcoming of alternative genders and sexualities as it’s humanly possible to be; and leading rationalists like Scott Alexander and Eliezer Yudkowsky strongly condemned Trump for the obvious reasons.

Chivers, however, explains how the problem started. On rationalist Internet forums, many misogynists and white nationalists and so forth encountered nerds willing to debate their ideas politely, rather than immediately banning them as more mainstream venues would. As a result, many of those forces of darkness (and they probably don’t mind being called that) predictably congregated on the rationalist forums, and their stench predictably rubbed off on the rationalists themselves. Furthermore, this isn’t an easy-to-fix problem, because debating ideas on their merits, extending charity to ideological opponents, etc. is sort of the rationalists’ entire shtick, whereas denouncing and no-platforming anyone who can be connected to an ideological enemy (in the modern parlance, “punching Nazis”) is the entire shtick of those condemning the rationalists.

Compounding the problem is that, as anyone who’s ever hung out with STEM nerds might’ve guessed, the rationalist community tends to skew WASP, Asian, or Jewish, non-impoverished, and male. Worse yet, while many rationalists live their lives in progressive enclaves and strongly support progressive values, they’ll also undergo extreme anguish if they feel forced to subordinate truth to those values.

Chivers writes that all of these issues “blew up in spectacular style at the end of 2014,” right here on this blog. Oh, what the hell, I’ll just quote him:

Scott Aaronson is, I think it’s fair to say, a member of the Rationalist community. He’s a prominent theoretical computer scientist at the University of Texas at Austin, and writes a very interesting, maths-heavy blog called Shtetl-Optimised.

People in the comments under his blog were discussing feminism and sexual harassment. And Aaronson, in a comment in which he described himself as a fan of Andrea Dworkin, described having been terrified of speaking to women as a teenager and young man. This fear was, he said, partly that of being thought of as a sexual abuser or creep if any woman ever became aware that he sexually desired them, a fear that he picked up from sexual-harassment-prevention workshops at his university and from reading feminist literature. This fear became so overwhelming, he said in the comment that came to be known as Comment #171, that he had ‘constant suicidal thoughts’ and at one point ‘actually begged a psychiatrist to prescribe drugs that would chemically castrate me (I had researched which ones), because a life of mathematical asceticism was the only future that I could imagine for myself.’ So when he read feminist articles talking about the ‘male privilege’ of nerds like him, he didn’t recognise the description, and so felt himself able to declare himself ‘only’ 97 per cent on board with the programme of feminism.

It struck me as a thoughtful and rather sweet remark, in the midst of a long and courteous discussion with a female commenter. But it got picked up, weirdly, by some feminist bloggers, including one who described it as ‘a yalp of entitlement combined with an aggressive unwillingness to accept that women are human beings just like men’ and that Aaronson was complaining that ‘having to explain my suffering to women when they should already be there, mopping my brow and offering me beers and blow jobs, is so tiresome.’

Scott Alexander (not Scott Aaronson) then wrote a furious 10,000-word defence of his friend… (pp. 214-215)

And then Chivers goes on to explain Scott Alexander’s central thesis, in Untitled, that privilege is not a one-dimensional axis, so that (to take one example) society can make many women in STEM miserable while also making shy male nerds miserable in different ways.

For nerds, perhaps an alternative title for Chivers’s book could be “The Normal People Do Not Hate You (Not All of Them, Anyway).” It’s as though Chivers is demonstrating, through understated example, that taking delight in nerds’ suffering, wanting them to be miserable and alone, mocking their weird ideas, is not simply the default, well-adjusted human reaction, with any other reaction being ‘creepy’ and ‘problematic.’ Some might even go so far as to apply the latter adjectives to the sneerers’ attitude, the one that dresses up schoolyard bullying in a social-justice wig.

Reading Chivers’s book prompted me to reflect on my own relationship to the rationalist community. For years, I interacted often with the community—I’ve known Robin Hanson since ~2004 and Eliezer Yudkowsky since ~2006, and our blogs bounced off each other—but I never considered myself a member.  I never ranked paperclip-maximizing AIs among humanity’s more urgent threats—indeed, I saw them as a distraction from an all-too-likely climate catastrophe that will leave its survivors lucky to have stone tools, let alone AIs. I was also repelled by what I saw as the rationalists’ cultier aspects.  I even once toyed with the idea of changing the name of this blog to “More Wrong” or “Wallowing in Bias,” as a play on the rationalists’ LessWrong and OvercomingBias.

But I’ve drawn much closer to the community over the last few years, because of a combination of factors:

  1. The comment-171 affair. This was not the sort of thing that could provide any new information about the likelihood of a dangerous AI being built, but was (to put it mildly) the sort of thing that can tell you who your friends are. I learned that empathy works a lot like intelligence, in that those who boast of it most loudly are often the ones who lack it.
  2. The astounding progress in deep learning and reinforcement learning and GANs, which caused me (like everyone else, perhaps) to update in the direction of human-level AI in our lifetimes being an actual live possibility.
  3. The rise of Scott Alexander. To the charge that the rationalists are a cult, there’s now the reply that Scott, with his constant equivocations and doubts, his deep dives into data, his clarity and self-deprecating humor, is perhaps the least culty cult leader in human history. Likewise, to the charge that the rationalists are basement-dwelling kibitzers who accomplish nothing of note in the real world, there’s now the reply that Scott has attracted a huge mainstream following (Steven Pinker, Paul Graham, presidential candidate Andrew Yang…), purely by offering up what’s self-evidently some of the best writing of our time.
  4. Research. The AI-risk folks started publishing some research papers that I found interesting—some with relatively approachable problems that I could see myself trying to think about if quantum computing ever got boring. This shift seems to have happened at roughly the same time my former student, Paul Christiano, “defected” from quantum computing to AI-risk research.

Anyway, if you’ve spent years steeped in the rationalist blogosphere, read Eliezer’s “Sequences,” and so on, The AI Does Not Hate You will probably have little that’s new, although it might still be interesting to revisit ideas and episodes that you know through a newcomer’s eyes. To anyone else … well, reading the book would be a lot faster than spending all those years reading blogs! I’ve heard of some rationalists now giving out copies of the book to their relatives, by way of explaining how they’ve chosen to spend their lives.

I still don’t know whether there’s a risk worth worrying about that a misaligned AI will threaten human civilization in my lifetime, or my children’s lifetimes, or even 500 years—or whether everyone will look back and laugh at how silly some people once were to think that (except, silly in which way?). But I do feel fairly confident that The AI Does Not Hate You will make a positive difference—possibly for the world, but at any rate for a little well-meaning community of sneered-at nerds obsessed with the future and with following ideas wherever they lead.

Blurry but clear enough

Friday, September 20th, 2019

My vision is blurry right now, because yesterday I had a procedure called corneal cross-linking, intended to prevent further deterioration of my eyes as I get older. But I can see clearly enough to tap out a post with random thoughts about the world.

I’m happy that the Netanyahu era might finally be ending in Israel, after which Netanyahu will hopefully face some long-delayed justice for his eye-popping corruption. If only there were a realistic prospect of Trump facing similar justice. I wish Benny Gantz success in putting together a coalition.

I’m happy that my two least favorite candidates, Bill de Blasio and Kirsten Gillibrand, have now both dropped out of the Democratic primary. Biden, Booker, Warren, Yang—I could enthusiastically support pretty much any of them, if they looked like they had a good chance to defeat Twitler. Let’s hope.

Most importantly, I wish to register my full-throated support for the climate strikes taking place today all over the world, including here in Austin. My daughter Lily, age 6, is old enough to understand the basics of what’s happening and to worry about her future. I urge the climate strikers to keep their eyes on things that will actually make a difference (building new nuclear plants, carbon taxes, geoengineering) and ignore what won’t (banning plastic straws).

As for Greta Thunberg: she is, or is trying to be, the real-life version of the Comet King from Unsong. You can make fun of her, ask what standing or expertise she has as some random 16-year-old to lead a worldwide movement. But I suspect that this is always what it looks like when someone takes something that’s known to (almost) all, and then makes it common knowledge. If civilization makes it to the 22nd century at all, then in whatever form it still exists, I can easily imagine that it will have more statues of Greta than of MLK or Gandhi.

On a completely unrelated and much less important note, John Horgan has a post about “pluralism in math” that includes some comments by me.

Oh, and on the quantum supremacy front—I foresee some big news very soon. You know which blog to watch for more.

A rare classified ad

Saturday, September 7th, 2019

Dana and I are searching for a live-in nanny for our two kids, Lily (age 6) and Daniel (age 2). We can offer $750/week. We can also offer a private room with a full bathroom and a beautiful view in our home in central Austin, TX, as well as free food and other amenities. The responsibilities include helping to take the kids to and from school, driving them to various activities, helping to get them ready for school/daycare in the morning and ready for sleep at night, and cooking and other housework. We’d ask for no more than 45 hours per week, and could give several days off at a time depending on scheduling constraints.

If interested, please shoot me an email, tell me all about yourself, and provide references.

Obviously, feel free to let anyone else know who you think might be interested (but who might not read this blog).

I’m really sorry to be doing this here! We tried on classified sites and didn’t find a good match.

A nerdocratic oath

Friday, August 30th, 2019

Recently, my Facebook wall was full of discussion about instituting an oath for STEM workers, analogous to the Hippocratic oath for doctors.  Perhaps some of the motivation for this comes from a worldview I can’t get behind—one that holds STEM nerds almost uniquely responsible for the world’s evils.  Nevertheless, on reflection, I find myself in broad support of the idea.

But I prefer writing the oath myself. Here’s my attempt:

1. I will never allow anyone else to make me a cog. I will never do what is stupid or horrible because “that’s what the regulations say” or “that’s what my supervisor said,” and then sleep soundly at night. I’ll never do my part for a project unless I’m satisfied that the project’s broader goals are, at worst, morally neutral. There’s no one on earth who gets to say: “I just solve technical problems.  Moral implications are outside my scope.”

2. If I build or supply tools that are used to do evil or cause suffering, I’ll be horrified as soon as I learn about it.  Yes, I might judge that the good of the tools outweighs the bad, that the bad can’t be prevented, etc.  But I’ll be hyper-alert to the possibility of self-serving bias in such reflections, and will choose a different course of action whenever the reflections are no longer persuasive to my highest self.

3. I will pursue the truth, and hold the sharing of truth and exposing of falsehoods among my highest moral values.

4. I will make a stink, resign, leak to the press, sabotage, rather than go along quietly with decisions inimical to my values.

5. I will put everything on the line for my students, advisees, employees—my time, funds, reputation, and credibility.  And not only because it can somewhat make up for failings in the other areas.

6. Black, white, male, female, trans, gay, straight, Israeli, Palestinian, young, old.  Whatever ideologies I might subscribe to about which groups are advantaged and which disadvantaged in which aspects of life—when it comes time to interact with a person, I will throw ideology into the ocean and treat them solely as an individual, not as a representative of a group.

7. I will not be Jeffrey Epstein—and not just in the narrow sense of not collecting underage girls on a private sex island.  I’ll see myself always as accountable to the moral judgment of history.  Whenever I’m publicly accused of wrongdoing, I’ll consider only two options: (a) if guilty, then confess, offer restitution, beg for forgiveness, or (b) if innocent, then mount a full public defense.  Finding some escape that avoids the need for either of these—from legal maneuvering to suicide—will never be on the table for me.

8. I’m under no obligation to blog or tweet every detail of my private life. Yet even in my most private moments, I’ll act in such a way that, if my actions were made public, I’d have a defense of which I was unashamed.

9. To whatever extent I was gifted at birth with a greater-than-average ability to prove theorems or write code or whatever, I’ll treat it as just that—a gift, which I didn’t earn or deserve. It doesn’t make me inherently worthier than anyone else, but it does give me a moral obligation to use the gift for good. And whenever I’m tempted to be jealous of various non-nerds—of their ease in social or romantic situations, wealth, looks, power, athletic ability, or anything else about them—I’ll remember the gift, and that all in all, I made out better than I had a right to expect.

10. I’ll be conscious always of living in a universe where catastrophes—genocides, destructions of civilizations, extinctions of magnificent species—have happened and will happen again. The burning of the Amazon, the deaths of children, the bleaching of coral reefs, will weigh on me daily, to the maximum extent consistent with being able to get out of bed in the morning, live, and work. While it’s not obvious that any of these problems are open to a STEM-nerd solution, of the sort I could plausibly think of or implement—nevertheless, I’ll keep asking myself whether any of them are. And if I ever do find myself before one of the levers of history, I’ll pull with all my strength to try to prevent these catastrophes.

Wonderful world

Thursday, April 11th, 2019

I was saddened about the results of the Israeli election. The Beresheet lander, which lost its engine and crashed onto the moon as I was writing this, seems like a fitting symbol for where the country is now headed. Whatever he was at the start of his career, Netanyahu has become the Israeli Trump—a breathtakingly corrupt strongman who appeals to voters’ basest impulses, sacrifices his country’s future and standing in the world for short-term electoral gain, considers himself unbound by laws, recklessly incites hatred of minorities, and (despite zero personal piety) cynically panders to religious fundamentalists who help keep him in power. Just like with Trump, it’s probably futile to hope that lawyers will swoop in and free the nation from the demagogue’s grip: legal systems simply aren’t designed for assaults of this magnitude.

(If, for example, you were designing the US Constitution, how would you guard against a presidential candidate who openly supported and was supported by a hostile foreign power, and won anyway? Would it even occur to you to include such possibilities in your definitions of concepts like “treason” or “collusion”?)

The original Zionist project—the secular, democratic vision of Herzl and Ben-Gurion and Weizmann and Einstein, which the Nazis turned from a dream to a necessity—matters more to me than most things in this world, and that was true even before I’d spent time in Israel and before I had a wife and kids who are Israeli citizens. It would be depressing if, after a century of wildly improbable victories against external threats, Herzl’s project were finally to eat itself from the inside. Of course I have analogous worries (scaled up by two orders of magnitude) for the US—not to mention the UK, India, Brazil, Poland, Hungary, the Philippines … the world is now engaged in a massive test of whether Enlightenment liberalism can long endure, or whether it’s just a metastable state between one Dark Age and the next. (And to think that people have accused me of uncritical agreement with Steven Pinker, the world’s foremost optimist!)

In times like this, one takes one’s happiness where one can find it.

So, yeah: I’m happy that there’s now an “image of a black hole” (or rather, of radio waves being bent around a black hole’s silhouette). If you really want to understand what the now-ubiquitous image is showing, I strongly recommend this guide by Matt Strassler.

I’m happy that integer multiplication can apparently now be done in O(n log n), capping a decades-long quest (see also here).
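The new algorithm, due to David Harvey and Joris van der Hoeven, is far too intricate to sketch here.  But for a taste of how that quest began, here’s a toy Python rendition (mine, purely for illustration) of Karatsuba’s 1962 trick, the first method to beat the schoolbook O(n^2) bound: by trading four half-size multiplications for three, it runs in O(n^1.585).

```python
def karatsuba(x, y):
    """Multiply non-negative integers by Karatsuba's divide-and-conquer method:
    three recursive half-size multiplications instead of four, giving
    O(n^log2(3)) ~ O(n^1.585) bit operations versus the schoolbook O(n^2)."""
    if x < 10 or y < 10:                           # small operands: multiply directly
        return x * y
    half = max(x.bit_length(), y.bit_length()) // 2
    x_hi, x_lo = x >> half, x & ((1 << half) - 1)  # x = x_hi*2^half + x_lo
    y_hi, y_lo = y >> half, y & ((1 << half) - 1)
    a = karatsuba(x_hi, y_hi)                      # high * high
    b = karatsuba(x_lo, y_lo)                      # low * low
    c = karatsuba(x_hi + x_lo, y_hi + y_lo)        # (hi + lo) * (hi + lo)
    # The cross terms x_hi*y_lo + x_lo*y_hi equal c - a - b, so:
    return (a << (2 * half)) + ((c - a - b) << half) + b

x, y = 12345678901234567890, 98765432109876543210
assert karatsuba(x, y) == x * y
```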

I’m happy that there’s now apparently a spectacular fossil record of the first minutes after the asteroid impact that ended the Cretaceous period. Even better will be if this finally proves that, yes, some non-avian dinosaurs were still alive on impact day, and had not gone extinct due to unrelated climate changes slightly earlier. (Last week, my 6-year-old daughter sang a song in a school play about how “no one knows what killed the dinosaurs”—the verses ran through the asteroid and several other possibilities. I wonder if they’ll retire that song next year.) If you haven’t yet read it, the New Yorker piece on this is a must.

I’m happy that my good friend Zach Weinersmith (legendary author of SMBC Comics), as well as the GMU economist Bryan Caplan (rabblerousing author of The Case Against Education, which I reviewed here), have a new book out: a graphic novel that makes a moral and practical case for open borders (!). Their book is now available for pre-order, and at least at some point cracked Amazon’s top 10. Just last week I met Bryan for the first time, when he visited UT Austin to give a talk based on the book. He said that meeting me (having known me only from the blog) was like meeting a fictional character; I said the feeling was mutual. And as for Bryan’s talk? It felt great to spend an hour visiting a fantasyland where the world’s economies are run by humane rationalist technocrats, and where walls are going down rather than up.

I’m happy that, according to this Vanity Fair article, Facebook will still ban you for writing that “men are scum” or that “women are scum”—having ultimately rejected the demands of social-justice activists that it ban only the latter sentence, not the former. According to the article, everyone on Facebook’s Community Standards committee agreed with the activists that this was the right result: dehumanizing comments about women have no place on the platform, while (superficially) dehumanizing comments about men are an important part of feminist consciousness-raising that require protection. The problem was simply that the committee couldn’t come up with any general principle that would yield that desired result, without also yielding bad results in other cases.

I’m happy that the 737 Max jets are grounded and will hopefully be fixed, with no thanks to the FAA. Strangely, while this was still the top news story, I gave a short talk about quantum computing to a group of Boeing executives who were visiting UT Austin on a previously scheduled trip. The title of the session they put me in? “Disruptive Computation.”

I’m happy that Greta Thunberg, the 16-year-old Swedish climate activist, has attracted a worldwide following and might win the Nobel Peace Prize. I hope she does—and more importantly, I hope her personal story will help galvanize the world into accepting things that it already knows are true, as with the example of Anne Frank (or for that matter, Gandhi or MLK). Confession: back when I was an adolescent, I often daydreamed about doing exactly what Thunberg is doing right now, leading a worldwide children’s climate movement. I didn’t, of course. In my defense, I wouldn’t have had the charisma for it anyway—and also, I got sidetracked by even more pressing problems, like working out the quantum query complexity of recursive Fourier sampling. But fate seems to have made an excellent choice in Thunberg. She’s not the prop of any adult—just a nerdy girl with Asperger’s who formed the idea to sit in front of Parliament every day to protest the destruction of the world, because she couldn’t understand why everyone else wasn’t.

I’m happy that the college admissions scandal has finally forced Americans to confront the farcical injustice of our current system—a system where elite colleges pretend to peer into applicants’ souls (or the souls of the essay consultants hired by the applicants’ parents?), and where they preen about the moral virtue of their “holistic, multidimensional” admissions criteria, which amount in practice to shutting out brilliant working-class Asian kids in favor of legacies and rich badminton players. Not to horn-toot, but Steven Pinker and I tried to raise the alarm about this travesty five years ago (see for example this post), and were both severely criticized for it. I do worry, though, that people will draw precisely the wrong lessons from the scandal. If, for example, a few rich parents resorted to outright cheating on the SAT—all their other forms of gaming and fraud apparently being insufficient—then, the reasoning goes, the SAT itself must be to blame, so we should get rid of it. In reality, the SAT (whatever its flaws) is almost the only bulwark we have against the complete collapse of college admissions offices into nightclub bouncers. This Quillette article says it much better than I can.

I’m happy that there will be a symposium from May 6-9 at the University of Toronto, to honor Stephen Cook and the (approximate) 50th anniversary of the discovery of NP-completeness. I’m happy that I’ll be attending and speaking there. If you’re interested, there’s still time to register!

Finally, I’m happy about the following “Sierpinskitaschen” baked by CS grad student and friend-of-the-blog Jess Sorrell, and included with her permission (Jess says she got the idea from Debs Gardner).

[Photo: the Sierpinskitaschen]

Boof

Tuesday, October 2nd, 2018

(Just a few politics-related comments to get off my chest.  Feel free to skip if American politics isn’t your 5-liter bottle of Coke.)

FiveThirtyEight currently gives Beto O’Rourke a ~29% chance of winning Ted Cruz’s Senate seat.  I wish it were higher, but I think this will be such a spectacular upset if it happens, and so transformative for Texas, that it’s well worth our support.  I’ve also been impressed by the enthusiasm of Beto’s campaign—including a rally in Austin this weekend where the 85-year-old Willie Nelson, headlining the first political event of his 60-year music career, performed a new song (“Vote ‘Em Out”).  I’ll tell you what: if anyone donates to Beto’s campaign within the next two days as a result of reading this post, and emails or leaves a comment to tell me about it, I’ll match their donation, up to my personal Tsirelson bound of $853.

Speaking of which, if you’re a US citizen and are not currently registered to vote, please do so!  And then show up and vote in the midterms!  My personal preference is to treat voting as simply a categorical imperative.  But if you’d like a mathematical discussion of the expected utility of voting, then check out this, by my former MIT undergraduate advisee Shaunak Kishore.

But what about the highest questions currently facing the American republic: namely, the exact meanings of “boofing,” “Devil’s triangle,” and “Renate alumnius”?  I’ve been reading the same articles and analyses as everybody else, and have no privileged insight.  For what it’s worth, though, I think it’s likely that Blasey Ford is telling the truth.  And I think it’s likely that Kavanaugh is lying—if not about the assault itself (which he might genuinely have no memory of—blackout is a real phenomenon), then certainly about his teenage drinking and other matters.  And while, absent some breakthrough in the FBI investigation, none of this rises to the beyond-a-reasonable-doubt standard, I think it likely should be seen as disqualifying for the Supreme Court.  (Admittedly, I’m not a good arbiter of that question, since there are about 200 unrelated reasons why I don’t want Kavanaugh near the Court.)  I also think it’s perfectly reasonable of Senate Democrats to fight this one to the bitter end, particularly after what the Republicans did to Merrick Garland, and what Kavanaugh himself did to Bill Clinton.  If you’re worried about the scorched-earth, all-defect equilibrium that seems to prevail in Congress—well, the Democrats are not the ones who started it.

All of that would be one thing, coming from some hardened social-justice type who might have happily convicted Kavanaugh of aggravated white male douchiness even before his umbilical cord was cut.  But I daresay that it means a bit more, coming from an individual who hundreds of online activists once denounced just as fervently as they now denounce Kavanaugh—someone who understands perfectly well that not even the allegation of wrongdoing is needed any longer for a person to be marked for flattening by the steamroller of Progress.  What can I say?  The enemy of my enemy is sometimes still my enemy.  My friend is anybody, of whatever party or creed, who puts their humanity above their ideology.  Justice is no respecter of persons.  Sometimes those who earn the mob’s ire are nevertheless guilty.

I was actually in the DC area the week of the Kavanaugh hearings, to speak at a quantum information panel on Capitol Hill convened by the House Science Committee, to participate in a quantum machine learning workshop at UMD, and to deliver the Nathan Krasnopoler Memorial Lecture at Johns Hopkins, which included the incredibly moving experience of meeting Nathan’s parents.

The panel went fine, I think.  Twenty or thirty Congressional staffers attended, including many of those involved in the National Quantum Initiative bill.  They asked us about the US’s standing relative to China in QIS; the relations among academia, industry, and national labs; and how to train a ‘quantum workforce.’  We panelists came prepared with a slide about what qubits and interference are, but ended up never needing it: the focus was emphatically on policy, not science.

Kamala Harris (D-CA) is the leader in the Senate for what’s now called the Quantum Computing Research Act.  One of Sen. Harris’s staffers conveyed to me that, given her great enthusiasm for quantum computing, the Senator would have been delighted to meet with me, but was unfortunately too busy with Kavanaugh-related matters.  This was better than what I’d feared, namely: “following the lead of various keyboard warriors on Twitter and Reddit, Sen. Harris denounces you, Dr. Aaronson, as a privileged white male techbro and STEMlord, and an enemy of the people.”  So once again I was face-to-face with the question: is it conceivable that social-media discourse is a bit … unrepresentative of the wider world?

Review of Steven Pinker’s Enlightenment Now

Thursday, March 22nd, 2018

It’s not every day that I check my office mailbox and, amid the junk brochures, find 500 pages on the biggest questions facing civilization—all of them, basically—by possibly the single person on earth most qualified to tackle those questions.  That’s what happened when, on a trip back to Austin from my sabbatical, I found a review copy of Steven Pinker’s Enlightenment Now: The Case for Reason, Science, Humanism, and Progress.

I met with Steve while he was writing this book, and fielded his probing questions about the relationships among the concepts of information, entropy, randomness, Kolmogorov complexity, and coarse graining, in a way that might have affected a few paragraphs in Chapter 2.  I’m proud to be thanked in the preface—well, as “Scott Aronson.”  I have a lot of praise for the book, but let’s start with this: the omission of the second “a” from my surname was the worst factual error that I found.

If you’ve read anything else by Pinker, then you more-or-less know what to expect: an intellectual buffet that’s pure joy to devour, even if many of the dishes are ones you’ve tasted before.  For me, the writing alone is worth the admission price: Pinker is, among many other distinctions, the English language’s master of the comma-separated list.  I can see why Bill Gates recently called Enlightenment Now his “new favorite book of all time”—displacing his previous favorite, Pinker’s earlier book The Better Angels of Our Nature.  If you’ve read Better Angels, to which Enlightenment Now functions as a sort of sequel, then you know even more specifically what to expect: a saturation bombing of line graphs showing you how, despite the headlines, the world has been getting better in almost every imaginable way—graphs so thorough that they’ll eventually drag the most dedicated pessimist, kicking and screaming, into sharing Pinker’s sunny disposition, at least temporarily (but more about that later).

The other book to which Enlightenment Now bears comparison is David Deutsch’s The Beginning of Infinity.  The book opens with one of Deutsch’s maxims—“Everything that is not forbidden by laws of nature is achievable, given the right knowledge”—and Deutsch’s influence can be seen throughout Pinker’s new work, as when Pinker repeats the Deutschian mantra that “problems are solvable.”  Certainly Deutsch and Pinker have a huge amount in common: classical liberalism, admiration for the Enlightenment as perhaps the best thing that ever happened to the human species, and barely-perturbable optimism.

Pinker’s stated aim is to make an updated case for the Enlightenment—and specifically, for the historically unprecedented “ratchet of progress” that humankind has been on for the last few hundred years—using the language and concepts of the 21st century.  Some of his chapter titles give a sense of the scope of the undertaking:

  • Life
  • Health
  • Wealth
  • Inequality
  • The Environment
  • Peace
  • Safety
  • Terrorism
  • Equal Rights
  • Knowledge
  • Happiness
  • Reason
  • Science

When I read these chapter titles aloud to my wife, she laughed, as if to say: how could anyone have the audacity to write a book on just one of these enormities, let alone all of them?  But you can almost hear the gears turning in Pinker’s head as he decided to do it: well, someone ought to take stock in a single volume of where the human race is and where it’s going.  And if, with the rise of thuggish autocrats all over the world, the principles of modernity laid down by Locke, Spinoza, Kant, Jefferson, Hamilton, and Mill are under attack, then someone ought to rise to those principles’ unironic defense.  And if no one else will do it, it might as well be me!  If that’s how Pinker thought, then I agree: it might as well have been him.

I also think Pinker is correct that Enlightenment values are not so anodyne that they don’t need a defense.  Indeed, nothing demonstrates the case for Pinker’s book, the non-obviousness of his thesis, more clearly than the vitriolic reviews the book has been getting in literary venues.  Take this, for example, from John Gray in The New Statesman: “Steven Pinker’s embarrassing new book is a feeble sermon for rattled liberals.”

Pinker is an ardent enthusiast for free-market capitalism, which he believes produced most of the advance in living standards over the past few centuries. Unlike [Herbert Spencer, the founder of Social Darwinism], he seems ready to accept that some provision should be made for those who have been left behind. Why he makes this concession is unclear. Nothing is said about human kindness, or fairness, in his formula. Indeed, the logic of his dictum points the other way.

Many early-20th-century Enlightenment thinkers supported eugenic policies because they believed “improving the quality of the population” – weeding out human beings they deemed unproductive or undesirable – would accelerate the course of human evolution…

Exponents of scientism in the past have used it to promote Fabian socialism, Marxism-Leninism, Nazism and more interventionist varieties of liberalism. In doing so, they were invoking the authority of science to legitimise the values of their time and place. Deploying his cod-scientific formula to bolster market liberalism, Pinker does the same.

You see, when Pinker says he supports Enlightenment norms of reason and humanism, he really means to say that he supports unbridled capitalism and possibly even eugenics.  As I read this sort of critique, the hair stands on my neck, because the basic technique of hostile mistranslation is so familiar to me.  It’s the technique that once took a comment in which I pled for shy nerdy males and feminist women to try to understand each other’s suffering, as both navigate a mating market unlike anything in previous human experience—and somehow managed to come away with the take-home message, “so this entitled techbro wants to return to a past when society would just grant him a female sex slave.”

I’ve noticed that everything Pinker writes bears the scars of the hostile mistranslation tactic.  Scarcely does he say anything before he turns around and says, “and here’s what I’m not saying”—and then proceeds to ward off five different misreadings so wild they wouldn’t have occurred to me, but then if you read Leon Wieseltier or John Gray or his other critics, there the misreadings are, trotted out triumphantly; it doesn’t even matter how much time Pinker spent trying to prevent them.


OK, but what of the truth or falsehood of Pinker’s central claims?

I share Pinker’s sense that the Enlightenment may be the best thing that ever happened in our species’ sorry history.  I agree with his facts, and with his interpretations of the facts.  We rarely pause to consider just how astounding it is—how astounding it would be to anyone who lived before modernity—that child mortality, hunger, and disease have plunged as far as they have, and we show colossal ingratitude toward the scientists and inventors and reformers who made it possible.  (Pinker lists the following medical researchers and public health crusaders as having saved more than 100 million lives each: Karl Landsteiner, Abel Wolman, Linn Enslow, William Foege, Maurice Hilleman, John Enders.  How many of them had you heard of?  I’d heard of none.)  This is, just as Pinker says, “the greatest story seldom told.”

Beyond the facts, I almost always share Pinker’s moral intuitions and policy preferences.  He’s right that, whether we’re discussing nuclear power, terrorism, or GMOs, going on gut feelings like disgust and anger, or on vivid and memorable incidents, is a terrible way to run a civilization.  Instead we constantly need to count: how many would be helped by this course of action, how many would be harmed?  As Pinker points out, that doesn’t mean we need to become thoroughgoing utilitarians, and start fretting about whether the microscopic proto-suffering of a bacterium, multiplied by the 10^31 bacteria that there are, outweighs every human concern.  It just means that we should heed the utilitarian impulse to quantify way more than is normally done—at the least, in every case where we’ve already implicitly accepted the underlying values, but might be off by orders of magnitude in guessing what they imply about our choices.

The one aspect of Pinker’s worldview that I don’t share—and it’s a central one—is his optimism.  My philosophical temperament, you might say, is closer to that of Rebecca Newberger Goldstein, the brilliant novelist and philosopher (and Pinker’s wife), who titled a lecture given shortly after Trump’s election “Plato’s Despair.”

Somehow, I look at the world from more-or-less the same vantage point as Pinker, yet am terrified rather than hopeful.  I’m depressed that, even after Enlightenment values have made it so far, there’s an excellent chance (it seems to me) that it will all be for naught, as civilization slides back into authoritarianism, and climate change and deforestation and ocean acidification make the one known planet fit for human habitation increasingly unlivable.

I’m even depressed that Pinker’s book has gotten such hostile reviews.  I’m depressed, more broadly, that for centuries, the Enlightenment has been met by its beneficiaries with such colossal incomprehension and ingratitude.  Save 300 million people from smallpox, and you can expect in return a lecture about your naïve and arrogant scientistic reductionism.  Or, electronically connect billions of people to each other and to the world’s knowledge, in a way beyond the imaginings of science fiction half a century ago, and people will use the new medium to rail against the gross, basement-dwelling nerdbros who made it possible, then upvote and Like each other for their moral courage in doing so.

I’m depressed by the questions: how can a human race that reacts in that way to the gifts of modernity possibly be trusted to use those gifts responsibly?  Does it even “deserve” the gifts?

As I read Pinker, I sometimes imagined a book published in 1923 about the astonishing improvements in the condition of Europe’s Jews following their emancipation.  Such a book might argue: look, obviously past results don’t guarantee future returns; all this progress could be wiped out by some freak future event.  But for that to happen, an insane number of things would need to go wrong simultaneously: not just one European country but pretty much all of them would need to be taken over by antisemitic lunatics who were somehow also hyper-competent, and who wouldn’t just harass a few Jews here and there until the lunatics lost power, but would systematically hunt down and exterminate all of them with an efficiency the world had never before seen.  Also, for some reason the Jews would need to be unable to escape to Palestine or the US or anywhere else.  So the sane, sober prediction is that things will just continue to improve, of course with occasional hiccups (but problems are solvable).

Or I thought back to just a few years ago, to the wise people who explained that, sure, for the United States to fall under the control of a racist megalomaniac like Trump would be a catastrophe beyond imagining.  Were such a comic-book absurdity realized, there’d be no point even discussing “how to get democracy back on track”; it would already have suffered its extinction-level event.  But the good news is that it will never happen, because the voters won’t allow it: a white nationalist authoritarian could never even get nominated, and if he did, he’d lose in a landslide.  What did Pat Buchanan get, less than 1% of the vote?

I don’t believe in a traditional God, but if I did, the God who I’d believe in is one who’s constantly tipping the scales of fate toward horribleness—a God who regularly causes catastrophes to happen, even when all the rational signs point toward their not happening—basically, the God who I blogged about here.  The one positive thing to be said about my God is that, unlike the just and merciful kind, I find that mine rarely lets me down.

Pinker is not blind.  Again and again, he acknowledges the depths of human evil and idiocy, the forces that even now look to many of us like they’re leaping up at Pinker’s exponential improvement curves with bared fangs.  It’s just that each time, he recommends putting an optimistic spin on the situation, because what’s the alternative?  Just to get all, like, depressed?  That would be unproductive!  As Deutsch says, problems will always arise, but problems are solvable, so let’s focus on what it would take to solve them, and on the hopeful signs that they’re already being solved.

With climate change, Pinker gives an eloquent account of the enormity of the crisis, echoing the mainstream scientific consensus in almost every particular.  But he cautions that, if we tell people this is plausibly the end of civilization, they’ll just get fatalistic and paralyzed, so it’s better to talk about solutions.  He recommends an aggressive program of carbon pricing, carbon capture and storage, nuclear power, research into new technologies, and possibly geoengineering, guided by strong international cooperation—all things I’d recommend as well.  OK, but what are the indications that anything even close to what’s needed will get done?  The right time to get started, it seems to me, was over 40 years ago.  Since then, the political forces that now control the world’s largest economy have spiraled into ever more vitriolic denial, the more urgent the crisis has gotten and the more irrefutable the evidence.  Pinker writes:

“We cannot be complacently optimistic about climate change, but we can be conditionally optimistic.  We have some practicable ways to prevent the harms and we have the means to learn more.  Problems are solvable.  That does not mean that they will solve themselves, but it does mean that we can solve them if we sustain the benevolent forces of modernity that have allowed us to solve problems so far…” (pp. 154-155)

I have no doubt that conditional optimism is a useful stance to adopt, in this case as in many others.  The trouble, for me, is the gap between the usefulness of a view and its probable truth—a gap that Pinker would be quick to remind me about in other contexts.  Even if a placebo works for those who believe in it, how do you make yourself believe in what you understand to be a placebo?  Even if all it would take, for the inmates to escape a prison, is simultaneous optimism that they’ll succeed if they work together—still, how can an individual inmate be optimistic, if he sees that the others aren’t, and rationally concludes that dying in prison is his probable fate?  For me, the very thought of the earth gone desolate—its remaining land barely habitable, its oceans a sewer, its radio beacons to other worlds fallen silent—all for want of ability to coordinate a game-theoretic equilibrium, just depresses me even more.

Likewise with thermonuclear war: Pinker knows, of course, that even if there were “only” a 0.5% chance of one per year, multiplied across the decades of the nuclear era, that’s enormously, catastrophically too high, and there have already been too many close calls.  But look on the bright side: the US and Russia have already reduced their arsenals dramatically from their Cold War highs.  There’d be every reason for optimism about continued progress, if we weren’t in this freak branch of the wavefunction where the US and Russia (not to mention North Korea and other nuclear states) were now controlled by authoritarian strongmen.
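To spell out the arithmetic (my own numbers, purely illustrative): a 0.5% annual chance, compounded independently over the ~73 years of the nuclear era so far, already implies roughly a 1-in-3 chance that a thermonuclear war should have happened by now.

```python
p_per_year = 0.005                          # Pinker's hypothetical "only" 0.5%
years = 73                                  # 1945 through the time of writing
p_at_least_one = 1 - (1 - p_per_year) ** years
print(f"{p_at_least_one:.0%}")              # about 31%
```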

With Trump—for how could anyone avoid him in a book like this?—Pinker spends several pages reviewing the damage he’s inflicted on democratic norms, the international order, the environment, and the ideal of truth itself:

“Trump’s barefaced assertion of canards that can instantly be debunked … shows that he sees public discourse not as a means of finding common ground based on objective reality but as a weapon with which to project dominance and humiliate rivals” (p. 336).

Pinker then writes a sentence that made me smile ruefully: “Not even a congenital optimist can see a pony in this Christmas stocking” (p. 337).  Again, though, Pinker looks at poll data suggesting that Trump and the world’s other resurgent quasi-fascists are not the wave of the future, but the desperate rearguard actions of a dwindling and aging minority that feels itself increasingly marginalized by the modern world (and accurately so).  The trouble is, Nazism could also be seen as “just” a desperate, failed attempt to turn back the ratchet of cosmopolitanism and moral progress, by people who viscerally understood that time and history were against them.  Yet even though Nazism ultimately lost (which was far from inevitable, I think), the damage it inflicted on its way out was enough, you might say, to vindicate the shrillest pessimist of the 1930s.

Then there’s the matter of takeover by superintelligent AI.  I’ve now spent years hanging around communities where it’s widely accepted that “AI value alignment” is the most pressing problem facing humanity.  I strongly disagree with this view—but on reflection, not because I don’t think AI could be a threat; only because I think other, more prosaic things are much more imminent threats!  I feel the urge to invent a new, 21st-century Yiddish-style proverb: “oy, that we should only survive so long to see the AI-bots become our worst problem!”

Pinker’s view is different: he’s dismissive of the fear (even putting it in the context of the Y2K bug, and people marching around sidewalks with sandwich boards that say “REPENT”), and thinks the AI-risk folks are simply making elementary mistakes about the nature of intelligence.  Pinker’s arguments are as follows: first, intelligence is not some magic, all-purpose pixie dust, which humans have more of than animals, and which a hypothetical future AI would have more of than humans.  Instead, the brain is a bundle of special-purpose modules that evolved for particular reasons, so “the concept [of artificial general intelligence] is barely coherent” (p. 298).  Second, it’s only humans’ specific history that causes them to think immediately about conquering and taking over, as goals to which superintelligence would be applied.  An AI could have different motivations entirely—and it will, if its programmers have any sense.  Third, any AI would be constrained by the resource limits of the physical world.  For example, just because an AI hatched a brilliant plan to recursively improve itself, doesn’t mean it could execute that plan without (say) building a new microchip fab, acquiring the necessary raw materials, and procuring the cooperation of humans.  Fourth, it’s absurd to imagine a superintelligence converting the universe into paperclips because of some simple programming flaw or overliteral interpretation of human commands, since understanding nuances is what intelligence is all about:

“The ability to choose an action that best satisfies conflicting goals is not an add-on to intelligence that engineers might slap themselves in the forehead for forgetting to install; it is intelligence.  So is the ability to interpret the intentions of a language user in context” (p. 300).

I’ll leave it to those who’ve spent more time thinking about these issues to examine these arguments in detail (in the comments of this post, if they like).  But let me indicate briefly why I don’t think they fare too well under scrutiny.

For one thing, notice that the fourth argument is in fundamental tension with the first and second.  If intelligence is not an all-purpose elixir but a bundle of special-purpose tools, and if those tools can be wholly uncoupled from motivation, then why couldn’t we easily get vast intelligence expended toward goals that looked insane from our perspective?  Have humans never been known to put great intelligence in the service of ends that strike many of us as base, evil, simpleminded, or bizarre?  Consider the phrase often applied to men: “thinking with their dicks.”  Is there any sub-Einsteinian upper bound on the intelligence of the men who’ve been guilty of that?

Second, while it seems clear that there are many special-purpose mental modules—the hunting instincts of a cat, the mating calls of a bird, the pincer-grasping or language-acquisition skills of a human—it seems equally clear that there is some such thing as “general problem-solving ability,” which Newton had more of than Roofus McDoofus, and which even Roofus has more of than a chicken.  But whatever we take that ability to consist of, and whether we measure it by a scalar or a vector, it’s hard to imagine that Newton was anywhere near whatever limits on it are imposed by physics.  His brain was subject to all sorts of archaic evolutionary constraints, from the width of the birth canal to the amount of food available in the ancestral environment, and possibly also to diminishing returns on intelligence in humans’ social environment (Newton did, after all, die a virgin).  But if so, then given the impact that Newton, and others near the ceiling of known human problem-solving ability, managed to achieve even with their biology-constrained brains, how could we possibly see the prospect of removing those constraints as just a narrow technological matter, like building a faster calculator or a more precise clock?

Third, the argument about intelligence being constrained by physical limits would seem to work equally well for a mammoth or cheetah scoping out the early hominids.  The mammoth might say: yes, these funny new hairless apes are smarter than me, but intelligence is just one factor among many, and often not the decisive one.  I’m much bigger and stronger, and the cheetah is faster.  (If the mammoth did say that, it would be an unusually smart mammoth as well, but never mind.)  Of course we know what happened: from wild animals’ perspective, the arrival of humans really was a catastrophic singularity, comparable to the Chicxulub asteroid (and far from over), albeit one that took between 10^4 and 10^6 years depending on when we start the clock.  Over the short term, the optimistic mammoths would be right: pure, disembodied intelligence can’t just magically transform itself into spears and poisoned arrows that render you extinct.  Over the long term, the most paranoid mammoth on the tundra couldn’t imagine the half of what the new “superintelligence” would do.

Finally, any argument that relies on human programmers choosing not to build an AI with destructive potential, has to contend with the fact that humans did invent, among other things, nuclear weapons—and moreover, for what seemed like morally impeccable reasons at the time.  And a dangerous AI would be a lot harder to keep from proliferating, since it would consist of copyable code.  And it would only take one.  You could, of course, imagine building a good AI to neutralize the bad AIs, but by that point there’s not much daylight left between you and the AI-risk people.


As you’ve probably gathered, I’m a worrywart by temperament (and, I like to think, experience), and I’ve now spent a good deal of space on my disagreements with Pinker that flow from that.  But the funny part is, even though I consistently see clouds where he sees sunshine, we’re otherwise looking at much the same scene, and our shared view also makes us want the same things for the world.  I find myself in overwhelming, nontrivial agreement with Pinker about the value of science, reason, humanism, and Enlightenment; about who and what deserves credit for the stunning progress humans have made; about which tendencies of civilization to nurture and which to recoil in horror from; about how to think and write about any of those questions; and about a huge number of more specific issues.

So my advice is this: buy Pinker’s book and read it.  Then work for a future where the book’s optimism is justified.

Quickies

Monday, December 4th, 2017

Updates (Dec. 5): The US Supreme Court has upheld Trump’s latest travel ban. I’m grateful to all the lawyers who have thrown themselves in front of the train of fascism, desperately trying to slow it down—but I could never, ever have been a lawyer myself. Law is fundamentally a make-believe discipline. Sure, there are times when it involves reason and justice, possibly even resembles mathematics—but then there are times when the only legally correct thing to say is, “I guess that, contrary to what I thought, the Establishment Clause of the First Amendment does let you run for president promising to discriminate against a particular religious group, and then find a pretext under which to do it. The people with the power to decide that question have decided it.” I imagine that I’d last about half a day before tearing up my law-school diploma in disgust, which is surely a personality flaw on my part.

In happier news, many of you may have seen that papers by the groups of Chris Monroe and of Misha Lukin, reporting ~50-qubit experiments with trapped ions and optical lattices respectively, have been published back-to-back in Nature. (See here and here for popular summaries.) As far as I can tell, these papers represent an important step along the road to a clear quantum supremacy demonstration. Ideally, one wants a device to solve a well-defined computational problem (possibly a sampling problem), and also highly-optimized classical algorithms for solving the same problem and for simulating the device, which both let one benchmark the device’s performance and verify that the device is solving the problem correctly. But in a curious convergence, the Monroe and Lukin groups’ work suggests that this can probably be achieved with trapped ions and/or optical lattices at around the same time that Google and IBM are closing in on the goal with superconducting circuits.
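To make the verification logic concrete, here’s a minimal Python sketch: my own toy illustration, not anything from the Monroe, Lukin, Google, or IBM papers.  It benchmarks a sampling “device” against an exact classical simulation, for a handful of qubits where exact simulation is trivial.  The linear cross-entropy score is one possible figure of merit; all names and parameters below are mine.

```python
import numpy as np

# Toy benchmarking loop: classically compute the ideal output distribution
# of a small "random circuit" (here, a single Haar-random unitary), then
# score a batch of samples against it.  Ideal samples should score near 1,
# uniform noise near 0.
rng = np.random.default_rng(0)
n = 5                       # qubits -- small enough to simulate exactly
dim = 2 ** n

def random_unitary(d):
    """Haar-random unitary via QR decomposition of a Gaussian matrix."""
    z = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    q, r = np.linalg.qr(z)
    phases = np.diagonal(r) / np.abs(np.diagonal(r))
    return q * phases

U = random_unitary(dim)
state = U[:, 0]             # apply U to |00...0>
probs = np.abs(state) ** 2
probs /= probs.sum()        # guard against float rounding

# Stand-in for the device: sample from the ideal distribution itself.
samples = rng.choice(dim, size=2000, p=probs)
f_ideal = dim * probs[samples].mean() - 1

# Sanity check: uniformly random bitstrings should score near zero.
junk = rng.integers(0, dim, size=2000)
f_junk = dim * probs[junk].mean() - 1

print(f"linear cross-entropy, 'device' samples: {f_ideal:.3f}")  # ~1
print(f"linear cross-entropy, uniform noise:    {f_junk:.3f}")   # ~0
```

The point of the exercise: once n creeps past ~50 qubits, the `probs` array above no longer fits in any classical memory, which is exactly what would make such a demonstration interesting.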


As everyone knows, the flaming garbage fire of a tax bill has passed the Senate, thanks to the spinelessness of John McCain, Lisa Murkowski, Susan Collins, and Jeff Flake.  The fate of American higher education will now be decided behind closed doors, in the technical process of “reconciling” the House bill (which includes the crippling new tax on PhD students) with the Senate bill (which doesn’t—that one merely guts a hundred other things).  It’s hard to imagine that this particular line item will occasion more than about 30 seconds of discussion.  But, I dunno, maybe calling your Senator or Representative could help.  Me, I left a voicemail message with the office of Texas Senator Ted Cruz, one that I’m confident Cruz and his staff will carefully consider.

Here’s talk show host Seth Meyers (scroll to 5:00-5:20):

“By 2027, half of all US households would pay more in taxes [under the new bill].  Oh my god.  Cutting taxes was the one thing Republicans were supposed to be good at.  What’s even the point of voting for a Republican if they’re going to raise your taxes?  That’s like tuning in to The Kardashians only to see Kourtney giving a TED talk on quantum computing.”


Speaking of which, you can listen to an interview with me about quantum computing, on a podcast called Data Skeptic. We discuss the basics and then the potential for quantum machine learning algorithms.


I got profoundly annoyed by an article called The Impossibility of Intelligence Explosion by François Chollet.  Citing the “No Free Lunch Theorem”—i.e., the (trivial) statement that you can’t outperform brute-force search on random instances of an optimization problem—to claim anything useful about the limits of AI, is not a promising sign.  In this case, Chollet then goes on to argue that most intelligence doesn’t reside in individuals but rather in culture; that there are hard limits to intelligence and to its usefulness; that we know of those limits because people with stratospheric intelligence don’t achieve correspondingly extraordinary results in life [von Neumann? Newton? Einstein? –ed.]; and finally, that recursively self-improving intelligence is impossible because we, humans, don’t recursively improve ourselves.  Scattered throughout the essay are some valuable critiques, but nothing comes anywhere close to establishing the impossibility advertised in the title.  Like, there’s a standard in CS for what it takes to show something’s impossible, and Chollet doesn’t even reach the same galaxy as that standard.  The certainty that he exudes strikes me as wholly unwarranted, just as much as (say) the near-certainty of a Ray Kurzweil on the other side.
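Since the No Free Lunch Theorem gets invoked, here’s a tiny Python sketch (my own illustration, nothing from Chollet’s essay) of what the theorem actually does and doesn’t say: averaged over all possible functions on a finite domain, any two search strategies that never revisit a point perform identically, which tells you nothing about problems with structure.

```python
import itertools
import numpy as np

# No Free Lunch, toy version: average over ALL functions f: X -> Y and
# compare two non-revisiting search strategies with the same query budget.
X_SIZE = 4                  # domain {0, 1, 2, 3}
Y = range(3)                # possible values 0, 1, 2

def fixed_order(f):
    """Strategy A: query points 0 and 1, in that order."""
    return max(f[0], f[1])

def adaptive(f):
    """Strategy B: query point 3, then pick the next point based on
    what was observed -- an arbitrary 'clever' rule."""
    nxt = 0 if f[3] >= 1 else 2
    return max(f[3], f[nxt])

# Enumerate every function from the domain to Y (3^4 = 81 of them) and
# average the best value each strategy finds with its two queries.
all_fs = list(itertools.product(Y, repeat=X_SIZE))
print(f"fixed-order average: {np.mean([fixed_order(f) for f in all_fs]):.4f}")
print(f"adaptive average:    {np.mean([adaptive(f) for f in all_fs]):.4f}")
# Both print 1.4444: over the uniform distribution on functions, no
# strategy beats any other.  Real-world problems aren't uniform, which is
# why NFL alone can't bound what actual AI systems can do.
```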

I suppose this is as good a place as any to say that my views on AI risk have evolved.  A decade ago, it was far from obvious that known methods like deep learning and reinforcement learning, merely run with much faster computers and on much bigger datasets, would work as spectacularly well as they’ve turned out to work, on such a wide variety of problems, including beating all humans at Go without needing to be trained on any human game.  But now that we know these things, I think intellectual honesty requires updating on them.  And indeed, when I talk to the AI researchers whose expertise I trust the most, many, though not all, have updated in the direction of “maybe we should start worrying.”  (Related: Eliezer Yudkowsky’s There’s No Fire Alarm for Artificial General Intelligence.)

Who knows how much of the human cognitive fortress might fall to a few more orders of magnitude in processing power?  I don’t—not in the sense of “I basically know but am being coy,” but really in the sense of not knowing.

To be clear, I still think that by far the most urgent challenges facing humanity are things like: resisting Trump and the other forces of authoritarianism, slowing down and responding to climate change and ocean acidification, preventing a nuclear war, preserving what’s left of Enlightenment norms.  But I no longer put AI too far behind that other stuff.  If civilization manages not to destroy itself over the next century—a huge “if”—I now think it’s plausible that we’ll eventually confront questions about intelligences greater than ours: do we want to create them?  Can we even prevent their creation?  If they arise, can we ensure that they’ll show us more regard than we show chimps?  And while I don’t know how much we can say about such questions that’s useful, without way more experience with powerful AI than we have now, I’m glad that a few people are at least trying to say things.

But one more point: given the way civilization seems to be headed, I’m actually mildly in favor of superintelligences coming into being sooner rather than later.  Like, given the choice between a hypothetical paperclip maximizer destroying the galaxy, versus a delusional autocrat burning civilization to the ground while his supporters cheer him on and his opponents fight amongst themselves, I’m just about ready to take my chances with the AI.  Sure, superintelligence is scary, but superstupidity has already been given its chance and been found wanting.


Speaking of superintelligences, I strongly recommend an interview of Ed Witten by Quanta magazine’s Natalie Wolchover: one of the best interviews of Witten I’ve read.  Some of Witten’s pronouncements still tend toward the oracular—i.e., we’re uncovering facets of a magnificent new theoretical structure, but it’s almost impossible to say anything definite about it, because we’re still missing too many pieces—but in this interview, Witten does stick his neck out in some interesting ways.  In particular, he speculates (as Einstein also did, late in life) about whether physics should be reformulated without any continuous quantities.  And he reveals that he’s recently been rereading Wheeler’s old “It from Bit” essay, because: “I’m trying to learn about what people are trying to say with the phrase ‘it from qubit.'”


I’m happy to report that a group based mostly in Rome has carried out the first experimental demonstration of PAC-learning of quantum states, applying my 2006 “Quantum Occam’s Razor Theorem” to reconstruct optical states of up to 6 qubits.  Better yet, they insisted on adding me to their paper!
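For readers curious what “PAC-learning a quantum state” even means operationally, here’s a minimal single-qubit toy in Python (my own sketch of the underlying idea, emphatically not the Rome group’s experiment): fit a hypothesis state to the statistics of a modest number of training measurements, then check that it predicts measurements it was never fit to.

```python
import numpy as np

# Toy version of learning a quantum state from measurement statistics.
# For one qubit, Tr(E rho) is linear in rho's Bloch vector, so fitting a
# hypothesis state to training data is just a linear regression.  (Real
# experiments see noisy sampled statistics; we use exact ones here.)
rng = np.random.default_rng(1)
I2 = np.eye(2)
PAULIS = [np.array([[0, 1], [1, 0]], dtype=complex),    # X
          np.array([[0, -1j], [1j, 0]]),                # Y
          np.array([[1, 0], [0, -1]], dtype=complex)]   # Z

def bloch_to_state(v):
    """Density matrix 0.5*(I + v . sigma) for a Bloch vector v."""
    return 0.5 * (I2 + sum(c * P for c, P in zip(v, PAULIS)))

def random_effect():
    """A two-outcome measurement effect 0.5*(I + u . sigma), |u| = 1."""
    u = rng.normal(size=3)
    return bloch_to_state(u / np.linalg.norm(u))

v_true = rng.normal(size=3)
v_true *= rng.random() ** (1 / 3) / np.linalg.norm(v_true)  # inside ball
rho = bloch_to_state(v_true)                 # the unknown state

train = [random_effect() for _ in range(20)]
y = np.array([np.trace(E @ rho).real for E in train])

# Least-squares fit of the hypothesis Bloch vector.
A = np.array([[0.5 * np.trace(E @ P).real for P in PAULIS] for E in train])
w, *_ = np.linalg.lstsq(A, y - 0.5, rcond=None)
sigma = bloch_to_state(w)                    # hypothesis state

# The real test: measurements the fit never saw.
test = [random_effect() for _ in range(10)]
err = max(abs(np.trace(E @ (rho - sigma)).real) for E in test)
print(f"worst prediction error on unseen measurements: {err:.2e}")
```

The content of the 2006 theorem is quantitative: the number of training measurements needed grows only linearly with the number of qubits, even though writing the state down would take exponentially many parameters.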


I was at Cornell all of last week to give the Messenger Lectures: six talks in all (!!), if you include the informal talks that I gave at student houses (including Telluride House, where I lived as a Cornell undergrad from 1998 to 2000).  The subjects were my usual beat (quantum computing, quantum supremacy, learnability of quantum states, firewalls and AdS/CFT, big numbers).  Intimidatingly, the Messenger Lectures are the series in which Richard Feynman presented The Character of Physical Law in 1964, and in which many others (Eddington, Oppenheimer, Pauling, Weinberg, …) set a standard that my crass humor couldn’t live up to in a trillion years.  Nevertheless, thanks so much to Paul Ginsparg for hosting my visit, and for making it both intellectually stimulating and a trip down memory lane, with meetings with many of the professors from way back when who helped to shape my thinking, including Bart Selman, Jon Kleinberg, and Lillian Lee.  Cornell is much as I remember it from half a lifetime ago, except that they must’ve made the slopes twice as steep, since I don’t recall so much huffing and puffing on my way to class each morning.

At one of the dinners, my hosts asked me about the challenges of writing a blog when people on social media might vilify you for what you say.  I remarked that it hasn’t been too bad lately—indeed that these days, to whatever extent I write anything ‘controversial,’ mostly it’s just inveighing against Trump.  “But that is scary!” someone remarked.  “You live in Texas now!  What if someone with a gun got angry at you?”  I replied that the prospect of enraging such a person doesn’t really keep me awake at night, because it seems like the worst they could do would be to shoot me.  By contrast, if I write something that angers leftists, they can do something far scarier: they can make me feel guilty!


I’ll be giving a CS colloquium at Georgia Tech today, then attending workshops in Princeton and NYC the rest of the week, so my commenting might be lighter than usual … but yours need not be.

The destruction of graduate education in the United States

Friday, November 17th, 2017

If and when you’ve emerged from your happiness bubble to read the news, you’ll have seen (at least if you live in the US) that the cruel and reckless tax bill has passed the House of Representatives, and remains only to be reconciled with an equally-vicious Senate bill and then voted on by the Republican-controlled Senate.  The bill will add about $1.7 trillion to the national debt and raise taxes for about 47.5 million people, all in order to deliver a massive windfall to corporations, and to wealthy estates that already pay some of the lowest taxes in the developed world.

In a still-functioning democracy, those of us against such a policy would have an intellectual obligation to seek out the strongest arguments in favor of the policy and try to refute them.  By now, though, it seems to me that the Republicans hold the public in such contempt, and are so sure of the power of gerrymandering and voter restrictions to protect themselves from consequences, that they didn’t even bother to bring anything to the debate more substantive than the schoolyard bully’s “stop punching yourself.”  I guess some of them still repeat the fairytale about the purpose of tax cuts for the super-rich being to trickle down and help everyone else—but can even they advance that “theory” anymore without stifling giggles?  Mostly, as far as I can tell, they just brazenly deny that they’re doing what they obviously are doing: i.e., gleefully setting on fire anything that anyone, regardless of their ideology, could recognize as the national interest, in order to enrich a small core of supporters.

But none of that is what interests me in this post—because it’s “merely” as bad as, and no worse than, what one knew to expect when a coalition of thugs, kleptocrats, and white-nationalist demagogues seized control of Hamilton’s and Jefferson’s experiment.  My concern here is only with the “kill shot” that the Republicans have now aimed, with terrifying precision, at the system that’s kept American academic science the envy of the world in spite of the growing dysfunction all around it.

As you’ve probably heard, one of the ways Republicans intend to pay for their tax giveaway, is to change the tax code so that graduate students will now need to pay taxes on “tuition”—a large sum of money (as much as $50,000/year) that PhD students never actually see, that can easily exceed the stipends they do see, and that’s basically just an accounting trick that serves the internal needs of universities and granting agencies.  Again, to eliminate any chance of misunderstanding: PhD students, who are effectively low-wage employees, already pay taxes on their actual stipends.  The new proposal is that they’ll also have to pay taxes on a whopping, make-believe “X” on their payroll sheet that’s always exactly balanced out by “-X.”

For detailed analyses of the impacts, see, e.g., Luca Trevisan’s post or Inside Higher Ed or the Chronicle of Higher Ed or Vox or NPR.  Briefly, though, the proposal would raise taxes by a few thousand dollars per year, or in some cases as much as $10,000 per year (!), on PhD students who already live hand-to-mouth-to-ramen-bowl, with the largest impact falling on students in STEM fields.  For many students who aren’t independently wealthy, this could push a PhD beyond the realm of affordability, and cause them to leave academia or to do their graduate work in other countries.
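If you want to see roughly where those numbers come from, here’s a back-of-the-envelope Python sketch.  The brackets below are approximately the 2017 single-filer brackets, the stipend and tuition figures are hypothetical round numbers, and deductions, exemptions, and credits are all ignored, so treat the output as illustrative only.

```python
# Back-of-the-envelope estimate of the proposed tax change for one
# hypothetical PhD student.  Brackets are approximate 2017 single-filer
# brackets; stipend/tuition are made-up round numbers; deductions,
# exemptions, and credits are ignored.
BRACKETS = [          # (upper edge of bracket, marginal rate)
    (9_325, 0.10),
    (37_950, 0.15),
    (91_900, 0.25),
    (float("inf"), 0.28),
]

def tax(income):
    """Progressive tax on `income` under the toy brackets above."""
    owed, lower = 0.0, 0.0
    for upper, rate in BRACKETS:
        if income <= lower:
            break
        owed += rate * (min(income, upper) - lower)
        lower = upper
    return owed

STIPEND = 30_000      # money the student actually receives
TUITION = 40_000      # accounting-fiction "tuition" the student never sees

now = tax(STIPEND)                   # taxed on the stipend alone
proposed = tax(STIPEND + TUITION)    # stipend plus phantom tuition
print(f"tax now:        ${now:>8,.0f}")             # ~$4,000
print(f"tax under bill: ${proposed:>8,.0f}")        # ~$13,200
print(f"increase:       ${proposed - now:>8,.0f}")  # ~$9,200
```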

“But isn’t there some workaround?”  Indeed, financial ignoramus that I am, my first reaction was to ask: if PhD tuition is basically an accounting fiction anyway, then why can’t the universities just declare that the tuition in question no longer exists, or is now zero dollars?  Feel free to explain further in the comments if you understand this stuff, but as far as I can tell, the answer is: because PhD tuition is used to calculate how much “tax” the universities can take from professors’ grant money.  If universities could no longer take that tax, and they had no other way to make up for it, then except for the richest few universities, they’d have to scale back research and teaching pretty drastically.  To avoid that outcome, the universities would be relying on the granting agencies to let them keep taking the overhead they needed to operate, even though the “PhD tuition” no longer existed.  But the granting agencies aren’t set up for this: you can’t just throw a bomb into one part of a complicated bureaucratic machine built up over decades, and expect the machine to continue working with no disruption to science.

But more ominously: as my friend Daniel Harlow and many others pointed out, it’s hard to look at the indefensible, laser-specific meanness of this policy, without suspecting that for many in Congress, the destruction of American higher education isn’t a regrettable byproduct, but the goal—just another piece of red meat to throw to the base.  If so, then we’d expect Congress to direct federal granting agencies not to loosen their rules about overhead, thereby forcing the students to pay the tax, and achieving the desired destruction.  (Note that the Trump administration has already made tightening overhead rules—i.e., doing the exact opposite of what would be needed to counteract the new tax—a central focus of its attempt to cut federal research funding.)

OK, two concluding thoughts:

  1. When Republicans in Congress defended Trump’s travel ban, they at least had the craven excuse that they were only following the lead of the populist strongman who’d taken over their party.  Here they don’t even have that.  As far as I know, this targeted destruction of American higher education was Congress’s initiative, not Trump’s—which to me, underscores again the feather-thinness of any moral distinction between the Vichy GOP leadership and the administration with which it collaborates.  Trump didn’t emerge from nowhere.  It took decades of effort—George W. Bush, Sarah Palin, Karl Rove, Rush Limbaugh, Mitch McConnell, and all the rest—to transform the GOP into the pure seething cauldron of anti-intellectual resentment and hatred that we know today.
  2. Given the existential risk to American higher education, why didn’t I blog about this earlier?  The answer is embarrassing to admit, and reflects no credit on me.  It’s simply that I didn’t believe it—even after all the other stuff that could “never happen in the US” did happen this past year.  I didn’t believe it, not because it was too far from me but because it was too close—because if true, it would mean the crippling of the research world in which I’ve spent most of my life since age 15, so therefore it couldn’t be true.  Surely even the House Republicans would realize they’d screwed up this time, and would take out this crazy provision before the full bill was voted on?  Or surely there’s some workaround that makes the whole thing less awful than it sounds?  There has to be … right?

Anyway, what else is there to say, except to call your representative, if you’re American and still have the faith in the system that such an act implies.