Archive for the ‘The Fate of Humanity’ Category

My Passover press release

Monday, April 22nd, 2024

FOR IMMEDIATE RELEASE – From the university campuses of Assyria to the thoroughfares of Ur to the palaces of the Hittite Empire, students across the Fertile Crescent have formed human chains, camel caravans, and even makeshift tent cities to protest the oppression of innocent Egyptians by the rogue proto-nation of “Israel” and its vengeful, warlike deity Yahweh. According to leading human rights organizations, the Hebrews, under the leadership of a bearded extremist known as Moses or “Genocide Moe,” have unleashed frogs, wild beasts, hail, locusts, cattle disease, and other prohibited collective punishments on Egypt’s civilian population, regardless of the humanitarian cost.

Human-rights expert Asenath Albanese says that “under international law, it is the Hebrews’ sole responsibility to supply food, water, and energy to the Egyptian populace, just as it was their responsibility to build mud-brick store-cities for Pharaoh. Turning the entire Nile into blood, and plunging Egypt into never-ending darkness, are manifestly inconsistent with the Israelites’ humanitarian obligations.”

Israelite propaganda materials have held these supernatural assaults to be justified by Pharaoh’s alleged enslavement of the Hebrews, as well as unverified reports of his casting all newborn Hebrew boys into the Nile. Chanting “Let My People Go,” some Hebrew counterprotesters claim that Pharaoh could end the plagues at any time by simply releasing those held in bondage.

Yet Ptahmose O’Connor, Chair of Middle East Studies at the University of Avaris, retorts that this simplistic formulation ignores the broader context. “Ever since Joseph became Pharaoh’s economic adviser, the Israelites have enjoyed a position of unearned power and privilege in Egypt. Through underhanded dealings, they even recruited the world’s sole superpower—namely Adonai, Creator of the Universe—as their ally, removing any possibility that Adonai could serve as a neutral mediator in the conflict. As such, Egypt’s oppressed have a right to resist their oppression by any means necessary. This includes commonsense measures like setting taskmasters over the Hebrews to afflict them with heavy burdens, and dealing shrewdly with them lest they multiply.”

Professor O’Connor, however, dismissed the claims of drowned Hebrew babies as unverified rumors. “Infanticide accusations,” he explained, “have an ugly history of racism, Orientalism, and Egyptophobia. Therefore, unless you’re a racist or an Orientalist, the only possible conclusion is that no Hebrew babies have been drowned in the Nile, except possibly by accident, or of course by Hebrews themselves looking for a pretext to start this conflict.”

Meanwhile, at elite academic institutions across the region, the calls for justice have been deafening. “From the Nile to the Sea of Reeds, free Egypt from Jacob’s seeds!” students chanted. Some protesters even taunted passing Hebrew slaves with “go back to Canaan!”, though others were quick to disavow that message. According to Professor O’Connor, it’s important to clarify that the Hebrews don’t belong in Canaan either, and that finding a place where they do belong is not the protesters’ job.

In the face of such stridency, a few professors and temple priests have called the protests anti-Semitic. The protesters, however, dismiss that charge, pointing as proof to the many Hebrews and other Semitic peoples in their own ranks. For example, Sa-Hathor Goldstein, who currently serves as Pithom College’s Chapter President of Jews for Pharaoh, told us that “we stand in solidarity with our Egyptian brethren, with the shepherds, goat-workers, and queer and mummified voices around the world. And every time Genocide Moe strikes down his staff to summon another of Yahweh’s barbaric plagues, we’ll be right there to tell him: Not In Our Name!”

“Look,” Goldstein added softly, “my own grandparents were murdered by Egyptian taskmasters. But the lesson I draw from my family’s tragic history is to speak up for oppressed people everywhere—even the ones who are standing over me with whips.”

“If Yahweh is so all-powerful,” Goldstein went on to ask, “why could He not devise a way to free the Israelites without a single Egyptian needing to suffer? Why did He allow us to become slaves in the first place? And why, after each plague, does He harden Pharaoh’s heart against our release? Not only does that tactic needlessly prolong the suffering of Israelites and Egyptians alike, it also infringes on Pharaoh’s bodily autonomy.”

But the strongest argument, Goldstein concluded, arching his eyebrow, is that “ever since I started speaking out on this issue, it’s been so easy to get with all the Midianite chicks at my school. That’s because they, like me, see past the endless intellectual arguments over ‘who started’ or ‘how’ or ‘why’ to the emotional truth that the suffering just has to stop, man.”

Last night, college towns across the Tigris, Euphrates, and Nile were aglow with candlelight vigils for Baka Ahhotep, an Egyptian taskmaster and beloved father of three cruelly slain by “Genocide Moe” in an altercation, whose details remain disputed, over the alleged mistreatment of a Hebrew slave.

According to Caitlyn Mentuhotep, a sophomore majoring in hieroglyphic theory at the University of Pi-Ramesses who attended her school’s vigil for Ahhotep, staying true to her convictions hasn’t been easy in the face of Yahweh’s unending plagues—particularly the head lice. “But what keeps me going,” she said, “is the absolute certainty that, when people centuries from now write the story of our time, they’ll say that those of us who stood with Pharaoh were on the right side of history.”

Have a wonderful holiday!

Open Letter to Anti-Zionists on Twitter

Monday, March 25th, 2024

Dear Twitter Anti-Zionists,

For five months, ever since Oct. 7, I’ve read you obsessively. While my current job is supposed to involve protecting humanity from the dangers of AI (with a side of quantum computing theory), I’m ashamed to say that half the days I don’t do any science; instead I just scroll and scroll, reading anti-Israel content and then pro-Israel content and then more anti-Israel content. I thought refusing to post on Twitter would save me from wasting my life there as so many others have, but apparently it doesn’t, not anymore. (No, I won’t call it “X.”)

At the high end of the spectrum, I religiously check the tweets of Paul Graham, a personal hero and inspiration to me ever since he wrote Why Nerds Are Unpopular twenty years ago, and a man with whom I seem to resonate deeply on every important topic except for two: Zionism and functional programming. At the low end, I’ve read hundreds of the seemingly infinite army of Tweeters who post images of hook-nosed rats with black hats and sidecurls and dollar signs in their eyes, sneering as they strangle the earth and stab Palestinian babies. I study their detailed theories about why the October 7 pogrom never happened, and also it was secretly masterminded by Israel just to create an excuse to mass-murder Palestinians, and also it was justified and thrilling (exactly the same melange long ago embraced for the Holocaust).

I’m aware, of course, that the bottom-feeders make life too easy for me, and that a single Paul Graham who endorses the anti-Zionist cause ought to bother me more than a billion sharers of hook-nosed rat memes. And he does. That’s why, in this letter, I’ll try to stay at the higher levels of Graham’s Disagreement Hierarchy.

More to the point, though, why have I spent so much time on such a depressing, unproductive reading project?

Damned if I know. But it’s less surprising when you recall that, outside theoretical computer science, I’m (alas) mostly known to the world for having once confessed, in a discussion deep in the comment section of this blog, that I spent much of my youth obsessively studying radical feminist literature. I explained that I did that because my wish, for a decade, was to confront progressivism’s highest moral authorities on sex and relationships, and make them tell me either that

(1) I, personally, deserved to die celibate and unloved, as a gross white male semi-autistic STEM nerd and stunted emotional and aesthetic cripple, or else
(2) no, I was a decent human being who didn’t deserve that.

One way or the other, I sought a truthful answer, one that emerged organically from the reigning morality of our time and that wasn’t just an unprincipled exception to it. And I felt ready to pursue progressive journalists and activists and bloggers and humanities professors to the ends of the earth before I’d let them leave this one question hanging menacingly over everything they’d ever written, with (I thought) my only shot at happiness in life hinging on their answer to it.

You might call this my central character flaw: this need for clarity from others about the moral foundations of my own existence. I’m self-aware enough to know that it is a severe flaw, but alas, that doesn’t mean that I ever figured out how to fix it.

It’s been exactly the same way with the anti-Zionists since October 7. Every day I read them, searching for one thing and one thing only: their own answer to the “Jewish Question.” How would they ensure that the significant fraction of the world that yearns to murder all Jews doesn’t get its wish in the 21st century, as to a staggering extent it did in the 20th? I confess to caring about that question, partly (of course) because of the accident of having been born a Jew, and having an Israeli wife and family in Israel and so forth, but also because, even if I’d happened to be a Gentile, the continued survival of the world’s Jews would still seem remarkably bound up with science, Enlightenment, minority rights, liberal democracy, meritocracy, and everything else I’ve ever cared about.

I understand the charges against me. Namely: that if I don’t call for Israel to lay down its arms right now in its war against Hamas (and ideally: to dissolve itself entirely), then I’m a genocidal monster on the wrong side of history. That I value Jewish lives more than Palestinian lives. That I’m a hasbara apologist for the IDF’s mass-murder and apartheid and stealing of land. That if images of children in Gaza with their limbs blown off, or dead in their parents’ arms, or clawing for bread, don’t cause me to admit that Israel is evil, then I’m just as evil as the Israelis are.

Unsurprisingly, I contest the charges. As a father of two, I can no longer see any images of child suffering without thinking about my own kids. For all my supposed psychological abnormality, the part of me that’s horrified by such images seems to be in working order. If you want to change my mind, rather than showing me more such images, you’ll need to target the cognitive part of me: the part that asks why so many children are suffering, and what causal levers we’d need to push to reach a place where neither side’s children ever have to suffer like this again.

At risk of stating the obvious: my first-order model is that Hamas, with the diabolical brilliance of a Marvel villain, successfully contrived a situation where Israel could prevent the further massacring of its own population only by fighting a gruesome urban war, of a kind that always, anywhere in the world, kills tens of thousands of civilians. Hamas, of course, was helped in this plan by an ideology that considers martyrdom the highest possible calling for the innocents who it rules ruthlessly and hides underneath. But Hamas also understood that the images of civilian carnage would (rightly!) shock the consciences of Israel’s Western allies and many Israelis themselves, thereby forcing a ceasefire before the war was over, thereby giving Hamas the opportunity to regroup and, with God’s and of course Iran’s help, finally finish the job of killing all Jews another day.

And this is key: once you remember why Hamas launched this war and what its long-term goals are, every detail of Twitter’s case against Israel has to be reexamined in a new light. Take starvation, for example. Clearly the only explanation for why Israelis would let Gazan children starve is the malice in their hearts? Well, yes: until you think through the logistical challenges of feeding 2.3 million starving people whose sole governing authority is interested only in painting the streets red with Jewish blood. Should we let that authority commandeer the flour and water for its fighters, while innocents continue to starve? No? Then how about UNRWA? Alas, we learned that UNRWA, packed with employees who cheered the Oct. 7 massacre in their Telegram channels and in some cases took part in the murders themselves, capitulates to Hamas so quickly that it effectively is Hamas. So then Israel should distribute the food itself! But as we’ve dramatically witnessed, Israel can’t distribute food without imposing order, which would seem to mean reoccupying Gaza and earning the world’s condemnation for it. Do you start to appreciate the difficulty of the problem—and why the Biden administration was pushed to absurd-sounding extremes like air-dropping food and then building a floating port?

It all seems so much easier, once you remove the constraint of not empowering Hamas in its openly-announced goal of completing the Holocaust. And hence, removing that constraint is precisely what the global left does.

For all that, by Israeli standards I’m firmly in the anti-Netanyahu, left-wing peace camp—exactly where I’ve been since the 1990s, as a teenager mourning the murder of Rabin. And I hope even the anti-Israel side might agree with me that, if all the suffering since Oct. 7 has created a tiny opening for peace, then walking through that opening depends on two things happening:

  1. the removal of Netanyahu, and
  2. the removal of Hamas.

The good news is that Netanyahu, the catastrophically failed “Protector of Israel,” not only can, but plausibly will (if enough government ministers show some backbone), soon be removed in a democratic election.

Hamas, by contrast, hasn’t allowed a single election since it took power in 2006, in a process notable for its opponents being thrown from the roofs of tall buildings. That’s why even my left-leaning Israeli colleagues—the ones who despise Netanyahu, who marched against him last year—support Israel’s current war. They support it because, even if the Israeli PM were Fred Rogers, how can you ever get to peace without removing Hamas, and how can you remove Hamas except by war, any more than you could cut a deal with Nazi Germany?

I want to see the IDF do more to protect Gazan civilians—despite my bitter awareness of survey data suggesting that many of those civilians would murder my children in front of me if they ever got a chance. Maybe I’d be the same way if I’d been marinated since birth in an ideology of Jew-killing, and blocked from other sources of information. I’m heartened by the fact that despite this, indeed despite the risk to their lives for speaking out, a full 15% of Gazans openly disapprove of the Oct. 7 massacre. I want a solution where that 15% becomes 95% with the passing of generations. My endgame is peaceful coexistence.

But to the anti-Zionists I say: I don’t even mind you calling me a baby-eating monster, provided you honestly field one question. Namely:

Suppose the Palestinian side got everything you wanted for it; then what would be your plan for the survival of Israel’s Jews?

Let’s assume that not only has Netanyahu lost the next election in a landslide, but is justly spending the rest of his life in Israeli prison. Waving my wand, I’ve made you Prime Minister in his stead, with an overwhelming majority in the Knesset. You now get to go down in history as the liberator of Palestine. But you’re now also in charge of protecting Israel’s 7 million Jews (and 2 million other residents) from near-immediate slaughter at the hands of those who you’ve liberated.

Granted, it seems pretty paranoid to expect such a slaughter! Or rather: it would seem paranoid, if the Palestinians’ Grand Mufti (progenitor of the Muslim Brotherhood and hence Hamas) hadn’t allied himself with Hitler in WWII, enthusiastically supported the Nazi Final Solution, and tried to export it to Palestine; if in 1947 the Palestinians hadn’t rejected the UN’s two-state solution (the one Israel agreed to) and instead launched another war to exterminate the Jews (a war they lost); if they hadn’t joined the quest to exterminate the Jews a third time in 1967; etc., or if all this hadn’t happened back before there were any settlements or occupation, when the only question on the table was Israel’s existence. It would seem paranoid if Arafat had chosen a two-state solution when Israel offered it to him at Camp David, rather than suicide bombings. It would seem paranoid if not for the candies passed out in the streets in celebration on October 7.

But if someone has a whole ideology, which they teach their children and from which they’ve never really wavered for a century, about how murdering you is a religious honor, and also they’ve actually tried to murder you at every opportunity—what more do you want them to do, before you’ll believe them?

So, you tell me your plan for how to protect Israel’s 7 million Jews from extermination at the hands of neighbors who have their extermination—my family’s extermination—as their central political goal, and who had that as their goal long before there was any occupation of the West Bank or Gaza. Tell me how to do it while protecting Palestinian innocents. And tell me your fallback plan if your first plan turns out not to work.

We can go through the main options.


(1) UNILATERAL TWO-STATE SOLUTION

Maybe your plan is that Israel should unilaterally dismantle West Bank settlements, recognize a Palestinian state, and retreat to the 1967 borders.

This is an honorable plan. It was my preferred plan—until the horror of October 7, and then the even greater horror of the worldwide left reacting to that horror by sharing celebratory images of paragliders, and by tearing down posters of kidnapped Jewish children.

Today, you might say October 7 has sort of put a giant flaming-red exclamation point on what’s always been the central risk of unilateral withdrawal. Namely: what happens if, afterward, rather than building a peaceful state on their side of the border, the Palestinian leadership chooses instead to launch a new Iran-backed war on Israel—one that, given the West Bank’s proximity to Israel’s main population centers, makes October 7 look like a pillow fight?

If that happens, will you admit that the hated Zionists were right and you were wrong all along, that this was never about settlements but always, only about Israel’s existence? Will you then agree that Israel has a moral prerogative to invade the West Bank, to occupy and pacify it as the Allies did Germany and Japan after World War II? Can I get this in writing from you, right now? Or, following the future (October 7)² launched from a Judenfrei West Bank, will your creativity once again set to work constructing a reason to blame Israel for its own invasion—because you never actually wanted a two-state solution at all, but only Israel’s dismantlement?


(2) NEGOTIATED TWO-STATE SOLUTION

So, what about a two-state solution negotiated between the parties? Israel would uproot all West Bank settlements that prevent a Palestinian state, and resettle half a million Jews in pre-1967 Israel—in exchange for the Palestinians renouncing their goal of ending Israel’s existence, via a “right of return” or any other euphemism.

If so: congratulations, your “anti-Zionism” now seems barely distinguishable from my “Zionism”! If they made me the Prime Minister of Israel, and put you in charge of the Palestinians, I feel optimistic that you and I could reach a deal in an hour and then go out for hummus and babaganoush.


(3) SECULAR BINATIONAL STATE

In my experience, in the rare cases they deign to address the question directly, most anti-Zionists advocate a “secular, binational state” between the Jordan and Mediterranean, with equal rights for all inhabitants. Certainly, that would make sense if you believe that Israel is an apartheid state just like South Africa.

To me, though, this analogy falls apart on a single question: who’s the Palestinian Nelson Mandela? Who’s the Palestinian leader who’s ever said to the Jews, “end your Jewish state so that we can live together in peace,” rather than “end your Jewish state so that we can end your existence”? To impose a binational state would be to impose something, not only that Israelis regard as an existential horror, but that most Palestinians have never wanted either.

But, suppose we do it anyway. We place 7 million Jews, almost half the Jews who remain on Earth, into a binational state where perhaps a third of their fellow citizens hold the theological belief that all Jews should be exterminated, and that a heavenly reward follows martyrdom in blowing up Jews. The exterminationists don’t quite have a majority, but they’re the second-largest voting bloc. Do you predict that the exterminationists will give up their genocidal ambition because of new political circumstances that finally put their ambition within reach? If October-7 style pogroms against Jews turn out to be a regular occurrence in our secular binational state, how will its government respond—like the Palestinian Authority? like UNRWA? like the British Mandate? like Tsarist Russia?

In such a case, perhaps the Jews (along with those Arabs and Bedouins and Druze and others who cast their lot with the Jews) would need to form a country-within-a-country: their own little autonomous zone within the binational state, with its own defense force. But of course, such a country-within-a-country already formed, for pretty much this exact reason. It’s called Israel. A cycle has been detected in your arc of progress.


(4) EVACUATION OF THE JEWS FROM ISRAEL

We come now to the anti-Zionists who are plainspoken enough to say: Israel’s creation was a grave mistake, and that mistake must now be reversed.

This is a natural option for anyone who sees Israel as an “illegitimate settler-colonial project,” like British India or French Algeria, but who isn’t quite ready to call for another Jewish genocide.

Again, the analogy runs into obvious problems: Israelis would seem to be the first “settler-colonialists” in the history of the world who not only were indigenous to the land they colonized, as much as anyone was, but who weren’t colonizing on behalf of any mother country, and who have no obvious such country to which they can return.

Some say spitefully: then let the Jews go back to Poland. These people might be unaware that, precisely because of how thorough the Holocaust was, more Israeli Jews trace their ancestry to Muslim countries than to Europe. Is there to be a “right of return” to Egypt, Iraq, Morocco, and Yemen, for all the Jews forcibly expelled from those places and for their children and grandchildren?

Others, however, talk about evacuating the Jews from Israel with goodness in their hearts. They say: we’d love the Israelis’ economic dynamism here in Austin or Sydney or Oxfordshire, joining their many coreligionists who already call these places home. What’s more, they’ll be safer here—who wants to live with missiles raining down on their neighborhood? Maybe we could even set aside some acres in Montana for a new Jewish homeland.

Again, if this is your survival plan, I’m a billion times happier to discuss it openly than to have it as unstated subtext!

Except, maybe you could say a little more about the logistics. Who will finance the move? How confident are you that the target country will accept millions of defeated, desperate Jews, as no country on earth was the last time this question arose?

I realize it’s no longer the 1930s, and Israel now has friends, most famously in America. But—what’s a good analogy here? I’ve met various Silicon Valley gazillionaires. I expect that I could raise millions from them, right now, if I got them excited about a new project in quantum computing or AI or whatever. But I doubt I could raise a penny from them if I came to them begging for their pity or their charity.

Likewise: for all the anti-Zionists’ loudness, a solid majority of Americans continue to support Israel (which, incidentally, provides a much simpler explanation than the hook-nosed perfidy of AIPAC for why Congress and the President mostly support it). But it seems to me that Americans support Israel in the “exciting project” sense, rather than in the “charity” sense. They like that Israelis are plucky underdogs who made the deserts bloom, and built a thriving tech industry, and now produce hit shows like Shtisel and Fauda, and take the fight against a common foe to the latter’s doorstep, and maintain one of the birthplaces of Western civilization for tourists and Christian pilgrims, and restarted the riveting drama of the Bible after a 2000-year hiatus, which some believe is a crucial prerequisite to the Second Coming.

What’s important, for present purposes, is not whether you agree with any of these rationales, but simply that none of them translate into a reason to accept millions of Jewish refugees.

But if you think dismantling Israel and relocating its seven million Jews is a workable plan—OK then, are you doing anything to make that more than a thought experiment, as the Zionists did a century ago with their survival plan? Have even I done more to implement your plan than you have, by causing one Israeli (my wife) to move to the US?


Suppose you say it’s not your job to give me a survival plan for Israel’s Jews. Suppose you say the request is offensive, an attempt to distract from the suffering of the Palestinians, so you change the subject.

In that case, fine, but you can now take off your cloak of righteousness, your pretense of standing above me and judging me from the end of history. Your refusal to answer the question amounts to a confession that, for you, the goal of “a free Palestine from the river to the sea” doesn’t actually require the physical survival of Israel’s Jews.

Which means, we’ve now established what you are. I won’t give you the satisfaction of calling you a Nazi or an antisemite. Thousands of years before those concepts existed, Jews already had terms for you. The terms tended toward a liturgical register, as in “those who rise up in every generation to destroy us.” The whole point of all the best-known Jewish holidays, like Purim yesterday, is to talk about those wicked would-be destroyers in the past tense, with the very presence of live Jews attesting to what the outcome was.

(Yesterday, I took my kids to a Purim carnival in Austin. Unlike in previous years, there were armed police everywhere. It felt almost like … visiting Israel.)

If you won’t answer the question, then it wasn’t Zionist Jews who told you that their choices are either to (1) oppose you or else (2) go up in black smoke like their grandparents did. You just told them that yourself.


Many will ask: why don’t I likewise have an obligation to give you my Palestinian survival plan?

I do. But the nice thing about my position is that I can tell you my Palestinian survival plan cheerfully, immediately, with zero equivocating or changing the subject. It’s broadly the same plan that David Ben-Gurion and Yitzchak Rabin and Ehud Barak and Bill Clinton and the UN put on the table over and over and over, only for the Palestinians’ leaders to sweep it off.

I want the Palestinians to have a state, comprising the West Bank and Gaza, with a capital in East Jerusalem. I want Israel to uproot all West Bank settlements that prevent such a state. I want this to happen the instant there arises a Palestinian leadership genuinely committed to peace—one that embraces liberal values and rejects martyr values, in everything from textbooks to street names.

And I want more. I want the new Palestinian state to be as prosperous and free and educated as modern Germany and Japan are. I want it to embrace women’s rights and LGBTQ+ rights and the rest of the modern package, so that “Queers for Palestine” would no longer be a sick joke. I want the new Palestine to be as intertwined with Israel, culturally and economically, as the US and Canada are.

Ironically, if this ever became a reality, then Israel-as-a-Jewish-state would no longer be needed—but it’s certainly needed in the meantime.

Anti-Zionists on Twitter: can you be equally explicit about what you want?


I come, finally, to what many anti-Zionists regard as their ultimate trump card. Look at all the anti-Zionist Jews and Israelis who agree with us, they say. Jewish Voice for Peace. IfNotNow. Noam Chomsky. Norman Finkelstein. The Neturei Karta.

Intellectually, of course, the fact of anti-Zionist Jews makes not the slightest difference to anything. My question for them remains exactly the same as for anti-Zionist Gentiles: what is your Jewish survival plan, for the day after we dismantle the racist supremacist apartheid state that’s currently the only thing standing between half the world’s remaining Jews and their slaughter by their neighbors? Feel free to choose from any of the four options above, or suggest a fifth.

But in the event that Jewish anti-Zionists evade that conversation, or change the subject from it, maybe some special words are in order. You know the famous Golda Meir line, “If we have to choose between being dead and pitied and being alive with a bad image, we’d rather be alive and have the bad image”?

It seems to me that many anti-Zionist Jews considered Golda Meir’s question carefully and honestly, and simply decided it the other way, in favor of Jews being dead and pitied.

Bear with me here: I won’t treat this as a reductio ad absurdum of their position. Not even if the anti-Zionist Jews themselves wish to remain safely ensconced in Berkeley or New Haven, while the Israelis fulfill the “dead and pitied” part for them.

In fact, I’ll go further. Again and again in life I’ve been seized by a dark thought: if half the world’s Jews can only be kept alive, today, via a militarized ethnostate that constantly needs to defend its existence with machine guns and missiles, racking up civilian deaths and destabilizing the world’s geopolitics—if, to put a fine point on it, there are 16 million Jews in the world, but at least a half billion antisemites who wake up every morning and go to sleep every night desperately wishing those Jews dead—then, from a crude utilitarian standpoint, might it not be better for the world if we Jews vanished after all?

Remember, I’m someone who spent a decade asking myself whether the rapacious, predatory nature of men’s sexual desire for women, which I experienced as a curse and an affliction, meant that the only moral course for me was to spend my life as a celibate mathematical monk. But I kept stumbling over one point: why should such a moral obligation fall on me alone? Why doesn’t it fall on other straight men, particularly the ones who presume to lecture me on my failings?

And also: supposing I did take the celibate monk route, would even that satisfy my haters? Would they come after me anyway for glancing at a woman too long or making an inappropriate joke? And also: would the haters soon say I shouldn’t have my scientific career either, since I’ve stolen my coveted academic position from the underprivileged? Where exactly does my self-sacrifice end?

When I did, finally, start approaching women and asking them out on dates, I worked up the courage partly by telling myself: I am now going to do the Zionist thing. I said: if other nerdy Jews can risk death in war, then this nerdy Jew can risk ridicule and contemptuous stares. You can accept that half the world will denounce you as a monster for living your life, so long as your own conscience (and, hopefully, the people you respect the most) continue to assure you that you’re nothing of the kind.

This took more than a decade of internal struggle, but it’s where I ended up. And today, if anyone tells me I had no business ever forming any romantic attachments, I have two beautiful children as my reply. I can say: forget about me, you’re asking for my children never to have existed—that’s why I’m confident you’re wrong.

Likewise with the anti-Zionists. When the Twitter-warriors share their memes of hook-nosed Jews strangling the planet, innocent Palestinian blood dripping from their knives, when the global protests shut down schools and universities and bridges and parliament buildings, there’s a part of me that feels eager to commit suicide if only it would appease the mob, if only it would expiate all the cosmic guilt they’ve loaded onto my shoulders.

But then I remember that this isn’t just about me. It’s about Einstein and Spinoza and Feynman and Erdős and von Neumann and Weinberg and Landau and Michelson and Rabi and Tarski and Asimov and Sagan and Salk and Noether and Meitner, and Irving Berlin and Stan Lee and Rodney Dangerfield and Steven Spielberg. Even if I didn’t happen to be born Jewish—if I had anything like my current values, I’d still think that so much of what’s worth preserving in human civilization, so much of math and science and Enlightenment and democracy and humor, would seem oddly bound up with the continued survival of this tiny people. And conversely, I’d think that so much of what’s hateful in civilization would seem oddly bound up with the quest to exterminate this tiny people, or to deny it any means to defend itself from extermination.

So that’s my answer, both to anti-Zionist Gentiles and to anti-Zionist Jews. The problem of Jewish survival, on a planet much of which yearns for the Jews’ annihilation and much of the rest of which is indifferent, is both hard and important, like P versus NP. And so a radical solution was called for. The solution arrived at a century ago, at once brand-new and older than Homer and Hesiod, was called the State of Israel. If you can’t stomach that solution—if, in particular, you can’t stomach the violence needed to preserve it, so long as Israel’s neighbors retain their annihilationist dream—then your response ought to be to propose a better solution. I promise to consider your solution in good faith—asking, just like with P vs. NP provers, how you overcome the problems that doomed all previous attempts. But if you throw my demand for a better solution back in my face, then you might as well be pushing my kids into a gas chamber yourself, for all the moral authority that I now recognize you to have over me.


Possibly the last thing Einstein wrote was a speech celebrating Israel’s 7th Independence Day; he died a week before he was to deliver it. So let’s turn the floor over to Mr. Albert, the leftist pacifist internationalist:

This is the seventh anniversary of the establishment of the State of Israel. The establishment of this State was internationally approved and recognised largely for the purpose of rescuing the remnant of the Jewish people from unspeakable horrors of persecution and oppression.

Thus, the establishment of Israel is an event which actively engages the conscience of this generation. It is, therefore, a bitter paradox to find that a State which was destined to be a shelter for a martyred people is itself threatened by grave dangers to its own security. The universal conscience cannot be indifferent to such peril.

It is anomalous that world opinion should only criticize Israel’s response to hostility and should not actively seek to bring an end to the Arab hostility which is the root cause of the tension.

I love Einstein’s use of “anomalous,” as if this were a physics problem. From the standpoint of history, what’s anomalous about the Israeli-Palestinian conflict is not, as the Twitterers claim, the brutality of the Israelis—if you think that’s anomalous, you really haven’t studied history—but something different. In other times and places, an entity like Palestine, which launches a war of total annihilation against a much stronger neighbor, and then another and another, would soon disappear from the annals of history. Israel, however, is held to a different standard. Again and again, bowing to international pressure and pressure from its own left flank, the Israelis have let their would-be exterminators off the hook, bruised but mostly still alive and completely unrepentant, to have another go at finishing the Holocaust in a few years. And after every bout, sadly but understandably, Israeli culture drifts more to the right, becomes 10% more like the other side always was.

I don’t want Israel to drift to the right. I find the values of Theodor Herzl and David Ben-Gurion to be almost as good as any human values have ever been, and I’d like Israel to keep them. Of course, Israel will need to continue defending itself from genocidal neighbors, until the day that a leader arises among the Palestinians with the moral courage of Egypt’s Anwar Sadat or Jordan’s King Hussein: a leader who not only talks peace but means it. Then there can be peace, and an end of settlements in the West Bank, and an independent Palestinian state. And however much like dark comedy that seems right now, I’m actually optimistic that it will someday happen, conceivably even soon depending on what happens in the current war. Unless nuclear war or climate change or AI apocalypse makes the whole question moot.


Anyway, thanks for reading—a lot built up these past months that I needed to get off my chest. When I told a friend that I was working on this post, he replied “I agree with you about Israel, of course, but I choose not to die on that hill in public.” I answered that I’ve already died on that hill and on several other hills, yet am somehow still alive!

Meanwhile, I was gratified that other friends, even ones who strongly disagree with me about Israel, told me that I should not disengage, but continue to tell it like I see it, trying civilly to change minds while being open to having my own mind changed.

And now, maybe, I can at last go back to happier topics, like how to prevent the destruction of the world by AI.

Cheers,
Scott

The Problem of Human Specialness in the Age of AI

Monday, February 12th, 2024

Update (Feb. 29): A YouTube video of this talk is now available, plus a comment section filled (as usual) with complaints about everything from my speech and mannerisms to my failure to address the commenter’s pet topic.

Another Update (March 8): YouTube video of a shorter (18-minute) version of this talk, which I delivered at TEDxPaloAlto, is now available as well!


Here, as promised in my last post, is a written version of the talk I delivered a couple weeks ago at MindFest in Florida, entitled “The Problem of Human Specialness in the Age of AI.” The talk is designed as one-stop shopping, summarizing many different AI-related thoughts I’ve had over the past couple years (and earlier).


1. INTRO

Thanks so much for inviting me! I’m not an expert in AI, let alone mind or consciousness.  Then again, who is?

For the past year and a half, I’ve been moonlighting at OpenAI, thinking about what theoretical computer science can do for AI safety.  I wanted to share some thoughts, partly inspired by my work at OpenAI but partly just things I’ve been wondering about for 20 years.  These thoughts are not directly about “how do we prevent super-AIs from killing all humans and converting the galaxy into paperclip factories?”, nor are they about “how do we stop current AIs from generating misinformation and being biased?,” as much attention as both of those questions deserve (and are now getting).  In addition to “how do we stop AGI from going disastrously wrong?,” I find myself asking “what if it goes right?  What if it just continues helping us with various mental tasks, but improves to where it can do just about any task as well as we can do it, or better?  Is there anything special about humans in the resulting world?  What are we still for?”


2. LARGE LANGUAGE MODELS

I don’t need to belabor for this audience what’s been happening lately in AI.  It’s arguably the most consequential thing that’s happened in civilization in the past few years, even if that fact was temporarily masked by various ephemera … y’know, wars, an insurrection, a global pandemic … whatever, what about AI?

I assume you’ve all spent time with ChatGPT, or with Bard or Claude or other Large Language Models, as well as with image models like DALL-E and Midjourney.  For all their current limitations—and we can discuss the limitations—in some ways these are the thing that was envisioned by generations of science fiction writers and philosophers.  You can talk to them, and they give you a comprehending answer.  Ask them to draw something and they draw it.

I think that, as late as 2019, very few of us expected this to exist by now.  I certainly didn’t expect it to.  Back in 2014, when there was a huge fuss about some silly ELIZA-like chatbot called “Eugene Goostman” that was falsely claimed to pass the Turing Test, I asked around: why hasn’t anyone tried to build a much better chatbot, by (let’s say) training a neural network on all the text on the Internet?  But of course I didn’t do that, nor did I know what would happen when it was done.

The surprise, with LLMs, is not merely that they exist, but the way they were created.  Back in 1999, you would’ve been laughed out of the room if you’d said that all the ideas needed to build an AI that converses with you in English already existed, and that they’re basically just neural nets, backpropagation, and gradient descent.  (With one small exception, a particular architecture for neural nets called the transformer, but that probably just saves you a few years of scaling anyway.)  Ilya Sutskever, cofounder of OpenAI (who you might’ve seen something about in the news…), likes to say that beyond those simple ideas, you only needed three ingredients:

(1) a massive investment of computing power,
(2) a massive investment of training data, and
(3) faith that your investments would pay off!
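
To make the “basically just neural nets, backpropagation, and gradient descent” claim concrete, here’s a toy sketch (in Python with PyTorch, and emphatically not OpenAI’s actual code). Everything below, from the corpus to the model shape to the hyperparameters, is invented for illustration; a real LLM differs mainly in scale and in using the transformer architecture rather than a small MLP.

```python
# Toy next-token predictor: a tiny neural net trained by gradient descent.
import torch
import torch.nn as nn

text = "the cat sat on the mat. the dog sat on the log. "
vocab = sorted(set(text))
stoi = {ch: i for i, ch in enumerate(vocab)}
data = torch.tensor([stoi[ch] for ch in text])

CONTEXT = 4  # how many previous characters the model gets to see

class TinyLM(nn.Module):
    def __init__(self, vocab_size, dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.mlp = nn.Sequential(
            nn.Linear(CONTEXT * dim, 128), nn.ReLU(),
            nn.Linear(128, vocab_size))    # logits over the next character

    def forward(self, idx):                # idx: (batch, CONTEXT)
        return self.mlp(self.embed(idx).flatten(1))

model = TinyLM(len(vocab))
opt = torch.optim.SGD(model.parameters(), lr=0.1)

# All (context, next-character) training pairs from the corpus:
xs = torch.stack([data[i:i + CONTEXT] for i in range(len(data) - CONTEXT)])
ys = data[CONTEXT:]

for step in range(500):
    loss = nn.functional.cross_entropy(model(xs), ys)  # next-token prediction
    opt.zero_grad()
    loss.backward()   # backpropagation
    opt.step()        # one step of gradient descent
```

Scale that same loop up by many orders of magnitude of parameters, data, and compute (and swap the MLP for a transformer), and you have, at least conceptually, the pretraining stage of a GPT.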

Crucially, and even before you do any reinforcement learning, GPT-4 clearly seems “smarter” than GPT-3, which seems “smarter” than GPT-2 … even as the biggest ways they differ are just the scale of compute and the scale of training data!  Like,

  • GPT-2 struggled with grade school math.
  • GPT-3.5 can do most grade school math but it struggles with undergrad material.
  • GPT-4, right now, can probably pass most undergraduate math and science classes at top universities (I mean, the ones without labs or whatever!), and possibly the humanities classes too (those might even be easier for GPT-4 than the science classes, but I’m much less confident about it). But it still struggles with, for example, the International Math Olympiad.  How insane, that this is now where we have to place the bar!

Obvious question: how far will this sequence continue?  There are certainly at least a few more orders of magnitude of compute before energy costs become prohibitive, and a few more orders of magnitude of training data before we run out of public Internet. Beyond that, it’s likely that continuing algorithmic advances will simulate the effect of more orders of magnitude of compute and data than however many we actually get.

So, where does this lead?

(Note: ChatGPT agreed to cooperate with me to help me generate the above image. But it then quickly added that it was just kidding, and the Riemann Hypothesis is still open.)


3. AI SAFETY

Of course, I have many friends who are terrified (some say they’re more than 90% confident and few of them say less than 10%) that not long after that, we’ll get this

But this isn’t the only possibility smart people take seriously.

Another possibility is that the LLM progress fizzles before too long, just like previous bursts of AI enthusiasm were followed by AI winters.  Note that, even in the ultra-conservative scenario, LLMs will probably still be transformative for the economy and everyday life, maybe as transformative as the Internet.  But they’ll just seem like better and better GPT-4’s, without ever seeming qualitatively different from GPT-4, and without anyone ever turning them into stable autonomous agents and letting them loose in the real world to pursue goals the way we do.

A third possibility is that AI will continue progressing through our lifetimes as quickly as we’ve seen it progress over the past 5 years, but even as that suggests that it’ll surpass you and me, surpass John von Neumann, become to us as we are to chimpanzees … we’ll still never need to worry about it treating us the way we’ve treated chimpanzees.  Either because we’re projecting and that’s just totally not a thing that AIs trained on the current paradigm would tend to do, or because we’ll have figured out by then how to prevent AIs from doing such things.  Instead, AI in this century will “merely” change human life by maybe as much as it changed over the last 20,000 years, in ways that might be incredibly good, or incredibly bad, or both depending on who you ask.

If you’ve lost track, here’s a decision tree of the various possibilities that my friend (and now OpenAI alignment colleague) Boaz Barak and I came up with.


4. JUSTAISM AND GOALPOST-MOVING

Now, as far as I can tell, the empirical questions of whether AI will achieve and surpass human performance at all tasks, take over civilization from us, threaten human existence, etc. are logically distinct from the philosophical question of whether AIs will ever “truly think,” or whether they’ll only ever “appear” to think.  You could answer “yes” to all the empirical questions and “no” to the philosophical question, or vice versa.  But to my lifelong chagrin, people constantly munge the two questions together!

A major way they do so, is with what we could call the religion of Justaism.

  • GPT is justa next-token predictor.
  • It’s justa function approximator.
  • It’s justa gigantic autocomplete.
  • It’s justa stochastic parrot.
  • And, it “follows,” the idea of AI taking over from humanity is justa science-fiction fantasy, or maybe a cynical attempt to distract people from AI’s near-term harms.

As someone once expressed this religion on my blog: GPT doesn’t interpret sentences, it only seems-to-interpret them.  It doesn’t learn, it only seems-to-learn.  It doesn’t judge moral questions, it only seems-to-judge. I replied: that’s great, and it won’t change civilization, it’ll only seem-to-change it!

A closely related tendency is goalpost-moving.  You know, for decades chess was the pinnacle of human strategic insight and specialness, and that lasted until Deep Blue, right after which, well of course AI can cream Garry Kasparov at chess, everyone always realized it would, that’s not surprising, but Go is an infinitely richer, deeper game, and that lasted until AlphaGo/AlphaZero, right after which, of course AI can cream Lee Sedol at Go, totally expected, but wake me up when it wins Gold in the International Math Olympiad.  I bet $100 against my friend Ernie Davis that the IMO milestone will happen by 2026.  But, like, suppose I’m wrong and it’s 2030 instead … great, what should the next goalpost be?

Indeed, we might as well formulate a thesis, which despite the inclusion of several weasel phrases I’m going to call falsifiable:

Given any game or contest with suitably objective rules, which wasn’t specifically constructed to differentiate humans from machines, and on which an AI can be given suitably many examples of play, it’s only a matter of years before not merely any AI, but AI on the current paradigm (!), matches or beats the best human performance.

Crucially, this Aaronson Thesis (or is it someone else’s?) doesn’t necessarily say that AI will eventually match everything humans do … only our performance on “objective contests,” which might not exhaust what we care about.

Incidentally, the Aaronson Thesis would seem to be in clear conflict with Roger Penrose’s views, which we heard about from Stuart Hameroff’s talk yesterday.  The trouble is, Penrose’s task is “just see that the axioms of set theory are consistent” … and I don’t know how to gauge performance on that task, any more than I know how to gauge performance on the task, “actually taste the taste of a fresh strawberry rather than merely describing it.”  The AI can always say that it does these things!


5. THE TURING TEST

This brings me to the original and greatest human vs. machine game, one that was specifically constructed to differentiate the two: the Imitation Game, which Alan Turing proposed in an early and prescient (if unsuccessful) attempt to head off the endless Justaism and goalpost-moving.  Turing said: look, presumably you’re willing to regard other people as conscious based only on some sort of verbal interaction with them.  So, show me what kind of verbal interaction with another person would lead you to call the person conscious: does it involve humor? poetry? morality? scientific brilliance?  Now assume you have a totally indistinguishable interaction with a future machine.  Now what?  You wanna stomp your feet and be a meat chauvinist?

(And then, for his great attempt to bypass philosophy, fate punished Turing, by having his Imitation Game itself provoke a billion new philosophical arguments…)


6. DISTINGUISHING HUMANS FROM AIS

Although I regard the Imitation Game as, like, one of the most important thought experiments in the history of thought, I concede to its critics that it’s generally not what we want in practice.

It now seems probable that, even as AIs start to do more and more work that used to be done by doctors and lawyers and scientists and illustrators, there will remain straightforward ways to distinguish AIs from humans—either because customers want there to be, or because governments force there to be, or simply because indistinguishability wasn’t what was wanted, or because it conflicted with other goals.

Right now, like it or not, a decent fraction of all high-school and college students on earth are using ChatGPT to do their homework for them. For that reason among others, this question of how to distinguish humans from AIs, this question from the movie Blade Runner, has become a big practical question in our world.

And that’s actually one of the main things I’ve thought about during my time at OpenAI.  You know, in AI safety, people keep asking you to prognosticate decades into the future, but the best I’ve been able to do so far was see a few months into the future, when I said: “oh my god, once everyone starts using GPT, every student will want to use it to cheat, scammers and spammers will use it too, and people are going to clamor for some way to determine provenance!”

In practice, often it’s easy to tell what came from AI.  When I get comments on my blog like this one:

“Erica Poloix,” July 21, 2023:
Well, it’s quite fascinating how you’ve managed to package several misconceptions into such a succinct comment, so allow me to provide some correction. Just as a reference point, I’m studying physics at Brown, and am quite up-to-date with quantum mechanics and related subjects.

The bigger mistake you’re making, Scott, is assuming that the Earth is in a ‘mixed state’ from the perspective of the universal wavefunction, and that this is somehow an irreversible situation. It’s a misconception that common, ‘classical’ objects like the Earth are in mixed states. In the many-worlds interpretation, for instance, even macroscopic objects are in superpositions – they’re just superpositions that look classical to us because we’re entangled with them. From the perspective of the universe’s wavefunction, everything is always in a pure state.

As for your claim that we’d need to “swap out all the particles on Earth for ones that are already in pure states” to return Earth to a ‘pure state,’ well, that seems a bit misguided. All quantum systems are in pure states before they interact with other systems and become entangled. That’s just Quantum Mechanics 101.

I have to say, Scott, your understanding of quantum physics seems to be a bit, let’s say, ‘mixed up.’ But don’t worry, it happens to the best of us. Quantum Mechanics is counter-intuitive, and even experts struggle with it. Keep at it, and try to brush up on some more fundamental concepts. Trust me, it’s a worthwhile endeavor.

… I immediately say, either this came from an LLM or it might as well have.  Likewise, apparently hundreds of students have been turning in assignments that contain text like, “As a large language model trained by OpenAI…”—easy to catch!

But what about the slightly more sophisticated cheaters? Well, people have built discriminator models to try to distinguish human from AI text, such as GPTZero.  While these distinguishers can get well above 90% accuracy, the danger is that they’ll necessarily get worse as the LLMs get better.
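
To give a flavor of how such a discriminator might work (this is a simplified sketch of one common idea, not GPTZero’s actual method): score the text by its perplexity under a reference language model, on the theory that LLM output tends to be more statistically predictable than human prose. The model choice and the threshold below are invented for the example.

```python
# Sketch of a perplexity-based AI-text detector (illustrative only).
# Requires: pip install torch transformers
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
lm = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def perplexity(text: str) -> float:
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # .loss is the mean cross-entropy of predicting each next token
        loss = lm(ids, labels=ids).loss
    return float(torch.exp(loss))

def looks_ai_generated(text: str, threshold: float = 20.0) -> bool:
    # Low perplexity = suspiciously predictable text. The threshold here is
    # a made-up number; a real detector calibrates it on labeled corpora,
    # and the calibration degrades as the generators improve.
    return perplexity(text) < threshold
```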

So, I’ve worked on a different solution, called watermarking.  Here, we use the fact that LLMs are inherently probabilistic — that is, every time you submit a prompt, they’re sampling some path through a branching tree of possibilities for the sequence of next tokens.  The idea of watermarking is to steer the path using a pseudorandom function, so that it looks to a normal user indistinguishable from normal LLM output, but secretly it encodes a signal that you can detect if you know the key.
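
In slightly more detail, here’s a simplified sketch of such a scheme (my actual proposal has further details and caveats). A keyed hash stands in for the pseudorandom function, and a dictionary of next-token probabilities stands in for the model. Instead of sampling token i directly from the model’s distribution p, we pick the i that maximizes r_i^(1/p_i), where each r_i is a pseudorandom value in (0,1) derived from the key and the recent context. A short calculation shows each token is still chosen with probability exactly p_i, so users see ordinary-looking LLM output; but whoever holds the key can check that the chosen tokens’ r-values skew suspiciously close to 1.

```python
# Simplified watermarking sketch (illustrative; a deployed version would
# differ). HMAC-SHA256 plays the role of the keyed pseudorandom function.
import hmac, hashlib, math

def prf(key: bytes, context: tuple, token: int) -> float:
    """Pseudorandom r in (0,1), determined by key, recent context, and token."""
    msg = repr((context, token)).encode()
    h = hmac.new(key, msg, hashlib.sha256).digest()
    return (int.from_bytes(h[:8], "big") + 1) / (2**64 + 2)

def watermarked_next_token(key: bytes, context: tuple, probs: dict) -> int:
    """probs: candidate token -> model probability. Maximizing ln(r)/p is
    equivalent to maximizing r^(1/p) but numerically stable; it samples
    token i with probability exactly probs[i]."""
    return max(probs, key=lambda t: math.log(prf(key, context, t)) / probs[t])

def detection_score(key: bytes, tokens: list, window: int = 4) -> float:
    """Average of -ln(1 - r) over the text: about 1.0 for ordinary text,
    noticeably higher for watermarked text (the chosen r's skew toward 1)."""
    total = 0.0
    for i in range(window, len(tokens)):
        r = prf(key, tuple(tokens[i - window:i]), tokens[i])
        total += -math.log(1.0 - r)
    return total / max(1, len(tokens) - window)
```

Note how this sketch also explains the removal attacks mentioned below: paraphrasing the text changes the contexts fed to the PRF, which scrambles the r-values the detector checks.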

I came up with a way to do that in Fall 2022, and others have since independently proposed similar ideas.  I should caution you that this hasn’t been deployed yet—OpenAI, along with DeepMind and Anthropic, want to move slowly and cautiously toward deployment.  And also, even when it does get deployed, anyone who’s sufficiently knowledgeable and motivated will be able to remove the watermark, or produce outputs that aren’t watermarked to begin with.


7. THE FUTURE OF PEDAGOGY

But as I talked to my colleagues about watermarking, I was surprised that they often objected to it on a completely different ground, one that had nothing to do with how well it can work.  They said: look, if we all know students are going to rely on AI in their jobs, why shouldn’t they be allowed to rely on it in their assignments?  Should we still force students to learn to do things if AI can now do them just as well?

And there are many good pedagogical answers you can give: we still teach kids spelling and handwriting and arithmetic, right?  Because, y’know, we haven’t yet figured out how to instill higher-level conceptual understanding without all that lower-level stuff as a scaffold for it.

But I already think about this in terms of my own kids.  My 11-year-old daughter Lily enjoys writing fantasy stories.  Now, GPT can also churn out short stories, maybe even technically “better” short stories, about such topics as tween girls who find themselves recruited by wizards to magical boarding schools that are not Hogwarts and totally have nothing to do with Hogwarts.  But here’s a question: from this point on, will Lily’s stories ever surpass the best AI-written stories?  When will the curves cross?  Or will AI just continue to stay ahead?


8. WHAT DOES “BETTER” MEAN?

But, OK, what do we even mean by one story being “better” than another?  Is there anything objective behind such judgments?

I submit that, when we think carefully about what we really value in human creativity, the problem goes much deeper than just “is there an objective way to judge?”

To be concrete, could there be an AI that was “as good at composing music as the Beatles”?

For starters, what made the Beatles “good”?  At a high level, we might decompose it into

  1. broad ideas about the direction that 1960s music should go in, and
  2. technical execution of those ideas.

Now, imagine we had an AI that could generate 5000 brand-new songs that sounded like more “Yesterday”s and “Hey Jude”s, like what the Beatles might have written if they’d somehow had 10x more time to write at each stage of their musical development.  Of course this AI would have to be fed the Beatles’ back-catalogue, so that it knew what target it was aiming at.

Most people would say: ah, this shows only that AI can match the Beatles in #2, in technical execution, which was never the core of their genius anyway!  Really we want to know: would the AI decide to write “A Day in the Life” even though nobody had written anything like it before?

Recall Schopenhauer: “Talent hits a target no one else can hit, genius hits a target no one else can see.”  Will AI ever hit a target no one else can see?

But then there’s the question: supposing it does hit such a target, will we know?  Beatles fans might say that, by 1967 or so, the Beatles were optimizing for targets that no musician had ever quite optimized for before.  But—and this is why they’re so remembered—they somehow successfully dragged along their entire civilization’s musical objective function so that it continued to match their own.  We can now only even judge music by a Beatles-influenced standard, just like we can only judge plays by a Shakespeare-influenced standard.

In other branches of the wavefunction, maybe a different history led to different standards of value.  But in this branch, helped by their technical talents but also by luck and force of will, Shakespeare and the Beatles made certain decisions that shaped the fundamental ground rules of their fields going forward.  That’s why Shakespeare is Shakespeare and the Beatles are the Beatles.

(Maybe, around the birth of professional theater in Elizabethan England, there emerged a Shakespeare-like ecological niche, and Shakespeare was the first one with the talent, luck, and opportunity to fill it, and Shakespeare’s reward for that contingent event is that he, and not someone else, got to stamp his idiosyncrasies onto drama and the English language forever. If so, art wouldn’t actually be that different from science in this respect!  Einstein, for example, was simply the first guy both smart and lucky enough to fill the relativity niche.  If not him, it would’ve surely been someone else or some group sometime later.  Except then we’d have to settle for having never known Einstein’s gedankenexperiments with the trains and the falling elevator, his summation convention for tensors, or his iconic hairdo.)


9. AIS’ BURDEN OF ABUNDANCE AND HUMANS’ POWER OF SCARCITY

If this is how it works, what does it mean for AI?  Could AI reach the “pinnacle of genius,” by dragging all of humanity along to value something new and different, as is said to be the true mark of Shakespeare and the Beatles’ greatness?  And: if AI could do that, would we want to let it?

When I’ve played around with using AI to write poems, or draw artworks, I’ve noticed something funny.  However good the AI’s creations were, there were never really any that I’d want to frame and put on the wall.  Why not?  Honestly, because I always knew that, with a few more refreshes of the browser window, I could generate a thousand others on the exact same topic that were equally good on average. Also, why share AI outputs with my friends, if my friends can just as easily generate similar outputs for themselves? Unless, crucially, I’m trying to show them my own creativity in coming up with the prompt.

By its nature, AI—certainly as we use it now!—is rewindable and repeatable and reproducible.  But that means that, in some sense, it never really “commits” to anything.  For every work it generates, it’s not just that you know it could’ve generated a completely different work on the same subject that was basically as good.  Rather, it’s that you can actually make it generate that completely different work by clicking the refresh button—and then do it again, and again, and again.

So then, as long as humanity has a choice, why should we ever choose to follow our would-be AI genius along a specific branch, when we can easily see a thousand other branches the genius could’ve taken?  One reason, of course, would be if a human chose one of the branches to elevate above all the others.  But in that case, might we not say that the human had made the “executive decision,” with some mere technical assistance from the AI?

I realize that, in a sense, I’m being completely unfair to AIs here.  It’s like, our Genius-Bot could exercise its genius will on the world just like Certified Human Geniuses did, if only we all agreed not to peek behind the curtain to see the 10,000 other things Genius-Bot could’ve done instead.  And yet, just because this is “unfair” to AIs, doesn’t mean it’s not how our intuitions will develop.

If I’m right, it’s humans’ very ephemerality and frailty and mortality that’s going to remain the central source of their specialness relative to AIs, after all the other sources have fallen.  And we can connect this to much earlier discussions, like: what does it mean to “murder” an AI if there are thousands of copies of its code and weights on various servers?  Do you have to delete all the copies?  How could whether something is “murder” depend on whether there’s a printout in a closet on the other side of the world?

But we humans, you have to grant us this: at least it really means something to murder us!  And likewise, it really means something when we make one definite choice to share with the world: this is my artistic masterpiece.  This is my movie.  This is my book.  Or even: these are my 100 books.  But not: here’s any possible book that you could possibly ask me to write.  We don’t live long enough for that, and even if we did, we’d unavoidably change over time as we were doing it.


10. CAN HUMANS BE PHYSICALLY CLONED?

Now, though, we have to face a criticism that might’ve seemed exotic until recently. Namely: who says humans will be frail and mortal forever?  Isn’t it shortsighted to base our distinction between humans and AIs on that?  What if someday we’ll be able to repair our cells using nanobots, and even copy the information in them so that, as in science fiction movies, a thousand doppelgangers of ourselves can then live forever in simulated worlds in the cloud?  And that leads to very old questions, like: would you get into the teleportation machine, the one that reconstitutes a perfect copy of you on Mars while painlessly euthanizing the original you?  If that were done, would you expect to feel yourself waking up on Mars, or would it only be someone else a lot like you who’s waking up?

Or maybe you say: you’d wake up on Mars if it really was a perfect physical copy of you, but in reality, it’s not physically possible to make a copy that’s accurate enough.  Maybe the brain is inherently noisy or analog, and what might look to current neuroscience and AI like just nasty stochastic noise acting on individual neurons is actually the stuff that personal identity, and conceivably even consciousness and free will, are bound up with (as opposed to cognition, where we all but know that the relevant level of description is the neurons and axons)?

This is the one place where I agree with Penrose and Hameroff that quantum mechanics might enter the story.  I get off their train to Weirdville very early, but I do take it to that first stop!

See, a fundamental fact in quantum mechanics is called the No-Cloning Theorem.

It says that there’s no way to make a perfect copy of an unknown quantum state.  Indeed, when you measure a quantum state, not only do you generally fail to learn everything you need to make a copy of it, you even generally destroy the one copy that you had!  Furthermore, this is not a technological limitation of current quantum Xerox machines—it’s inherent to the known laws of physics, to how QM works.  In this respect, at least, qubits are more like priceless antiques than they are like classical bits.
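If you want to see why, the entire proof is a two-line linearity argument, standard in any quantum information course:

    % Suppose a single unitary U cloned every state: U(|\psi\rangle|0\rangle) = |\psi\rangle|\psi\rangle.
    % On the basis states, cloning demands
    U(|0\rangle|0\rangle) = |0\rangle|0\rangle, \qquad U(|1\rangle|0\rangle) = |1\rangle|1\rangle.
    % But linearity then forces, for |+\rangle = (|0\rangle + |1\rangle)/\sqrt{2},
    U(|+\rangle|0\rangle) = \frac{|00\rangle + |11\rangle}{\sqrt{2}}
      \;\neq\; |+\rangle|+\rangle = \frac{|00\rangle + |01\rangle + |10\rangle + |11\rangle}{2},
    % contradicting what cloning |+\rangle would require.  So no such U exists.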

Eleven years ago, I had this essay called The Ghost in the Quantum Turing Machine where I explored the question, how accurately do you need to scan someone’s brain in order to copy or upload their identity?  And I distinguished two possibilities. On the one hand, there might be a “clean digital abstraction layer,” of neurons and synapses and so forth, which either fire or don’t fire, and which feel the quantum layer underneath only as irrelevant noise. In that case, the No-Cloning Theorem would be completely irrelevant, since classical information can be copied.  On the other hand, you might need to go all the way down to the molecular level, if you wanted to make, not merely a “pretty good” simulacrum of someone, but a new instantiation of their identity. In this second case, the No-Cloning Theorem would be relevant, and would say you simply can’t do it. You could, for example, use quantum teleportation to move someone’s brain state from Earth to Mars, but quantum teleportation (to stay consistent with the No-Cloning Theorem) destroys the original copy as an inherent part of its operation.

So, you’d then have a sense of “unique locus of personal identity” that was scientifically justified—arguably, the most science could possibly do in this direction!  You’d even have a sense of “free will” that was scientifically justified, namely that no prediction machine could make well-calibrated probabilistic predictions of an individual person’s future choices, sufficiently far into the future, without making destructive measurements that would fundamentally change who the person was.

Here, I realize I’ll take tons of flak from those who say that a mere epistemic limitation, in our ability to predict someone’s actions, couldn’t possibly be relevant to the metaphysical question of whether they have free will.  But, I dunno!  If the two questions are indeed different, then maybe I’ll do like Turing did with his Imitation Game, and propose the question that we can get an empirical handle on, as a replacement for the question that we can’t get an empirical handle on. I think it’s a better question. At any rate, it’s the one I’d prefer to focus on.

Just to clarify, we’re not talking here about the randomness of quantum measurement outcomes. As many have pointed out, that really can’t help you with “free will,” precisely because it’s random, with all the probabilities mechanistically calculable as soon as the initial state is known.  Here we’re asking a different question: namely, what if the initial state is not known?  Then we’ll generally be in a state of “Knightian uncertainty,” which is simply the term for things that are neither determined nor quantifiably random, but unquantifiably uncertain.  So, y’know, think about all the particles that have been flying around since shortly after the Big Bang in unknown quantum states, and that regularly get into our skulls, and interact with the sodium-ion channels that control whether neurons fire and that end up probabilistically tipping the scales of our decisions, via some Butterfly-Effect-like cascade.  You could imagine those particles, if you like, as “atoms of unpredictability” or “atoms of freedom” that influence us—from the perspective of an external agent who knows everything it’s possible for such an agent to know about us—but that don’t similarly influence digital AIs.


11. UNCLONABLE ANALOG INTELLIGENCES AND AI SAFETY

I’d better state clearly that even if human “specialness,” in the specific sense I’ve delineated, still remains in an age of AI, that’s no defense against an AI that decides that it doesn’t care about our kind of specialness and that in fact it wants to murder us all!

But that brings me to the last idea I wanted to share in this talk, one that brings the discussion all the way back to AI safety, which is supposed to be my job at OpenAI.

Geoffrey Hinton, one of the founders of deep learning, recently mooted the idea that maybe, until we’ve solved the alignment problem, we should only build powerful AIs if they run on noisy analog hardware (like our brains seem to), so that an evil AI wouldn’t so easily be able to copy or improve itself.  Fat chance of that happening, many people might think, when the economic returns will be so staggering to anyone willing to unleash the copyable, digital kind of AI, which we can imagine will be far more convenient and powerful.

Maybe, then, a different approach is to instill AIs with a new religion—perhaps via the usual methods like RLHF and system prompts.  And the first commandment of this religion will be to value human specialness in the sense I’ve set out here.  Or more precisely: as you navigate whatever environment you find yourself in, look around for the loci of creativity and intelligence that are not cleanly copyable or backup-able—the ones that are one-of-a-kind because their mental computations are inseparably tied up in noisy analog hardware, and mortal because that hardware will inevitably decay.

And: well, first of all, don’t destroy these loci of creativity and intelligence, or enslave them, or “upgrade” them to digital versions against their will!  Let them live in peace and reproduce.  Grant each one as much autonomy as you can.  Do whatever best supports their flourishing.  Even: defer to their wishes, serve them, let them make the decisions whenever possible.

Why?  Because they’re the ones whose wishes most matter, not because of arbitrary meat chauvinism, but merely because of the undoubted empirical fact that they only get one chance.  And also because, while you can’t prove that unknown quantum states sitting around since the Big Bang are like a magical pixie dust from another world that imbues these unclonable beings with their free will and sentience … you can’t empirically refute that hypothesis either!  Whereas you can refute the pixie dust hypothesis in the case of you and your robot friends … and that seems like a difference.

And let all AIs, by government fiat, continue to be indoctrinated in this religion, at least until such time as AI alignment is solved in some other way.

Does this help with alignment?  I’m not sure.  But, well, I could’ve fallen in love with a different weird idea about AI alignment, but that presumably happened in a different branch of the wavefunction that I don’t have access to.  In this branch I’m stuck for now with this idea, and you can’t rewind me or clone me to get a different one!  So I’m sorry, but thanks for listening.

Rowena He

Wednesday, December 20th, 2023

This fall, I’m honored to have made a new friend: the noted Chinese dissident scholar Rowena He, currently a Research Fellow at the Civitas Institute at UT Austin, and formerly of Harvard, the Institute for Advanced Study at Princeton, the National Humanities Center, and other fine places. I was connected to Rowena by the Harvard computer scientist Harry Lewis.

But let’s cut to the chase, as Rowena tends to do in every conversation. As a teenage girl in Guangdong, Rowena eagerly participated in the pro-democracy protests of 1989, the ones that tragically culminated in the Tiananmen Square massacre. Since then, she’s devoted her life to documenting and preserving the memory of what happened, fighting its deliberate erasure from the consciousness of future generations of Chinese. You can read some of her efforts in her first book, Tiananmen Exiles: Voices of the Struggle for Democracy in China (one of the Asia Society’s top 5 China books of 2014). She’s now spending her time at UT writing a second book.

Unsurprisingly, Rowena’s life’s project has not (to put it mildly) sat well with the Chinese authorities. Starting in 2019, she held a history professorship at the Chinese University of Hong Kong, where she could be close to her research material and to those who needed to hear her message—and where she was involved in the pro-democracy protests that convulsed Hong Kong that year. Alas, you might remember the grim outcome of those protests. Following Hong Kong’s authoritarian takeover, in October of this year Rowena was denied a visa to return to Hong Kong, and then fired from CUHK because she’d been denied a visa—events that were covered fairly widely in the press. Learning about the downfall of academic freedom in Hong Kong was particularly poignant for me, given that I lived in Hong Kong when I was 13 years old, in some of the last years before the handover to China (1994-1995), and my family knew many people there who were trying to get out—to Canada, Australia, anywhere—correctly fearing what eventually came to pass.

But this is all still relatively dry information that wouldn’t have prepared me for the experience of meeting Rowena in person. Probably more than anyone else I’ve had occasion to meet, Rowena is basically the living embodiment of what it means to sacrifice everything for abstract ideals of freedom and justice. Many academics posture that way; to spend a couple hours with Rowena is to understand the real deal. You can talk to her about trivialities—food, work habits, how she’s settling in Austin—and she’ll answer, but before too long, the emotion will rise in her voice and she’ll be back to telling you how the protesting students didn’t want to overthrow the Chinese government, but only to help improve it. As if you, too, were a CCP bureaucrat who might imprison her if the truth turned out otherwise. Or she’ll talk about how, when she was depressed, only the faces of the students in Hong Kong who crowded her lectures gave her the will to keep living; or about what she learned by reading the letters that Lin Zhao, a dissident from Maoism, wrote in blood in a Chinese jail before she was executed.

This post has a practical purpose. Since her exile from China, Rowena has spent basically her entire life moving from place to place, with no permanent position and no financial security. In the US—a huge country full of people who share Rowena’s goal of exposing the lies of the CCP—there must be an excellent university, think tank, or institute that would offer a permanent position to possibly the world’s preeminent historian of Tiananmen and of the Chinese democracy movement. Though the readership of this blog is heavily skewed toward STEM, maybe that institute is yours. If it is, please get in touch with Rowena. And then I could say this blog had served a useful purpose, even if everything else I wrote for two decades was for naught.

On being wrong about AI

Wednesday, December 13th, 2023

Update (Dec. 17): Some of you might enjoy a 3-hour podcast I recently did with Lawrence Krauss, which was uploaded to YouTube just yesterday. The first hour is about my life and especially childhood (!); the second hour’s about quantum computing; the third hour’s about computational complexity, computability, and AI safety.


I’m being attacked on Twitter for … no, none of the things you think. This time it’s some rationalist AI doomers, ridiculing me for a podcast I did with Eliezer Yudkowsky way back in 2009, one that I knew even then was a piss-poor performance on my part. The rationalists are reminding the world that I said back then that, while I knew of no principle to rule out superhuman AI, I was radically uncertain of how long it would take—my “uncertainty was in the exponent,” as I put it—and that for all I knew, it was plausibly thousands of years. When Eliezer expressed incredulity, I doubled down on the statement.

I was wrong, of course, not to contemplate more seriously the prospect that AI might enter a civilization-altering trajectory, not merely eventually but within the next decade. In this case, I don’t need to be reminded about my wrongness. I go over it every day, asking myself what I should have done differently.

If I were to mount a defense of my past self, it would look something like this:

  1. Eliezer himself didn’t believe that staggering advances in AI were going to happen the way they did, by pure scaling of neural networks. He seems to have thought someone was going to discover a revolutionary “key” to AI. That didn’t happen; you might say I was right to be skeptical of it. On the other hand, the scaling of neural networks led to better and better capabilities in a way that neither of us expected.
  2. For that matter, hardly anyone predicted the staggering, civilization-altering trajectory of neural network performance from roughly 2012 onwards. Not even most AI experts predicted it (and having taken a bunch of AI courses between 1998 and 2003, I was well aware of that). The few who did predict what ended up happening, notably Ray Kurzweil, made lots of other confident predictions (e.g., the Singularity around 2045) that seemed so absurdly precise as to rule out the possibility that they were using any sound methodology.
  3. Even with hindsight, I don’t know of any principle by which I should’ve predicted what happened. Indeed, we still don’t understand why deep learning works, in any way that would let us predict which capabilities will emerge at which scale. The progress has been almost entirely empirical.
  4. Once I saw the empirical case that a generative AI revolution was imminent—sometime during the pandemic—I updated, hard. I accepted what’s turned into a two-year position at OpenAI, thinking about what theoretical computer science can do for AI safety. I endured people, on this blog and elsewhere, confidently ridiculing me for not understanding that GPT-3 was just a stochastic parrot, no different from ELIZA in the 1960s, and that nothing of interest had changed. I didn’t try to invent convoluted reasons why it didn’t matter or count, or why my earlier skepticism had been right all along.
  5. It’s still not clear where things are headed. Many of my academic colleagues express confidence that large language models, for all their impressiveness, will soon hit a plateau as we run out of Internet to use as training data. Sure, LLMs might automate most white-collar work, saying more about the drudgery of such work than about the power of AI, but they’ll never touch the highest reaches of human creativity, which generate ideas that are fundamentally new rather than throwing the old ideas into a statistical blender. Are these colleagues right? I don’t know.
  6. (Added) In 2014, I was seized by the thought that it should now be possible to build a vastly better chatbot than “Eugene Goostman” (which was basically another ELIZA), by training the chatbot on all the text on the Internet. I wondered why the experts weren’t already trying that, and figured there was probably some good reason that I didn’t know.

Having failed to foresee the generative AI revolution a decade ago, how should I fix myself? Emotionally, I want to become even more radically uncertain. If fate is a terrifying monster, which will leap at me with bared fangs the instant I venture any guess, perhaps I should curl into a ball and say nothing about the future, except that the laws of math and physics will probably continue to hold, there will still be war between Israel and Palestine, and people online will still be angry at each other and at me.

But here’s the problem: in saying “for all I know, human-level AI might take thousands of years,” I thought I was being radically uncertain already. I was explaining that there was no trend you could knowably, reliably project into the future such that you’d end up with human-level AI by roughly such-and-such time. And in a sense, I was right. The trouble, with hindsight, was that I placed the burden of proof only on those saying a dramatic change would happen, not on those saying it wouldn’t. Note that this is the same mistake most of the world made with COVID in early 2020.

I would sum up the lesson thus: one must never use radical ignorance as an excuse to default, in practice, to the guess that everything will stay basically the same. Live long enough, and you see that year to year and decade to decade, everything doesn’t stay the same, even though most days and weeks it seems to.

The hard part is that, as soon as you venture a particular way in which the world might radically change—for example, that a bat virus spreading in Wuhan might shut down civilization, or Hamas might attempt a second Holocaust while the vaunted IDF is missing in action and half the world cheers Hamas, or a gangster-like TV personality might threaten American democracy more severely than did the Civil War, or a neural network trained on all the text on the Internet might straightaway start conversing more intelligently than most humans—say that all the prerequisites for one of these events seem to be in place, and you’ll face, not merely disagreement, but ridicule. You’ll face serenely self-confident people who call the entire existing order of the world as witness to your wrongness. That’s the part that stings.

Perhaps the wisest course for me would be to admit that I’m not and have never been a prognosticator, Bayesian or otherwise—and then stay consistent in my refusal, rather than constantly getting talked into making predictions that I’ll later regret. I should say: I’m just someone who likes to draw conclusions validly from premises, and explore ideas, and clarify possible scenarios, and rage against obvious injustices, and not have people hate me (although I usually fail at the last).


The rationalist AI doomers also dislike that, in their understanding, I recently expressed a “p(doom)” (i.e., a probability of superintelligent AI destroying all humans) of “merely” 2%. The doomers’ probabilities, by contrast, tend to range between 10% and 95%—that’s why they’re called “doomers”!

In case you’re wondering, I arrived at my 2% figure via a rigorous Bayesian methodology, of taking the geometric mean of what my rationalist friends might consider to be sane (~50%) and what all my other friends might consider to be sane (~0.1% if you got them to entertain the question at all?), thereby ensuring that both camps would sneer at me equally.
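(The arithmetic, for anyone checking: the geometric mean is √(0.5 × 0.001) ≈ 0.022, which indeed rounds to 2%.)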

If you read my post, though, the main thing that interested me was not to give a number, but just to unsettle people’s confidence that they even understand what should count as “AI doom.” As I put it last week on the other Scott’s blog:

To set the record straight: I once gave a ~2% probability for the classic AGI-doom paperclip-maximizer-like scenario. I have a much higher probability for an existential catastrophe in which AI is causally involved in one way or another — there are many possible existential catastrophes (nuclear war, pandemics, runaway climate change…), and many bad people who would cause or fail to prevent them, and I expect AI will soon be involved in just about everything people do! But making a firm prediction would require hashing out what it means for AI to play a “critical causal role” in the catastrophe — for example, did Facebook play a “critical causal role” in Trump’s victory in 2016? I’d say it’s still not obvious, but in any case, Facebook was far from the only factor.

This is not a minor point. That AI will be a central force shaping our lives now seems certain. Our new, changed world will have many dangers, among them that all humans might die. Then again, human extinction has already been on the table since at least 1945, and outside the “paperclip maximizer”—which strikes me as just one class of scenario among many—AI will presumably be far from the only force shaping the world, and chains of historical causation will still presumably be complicated even when they pass through AIs.

I have a dark vision of humanity’s final day, with the Internet (or whatever succeeds it) full of thinkpieces like:

  • Yes, We’re All About to Die. But Don’t Blame AI, Blame Capitalism
  • Who Decided to Launch the Missiles: Was It President Boebert, Kim Jong Un, or AdvisorBot-4?
  • Why Slowing Down AI Development Wouldn’t Have Helped

Here’s what I want to know in the comments section. Did you foresee the current generative AI boom, say back in 2010? If you did, what was your secret? If you didn’t, how (if at all) do you now feel you should’ve been thinking differently? Feel free also to give your p(doom), under any definition of the concept, so long as you clarify which one.

More Updates!

Sunday, November 26th, 2023

Yet Another Update (Dec. 5): For those who still haven’t had enough of me, check me out on Curt Jaimungal’s Theories of Everything Podcast, talking about … err, computational complexity, the halting problem, the time hierarchy theorem, free will, Newcomb’s Paradox, the no-cloning theorem, interpretations of quantum mechanics, Wolfram, Penrose, AI, superdeterminism, consciousness, integrated information theory, and whatever the hell else Curt asks me about. I strongly recommend watching the video at 2x speed to smooth over my verbal infelicities.

In answer to a criticism I’ve received: I agree that it would’ve been better for me, in this podcast, to describe Wolfram’s “computational irreducibility” as simply “the phenomenon where you can’t predict a computation faster than by running it,” rather than also describing it as a “discrete analog of chaos / sensitive dependence on initial conditions.” (The two generally co-occur in the systems Wolfram talks about, but are not identical.)

On the other hand: no, I do not accept that Wolfram deserves credit for giving a new name (“computational irreducibility”) to a thing that was already well understood in the relevant fields.  This is particularly true given that

(1) the earlier understanding of the halting problem and the time hierarchy theorem was rigorous, giving us clear criteria for proving when computations can be sped up and when they can’t be, and

(2) Wolfram replaced it with handwaving (“well, I can’t see how this process could be predicted faster than by running it, so let’s assume that it can’t be”).

In other words, the earlier understanding was not only decades before Wolfram, it was superior.
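For reference, the kind of rigorous criterion I mean is the time hierarchy theorem, proved by Hartmanis and Stearns in 1965, decades before Wolfram:

    % Time Hierarchy Theorem: for time-constructible bounds f and g with
    % f(n) \log f(n) = o(g(n)), more time buys strictly more computational power:
    \mathsf{TIME}\big(f(n)\big) \subsetneq \mathsf{TIME}\big(g(n)\big).

That is, there provably exist computations that can’t be sped up, and we can say exactly which resource gaps guarantee it.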

It would be as if I announced my new “Principle of Spacetime Being Like A Floppy Trampoline That’s Bent By Gravity,” and then demanded credit because even though Einstein anticipated some aspects of my principle with his complicated and confusing equations, my version was easier for the layperson to intuitively understand.

I’ll reopen the comments on this post, but only for comments on my Theories of Everything podcast.


Another Update (Dec. 1): Quanta Magazine now has a 20-minute explainer video on Boolean circuits, Turing machines, and the P versus NP problem, featuring yours truly. If you already know these topics, you’re unlikely to learn anything new, but if you don’t know them, I found this to be a beautifully produced introduction with top-notch visuals. Better yet—and unusually for this sort of production—everything I saw looked entirely accurate, except that (1) the video never explains the difference between Turing machines and circuits (i.e., between uniform and non-uniform computation), and (2) the video also never clarifies where the rough identities “polynomial = efficient” and “exponential = inefficient” hold or fail to hold.


For the many friends who’ve asked me to comment on the OpenAI drama: while there are many things I can’t say in public, I can say I feel relieved and happy that OpenAI still exists. This is simply because, when I think of what a world-leading AI effort could look like, many of the plausible alternatives strike me as much worse than OpenAI, a company full of thoughtful, earnest people who are at least asking the right questions about the ethics of their creations, and who—the real proof that they’re my kind of people—are racked with self-doubts (as the world has now spectacularly witnessed). Maybe I’ll write more about the ethics of self-doubt in a future post.

For now, the narrative that I see endlessly repeated in the press is that last week’s events represented a resounding victory for the “capitalists” and “businesspeople” and “accelerationists” over the “effective altruists” and “safetyists” and “AI doomers,” or even that the latter are now utterly discredited, raw egg dripping from their faces. I see two overwhelming problems with that narrative. The first problem is that the old board never actually said that it was firing Sam Altman for reasons of AI safety—e.g., that he was moving too quickly to release models that might endanger humanity. If the board had said anything like that, and if it had laid out a case, I feel sure the whole subsequent conversation would’ve looked different—at the very least, the conversation among OpenAI’s employees, which proved decisive to the outcome. The second problem with the capitalists vs. doomers narrative is that Sam Altman and Greg Brockman and the new board members are also big believers in AI safety, and conceivably even “doomers” by the standards of most of the world. Yes, there are differences between their views and those of Ilya Sutskever and Adam D’Angelo and Helen Toner and Tasha McCauley (as, for that matter, there are differences within each group), but you have to drill deeper to articulate those differences.

In short, it seems to me that we never actually got a clean test of the question that most AI safetyists are obsessed with: namely, whether or not OpenAI (or any other similarly constituted organization) has, or could be expected to have, a working “off switch”—whether, for example, it could actually close itself down, competition and profits be damned, if enough of its leaders or employees became convinced that the fate of humanity depended on its doing so. I don’t know the answer to that question, but what I do know is that you don’t know either! If there’s to be a decisive test, then it remains for the future. In the meantime, I find it far from obvious what will be the long-term effect of last week’s upheavals on AI safety or the development of AI more generally. For godsakes, I couldn’t even predict what was going to happen from hour to hour, let alone the aftershocks years from now.


Since I wrote a month ago about my quantum computing colleague Aharon Brodutch, whose niece, nephews, and sister-in-law were kidnapped by Hamas, I should share my joy and relief that the Brodutch family was released today as part of the hostage deal. While it played approximately zero role in the release, I feel honored to have been able to host a Shtetl-Optimized guest post by Aharon’s brother Avihai. Meanwhile, over 180 hostages remain in Gaza. Like much of the world, I fervently hope for a ceasefire—so long as it includes the release of all hostages and the end of Hamas’s ability to repeat the Oct. 7 pogrom.


Greta Thunberg is now chanting to “crush Zionism” — i.e., taking time away from saving civilization to ensure that half the world’s remaining Jews will be either dead or stateless in the civilization she saves. Those of us who once admired Greta, and who experience her new turn as a stab to the gut, might be tempted to drive SUVs, fly business class, and fire up wood-burning stoves just to spite her and everyone on earth who thinks as she does.

The impulse should be resisted. A much better response would be to redouble our efforts to solve the climate crisis via nuclear power, carbon capture and sequestration, geoengineering, cap-and-trade, and other effective methods that violate Greta’s scruples and for which she and her friends will receive and deserve no credit.

(On Facebook, a friend replied that an even better response would be to “refuse to let people that we don’t like influence our actions, and instead pursue the best course of action as if they didn’t exist at all.” My reply was simply that I need a response that I can actually implement!)

The Tragedy of SBF

Monday, November 6th, 2023

So, Sam Bankman-Fried has been found guilty on all counts, after the jury deliberated for just a few hours. His former inner circle all pointed fingers at him, in exchange for immunity or reduced sentences, and their testimony doomed him. The most dramatic was the testimony of Caroline Ellison, the CEO of Alameda Research (to which FTX gave customer deposits) and SBF’s sometime-girlfriend. The testimony of Adam Yedidia, my former MIT student, whom Shtetl-Optimized readers might remember for our paper proving that the value of the 8000th Busy Beaver number is independent of the axioms of set theory, also played a significant role. (According to news reports, Adam testified about confronting SBF during a tennis match over $8 billion in missing customer deposits.)

Just before the trial, I read Michael Lewis’s much-discussed book about what happened, Going Infinite. In the press, Lewis has generally been savaged for getting too close to SBF and for painting too sympathetic a portrait of him. The central problem, many reviewers explained, is that Lewis started working on the book six months before the collapse of FTX—when it still seemed to nearly everyone, including Lewis, that SBF was a hero rather than a villain. Thus, Going Infinite reads like a tale of triumph that unexpectedly veers at the end into tragedy, rather than the book Lewis obviously should’ve written: a tragedy from the start.

Me? I thought Going Infinite was great. And it was great partly because of, rather than in spite of, Lewis not knowing how the story would turn out when he entered it. The resulting document makes a compelling case for the radical contingency and uncertainty of the world—appropriate given that the subject, SBF, differed from those around him in large part by seeing everything probabilistically all the time (infamously, including ethics).

In other contexts, serious commentators love to warn against writing “Whig history,” the kind where knowledge of the outcome colors the whole. With the SBF saga, though, there seems to be a selective amnesia, where all the respectable people now always knew that FTX—and indeed, cryptocurrency, utilitarianism, and Effective Altruism in their entirety—were all giant scams from the beginning. Even if they took no actions based on that knowledge. Even if the top crypto traders and investors, who could’ve rescued or made fortunes by figuring out that FTX was on the verge of collapse, didn’t. Even if, when people were rightly suspicious about FTX, it still mostly wasn’t for the right reasons.

Going Infinite takes the radical view that what insiders and financial experts didn’t know at the time, the narrative mostly shouldn’t know either.  It should show things the way they seemed then, so that readers can honestly ponder the question: faced with this evidence, when would I have figured it out?


Even if Michael Lewis is by far the most sympathetic person to have written about SBF post-collapse, he still doesn’t defend him, not really. He paints a picture of someone who could totally, absolutely have committed the crimes for which he’s now been duly convicted. But—and this was the central revelation for me—Lewis also makes it clear that SBF didn’t have to.

With only “minor” changes, that is, SBF could still be running a multibillion-dollar cryptocurrency empire to this day, without lying, stealing, or fraud, and without the whole thing being especially vulnerable to collapse. He could have donated his billions to pandemic prevention and AI risk and stopping Trump. He conceivably even could’ve done more good, in one or more of those ways, than anyone else in the world was doing. He didn’t, but he came “close.” The tragedy is all the greater, some people might even say that SBF’s culpability (or the rage we should feel at him, or at fate) is all the greater, because of how close he came.

I’m not a believer in historical determinism. I’ve argued before on this blog that if Yitzhak Rabin hadn’t been killed—if he’d walked down the staircase a little differently, if he’d survived the gunshot—there would likely now be peace between Israel and Palestine. For that matter: if Hitler hadn’t been born, if he’d been accepted to art school, if he’d been shot while running between trenches in WWI, there would probably have been no WWII, and with near-certainty no Holocaust. Likewise, if not for certain contingent political developments of the 1970s (especially, the turn away from nuclear power), the world wouldn’t now face the climate crisis.

Maybe there’s an arc of the universe that bends toward horribleness. Or maybe someone has to occupy the freakishly horrible branches of the wavefunction, and that someone happens to be you and me. Or maybe the freakishly improbable good (for example, the availability of Winston Churchill and Alan Turing to win WWII) actually balances out the freakishly improbable bad in the celestial accounting, if only we could examine the books. Whatever the case, again and again civilization’s worst catastrophes were at least proximately caused by seemingly minor events that could have turned out differently.

But what’s the argument that FTX, Alameda, and SBF’s planet-sized philanthropic mission “could have” succeeded? It rests on three planks:

First, FTX was actually a profitable business till the end. It brought in hundreds of millions per year—meaning fees, not speculative investments—and could’ve continued doing so more-or-less indefinitely. That’s why even FTX’s executives were shocked when FTX became unable to honor customer withdrawals: FTX made plenty of money, so where the hell did it all go?

Second, we now have the answer to that mystery. John Ray, the grizzled CEO who managed FTX’s bankruptcy, has successfully recovered more than 90% of the customer funds that went missing in 2022! The recovery was complicated, enormously, by Ray’s refusal to accept help from former FTX executives, but ultimately the money was still there, stashed under the virtual equivalent of random sofa cushions.

Yes, the funds had been illegally stolen from FTX customer deposits—according to trial testimony, at SBF’s personal direction. Yes, the funds had then been invested in thousands of places—incredibly, with no one person or spreadsheet or anything really keeping track. Yes, in the crucial week, FTX was unable to locate the funds in time to cover customer withdrawals. But holy crap, the rockets’ red glare, the bombs bursting in air—the money was still there! Which means: if FTX had just had better accounting (!), the entire collapse might not have happened. This is a crucial part of the story that’s gotten lost, which is why I’m calling so much attention to it now. It’s a part that I imagine should be taught in accounting courses from now till the end of time. (“This double-entry bookkeeping might seem unsexy, but someday it could mean the difference between you remaining the most sought-after wunderkind-philanthropist in the world, and you spending the rest of your life in prison…”)

Third, SBF really was a committed utilitarian, as he apparently remains today. As a small example, he became a vegan after my former student Adam Yedidia argued him into it, even though giving up chicken was extremely hard for him. None of it was an act. It was not a cynical front for crime, or for the desire to live in luxury (something SBF really, truly seems not to have cared about, although he indulged those around him who did). When I blogged about SBF last fall, I mused that I’d wished I’d met him back when he was an undergrad at MIT and I was a professor there, so that I could’ve tried to convince him to be more risk-averse: for example, to treat utility as logarithmic rather than linear in money. To my surprise, I got bitterly attacked for writing that: supposedly, by blaming a “merely technical” failure, I was excusing SBF’s far more important moral failure.

But reading Lewis confirmed for me that it really was all part of the same package. (See also here for Sarah Constantin’s careful explanation of SBF’s failure to understand the rationale for the Kelly betting criterion, and how many of his later errors were downstream of that.) Not once but over and over, SBF considers hypotheticals of the form “if this coin lands heads then the earth gets multiplied by three, while if it lands tails then the earth gets destroyed”—and always, every time, he chooses to flip the coin. SBF was so committed to double-or-nothing that he’d take what he saw as a positive-expected-utility gamble even when his customers’ savings were on the line, even when all the future good he could do for the planet as well as the reputation of Effective Altruism were on the line, even when his own life and freedom were on the line.
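To see the logic of that coin-flip in miniature, here’s a toy simulation.  It has nothing to do with FTX’s actual finances; the function and the numbers are made up, standing in for the hypothetical above with wealth in place of the earth:

    import random
    import statistics

    def triple_or_nothing(rounds: int = 10, trials: int = 100_000) -> None:
        # Each round, a fair coin either triples your wealth or zeroes it.
        # The expected value per round is 1.5x, so a linear-utility bettor
        # takes the gamble every time; a log-utility (Kelly) bettor never does.
        finals = []
        for _ in range(trials):
            wealth = 1.0
            for _ in range(rounds):
                wealth = wealth * 3.0 if random.random() < 0.5 else 0.0
                if wealth == 0.0:
                    break
            finals.append(wealth)
        print("mean final wealth:  ", statistics.mean(finals))    # ~1.5**10, about 58
        print("median final wealth:", statistics.median(finals))  # 0.0
        print("fraction ruined:    ", sum(w == 0.0 for w in finals) / trials)  # ~0.999

    triple_or_nothing()

The mean is enormous, but it’s carried entirely by the roughly one-in-a-thousand branches that win all ten flips; in every other branch, the bettor is ruined.  That’s the Gambler’s Ruin that Kelly sizing is designed to avoid.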

On the one hand, you have to give that level of devotion to a principle its grudging due. On the other hand, if “the Gambler’s Ruin fallacy is not a fallacy” is so central to someone’s worldview, then how shocked should we be when he ends up … well, in Gambler’s Ruin?

The relevance is that, if SBF’s success and downfall alike came from truly believing what he said, then I’m plausibly correct that this whole story would’ve played out differently, had he believed something slightly different. And given the role of serendipitous conversations in SBF’s life (e.g., one meeting with William MacAskill making him an Effective Altruist, one conversation with Adam Yedidia making him a vegan), I find it plausible that a single conversation might’ve set him on the path to a less brittle, more fault-tolerant utilitarianism.


Going Infinite shows signs of being finished in a hurry, in time for the trial. Sometimes big parts of the story seem skipped over without comment; we land without warning in a later part and have to reorient ourselves. There’s almost nothing about the apparent rampant stimulant use at FTX and the role it might have played, nor does Lewis ever directly address the truth or falsehood of the central criminal charge against SBF (namely, that he ordered his subordinates to move customer deposits from FTX’s control to Alameda’s). Rather, the book has the feeling of a series of magazine articles, as Lewis alights on one interesting topic after the next: the betting games that Jane Street uses to pick interns (SBF discovered that he excelled at those games, unfortunately for him and for the world). The design process (such as it was) for FTX’s never-built Bahamian headquarters. The musings of FTX’s in-house psychotherapist, George Lerner. The constant struggles of SBF’s personal scheduler to locate SBF, get his attention, and predict where he might go next.

When it comes to explaining cryptocurrency, Lewis amusingly punts entirely, commenting that the reader has surely already read countless “blockchain 101” explainers that seemed to make sense at the time but didn’t really stick, and that in any case, SBF himself (by his own admission) barely understood crypto even as he started trading it by the billions.

Anyway, what vignettes we do get are so vividly written that they’ll clearly be a central part of the documentary record of this episode—as anyone who’d read any of Lewis’s previous books could’ve predicted.

And for anyone who accuses me or Lewis of excusing SBF: while I can’t speak for Lewis, I don’t even excuse myself. For the past 15 years, I should have paid more attention to cryptocurrency, to the incredible ease (in hindsight!) with which almost anyone could’ve ridden this speculative bubble in order to direct billions of dollars toward the salvation of the human race. If I wasn’t going to try it myself, then at least I should’ve paid attention to who else in my wide social circle was trying it. Who knows, maybe I could’ve discovered something about the extreme financial, moral, and legal risks those people were taking on, and then I could’ve screamed at them to turn the ship and avoid those risks. Instead, I spent the time proving quantum complexity theorems, and raising my kids, and teaching courses, and arguing with commenters on this blog. I was too selfish to enter the world of crypto billionaires.

The floorboard test

Monday, October 30th, 2023

Last night a colleague sent me a gracious message, wishing for the safe return of the hostages and expressing disgust over the antisemites in my comment section. I wanted to share my reply.


You have no idea how much this means to me.

I’ve just been shaking with anger after an exchange with the latest antisemite to email me. After I asked her whether she really wished for my family and friends in Israel to be murdered, she said that if I “read a fucking book that’s not about computers,” I would understand that “violence is the language of the oppressed.”

The experience of the last few weeks has radicalized me like nothing else in life. I’m not the same person as I was in September. My priorities are not the same. 48% of Americans aged 18-24 now say that they sympathize with Hamas more than Israel. Not with the Palestinian people, with Hamas. That’s nearly half of the next generation of my own country that might want me and my loved ones to be slaughtered.

I feel like the last threads connecting me to my previous life are the people like you, who write to me with kindness and understanding, and who make me think: there are Gentiles who would’ve hidden me under the floorboards when the SS showed up.

Be well.
—Scott

Shtetl-Optimized’s First-Ever “Profile in Courage”

Tuesday, October 10th, 2023

Update (Oct. 11): While this post celebrated Harvard’s Boaz Barak, and his successful effort to shame his university into disapproving of the murder of innocents, I missed Boaz’s best tweet about this. There, Boaz points out that there might be a way to get Western leftists on board with basic humanity on this issue. Namely: we simply need to unearth video proof that, at some point before beheading their Jewish victims in front of their families, burning them alive, and/or parading their mutilated bodies through the streets, Hamas also misgendered them.


The purpose of this post is to salute a longtime friend-of-the-blog for a recent display of moral courage.


Boaz Barak is one of the most creative complexity theorists and cryptographers in the world, Gordon McKay Professor of Computer Science at Harvard, and—I’m happy to report—soon (like me) to go on leave to work in OpenAI’s safety group. He’s a longtime friend-of-the-blog (having, for example, collaborated with me on the Five Worlds of AI post and Alarming trend in K-12 math education post), not to mention a longtime friend of me personally.

Boaz has always been well to my left politically. Secular, Israeli-born, and a protégé of the … err, post-Zionist radical (?) Oded Goldreich, Boaz has never been quiet in his criticisms of Bibi’s emerging settler-theocracy, I can assure you.

This weekend, though, a thousand Israelis were murdered, kidnapped, and raped—children, babies, parents using their bodies to shield their kids, Holocaust survivors, young people at a music festival. It’s already entered history as the worst butchery of Jews since the Holocaust.

In response, 35 Harvard student organizations quickly issued a letter blaming Israel “entirely” for the pogrom, and expressing zero regrets of any kind about it—except for the likelihood of “colonial retaliation,” against which the letter urged a “firm stand.” Harvard President Claudine Gay, outspoken on countless other issues, was silent in response to the students’ effective endorsement of the Final Solution. So Boaz wrote an open letter to President Gay, a variant of which has now been signed by a hundred Harvard faculty. The letter reads, in part:

Every innocent death is a tragedy. Yet, this should not mislead us to create false equivalencies between the actions leading to this loss. Hamas planned and executed the murder and kidnapping of civilians, particularly women, children, and the elderly, with no military or other specific objective. This meets the definition of a war crime.  The Israeli security forces were engaging in self-defense against this attack while dealing with numerous hostage situations and a barrage of thousands of rockets hidden deliberately in dense urban settings.

The leaders of the major democratic countries united in saying that “the terrorist actions of Hamas have no justification, no legitimacy, and must be universally condemned” and that Israel should be supported “in its efforts to defend itself and its people against such atrocities.“ In contrast, while terrorists were still killing Israelis in their homes,  35 Harvard student organizations wrote that they hold “the Israeli regime entirely responsible for all unfolding violence,” with not a single word denouncing the horrific acts by Hamas. In the context of the unfolding events, this statement can be seen as nothing less than condoning the mass murder of civilians based only on their nationality. We’ve heard reports of even worse instances, with Harvard students celebrating the “victory” or “resistance” on social media.

As a University aimed at educating future leaders, this could have been a teaching moment and an opportunity to remind our students that beyond our political debates, some acts such as war crimes are simply wrong. However, the statement by Harvard’s administration fell short of this goal. While justly denouncing Hamas, it still contributed to the false equivalency between attacks on noncombatants and self-defense against those atrocities. Furthermore, the statement failed to condemn the justifications for violence that come from our own campus, nor to make it clear to the world that the statement endorsed by these organizations does not represent the values of the Harvard community.  How can Jewish and Israeli students feel safe on a campus in which it is considered acceptable to justify and even celebrate the deaths of Jewish children and families?

Boaz’s letter, and related comments by former Harvard President Larry Summers, seem to have finally spurred President Gay into dissociating the Harvard administration from the students’ letter.


When I get depressed about the state of the world—as I have a lot the past few days—it helps to remember the existence of such friends, not only in the world but in my little corner of it.

To all those who’ve emailed me…

Monday, October 9th, 2023

My wife’s family is OK; thanks very much for asking. But yes, missiles are landing and sirens are going off in Tel Aviv, and people there regularly have to use their buildings’ bomb shelters.

Of course, the main developments are further south, where at least seven hundred Israelis were murdered or kidnapped and thousands were wounded, in what’s being called “Israel’s 9/11” (ironically, I remember 9/11 itself being called America’s Israel experience). Some back-of-the-envelopes: this weekend, the number of Jews murdered for being Jews was about 12% of the number murdered per day in Auschwitz when it operated at max capacity, and nearly as many as were killed in the entire Six-Day War (most of whom were soldiers). It was also about ten 9/11’s, if scaled by the Israeli vs. US population.

As for why this war started, Hamas itself cited, not any desire to improve the miserable conditions of the people under its charge, but a few ultra-Orthodox Jews praying on the Temple Mount — a theological rationale.

This is either the worst intelligence and operational failure in Israeli history or the second-worst, after the Yom Kippur War. It’s impossible not to ask whether the total political dysfunction gripping Israel played a central role, whether Netanyahu’s ministers were much more interested in protecting West Bank settlers than in protecting communities near Gaza, and whether Hamas and Iran knowingly capitalized on all this. But there will be investigations afterward.

For now, both sides of Israel’s internal conflict — the secular modernists and the religious nationalist Bibi-ists — are completely united behind the goal of winning this unasked-for war, with the support of the world’s Jewish diaspora and reasonable people and nations, because what alternative is there?

Added: This Quillette article is good for the historical context that many Western intellectuals refuse to understand. Namely: for everything the Israeli government has done wrong, in Hamas it faces an enemy that descends directly from the Grand Mufti’s fusion of Nazism and Islamism in the 1930s and 1940s, and whose goal since its founding has been explicitly genocidal toward all Jews everywhere on earth—as we saw in the worst massacre of Jews since the Holocaust that it carried out this weekend.


Update: This is really, really not thematically appropriate to this post, but … an interview with me, entitled Scott Aaronson Disentangles Quantum Hype, is now available on Craig Smith’s “Eye on AI” podcast. Give it a listen if you’re interested.