Archive for the ‘Nerd Interest’ Category

Beth Harmon and the Inner World of Squares

Monday, December 14th, 2020

The other day Dana and I finished watching The Queen’s Gambit, Netflix’s fictional saga of an orphaned girl in the 1960s, Beth Harmon, who breaks into competitive chess and destroys one opponent after the next in her quest to become the world champion, while confronting her inner demons and addictions.

The show is every bit as astoundingly good as everyone says it is, and I might be able to articulate why. It’s because, perhaps surprisingly given the description, this is a story where chess actually matters—and indeed, the fact that chess matters so deeply to Beth and most of the other characters is central to the narrative.  (As in two pivotal scenes where Beth has sex with a male player, and then either she or he goes right back to working on chess.)

I’ve watched a lot of TV shows and movies, supposedly about scientists, where the science was just an interchangeable backdrop to what the writers clearly regarded as a more important story.  (As one random example, the drama NUMB3RS, supposedly about an FBI mathematician, where “math” could’ve been swapped out for “mystical crime-fighting intuition” with barely any change.)

It’s true that a fictional work about scientists shouldn’t try to be a science documentary, just like Queen’s Gambit doesn’t try to be a chess documentary.  But if you’re telling a story about characters who are obsessed with topic X, then you need to make their obsession plausible, make the entire story hinge on it, and even make the audience vicariously feel the same obsession.

This is precisely what Queen’s Gambit does for chess.  It’s a chess drama where the characters are constantly talking about chess, thinking about chess, and playing chess—and that actually succeeds in making that riveting.  (Even if most of the audience can’t follow what’s happening on the board, it turns out that it doesn’t matter, since you can simply convey the drama through the characters’ faces and the reactions of those around them.)

Granted, a few aspects of competitive chess in the series stood out as jarringly unrealistic even to a novice like me: for example, the almost complete lack of draws.  But as for the board positions—well, apparently Kasparov was a consultant, and he helped meticulously design each one to reflect the characters’ skill levels and what was happening in the plot.

While the premise sounds like a feminist wish-fulfillment fantasy—orphan girl faces hundreds of intimidating white men in the sexist 1960s, orphan girl beats them all at their own game with style and aplomb—this is not at all a MeToo story, or a story about male crudity or predation.  It’s after bigger fish than that.  The series, you might say, conforms to all the external requirements of modern woke ideology, yet the actual plot subverts the tenets of that ideology, or maybe just ignores them, in its pursuit of more timeless themes.

At least once Beth Harmon enters the professional chess world, the central challenges she needs to overcome are internal and mental—just like they’re supposed to be in chess.  It’s not the Man or the Patriarchy or any other external power (besides, of course, skilled opponents) holding her down.  Again and again, the top male players are portrayed not as sexist brutes but as gracious, deferential, and even awestruck by Beth’s genius after she’s humiliated them on the chessboard.  And much of the story is about how those vanquished opponents then turn around and try to help Beth, and about how she needs to learn to accept their help in order to evolve as a player and a human being.

There’s also the fact that, after defeating male player after male player, Beth sleeps with them, or at least wants to.  I confess that, as a teenager, I would’ve found that unlikely and astonishing.  I would’ve said: obviously, the only guys who’d even have a chance to prove themselves worthy of the affection of such a brilliant and unique woman would be those who could beat her at chess.  Anyone else would just be dirt between her toes.  In the series, though, each male player propositions Beth only after she’s soundly annihilated him.  And she’s never once shown refusing.

Obviously, I’m no Beth Harmon; I’ll never be close in my field to what she is in hers.  Equally obviously, I grew up in a loving family, not an orphanage.  Still, I was what some people would call a “child prodigy,” what with the finishing my PhD at 22 and whatnot, so naturally that colored my reaction to the show.

There’s a pattern that goes like this: you’re obsessively interested, from your first childhood exposure, in something that most people aren’t.  Once you learn what the something is, it’s evident to you that your life’s vocation couldn’t possibly be anything else, unless some external force prevents you.  Alas, in order to pursue the something, you first need to get past bullies and bureaucrats, who dismiss you as a nobody, put barriers in your way, despise whatever you represent to them.  After a few years, though, the bullies can no longer stop you: you’re finally among peers or superiors in your chosen field, regularly chatting with them on college campuses or at conferences in swanky hotels, and the main limiting factor is just the one between your ears. 

You feel intense rivalries with your new colleagues, of course, you desperately want to surpass them, but the fact that they’re all on the same obsessive quest as you means you can never actually hate them, as you did the bureaucrats or the bullies.  There’s too much of you in your competitors, and of them in you.

As you pursue your calling, you feel yourself torn in the following way.  On the one hand, you feel close to a moral obligation to humanity not to throw away whatever “gift” you were “given” (what loaded terms), to take the calling as far as it will go.  On the other hand, you also want the same things other people want, like friendship, validation, and of course sex.

In such a case, two paths naturally beckon.  The first is that of asceticism: making a virtue of eschewing all temporal attachments, romance or even friendship, in order to devote yourself entirely to the calling.  The second is that of renouncing the calling, pretending it never existed, in order to fit in and have a normal life.  Your fundamental challenge is to figure out a third path, to plug yourself into a community where the relentless pursuit of the unusual vocation and the friendship and the sex can all complement each other rather than being at odds.

It would be an understatement to say that I have some familiarity with this narrative arc.

I’m aware, of course, of the irony, that I can identify with so many contours of Beth Harmon’s journey—I, Scott Aaronson, who half the Internet denounced six years ago as a misogynist monster who denies the personhood and interiority of women.  In that life-alteringly cruel slur, there was a microscopic grain of truth, and it’s this: I’m not talented at imagining myself into the situations of people different from me.  It’s never been my strong suit.  I might like and admire people different from me, I might sympathize with their struggles and wish them every happiness, but I still don’t know what they’re thinking until they tell me.  And even then, I don’t fully understand it.

As one small but illustrative example, I have no intuitive understanding—zero—of what it’s like to be romantically attracted to men, or what any man could do or say or look like that could possibly be attractive to women.  If you have such an understanding, then imagine yourself sighted and me blind.  Intellectually, I might know that confidence or height or deep brown eyes or brooding artistry are supposed to be attractive in human males, but only because I’m told.  As far as my intuition is concerned, pretty much all men are equally hairy, smelly, and gross, a large fraction of women are alluring and beautiful and angelic, and both of those are just objective features of reality that no one could possibly see otherwise.

Thus, whenever I read or watch fiction starring a female protagonist who dates men, it’s very easy for me to imagine that protagonist judging me, enumerating my faults, and rejecting me, and very hard for me to do what I’m supposed to do, which is to put myself into her shoes.  I could watch a thousand female protagonists kiss a thousand guys onscreen, or wake up in bed next to them, and the thousand-and-first time I’d still be equally mystified about what she saw in such a sweaty oaf and why she didn’t run from him screaming, and I’d snap out of vicariously identifying with her.  (Understanding gay men of course presents similar difficulties; understanding lesbians is comparatively easy.)

It’s possible to overcome this, but it takes an extraordinary female protagonist, brought to life by an extraordinary writer.  Off the top of my head, I can think of only a few.  There were Renee Feuer and Eva Mueller, the cerebral protagonists of Rebecca Newberger Goldstein’s The Mind-Body Problem and The Late Summer Passion of a Woman of Mind.  Maybe Ellie Arroway from Carl Sagan’s Contact.  And then there’s Beth Harmon.  With characters like these, I can briefly enter a space where their crushes on men seem no weirder or more inexplicable to me than my own teenage crushes … just, you know, inverted.  Sex is in any case secondary to the character’s primary urge to discover timeless truths, an urge that I fully understand because I’ve shared it.

Granted, the timeless truths of chess, an arbitrary and invented game, are less profound than those of quantum gravity or the P vs. NP problem, but the psychology is much the same, and The Queen’s Gambit does a good job of showing that.  To understand the characters of this series is to understand why they could be happier to lose an interesting game than to win a boring one.  And I could appreciate that, even if I was by no means the strongest player at my elementary school’s chess club, and the handicap with which I can beat my 7-year-old daughter is steadily decreasing.

The Complete Idiot’s Guide to the Independence of the Continuum Hypothesis: Part 1 of ≤ℵ0

Saturday, October 31st, 2020

A global pandemic, apocalyptic fires, and the possible descent of the US into violent anarchy three days from now can do strange things to the soul.

Bertrand Russell—and if he’d done nothing else in his long life, I’d love him forever for it—once wrote that “in adolescence, I hated life and was continually on the verge of suicide, from which, however, I was restrained by the desire to know more mathematics.” This summer, unable to bear the bleakness of 2020, I obsessively read up on the celebrated proof of the unsolvability of the Continuum Hypothesis (CH) from the standard foundation of mathematics, the Zermelo-Fraenkel axioms of set theory. (In this post, I’ll typically refer to “ZFC,” which means Zermelo-Fraenkel plus the famous Axiom of Choice.)

For those tuning in from home, the Continuum Hypothesis was formulated by Georg Cantor, shortly after his epochal discovery that there are different orders of infinity: so for example, the infinity of real numbers (denoted C for continuum, or \( 2^{\aleph_0} \)) is strictly greater than the infinity of integers (denoted ℵ0, or “Aleph-zero”). CH is simply the statement that there’s no infinity intermediate between ℵ0 and C: that anything greater than the first is at least the second. Cantor tried in vain for decades to prove or disprove CH; the quest is believed to have contributed to his mental breakdown. When David Hilbert presented his famous list of 23 unsolved math problems in 1900, CH was at the very top.

Halfway between Hilbert’s speech and today, the question of CH was finally “answered,” with the solution earning the only Fields Medal that’s ever been awarded for work in set theory and logic. But unlike with any previous yes-or-no question in the history of mathematics, the answer was that there provably is no answer from the accepted axioms of set theory! You can either have intermediate infinities or not; neither possibility can create a contradiction. And if you do have intermediate infinities, it’s up to you how many: 1, 5, 17, ∞, etc.

The easier half, the consistency of CH with set theory, was proved by incompleteness dude Kurt Gödel in 1940; the harder half, the consistency of not(CH), by Paul Cohen in 1963. Cohen’s work introduced the method of forcing, which was so fruitful in proving set-theoretic questions unsolvable that it quickly took over the whole subject of set theory. Learning Gödel and Cohen’s proofs had been a dream of mine since teenagerhood, but one I constantly put off.

This time around I started with Cohen’s retrospective essay, as well as Timothy Chow’s Forcing for Dummies and A Beginner’s Guide to Forcing. I worked through Cohen’s own Set Theory and the Continuum Hypothesis, and Ken Kunen’s Set Theory: An Introduction to Independence Proofs, and Dana Scott’s 1967 paper reformulating Cohen’s proof. I emailed questions to Timothy Chow, who was ridiculously generous with his time. When Tim and I couldn’t answer something, we tried Bob Solovay (one of the world’s great set theorists, who later worked in computational complexity and quantum computing), or Andreas Blass or Asaf Karagila. At some point mathematician and friend-of-the-blog Greg Kuperberg joined my quest for understanding. I thank all of them, but needless to say take sole responsibility for all the errors that surely remain in these posts.

On the one hand, the proof of the independence of CH would seem to stand with general relativity, the wheel, and the chocolate bar as a triumph of the human intellect. It represents a culmination of Cantor’s quest to know the basic rules of infinity—all the more amazing if the answer turns out to be that, in some sense, we can’t know them.

On the other hand, perhaps no other scientific discovery of equally broad interest remains so sparsely popularized, not even (say) quantum field theory or the proof of Fermat’s Last Theorem. I found barely any attempts to explain how forcing works to non-set-theorists, let alone to non-mathematicians. One notable exception was Timothy Chow’s Beginner’s Guide to Forcing, mentioned earlier—but Chow himself, near the beginning of his essay, calls forcing an “open exposition problem,” and admits that he hasn’t solved it. My modest goal, in this post and the following ones, is to make a further advance on the exposition problem.

OK, but why a doofus computer scientist like me? Why not, y’know, an actual expert? I won’t put forward my ignorance as a qualification, although I have often found that the better I learn a topic, the more completely I forget what initially confused me, and so the less able I become to explain things to beginners.

Still, there is one thing I know well that turns out to be intimately related to Cohen’s forcing method, and that made me feel like I had a small “in” for this subject. This is the construction of oracles in computational complexity theory. In CS, we like to construct hypothetical universes where P=NP or P≠NP, or P≠BQP, or the polynomial hierarchy is infinite, etc. To do so, we, by fiat, insert a new function—an oracle—into the universe of computational problems, carefully chosen to make the desired statement hold. Often the oracle needs to satisfy an infinite list of conditions, so we handle them one by one, taking care that when we satisfy a new condition we don’t invalidate the previous conditions.
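The stage-by-stage strategy can be sketched in a few lines of code. What follows is only a toy illustration of the “finite extension” pattern, with made-up conditions (the real constructions, like Baker-Gill-Solovay, diagonalize against Turing machines); the point is just that each stage extends a finite partial object to satisfy one more condition, without ever revoking an earlier decision.

```python
# Toy "finite extension" construction: build a partial oracle stage by
# stage, satisfying one condition per stage.  The conditions here are
# hypothetical placeholders, chosen only to show the pattern.

def build_oracle(conditions):
    """Build a partial oracle (dict: string -> bool), handling each
    condition in turn; earlier decisions are never revoked."""
    oracle = {}
    for cond in conditions:
        oracle = cond(oracle)   # extend just enough to satisfy cond
    return oracle

def require_member(n):
    """Condition: 'some string of length n is in the oracle.'"""
    def cond(oracle):
        new = dict(oracle)
        for k in range(2 ** n):
            s = format(k, f'0{n}b')
            if s not in new:    # first undecided length-n string
                new[s] = True
                return new
        raise ValueError("no undecided string of this length")
    return cond

conds = [require_member(n) for n in range(1, 6)]
oracle = build_oracle(conds)
print(sorted(s for s, v in oracle.items() if v))
# -> ['0', '00', '000', '0000', '00000']
```

In the real constructions the list of conditions is infinite, and the oracle is the union of all the finite stages; the loop above just runs the first five stages.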

All this, I kept reading, is profoundly analogous to what the set theorists do when they create a mathematical universe where the Axiom of Choice is true but CH is false, or vice versa, or any of a thousand more exotic possibilities. They insert new sets into their models of set theory, sets that are carefully constructed to “force” infinite lists of conditions to hold. In fact, some of the exact same people—such as Solovay—who helped pioneer forcing in the 1960s, later went on to pioneer oracles in computational complexity. We’ll say more about this connection in a future post.

How Could It Be?

How do you study a well-defined math problem, and return the answer that, as far as the accepted axioms of math can say, there is no answer? I mean: even supposing it’s true that there’s no answer, how do you prove such a thing?

Arguably, not even Gödel’s Incompleteness Theorem achieved such a feat. Recall, the Incompleteness Theorem says loosely that, for every formal system F that could possibly serve as a useful foundation for mathematics, there exist statements even of elementary arithmetic that are true but unprovable in F—and Con(F), a statement that encodes F’s own consistency, is an example of one. But the very statement that Con(F) is unprovable is equivalent to Con(F)’s being true (since an inconsistent system could prove anything, including Con(F)). In other words, if the Incompleteness Theorem as applied to F holds any interest, then that’s only because F is, in fact, consistent; it’s just that resources beyond F are needed to prove this.

Yes, there’s a “self-hating theory,” F+Not(Con(F)), which believes in its own inconsistency. And yes, by Gödel, this self-hating theory is consistent if F itself is. This means that it has a model—involving “nonstandard integers,” formal artifacts that effectively promise a proof of F’s inconsistency without ever actually delivering it. We’ll have much, much more to say about models later on, but for now, they’re just collections of objects, along with relationships between the objects, that satisfy all the axioms of a theory (thus, a model of the axioms of group theory is simply … any group!).

In any case, though, the self-hating theory F+Not(Con(F)) can’t be arithmetically sound: I mean, just look at it! It’s either unsound because F is consistent, or else it’s unsound because F is inconsistent. In general, this is one of the most fundamental points in logic: consistency does not imply soundness. If I believe that the moon is made of cheese, that might be consistent with all my other beliefs about the moon (for example, that Neil Armstrong ate delicious chunks of it), but that doesn’t mean my belief is true. Like the classic conspiracy theorist, who thinks that any apparent evidence against their hypothesis was planted by George Soros or the CIA, I might simply believe a self-consistent collection of absurdities. Consistency is purely a syntactic condition—it just means that I can never prove both a statement and its opposite—but soundness goes further, asserting that whatever I can prove is actually the case, a relationship between what’s inside my head and what’s outside it.

So again, assuming we had any business using F in the first place, the Incompleteness Theorem gives us two consistent ways to extend F (by adding Con(F) or by adding Not(Con(F))), but only one sound way (by adding Con(F)). But the independence of CH from the ZFC axioms of set theory is of a fundamentally different kind. It will give us models of ZFC+CH, and models of ZFC+Not(CH), that are both at least somewhat plausible as “sketches of mathematical reality”—and that both even have defenders. The question of which is right, or whether it’s possible to decide at all, will be punted to the future: to the discovery (or not) of some intuitively compelling foundation for mathematics that, as Gödel hoped, answers the question by going beyond ZFC.

Four Levels to Unpack

While experts might consider this too obvious to spell out, Gödel’s and Cohen’s analyses of CH aren’t so much about infinity, as they are about our ability to reason about infinity using finite sequences of symbols. The game is about building self-contained mathematical universes to order—universes where all the accepted axioms about infinite sets hold true, and yet that, in some cases, seem to mock what those axioms were supposed to mean, by containing vastly fewer objects than the mathematical universe was “meant” to have.

In understanding these proofs, the central hurdle, I think, is that there are at least four different “levels of description” that need to be kept in mind simultaneously.

At the first level, Gödel’s and Cohen’s proofs, like all mathematical proofs, are finite sequences of symbols. Not only that, they’re proofs that can be formalized in elementary arithmetic (!). In other words, even though they’re about the axioms of set theory, they don’t themselves require those axioms. Again, this is possible because, at the end of the day, Gödel’s and Cohen’s proofs won’t be talking about infinite sets, but “only” about finite sequences of symbols that make statements about infinite sets.

At the second level, the proofs are making an “unbounded” but perfectly clear claim. They’re claiming that, if someone showed you a proof of either CH or Not(CH), from the ZFC axioms of set theory, then no matter how long the proof or what its details, you could convert it into a proof that ZFC itself was inconsistent. In symbols, they’re proving the “relative consistency statements”

Con(ZFC) ⇒ Con(ZFC+CH),
Con(ZFC) ⇒ Con(ZFC+Not(CH)),

and they’re proving these as theorems of elementary arithmetic. (Note that there’s no hope of proving Con(ZFC+CH) or Con(ZFC+Not(CH)) outright within ZFC, since by Gödel, ZFC can’t even prove its own consistency.)

This translation is completely explicit; the independence proofs even yield algorithms to convert proofs of inconsistencies in ZFC+CH or ZFC+Not(CH), supposing that they existed, into proofs of inconsistencies in ZFC itself.

Having said that, as Cohen himself often pointed out, thinking about the independence proofs in terms of algorithms to manipulate sequences of symbols is hopeless: to have any chance of understanding these proofs, let alone coming up with them, at some point you need to think about what the symbols refer to.

This brings us to the third level: the symbols refer to models of set theory, which could also be called “mathematical universes.” Crucially, we always can and often will take these models to be only countably infinite: that is, to contain an infinity of sets, but “merely” ℵ0 of them, the infinity of integers or of finite strings, and no more.

The fourth level of description is from within the models themselves: each model imagines itself to have an uncountable infinity of sets. As far as the model’s concerned, it comprises the entire mathematical universe, even though “looking in from outside,” we can see that that’s not true. In particular, each model of ZFC thinks it has uncountably many sets, many themselves of uncountable cardinality, even if “from the outside” the model is countable.

Say what? The models are mistaken about something as basic as their own size, about how many sets they have? Yes. The models will be like The Matrix (the movie, not the mathematical object), or The Truman Show. They’re self-contained little universes whose inhabitants can never discover that they’re living a lie—that they’re missing sets that we, from the outside, know to exist. The poor denizens of the Matrix will never even be able to learn that their universe—what they mistakenly think of as the universe—is secretly countable! And no Morpheus will ever arrive to enlighten them, although—and this is crucial to Cohen’s proof in particular—the inhabitants will be able to reason more-or-less intelligibly about what would happen if a Morpheus did arrive.

The Löwenheim-Skolem Theorem, from the early 1920s, says that any countable list of first-order axioms that has any model at all (i.e., that’s consistent), must have a model with at most countably many elements. And ZFC is a countable list of first-order axioms, so Löwenheim-Skolem applies to it—even though ZFC implies the existence of an uncountable infinity of sets! Before taking the plunge, we’ll need to not merely grudgingly accept but love and internalize this “paradox,” because pretty much the entire proof of the independence of CH is built on top of it.

Incidentally, once we realize that it’s possible to build self-consistent yet “fake” mathematical universes, we can ask the question that, incredibly, the Matrix movies never ask. Namely, how do we know that our own, larger universe isn’t similarly a lie? The answer is that we don’t! As an example—I hope you’re sitting down for this—even though Cantor proved that there are uncountably many real numbers, that only means there are uncountably many reals for us. We can’t rule out the possibility that God, looking down on our universe, would see countably many reals.

Cantor’s Proof Revisited

To back up: the whole story of CH starts, of course, with Cantor’s epochal discovery of the different orders of infinity, that for example, there are more subsets of positive integers (or equivalently real numbers, or equivalently infinite binary sequences) than there are positive integers. The devout Cantor thought his discovery illuminated the nature of God; it’s never been entirely obvious to me that he was wrong.

Recall how Cantor’s proof works: we suppose by contradiction that we have an enumeration of all infinite binary sequences: for example,

s(0) = 00000000…
s(1) = 01010101…
s(2) = 11001010…
s(3) = 10000000…

We then produce a new infinite binary sequence that’s not on the list, by going down the diagonal and flipping each bit, which in the example above would produce 1011…
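The diagonal step is mechanical enough to spell out in code. In this sketch, each purported sequence is a function from bit-index to bit; the periodic continuations of s(2) and s(3) are my own assumption, made just so the example is concrete.

```python
# Cantor's diagonal argument, run on the enumeration above.
# Each s(i) is a function from bit-index j to the j-th bit.

seqs = [
    lambda j: 0,                          # s(0) = 00000000...
    lambda j: j % 2,                      # s(1) = 01010101...
    lambda j: [1, 1, 0, 0, 1, 0, 1, 0][j % 8],  # s(2) = 11001010... (assumed periodic)
    lambda j: 1 if j == 0 else 0,         # s(3) = 10000000...
]

def diagonal_bit(seqs, j):
    """Bit j of the new sequence: flip bit j of the j-th listed sequence."""
    return 1 - seqs[j](j)

new_seq = ''.join(str(diagonal_bit(seqs, j)) for j in range(4))
print(new_seq)  # -> 1011
```

By construction, the new sequence differs from s(j) in position j, for every j, so it can’t appear anywhere in the list.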

But look more carefully. What Cantor really shows is only that, within our mathematical universe, there can’t be an enumeration of all the reals of our universe. For if there were, we could use it to define a new real that was in the universe but not in the enumeration. The proof doesn’t rule out the possibility that God could enumerate the reals of our universe! It only shows that, if so, there would need to be additional, heavenly reals that were missing from even God’s enumeration (for example, the one produced by diagonalizing against that enumeration).

Which reals could possibly be “missing” from our universe? Every real you can name—42, π, √e, even uncomputable reals like Chaitin’s Ω—has to be there, right? Yes, and there’s the rub: every real you can name. Each name is a finite string of symbols, so whatever your naming system, you can only ever name countably many reals, leaving 100% of the reals nameless.

Or did you think of only the rationals or algebraic numbers as forming a countable dust of discrete points, with numbers like π and e filling in the solid “continuum” between them? If so, then I hope you’re sitting down for this: every real number you’ve ever heard of belongs to the countable dust! The entire concept of “the continuum” is only needed for reals that don’t have names and never will.
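The claim that any naming system names only countably many reals comes down to one observation: finite strings over a finite alphabet can be listed in a single sequence. Here’s a minimal sketch (the two-letter alphabet is an arbitrary choice; any finite alphabet gives the same conclusion):

```python
from itertools import count, product

# Every "name" is a finite string over some finite alphabet, so all
# possible names can be enumerated: length 1 first, then length 2, etc.
ALPHABET = 'ab'   # hypothetical alphabet; the argument is the same for any

def all_names():
    """Yield every finite string over ALPHABET, in length-then-lex order."""
    for n in count(1):
        for tup in product(ALPHABET, repeat=n):
            yield ''.join(tup)

gen = all_names()
first_six = [next(gen) for _ in range(6)]
print(first_six)  # -> ['a', 'b', 'aa', 'ab', 'ba', 'bb']
```

Since this single list eventually reaches every possible name, the nameable reals are countable—and by Cantor, that leaves uncountably many reals with no name at all.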

From ℵ0 Feet

Gödel and Cohen’s achievement was to show that, without creating any contradictions in set theory, we can adjust the size of this elusive “continuum,” put more reals into it or fewer. How does one even begin to prove such a statement?

From a distance of ℵ0 feet, Gödel proves the consistency of CH by building minimalist mathematical universes: one where “the only sets that exist, are the ones required to exist by the ZFC axioms.” (These universes can, however, differ from each other in how “tall” they are: that is, in how many ordinals they have, and hence how many sets overall. More about that in a future post!) Gödel proves that, if the axioms of set theory are consistent—that is, if they describe any universes at all—then they also describe these minimalist universes. He then proves that, in any of these minimalist universes, from the standpoint of someone within that universe, there are exactly ℵ1 real numbers, and hence CH holds.

At an equally stratospheric level, Cohen proves the consistency of not(CH) by building … well, non-minimalist mathematical universes! A simple way is to start with Gödel’s minimalist universe—or rather, an even more minimalist universe than his, one that’s been cut down to have only countably many sets—and then to stick in a bunch of new real numbers that weren’t in that universe before. We choose the new real numbers to ensure two things: first, that we still have a model of ZFC, and second, that CH is false in it. The details of how to do that will, of course, concern us later.

My Biggest Confusion

In subsequent posts, I’ll say more about the character of the ZFC axioms and how one builds models of them to order. Just as a teaser, though, to conclude this post I’d like to clear up a fundamental misconception I had about this subject, from roughly the age of 16 until a couple months ago.

I thought: the way Gödel proves the consistency of CH, must be by examining all the sets in his minimalist universe, and checking that each one has either at most ℵ0 elements or else at least C of them. Likewise, the way Cohen proves the consistency of not(CH), must be by “forcing in” some extra sets, which have more than ℵ0 elements but fewer than C elements.

Except, it turns out that’s not how it works. Firstly, to prove CH in his universe, Gödel is not going to check each set to make sure it doesn’t have intermediate cardinality; instead, he’s simply going to count all the reals to make sure that there are only ℵ1 of them—where ℵ1 is the next infinite cardinality after ℵ0. This will imply that C=ℵ1, which is another way to state CH.

More importantly, to build a universe where CH is false, Cohen is going to start with a universe where C=ℵ1, like Gödel’s universe, and then add in more reals: say, ℵ2 of them. The ℵ1 “original” reals will then supply our set of intermediate cardinality between the ℵ0 integers and the ℵ2 “new” reals.

Looking back, the core of my confusion was this. I had thought: I can visualize what ℵ0 means; that’s just the infinity of integers. I can also visualize what \( C=2^{\aleph_0} \) means; that’s the infinity of points on a line. Those, therefore, are the two bedrocks of clarity in this discussion. By contrast, I can’t visualize a set of intermediate cardinality between ℵ0 and C. The intermediate infinity, being weird and ghostlike, is the one that shouldn’t exist unless we deliberately “force” it to.

Turns out I had things backwards. For starters, I can’t visualize the uncountable infinity of real numbers. I might think I’m visualizing the real line—it’s solid, it’s black, it’s got little points everywhere—but how can I be sure that I’m not merely visualizing the ℵ0 rationals, or (say) the computable or definable reals, which include all the ones that arise in ordinary math?

The continuum C is not at all the bedrock of clarity that I’d thought it was. Unlike its junior partner ℵ0, the continuum is adjustable, changeable—and we will change it when we build different models of ZFC. What’s (relatively) more “fixed” in this game is something that I, like many non-experts, had always given short shrift to: Cantor’s sequence of Alephs ℵ0, ℵ1, ℵ2, etc.

Cantor, who was a very great man, didn’t merely discover that C>ℵ0; he also discovered that the infinite cardinalities form a well-ordered sequence, with no infinite descending chains. Thus, after ℵ0, there’s a next greater infinity that we call ℵ1; after ℵ1 comes ℵ2; after the entire infinite sequence ℵ0,ℵ1,ℵ2,ℵ3,… comes ℵω; after ℵω comes ℵω+1; and so on. These infinities will always be there in any universe of set theory, and always in the same order.

Our job, as engineers of the mathematical universe, will include pegging the continuum C to one of the Alephs. If we stick in a bare minimum of reals, we’ll get C=ℵ1; if we stick in more, we can get C=ℵ2 or C=ℵ3, etc. We can’t make C equal to ℵ0—that’s Cantor’s Theorem—and we also can’t make C equal to ℵω, by an important theorem of König that we’ll discuss later (yes, this is an umlaut-heavy field). But it will turn out that we can make C equal to just about any other Aleph: in particular, to any infinity other than ℵ0 that’s not the supremum of a countable list of smaller infinities.
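For reference, the two constraints just mentioned have standard symbolic forms (with cf denoting cofinality; these are the textbook statements, recorded here for later use):

\[ 2^{\aleph_0} > \aleph_0 \quad \text{(Cantor)}, \qquad \mathrm{cf}\!\left(2^{\aleph_0}\right) > \aleph_0 \quad \text{(König)}. \]

The second rules out \( C = \aleph_\omega \), since \( \aleph_\omega \) is the supremum of the countable list \( \aleph_0, \aleph_1, \aleph_2, \ldots \) and hence has cofinality \( \aleph_0 \).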

In some sense, this is the whole journey that we need to undertake in this subject: from seeing the cardinality of the continuum as a metaphysical mystery, which we might contemplate by staring really hard at a black line on white paper, to seeing the cardinality of the continuum as an engineering problem.

Stay tuned! Next installment coming after the civilizational Singularity in three days, assuming there’s still power and Internet and food and so forth.

Oh, and happy Halloween. Ghostly sets of intermediate cardinality … spoooooky!

On the destruction of America’s best high school

Sunday, October 4th, 2020

[C]hildren with special abilities and skills need to be nourished and encouraged. They are a national treasure. Challenging programs for the “gifted” are sometimes decried as “elitism.” Why aren’t intensive practice sessions for varsity football, baseball, and basketball players and interschool competition deemed elitism? After all, only the most gifted athletes participate. There is a self-defeating double standard at work here, nationwide.
—Carl Sagan, The Demon-Haunted World (1996)

I’d like you to feel about the impending destruction of Virginia’s Thomas Jefferson High School for Science and Technology, the same way you might’ve felt when the Taliban threatened to blow up the Bamyan Buddhas, and then days later actually did blow them up. Or the way you felt when human negligence caused wildfires that incinerated half the koalas in Australia, or turned the San Francisco skyline into an orange hellscape. For that matter, the same way most of us felt the day Trump was elected. I’d like you to feel in the bottom of your stomach the avoidability, and yet the finality, of the loss.

For thousands of kids in the DC area, especially first- or second-generation immigrants, TJHS represented a lifeline. Score high enough on an entrance exam—something hard but totally within your control—and you could attend a school where, instead of the other kids either tormenting or ignoring you, they might teach you Lisp or the surreal number system. Where you could learn humility instead of humiliation.

When I visited TJHS back in 2012 to give a quantum computing talk, I toured the campus, chatted with students, fielded their questions, and thought: so this is the teenagerhood—the ironically normal teenagerhood—that I was denied by living someplace else. I found myself wishing that a hundred more TJHS’s, large and small, would sprout up across the country. I felt like if I could further that goal then, though the universe return to rubble, my life would’ve had a purpose.

Instead, of course, our sorry country is destroying the few such schools that exist. Stuyvesant and Bronx Science in New York, and the Liberal Arts and Science Academy here in Austin, are also under mortal threat right now. The numerous parents who moved, who arranged their lives, specifically so that these schools might later be available for “high-risk” kids were suckered.

Assuming you haven’t just emerged from 30 years in a Tibetan cave, you presumably know why this is happening. As the Washington Post‘s Jay Mathews explains, the Fairfax County School Board is “embarrassed” to have a school that, despite all its outreach attempts, remains only 5% Black and Latino—even though, crucially, the school also happens to be only 19% White (it’s now ~75% Asian).

You might ask: so then why doesn’t TJHS just institute affirmative action, like almost every university does? It seems there’s an extremely interesting answer: they did in the 1990s, and Black and Hispanic enrollment surged. But then the verdicts of court cases, brought by right-wing groups, made the school district fear that they’d be open to lawsuits if they continued with affirmative action, so they dropped it. Now the boomerang has returned, and the School Board has decided on a more drastic remedy: namely, to eliminate the TJHS entrance exam entirely, and replace it by a lottery for anyone whose GPA exceeds 3.5.

The trouble is, TJHS without an entrance exam is no longer TJHS. More likely than not, such a place would simply converge to become another of the thousands of schools across the US where success is based on sports, networking, and popularity. And if by some miracle it avoided that fate, still it would no longer be available to most of the kids who'd most need it.


So yes, the district is embarrassed—note that the Washington Post writer explains it as if that’s the most obvious, natural reaction in the world—to host a school that’s regularly ranked #1 in the US, with the highest average SATs and a distinguished list of alumni. To avoid this embarrassment, the solution is (in effect) to burn the school to the ground.

In a world-historic irony, the main effect of this “solution” will be to drastically limit the number of Asian students, while drastically increasing (!!!) the number of White students. The proportion of Black and Hispanic students is projected to increase a bit but remain small. Let me say that one more time: in practice, TJHS’s move from a standardized test to a lottery will be overwhelmingly pro-White, anti-Asian, and anti-immigrant; only as a much smaller effect will it be pro-underrepresented-minority.

In spite of covid and everything else going on, hundreds of students and parents have been protesting in front of TJHS to try to prevent the school’s tragic and pointless destruction. But it sounds like TJHS’s fate might be sealed. The school board tolerated excellence for 35 more years than it wanted to; now its patience is at an end.

Some will say: sure, the end of TJHS is unfortunate, Scott, but why do you let this stuff weigh on you so heavily? This is merely another instance of friendly fire, of good people fighting the just war against racism, and in one case hitting a target that, yeah, OK, probably should’ve been spared. On reflection, though, I can accept that only insofar as I accept that it was “friendly fire” when Bolsheviks targeted the kulaks, or (much more comically, less importantly, and less successfully) when Arthur Chu, Amanda Marcotte, and a thousand other woke-ists targeted me. With friendly fire like that, who needs enemy fire?

If you care about the gifted Black and Hispanic kids of Fairfax County, then like me, you should demand a change in the law to allow the reinstatement of affirmative action for them. You should acknowledge that the issue lies there and not with TJHS itself.

I don’t see how you reach the point of understanding all the facts and still wanting to dismantle TJHS, over the desperate pleas of the students and parents, without a decent helping of resentment toward the kind of student who flourishes there—without a wish to see those uppity, “fresh off the boat” Chinese and Indian grinds get dragged down to where they belong. And if you tell me that such magnet programs need to end even though you yourself once benefitted from them—well, isn’t that more contemptible still? Aren’t you knowingly burning a bridge you crossed so that a younger generation can’t follow you, basically reassuring the popular crowd that if they’ll only accept you, then there won’t be a hundred more greasy nerds in your tow? And if, on some level, you already know these things about yourself, then the only purpose of this post has been to remind you of them.

As for the news that dominates the wires and inevitably preempts what I’ve written: I wish for his successful recovery, followed by his losing the election and spending the rest of his life in New York State prison. (And I look forward to seeing how woke Twitter summarizes the preceding statement—e.g., “Aaronson, his mask finally off, conveys well-wishes to Donald Trump”…)

See further discussion of this post on Hacker News.

My Utility+ podcast with Matthew Putman

Thursday, September 3rd, 2020

Another Update (Sep. 15): Sorry for the long delay; new post coming soon! To tide you over—or just to distract you from the darkness figuratively and literally engulfing our civilization—here’s a Fortune article about today’s announcement by IBM of its plans for the next few years in superconducting quantum computing, with some remarks from yours truly.

Another Update (Sep. 8): A reader wrote to let me know about a fundraiser for Denys Smirnov, a 2015 IMO gold medalist from Ukraine who needs an expensive bone marrow transplant to survive Hodgkin’s lymphoma. I just donated and I hope you’ll consider it too!

Update (Sep. 5): Here’s another quantum computing podcast I did, “Dunc Tank” with Duncan Gammie. Enjoy!

Thanks so much to Shtetl-Optimized readers, so far we’ve raised $1,371 for the Biden-Harris campaign and $225 for the Lincoln Project, which I intend to match for $3,192 total. If you’d like to donate by tonight (Thursday night), there’s still $404 to go!

Meanwhile, a mere three days after declaring my “new motto,” I’ve come up with a new new motto for this blog, hopefully a more cheerful one:

When civilization seems on the brink of collapse, sometimes there’s nothing left to talk about but maximal separations between randomized and quantum query complexity.

On that note, please enjoy my new one-hour podcast on Spotify (if that link doesn’t work, try this one) with Matthew Putman of Utility+. Alas, my umming and ahhing were more frequent than I now aim for, but that’s partly compensated for by Matthew’s excellent decision to speed up the audio. This was an unusually wide-ranging interview, covering everything from SlateStarCodex to quantum gravity to interdisciplinary conferences to the challenges of teaching quantum computing to 7-year-olds. I hope you like it!

The Busy Beaver Frontier

Thursday, July 23rd, 2020

Update (July 27): I now have a substantially revised and expanded version (now revised and expanded even a second time), which incorporates (among other things) the extensive feedback that I got from this blog post. There are new philosophical remarks, some lovely new open problems, and an even-faster-growing (!) integer sequence. Check it out!

Another Update (August 13): Nick Drozd now has a really nice blog post about his investigations of my Beeping Busy Beaver (BBB) function.

A life that was all covid, cancellations, and Trump, all desperate rearguard defense of the beleaguered ideals of the Enlightenment, would hardly be worth living. So it was an exquisite delight, these past two weeks, to forget current events and write an 18-page survey article about the Busy Beaver function: the staggeringly quickly-growing function that probably encodes a huge portion of all interesting mathematical truth in its first hundred values, if only we could know those values or exploit them if we did.

Without further ado, here’s the title, abstract, and link:

The Busy Beaver Frontier
by Scott Aaronson

The Busy Beaver function, with its incomprehensibly rapid growth, has captivated generations of computer scientists, mathematicians, and hobbyists. In this survey, I offer a personal view of the BB function 58 years after its introduction, emphasizing lesser-known insights, recent progress, and especially favorite open problems. Examples of such problems include: when does the BB function first exceed the Ackermann function? Is the value of BB(20) independent of set theory? Can we prove that BB(n+1)>2^BB(n) for large enough n? Given BB(n), how many advice bits are needed to compute BB(n+1)? Do all Busy Beavers halt on all inputs, not just the 0 input? Is it decidable whether BB(n) is even or odd?

The article is slated to appear soon in SIGACT News. I’m grateful to Bill Gasarch for suggesting it—even with everything else going on, this was a commission I felt I couldn’t turn down!

Besides Bill, I’m grateful to the various Busy Beaver experts who answered my inquiries, to Marijn Heule and Andy Drucker for suggesting some of the open problems, to Marijn for creating a figure, and to Lily, my 7-year-old daughter, for raising the question about the first value of n at which the Busy Beaver function exceeds the Ackermann function. (Yes, Lily’s covid homeschooling has included multiple lessons on very large positive integers.)

There are still a few days until I have to deliver the final version. So if you spot anything wrong or in need of improvement, don’t hesitate to leave a comment or send an email. Thanks in advance!

Of course Busy Beaver has been an obsession that I’ve returned to many times in my life: for example, in that Who Can Name the Bigger Number? essay that I wrote way back when I was 18, in Quantum Computing Since Democritus, in my public lecture at Festivaletteratura, and in my 2016 paper with Adam Yedidia that showed that the values of all Busy Beaver numbers beyond the 7910th are independent of the axioms of set theory (Stefan O’Rear has since shown that independence starts at the 748th value or sooner). This survey, however, represents the first time I’ve tried to take stock of BusyBeaverology as a research topic—collecting in one place all the lesser-known theorems and empirical observations and open problems that I found the most striking, in the hope of inspiring not just contemplation or wonderment but actual progress.
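To make the function concrete at the smallest scale, here's a brute-force sketch that enumerates every 2-state, 2-symbol Turing machine and reports the shift number S(2). The encoding and the 100-step cutoff are my own illustrative choices, safe here only because S(2) is already known to be small; nothing in this snippet is from the survey itself.

```python
from itertools import product

# A transition is (write_symbol, move, next_state); next_state == -1 means
# "enter the halt state" (the halting transition still writes and moves,
# and counts as a step, per the usual Rado convention).
WRITES, MOVES, NEXTS = (0, 1), (-1, 1), (0, 1, -1)
TRANSITIONS = list(product(WRITES, MOVES, NEXTS))  # 12 possible transitions

def run(table, max_steps):
    """Simulate a 2-state machine on a blank tape; return its step count
    if it halts within max_steps, else None."""
    tape, head, state = {}, 0, 0
    for step in range(1, max_steps + 1):
        write, move, nxt = table[(state, tape.get(head, 0))]
        tape[head] = write
        head += move
        if nxt == -1:
            return step
        state = nxt
    return None

best = 0
# Each machine is a table of 4 transitions: (state, symbol) for 2 states x 2 symbols.
for t in product(TRANSITIONS, repeat=4):  # 12^4 = 20736 machines
    table = {(0, 0): t[0], (0, 1): t[1], (1, 0): t[2], (1, 1): t[3]}
    steps = run(table, 100)
    if steps is not None:
        best = max(best, steps)

print(best)  # the 2-state busy beaver shift number S(2) = 6
```

The same loop with `repeat=6` would cover the 3-state machines, but the cutoff trick stops being trustworthy exactly where the Busy Beaver problem starts getting interesting.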

Within the last few months, the world of deep mathematics that you can actually explain to a child lost two of its greatest giants: John Conway (who died of covid, and who I eulogized here) and Ron Graham. One thing I found poignant, and that I didn’t know before I started writing, is that Conway and Graham both play significant roles in the story of the Busy Beaver function. Conway, because most of the best known candidates for Busy Beaver Turing machines turn out, when you analyze them, to be testing variants of the notorious Collatz Conjecture—and Conway is the one who proved, in 1972, that the set of “Collatz-like questions” is Turing-undecidable. And Graham because of Graham’s number from Ramsey theory—a candidate for the biggest number that’s ever played a role in mathematical research—and because of the discovery, four years ago, that the 18th Busy Beaver number exceeds Graham’s number.

(“Just how big is Graham’s number? So big that the 17th Busy Beaver number is not yet known to exceed it!”)

Anyway, I tried to make the survey pretty accessible, while still providing enough technical content to sink one’s two overgrown front teeth into (don’t worry, there are no such puns in the piece itself). I hope you like reading it at least 1/BB(10) as much as I liked writing it.

Update (July 24): Longtime commenter Joshua Zelinsky gently reminded me that one of the main questions discussed in the survey—namely, whether we can prove BB(n+1)>2^BB(n) for all large enough n—was first brought to my attention by him, Joshua, in a 2013 Ask-Me-Anything session on this blog! I apologize to Joshua for the major oversight, which has now been corrected. On the positive side, we just got a powerful demonstration both of the intellectual benefits of blogging, and of the benefits of sharing paper drafts on one’s blog before sending them to the editor!

Pseudonymity as a trivial concession to genius

Tuesday, June 23rd, 2020

Update (6/24): For further thoughts and context about this unfolding saga, see this excellent piece by Tom Chivers (author of The AI Does Not Hate You, so far the only book about the rationalist community, one that I reviewed here).

This morning, like many others, I woke up to the terrible news that Scott Alexander—the man I call “the greatest Scott A. of the Internet”—has deleted SlateStarCodex in its entirety. The reason, Scott explains, is that the New York Times was planning to run an article about SSC. Even though the article was going to be positive, NYT decided that by policy, it would need to include Scott’s real surname (Alexander is his middle name). Scott felt that revealing his name to the world would endanger himself and his psychiatry patients. Taking down his entire blog was the only recourse that he saw.

The NYT writer, Cade Metz, was someone who I’d previously known and trusted from his reporting on Google’s quantum supremacy experiment. So in recent weeks, I’d spent a couple hours on the phone with Cade, answering his questions about the rationality community, the history of my interactions with it, and why I thought SlateStarCodex spoke to so many readers. Alas, when word got around the rationality community that Cade was writing a story, a huge panic arose that he was planning on some sort of Gawker-style hit piece or takedown. Trying to tamp down the fire, I told Scott Alexander and others that I knew Cade, his intentions were good, he was only trying to understand the community, and everyone should help him by talking to him openly.

In a year of historic ironies, here’s another one: that it was the decent, reasonable, and well-meaning Cade Metz, rather than any of the SneerClubbers or Twitter-gangsters who despised Scott Alexander for sharing his honest thoughts on hot-button issues, who finally achieved the latter’s dark dream of exiling Scott from the public sphere.

The recent news had already been bad enough: Trump’s “temporary suspension” of J1 and H1B visas (which will deal a body blow to American universities this year, and to all the foreign scientists who planned to work at them), on top of the civil unrest, on top of the economic collapse, on top of the now-resurgent coronavirus. But with no more SlateStarCodex, now I really feel like my world is coming to an end.

I’ve considered SSC to be the best blog on the Internet since not long after discovering it five years ago.  Of course my judgment is colored by one of the most notorious posts in SSC’s history (“Untitled”) being a ferocious defense of me, when thousands were attacking me and it felt like my life was finished.  But that’s merely what brought me there in the first place. I stayed because of Scott’s insights about everything else, and because of the humor and humanity and craftsmanship of his prose.  Since then I had the privilege to become friends with Scott, not only virtually but in real life, and to meet dozens of others in the SSC community, in its Bay Area epicenter and elsewhere.

In my view, for SSC to be permanently deleted would be an intellectual loss on the scale of, let’s say, John Stuart Mill or Mark Twain burning their collected works.  That might sound like hyperbole, but not (I don’t think) to the tens of thousands who read Scott’s essays and fiction, particularly during their 2013-2016 heyday, and who went from casual enjoyment to growing admiration to the gradual recognition that they were experiencing, “live,” the works that future generations of teachers will assign their students when they cover the early twenty-first century.  The one thing that mitigates this tragedy is the hope that it will yet be reversed (and, of course, the fact that backups still exist in the bowels of the Internet).

When I discovered Scott Alexander in early 2015, the one issue that gave me pause was his strange insistence on maintaining pseudonymity, even as he was already then becoming more and more of a public figure. In effect, Scott was trying to erect a firewall between his Internet persona and his personal and professional identities, and was relying on the entire world’s goodwill not to breach that firewall.  I thought to myself, “this can’t possibly last!  Scott simply writes too well to evade mainstream notice forever—and once he’s on the world’s radar, he’ll need to make a choice, about who he is and whether he’s ready to own his gifts to posterity under his real name.”  In retrospect, what astonishes me is that Scott has been able to maintain the “double life” for as long as he has!

In his takedown notice, Scott writes that it’s considered vitally important in psychiatry for patients to know almost nothing about their doctors, beyond their names and their areas of expertise. That caused me to wonder: OK, but doesn’t the world already have enough psychiatrists who are ciphers to their patients?  Would it be so terrible to have one psychiatrist with a clear public persona—possibly even one who patients sought out because of his public persona, because his writings gave evidence that he’d have sympathy or insight about their conditions?  To become a psychiatrist, does one really need to take a lifelong vow of boringness—a vow never to do or say anything notable enough that one would be “outed” to one’s patients?  What would Freud, or Jung, or any of the other famous therapist-intellectuals of times past have thought about such a vow?

Scott also mentions that he’s gotten death threats, and harassing calls to his workplace, from people who hate him because of his blog (and who found his real name by sleuthing). I wish I knew a solution to that. For what it’s worth, my blogging has also earned me a death threat, and threats to sue me, and accusatory letters to the president of my university—although in my case, the worst threats came neither from Jew-hating neo-Nazis nor from nerd-bashing SJWs, but from crackpots enraged that I wouldn’t use my blog to credit their proof of P≠NP or their refutation of quantum mechanics.

When I started Shtetl-Optimized back in 2005, I remember thinking: this is it.  From now on, the only secrets I’ll have in life will be ephemeral and inconsequential ones.  From this day on, every student in my class, every prospective employer, every woman who I ask on a date (I wasn’t married yet), can know whatever they want to know about my political sympathies, my deepest fears and insecurities, any of it, with a five-second Google search.  Am I ready for that?  I decided that I was—partly just because I’ve never had the mental space to maintain multiple partitioned identities anyway, to remember what each one is or isn’t allowed to know and say!  I won’t pretend that this is the right decision for everyone, but it was my decision, and I stuck with it, and it wasn’t always easy but I’m still here and so evidently are you.

I’d be overjoyed if Scott Alexander were someday to reach a place in his life where he felt comfortable deciding similarly.  That way, not only could he enjoy the full acclaim that he’s earned for what he’s given to the world, but (much more importantly) his tens of thousands of fans would be able to continue benefitting from his insights.

For now, though, the brute fact is that Scott is obviously not comfortable making that choice.  That being so, it seems to me that, if the NYT was able to respect the pseudonymity of Banksy and many others whom it’s reported on in the past, when revealing their real names would serve no public interest, then it should also be able to respect Scott Alexander’s pseudonymity.  Especially now that Scott has sent the most credible signal imaginable of how much he values that pseudonymity, a signal that astonished even me.  The world does not exist only to serve its rare geniuses, but surely it can make such trivial concessions to them.

AirToAll: Another guest post by Steve Ebin

Monday, April 20th, 2020

Scott’s foreword: Today I’m honored to host another guest post by friend-of-the-blog Steve Ebin, who not only published a beautiful essay here a month ago (the one that I titled “First it came from Wuhan”), but also posted an extremely informative timeline of what he understood when about the severity of the covid crisis, from early January until March 31st. By the latter date, Steve had quit his job, having made a hefty sum shorting airline stocks, and was devoting his full time to a new nonprofit to manufacture low-cost ventilators, called AirToAll. A couple weeks ago, Steve was kind enough to include me in one of AirToAll’s regular Zoom meetings; I learned more about pistons than I had in my entire previous life (admittedly, still not much). Which brings me to what Steve wants to talk about today: what he and others are doing and how you can help.

Without further ado, Steve’s guest post:

In my last essay on coronavirus, I argued that the virus will radically change society. In this blog post, I’d like to propose a structure for how we can organize to fight it. I will also make a call to action for readers of this blog to help a non-profit I co-founded, AirToAll, build safe, low-cost ventilators and other medical devices and distribute them across the world at scale.

There are four ways we can help fight coronavirus:

  1. Reduce exposure to the virus. Examples: learn where the virus is through better testing; attempt to be where the virus isn’t through social distancing, quarantining, and other means.
  2. Reduce the chance of exposure leading to infection. Examples: Wash your hands; avoid touching your face; wear personal protective equipment.
  3. Reduce the chance of infection leading to serious illness. Examples: improve your aerobic and pulmonary health; make it more difficult for coronavirus’s spike protein to bind to ACE-2 receptors; scale antibody therapies; consume adequate vitamin D; get more sleep; develop a vaccine.
  4. Reduce the chance of serious illness leading to death. Examples: ramp up the production and distribution of certain drugs; develop better drugs; build more ventilators; help healthcare workers.

Obviously, not every example I listed is practical, advisable, or will work, and some options, like producing a vaccine, may be better solutions than others. But we must pursue all approaches.

I’ve been devoting my own time to pursuing the fourth approach, reducing the chance that the illness will lead to death. Specifically, along with Neil Thanedar, I co-founded AirToAll, a nonprofit that helps bring low-cost, reliable, and clinically tested ventilators to market. I know lots of groups are working on this problem, so I thought I’d talk about it briefly.

First, like many groups, we’re designing our own ventilators. Although designing ventilators and bringing them to market at scale poses unique challenges, particularly in an environment where supply chains are strained, this is much easier than it must have been to build iron lungs in the early part of the 20th century, when Zoom conferencing wasn’t yet invented. When it comes to the ventilators we’re producing, we’re focused on safety and clinical validation rather than speed to market. We are not the farthest along here, but we’ve made good progress.

Second, our nonprofit is helping other groups produce safe and reliable ventilators by doing direct consultations with them and also by producing whitepapers to help them think through the issues at hand (h/t to Harvey Hawes, Abdullah Saleh, and our friends at ICChange).

Third, we’re working to increase the manufacturing capacity for currently approved ventilators.

The current shortage of ventilators is a symptom of a greater underlying problem: namely, the world is not good at recognizing healthcare crises early and responding to them quickly. While our nonprofit helps bring more ventilators to market, we are also trying to solve this greater underlying problem. I look at our work in ventilator-land as a first step towards our ultimate goal of making medical devices cheaper and more available through an open-source nonprofit model.

I am writing this post as a call to action to you, dear Shtetl-Optimized reader, to get involved.

You don’t have to be an engineer, pulmonologist, virologist, or epidemiologist to help us, although those skillsets are of course helpful and if you are we’d love to have you. If you have experience in data science and modeling, supply chain and manufacturing, public health, finance, operations, community management, or anything else a rapidly scaling organization needs, you can help us too. 

We are a group of 700+ volunteers and growing rapidly. If you’d like to help, we’d love to have you. If you might be interested in volunteering, click here. Donors click here. Everyone else, please email me at and include a clear subject line so I can direct you to the right person.

The quantum computer that knows all

Tuesday, April 14th, 2020

This is my first post in more than a month that’s totally unrelated to the covid crisis. Or rather, it’s related only insofar as it’s about a Hulu miniseries, the sort of thing that many of us have more occasion to watch while holed up at home.

Three weeks ago, a journalist named Ben Lindbergh—who’d previously asked me to comment on the scientific accuracy of Avengers: Endgame—asked me the same question about the miniseries Devs, which I hadn’t previously heard of.

[Warning: Spoilers follow]

‘Devs,’ I learned, is a spooky sci-fi action thriller about a secretive Silicon Valley company that builds a quantum computer that can perfectly reconstruct the past, down to what Jesus looked like on the cross, and can also (at least up to a point) predict the future.

And I was supposed, not only to endure such a show, but to comment on the accuracy of its invocations of quantum computing? This didn’t sound promising.

But, y’know, I was at home quarantined. So I agreed to watch the first episode. Which quickly turned into the second, third, fourth, fifth, sixth, and seventh episodes (the eighth and final one isn’t out yet).

It turns out that ‘Devs’ isn’t too bad, except that it’s not particularly about quantum computers. The latter is simply a buzzword chosen by the writers for a plot concept that would’ve been entirely familiar to the ancient Greeks, who called it the Delphic Oracle. You know, the mysterious entity that prophesies your fate, so then you try to escape the prophecy, but your very evasive maneuvers make the prophecy come true? Picture that, except with qubits—and for some reason, in a gleaming golden laboratory that has components that float in midair.

If you’ve never visited a real quantum computing lab: they’re messier and a lot less golden.

At this point, I’ll just link you to Ben Lindbergh’s article about the show: Making Sense of the Science and Philosophy of ‘Devs.’ His long and excellent piece quotes me extensively enough that I see no need also to analyze the show in this blog post. (It also quotes several academic philosophers.)

Instead, I’ll just share a few tidbits that Ben left out, but that might be amusing to quantum computing fans.

  • The first episode opens with a conversation between two characters about how even “elliptical curve” cryptography is insecure against attack by quantum computers. So I immediately knew both that the writers had one or more consultants who actually knew something about QC, and also that those consultants were not as heavily involved as they could’ve been.
  • Similarly: in a later scene, some employees at the secretive company hold what appears to be a reading group about Shor’s algorithm. They talk about waves that interfere and cancel each other out, which is great, but beyond that their discussion sounded to me like nonsense. In particular, their idea seemed to be that the waves would reinforce at the prime factors p and q themselves, rather than at inverse multiples of the period of a periodic function that only indirectly encodes the factoring problem. (What do you say: should we let this one slide?)
  • “How many qubits does this thing have?” “A number that there would be no point in describing as a number.” ROFL
  • In the show, a crucial break comes when the employees abandon a prediction algorithm based on the de Broglie-Bohm pilot wave interpretation, and substitute one based on Everett’s many-worlds interpretation. Which I could actually almost believe, except that the many-worlds interpretation seems to contradict the entire premise of the rest of the show?
  • A new employee, after he sees the code of the superpowerful quantum computer for the first time, is so disoriented and overwhelmed that he runs and vomits into a toilet. I, too, have had that reaction to the claims of certain quantum computing companies, although in some sense for the opposite reason.
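On the Shor’s-algorithm point above: a few lines of classical simulation show where the interference peaks actually land. The numbers here (N=15, base a=7, 64 frequency values) are my own toy choices; the peaks come out at multiples of Q/r for the period r=4, and emphatically not at the factors 3 and 5.

```python
import cmath

N, a = 15, 7                      # toy instance of the factoring problem
Q = 64                            # number of "frequency" values (2^6)
f = [pow(a, x, N) for x in range(Q)]   # the periodic function x -> a^x mod N

# Restrict to the x's sharing one value of f: these are spaced by the
# hidden period r = 4, i.e., x in {0, 4, 8, ..., 60}.
support = [x for x in range(Q) if f[x] == f[0]]

def amplitude(k):
    """Squared magnitude of the Fourier sum at frequency k."""
    return abs(sum(cmath.exp(2j * cmath.pi * k * x / Q) for x in support)) ** 2

# The four largest peaks sit at multiples of Q/r = 16 -- not at 3 or 5.
peaks = sorted(range(Q), key=amplitude, reverse=True)[:4]
print(sorted(peaks))              # [0, 16, 32, 48]
```

From a measured peak k, the classical post-processing recovers r from the fraction k/Q (via continued fractions in general), and only then does gcd(a^(r/2) ± 1, N) yield the factors.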

Anyway, none of the above addresses the show’s central conceit: namely, that Laplace’s demon can be made real, the past and future rendered fully knowable (with at most occasional breaks and exceptions) by a machine that’s feasible to build. This conceit is fascinating to explore, but also false.

In the past, if you’d asked me to justify its falsity, I would’ve talked about chaos, and quantum mechanics, and the unknowability of the fine details of the universe’s state; I might’ve even pointed you to my Ghost in the Quantum Turing Machine essay. I also would’ve mentioned the severe conceptual difficulties in forcing Nature to find a fixed-point of a universe where you get to see your own future and act on that information (these difficulties are just a variant of the famous Grandfather Paradox).

But it occurs to me that, just as the coronavirus has now made plain the nature of exponential growth, even to the world’s least abstract-minded person, so too it’s made plain the universe’s unpredictability. Let’s put it this way: do you find it plausible that the quantum computer from ‘Devs,’ had you booted it up six months ago, would’ve known the exact state of every nucleotide in every virus in every bat in Wuhan? No? Then it wouldn’t have known our future.

And I see now that I’ve violated my promise that this post would have nothing to do with covid.

John Horton Conway (1937-2020)

Sunday, April 12th, 2020

Update (4/13): Check out the comments on this post for some wonderful firsthand Conway stories. Or for the finest tribute I’ve seen so far, see a MathOverflow thread entitled Conway’s lesser known results. Virtually everything there is a gem to be enjoyed by amateurs and experts alike. And if you actually click through to any of Conway’s papers … oh my god, what a rebuke to the way most of us write papers!

John Horton Conway, one of the great mathematicians and math communicators of the past half-century, has died at age 82.

Update: John’s widow, Diana Conway, left a nice note in the comments section of this post. I wish to express my condolences to her and to all of the Conway children and grandchildren.

Just a week ago, as part of her quarantine homeschooling, I introduced my seven-year-old daughter Lily to the famous Conway’s Game of Life. Compared to the other stuff we’ve been doing, like fractions and right triangles and the distributive property of multiplication, the Game of Life was a huge hit: Lily spent a full hour glued to the screen, watching the patterns evolve, trying to guess when they’d finally die out. So this first-grader knew who John Conway was, when I told her the sad news of his passing.

“Did he die from the coronavirus?” Lily immediately asked.

“I doubt it, but I’ll check,” I said.

Apparently it was the coronavirus. Yes, the self-replicating snippet of math that’s now terrorizing the whole human race, in part because those in power couldn’t or wouldn’t understand exponential growth. Conway is perhaps the nasty bugger’s most distinguished casualty so far.

I regrettably never knew Conway, although I did attend a few of his wildly popular and entertaining lectures. His The Book of Numbers (coauthored with Richard Guy, who himself recently passed away at age 103) made a huge impression on me as a teenager. I worked through every page, gasping at gems like e^(π√163) (“no, you can’t be serious…”), embarrassed to be learning so much from a “fun, popular” book but grateful that my ignorance of such basic matters was finally being remedied.
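If you want to gasp along at home: Python’s standard decimal module is enough to see why e^(π√163), the so-called Ramanujan constant, elicits a double-take. (The hardcoded digits of π below are just the standard first fifty decimal places.)

```python
from decimal import Decimal, getcontext

getcontext().prec = 50
PI = Decimal("3.14159265358979323846264338327950288419716939937510")

# e^(pi * sqrt(163)) -- eerily, almost exactly an integer
x = (PI * Decimal(163).sqrt()).exp()
print(x)  # 262537412640768743.99999999999925...
```

That run of twelve 9’s after the decimal point is no accident: it’s a consequence of 163 being a Heegner number, which is part of why the book made teenage me gasp.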

A little like Pascal with his triangle or Möbius with his strip, Conway was fated to become best-known to the public not for his deepest ideas but for his most accessible—although for Conway, a principal puzzle-supplier to Martin Gardner for decades, the boundary between the serious and the recreational may have been more blurred than for any other contemporary mathematician. Conway invented the surreal number system, discovered three of the 26 sporadic simple groups, was instrumental in the discovery of monstrous moonshine, and did many other things that bloggers more qualified than I will explain in the coming days.

Closest to my wheelhouse, Conway together with Simon Kochen waded into the foundations of quantum mechanics in 2006, with their “Free Will Theorem”—a result Conway liked to summarize provocatively as “if human experimenters have free will, then so do the elementary particles they measure.” I confess that I wasn’t a fan at the time—partly because Conway and Kochen’s theorem was really about “freshly-generated randomness,” rather than free will in any sense related to agency, but also partly because I’d already known the conceptual point at issue, but had considered it folklore (see, e.g., my 2002 review of Stephen Wolfram’s A New Kind of Science). Over time, though, the “Free Will Theorem” packaging grew on me. Much like with the No-Cloning Theorem and other simple enormities, sometimes it’s worth making a bit of folklore so memorable and compelling that it will never be folklore again.

At a lecture of Conway’s that I attended, someone challenged him that his proposed classification of knots worked only in special cases. “Oh, of course, this only classifies 0% of knots—but 0% is a start!” he immediately replied, to roars from the audience. That’s just one line that I remember, but nearly everything out of his mouth was of a similar flavor. Of course, a large part of it was in the delivery.

As a mathematical jokester and puzzler who could delight and educate anyone from a Fields Medalist to a first-grader, Conway had no equal. For no one else I can think of, even going back centuries and millennia, were entertainment and mathematical depth so closely marbled together. Here’s to a well-lived Life.

Feel free to share your own Conway memories in the comments.

Freeman Dyson and Boris Tsirelson

Saturday, February 29th, 2020

Today, as the world braces for the possibility of losing millions of lives to the new coronavirus—to the hunger for pangolin meat, of all things (combined with the evisceration of competent public health agencies like the CDC)—we also mourn the loss of two incredibly special lives, those of Freeman Dyson (age 96) and Boris Tsirelson (age 69).

Freeman Dyson was sufficiently legendary, both within and beyond the worlds of math and physics, that there’s very little I can add to what’s been said. It seemed like he was immortal, although I’d heard from mutual friends that his health was failing over the past year. When I spent a year as a postdoc at the Institute for Advanced Study, in 2004-5, I often sat across from Dyson in the common room, while he drank tea and read the news. That I never once struck up a conversation with him is a regret that I’ll now carry with me forever.

My only exchange with Dyson came when he gave a lecture at UC Berkeley, about how life might persist infinitely far into the future, even after the last stars had burnt out, by feeding off steadily diminishing negentropy flows in the nearly-thermal radiation. During the Q&A, I challenged Dyson that his proposal seemed to assume an analog model of computation. But, I asked, once we took on board the quantum-gravity insights of Jacob Bekenstein and others, suggesting that nature behaves like a (quantum) digital computer at the Planck scale, with at most ~10^43 operations per second and ~10^69 qubits per square meter and so forth, wasn’t this sort of proposal ruled out? “I’m not going to argue with you,” was Dyson’s response. Yes, he’d assumed an analog computational model; if computation was digital then that surely changed the picture.
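For the curious, those two figures are just back-of-the-envelope Planck-scale estimates: roughly one operation per Planck time, and the holographic bound of about one bit per 4 ln 2 Planck areas. A quick sanity check with standard constants:

```python
import math

# Standard physical constants (SI units)
hbar = 1.054571817e-34  # reduced Planck constant, J*s
G = 6.67430e-11         # gravitational constant, m^3 kg^-1 s^-2
c = 2.99792458e8        # speed of light, m/s

t_planck = math.sqrt(hbar * G / c**5)   # Planck time, ~5.4e-44 s
l_planck = math.sqrt(hbar * G / c**3)   # Planck length, ~1.6e-35 m

ops_per_sec = 1 / t_planck                          # ~1.9e43
bits_per_m2 = 1 / (4 * l_planck**2 * math.log(2))   # ~1.4e69 (holographic bound)

print(f"{ops_per_sec:.2e} ops/s, {bits_per_m2:.2e} (qu)bits/m^2")
```

Both come out at the orders of magnitude quoted above: ~10^43 operations per second and ~10^69 (qu)bits per square meter.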

Sometimes—and not just with his climate skepticism, but also (e.g.) with his idea that general relativity and quantum mechanics didn’t need to be reconciled, that it was totally fine for the deepest layer of reality to be a patchwork of inconsistent theories—Dyson’s views struck me as not merely contrarian but as a high-level form of trolling. Even so, Dyson’s book Disturbing the Universe had had a major impact on me as a teenager, for the sparkling prose as much as for the ideas.

With Dyson’s passing, the scientific world has lost one of its last direct links to a heroic era, of Einstein and Oppenheimer and von Neumann and a young Richard Feynman, when theoretical physics stood at the helm of civilization like never before or since. Dyson, who apparently remained not only lucid but mathematically powerful (!) well into his last year, clearly remembered when the Golden Age of science fiction looked like simply sober forecasting; when the smartest young people, rather than denouncing each other on Twitter, dreamed of scouting the solar system in thermonuclear-explosion-powered spacecraft and seriously worked to make that happen.

Boris Tsirelson (homepage, Wikipedia), who emigrated from the Soviet Union and then worked at Tel Aviv University (where my wife Dana attended his math lectures), wasn’t nearly as well known as Dyson to the wider world, but was equally beloved within the quantum computing and information community. Tsirelson’s bound, which he proved in the 1980s, showed that even quantum mechanics could only violate the Bell inequality by so much and by no more, could only let Alice and Bob win the CHSH game with probability cos^2(π/8). This seminal result anticipated many of the questions that would only be asked decades later with the rise of quantum information. Tsirelson’s investigations of quantum nonlocality also led him to pose the famous Tsirelson’s problem: loosely speaking, can all sets of quantum correlations that can arise from an infinite amount of entanglement be arbitrarily well approximated using finite amounts of entanglement? The spectacular answer—no—was only announced one month ago, as a corollary of the MIP*=RE breakthrough, something that Tsirelson happily lived to see although I don’t know what his reaction was (update: I’m told that he indeed learned of it in his final weeks, and was happy about it). Sadly, for some reason, I never met Tsirelson in person, although I did have lively email exchanges with him 10-15 years ago about his problem and other topics. This amusing interview with Tsirelson gives some sense for his personality (hat tip to Gil Kalai, who knew Tsirelson well).
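For the curious, Tsirelson’s bound is easy to check numerically for the textbook optimal CHSH strategy: Alice measures at angles 0 or π/4, Bob at ±π/8, on a maximally entangled pair, where the probability of matching outcomes is cos² of the angle difference.

```python
import math

# Optimal CHSH measurement angles (standard construction on |Phi+>)
alice = [0.0, math.pi / 4]       # Alice's angle for input x = 0, 1
bob = [math.pi / 8, -math.pi / 8]  # Bob's angle for input y = 0, 1

total = 0.0
for x in (0, 1):
    for y in (0, 1):
        delta = alice[x] - bob[y]
        # Winning means a == b unless x = y = 1; on |Phi+> the
        # probability of matching outcomes is cos^2(delta).
        p_win = math.cos(delta) ** 2 if x * y == 0 else math.sin(delta) ** 2
        total += p_win / 4  # inputs are uniformly random

print(total, math.cos(math.pi / 8) ** 2)  # both ~0.8536, vs. 0.75 classically
```

Every input pair is won with probability exactly cos^2(π/8) ≈ 0.8536, and Tsirelson proved that no quantum strategy, however entangled, can do better.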

Please share any memories of Dyson or Tsirelson in the comments section.