Archive for the ‘The Fate of Humanity’ Category

Was Scientific American Sokal’d?

Friday, September 24th, 2021

Here’s yesterday’s clickbait offering from Scientific American, the once-legendary home of Martin Gardner’s Mathematical Games column:

Why the Term ‘JEDI’ Is Problematic for Describing Programs That Promote Justice, Equity, Diversity and Inclusion

The sad thing is, I see few signs that this essay was meant as a Sokal-style parody, although in many ways it’s written as one. The essay actually develops a 100% cogent, reasoned argument: namely, that the ideology of the Star Wars films doesn’t easily fit with the new ideology of militant egalitarianism at the expense of all other human values, including irony, humor, joy, and the nurturing of unusual talents. The authors are merely oblivious to the conclusion that most people would draw from their argument: namely, so much the worse for the militant egalitarianism then!

I predict that this proposal—to send the acronym “JEDI” the way of “mankind,” “blacklist,” and, err, “quantum supremacy”—will meet with opposition even from the wokeists themselves, a huge fraction of whom (in my experience) have soft spots for the Star Wars franchise. Recall for example that in 2014, Laurie Penny used Star Wars metaphors in her interesting response to my comment-171, telling male nerds like me that we need to learn to accept that “[we’re] not the Rebel Alliance, [we’re] actually part of the Empire and have been all along.” Admittedly, I’ve never felt like part of an Empire, although I’ll confess to some labored breathing lately when ascending flights of stairs.

As for me, I spent much of my life opposed in principle to Star Wars—I hated how the most successful “science fiction” franchise of all time became that way precisely by ditching any pretense of science and fully embracing mystical woo—but sure, when the chips are down, I’m crazy and radical enough to take the side of Luke Skywalker, even if a team of woke theorists is earnestly, unironically explaining to me that lightsabers are phallocentric and that Vader ranks higher on the intersectional oppression axis because of his breathing problem.

Meantime, of course, the US continues to careen toward its worst Constitutional crisis since the Civil War, as Trump prepares to run again in 2024, and as this time around, the Republicans are systematically purging state governments of their last Brad Raffenspergers, of anyone who might stand in the way of them simply setting aside the vote totals and declaring Trump the winner regardless of the actual outcome. It’s good to know that my fellow progressives have their eyes on the ball—so that when that happens, at least universities will no longer be using offensive acronyms like “JEDI”!

Please cheer me up

Friday, August 27th, 2021


Update: Come to think of it, let’s circle back to the thing about kids under 13 getting banned from taking the SAT, as a ridiculous unintended consequence of some federal regulation. I wonder whether this is a campaign this blog could spearhead that would have an actual chance of making a positive difference in the world (!!), rather than just giving me space to express myself, to vent my impotent rage at the tragic failures of our civilization and the blankfaces who sleep soundly despite knowing that they caused those failures.  What if, like, a whole bunch of us wrote to the College Board, or whatever federal agency enforces the regulation that the College Board is worried about, and we asked them whether a solution might be found in which parents gave permission on the web form for their under-13s to take the SAT, given how memorable this opportunity was for many of us, how it was a nerd rite of passage, and how surely none of us have any wish to deny that opportunity to the next generation, so let’s work together to solve this?


I’m depressed that, all over the world, the values of the Enlightenment are humiliated and in retreat, while the values of the Taliban are triumphant. The literal Taliban of course, but also a thousand mini-Talibans of every flavor, united in their ideological certainty.

I’m depressed that now and for the future, the image of the United States before the world—deservedly so—is one of desperate Afghans plunging to their deaths from the last airplanes out of Kabul. I’m depressed that, while this historic calamity was set in motion by Donald Trump, the president who bears direct, immediate moral responsibility for it is the one I voted for. And knowing what I know now, I’d still have voted for him—but with an ashen face.

I’m depressed that, on social media, the same people who seven years ago floridly denounced me, because, while explaining how as a young person I overcame the urge to suicide and finally achieved some semblance of a normal life, I made a passing reference to a vanished culture of arranged marriages, to which I seemed better-adapted than to the world of today—these very same people are the ones sagely resigned to millions of Afghan women and girls actually forced into unwanted marriages, tortured, and raped, who explain that there’s nothing the US can or should do about this, even that it was folly to imagine we could impose parochial Western values, like women’s rights, on a culture that doesn’t want them. These are the people who saw fit to lecture me on my feminist failings.

I’m depressed that there’s an exceedingly good chance that both of my kids will get covid, as they’ve returned to school and preschool in Austin, TX, where the Delta variant is raging out of control, new reports of cases among the kids’ schoolmates come almost every day, Daniel has been quarantined at home for the past week because of one such case, there’s no vaccination mandate (and a looming battle over mask mandates), and—crucially, tragically, incredibly—the FDA has not only slow-walked approval of covid vaccines for children under 12, but has pushed back the approval even further than it previously planned, ignoring unprecedented public objections from the American Academy of Pediatrics. The FDA blankfaces have done this in spite of the reality, obvious to anyone with eyes and a brain, that they’re thereby consigning thousands of children to their deaths, that whatever ultra-rare risks the vaccine poses to children are infinitesimal compared to the overwhelming benefit.

Since I worry that I wasn’t clear enough, how about this: in a just world, the FDA in its current form would be dismantled, and all those who needlessly delayed the delivery of covid vaccines to children would be tried for manslaughter [while I still think the case for authorizing covid vaccines for kids right now is overwhelmingly clear, I hereby retract this particular remark, which was based on a factor 5-10 overestimate of the covid mortality risk for kids—for more, see this comment]. The blankfaces have already killed more people through pointlessly delaying the approval of covid vaccines than their agency could plausibly have saved through its entire history: do they need to take the children as well? As far as I’m concerned, those who defend the status quo—those who meet the on-the-ground reality of overflowing pediatric hospitals with obfuscatory words about procedures and best practices and the need for yet more data—are no better either morally or intellectually than the anti-vaxx conspiracy theorists, their rightly-reviled cousins.

As icing on the cake, I’m depressed that the College Board is no longer administering the SAT to children under 13, apparently because of federal regulations—which means that Johns Hopkins CTY’s famed Study of Exceptional Talent, a program that made a big difference in my life three decades ago, has been suspended indefinitely. Imagine being a nerdy 11-year-old in 2021: no more tracking, no more gifted programs, no more magnet schools, no more acceleration, no getting vaccinated against deadly disease (!!), … oh, and if perchance you felt the urge to take the SAT, just to prove that you could outscore the grownups who decided to impose all this on you, then no, you’re no longer allowed to do that either.

The one bright spot in the endlessly bleak picture is that Daniel, my 4-year-old son, now plays a pretty mean chess game, if not quite at the level of Beth Harmon. Having just learned the rules a few months ago, Daniel now gives me and Dana (admittedly, no one would mistake either of us for Magnus Carlsen) extremely competitive matches; just yesterday he beat several adults in a park. Daniel has come to spend much of his free time (and now that he’s quarantined, he has a lot) playing chess against his iPad and watching chess videos. To be clear, he has very little emotional maturity even for a 4-year-old, and unlike me at the same age, he has no overwhelming passion for numbers or counting, but with chess I’ve finally found a winner. Now I just need to hope that they don’t ban chess-playing for children under 13.

So that’s it, it’s off my chest. Commenters: what else have you got that might cheer me up?

Steven Weinberg (1933-2021): a personal view

Saturday, July 24th, 2021

[Image: Steven Weinberg sitting in front of a chalkboard covered in equations]

Steven Weinberg was, perhaps, the last truly towering figure of 20th-century physics. In 1967, he wrote a 3-page paper saying in effect that as far as he could see, two of the four fundamental forces of the universe—namely, electromagnetism and the weak nuclear force—had actually been the same force until a tiny fraction of a second after the Big Bang, when a broken symmetry caused them to decouple. Strangely, he had developed the math underlying this idea for the strong nuclear force, and it didn’t work there, but it did seem to work for the weak force and electromagnetism. Steve noted that, if true, this would require the existence of two force-carrying particles that hadn’t yet been seen — the W and Z bosons — and would also require the existence of the famous Higgs boson.
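For readers curious what that unification amounts to quantitatively, here are the standard tree-level relations of the electroweak theory (textbook values, not taken from anything in this post): the masses of the W and Z both arise from the Higgs vacuum expectation value v, tied together by the weak mixing angle θ_W—now often called the Weinberg angle—so that predicting one mass essentially pins down the other.

```latex
% Tree-level electroweak mass relations (standard textbook results):
% g and g' are the SU(2) and U(1) gauge couplings, v the Higgs vacuum
% expectation value, and \theta_W the weak mixing (Weinberg) angle.
m_W = \tfrac{1}{2}\, g\, v, \qquad
m_Z = \tfrac{1}{2}\, v\, \sqrt{g^2 + g'^2}, \qquad
\frac{m_W}{m_Z} = \cos\theta_W .
% With v \approx 246\ \mathrm{GeV} and \sin^2\theta_W \approx 0.23,
% these give m_W \approx 80\ \mathrm{GeV} and m_Z \approx 91\ \mathrm{GeV},
% close to the masses later measured at CERN in 1983.
```

The point is that these were sharp, falsifiable numbers: once θ_W was measured in neutrino scattering, the theory said exactly where the W and Z had to be.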

By 1979, enough of this picture had been confirmed by experiment that Steve shared the Nobel Prize in Physics with Sheldon Glashow—Steve’s former high-school classmate—as well as with Abdus Salam, both of whom had separately developed pieces of the same puzzle. As arguably the central architect of what we now call the Standard Model of elementary particles, Steve was in the ultra-rarefied class where, had he not won the Nobel Prize, it would’ve been a stain on the prize rather than on him.

Steve once recounted in my hearing that Richard Feynman initially heaped scorn on the electroweak proposal. Late one night, however, Steve was woken up by a phone call. It was Feynman. “I believe your theory now,” Feynman announced. “Why?” Steve asked. Feynman, being Feynman, gave some idiosyncratic reason that he’d worked out for himself.

It used to happen more often that someone would put forward a bold new proposal about the most fundamental laws of nature … and then the experimentalists would actually go out and confirm it. Besides the Standard Model, though, there’s approximately one other time that that’s happened in the living memory of most of today’s physicists. Namely, when astronomers discovered in 1998 that the expansion of the universe was accelerating, apparently due to a dark energy that behaved like Einstein’s long-ago-rejected cosmological constant. Very few had expected such a result. There was one prominent exception, though: Steve Weinberg had written in 1987 that he saw no reason why the cosmological constant shouldn’t take a nonzero value that was still tiny enough to be consistent with galaxy formation and so forth.


In his long and illustrious career, one of the least important things Steve did, six years ago, was to play a major role in recruiting me and my wife Dana to UT Austin. The first time I met Steve, his first question to me was “have we met before? you look familiar.” It turns out that he’d met my dad, Steve Aaronson, way back in the 1970s, when my dad (then a young science writer) had interviewed Weinberg for a magazine article. I was astonished that Weinberg would remember such a thing across decades.

Steve was then gracious enough to take me, Dana, and both of my parents out to dinner in Austin as part of my and Dana’s recruiting trip.

We talked, among other things, about Telluride House at Cornell, where Steve had lived as an undergrad in the early 1950s and where I’d lived as an undergrad almost half a century later. Steve said that, while he loved the intellectual atmosphere at Telluride, he tried to have as little to do as possible with the “self-government” aspect, since he found the political squabbles that convulsed many of the humanities majors there to be a waste of time. I burst out laughing, because … well, imagine you got to have dinner with James Clerk Maxwell, and he opened up about some ridiculously specific pet peeve from his college years, and it was your ridiculously specific pet peeve from your college years.

(Steve claimed to us, not entirely convincingly, that he was a mediocre student at Cornell, more interested in “necking” with his fellow student and future wife Louise than in studying physics.)

After Dana and I came to Austin, Steve was kind enough to invite me to the high-energy theoretical physics lunches, where I chatted with him and the other members of his group every week (or better yet, simply listened). I’d usually walk to the faculty club ten minutes early. Steve, having arrived by car, would be sitting alone in an armchair, reading a newspaper, while he waited for the other physicists to arrive by foot. No matter how scorching the Texas sun, Steve would always be wearing a suit (usually a tan one) and a necktie, his walking-cane by his side. I, typically in ratty shorts and t-shirt, would sit in the armchair next to him, and we’d talk—about the latest developments in quantum computing and information (Steve, a perpetual student, would pepper me with questions), or his recent work on nonlinear modifications of quantum mechanics, or his memories of Cambridge, MA, or climate change or the anti-Israel protests in Austin or whatever else. These conversations, brief and inconsequential as they probably were to him, were highlights of my week.

There was, of course, something a little melancholy about getting to know such a great man only in the twilight of his life. To be clear, Steve Weinberg in his mid-to-late 80s was far more cogent, articulate, and quick to understand what was said to him than just about anyone you’d ever met in their prime. But then, after a short conversation, he’d have to leave for a nap. Steve was as clear-eyed and direct about his age and impending mortality as he was about everything else. “Scott!” he once greeted me. “I just saw the announcement for your physics colloquium about quantum supremacy. I hope I’m still alive next month to attend it.”

(As it happens, the colloquium in question was on November 9, 2016, the day we learned that Trump would become president. I offered to postpone the talk, since no one could concentrate on physics on such a day. While several of the physicists agreed that that was the right call, Steve convinced me to go ahead with the following message: “I sympathize, but I do want to hear you … There is some virtue in just plowing on.”)

I sometimes felt, as well, like I was speaking with Steve across a cultural chasm even greater than the half-century that separated us in age. Steve enjoyed nothing more than to discourse at length, in his booming New-York-accented baritone, about opera, or ballet, or obscure corners of 18th-century history. It would be easy to feel like a total philistine by comparison … and I did. Steve also told me that he never reads blogs or other social media, since he’s unable to believe any written work is “real” unless it’s published, ideally on paper. I could only envy such an attitude.


If you did try to judge by the social media that he never read, you might conclude that Steve would be remembered by the wider world less for any of his epochal contributions to physics than for a single viral quote of his:

With or without religion, good people can behave well and bad people can do evil; but for good people to do evil — that takes religion.

I can testify that Steve fully lived his atheism. Four years ago, I invited him (along with many other UT colleagues) to the brit milah of my newborn son Daniel. Steve said he’d be happy to come over to our house another time (and I’m happy to say that he did a year later), but not to witness any body parts being cut.

Despite his hostility to Judaism—along with every other religion—Steve was a vociferous supporter of the state of Israel, almost to the point of making me look like Edward Said or Noam Chomsky. For Steve, Zionism was not in spite of his liberal, universalist Enlightenment ideals but because of them.

Anyway, there’s no need even to wonder whether Steve had any sort of deathbed conversion. He’d laugh at the thought.


In 2015, Steve published To Explain the World, a history of human progress in physics and astronomy from the ancient Greeks to Newton (when, Steve says, the scientific ethos reached the form that it still basically has today). It’s unlike any other history-of-science book that I’ve read. Of course I’d read other books about Aristarchus and Ptolemy and so forth, but I’d never read a modern writer treating them not as historical subjects, but as professional colleagues merely separated in time. Again and again, Steve would redo ancient calculations, finding errors that had escaped historical notice; he’d remark on how Eratosthenes or Kepler could’ve done better with the data available to them; he’d grade the ancients by how much of modern physics and cosmology they’d correctly anticipated.

To Explain the World was savaged in reviews by professional science historians. Apparently, Steve had committed the unforgivable sin of “Whig history”: that is, judging past natural philosophers by the standards of today. Steve clung to the naïve, debunked, scientistic notions that there’s such a thing as “actual right answers” about how the universe works; that we today are, at any rate, much closer to those right answers than the ancients were; and that we can judge the ancients by how close they got to the right answers that we now know.

As I read the sneering reviews, I kept thinking: so suppose Archimedes, Copernicus, and all the rest were brought back from the dead. Who would they rather talk to: historians seeking to explore every facet of their misconceptions, like anthropologists with a paleolithic tribe; or Steve Weinberg, who’d want to bring them up to speed as quickly as possible so they could continue the joint quest?


When it comes to the foundations of quantum mechanics, Steve took the view that no existing interpretation is satisfactory, although the Many-Worlds Interpretation is perhaps the least bad of the bunch. Steve felt that our reaction to this state of affairs should be to test quantum mechanics more precisely—for example, by looking for tiny nonlinearities in the Schrödinger equation, or other signs that QM itself is only a limit of some more all-encompassing theory. This is, to put it mildly, not a widely-held view among high-energy physicists—but it provided a fascinating glimpse into how Steve’s mind worked.

Here was, empirically, the most successful theoretical physicist alive, and again and again, his response to conceptual confusion was not to ruminate more about basic principles but to ask for more data or do a more detailed calculation. He never, ever let go of a short tether to the actual testable consequences of whatever was being talked about, or future experiments that might change the situation.

(Steve worked on string theory in the early 1980s, and he remained engaged with it for the rest of his life, for example by recruiting the string theorists Jacques Distler and Willy Fischler to UT Austin. But he later soured on the prospects for getting testable consequences out of string theory within a reasonable timeframe. And he once complained to me that the papers he’d read about “It from Qubit,” AdS/CFT, and the black hole information problem had had “too many words and not enough equations.”)


Steve was, famously, about as hardcore a reductionist as has ever existed on earth. He was a reductionist not just in the usual sense that he believed there are fundamental laws of physics, from which, together with the initial conditions, everything that happens in our universe can be calculated in principle (if not in practice), at least probabilistically. He was a reductionist in the stronger sense that he thought the quest to discover the fundamental laws of the universe had a special pride of place among all human endeavors—a place not shared by the many sciences devoted to the study of complex emergent behavior, interesting and important though they might be.

This came through clearly in Steve’s critical review of Stephen Wolfram’s A New Kind of Science, where Steve (Weinberg, that is) articulated his views of why “free-floating” theories of complex behavior can’t take the place of a reductionistic description of our actual universe. (Of course, I was also highly critical of A New Kind of Science in my review, but for somewhat different reasons than Steve was.) Steve’s reductionism was also clearly expressed in his testimony to Congress in support of continued funding for the Superconducting Supercollider. (Famously, Phil Anderson testified against the SSC, arguing that the money would better be spent on condensed-matter physics and other sciences of emergent behavior. The result: Congress did cancel the SSC, and it redirected precisely zero of the money to other sciences. But at least Steve lived to see the LHC dramatically confirm the existence of the Higgs boson, as the SSC would have.)

I, of course, have devoted my career to theoretical computer science, which you might broadly call a “science of emergent behavior”: it tries to figure out the ultimate possibilities and limits of computation, taking the underlying laws of physics as given. Quantum computing, in particular, takes as its input a physical theory that was already known by 1926, and studies what can be done with it. So you might expect me to disagree passionately with Weinberg on reductionism versus holism.

In reality, I have a hard time pinpointing any substantive difference. Mostly I see a difference in opportunities: Steve saw a golden chance to contribute something to the millennia-old quest to discover the fundamental laws of nature, at the tail end of the heroic era of particle physics that culminated in what we now call the Standard Model. He was brilliant enough to seize that chance. I didn’t see a similar chance: possibly because it no longer existed; almost certainly because, even if it did, I wouldn’t have had the right mind for it. I found a different chance, to work at the intersection of physics and computer science that was finally kicking into high gear at the end of the 20th century. Interestingly, while I came to that intersection from the CS side, quite a few who were originally trained as high-energy physicists ended up there as well—including a star PhD student of Steve Weinberg’s named John Preskill.

Despite his reductionism, Steve was as curious and enthusiastic about quantum computation as he was about a hundred other topics beyond particle physics—he even ended his quantum mechanics textbook with a chapter about Shor’s factoring algorithm. Having said that, a central reason for his enthusiasm about QC was that he clearly saw how demanding a test it would be of quantum mechanics itself—and as I mentioned earlier, Steve was open to the possibility that quantum mechanics might not be exactly true.


It would be an understatement to call Steve “left-of-center.” He believed in higher taxes on rich people like himself to service a robust social safety net. When Trump won, Steve remarked to me that most of the disgusting and outrageous things Trump would do could be reversed in a generation or so—but not the aggressive climate change denial; that actually could matter on the scale of centuries. Steve made the news in Austin for openly defying the Texas law forcing public universities to allow concealed carry on campus: he said that, regardless of what the law said, firearms would not be welcome in his classroom. (Louise, Steve’s wife for 67 years and a professor at UT Austin’s law school, also wrote perhaps the definitive scholarly takedown of the shameful Bush v. Gore Supreme Court decision, which installed George W. Bush as president.)

All the same, during the “science wars” of the 1990s, Steve was scathing about the academic left’s postmodernist streak and deeply sympathetic to what Alan Sokal had done with his Social Text hoax. Steve also once told me that, when he (like other UT faculty) was required to write a statement about what he would do to advance Diversity, Equity, and Inclusion, he submitted just a single sentence: “I will seek the best candidates, without regard to race or sex.” I remarked that he might be one of the only academics who could get away with that.

I confess that, for the past five years, knowing Steve was a greater source of psychological strength for me than, from a rational standpoint, it probably should have been. Regular readers will know that I’ve spent months of my life agonizing over various nasty things people have said about me on Twitter and Reddit—that I’m a sexist white male douchebag, a clueless techbro STEMlord, a neoliberal Zionist shill, and I forget what else.

But I lately have had a secret emotional weapon that helped somewhat: namely, the certainty that Steven Weinberg had more intellectual power in a single toenail clipping than these Twitter-attackers had collectively experienced over the course of their lives. It’s like, have you heard the joke where two rabbis are arguing some point of Talmud, and then God speaks from a booming thundercloud to declare that the first rabbi is right, and then the second rabbi says “OK fine, now it’s 2 against 1?” For the W and Z bosons and Higgs boson that you predicted to turn up at the particle accelerator is not exactly God declaring from a thundercloud that the way your mind works is aligned with the way the world actually is—Steve, of course, would wince at the suggestion—but it’s about the closest thing available in this universe. My secret emotional weapon was that I knew the man who’d experienced this, arguably more than any of the 7.6 billion other living humans, and not only did that man not sneer at me, but by some freakish coincidence, he seemed to have reached roughly the same views as I had on >95% of controversial questions where we both had strong opinions.


My final conversations with Steve Weinberg were about a laptop. When covid started in March 2020, Steve and Louise, being in their late 80s, naturally didn’t want to take chances, and rigorously sheltered at home. But an issue emerged: Steve couldn’t install Zoom on his Bronze Age computer, and so couldn’t participate in the virtual meetings of his own group, nor could he do Zoom calls with his daughter and granddaughter. While as a theoretical computer scientist, I don’t normally volunteer myself as tech support staff, I decided that an exception was more than warranted in this case. The quickest solution was to configure one of my own old laptops with everything Steve needed and bring it over to his house.

Later, Steve emailed me to say that, while the laptop had worked great and been a lifesaver, he’d finally bought his own laptop, so I should come by to pick mine up. I delayed and delayed with that, but finally decided I should do it before leaving Austin at the beginning of this summer. So I emailed Steve to tell him I’d be coming. He replied, asking Louise to leave the laptop on the porch — but the email was addressed only to me, not to her.

At that moment, I knew something had changed: only a year before, incredibly, I’d been more senile and out-of-it as a 39-year-old than Steve had been as an 87-year-old. What I didn’t know at the time was that Steve had sent that email from the hospital when he was close to death. It was the last I heard from him.

(Once I learned what was going on, I did send a get-well note, which I hope Steve saw, saying that I hoped he appreciated that I wasn’t praying for him.)


Besides the quote about good people, bad people, and religion, the other quote of Steve’s that he never managed to live down came from the last pages of The First Three Minutes, his classic 1970s popularization of big-bang cosmology:

The more the universe seems comprehensible, the more it also seems pointless.

In the 1993 epilogue, Steve tempered this with some more hopeful words, nearly as famous:

The effort to understand the universe is one of the very few things which lifts human life a little above the level of farce and gives it some of the grace of tragedy.

It’s not my purpose here to resolve the question of whether life or the universe has a point. What I can say is that, even in his last years, Steve never for a nanosecond acted as if life was pointless. He already had all the material comforts and academic renown anyone could possibly want. He could have spent all day in his swimming pool, or listening to operas. Instead, he continued publishing textbooks—a quantum mechanics textbook in 2012, an astrophysics textbook in 2019, and a “Foundations of Modern Physics” textbook in 2021 (!). As recently as this year, he continued writing papers—and not just “great man reminiscing” papers, but hardcore technical papers. He continued writing with nearly unmatched lucidity for a general audience, in the New York Review of Books and elsewhere. And I can attest that he continued peppering visiting speakers with questions about stellar evolution or whatever else they were experts on—because, more likely than not, he had redone some calculation himself and gotten a subtly different result from what was in the textbooks.

If God exists, I can’t believe He or She would find nothing more interesting to do with Steve than to torture him for his unbelief. More likely, I think, God is right now talking to Steve the same way Steve talked to Aristarchus in To Explain the World: “yes, you were close about the origin of neutrino masses, but here’s the part you were missing…” While, of course, Steve is redoing God’s calculation to be sure.


Feel free to use the comments as a place to share your own memories.


More Steven Weinberg memorial links (I’ll continue adding to this over the next few days):


Miscellaneous Steven Weinberg links

Slowly emerging from blog-hibervacation

Wednesday, July 21st, 2021

Alright everyone:

  1. Victor Galitski has an impassioned rant against out-of-control quantum computing hype, which I enjoyed and enthusiastically recommend, although I wished Galitski had engaged a bit more with the strongest arguments for optimism (e.g., the recent sampling-based supremacy experiments, the extrapolations that show gate fidelities crossing the fault-tolerance threshold within the next decade). Even if I’ve been saying similar things on this blog for 15 years, I clearly haven’t been doing so in a style that works for everyone. Quantum information needs as many people as possible who will tell the truth as best they see it, unencumbered by any competing interests, and has nothing legitimate to fear from that. The modern intersection of quantum theory and computer science has raised profound scientific questions that will be with us for decades to come. It’s a lily that need not be gilded with hype.
  2. Last month Limaye, Srinivasan, and Tavenas posted an exciting preprint to ECCC, which apparently proves the first (slightly) superpolynomial lower bound on the size of constant-depth arithmetic circuits, over fields of characteristic 0. Assuming it’s correct, this is another small victory in the generations-long war against the P vs. NP problem.
  3. I’m grateful to the Texas Democratic legislators who fled the state to prevent the legislature, a couple of miles from my house, from having a quorum to enact new voting restrictions, and who thereby drew national attention to the enormity of what’s at stake. It should go without saying that, if a minority gets to rule indefinitely by forcing through laws to suppress the votes of a majority that would otherwise unseat it, thereby giving itself the power to force through more such laws, etc., then we no longer live in a democracy but in a banana republic. And there’s no symmetry to the situation: no matter how terrified you (or I) might feel about wokeists and their denunciation campaigns, the Democrats have no comparable effort to suppress Republican votes. Alas, I don’t know of any solutions beyond the obvious one, of trying to deal the conspiracy-addled grievance party crushing defeats in 2022 and 2024.
  4. Added: Here’s the video of my recent Astral Codex Ten ask-me-anything session.

What I told my kids

Saturday, May 15th, 2021

You’ll hear that it’s not as simple as the Israelis are good guys and Palestinians are bad guys, or vice versa. And that’s true.

But it’s also not so complicated that there are no clearly identifiable good guys or bad guys. It’s just that they cut across the sides.

The good guys are anyone, on either side, whose ideal end state is two countries, Israel and Palestine, living side by side in peace.

The bad guys are anyone, on either side, whose ideal end state is the other side being, if not outright exterminated, then expelled from its current main population centers (ones where it’s been for several generations or more) and forcibly resettled someplace far away.

(And those whose ideal end state is everyone living together with no border — possibly as part of the general abolition of nation-states? They’re not bad guys; they can plead insanity. [Update: See here for clarifications!])

Hamas are bad guys. They fire rockets indiscriminately at population centers, hoping to kill as many civilians as they can. (Unfortunately for them and fortunately for Israel, they’re not great at that, and also they’re aiming at a target that’s world-historically good at defending itself.)

The IDF, whatever else you say about it, sends evacuation warnings to civilians before it strikes the missile centers that are embedded where they live. Even if Hamas could aim its missiles, the idea of it extending the same courtesy to Israeli civilians is black comedy.

Netanyahu is not as bad as Hamas, because he has the power to kill millions of Palestinians and yet kills only hundreds … whereas if Hamas had the power to kill all Jews, it told the world in its charter that it would immediately do so, and it’s acted consistently with its word.

(An aside: I’m convinced that Hamas has the most top-heavy management structure of any organization in the world. Every day, Israel takes out another dozen of its most senior, highest-level commanders, apparently leaving hundreds more. How many senior commanders do they have? Do they have even a single junior commander?)

Anyway, not being as bad as Hamas is an extremely low bar, and Netanyahu is a thoroughly bad guy. He’s corrupt and power-mad. Like Trump, he winks at his side’s monstrous extremists without taking moral responsibility for them. And if it were ever possible to believe that he wanted two countries as the ideal end state, it hasn’t been possible to believe that for at least a decade.

Netanyahu and Hamas are allies, not enemies. Both now blatantly, obviously rely on the other to stay in power, to demonstrate their worldview and thereby beat their internal adversaries.

Whenever you see anyone opine about this conflict, on Facebook or Twitter or in an op-ed or anywhere else, keep your focus relentlessly on the question of what that person wants, of what they’d do if they had unlimited power. If they’re a Zionist who talks about how “there’s no such place as Palestine,” how it’s a newly invented political construct: OK then, does that mean they’d relocate the 5 million self-described Palestinians to Jordan? Or where? If, on the other side, someone keeps talking about the “Zionist occupation,” always leaving it strategically unspecified whether they mean just the West Bank and parts of East Jerusalem or also Tel Aviv and Haifa, if they talk about the Nakba (catastrophe) of Israel’s creation in 1948 … OK then, what’s to be done with the 7 million Jews now living there? Should they go back to the European countries that murdered their families, or the Arab countries that expelled them? Should the US take them all? Out with it!

Don’t let them dodge the question. Don’t let them change the subject to something they’d much rather talk about, like the details of the other side’s latest outrage. Those details always seem so important, and yet everyone’s stance on every specific outrage is like 80% predictable if you know their desired end state. So just keep asking directly about their desired end state.

If, like me, you favor two countries living in peace, then you need never fear anyone asking you the same thing. You can then shout your desired end state from the rooftops, leaving unsettled only the admittedly-difficult “engineering problem” of how to get there. Crucially, whatever their disagreements or rivalries, everyone trying to solve the same engineering problem is in a certain sense part of the same team. At least, there’s rarely any reason to kill someone trying to solve the same problem that you are.

“What is this person’s ideal end state?” Just keep asking that and there’s a limit to how wrong you can ever be about this. You can still make factual mistakes, but it’s then almost impossible to make a moral mistake.

The easiest exercise in the moral philosophy book

Sunday, April 25th, 2021

Peter Singer, in the parable that came to represent his whole worldview and that of the effective altruism movement more generally, asked us to imagine that we could save a drowning child at the cost of jumping into a lake and ruining an expensive new suit. Assuming we’d do that, he argued that we do in fact face an ethically equivalent choice; if we don’t donate most of our income to save children in the Third World, then we need to answer for why, as surely as the person who walked past the kid thrashing in the water.

In this post, I don’t want to take a position on Singer’s difficult but important hypothetical. I merely want to say: suppose that to save the child, you didn’t even have to jump in the water. Suppose you just had to toss a life preserver, one you weren’t using. Or suppose you just had to assure the child that it was OK to grab your life raft that was already in the water.

That, it seems, is the situation that the US and other rich countries will increasingly face with covid vaccines. What’s happening in India right now looks on track to become a humanitarian tragedy, if it isn’t already. Even if, as Indian friends tell me, this was a staggering failure of the Modi government, people shouldn’t pay for it with their lives. And we in the US now have tens of millions of vaccine doses sitting in warehouses unused, for regulatory and vaccine-hesitancy reasons—stupidly, but we do. In my opinion, we’ve now reached the point where it’s morally obligatory either to use the doses or to give them away. Anyone in a position to manufacture more vaccines for distribution to poor countries should also immediately get the intellectual property rights to do so.

I was glad to read, just this weekend, that the US is finally starting to move in the right direction. I hope it moves faster.

And I’m sorry that this brief post doesn’t contain any information or insight that you can’t find elsewhere. It just made me feel better to write it, is all.

On standing up sans backbone

Monday, February 15th, 2021

Note: To get myself into the spirit of writing this post, tonight I watched the 2019 movie Mr. Jones, about the true story of the coverup of Stalin’s 1932-3 mass famine by New York Times journalist Walter Duranty. Recommended!

In my last post, I wrote that despite all my problems with Cade Metz’s New York Times hit piece on Scott Alexander, I’d continue talking to journalists—even Metz himself, I added, assuming he’d still talk to me after my public disparagement of his work. Over the past few days, though, the many counterarguments in my comments section and elsewhere gradually caused me to change my mind. I now feel like to work with Metz again, even just on some quantum computing piece, would be to reward—and to be seen as rewarding—journalistic practices that are making the world worse, and that this consideration overrides even my extreme commitment to openness.

At the least, before I could talk to Metz again, I’d need a better understanding of how the hit piece happened. What was the role of the editors? How did the original hook—namely, the rationalist community’s early rightness about covid-19—disappear entirely from the article? How did the piece manage to evince so little curiosity about such an unusual subculture and such a widely-admired writer? How did it fail so completely to engage with the rationalists’ ideas, instead jumping immediately to “six degrees of Peter Thiel” and other reductive games? How did an angry SneerClubber, David Gerard, end up (according to his own boast) basically dictating the NYT piece’s content?

It’s always ripping-off-a-bandage painful to admit when trust in another person was wildly misplaced—for then who else might we be wrong to trust? But sometimes that’s the truth of it.

I continue to believe passionately in the centrality of good journalism to a free society. I’ll continue to talk to journalists often, about quantum computing or whatever else. I also recognize that the NYT is a large, heterogeneous institution (I myself published in it twice); it’s not hard to imagine that many of its own staff take issue with the SSC piece.

But let’s be clear about the stakes here. In the discussion of my last post, I described the NYT as “still the main vessel of consensus reality in human civilization” [alright, alright, American civilization!]. What’s really at issue, beyond the treatment of a single blogger, is whether the NYT can continue serving that central role in a world reshaped by social media, resurgent fascism, and entitled wokery.

Sure, we all know that the NYT has been disastrously wrong before: it ridiculed Goddard’s dream of spaceflight, denied the Holodomor, relegated the Holocaust to the back pages while it was happening, published the fabricated justifications for the Iraq War. But the NYT and a few other publications were still the blockchain of reality, the engine of the consensus of all that is, the last bulwark against the conspiracists and the anti-vaxxers and the empowered fabulists and the horned insurrectionists storming the Capitol, because there was no ability to coordinate around any serious alternative. I’m still skeptical that there’s a serious alternative, but I now look more positively than I did just a few days ago on attempts to create one.

To all those who called me naïve or a coward for having cooperated with the NYT: believe me, I’m well aware that I wasn’t born with much backbone. (I am, after all, that guy on the Internet who famously once planned on a life of celibate asceticism, or more likely suicide, rather than asking women out and thereby risking eternal condemnation as a misogynistic sexual harasser by the normal, the popular, the socially adept, the … humanities grads and the journalists.) But whenever I need a pick-me-up, I tell myself that rather than being ashamed about my lack of a backbone, I can take pride in having occasionally managed to stand even without one.

A grand anticlimax: the New York Times on Scott Alexander

Saturday, February 13th, 2021

Updates (Feb. 14, 2021): Scott Alexander Siskind responds here.

Last night, it occurred to me that despite how disjointed it feels, the New York Times piece does have a central thesis: namely, that rationalism is a “gateway drug” to dangerous beliefs. And that thesis is 100% correct—insofar as once you teach people that they can think for themselves about issues of consequence, some of them might think bad things. It’s just that many of us judge the benefit worth the risk!

Happy Valentine’s Day everyone!


Back in June, New York Times technology reporter Cade Metz, who I’d previously known from his reporting on quantum computing, told me that he was writing a story about Scott Alexander, Slate Star Codex, and the rationalist community. Given my position as someone who knew the rationalist community without ever really being part of it, Cade wondered whether I’d talk with him. I said I’d be delighted to.

I spent many hours with Cade, taking his calls and emails morning or night, at the playground with my kids or wherever else I was, answering his questions, giving context for his other interviews, suggesting people in the rationalist community for him to talk to, in exactly the same way I might suggest colleagues for a quantum computing story. And then I spent just as much time urging those people to talk to Cade. (“How could you possibly not want to talk? It’s the New York Times!”) Some of the people I suggested agreed to talk; others refused; a few were livid at me for giving a New York Times reporter their email addresses without asking them. (I apologized; lesson learned.)

What happened next is already the stuff of Internet history: the NYT’s threat to publish Scott’s real surname; Scott deleting his blog as a way to preempt that ‘doxing’; 8,000 people, including me, signing a petition urging the NYT to respect Scott’s wish to keep his professional and blog identities separate; Scott resigning from his psychiatry clinic and starting his own low-cost practice, Lorien Psychiatry; his moving his blog, like so many other writers this year, to Substack; then, a few weeks ago, his triumphant return to blogging under his real name of Scott Siskind. All this against the backdrop of an 8-month period that was world-changingly historic in so many other ways: the failed violent insurrection against the United States and the ouster, by democratic means, of the president who incited it; the tragedy of covid and the long-delayed start of the vaccination campaign; the BLM protests; the well-publicized upheavals at the NYT itself, including firings for ideological lapses that would’ve made little sense to our remote ancestors of ~2010.

And now, as an awkward coda, the New York Times article itself is finally out (non-paywalled version here).

It could’ve been worse. I doubt it will do lasting harm. Of the many choices I disagreed with, I don’t know which were Cade’s and which his editors’. But no, I was not happy with it. If you want a feature-length, pop condensation of the rationalist community and its ideas, I preferred this summer’s New Yorker article (but much better still is the book by Tom Chivers).

The trouble with the NYT piece is not that it makes any false statements, but just that it constantly insinuates nefarious beliefs and motives, via strategic word choices and omission of relevant facts that change the emotional coloration of the facts that it does present. I repeatedly muttered to myself, as I read: “dude, you could make anything sound shady with this exact same rhetorical toolkit!”

Without further ado, here’s a partial list of my issues:

  1. The piece includes the following ominous sentence: “But in late June of last year, when I approached Siskind to discuss the blog, it vanished.”  This framing, it seems to me, would be appropriate for some conman trying to evade accountability without ever explaining himself. It doesn’t make much sense for a practicing psychiatrist who took the dramatic step of deleting his blog in order to preserve his relationship with his patients—thereby complying with an ethical code that’s universal among psychiatrists, even if slightly strange to the rest of us—and who immediately explained his reasoning to the entire world. In the latter framing, of course, Scott comes across less like a fugitive on the run and more like an innocent victim of a newspaper’s editorial obstinacy.
  2. As expected, the piece devotes enormous space to the idea of rationalism as an on-ramp to alt-right extremism.  The trouble is, it never presents the idea that rationalism also can be an off-ramp from extremism—i.e., that it can provide a model for how even after you realize that mainstream sources are confidently wrong on some issue, you don’t respond by embracing conspiracy theories and hatreds, you respond by simply thinking carefully about each individual question rather than buying a worldview wholesale from anyone.  Nor does the NYT piece mention how Scott, precisely because he gives right-wing views more charity than some of us might feel they deserve, actually succeeded in dissuading some of his readers from voting for Trump—which is more success than I can probably claim in that department! I had many conversations with Cade about these angles that are nowhere reflected in the piece.
  3. The piece gets off on a weird foot, by describing the rationalists as “a group that aimed to re-examine the world through cold and careful thought.”  Why “cold”?  Like, let’s back up a few steps: what is even the connection in the popular imagination between rationality and “coldness”? To me, as to many others, the humor, humanity, and warmth of Scott’s writing were always among its most notable features.
  4. The piece makes liberal use of scare quotes. Most amusingly, it puts scare quotes around the phrase “Bayesian reasoning”!
  5. The piece never mentions that many rationalists (Zvi Mowshowitz, Jacob Falkovich, Kelsey Piper…) were right about the risk of covid-19 in early 2020, and then again right about masks, aerosol transmission, faster-spreading variants, the need to get vaccines into arms faster, and many other subsidiary issues, even while public health authorities and the mainstream press struggled for months to reach the same obvious (at least in retrospect) conclusions.  This omission is significant because Cade told me, in June, that the rationalist community’s early rightness about covid was part of what led him to want to write the piece in the first place (!).  If readers knew about that clear success, would it put a different spin on the rationalists’ weird, cultlike obsession with “Bayesian reasoning” and “consequentialist ethics” (whatever those are), or their nerdy, idiosyncratic worries about the more remote future?
  6. The piece contains the following striking sentence: “On the internet, many in Silicon Valley believe, everyone has the right not only to say what they want but to say it anonymously.” Well, yes, except this framing makes it sound like this is a fringe belief of some radical Silicon Valley tribe, rather than just the standard expectation of most of the billions of people who’ve used the Internet for most of its half-century of existence.
  7. Despite thousands of words about the content of SSC, the piece never gives Scott a few uninterrupted sentences in his own voice, to convey his style. This is something the New Yorker piece did do, and which would help readers better understand the wit, humor, charity, and self-doubt that made SSC so popular.  To see what I mean, read the NYT’s radically-abridged quotations from Scott’s now-classic riff on the Red, Blue, and Gray Tribes and decide for yourself whether they capture the spirit of the original (alright, I’ll quote the relevant passage myself at the bottom of this post). Scott has the property, shared by many of my favorite writers, that if you just properly quote him, the words leap off the page, wriggling free from the grasp of any bracketing explanations and making a direct run for the reader’s brain. All the more reason to quote him!
  8. The piece describes SSC as “astoundingly verbose.”  A more neutral way to put it would be that Scott has produced a vast quantity of intellectual output.  When I finish a Scott Alexander piece, only in a minority of cases do I feel like he spent more words examining a problem than its complexities really warranted.  Just as often, I’m left wanting more.
  9. The piece says that Scott once “aligned himself” with Charles Murray, then goes on to note Murray’s explosive views about race and IQ. That might be fair enough, were it also mentioned that the positions ascribed to Murray that Scott endorses in the relevant post—namely, “hereditarian leftism” and universal basic income—are not only unrelated to race but are actually progressive positions.
  10. The piece says that Scott once had neoreactionary thinker Nick Land on his blogroll. Again, important context is missing: this was back when Land was mainly known for his strange writings on AI and philosophy, before his neoreactionary turn.
  11. The piece says that Scott compared “some feminists” to Voldemort.  It didn’t explain what it took for certain specific feminists (like Amanda Marcotte) to prompt that comparison, which might have changed the coloration. (Another thing that would’ve complicated the picture: the rationalist community’s legendary openness to alternative gender identities and sexualities, before such openness became mainstream.)
  12. Speaking of feminists—yeah, I’m a minor part of the article.  One of the few things mentioned about me is that I’ve stayed in a rationalist group house.  (If you must know: for like two nights, when I was in the Bay Area, with my wife and kids. We appreciated the hospitality!) The piece also says that I was “turned off by the more rigid and contrarian beliefs of the Rationalists.” It’s true that I’ve disagreed with many beliefs espoused by rationalists, but not because they were contrarian, or because I found them noticeably more “rigid” than most beliefs—only because I thought they were mistaken!
  13. The piece describes Eliezer Yudkowsky as a “polemicist and self-described AI researcher.”  It’s true that Eliezer opines about AI despite a lack of conventional credentials in that field, and it’s also true that the typical NYT reader might find him to be comically self-aggrandizing.  But had the piece mentioned the universally recognized AI experts, like Stuart Russell, who credit Yudkowsky for a central role in the AI safety movement, wouldn’t that have changed what readers perceived as the take-home message?
  14. The piece says the following about Shane Legg and Demis Hassabis, the founders of DeepMind: “Like the Rationalists, they believed that AI could end up turning against humanity, and because they held this belief, they felt they were among the only ones who were prepared to build it in a safe way.”  This strikes me as a brilliant way to reframe a concern around AI safety as something vaguely sinister.  Imagine if the following framing had been chosen instead: “Amid Silicon Valley’s mad rush to invest in AI, here are the voices urging that it be done safely and in accord with human welfare…”

Reading this article, some will say that they told me so, or even that I was played for a fool.  And yet I confess that, even with hindsight, I have no idea what I should have done differently, how it would’ve improved the outcome, or what I will do differently the next time. Was there some better, savvier way for me to help out? For each of the 14 points listed above, were I ever tempted to bang my head and say, “dammit, I wish I’d told Cade X, so his story could’ve reflected that perspective”—well, the truth of the matter is that I did tell him X! It’s just that I don’t get to decide which X’s make the final cut, or which ideological filter they’re passed through first.

On reflection, then, I’ll continue to talk to journalists, whenever I have time, whenever I think I might know something that might improve their story. I’ll continue to rank bend-over-backwards openness and honesty among my most fundamental values. Hell, I’d even talk to Cade for a future story, assuming he’ll talk to me after all the disagreements I’ve aired here! [Update: commenters’ counterarguments caused me to change my stance on this; see here.]

For one thing that became apparent from this saga is that I do have a deep difference with the rationalists, one that will likely prevent me from ever truly joining them. Yes, there might be true and important things that one can’t say without risking one’s livelihood. At least, there were in every other time and culture, so it would be shocking if Western culture circa 2021 were the lone exception. But unlike the rationalists, I don’t feel the urge to form walled gardens in which to say those things anyway. I simply accept that, in the age of instantaneous communication, there are no walled gardens: anything you say to a dozen or more people, you might as well broadcast to the planet. Sure, we all have things we say only in the privacy of our homes or to a few friends—a privilege that I expect even the most orthodox would like to preserve, at any rate for themselves. Beyond that, though, my impulse has always been to look for non-obvious truths that can be shared openly, and that might light little candles of understanding in one or two minds—and then to shout those truths from the rooftops under my own name, and learn what I can from whatever sounds come in reply.

So I’m thrilled that Scott Alexander Siskind has now rearranged his life to have the same privilege. Whatever its intentions, I hope today’s New York Times article draws tens of thousands of curious new readers to Scott’s new-yet-old blog, Astral Codex Ten, so they can see for themselves what I and so many others saw in it. I hope Scott continues blogging for decades. And whatever obscene amount of money Substack is now paying Scott, I hope they’ll soon be paying him even more.


Alright, now for the promised quote, from I Can Tolerate Anything Except the Outgroup.

The Red Tribe is most classically typified by conservative political beliefs, strong evangelical religious beliefs, creationism, opposing gay marriage, owning guns, eating steak, drinking Coca-Cola, driving SUVs, watching lots of TV, enjoying American football, getting conspicuously upset about terrorists and commies, marrying early, divorcing early, shouting “USA IS NUMBER ONE!!!”, and listening to country music.

The Blue Tribe is most classically typified by liberal political beliefs, vague agnosticism, supporting gay rights, thinking guns are barbaric, eating arugula, drinking fancy bottled water, driving Priuses, reading lots of books, being highly educated, mocking American football, feeling vaguely like they should like soccer but never really being able to get into it, getting conspicuously upset about sexists and bigots, marrying later, constantly pointing out how much more civilized European countries are than America, and listening to “everything except country”.

(There is a partly-formed attempt to spin off a Grey Tribe typified by libertarian political beliefs, Dawkins-style atheism, vague annoyance that the question of gay rights even comes up, eating paleo, drinking Soylent, calling in rides on Uber, reading lots of blogs, calling American football “sportsball”, getting conspicuously upset about the War on Drugs and the NSA, and listening to filk – but for our current purposes this is a distraction and they can safely be considered part of the Blue Tribe most of the time)

… Even in something as seemingly politically uncharged as going to California Pizza Kitchen or Sushi House for dinner, I’m restricting myself to the set of people who like cute artisanal pizzas or sophisticated foreign foods, which are classically Blue Tribe characteristics.

Once we can see them, it’s too late

Saturday, January 30th, 2021

[updates: here’s the paper, and here’s Robin’s brief response to some of the comments here]

This month Robin Hanson, the famous and controversy-prone George Mason University economics professor who I’ve known since 2004, was visiting economists here in Austin for a few weeks. So, while my fear of covid considerably exceeds Robin’s, I met with him a few times in the mild Texas winter in an outdoor, socially-distanced way. It took only a few minutes for me to remember why I enjoy talking to Robin so much.

See, while I’d been moping around depressed about covid, the vaccine rollout, the insurrection, my inability to focus on work, and a dozen other things, Robin was bubbling with excitement about a brand-new mathematical model he was working on to understand the growth of civilizations across the universe—a model that, Robin said, explained lots of cosmic mysteries in one fell swoop and also made striking predictions. My cloth facemask was, I confess, unable to protect me from Robin’s infectious enthusiasm.

As I listened, I went through the classic stages of reaction to a new Hansonian proposal: first, bemusement over the sheer weirdness of what I was being asked to entertain, as well as Robin’s failure to acknowledge that weirdness in any way whatsoever; then, confusion about the unstated steps in his radically-condensed logic; next, the raising by me of numerous objections (each of which, it turned out, Robin had already thought through at length); finally, the feeling that I must have seen it this way all along, because isn’t it kind of obvious?

Robin has been explaining his model in a sequence of Overcoming Bias posts, and will apparently have a paper out about the model soon (update: the paper is here!). In this post, I’d like to offer my own take on what Robin taught me. Blame for anything I mangle lies with me alone.

To cut to the chase, Robin is trying to explain the famous Fermi Paradox: why, after 60+ years of looking, and despite the periodic excitement around Tabby’s star and ‘Oumuamua and the like, have we not seen a single undisputed sign of an extraterrestrial civilization? Why all this nothing, even though the observable universe is vast, even though (as we now know) organic molecules and planets in Goldilocks zones are everywhere, and even though there have been billions of years for aliens someplace to get a technological head start on us, expanding across a galaxy to the point where they’re easily seen?

Traditional answers to this mystery include: maybe the extraterrestrials quickly annihilate themselves in nuclear wars or environmental cataclysms, just like we soon will; maybe the extraterrestrials don’t want to be found (whether out of self-defense or a cosmic Prime Directive); maybe they spend all their time playing video games. Crucially, though, all answers of that sort founder against the realization that, given a million alien civilizations, each perhaps more different from the others than kangaroos are from squid, it would only take one, spreading across a billion light-years and transforming everything to its liking, for us to have noticed it.

Robin’s answer to the puzzle is as simple as it is terrifying. Such civilizations might well exist, he says, but if so, by the time we noticed one, it would already be nearly too late. Robin proposes, plausibly I think, that if you give a technological civilization 10 million or so years—i.e., an eyeblink on cosmological timescales—then either

  1. the civilization wipes itself out, or else
  2. it reaches some relatively quiet steady state, or else
  3. if it’s serious about spreading widely, then it “maxes out” the technology with which to do so, approaching the limits set by physical law.

In cases 1 or 2, the civilization will of course be hard for us to detect, unless it happens to be close by. But what about case 3? There, Robin says, the “civilization” should look from the outside like a sphere expanding at nearly the speed of light, transforming everything in its path.

Now think about it: when could we, on earth, detect such a sphere with our telescopes? Only when the sphere’s thin outer shell had reached the earth—perhaps carrying radio signals from the extraterrestrials’ early history, before their rapid expansion started. By that point, though, the expanding sphere itself would be nearly upon us!

What would happen to us once we were inside the sphere? Who knows? The expanding civilization might obliterate us, it might preserve us as zoo animals, it might merge us into its hive-mind, it might do something else that we can’t imagine, but in any case, detecting the civilization would presumably no longer be the relevant concern!

(Of course, one could also wonder what happens when two of these spheres collide: do they fight it out? do they reach some agreement? do they merge? Whatever the answer, though, it doesn’t matter for Robin’s argument.)

On the view described, there’s only a tiny cosmic window in which a SETI program could be expected to succeed: namely, when the thin surface of the first of these expanding bubbles has just hit us, and when that surface hasn’t yet passed us by. So, given our “selection bias”—meaning, the fact that we apparently haven’t yet been swallowed up by one of the bubbles—it’s no surprise if we don’t right now happen to find ourselves in the tiny detection window!
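
To put a number on how tiny that window is, here's a toy back-of-envelope (my own illustration, not a calculation from Robin's paper): consider a single bubble born a distance d away at t = 0, expanding toward us at speed v, in units where c = 1.

```python
# Toy model (my own, not from Robin's paper): a bubble is born a
# distance d away at t = 0 and expands toward us at speed v, in units
# where c = 1.  Radio signals from the civilization's pre-expansion
# history travel at speed 1.
def detection_window_fraction(v):
    """Fraction of the bubble's travel time during which we could
    detect its signals without yet having been engulfed."""
    # Light from the birth event reaches us at t = d; the bubble front
    # reaches us at t = d / v.  The detection window d/v - d is thus a
    # fraction (1 - v) of the bubble's total travel time d / v.
    return 1.0 - v

for v in (0.5, 0.9, 0.99, 0.999):
    print(f"v = {v} c: window = {detection_window_fraction(v):.1%} of travel time")
```

As v approaches c, the window shrinks toward zero, which is the quantitative heart of the "by the time we noticed it, it would already be nearly upon us" claim.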

This basic proposal, it turns out, is not original to Robin. Indeed, an Overcoming Bias reader named Daniel X. Varga pointed out to Robin that he (Daniel) had proposed the same idea right here—in a Shtetl-Optimized comment thread—back in 2008! I must have read Daniel Varga’s comment then, but (embarrassingly) it didn’t make enough of an impression for me to remember it. I probably thought the same thing you’re probably thinking while reading this post:

“Sure, whatever. This is an amusing speculation that could make for a fun science-fiction story. Alas, like with virtually every story about extraterrestrials, there’s no good reason to favor this over a hundred other stories that a fertile imagination could just as easily spin. Who the hell knows?”

This is where Robin claims to take things further. Robin would say that he takes them further by developing a mathematical model, and fitting the parameters of the model to the known facts of cosmic history. Read Overcoming Bias, or Robin’s forthcoming paper, if you want to know the details of his model. Personally, I confess I’m less interested in those details than I am in the qualitative points, which (unless I’m mistaken) are easy enough to explain in words.

The key realization is this: when we contemplate the Fermi Paradox, we know more than the mere fact that we look and look and we don’t see any aliens. There are other relevant data points to fit, having to do with the one sample of a technological civilization that we do have.

For starters, there’s the fact that life on earth has been evolving for at least ~3.5 billion years—for most of the time the earth has existed—but life has a mere billion more years to go, until the expanding sun boils away the oceans and makes the earth barely habitable. In other words, at least on this planet, we’re already relatively close to the end. Why should that be?

It’s an excellent fit, Robin says, to a model wherein there are a few incredibly difficult, improbable steps along the way to a technological civilization like ours—steps that might include the origin of life, of multicellular life, of consciousness, of language, of something else—and wherein, having achieved some step, evolution basically just does a random search until it either stumbles onto the next step or else runs out of time.

Of course, given that we’re here to talk about it, we necessarily find ourselves on a planet where all the steps necessary for blog-capable life happen to have succeeded. There might be vastly more planets where evolution got stuck on some earlier step.

But here’s the interesting part: conditioned on all the steps having succeeded, we should find ourselves near the end of the useful lifetime of our planet’s star—simply because the more time is available on a given planet, the better the odds that all the steps get completed there. I.e., look around the universe and you should find that, on most of the planets where evolution achieves all the steps, it nearly runs out the planet’s clock in doing so. Also, as we look back, we should find the hard steps roughly evenly spaced out, with each one having taken a good fraction of the whole available time. All this is an excellent match for what we see.
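
It's easy to see this conditioning effect in a quick simulation. The sketch below is my own toy version of the hard-steps picture, not Robin's actual model: each step takes an exponentially distributed time whose mean exceeds the habitable window, and we keep only the rare histories in which every step beats the deadline.

```python
import random

# Toy hard-steps simulation (my own illustration, not Robin's model):
# each of k hard steps takes an Exponential(mean=M) time with M > T,
# where T is the planet's habitable window.  We condition on the rare
# histories in which all k steps finish before the deadline T.
def hard_steps_conditioned(k=3, T=1.0, M=3.0, trials=400_000, seed=0):
    rng = random.Random(seed)
    successes = []
    for _ in range(trials):
        t, times = 0.0, []
        for _ in range(k):
            t += rng.expovariate(1.0 / M)   # waiting time for the next hard step
            times.append(t)
        if t <= T:                          # all steps beat the deadline
            successes.append(times)
    n = len(successes)
    # average completion time of each step, over the successful histories
    return [sum(s[i] for s in successes) / n for i in range(k)]

avgs = hard_steps_conditioned()
# Conditioned on success, the steps come out roughly evenly spaced
# (near T/4, T/2, 3T/4 here), with the last one close to the deadline.
print(avgs)
```

The qualitative lesson survives any reasonable choice of parameters: however improbable the steps, the successful histories look like evenly spaced milestones that barely squeak in before the clock runs out.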

OK, but it leads to a second puzzle. Life on earth is at least ~3.5 billion years old, while the observable universe is ~13.7 billion years old. Forget for a moment about the oft-stressed enormity of these two timescales and concentrate on their ratio, which is merely ~4. Life on earth stretches a full quarter of the way back in time to the Big Bang. Even as an adolescent, I remember finding that striking, and not at all what I would’ve guessed a priori. It seemed like obviously a clue to something, if I could only figure out what.

The puzzle is compounded once you realize that, even though the sun will boil the oceans in a billion years (and then die in a few billion more), other stars, primarily dwarf stars, will continue shining brightly for trillions more years. Granted, the dwarf stars don’t seem quite as hospitable to life as sun-like stars, but they do seem somewhat hospitable, and there will be lots of them—indeed, many more of them than there are sun-like stars. And they’ll last orders of magnitude longer.

To sum up, our temporal position relative to the lifetime of the sun makes it look as though life on earth was just a lucky draw from a gigantic cosmic Poisson process. By contrast, our position relative to the lifetime of all the stars makes it look as though we arrived crazily, freakishly early—not at all what you’d expect under a random model. So what gives?

Robin contends that all of these facts are explained under his bubble scenario. If we’re to have an experience remotely like the human one, he says, then we have to be relatively close to the beginning of time—since hundreds of billions of years from now, the universe will likely be dominated by near-light-speed expanding spheres of intelligence, and a little upstart civilization like ours would no longer stand a chance. I.e., even though our existence is down to some lucky accidents, and even though those same accidents probably recur throughout the cosmos, we shouldn’t yet see any of the other accidents, since if we did see them, it would already be nearly too late for us.

Robin admits that his account leaves a huge question open: namely, why should our experience have been a “merely human,” “pre-bubble” experience at all? If you buy that these expanding bubbles are coming, it seems likely that there will be trillions of times more sentient experiences inside them than outside. So experiences like ours would be rare and anomalous—like finding yourself at the dawn of human history, with Hammurabi et al., and realizing that almost every interesting thing that will ever happen is still in the future. So Robin simply takes as a brute fact that our experience is “earth-like” or “human-like”; he then tries to explain the other observations from that starting point.

Notice that, in Robin’s scenario, the present epoch of the universe is extremely special: it’s when civilizations are just forming, when perhaps a few of them will achieve technological liftoff, but before one or more of the civilizations has remade the whole of creation for its own purposes. Now is the time when the early intelligent beings like us can still look out and see quadrillions of stars shining to no apparent purpose, just wasting all that nuclear fuel in a near-empty cosmos, waiting for someone to come along and put the energy to good use. In that respect, we’re sort of like the Maoris having just landed in New Zealand, or Bill Gates surveying the microcomputer software industry in 1975. We’re ridiculously lucky. The situation is way out of equilibrium. The golden opportunity in front of us can’t possibly last forever.

If we accept the above, then a major question I had was the role of cosmology. In 1998, astronomers discovered that the present cosmological epoch is special for a completely different reason than the one Robin talks about. Namely, right now is when matter and dark energy contribute roughly similarly to the universe’s energy budget, with ~30% the former and ~70% the latter. Billions of years hence, the universe will become more and more dominated by dark energy. Our observable region will get sparser and sparser, as the dark energy pushes the galaxies further and further away from each other and from us, with more and more galaxies receding past the horizon where we could receive signals from them at the speed of light. (Which means, in particular, that if you want to visit a galaxy a few billion light-years from here, you’d better start out while you still can!)
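
For concreteness, the "roughly similar contributions" claim is easy to check from the standard flat-ΛCDM numbers (Ωm ≈ 0.3, ΩΛ ≈ 0.7); the arithmetic below is my own back-of-envelope, not from any of the papers discussed here.

```python
# Back-of-envelope check (my own arithmetic) using standard flat LCDM:
# matter density dilutes as a**-3 as the scale factor a grows, while
# dark energy density stays constant.
def matter_fraction(a, omega_m=0.3, omega_l=0.7):
    """Matter's share of the energy budget at scale factor a (a = 1 today)."""
    rho_m = omega_m * a**-3      # matter dilutes with volume
    return rho_m / (rho_m + omega_l)

# Matter-dark-energy equality: omega_m * a**-3 = omega_l
a_eq = (0.3 / 0.7) ** (1.0 / 3.0)
print(f"equality at scale factor a ~ {a_eq:.2f} (a = 1 today)")
print(f"matter fraction today:      {matter_fraction(1.0):.0%}")
print(f"matter fraction at a = 10:  {matter_fraction(10.0):.2%}")
```

Equality falls at a ≈ 0.75, i.e. only a few billion years ago, and by the time the universe is ten times its current size, matter's share is down to a few hundredths of a percent—so "right now" really is the brief crossover era.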

So here’s my question: is it just a coincidence that the time—right now—when the universe is “there for the taking,” potentially poised between competing spacefaring civilizations, is also the time when it’s poised between matter and dark energy? Note that, in 2007, Bousso et al. tried to give a sophisticated anthropic argument for the value of the cosmological constant Λ, which measures the density of dark energy, and hence the eventual size of the observable universe. See here for my blog post on what they did (“The array size of the universe”). Long story short, for reasons that I explain in the post, it turns out to be essential to their anthropic explanation for Λ that civilizations flourish only (or mainly) in the present epoch, rather than trillions of years in the future. If we had to count civilizations that far into the future, then the calculations would favor values of Λ much smaller than what we actually observe. This, of course, seems to dovetail nicely with Robin’s account.

Let me end with some “practical” consequences of Robin’s scenario, supposing as usual that we take it seriously. The most immediate consequence is that the prospects for SETI are dimmer than you might’ve thought before you’d internalized all this. (Even after having internalized it, I’d still like at least an order of magnitude more resources devoted to SETI than what our civilization currently spares. Robin’s assumptions might be wrong!)

But a second consequence is that, if we want human-originated sentience to spread across the universe, then the sooner we get started the better! Just like Bill Gates in 1975, we should expect that there will soon be competitors out there. Indeed, there are likely competitors out there “already” (where “already” means, let’s say, in the rest frame of the cosmic microwave background)—it’s just that the light from them hasn’t yet reached us. So if we want to determine our own cosmic destiny, rather than having post-singularity extraterrestrials determine it for us, then it’s way past time to get our act together as a species. We might have only a few hundred million more years to do so.

Update: For more discussion of this post, see the SSC Reddit thread. I especially liked a beautiful comment by “Njordsier,” which fills in some important context for the arguments in this post:

Suppose you’re an alien anthropologist that sent a probe to Earth a million years ago, and that probe can send back one high-resolution image of the Earth every hundred years. You’d barely notice humans at first, though they’re there. Then, circa 10,000 years ago (99% of the way into the stream) you begin to see plots of land turned into farms. Houses, then cities, first in a few isolated places in river valleys, then exploding across five or six continents. Walls, roads, aqueducts, castles, fortresses. Four frames before the end of the stream, the collapse of the population on two of the continents as invaders from another continent bring disease. At T-minus three frames, a sudden appearance of farmland and cities on the coasts of those continents. At T-minus two frames, half the continent. At the second to last frame, a roaring interconnected network of roads, cities, farms, including skyscrapers in the cities that were just tiny villages three frames ago. And in the last frame, nearly 80 percent of all wilderness converted to some kind of artifice, and the sky is streaked with the trails of flying machines all over the world.

Civilizations rose and fell, cultures evolved and clashed, and great and terrible men and women performed awesome deeds. But what the alien anthropologist sees is a consistent, rapid, exponential explosion of a species bulldozing everything in its path.

That’s what we’re doing when we talk about the far future, or about hypothetical expansionist aliens, on long time scales. We’re zooming out past the level where you can reason about individuals or cultures, but see the strokes of much longer patterns that emerge from that messy, beautiful chaos that is civilization.

Update (Jan. 31): Reading the reactions here, on Hacker News, and elsewhere underscored for me that a lot of people get off Robin’s train well before it’s even left the station. Such people think of extraterrestrial civilizations as things that you either find or, if you haven’t found one, you just speculate or invent stories about. They’re not even in the category of things that you have any serious hope to reason about. For myself, I’d simply observe that trying to reason about matters far beyond current human experience, based on the microscopic shreds of fact available to us (e.g., about the earth’s spatial and temporal position within the universe), has led to some of our species’ embarrassing failures but also to some of its greatest triumphs. Since even the failures tend to be relatively cheap, I feel like we ought to be “venture capitalists” about such efforts to reason beyond our station, encouraging them collegially and mocking them only gently.

To all Trumpists who comment on this blog

Wednesday, January 6th, 2021

The violent insurrection now unfolding in Washington DC is precisely what you called me nuts for warning about since 2016, what you accused me of “Trump Derangement Syndrome” over. Crazy me, huh, always seeing brownshirts around the corner? And you called the other side violent anarchists? This is all your doing. So own it. Wallow in it. May you live the rest of your lives in shame.

Update (Jan. 7): As someone who hasn’t always agreed with BLM’s slogans and tactics, I viewed the stunning passivity of the police yesterday against white insurrectionists in the Capitol as one of the strongest arguments imaginable for BLM’s main contentions.