Archive for the ‘Announcements’ Category

Shorties!

Friday, September 30th, 2022

(1) Since I didn’t blog about this before: huge congratulations to David Deutsch, Charles Bennett, Gilles Brassard, and my former MIT colleague Peter Shor, and separately to Dan Spielman, for their well-deserved Breakthrough Prizes! Their contributions are all so epochal, so universally known to all of us in quantum information and theoretical computer science, that there’s little I can write to gild the lily, except to say how much I’ve learned by interacting with all five of them personally. I did enjoy this comment on the Breakthrough Prizes by someone on Twitter: “As long as that loudmouth Scott Aaronson keeps getting ignored, I’ll be happy.”

(2) My former UT colleague Ila Fiete brought to my attention an important scientists’ petition to the White House. The context is that the Biden administration has announced new rules requiring federally-funded research papers to be freely available to the public without delay. This is extremely welcome—in fact, I’ve advocated such a step since I first became aware of the scourge of predatory journals around 2004. But the looming danger is that publishers will just respond by leaning more heavily on the “author pays” model—i.e., hitting up authors or their institutions for thousands of dollars in page fees—and we’ll go from only the credentialed few being able to read papers that aren’t on preprint archives or the like, to only the credentialed few being able to publish them. The petition urges the White House to build, or fund the research community to build, an infrastructure that will make scientific publishing truly open to everyone. I’ve signed it, and I hope you’ll consider signing too.

(3) Bill Gasarch asked me to announce that he, my former MIT colleague Erik Demaine, and Mohammad Hajiaghayi have written a brand-new book entitled Computational Intractability: A Guide to Algorithmic Lower Bounds, and a free draft is available online. It looks excellent, like a Garey & Johnson for the 21st century. Bill and his coauthors are looking for feedback. I was happy to help them by advertising this—after all, it’s not as if Bill’s got his own complexity blog for such things!

(4) Back when Google was still a novelty—maybe 2000 or so—I had my best friend, the now-famous computer security researcher Alex Halderman, over for Rosh Hashanah dinner with my family. Alex and I were talking about how Google evaded the limitations of all the previous decades’ worth of information retrieval systems. One of my relatives, however, misheard “Google” as “kugel” (basically a dense block of noodles held together with egg), and so ended up passing the latter to Alex. “What is this?” Alex asked. Whereupon my uncle deadpanned, “it’s a noodle retrieval system.” Since then, every single Rosh Hashanah dinner, I think about querying the kugel to retrieve the noodles within, and how the desired search result is just the trivial “all of them.”

Win a $250,000 Scott Aaronson Grant for Advanced Precollege STEM Education!

Thursday, September 1st, 2022

Back in January, you might recall, Skype cofounder Jaan Tallinn’s Survival and Flourishing Fund (SFF) was kind enough to earmark $200,000 for me to donate to any charitable organizations of my choice. So I posted a call for proposals on this blog. You “applied” to my “foundation” by simply sending me an email, or leaving a comment on this blog, with a link to your organization’s website and a 1-paragraph explanation of what you wanted the grant for, and then answering any followup questions that I had.

After receiving about 20 awesome proposals in diverse areas, in the end I decided to split the allotment among organizations around the world doing fantastic, badly-needed work in math and science enrichment at the precollege level. These included Canada/USA Mathcamp, AddisCoder, a magnet school in Maine, a math circle in Oregon, a math enrichment program in Ghana, and four others. I chose to focus on advanced precollege STEM education both because I have some actual knowledge and experience there, and because I wanted to make a strong statement about an underfunded cause close to my heart that’s recently suffered unjust attacks.

To quote the immortal Carl Sagan, from shortly before his death:

[C]hildren with special abilities and skills need to be nourished and encouraged. They are a national treasure. Challenging programs for the “gifted” are sometimes decried as “elitism.” Why aren’t intensive practice sessions for varsity football, baseball, and basketball players and interschool competition deemed elitism? After all, only the most gifted athletes participate. There is a self-defeating double standard at work here, nationwide.

Anyway, the thank-you notes from the programs I selected were some of the most gratifying emails I’ve ever received.

But wait, it gets better! After reading about the Scott Aaronson Speculation Grants on this blog, representatives from a large, reputable family foundation contacted me to say that they wanted to be involved too. This foundation, which wishes to remain anonymous at this stage although not to the potential grant recipient, intends to make a single US$250,000 grant in the area of advanced precollege STEM education. They wanted my advice on where their grant should go.

Of course, I could’ve simply picked one of the same wonderful organizations that SFF and I helped in the first round. On reflection, though, I decided that it would be more on the up-and-up to issue a fresh call for proposals.

So: do you run a registered 501(c)(3) nonprofit dedicated to advanced precollege STEM education? If so, email me or leave a comment here by Friday, September 9, telling me a bit about what your organization does and what more it could do with an extra $250K. Include a rough budget, if that will help convince me that you can actually make productive use of that amount and that it won’t just sit in your bank account. Organizations that received a Scott Aaronson Speculation Grant the last time are welcome to reapply; newcomers are also welcome.

I’ll pass up to three finalists along to the funder, which will then make a final decision as to the recipient. The funder will be directly in touch with the potential grantee(s) and will proceed with its intake, review and due diligence process.

We expect to be able to announce a recipient on or around October 24. Can’t wait to see what people come up with!

Busy Beaver Updates: Now Even Busier

Tuesday, August 30th, 2022

Way back in the covid-filled summer of 2020, I wrote a survey article about the ridiculously-rapidly-growing Busy Beaver function. My survey then expanded to nearly twice its original length, with the ideas, observations, and open problems of commenters on this blog. Ever since, I’ve felt a sort of duty to blog later developments in BusyBeaverology as well. It’s like, I’ve built my dam, I’ve built my lodge, I’m here in the pond to stay!

So without further ado:

  • This May, Pavel Kropitz found a machine demonstrating that $$ BB(6) \ge 10^{10^{10^{10^{10^{10^{10^{10^{10^{10^{10^{10^{10^{10^{10}}}}}}}}}}}}}} $$ (a tower of 15 tens)—thereby blasting through his own 2010 record, that BB(6) ≥ 10^36,534. Or, for those tuning in from home: Kropitz constructed a 6-state, 2-symbol, 1-tape Turing machine that runs for at least the above number of steps, when started on an initially blank tape, and then halts. The machine was analyzed and verified by Pascal Michel, the modern keeper of Busy Beaver lore. In my 2020 survey, I’d relayed an open problem posed by my then 7-year-old daughter Lily: namely, what’s the first n such that BB(n) exceeds A(n), the nth value of the Ackermann function (a small code sketch appears after this list)? All that’s been proven is that this n is at least 5 and at most 18. Kropitz and Michel’s discovery doesn’t settle the question—titanic though it is, the new lower bound on BB(6) is still less than A(6) (!!)—but in light of this result, I now strongly conjecture that the crossover happens at either n=6 or n=7. Huge congratulations to Pavel and Pascal!
  • Tristan Stérin and Damien Woods wrote to tell me about a new collaborative initiative they’ve launched called BB Challenge. With the participation of other leading figures in the neither-large-nor-sprawling Busy Beaver world, Tristan and Damien are aiming, not only to pin down the value of BB(5)—proving or disproving the longstanding conjecture that BB(5)=47,176,870—but to do so in a formally verified way, with none of the old ambiguities about which Turing machines have or haven’t been satisfactorily analyzed. In my survey article, I’d passed along a claim that, of all the 5-state machines, only 25 remained to be analyzed, to understand whether or not they run forever—the “default guess” being that they all do, but that proving it for some of them might require fearsomely difficult number theory. With their more formal and cautious approach, Tristan and Damien still count 1.5 million (!) holdout machines, but they hope to cut down that number extremely rapidly. If you’re feeling yourself successfully nerd-sniped, please join the quest and help them out!
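For readers who want to play with Lily’s question, here’s a minimal Python sketch of the two-argument Ackermann–Péter function and its diagonal A(n) = ack(n, n). (The exact definition of A(n) in my survey may differ in small details; the point is only that the function is computable yet grows preposterously fast, so that even A(4) is hopeless to evaluate directly, whereas BB(n) eventually outgrows every computable function.)

```python
from functools import lru_cache
import sys

sys.setrecursionlimit(100_000)

@lru_cache(maxsize=None)
def ack(m, n):
    """Two-argument Ackermann-Peter function."""
    if m == 0:
        return n + 1
    if n == 0:
        return ack(m - 1, 1)
    return ack(m - 1, ack(m, n - 1))

def A(n):
    """Diagonal Ackermann function: A(n) = ack(n, n)."""
    return ack(n, n)

# Feasible only for tiny inputs: A(0)=1, A(1)=3, A(2)=7, A(3)=61.
# A(4) is already an exponential tower of 2s far too large to write down.
for n in range(4):
    print(n, A(n))
```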

A low-tech solution

Tuesday, July 19th, 2022

Thanks so much to everyone who offered help and support as this blog’s comment section endured the weirdest, most motivated and sophisticated troll attack in its 17-year history. For a week, a parade of self-assured commenters showed up to demand that I explain and defend my personal hygiene, private thoughts, sexual preferences, and behavior around female students (and, absurdly, to cajole me into taking my family on a specific Disney cruise ship). In many cases, the troll or trolls appropriated the names and email addresses of real academics, imitating them so convincingly that those academics’ closest colleagues told me they were confident it was really them. And when some trolls finally “outed” themselves, I had no way to know whether that was just another chapter in the trolling campaign. It was enough to precipitate an epistemic crisis, where one actively doubts the authenticity of just about every piece of text.

The irony isn’t lost on me that I’ve endured this just as I’m starting my year-long gig at OpenAI, to think, among other things, about the potential avenues for misuse of Large Language Models like GPT-3, and what theoretical computer science could contribute to mitigating them. To say this episode has given me a more vivid understanding of the risks would be an understatement.

But why didn’t I just block and ignore the trolls immediately? Why did I bother engaging?

At least a hundred people asked some variant of this question, and the answer is this. For most of my professional life, this blog has been my forum, where anyone in the world could show up to raise any issue they wanted, as if we were tunic-wearing philosophers in the Athenian agora. I prided myself on my refusal to take the coward’s way out and ignore anything—even, especially, severe personal criticism. I’d witnessed how Jon Stewart, let’s say, would night after night completely eviscerate George W. Bush, his policies and worldview and way of speaking and justifications and lies, and then Bush would just continue the next day, totally oblivious, never deigning to rebut any of it. And it became a core part of my identity that I’d never be like that. If anyone on earth had a narrative of me where I was an arrogant bigot, a clueless idiot, etc., I’d confront that narrative head-on and refute it—or if I couldn’t, I’d reinvent my whole life. What I’d never do is suffer anyone’s monstrous caricature of me to strut around the Internet unchallenged, as if conceding that only my academic prestige or tenure or power, rather than a reasoned rebuttal, could protect me from the harsh truths that the caricature revealed.

Over the years, of course, I carved out some exceptions: P=NP provers and quantum mechanics deniers enraged that I’d dismissed their world-changing insights. Raving antisemites. Their caricatures of me had no legs in any community I cared about. But if an attack carried the implied backing of the whole modern social-justice movement, of thousands of angry grad students on Twitter, of Slate and Salon and New York Times writers and Wikipedia editors and university DEI offices, then the coward’s way out was closed. The monstrous caricature then loomed directly over me; I could either parry his attacks or die.

With this stance, you might say, the astounding part is not that this blog’s “agora” model eventually broke down, but rather that it survived for so long! I started blogging in October 2005. It took until July 2022 for me to endure a full-scale “social/emotional denial of service attack” (not counting the comment-171 affair). Now that I have, though, it’s obvious even to me that the old way is no longer tenable.

So what’s the solution? Some of you liked the idea of requiring registration with real email addresses—but alas, when I tried to implement that, I found that WordPress’s registration system is a mess and I couldn’t see how to make it work. Others liked the idea of moving to Substack, but others actively hated it, and in any case, even if I moved, I’d still have to figure out a comment policy! Still others liked the idea of an army of volunteer moderators. At least ten people volunteered themselves.

On reflection, the following strikes me as most directly addressing the actual problem. I’m hereby establishing the Shtetl-Optimized Committee of Guardians, or SOCG (same acronym as the computational geometry conference 🙂 ). If you’re interested in joining, shoot me an email, or leave a comment on this post with your (real!) email address. I’ll accept members only if I know them in real life, personally or by reputation, or if they have an honorable history on this blog.

For now, the SOCG’s only job is this: whenever I get a comment that gives me a feeling of unease—because, e.g., it seems trollish or nasty or insincere, it asks a too-personal question, or it challenges me to rebut a hostile caricature of myself—I’ll email the comment to the SOCG and ask what to do. I commit to respecting the verdict of those SOCG members who respond, whenever a clear verdict exists. The verdict could be, e.g., “this seems fine,” “if you won’t be able to resist responding then don’t let this appear,” or “email the commenter first to confirm their identity.” And if I simply need reassurance that the commenter’s view of me is false, I’ll seek it from the SOCG before I seek it from the whole world.

Here’s what SOCG members can expect in return: I continue pouring my heart into this subscription-free, ad-free blog, and I credit you for making it possible—publicly if you’re comfortable with your name being listed, privately if not. I buy you a fancy lunch or dinner if we’re ever in the same town.

Eventually, we might move to a model where the SOCG members can log in to WordPress and directly moderate comments themselves. But let’s try it this way first and see if it works.

Choosing a new comment policy

Tuesday, July 12th, 2022

Update (July 13): I was honored to read this post by my friend Boaz Barak.

Update (July 14): By now, comments on this post allegedly from four CS professors — namely, Josh Alman, Aloni Cohen, Rana Hanocka, and Anna Farzindar — as well as from the graduate student “BA,” have been unmasked as the work of an impersonator (or impersonators).

I’ve been the target of a motivated attack-troll (or multiple trolls, but I now believe just one) who knows about the CS community. This might be the single weirdest thing that’s happened to me in 17 years of blogging, surpassing even the legendary Ricoh printer episode of 2007. It obviously underscores the need for a new, stricter comment policy, which is what this whole post was about.


Yesterday and today, both my work and my enjoyment of the James Webb images were interrupted by an anonymous troll, who used the Shtetl-Optimized comment section to heap libelous abuse on me—derailing an anodyne quantum computing discussion to opine at length about how I’m a disgusting creep who surely, probably, maybe has lewd thoughts about his female students. Unwisely or not, I allowed it all to appear, and replied to all of it. I had a few reasons: I wanted to prove that I’m now strong enough to withstand bullying that might once have driven me to suicide. I wanted, frankly, many readers to come to my defense (thanks to those who did!). I at least wanted readers to see firsthand what I now regularly deal with: the emotional price of maintaining this blog. Most of all, I wanted my feminist, social-justice-supporting readers to either explicitly endorse or (hopefully) explicitly repudiate the unambiguous harassment that was now being gleefully committed in their name.

Then, though, the same commenter upped the ante further, by heaping misogynistic abuse on my wife Dana—while still, ludicrously and incongruously, cloaking themselves in the rhetoric of social justice. Yes: apparently the woke, feminist thing to do is now to rate female computer scientists on their looks.

Let me be blunt: I cannot continue to write Shtetl-Optimized while dealing with regular harassment of me and my family. At the same time, I’m also determined not to “surrender to the terrorists.” So, I’m weighing the following options:

  • Close comments except to commenters who provide a real identity—e.g., a full real name, a matching email address, a website.
  • Move to Substack, and then allow only commenters who’ve signed up.
  • Hire someone to pre-screen comments for me, and delete ones that are abusive or harassing (to me or others) before I even see them. (Any volunteers??)
  • Make the comment sections for readers only, eliminating any expectation that I’ll participate.

One thing that’s clear is that the status quo will not continue. I can’t “just delete” harassing or abusive comments, because the trolls have gotten too good at triggering me, and they will continue to weaponize my openness and my ethic of responding to all possible arguments against me.

So, regular readers: what do you prefer?

Because I couldn’t not post

Friday, June 24th, 2022

In 1973, the US Supreme Court enshrined the right to abortion—considered by me and ~95% of everyone I know to be a basic pillar of modernity—in such a way that the right could be overturned only if its opponents could somehow gain permanent minority rule, and thereby disregard the wills of three-quarters of Americans. So now, half a century later, that’s precisely what they’ve done. Because Ruth Bader Ginsburg didn’t live three more weeks, we’re now faced with a civilizational crisis, with tens of millions of liberals and moderates in the red states now under the authority of a social contract that they never signed. With this backwards leap, Curtis Yarvin’s notion that “Cthulhu only ever swims leftward” stands as decimated by events as any thesis has ever been. I wonder whether Yarvin is happy to have been so thoroughly refuted.

Most obviously for me, the continued viability of Texas as a place for science, for research, for technology companies, is now in severe doubt. Already this year, our 50-member CS department at UT Austin has had faculty members leave, and faculty candidates turn us down, with abortion being the stated reason, and I expect that to accelerate. Just last night my wife, Dana Moshkovitz, presented a proposal at the STOC business meeting to host STOC’2024 at a beautiful family-friendly resort outside Austin. The proposal failed, in part because of the argument that, if a pregnant STOC attendee faced a life-threatening medical condition, Texas doctors might choose to let her die, or the attendee might be charged with murder for having a miscarriage. In other words: Texas (and indeed, half the US) will apparently soon be like Donetsk or North Korea, dangerous for Blue Americans to visit even for just a few days. To my fellow Texans, I say: if you find that hyperbolic, understand that this is how the blue part of the country now sees you. Understand that only a restoration of the previous social contract can reverse it.

Of course, this destruction of everything some of us have tried to build in science in Texas is happening despite the fact that 47-48% of Texans actually vote Democratic. It’s happening despite the fact that, if Blue Americans wanted to stop it, the obvious way to do so would be to move to Austin and Houston (and the other blue enclaves of red states) in droves, and exert their electoral power. In other words, to do precisely what Dana and I did. But can I urge others to do the same with a straight face?

As far as I can tell, the only hope at this point of averting a cold Civil War is if, against all odds, there’s a Democratic landslide in Congress, sufficient to get the right to abortion enshrined into federal law. Given the ways both the House and the Senate are stacked against Democrats, I don’t expect that anytime soon, but I’ll work for it—and will do so even if many of the people I’m working with despise me for other reasons. I will match reader donations to Democratic PACs and Congressional campaigns (not necessarily the same ones, though feel free to advocate for your favorites), announced in the comment section of this post, up to a limit of $10,000.

OpenAI!

Friday, June 17th, 2022

I have some exciting news (for me, anyway). Starting next week, I’ll be going on leave from UT Austin for one year, to work at OpenAI. They’re the creators of the astonishing GPT-3 and DALL-E2, which have not only endlessly entertained me and my kids, but recalibrated my understanding of what, for better and worse, the world is going to look like for the rest of our lives. Working with an amazing team at OpenAI, including Jan Leike, John Schulman, and Ilya Sutskever, my job will be to think about the theoretical foundations of AI safety and alignment. What, if anything, can computational complexity contribute to a principled understanding of how to get an AI to do what we want and not do what we don’t want?

Yeah, I don’t know the answer either. That’s why I’ve got a whole year to try to figure it out! One thing I know for sure, though, is that I’m interested both in the short-term, where new ideas are now quickly testable, and where the misuse of AI for spambots, surveillance, propaganda, and other nefarious purposes is already a major societal concern, and the long-term, where one might worry about what happens once AIs surpass human abilities across nearly every domain. (And all the points in between: we might be in for a long, wild ride.) When you start reading about AI safety, it’s striking how there are two separate communities—one mostly worried about machine learning perpetuating racial and gender biases, and the other mostly worried about superhuman AI turning the planet into goo—who not only don’t work together, but are at each other’s throats, with each accusing the other of totally missing the point. I persist, however, in the possibly-naïve belief that these are merely two extremes along a single continuum of AI worries. By figuring out how to align AI with human values today—constantly confronting our theoretical ideas with reality—we can develop knowledge that will give us a better shot at aligning it with human values tomorrow.

For family reasons, I’ll be doing this work mostly from home, in Texas, though traveling from time to time to OpenAI’s office in San Francisco. I’ll also spend 30% of my time continuing to run the Quantum Information Center at UT Austin and working with my students and postdocs. At the end of the year, I plan to go back to full-time teaching, writing, and thinking about quantum stuff, which remains my main intellectual love in life, even as AI—the field where I started, as a PhD student, before I switched to quantum computing—has been taking over the world in ways that none of us can ignore.

Maybe fittingly, this new direction in my career had its origins here on Shtetl-Optimized. Several commenters, including Max Ra and Matt Putz, asked me point-blank what it would take to induce me to work on AI alignment. Treating it as an amusing hypothetical, I replied that it wasn’t mostly about money for me, and that:

The central thing would be finding an actual potentially-answerable technical question around AI alignment, even just a small one, that piqued my interest and that I felt like I had an unusual angle on. In general, I have an absolutely terrible track record at working on topics because I abstractly feel like I “should” work on them. My entire scientific career has basically just been letting myself get nerd-sniped by one puzzle after the next.

Anyway, Jan Leike at OpenAI saw this exchange and wrote to ask whether I was serious in my interest. Oh shoot! Was I? After intensive conversations with Jan, others at OpenAI, and others in the broader AI safety world, I finally concluded that I was.

I’ve obviously got my work cut out for me, just to catch up to what’s already been done in the field. I’ve actually been in the Bay Area all week, meeting with numerous AI safety people (and, of course, complexity and quantum people), carrying a stack of technical papers on AI safety everywhere I go. I’ve been struck by how, when I talk to AI safety experts, they’re not only not dismissive about the potential relevance of complexity theory, they’re more gung-ho about it than I am! They want to talk about whether, say, IP=PSPACE, or MIP=NEXP, or the PCP theorem could provide key insights about how we could verify the behavior of a powerful AI. (Short answer: maybe, on some level! But, err, more work would need to be done.)

How did this complexitophilic state of affairs come about? That brings me to another wrinkle in the story. Traditionally, students follow in the footsteps of their professors. But in trying to bring complexity theory into AI safety, I’m actually following in the footsteps of my student: Paul Christiano, one of the greatest undergrads I worked with in my nine years at MIT, the student whose course project turned into the Aaronson-Christiano quantum money paper. After MIT, Paul did a PhD in quantum computing at Berkeley, with my own former adviser Umesh Vazirani, while also working part-time on AI safety. Paul then left quantum computing to work on AI safety full-time—indeed, along with others such as Dario Amodei, he helped start the safety group at OpenAI. Paul has since left to found his own AI safety organization, the Alignment Research Center (ARC), although he remains on good terms with the OpenAI folks. Paul is largely responsible for bringing complexity theory intuitions and analogies into AI safety—for example, through the “AI safety via debate” paper and the Iterated Amplification paper. I’m grateful for Paul’s guidance and encouragement—as well as that of the others now working in this intersection, like Geoffrey Irving and Elizabeth Barnes—as I start this new chapter.

So, what projects will I actually work on at OpenAI? Yeah, I’ve been spending the past week trying to figure that out. I still don’t know, but a few possibilities have emerged. First, I might work out a general theory of sample complexity and so forth for learning in dangerous environments—i.e., learning where making the wrong query might kill you. Second, I might work on explainability and interpretability for machine learning: given a deep network that produced a particular output, what do we even mean by an “explanation” for “why” it produced that output? What can we say about the computational complexity of finding that explanation? Third, I might work on the ability of weaker agents to verify the behavior of stronger ones. Of course, if P≠NP, then the gap between the difficulty of solving a problem and the difficulty of recognizing a solution can sometimes be enormous. And indeed, even in empirical machine learning, there’s typically a gap between the difficulty of generating objects (say, cat pictures) and the difficulty of discriminating between them and other objects, the latter being easier. But this gap typically isn’t exponential, as is conjectured for NP-complete problems: it’s much smaller than that. And counterintuitively, we can then turn around and use the generators to improve the discriminators. How can we understand this abstractly? Are there model scenarios in complexity theory where we can prove that something similar happens? How far can we amplify the generator/discriminator gap—for example, by using interactive protocols, or debates between competing AIs?
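To make the solving-versus-verifying asymmetry concrete, here’s a toy illustration of my own (nothing to do with any specific OpenAI project), using the NP-complete Subset Sum problem: checking a proposed solution takes one linear-time pass, while the naive way to find a solution searches exponentially many candidates.

```python
from itertools import combinations

def verify(nums, target, indices):
    # Easy direction: one linear-time pass over the proposed certificate.
    return indices is not None and sum(nums[i] for i in indices) == target

def solve(nums, target):
    # Hard direction (naively): try all 2^len(nums) subsets.
    for r in range(len(nums) + 1):
        for indices in combinations(range(len(nums)), r):
            if verify(nums, target, indices):
                return indices
    return None

nums, target = [3, 34, 4, 12, 5, 2], 9
solution = solve(nums, target)            # e.g., indices (2, 4), since 4 + 5 = 9
print(solution, verify(nums, target, solution))
```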

OpenAI, of course, has the word “open” right in its name, and a founding mission “to ensure that artificial general intelligence benefits all of humanity.” But it’s also a for-profit enterprise, with investors and paying customers and serious competitors. So throughout the year, don’t expect me to share any proprietary information—that’s not my interest anyway, even if I hadn’t signed an NDA. But do expect me to blog my general thoughts about AI safety as they develop, and to solicit feedback from readers.

In the past, I’ve often been skeptical about the prospects for superintelligent AI becoming self-aware and destroying the world anytime soon (see, for example, my 2008 post The Singularity Is Far). While I was aware since 2005 or so of the AI-risk community; and of its leader and prophet, Eliezer Yudkowsky; and of Eliezer’s exhortations for people to drop everything else they’re doing and work on AI risk, as the biggest issue facing humanity, I … kept the whole thing at arm’s length. Even supposing I agreed that this was a huge thing to worry about, I asked, what on earth do you want me to do about it today? We know so little about a future superintelligent AI and how it would behave that any actions we took today would likely be useless or counterproductive.

Over the past 15 years, though, my and Eliezer’s views underwent a dramatic and ironic reversal. If you read Eliezer’s “litany of doom” from two weeks ago, you’ll see that he’s now resigned and fatalistic: because his early warnings weren’t heeded, he argues, humanity is almost certainly doomed and an unaligned AI will soon destroy the world. He says that there are basically no promising directions in AI safety research: for any alignment strategy anyone points out, Eliezer can trivially refute it by explaining how (e.g.) the AI would be wise to the plan, and would pretend to go along with whatever we wanted from it while secretly plotting against us.

The weird part is, just as Eliezer became more and more pessimistic about the prospects for getting anywhere on AI alignment, I’ve become more and more optimistic. Part of my optimism is because people like Paul Christiano have laid foundations for a meaty mathematical theory: much like the Web (or quantum computing theory) in 1992, it’s still in a ridiculously primitive stage, but even my limited imagination now suffices to see how much more could be built there. An even greater part of my optimism is because we now live in a world with GPT-3, DALL-E2, and other systems that, while they clearly aren’t AGIs, are powerful enough that worrying about AGIs has come to seem more like prudence than like science fiction. And we can finally test our intuitions against the realities of these systems, which (outside of mathematics) is pretty much the only way human beings have ever succeeded at anything.

I didn’t predict that machine learning models this impressive would exist by 2022. Most of you probably didn’t predict it. For godsakes, Eliezer Yudkowsky didn’t predict it. But it’s happened. And to my mind, one of the defining virtues of science is that, when empirical reality gives you a clear shock, you update and adapt, rather than expending your intelligence to come up with clever reasons why it doesn’t matter or doesn’t count.

Anyway, so that’s the plan! If I can figure out a way to save the galaxy, I will, but I’ve set my goals slightly lower, at learning some new things and doing some interesting research and writing some papers about it and enjoying a break from teaching. Wish me a non-negligible success probability!


Update (June 18): To respond to a couple criticisms that I’ve seen elsewhere on social media…

Can the rationalists sneer at me for waiting to get involved with this subject until it had become sufficiently “respectable,” “mainstream,” and “high-status”? I suppose they can, if that’s their inclination. I suppose I should be grateful that so many of them chose to respond instead with messages of congratulations and encouragement. Yes, I plead guilty to keeping this subject at arm’s length until I could point to GPT-3 and DALL-E2 and the other dramatic advances of the past few years to justify the reality of the topic to anyone who might criticize me. It feels internally like I had principled reasons for this: I can think of almost no examples of research programs that succeeded over decades even in the teeth of opposition from the scientific mainstream. If so, then arguably the best time to get involved with a “fringe” scientific topic is when and only when you can foresee a path to it becoming the scientific mainstream. At any rate, that’s what I did with quantum computing, as a teenager in the mid-1990s. It’s what many scientists of the 1930s did with the prospect of nuclear chain reactions. And if I’d optimized for getting the right answer earlier, I might’ve had to weaken the filters and let in a bunch of dubious worries that would’ve paralyzed me. But I admit the possibility of self-serving bias here.

Should you worry that OpenAI is just hiring me to be able to say “look, we have Scott Aaronson working on the problem,” rather than actually caring about what its safety researchers come up with? I mean, I can’t prove that you shouldn’t worry about that. In the end, whatever work I do on the topic will have to speak for itself. For whatever it’s worth, though, I was impressed by the OpenAI folks’ detailed, open-ended engagement with these questions when I met them—sort of like how it might look if they actually believed what they said about wanting to get this right for the world. I wouldn’t have gotten involved otherwise.

An understandable failing?

Sunday, May 29th, 2022

I hereby precommit that this will be my last post, for a long time, around the twin themes of (1) the horribleness in the United States and the world, and (2) my desperate attempts to reason with various online commenters who hold me personally complicit in all this horribleness. I should really focus my creativity more on actually fixing the world’s horribleness, than on seeking out every random social-media mudslinger who blames me for it, shouldn’t I? Still, though, isn’t undue obsession with the latter a pretty ordinary human failing, a pretty understandable one?

So anyway, if you’re one of the thousands of readers who come here simply to learn more about quantum computing and computational complexity, rather than to try to provoke me into mounting a public defense of my own existence (which defense will then, ironically but inevitably, stimulate even more attacks that need to be defended against) … well, either scroll down to the very end of this post, or wait for the next post.


Thanks so much to all my readers who donated to Fund Texas Choice. As promised, I’ve personally given them a total of $4,106.28, to match the donations that came in by the deadline. I’d encourage people to continue donating anyway, while for my part I’ll probably run some more charity matching campaigns soon. These things are addictive, like pulling the lever of a slot machine, but where the rewards go to making the world an infinitesimal amount more consistent with your values.


Of course, now there’s a brand-new atrocity to shame my adopted state of Texas before the world. While the Texas government will go to extraordinary lengths to protect unborn children, the world has now witnessed 19 of its born children consigned to gruesome deaths, as the “good guys with guns” waited outside and prevented parents from entering the classrooms where their children were being shot. I have nothing original to add to the global outpourings of rage and grief. Forget about the statistical frequency of these events: I know perfectly well that the risk from car crashes and home accidents is orders-of-magnitude greater. Think about it this way: the United States is now known to the world as “the country that can’t or won’t do anything to stop its children from semi-regularly being gunned down in classrooms,” not even measures that virtually every other comparable country on earth has successfully taken. It’s become the symbol of national decline, dysfunction, and failure. If so, then the stakes here could fairly be called existential ones—not because of its direct effects on child life expectancy or GDP or any other index of collective well-being that you can define and measure, but rather, because a country that lacks the will to solve this will be judged by the world, and probably accurately, as lacking the will to solve anything else.


In return for the untold thousands of hours I’ve poured into this blog, which has never once had advertising or asked for subscriptions, my reward has been years of vilification by sneerers and trolls. Some of the haters even compare me to Elliot Rodger and other aggrieved mass shooters. And I mean: yes, it’s true that I was bullied and miserable for years. It’s true that Elliot Rodger, Salvador Ramos (the Uvalde shooter), and most other mass shooters were also bullied and miserable for years. But, Scott-haters, if we’re being intellectually honest about this, we might say that the similarities between the mass shooter story and the Scott Aaronson story end at a certain point not very long after that. We might say: it’s not just that Aaronson didn’t respond by hurting anybody—rather, it’s that his response loudly affirmed the values of the Enlightenment, meaning like, the whole package, from individual autonomy to science and reason to the rejection of sexism and racism to everything in between. Affirmed it in a manner that’s not secretly about popularity (demonstrably so, because it doesn’t get popularity), affirmed it via self-questioning methods intellectually honest enough that they’d probably still have converged on the right answer even in situations where it’s now obvious that almost everyone around you would’ve been converging on the wrong answer, like (say) Nazi Germany or the antebellum South.

I’ve been to the valley of darkness. While there, I decided that the only “revenge” against the bullies that was possible or desirable was to do something with my life, to achieve something in science that at least some bullies might envy, while also starting a loving family and giving more than most to help strangers on the Internet and whatever good cause comes to my attention and so on. And after 25 years of effort, some people might say I’ve sort of achieved the “revenge” as I’d then defined it. And they might further say: if you could get every school shooter to redefine “revenge” as “becoming another Scott Aaronson,” that would be, you know, like, a step upwards. An improvement.


And let this be the final word on the matter that I ever utter in all my days, to the thousands of SneerClubbers and Twitter randos who pursue this particular line of attack against Scott Aaronson (yes, we do mean the thousands—which means it both feels to its recipient like the entire earth and yet is actually less than 0.01% of the earth).

We see what Scott did with his life, when subjected for a decade to forms of psychological pressure that are infamous for causing young males to lash out violently. What would you have done with your life?


A couple weeks ago, when the trolling attacks were arriving minute by minute, I toyed with the idea of permanently shutting down this blog. What’s the point? I asked myself. Back in 2005, the open Internet was fun; now it’s a charred battle zone. Why not restrict conversation to my academic colleagues and friends? Haven’t I done enough for a public that gives me so much grief? I was dissuaded by many messages of support from loyal readers. Thank you so much.


If anyone needs something to cheer them up, you should really watch Prehistoric Planet, narrated by an excellent, 96-year-old David Attenborough. Maybe 35 years from now, people will believe dinosaurs looked or acted somewhat differently from these portrayals, just as beliefs have shifted since I was a kid. On the other hand, if you literally took a time machine to the Late Cretaceous and started filming, you couldn’t get a result that seemed more realistic, let’s say to a documentary-watching child, than these CGI dinosaurs on their CGI planet seem. So, in the sense of passing that child’s Turing Test, you might argue, the problem of bringing back the dinosaurs has now been solved.

If you … err … really want to be cheered up, you can follow up with Dinosaur Apocalypse, also narrated by Attenborough, where you can (again, as if you were there) watch the dinosaurs being drowned and burned alive in their billions when the asteroid hits. We’d still be scurrying under rocks, were it not for that lucky event that only a monster could’ve called lucky at the time.


Several people asked me to comment on the recent savage investor review of the quantum computing startup IonQ. The review amusingly mixed together every imaginable line of criticism, with every imaginable degree of reasonableness from 0% to 100%. Like, quantum computing is impossible even in theory, and (in the very next sentence) other companies are much closer to realizing quantum computing than IonQ is. See also IonQ’s response to the criticism, as well as this post by the indefatigable Gil Kalai.

Is it, err, OK if I sit this one out for now? There’s probably, like, actually an already-existing machine learning model where, if you trained it on all of my previous quantum computing posts, it would know exactly what to say about this.

An update on the campaign to defend serious math education in California

Tuesday, April 26th, 2022

Update (April 27): Boaz Barak—Harvard CS professor, longtime friend-of-the-blog, and coauthor of my previous guest post on this topic—has just written an awesome FAQ, providing his personal answers to the most common questions about what I called our “campaign to defend serious math education.” It directly addresses several issues that have already come up in the comments. Check it out!


As you might remember, last December I hosted a guest post about the “California Mathematics Framework” (CMF), which was set to cause radical changes to precollege math in California—e.g., eliminating 8th-grade algebra and making it nearly impossible to take AP Calculus. I linked to an open letter setting out my and my colleagues’ concerns about the CMF. That letter went on to receive more than 1700 signatures from STEM experts in industry and academia from around the US, including recipients of the Nobel Prize, Fields Medal, and Turing Award, as well as a lot of support from college-level instructors in California. 

Following widespread pushback, a new version of the CMF appeared in mid-March. I and others are gratified that the new version significantly softens the opposition to acceleration in high school math and to calculus as a central part of mathematics.  Nonetheless, we’re still concerned that the new version promotes a narrative about data science that’s a recipe for cutting kids off from any chance at earning a 4-year college degree in STEM fields (including, ironically, in data science itself).

To that end, some of my Californian colleagues have issued a new statement today on behalf of academic staff at 4-year colleges in California, aimed at clearing away the fog on how mathematics is related to data science. I strongly encourage my readers on the academic staff at 4-year colleges in California to sign this commonsense statement, which has already been signed by over 250 people (including, notably, at least 50 from Stanford, home of two CMF authors).

As a public service announcement, I’d also like to bring to wider awareness Section 18533 of the California Education Code, for submitting written statements to the California State Board of Education (SBE) about errors, objections, and concerns in curricular frameworks such as the CMF.  

The SBE is scheduled to vote on the CMF in mid-July, and their remaining meeting before then is on May 18-19 according to this site, so it is really at the May meeting that concerns need to be aired.  Section 18533 requires submissions to be written (yes, snail mail) and postmarked at least 10 days before the SBE meeting. So to make your voice heard by the SBE, please send your written concern by certified mail (for tracking, but not requiring signature for delivery), no later than Friday May 6, to State Board of Education, c/o Executive Secretary of the State Board of Education, 1430 N Street, Room 5111, Sacramento, CA 95814, complemented by an email submission to sbe@cde.ca.gov and mathframework@cde.ca.gov.

Back

Saturday, April 23rd, 2022

Thanks to everyone who asked whether I’m OK! Yeah, I’ve been living, loving, learning, teaching, worrying, procrastinating, just not blogging.


Last week, Takashi Yamakawa and Mark Zhandry posted a preprint to the arXiv, “Verifiable Quantum Advantage without Structure,” that represents some of the most exciting progress in quantum complexity theory in years. I wish I’d thought of it. tl;dr they show that relative to a random oracle (!), there’s an NP search problem that quantum computers can solve exponentially faster than classical ones. And yet this is 100% consistent with the Aaronson-Ambainis Conjecture!


A student brought my attention to Quantle, a variant of Wordle where you need to guess a true equation involving 1-qubit quantum states and unitary transformations. It’s really well-done! Possibly the best quantum game I’ve seen.
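If you want a taste of the kind of identity the game traffics in, here’s a short check in numpy (my own illustration, not code from Quantle itself): the Hadamard gate maps |0⟩ to the uniform superposition (|0⟩+|1⟩)/√2.

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)                        # |0>
ket1 = np.array([0, 1], dtype=complex)                        # |1>
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)   # Hadamard gate

# A "true equation" of the sort Quantle asks you to guess: H|0> = (|0> + |1>)/sqrt(2)
print(np.allclose(H @ ket0, (ket0 + ket1) / np.sqrt(2)))      # True
```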


Last month, Microsoft announced on the web that it had achieved an experimental breakthrough in topological quantum computing: not quite the creation of a topological qubit, but some of the underlying physics required for that. This followed their needing to retract their previous claim of such a breakthrough, due to the criticisms of Sergey Frolov and others. One imagines that they would’ve taken far greater care this time around. Unfortunately, a research paper doesn’t seem to be available yet. Anyone with further details is welcome to chime in.


Woohoo! Maximum flow, maximum bipartite matching, matrix scaling, and isotonic regression on posets (among many others)—all algorithmic problems that I was familiar with way back in the 1990s—are now solvable in nearly-linear time, thanks to a breakthrough by Chen et al.! Many undergraduate algorithms courses will need to be updated.
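The new almost-linear-time algorithm isn’t something you’ll find in standard libraries yet; but just to recall what problem is being sped up, here’s a tiny maximum-flow instance solved with networkx’s classical routine (assuming you have networkx installed).

```python
import networkx as nx

# A small directed network; each edge carries a 'capacity' attribute.
G = nx.DiGraph()
G.add_edge("s", "a", capacity=4)
G.add_edge("s", "b", capacity=2)
G.add_edge("a", "b", capacity=1)
G.add_edge("a", "t", capacity=3)
G.add_edge("b", "t", capacity=4)

flow_value, flow_dict = nx.maximum_flow(G, "s", "t")
print(flow_value)   # 6
print(flow_dict)    # how much flow each edge carries in an optimal solution
```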


For those interested, Steve Hsu recorded a podcast with me where I talk about quantum complexity theory.