Archive for the ‘Announcements’ Category

To all those who’ve emailed me…

Monday, October 9th, 2023

My wife’s family is OK; thanks very much for asking. But yes, missiles are landing and sirens are going off in Tel Aviv, and people there regularly have to use their buildings’ bomb shelters.

Of course, the main developments are further south, where at least seven hundred Israelis were murdered or kidnapped and thousands were wounded, in what’s being called “Israel’s 9/11” (ironically, I remember 9/11 itself being called America’s Israel experience). Some back-of-the-envelopes: this weekend, the number of Jews murdered for being Jews was about 12% of the number murdered per day in Auschwitz when it operated at max capacity, and nearly as many as were killed in the entire Six-Day War (most of whom were soldiers). It was also about ten 9/11’s, if scaled by the Israeli vs. US population.

As for why this war started, Hamas itself cited, not any desire to improve the miserable conditions of the people under its charge, but a few ultra-Orthodox Jews praying on the Temple Mount — a theological rationale.

This is either the worst intelligence and operational failure in Israeli history or the second-worst, after the Yom Kippur War. It’s impossible not to ask whether the total political dysfunction gripping Israel played a central role, whether Netanyahu’s ministers were much more interested in protecting West Bank settlers than in protecting communities near Gaza, and whether Hamas and Iran knowingly capitalized on all this. But there will be investigations afterward.

For now, both sides of Israel’s internal conflict — the secular modernists and the religious nationalist Bibi-ists — are completely united behind the goal of winning this unasked-for war, with the support of the world’s Jewish diaspora and reasonable people and nations, because what alternative is there?

Added: This Quillette article is good for the historical context that many Western intellectuals refuse to understand. Namely: for everything the Israeli government has done wrong, in Hamas it faces an enemy that descends directly from the Grand Mufti’s fusion of Nazism and Islamism in the 1930s and 1940s, and whose goal since its founding has been explicitly genocidal toward all Jews everywhere on earth, as we saw this weekend, when it carried out the worst massacre of Jews since the Holocaust.


Update: This is really, really not thematically appropriate to this post, but … an interview with me, entitled Scott Aaronson Disentangles Quantum Hype, is now available on Craig Smith’s “Eye on AI” podcast. Give it a listen if you’re interested.

Quantum miscellany

Tuesday, September 19th, 2023
  1. Tomorrow at 1:30pm US Central time, I’ll be doing an online Q&A with Collective[i] Forecast about quantum computing (probably there will also be questions about AI safety). It’s open to all. Hope to see some of you there!
  2. Toby Cubitt of University College London is visiting UT Austin. We’ve been discussing the question: can you produce a QMA witness state using a closed timelike curve? Since QMA⊆PSPACE, and since John Watrous and I proved that closed timelike curves (or anyway, Deutsch’s model of them) let you solve PSPACE problems, clearly a closed timelike curve lets you solve QMA decision problems, but that’s different from producing the actual witness state as the fixed point of a polynomial-time superoperator. Toby has a neat recent result, which has as a corollary that you can produce the ground state of a local Hamiltonian using a CTC, if you have as auxiliary information the ground state energy as well as (a lower bound on) the spectral gap. But you do seem to need that extra information.

    Yesterday I realized there’s also a simpler construction: namely, take an n-qubit state from the CTC, and check whether it’s a valid QMA witness, having used Marriott-Watrous amplification to push the probability of error down to (say) exp(−n²). If the witness is valid, then send it back in time unmodified; otherwise replace it by the maximally mixed state. If valid witnesses exist, then you can check that this sets up a Markov chain whose stationary distribution is almost entirely concentrated on such witnesses. (If no valid witnesses exist, then the stationary distribution is just the maximally mixed state, or exponentially close to it.) One drawback of this construction is that it can only produce a Marriott-Watrous state, rather than the “original” QMA witness state. A toy numerical illustration of this fixed-point dynamic appears after this list.

    Is there a third approach, which overcomes the disadvantages of both mine and Toby’s? I’ll leave that question to my readers!
  3. On the theme of QMA plus weird physics, a wonderful question emerged from a recent group meeting: namely, what’s the power of QMA if we let the verifier make multiple non-collapsing measurements of the same state, as in the “PDQP” model defined by myself, Bouland, Fitzsimons, and Lee? I conjecture that this enhanced QMA goes all the way up to NEXP (Nondeterministic Exponential-Time), by a construction related to the one I used to show that PDQP/qpoly = ALL (i.e., non-collapsing measurements combined with quantum advice let you decide literally all languages), and that also uses the PCP Theorem. I even have some candidate constructions, though I haven’t yet proven their soundness.

    In the past, I would’ve spent more time on such a problem before sharing it. But after giving some students a first crack, I now … just want to know the answer? Inspecting my feelings in my second year of leave at OpenAI, I realized that I still care enormously about quantum complexity theory, but only about getting answers to the questions, barely at all anymore about getting credit for them. Admittedly, it took me 25 years to reach this state of not caring.
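
Since the Markov-chain construction in item 2 is easy to play with numerically, here is a toy illustration in Python. To be clear about what’s hedged: this is an ordinary classical simulation of the channel’s fixed point, not a CTC, and the “witness check” is just a projector onto an arbitrary subspace chosen for the demo, standing in for the exp(−n²)-error Marriott-Watrous verifier.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 3         # qubits (tiny, so everything fits in a dense matrix)
d = 2 ** n    # Hilbert space dimension
k = 2         # dimension of the "valid witness" subspace (arbitrary demo choice)

# Projector P onto a random k-dimensional "witness" subspace.
Q, _ = np.linalg.qr(rng.normal(size=(d, k)) + 1j * rng.normal(size=(d, k)))
P = Q @ Q.conj().T

def channel(rho):
    """One round: measure {P, I-P}; if "valid", send the state back
    unmodified; if "invalid", replace it with the maximally mixed state."""
    p_invalid = np.real(np.trace((np.eye(d) - P) @ rho))
    return P @ rho @ P + p_invalid * np.eye(d) / d   # P is Hermitian

# Iterate from the maximally mixed state: the weight on the witness
# subspace converges to 1, i.e. the channel's fixed point is almost
# entirely supported on valid witnesses.
rho = np.eye(d) / d
for t in range(1, 21):
    rho = channel(rho)
    if t in (1, 5, 10, 20):
        print(f"round {t:2d}: weight on witness subspace = "
              f"{np.real(np.trace(P @ rho)):.6f}")
```

Each round, the weight outside the witness subspace shrinks by a factor of (d−k)/d, so convergence is geometric; in the real construction, the check itself errs with probability exp(−n²), which is exactly why the fixed point is a Marriott-Watrous state rather than the “original” witness.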

On students as therapy

Friday, July 21st, 2023
From left: Ruizhe, Daniel, me, Jiahui, William

This summer, I’m delighted to report, we’ve had four (!) students complete their PhDs in computer science through UT Austin’s Quantum Information Center:

  • Dr. Ruizhe Zhang, student of my wife Dana Moshkovitz, who’s worked on numerous topics in quantum algorithms, optimization, meta-complexity, and machine learning, and who’s continuing to a postdoc at the Simons Institute in Berkeley.
  • Dr. Daniel Liang, student of me, who’s worked on efficient learning of stabilizer and near-stabilizer states, and who’s continuing to a postdoc at Rice University.
  • Dr. Jiahui Liu, student of me, who’s worked on quantum copy-protection, quantum money, and other quantum cryptographic functionalities, and who’s continuing to a postdoc at MIT.
  • Dr. William Kretschmer, student of me, who’s worked on quantum query complexity, oracle separations between quantum complexity classes, pseudorandom quantum states, and much more, and who’s continuing to a postdoc at the Simons Institute in Berkeley.

A fifth, Dr. Yuxuan Zhang, completed his PhD in condensed-matter physics.

We also had two postdocs finish this summer.

All told, I’ve now supervised or co-supervised a total of 12 PhD students and 15 postdocs (see my homepage for the complete list). I’m ridiculously proud of all of them; along with my kids, they’re one of the central joys of my life.

While there are many reasons to want to celebrate this news, I confess that among them is thumbing my nose at haters. This past week, Shtetl-Optimized endured yet another sustained troll attack. One troll claimed that my use of the names “Alice” and “Bob,” in discussing communication protocols, was Eurocentric and offensive, and threatened to contact UT Austin’s DEI office about this matter and whip up dozens of students to protest outside my office. A second troll (or was it the same troll?) accused my Chinese students of being spies and called them a long litany of racial slurs. He also accused me of being paid off by the Chinese government, and of blogging skeptically about quantum speedup claims merely to hide the fact that China will soon build a quantum computer able to break US encryption. These trolls, and others, pursued me relentlessly by both blog comments and email—acting like I was the unreasonable one for ignoring them—until I finally broke down and engaged, calling upon the Shtetl-Optimized Committee of Guardians (whom I thank profusely) for needed moral support.

The fact that there are people so far beyond the reach of reason bothers me much more than it reasonably should. But whenever the toxicity of the Internet gets me down, I try to take solace from the real world. Over the past seven years, I’m proud to have built a research group at UT Austin that’s welcomed students and postdocs and visitors from all over the world, and that’s treated them as individuals on a shared journey to understand reality. I intend to continue in that spirit for as long as I’m able.

Common knowledge and quantum utility

Sunday, July 16th, 2023

Yesterday James Knight did a fun interview with me for his “Philosophical Muser” podcast about Aumann’s agreement theorem and human disagreements more generally. It’s already on YouTube here for those who would like to listen.


Speaking of making things common knowledge, several people asked me to blog about the recent IBM paper in Nature, “Evidence for the utility of quantum computing before fault tolerance.” So, uhh, consider it blogged about now! I was very happy to have the authors speak (by Zoom) in our UT Austin quantum computing group meeting. Much of the discussion focused on whether they were claiming a quantum advantage over classical, and how quantum computing could have “utility” if it doesn’t beat classical. Eventually I understood something like: no, they weren’t claiming a quantum advantage for their physics simulation, but they also hadn’t ruled out the possibility of quantum advantage (i.e., they didn’t know how to reproduce many of their data points in reasonable time on a classical computer), and they’d be happy if quantum advantage turned out to stand, but were also prepared for the possibility that it wouldn’t.

And I also understood: we’re now in an era where we’re going to see more and more of this stuff: call it the “pass the popcorn” era of potential quantum speedups for physical simulation problems. And I’m totally fine with it—as long as people communicate about it honestly, as these authors took pains to.

And then, a few days after our group meeting came three papers refuting the quantum speedup that was never claimed in the first place, by giving efficient classical simulations. And I was fine with that too.

I remember that years ago, probably during one of the interminable debates about D-Wave, Peter Shor mused to me that quantum computers might someday show “practical utility” without “beating” classical computers in any complexity-theoretic sense—if, for example, a single quantum device could easily simulate a thousand different quantum systems, and if the device’s performance on any one of those systems could be matched classically, but only if a team of clever programmers spent a year optimizing for that specific system. I don’t think we’re at that stage yet, and even if we do reach that stage, it hopefully won’t last forever. But I acknowledge the possibility that such a stage might exist and that we might be heading for it.

Life, blogging, and the Busy Beaver function go on

Wednesday, July 5th, 2023

Update (July 11): If you’re interested in blogging about progress in science and technology, check out a new fellowship from The Roots of Progress (h/t Jason Crawford).

Update (July 10): Speaking of the Busy Beaver function, check out this beautifully-produced YouTube video about BB by Duane Rich—a video that Duane says was directly inspired by my survey article! Duane tells me that a Part II is coming as well.

Update (July 7): My UT colleague David Zuckerman asked me to advertise that the FOCS Test of Time Award is urgently seeking nominations. This is your chance to get your favorite papers from 1993, 2003, or 2013 recognized!


This is my first post in well over a month. The obvious excuse is that I’ve been on a monthlong family world tour, which took me first to the Bay Area, then NYC, then Israel, then Orlando for STOC/FCRC as well as Disney World, now the Jersey Shore, and later today, back to the Bay Area, joining Dana there, and leaving the kids with my parents for a few weeks. I’ll return to Austin in mid-August, by which point the “heat dome” will hopefully have dissipated, leaving temperatures a mere 100°F or less.

I’m trying to get back into production mode. Part of that is contemplating the future of this blog, about which I’ll say more in a future post.

For now, just to dip my toes back in, here are three updates about the Busy Beaver function:

  • Johannes Riebel, an undergraduate at the University of Augsburg in Germany, has produced a tour-de-force bachelor’s thesis that contains, among other things, the first careful writeup of Stefan O’Rear’s result from six years ago that the value of BB(748) is independent of the ZFC axioms of set theory. Regular readers might recall that O’Rear’s result substantially improved over the initial result by me and Adam Yedidia, which showed the value of BB(8000) to be independent of ZFC, assuming the consistency of a stronger system. Along the way, Riebel even gets a tiny improvement, showing that BB(745) is independent of ZFC.
  • Pascal Michel, who curated arguably the Internet’s most detailed source of information on the Busy Beaver function, has retired from academia, and his website was at imminent risk of being lost forever. Fortunately, friends intervened and his site is now available at bbchallenge.org.
  • Shawn Ligocki and Pavel Kropitz have continued to find busier beavers, with spectacular progress over the past couple of months for Turing machines with more than 2 symbols per tape square. See here for example.
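
For readers meeting the Busy Beaver function for the first time: following the convention of my survey, BB(n) is the maximum number of steps that any halting n-state, 2-symbol Turing machine makes when started on an all-0 tape. It’s uncomputable in general, but the tiny cases are easy to check directly. Here’s a minimal sketch of a step-limited simulator, run on the standard 2-state champion (for which BB(2) = 6, leaving 4 ones on the tape):

```python
from collections import defaultdict

# Transition table: (state, symbol) -> (write, move, next_state).
# This is the standard 2-state, 2-symbol busy beaver champion.
champion = {
    ("A", 0): (1, +1, "B"), ("A", 1): (1, -1, "B"),
    ("B", 0): (1, -1, "A"), ("B", 1): (1, +1, "HALT"),
}

def run(machine, max_steps=10**6):
    """Run a TM on an all-zero tape; return (steps, ones) if it halts
    within max_steps, else None.  The step cap is what keeps this
    computable: BB itself is exactly the uncomputable 'right' cap."""
    tape = defaultdict(int)
    pos, state = 0, "A"
    for step in range(1, max_steps + 1):
        write, move, state = machine[(state, tape[pos])]
        tape[pos] = write
        pos += move
        if state == "HALT":
            return step, sum(tape.values())
    return None

steps, ones = run(champion)
print(f"halts after {steps} steps with {ones} ones")  # 6 steps, 4 ones
```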

Anyway, more coming soon!

Google’s Sycamore chip: no wormholes, no superfast classical simulation either

Friday, December 2nd, 2022

Update (Dec. 6): I’m having a blast at the Workshop on Spacetime and Quantum Information at the Institute for Advanced Study in Princeton. I’m learning a huge amount from the talks and discussions here—and also simply enjoying being back in Princeton, to see old friends and visit old haunts like the Bent Spoon. Tomorrow I’ll speak about my recent work with Jason Pollack on polynomial-time AdS bulk reconstruction. [New: click here for video of my talk!]

But there’s one thing, relevant to this post, that I can’t let pass without comment. Tonight, David Nirenberg, Director of the IAS and a medieval historian, gave an after-dinner speech to our workshop, centered around how auspicious it was that the workshop was being held a mere week after the momentous announcement of a holographic wormhole on a microchip (!!)—a feat that experts were calling the first-ever laboratory investigation of quantum gravity, and a new frontier for experimental physics itself. Nirenberg asked whether, a century from now, people might look back on the wormhole achievement as today we look back on Eddington’s 1919 eclipse observations providing the evidence for general relativity.

I confess: this was the first time I felt visceral anger, rather than mere bemusement, over this wormhole affair. Before, I had implicitly assumed: no one was actually hoodwinked by this. No one really, literally believed that this little 9-qubit simulation opened up a wormhole, or helped prove the holographic nature of the real universe, or anything like that. I was wrong.

To be clear, I don’t blame Professor Nirenberg at all. If I were a medieval historian, everything he said about the experiment’s historic significance might strike me as perfectly valid inferences from what I’d read in the press. I don’t blame the It from Qubit community—most of which, I can report, was grinding its teeth and turning red in the face right alongside me. I don’t even blame most of the authors of the wormhole paper, such as Daniel Jafferis, who gave a perfectly sober, reasonable, technical talk at the workshop about how he and others managed to compress a simulation of a variant of the SYK model into a mere 9 qubits—a talk that eschewed all claims of historic significance and of literal wormhole creation.

But it’s now clear to me that, between

(1) the It from Qubit community that likes to explore speculative ideas like holographic wormholes, and

(2) the lay news readers who are now under the impression that Google just did one of the greatest physics experiments of all time,

something went terribly wrong—something that risks damaging trust in the scientific process itself. And I think it’s worth reflecting on what we can do to prevent it from happening again.


This is going to be one of the many Shtetl-Optimized posts that I didn’t feel like writing, but was given no choice but to write.

News, social media, and my inbox have been abuzz with two claims about Google’s Sycamore quantum processor, the one that now has 72 superconducting qubits.

The first claim is that Sycamore created a wormhole (!)—a historic feat possible only with a quantum computer. See for example the New York Times and Quanta and Ars Technica and Nature (and of course, the actual paper), as well as Peter Woit’s blog and Chad Orzel’s blog.

The second claim is that Sycamore’s pretensions to quantum supremacy have been refuted. The latter claim is based on this recent preprint by Dorit Aharonov, Xun Gao, Zeph Landau, Yunchao Liu, and Umesh Vazirani. No one—least of all me!—doubts that these authors have proved a strong new technical result, solving a significant open problem in the theory of noisy random circuit sampling. On the other hand, it might be less obvious how to interpret their result and put it in context. See also a YouTube video of Yunchao speaking about the new result at this week’s Simons Institute Quantum Colloquium, as well as the panel discussion afterward, in which Yunchao, Umesh Vazirani, Adam Bouland, Sergio Boixo, and your humble blogger discuss what it means.

On their face, the two claims about Sycamore might seem to be in tension. After all, if Sycamore can’t do anything beyond what a classical computer can do, then how exactly did it bend the topology of spacetime?

I submit that neither claim is true. On the one hand, Sycamore did not “create a wormhole.” On the other hand, it remains pretty hard to simulate with a classical computer, as far as anyone knows. To summarize, then, our knowledge of what Sycamore can and can’t do remains much the same as last week or last month!


Let’s start with the wormhole thing. I can’t really improve over how I put it in Dennis Overbye’s NYT piece:

“The most important thing I’d want New York Times readers to understand is this,” Scott Aaronson, a quantum computing expert at the University of Texas in Austin, wrote in an email. “If this experiment has brought a wormhole into actual physical existence, then a strong case could be made that you, too, bring a wormhole into actual physical existence every time you sketch one with pen and paper.”

More broadly, Overbye’s NYT piece explains with admirable clarity what this experiment did and didn’t do—leaving only the question “wait … if that’s all that’s going on here, then why is it being written up in the NYT??” This is a rare case where, in my opinion, the NYT did a much better job than Quanta, which unequivocally accepted and amplified the “QC creates a wormhole” framing.

Alright, but what’s the actual basis for the “QC creates a wormhole” claim, for those who don’t want to leave this blog to read about it? Well, the authors used 9 of Sycamore’s 72 qubits to do a crude simulation of something called the SYK (Sachdev-Ye-Kitaev) model. SYK has become popular as a toy model for quantum gravity. In particular, it has a holographic dual description, which can indeed involve a spacetime with one or more wormholes. So, they ran a quantum circuit that crudely modeled the SYK dual of a scenario with information sent through a wormhole. They then confirmed that the circuit did what it was supposed to do—i.e., what they’d already classically calculated that it would do.

So, the objection is obvious: if someone simulates a black hole on their classical computer, they don’t say they thereby “created a black hole.” Or if they do, journalists don’t uncritically repeat the claim. Why should the standards be different just because we’re talking about a quantum computer rather than a classical one?

Did we at least learn anything new about SYK wormholes from the simulation? Alas, not really, because 9 qubits take a mere 2^9 = 512 complex numbers to specify their wavefunction, and are therefore trivial to simulate on a laptop. There’s some argument in the paper that, if the simulation were scaled up to (say) 100 qubits, then maybe we would learn something new about SYK. Even then, however, we’d mostly learn about certain corrections that arise because the simulation was being done with “only” n=100 qubits, rather than in the n→∞ limit where SYK is rigorously understood. But while those corrections, arising when n is “neither too large nor too small,” would surely be interesting to specialists, they’d have no obvious bearing on the prospects for creating real physical wormholes in our universe.
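
To make “trivial to simulate on a laptop” concrete: the whole 9-qubit state is a vector of 2^9 = 512 complex amplitudes, and even applying a completely unstructured 512×512 unitary (more general than any actual 9-qubit circuit) is essentially instantaneous. A quick sanity check with NumPy:

```python
import time
import numpy as np

n = 9
d = 2 ** n   # 512: the full dimension of the 9-qubit Hilbert space

rng = np.random.default_rng(42)

# A random unitary on all 9 qubits at once, via QR decomposition --
# strictly more general (and more expensive) than any 9-qubit circuit.
U, _ = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))

psi = np.zeros(d, dtype=complex)
psi[0] = 1.0                      # start in |000000000>

start = time.perf_counter()
psi = U @ psi                     # "simulate" the whole evolution
elapsed = time.perf_counter() - start

print(f"dimension: {d}, norm: {np.linalg.norm(psi):.6f}, "
      f"time: {elapsed * 1e6:.0f} microseconds")
```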

And yet, this is not a sensationalistic misunderstanding invented by journalists. Some prominent quantum gravity theorists themselves—including some of my close friends and collaborators—persist in talking about the simulated SYK wormhole as “actually being” a wormhole. What are they thinking?

Daniel Harlow explained the thinking to me as follows (he stresses that he’s explaining it, not necessarily endorsing it). If you had two entangled quantum computers, one on Earth and the other in the Andromeda galaxy, and if they were both simulating SYK, and if Alice on Earth and Bob in Andromeda both uploaded their own brains into their respective quantum simulations, then it seems possible that the simulated Alice and Bob could have the experience of jumping into a wormhole and meeting each other in the middle. Granted, they couldn’t get a message back out from the wormhole, at least not without “going the long way,” which could happen only at the speed of light—so only simulated-Alice and simulated-Bob themselves could ever test this prediction. Nevertheless, if true, I suppose some would treat it as grounds for regarding a quantum simulation of SYK as “more real” or “more wormholey” than a classical simulation.

Of course, this scenario depends on strong assumptions not merely about quantum gravity, but also about the metaphysics of consciousness! And I’d still prefer to call it a simulated wormhole for simulated people.

For completeness, here’s Harlow’s passage from the NYT article:

Daniel Harlow, a physicist at M.I.T. who was not involved in the experiment, noted that the experiment was based on a model of quantum gravity that was so simple, and unrealistic, that it could just as well have been studied using a pencil and paper.

“So I’d say that this doesn’t teach us anything about quantum gravity that we didn’t already know,” Dr. Harlow wrote in an email. “On the other hand, I think it is exciting as a technical achievement, because if we can’t even do this (and until now we couldn’t), then simulating more interesting quantum gravity theories would CERTAINLY be off the table.” Developing computers big enough to do so might take 10 or 15 years, he added.


Alright, let’s move on to the claim that quantum supremacy has been refuted. What Aharonov et al. actually show in their new work, building on earlier work by Gao and Duan, is that Random Circuit Sampling, with a constant rate of noise per gate and no error-correction, can’t provide a scalable approach to quantum supremacy. Or more precisely: as the number of qubits n goes to infinity, and assuming you’re in the “anti-concentration regime” (which in practice probably means: the depth of your quantum circuit is at least ~log(n)), there’s a classical algorithm to approximately sample the quantum circuit’s output distribution in poly(n) time (albeit, not yet a practical algorithm).

Here’s what’s crucial to understand: this is 100% consistent with what those of us working on quantum supremacy had assumed since at least 2016! We knew that if you tried to scale Random Circuit Sampling to 200 or 500 or 1000 qubits, while you also increased the circuit depth proportionately, the signal-to-noise ratio would become exponentially small, meaning that your quantum speedup would disappear (see the back-of-the-envelope sketch after this list). That’s why, from the very beginning, we targeted the “practical” regime of 50-100 qubits: a regime where

  1. you can still see explicitly that you’re exploiting a 2^50- or 2^100-dimensional Hilbert space for computational advantage, thereby confirming one of the main predictions of quantum computing theory, but
  2. you also have a signal that (as it turned out) is large enough to see with heroic effort.
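
To see, quantitatively, why scaling up without error-correction kills the signal, here’s the crudest possible sketch, with made-up toy numbers rather than Google’s or USTC’s actual parameters: model each 2-qubit gate as depolarizing with probability ε, so the observable signal decays like (1−ε)^(number of gates), and the number of samples needed to resolve it grows like 1/F².

```python
# Crude depolarizing-noise model: signal F ~ (1 - eps)^gates, where a
# depth-d circuit on n qubits has roughly n*d/2 two-qubit gates.
# eps = 0.005 is a made-up per-gate error rate; all numbers are toy numbers.
eps = 0.005

for n, d in [(53, 20), (100, 30), (500, 50), (1000, 60)]:
    gates = n * d // 2
    F = (1 - eps) ** gates
    samples = 1 / F ** 2          # very roughly, samples to resolve the signal
    print(f"n={n:5d} depth={d:3d} gates={gates:6d} "
          f"signal~{F:.2e} samples~{samples:.1e}")
```

The point: at ~50 qubits the signal is tiny but detectable with heroic effort, while at 1000 qubits with proportionately deeper circuits, the required number of samples is astronomically beyond hopeless.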

To their credit, Aharonov et al. explain all this perfectly clearly in their abstract and introduction. I’m just worried that others aren’t reading their paper as carefully as they should be!

So then, what’s the new advance in the Aharonov et al. paper? Well, there had been some hope that circuit depth ~log(n) might be a sweet spot, where an exponential quantum speedup might both exist and survive constant noise, even in the asymptotic limit of n→∞ qubits. Nothing in Google’s or USTC’s actual Random Circuit Sampling experiments depended on that hope, but it would’ve been nice if it were true. What Aharonov et al. have now done is to kill that hope, using powerful techniques involving summing over Feynman paths in the Pauli basis.

Stepping back, what is the current status of quantum supremacy based on Random Circuit Sampling? I would say it’s still standing, but more precariously than I’d like—underscoring the need for new and better quantum supremacy experiments. In more detail, Pan, Chen, and Zhang have shown how to simulate Google’s 53-qubit Sycamore chip classically, using what I estimated to be 100-1000X the electricity cost of running the quantum computer itself (including the dilution refrigerator!). Approaching the problem from a different angle, Gao et al. have given a polynomial-time classical algorithm for spoofing Google’s Linear Cross-Entropy Benchmark (LXEB)—but their algorithm can currently achieve only about 10% of the excess in LXEB that Google’s experiment found.
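
For reference, the benchmark being spoofed is simple to state: given samples x₁,…,x_k and the ability to compute the ideal output probabilities p(x), the linear cross-entropy score is F = 2^n·E[p(xᵢ)] − 1, which is ≈1 for an ideal device (whose outputs follow a Porter-Thomas-shaped distribution) and ≈0 for a uniform-random spoofer. A toy numerical check, with a made-up “ideal” distribution standing in for an actual circuit’s:

```python
import numpy as np

rng = np.random.default_rng(7)

n = 12
d = 2 ** n

# Stand-in for an ideal random circuit's output distribution: the
# Porter-Thomas-like shape you get by normalizing i.i.d. exponentials.
p = rng.exponential(size=d)
p /= p.sum()

def lxeb(samples):
    """Linear cross-entropy benchmark: F = 2^n * E[p(x)] - 1."""
    return d * p[samples].mean() - 1

k = 200_000
ideal = rng.choice(d, size=k, p=p)      # sampler that follows p exactly
uniform = rng.integers(d, size=k)       # spoofer that ignores p entirely

print(f"ideal sampler:   F = {lxeb(ideal):.3f}")    # ~1
print(f"uniform spoofer: F = {lxeb(uniform):.3f}")  # ~0
```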

So, though it’s been under sustained attack from multiple directions these past few years, I’d say that the flag of quantum supremacy yet waves. The Extended Church-Turing Thesis is still on thin ice. The wormhole is still open. Wait … no … that’s not what I meant to write…


Note: With this post, as with future science posts, all off-topic comments will be ruthlessly left in moderation. Yes, even if the comments “create their own reality” full of anger and disappointment that I talked about what I talked about, instead of what the commenter wanted me to talk about. Even if merely refuting the comments would require me to give in and talk about their preferred topics after all. Please stop. This is a wormholes-‘n-supremacy post.

Dismantling Quantum Hype with Tim Nguyen

Tuesday, November 22nd, 2022

Happy Thanksgiving to my American readers! While I enjoy a family holiday-week vacation in exotic Dallas—and yes, I will follow up on my old JFK post by visiting Dealey Plaza—please enjoy the following Thanksgiving victuals:

I recently recorded a 3-hour (!) YouTube video with Timothy Nguyen, host of the Cartesian Cafe. Our episode is entitled Quantum Computing: Dismantling the Hype. In it, I teach a sort of extremely compressed version of my undergraduate Intro to Quantum Information Science course, unburdening myself about whatever Tim prompts me to explain: the basic rules of quantum information, quantum circuits, the quantum black-box model, the Deutsch-Jozsa algorithm, BQP and its relationship to classical complexity classes, and sampling-based quantum supremacy experiments. This is a lot more technical than an average podcast, a lot less technical than an actual course, and hopefully just right for some nonempty subset of readers.

Outside of his podcasting career, some of you might recognize Nguyen as the coauthor, with Theo Polya, of a rebuttal of “Geometric Unity.” The latter is the proposal by the financier, podcaster, and leading “Intellectual Dark Web” figure Eric Weinstein for a unified theory of particle physics. Now, I slightly know Weinstein, and have even found him fascinating, eloquent, and correct about various issues. So, in an addendum to the main video, Nguyen chats with me about his experience critiquing Weinstein’s theory, and also about something where my knowledge is far greater: namely, my 2002 rebuttal of some of the central claims in Stephen Wolfram’s A New Kind of Science, and whether there are any updates to that story twenty years later.

Enjoy!

WINNERS of the Scott Aaronson Grant for Advanced Precollege STEM Education!

Friday, November 18th, 2022

I’m thrilled to be able to interrupt your regular depressing programming for 100% happy news.

Some readers will remember that, back in September, I announced that an unnamed charitable foundation had asked my advice on how best to donate $250,000 for advanced precollege STEM education. So, just like the previous time I got such a request, from Jaan Tallinn’s Survival and Flourishing Fund, I decided to do a call for proposals on Shtetl-Optimized before passing along my recommendations.

I can now reveal that the generous foundation, this time around, was the Packard Foundation. Indeed, the idea and initial inquiries to me came directly from Dave Orr: the chair of the foundation, grandson of Hewlett-Packard cofounder David Packard, and (so I learned) longtime Shtetl-Optimized reader.

I can also now reveal the results. I was honored to get more than a dozen excellent applications. After carefully considering all of them, I passed along four finalists to the Packard Foundation, which preferred to award the entire allotment to a single program if possible. After more discussion and research, the Foundation then actually decided on two winners:

  • $225,000 for general support to PROMYS: the long-running, world-renowned summer math camp for high-school students, which (among other things) is in the process of launching a new branch in India. While I ended up at Canada/USA Mathcamp (which I supported in my first grant round) rather than PROMYS, I knew all about and admired PROMYS even back when I was the right age to attend it. I’m thrilled to be able to play a small role in its expansion.
  • $30,000 for general support to AddisCoder: the phenomenal program that introduces Ethiopian high-schoolers to programming and algorithms. AddisCoder was founded by UC Berkeley theoretical computer science professor and longtime friend-of-the-blog Jelani Nelson, and also received $30,000 in my first grant round. Jelani and his co-organizers will be pressing ahead with AddisCoder despite political conflict in Ethiopia including a recently-concluded civil war. I’m humbled if I can make even the tiniest difference.

Thanks so much to the Packard Foundation, and to Packard’s talented program officers, directors, and associates—especially Laura Sullivan, Jean Ries, and Prithi Trivedi—for their hard work to make this happen. Thanks so much also to everyone who applied. While I wish we could’ve funded everyone, I’ve learned a lot about programs to which I’d like to steer future support (other prospective benefactors: please email me!!), and to which I’d like to steer kids: my own, once they’re old enough, and other kids of my acquaintance.

I feel good that, in the tiny, underfunded world of accelerated STEM education, the $255,000 that Packard is donating will already make a difference. But of course, $255,000 is only a thousandth of $255 million, which is a thousandth of $255 billion. Perhaps I could earn the latter sort of sums, to donate to STEM education or any other cause, by (for example) starting my own cryptocurrency exchange. I hope my readers will forgive me for not having chosen that route, expected-utility-maximization arguments be damned.

Postdocs, matrix multiplication, and WSJ: yet more shorties

Friday, October 7th, 2022

I’m proud to say that Nick Hunter-Jones and Matteo Ippoliti—both of whom work at the interface between quantum information science and condensed-matter physics (Nick closer to the former and Matteo to the latter)—have joined the physics faculty at UT Austin this year. And Nick, Matteo, and I are jointly seeking postdocs to start in Fall 2023! Please check out our call for applications here. The deadline is December 1; you apply through AcademicJobsOnline rather than by emailing me as in past years.


The big news in AI and complexity theory this week was DeepMind’s AlphaTensor, and its automated discovery of new algorithms for matrix multiplication. (See here for the Nature paper.) More concretely, they’ve used AI to discover (among other things) an algorithm for multiplying 4×4 matrices, over finite fields of characteristic 2, using only 47 scalar multiplications. This beats the 49 = 7×7 that you’d get from two recursive applications of Strassen’s algorithm. There are other improvements for other matrix dimensions, many of which work over fields of other characteristics.

Since I’ve seen confusion about the point on social media: this does not improve over the best known asymptotic exponent for matrix multiplication, which, over any field, still stands at the human-discovered 2.373 (meaning, we know how to multiply two N×N matrices in O(N^2.373) time, but not faster). But it does asymptotically improve over Strassen’s O(N^2.81) algorithm from 1969, conceivably even in a way that could have practical relevance for multiplying hundreds-by-hundreds or thousands-by-thousands matrices over F2.
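
To spell out the exponent arithmetic: a scheme that multiplies m×m matrices using t scalar multiplications yields, by recursive blocking, an O(N^log_m(t)) algorithm. Strassen’s 7 multiplications for 2×2 give log₂7 ≈ 2.807, while 47 multiplications for 4×4 give log₄47 ≈ 2.777. Below is a minimal recursive Strassen sketch (power-of-two sizes only, for clarity); AlphaTensor’s actual 47-multiplication scheme lives in the Nature paper’s supplementary material and isn’t reproduced here.

```python
import math
import numpy as np

def strassen(A, B):
    """Multiply square matrices whose size is a power of 2, using
    Strassen's 7 recursive multiplications instead of the naive 8."""
    n = A.shape[0]
    if n == 1:
        return A * B
    k = n // 2
    A11, A12, A21, A22 = A[:k, :k], A[:k, k:], A[k:, :k], A[k:, k:]
    B11, B12, B21, B22 = B[:k, :k], B[:k, k:], B[k:, :k], B[k:, k:]
    M1 = strassen(A11 + A22, B11 + B22)
    M2 = strassen(A21 + A22, B11)
    M3 = strassen(A11, B12 - B22)
    M4 = strassen(A22, B21 - B11)
    M5 = strassen(A11 + A12, B22)
    M6 = strassen(A21 - A11, B11 + B12)
    M7 = strassen(A12 - A22, B21 + B22)
    return np.block([[M1 + M4 - M5 + M7, M3 + M5],
                     [M2 + M4, M1 - M2 + M3 + M6]])

rng = np.random.default_rng(1)
A, B = rng.normal(size=(8, 8)), rng.normal(size=(8, 8))
assert np.allclose(strassen(A, B), A @ B)   # sanity check vs. numpy

# t multiplications for m x m blocks => O(N^log_m(t)) overall.
print(f"Strassen    (7 mults, 2x2): exponent {math.log(7, 2):.3f}")   # ~2.807
print(f"AlphaTensor (47 mults, 4x4): exponent {math.log(47, 4):.3f}")  # ~2.777
```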

Way back in 2007, I gave a talk at MIT CSAIL’s “Wild and Crazy Ideas Session,” where I explicitly proposed to use computer search to look for faster algorithms for 4×4 and 5×5 matrix multiplication. The response I got at the time was that it was hopeless, since the search space was already too huge. Of course, that was before the deep learning revolution.


This morning, the Wall Street Journal published an article by Karen Hao about competition between China and the US in quantum computing. Unfortunately paywalled, but includes the following passage:

Meanwhile, American academics say it’s gotten harder for Chinese students to obtain visas to conduct quantum research in the U.S. “It’s become common knowledge that when Chinese students or postdocs come to the U.S., they can’t say they’re doing quantum computing,” says Scott Aaronson, director of the Quantum Information Center at the University of Texas, Austin.

Two more shorties

Tuesday, October 4th, 2022

For anyone living under a rock with no access to nerd social media, Alain Aspect, John Clauser, and Anton Zeilinger have finally won the Nobel Prize in Physics, for their celebrated experiments that rubbed everyone’s faces in the reality of quantum entanglement (including Bell inequality violation and quantum teleportation). I don’t personally know Aspect or Clauser, but Zeilinger extremely graciously hosted me and my wife Dana when we visited Vienna in 2012, even bringing us to the symphony (he knows the director and has front-row seats), and somehow making me feel more cultured rather than less.

As usual, the recipe for winning the Nobel Prize in Physics is this:

(1) Do something where anyone who knows about it is like, “why haven’t they given the Nobel Prize in Physics for that yet?”

(2) Live long enough.

Huge congratulations to Aspect, Clauser, and Zeilinger!


Elham Kashefi, my quantum complexity theory colleague and treasured friend for more than 20 years, brought to my attention a Statement of Solidarity with Students in Iran from the International Academic Community. Of course I was happy to sign the statement, just like I was back in 2009 when brave Iranian students similarly risked their lives and freedom for women’s rights and other Enlightenment values against the theocracy. I urge you to sign the statement as well. If enough Shtetl-Optimized readers disapprove of their brutal repression, surely the mullahs will reconsider! More seriously though: if any readers can recommend a charity that’s actually making a difference in helping Iranians participate in the modern world, I’d be happy to do another of my matching donation drives.