Archive for the ‘Nerd Interest’ Category

Fake it till you make it (to the moon)

Friday, July 19th, 2019

While I wait to board a flight at my favorite location on earth—Philadelphia International Airport—I figured I might as well blog something to mark the 50th anniversary of Apollo 11. (Thanks also to Joshua Zelinsky for a Facebook post that inspired this.)

I wasn’t alive for Apollo, but I’ve been alive for 3/4 of the time after it, even though it now seems like ancient history—specifically, like a Roman cathedral being gawked at by a medieval peasant, like an achievement by some vanished, more cohesive civilization that we can’t even replicate today, let alone surpass.

Which brings me to a depressing mystery: why do so many people now deny that humans walked on the moon at all? Like, why that specifically? While they’re at it, why don’t they also deny that WWII happened, or that the Beatles existed?

Surprisingly, skepticism of the reality of Apollo seems to have gone all the way back to the landings themselves. One of my favorite stories growing up was of my mom, as a teenager, working as a waitress at an Israeli restaurant in Philadelphia, on the night of the Apollo 11 landing. She asked for a few minutes off to listen to news of the landing on the radio. The owners wouldn’t grant it—explaining that it was all Hollywood anyway, just some actors in spacesuits on a sound stage, and obviously my mom wasn’t so naïve as to think anyone was actually walking on the moon?

Alas, as we get further and further from the event, with no serious prospect of ever replicating it past the stage of announcing an optimistic timetable (nor, to be honest, any scientific reason to replicate it), as the people involved die off, and as our civilization becomes ever more awash in social-media-fueled paranoid conspiracies, I fear that moon-landing denialism will become more common.

Because here’s the thing: Apollo could happen, but only because of a wildly improbable, once-in-history confluence of social and geopolitical factors. It was economically insane, taking some 400,000 people and 4% of the US federal budget for some photo-ops, a flag-planting, some data and returned moon rocks that had genuine scientific value but could’ve been provided much more cheaply by robots. It was dismantled immediately afterwards like a used movie set, rather than leading to any greater successes. Indeed, manned spaceflight severely regressed afterwards, surely mocking the expectations of every last science fiction fan and techno-utopian who was alive at that time.

One could summarize the situation by saying that, in certain respects, the Apollo program really was “faked.” It’s just that the way they “faked” it involved actually landing people on the moon!

On two blog posts of Jerry Coyne

Saturday, July 13th, 2019

A few months ago, I got to know Jerry Coyne, the recently-retired biologist at the University of Chicago who writes the blog “Why Evolution Is True.” The interaction started when Jerry put up a bemused post about my thoughts on predictability and free will, and I pointed out that if he wanted to engage me on those topics, there was more to go on than an 8-minute YouTube video. I told Coyne that it would be a shame to get off on the wrong foot with him, since perusal of his blog made it obvious that whatever he and I disputed, it was dwarfed by our areas of agreement. He and I exchanged more emails and had lunch in Chicago.

By way of explaining how he hadn’t read “The Ghost in the Quantum Turing Machine,” Coyne emphasized the difference in my and his turnaround times: while these days I update my blog only a couple times per month, Coyne often updates multiple times per day. Indeed the sheer volume of material he posts, on subjects from biology to culture wars to Chicago hot dogs, would take months to absorb.

Today, though, I want to comment on just two posts of Jerry’s.

The first post, from back in May, concerns David Gelernter, the computer science professor at Yale who was infamously injured in a 1993 attack by the Unabomber, and who’s now mainly known as a right-wing commentator. I don’t know Gelernter, though I did once attend a small interdisciplinary workshop in the south of France that Gelernter also attended, wherein I gave a talk about quantum computing and computational complexity in which Gelernter showed no interest. Anyway, Gelernter, in an essay in May for the Claremont Review of Books, argued that recent work has definitively disproved Darwinism as a mechanism for generating new species, and that, until something better comes along, Intelligent Design is the best available alternative.

Curiously, I think that Gelernter’s argument falls flat not for detailed reasons of biology, but mostly just because it indulges in bad math and computer science—in fact, in precisely the sorts of arguments that I was trying to answer in my segment on Morgan Freeman’s Through the Wormhole (see also Section 3.2 of Why Philosophers Should Care About Computational Complexity). Gelernter says that

  1. a random change to an amino acid sequence will pretty much always make it worse,
  2. the probability of finding a useful new such sequence by picking one at random is at most ~1 in 10^77, and
  3. there have only been maybe ~10^40 organisms in earth’s history.

Since 10^77 >> 10^40, Darwinism is thereby refuted—not in principle, but as an explanation for life on earth. QED.

The most glaring hole in the above argument, it seems to me, is that it simply ignores intermediate possible numbers of mutations. How hard would it be to change, not 1 or 100, but 5 amino acids in a given protein to get a usefully different one—as might happen, for example, with local optimization methods like simulated annealing run at nonzero temperature? And how many chances were there for that kind of mutation in the earth’s history?
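To make the point concrete, here is a back-of-the-envelope calculation. The numbers are my own toy assumptions (a 300-residue protein and 5 simultaneous substitutions), not anything from Gelernter’s essay or from real biology; the only point is the gap in orders of magnitude between blind random search and the local neighborhood that hill-climbing-style processes actually explore.

```python
# Compare the space that blind random search faces with the neighborhood that
# a local search (hill-climbing, simulated annealing, ...) actually explores.
# The protein length (300) and mutation count (5) are illustrative assumptions.
from math import comb, log10

AMINO_ACIDS = 20
LENGTH = 300        # a typical-ish protein length (assumption)
K = 5               # number of simultaneous substitutions considered

# log10 of the number of all length-300 sequences (what "1 in 10^77"-style
# arguments implicitly range over):
log_all = LENGTH * log10(AMINO_ACIDS)

# log10 of the number of 5-substitution variants of one existing, working protein:
log_neighborhood = log10(comb(LENGTH, K)) + K * log10(AMINO_ACIDS - 1)

print(f"all sequences:        ~10^{log_all:.0f}")           # ~10^390
print(f"5-site neighborhood:  ~10^{log_neighborhood:.0f}")  # ~10^17
```

A neighborhood of roughly 10^17 candidates, sampled across something like 10^40 organisms, is a very different proposition from drawing blindly out of 10^390 (or even 10^77) possibilities.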

Gelernter can’t personally see how a path could cut through the exponentially large solution space in a polynomial amount of time, so he asserts that it’s impossible. Many of the would-be P≠NP provers who email me every week do the same. But this particular kind of “argument from incredulity” has an abysmal track record: it would’ve applied equally well, for example, to problems like maximum matching that turned out to have efficient algorithms. This is why, in CS, we demand better evidence of hardness—like completeness results or black-box lower bounds—neither of which, however, seems to apply to the case at hand. Surely Gelernter understands all this, but had he not, he could’ve learned it from my lecture at the workshop in France!

Alas, online debate, as it’s wont to do, focused less on Gelernter’s actual arguments and the problems with them, than on the tiresome questions of “standing” and “status.” In particular: does Gelernter’s authority, as a noted computer science professor, somehow lend new weight to Intelligent Design? Or conversely: does the very fact that a computer scientist endorsed ID prove that computer science itself isn’t a real science at all, and that its practitioners should never be taken seriously in any statements about the real world?

It’s hard to say which of these two questions makes me want to bury my face deeper into my hands. Serge Lang, the famous mathematician and textbook author, spent much of his later life fervently denying the connection between HIV and AIDS. Lynn Margulis, the discoverer of the origin of mitochondria (and Carl Sagan’s first wife), died a 9/11 truther. What broader lesson should we draw from any of this? And anyway, what percentage of computer scientists actually do doubt evolution, and how does it compare to the percentage in other academic fields and other professions? Isn’t the question of how divorced we computer scientists are from the real world an … ahem … empirical matter, one hard to answer on the basis of armchair certainties and anecdotes?

Speaking of empiricism, if you check Gelernter’s publication list on DBLP and his Google Scholar page, you’ll find that he did influential work in programming languages, parallel computing, and other areas from 1981 through 1997, and then in the past 22 years published a grand total of … two papers in computer science: one with four coauthors, the other a review/perspective piece about his earlier work. So it seems fair to say that, some time after receiving tenure in a CS department, Gelernter pivoted (to put it mildly) away from CS and toward conservative punditry. His recent offerings, in case you’re curious, include the book America-Lite: How Imperial Academia Dismantled Our Culture (and Ushered In the Obamacrats).

Some will claim that this case underscores what’s wrong with the tenure system itself, while others will reply that it’s precisely what tenure was designed for, even if in this instance you happen to disagree with what Gelernter uses his tenured freedom to say. The point I wanted to make is different, though. It’s that the question “what kind of a field is computer science, anyway, that a guy can do high-level CS research on Monday, and then on Tuesday reject Darwinism and unironically use the word ‘Obamacrat’?”—well, even if I accepted the immense weight this question places on one atypical example (which I don’t), and even if I dismissed the power of compartmentalization (which I again don’t), the question still wouldn’t arise in Gelernter’s case, since getting from “Monday” to “Tuesday” seems to have taken him 15+ years.

Anyway, the second post of Coyne’s that I wanted to talk about is from just yesterday, and is about Jeffrey Epstein—the financier, science philanthropist, and confessed sex offender, whose appalling crimes you’ll have read all about this week if you weren’t on a long sea voyage without Internet or something.

For the benefit of my many fair-minded friends on Twitter, I should clarify that I’ve never met Jeffrey Epstein, let alone accepted any private flights to his sex island or whatever. I doubt he has any clue who I am either—even if he did once claim to be “intrigued” by quantum information.

I do know a few of the scientists who Epstein once hung out with, including Seth Lloyd and Steven Pinker. Pinker, in particular, is now facing vociferous attacks on Twitter, similar in magnitude perhaps to what I faced in the comment-171 affair, for having been photographed next to Epstein at a 2014 luncheon that was hosted by Lawrence Krauss (a physicist who later faced sexual harassment allegations of his own). By the evidentiary standards of social media, this photo suffices to convict Pinker as basically a child molester himself, and is also a devastating refutation of any data that Pinker might have adduced in his books about the Enlightenment’s contributions to human flourishing.

From my standpoint, what’s surprising is not that Pinker is up against this, but that it took this long to happen, given that Pinker’s pro-Enlightenment, anti-blank-slate views have had the effect of painting a giant red target on his back. Despite the near-inevitability, though, you can’t blame Pinker for wanting to defend himself, as I did when it was my turn for the struggle session.

Thus, in response to an emailed inquiry by Jerry Coyne, Pinker shared some detailed reflections about Epstein; Pinker then gave Coyne permission to post those reflections on his blog (though they were originally meant for Coyne only). Like everything Pinker writes, they’re worth reading in full. Here’s the opening paragraph:

The annoying irony is that I could never stand the guy [Epstein], never took research funding from him, and always tried to keep my distance. Friends and colleagues described him to me as a quantitative genius and a scientific sophisticate, and they invited me to salons and coffee klatches at which he held court. But I found him to be a kibitzer and a dilettante — he would abruptly change the subject ADD style, dismiss an observation with an adolescent wisecrack, and privilege his own intuitions over systematic data.

Pinker goes on to discuss his record of celebrating, and extensively documenting, the forces of modernity that led to dramatic reductions in violence against women and that have the power to continue doing so. On Twitter, Pinker had already written: “Needless to say I condemn Epstein’s crimes in the strongest terms.”

I probably should’ve predicted that Pinker would then be attacked again—this time, for having prefaced his condemnation with the phrase “needless to say.” The argument, as best I can follow, runs like this: given all the isms of which woke Twitter has already convicted Pinker—scientism, neoliberalism, biological determinism, etc.—how could Pinker’s being against Epstein’s crimes (which we recently learned probably include the rape, and not only statutorily, of a 15-year-old) possibly be assumed as a given?

For the record, just as Epstein’s friends and enablers weren’t confined to one party or ideology, so the public condemnation of Epstein strikes me as a matter that is (or should be) beyond ideology, with all reasonable dispute now confined to the space between “very bad” and “extremely bad,” between “lock away for years” and “lock away for life.”

While I didn’t need Pinker to tell me that, one reason I personally appreciated his comments is that they helped to answer a question that had bugged me, and that none of the mountains of other condemnations of Epstein had given me a clear sense about. Namely: supposing, hypothetically, that I’d met Epstein around 2002 or so—without, of course, knowing about his crimes—would I have been as taken with him as many other academics seem to have been? (Would you have been? How sure are you?)

Over the last decade, I’ve had the opportunity to meet some titans and semi-titans of finance and business, to discuss quantum computing and other nerdy topics. For a few (by no means all) of these titans, my overriding impression was precisely their unwillingness to concentrate on any one point for more than about 20 seconds—as though they wanted the crust of a deep intellectual exchange without the meat filling. My experience with them fit Pinker’s description of Epstein to a T (though I hasten to add that, as far as I know, none of these others ran teenage sex rings).

Anyway, given all the anger at Pinker for having intersected with Epstein, it’s ironic that I could easily imagine Pinker’s comments rattling Epstein the most of anyone’s, if Epstein hears of them from his prison cell. It’s like: Epstein must have developed a skin like a rhinoceros’s by this point about being called a child abuser, a creep, and a thousand similar (and similarly deserved) epithets. But “a kibitzer and a dilettante” who merely lured famous intellectuals into his living room, with wads of cash not entirely unlike the ones used to lure teenage girls to his massage table? Ouch!

OK, but what about Alan Dershowitz—the man who apparently used to be Epstein’s close friend, who still is Pinker’s friend, and who played a crucial role in securing Epstein’s 2008 plea bargain, the one now condemned as a travesty of justice? I’m not sure how I feel about Dershowitz.  It’s like: I understand that our system requires attorneys willing to mount a vociferous defense even for clients who they privately know or believe to be guilty—and even to get those clients off on technicalities or bargaining whenever they can.  I’m also incredibly grateful that I chose CS rather than law school, because I don’t think I could last an hour advocating causes that I knew to be unjust. Just like my fellow CS professor, the intelligent design advocate David Gelernter, I have the privilege and the burden of speaking only for myself.

Quanta of Solace

Thursday, June 20th, 2019

In Quanta magazine, Kevin Hartnett has a recent article entitled A New Law to Describe Quantum Computing’s Rise? The article discusses “Neven’s Law”—a conjecture, by Hartmut Neven (head of Google’s quantum computing effort), that the number of integrated qubits is now increasing exponentially with time, so that the difficulty of simulating a state-of-the-art QC on a fixed classical computer is increasing doubly exponentially with time. (Jonathan Dowling tells me that he expressed the same thought years ago.)
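For anyone who wants to see where the double exponential comes from, here is the arithmetic in a few lines. This is just my gloss on the conjecture as stated above, with a made-up starting qubit count, not Neven’s or Hartnett’s data.

```python
# If the qubit count grows exponentially with time, q(t) = q0 * 2^t, and the
# cost of classically simulating q qubits scales like 2^q, then the simulation
# cost is 2^(q0 * 2^t): doubly exponential in t. The starting value q0 is made up.
q0 = 5
for t in range(6):
    qubits = q0 * 2 ** t
    print(f"t={t}:  qubits = {qubits:4d},  classical simulation cost ~ 2^{qubits}")
```

Note that the exponent itself doubles at every step, which is the whole content of the “doubly exponential” claim.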

Near the end, the Quanta piece quotes some UT Austin professor whose surname starts with a bunch of A’s as follows:

“I think the undeniable reality of this progress puts the ball firmly in the court of those who believe scalable quantum computing can’t work. They’re the ones who need to articulate where and why the progress will stop.”

The quote is perfectly accurate, but in context, it might give the impression that I’m endorsing Neven’s Law. In reality, I’m reluctant to fit a polynomial or an exponential or any other curve through a set of numbers that so far hasn’t exceeded about 50. I say only that, regardless of what anyone believes is the ultimate rate of progress in QC, what’s already happened today puts the ball firmly in the skeptics’ court.

Also in Quanta, Anil Ananthaswamy has a new article out on How to Turn a Quantum Computer Into the Ultimate Randomness Generator. This piece covers two schemes for using a quantum computer to generate “certified random bits”—that is, bits you can prove are random to a faraway skeptic: one due to me, the other due to Brakerski et al. The article cites my paper with Lijie Chen, which shows that under suitable computational assumptions, the outputs in my protocol are hard to spoof using a classical computer. The randomness aspect will be addressed in a paper that I’m currently writing; for now, see these slides.

As long as I’m linking to interesting recent Quanta articles, Erica Klarreich has A 53-Year-Old Network Coloring Conjecture is Disproved. Briefly, Hedetniemi’s Conjecture stated that, given any two finite, undirected graphs G and H, the chromatic number of the tensor product G⊗H is just the minimum of the chromatic numbers of G and H themselves. This reasonable-sounding conjecture has now been falsified by Yaroslav Shitov. For more, see also this post by Gil Kalai—who appears here not in his capacity as a quantum computing skeptic.
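If you want to poke at the statement itself, here is a brute-force sanity check on tiny graphs. This is a toy sketch of mine, with nothing to do with Shitov’s actual counterexamples, which are enormous; for graphs this small the conjecture of course holds.

```python
# Brute-force check of Hedetniemi's statement on tiny graphs: is the chromatic
# number of the tensor product G x H equal to min(chi(G), chi(H))?
from itertools import product

def chromatic_number(vertices, edges):
    """Smallest k admitting a proper k-coloring, by brute force (tiny graphs only)."""
    for k in range(1, len(vertices) + 1):
        for coloring in product(range(k), repeat=len(vertices)):
            color = dict(zip(vertices, coloring))
            if all(color[u] != color[v] for u, v in edges):
                return k
    return 0

def tensor_product(vs1, es1, vs2, es2):
    """Vertices are pairs; (u1,v1) ~ (u2,v2) iff u1 ~ u2 in G and v1 ~ v2 in H."""
    vs = [(u, v) for u in vs1 for v in vs2]
    es = [((u1, v1), (u2, v2)) for u1, u2 in es1 for v1, v2 in es2]
    es += [((u1, v2), (u2, v1)) for u1, u2 in es1 for v1, v2 in es2]
    return vs, es

G_v, G_e = [0, 1, 2], [(0, 1), (1, 2), (0, 2)]                        # triangle, chromatic number 3
H_v, H_e = list(range(5)), [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]   # 5-cycle, chromatic number 3
T_v, T_e = tensor_product(G_v, G_e, H_v, H_e)

print(chromatic_number(G_v, G_e), chromatic_number(H_v, H_e),   # 3 3
      chromatic_number(T_v, T_e))                               # 3 = min(3, 3), as Hedetniemi predicted
```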

In interesting math news beyond Quanta magazine, the Berkeley alumni magazine has a piece about the crucial, neglected topic of mathematicians’ love for Hagoromo-brand chalk (hat tip: Peter Woit). I can personally vouch for this. When I moved to UT Austin three years ago, most offices in CS had whiteboards, but I deliberately chose one with a blackboard. I figured that chalk has its problems—it breaks, the dust gets all over—but I could live with them, much more than I could live with the Fundamental Whiteboard Difficulty, of all the available markers always being dry whenever you want to explain anything. With the Hagoromo brand, though, you pretty much get all the benefits of chalk with none of the downsides, so it just strictly dominates whiteboards.

Jan Kulveit asked me to advertise the European Summer Program on Rationality (ESPR), which will take place this August 13-23, and which is aimed at students ages 16-19. I’ve lectured both at ESPR and at a similar summer program that ESPR was modeled after (called SPARC)—and while I was never there as a student, it looked to me like a phenomenal experience. So if you’re a 16-to-19-year-old who reads this blog, please consider applying!

I’m now at the end of my annual family trip to Tel Aviv, returning to the Eastern US tonight, and then on to STOC’2019 at the ACM Federated Computing Research Conference in Phoenix (which I can blog about if anyone wants me to). It was a good trip, although marred by my two-year-old son Daniel falling onto sun-heated metal and suffering a second-degree burn on his leg, and then by the doctor botching the treatment. Fortunately Daniel’s now healing nicely. For future reference, whenever bandaging a burn wound, be sure to apply lots of Vaseline to prevent the bandage from drying out, and also to change the bandage daily. Accept no fancy-sounding substitute.

On the scientific accuracy of “Avengers: Endgame”

Friday, May 3rd, 2019

[BY REQUEST: SPOILERS FOLLOW]

Today Ben Lindbergh, a writer for The Ringer, put out an article about the scientific plausibility (!) of the time-travel sequences in the new “Avengers” movie. The article relied on two interviewees:

(1) David Deutsch, who confirmed that he has no idea what the “Deutsch proposition” mentioned by Tony Stark refers to but declined to comment further, and

(2) some quantum computing dude from UT Austin who had no similar scruples about spouting off on the movie.

To be clear, the UT Austin dude hadn’t even seen the movie, or any of the previous “Avengers” movies for that matter! He just watched the clips dealing with time travel. Yet Lindbergh still saw fit to introduce him as “a real-life [Tony] Stark without the vast fortune and fancy suit.” Hey, I’ll take it.

Anyway, if you’ve seen the movie, and/or you know Deutsch’s causal consistency proposal for quantum closed timelike curves, and you can do better than I did at trying to reconcile the two, feel free to take a stab in the comments.

Death of proof greatly exaggerated

Thursday, March 7th, 2019

In 1993, the science writer John Horgan—who’s best known for his book The End of Science, and (of course) for interviewing me in 2016—wrote a now-(in)famous cover article for Scientific American entitled “The Death of Proof.” Mashing together a large number of (what I’d consider) basically separate trends and ideas, Horgan argued that math was undergoing a fundamental change, with traditional deductive proofs being replaced by a combination of non-rigorous numerical simulations, machine-generated proofs, probabilistic and probabilistically-checkable proofs, and proofs using graphics and video. Horgan also suggested that Andrew Wiles’s then-brand-new proof of Fermat’s Last Theorem—which might have looked, at first glance, like a spectacular counterexample to the “death of proof” thesis—could be the “last gasp of a dying culture” and a “splendid anachronism.” Apparently, “The Death of Proof” garnered one of the largest volumes of angry mail in Scientific American‘s history, with mathematician after mathematician arguing that Horgan had strung together half-digested quotes and vignettes to manufacture a non-story.

Now Horgan—who you could variously describe as a wonderful sport, or a ham, or a sucker for punishment—has written a 26-year retrospective on his “death of proof” article. The prompt for this was Horgan’s recent discovery that, back in the 90s, David Hoffman and Hermann Karcher, two mathematicians annoyed by the “death of proof” article, had named a nonexistent mathematical object after its author. The so-called Horgan surface is a minimal surface that numerical computations strongly suggested should exist, but that can be rigorously proven not to exist after all. “The term was intended as an insult, but I’m honored anyway,” Horgan writes.

As a followup to his blog post, Horgan then decided to solicit commentary from various people he knew, including yours truly, about “how proofs are faring in an era of increasing computerization.” He wrote, “I’d love to get a paragraph or two from you.” Alas, I didn’t have the time to do as requested, but only to write eight paragraphs. So Horgan suggested that I make the result into a post on my own blog, which he’d then link to. Without further ado, then:


John, I like you so I hate to say it, but the last quarter century has not been kind to your thesis about “the death of proof”!  Those mathematicians sending you the irate letters had a point: there’s been no fundamental change to mathematics that deserves such a dramatic title.  Proof-based math remains quite healthy, with (e.g.) a solution to the Poincaré conjecture since your article came out, as well as to the Erdős discrepancy problem, the Kadison-Singer conjecture, Catalan’s conjecture, bounded gaps in primes, testing primality in deterministic polynomial time, etc. — just to pick a few examples from the tiny subset of areas that I know anything about.

There are evolutionary changes to mathematical practice, as there always have been.  Since 2009, the website MathOverflow has let mathematicians query the global hive-mind about an obscure reference or a recalcitrant step in a proof, and get near-instant answers.  Meanwhile “polymath” projects have, with moderate success, tried to harness blogs and other social media to make advances on long-standing open math problems using massive collaborations.

While humans remain in the driver’s seat, there are persistent efforts to increase the role of computers, with some notable successes.  These include Thomas Hales’s 1998 computer-assisted proof of the Kepler Conjecture (about the densest possible way to pack oranges) — now fully machine-verified from start to finish, after the Annals of Mathematics refused to publish a mixture of traditional mathematics and computer code (seems this is not exactly what happened; see the comment section for more).  It also includes William McCune’s 1996 solution to the Robbins Conjecture in algebra (the computer-generated proof was only half a page, but involved substitutions so strange that for 60 years no human had found them); and at the “opposite extreme,” the 2016 solution to the Pythagorean triples problem by Marijn Heule and collaborators, which weighed in at 200 terabytes (at that time, “the longest proof in the history of mathematics”).
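To give a feel for the Pythagorean triples problem (though emphatically not for the 200-terabyte computation), here is the whole question in a few lines of Python, with a naive backtracking search that happily 2-colors {1,…,60}. If I remember the numbers correctly, Heule and collaborators showed the split is possible up to 7824 and impossible from 7825 onward; the toy code below is in no way their method.

```python
# Can {1,...,N} be split into two classes with no class containing a full
# Pythagorean triple a^2 + b^2 = c^2?  Naive backtracking, for small N only.
def pythagorean_triples(n):
    return [(a, b, c) for a in range(1, n + 1)
                      for b in range(a, n + 1)
                      for c in range(b, n + 1) if a * a + b * b == c * c]

def two_color(n):
    triples = pythagorean_triples(n)
    color = {}

    def ok(k):
        # only triples whose largest element is k can have just become monochromatic
        return all(not (color[a] == color[b] == color[k])
                   for a, b, c in triples if c == k)

    def assign(k):
        if k > n:
            return True
        for col in (0, 1):
            color[k] = col
            if ok(k) and assign(k + 1):
                return True
        del color[k]
        return False

    return color if assign(1) else None

coloring = two_color(60)
print(sorted(k for k, v in coloring.items() if v == 1))   # one of the two classes
```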

It’s conceivable that someday, computers will replace humans at all aspects of mathematical research — but it’s also conceivable that, by the time they can do that, they’ll be able to replace humans at music and science journalism and everything else!

New notions of proof — including probabilistic, interactive, zero-knowledge, and even quantum proofs — have seen further development by theoretical computer scientists since 1993.  So far, though, these new types of proof remain either entirely theoretical (as with quantum proofs), or else they’re used for cryptographic protocols but not for mathematical research.  (For example, zero-knowledge proofs now play a major role in certain cryptocurrencies, such as Zcash.)

In many areas of math (including my own, theoretical computer science), proofs have continued to get longer and harder for any one person to absorb.  This has led some to advocate a split approach, wherein human mathematicians would talk to each other only about the handwavy intuitions and high-level concepts, while the tedious verification of details would be left to computers.  So far, though, the huge investment of time needed to write proofs in machine-checkable format — for almost no return in new insight — has prevented this approach’s wide adoption.
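For readers who have never seen what “machine-checkable format” looks like, here are two baby examples in the Lean proof assistant. They have nothing to do with the big formalization efforts mentioned above; they only show the flavor of what such a system will and won’t accept.

```lean
-- Two statements the Lean proof assistant will check mechanically. The point
-- is only the flavor: every step must be formally justified, or Lean rejects it.

-- Commutativity of addition on the naturals, by appeal to a library lemma.
theorem add_comm_example (m n : Nat) : m + n = n + m :=
  Nat.add_comm m n

-- "There is no largest natural number."
theorem no_largest_nat (n : Nat) : ∃ m : Nat, n < m :=
  ⟨Nat.succ n, Nat.lt_succ_self n⟩
```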

Yes, there are non-rigorous approaches to math, which continue to be widely used in physics and engineering and other fields, as they always have been.  But none of these approaches have displaced proof as the gold standard whenever it’s available.  If I had to speculate about why, I’d say: if you use non-rigorous approaches, then even if it’s clear to you under what conditions your results can be trusted, it’s probably much less clear to others.  Also, even if only one segment of a research community cares about rigor, whatever earlier work that segment builds on will need to be rigorous as well — thereby exerting constant pressure in that direction.  Thus, the more collaborative a given research area becomes, the more important is rigor.

For my money, the elucidation of the foundations of mathematics a century ago, by Cantor, Frege, Peano, Hilbert, Russell, Zermelo, Gödel, Turing, and others, still stands as one of the greatest triumphs of human thought, up there with evolution or quantum mechanics or anything else.  It’s true that the ideal set by these luminaries remains mostly aspirational.  When mathematicians say that a theorem has been “proved,” they still mean, as they always have, something more like: “we’ve reached a social consensus that all the ideas are now in place for a strictly formal proof that could be verified by a machine … with the only task remaining being massive rote coding work that none of us has any intention of ever doing!”  It’s also true that mathematicians, being human, are subject to the full panoply of foibles you might expect: claiming to have proved things they haven’t, squabbling over who proved what, accusing others of lack of rigor while hypocritically taking liberties themselves.  But just like love and honesty remain fine ideals no matter how often they’re flouted, so too does mathematical rigor.

Update: Here’s Horgan’s new post (entitled “Okay, Maybe Proofs Aren’t Dying After All”), which also includes a contribution from Peter Woit.

De-sneering my life

Wednesday, February 27th, 2019

If I’m being honest, the most exciting recent development in my life is this: a little over a month ago, I stopped checking “SneerClub” (a place I’d previously resolved not even to name here, but I think an exception is warranted now). Permanently, cold turkey. I won’t even visit to read their sneers about this post. I’ve made progress cutting down on other self-destructive social media fixations as well. Many friends suggested this course to me, and I thank them all, though I ultimately had to follow my own path to the obvious.

Ironically, the SneerClubbers themselves begged me to stop reading them (!), so presumably for once they’ll be okay with something I did (but if not, I don’t care). If any of them still have something to say to me, they can come to this blog, or email me, or if they pass through Austin, set up a time to hash it out over chips and queso (my treat). What I’ll no longer do is spend hours every week binge-reading a forum of people who’ve adopted nastiness and bad faith as their explicit principles. I’ll no longer toss and turn at night wondering how it came about that two thousand Redditors hate Scott Aaronson so much, and what I could say or do (short of total self-abnegation) that would make them hate me less. I plan to spend the freed-up time being Scott Aaronson.

Resolving to ignore one particular online hate pit—and then sticking to the resolution, as so far I have—has been a pure, unmitigated improvement to my quality of life. If you don’t believe me, ask my wife and kids. I recommend this course to anyone.

You could sensibly ask: why did I ever spend time worrying about an anti-nerds-like-me forum that’s so poisonous for its targets and participants alike? After long introspection, I think the answer is: there’s a part of me, perhaps a gift from the childhood bullies, that’s so obsessed with “society’s hatred of STEM nerds,” that it constantly seeks out evidence to confirm that its fears are justified—evidence that it can then wave in front of the rest of my brain to say “you see?? what did I always tell you?” And alas, whenever that part of my brain seeks such evidence, the world dutifully supplies mountains of it. It’s never once disappointed.

Now the SneerClubbers—who are perceptive and talented in their cruelty, if in nothing else—notice this about me, and gleefully ridicule me for it. But they’re oblivious to the central irony: that unlike the vast majority of humankind, or even the vast majority of social justice activists, they (the SneerClubbers) really do hate everyone like me. They’re precisely what the paranoid part of my brain wrongly fears that everyone else I meet is secretly like. They’re like someone who lectures you about your hilariously overblown fear of muggers, while simultaneously mugging you.

But at least they’re not the contented and self-confident bullies of my childhood nightmares, kicking dirt down at nerds from atop their pinnacle of wokeness and social adeptness. If you spend enough time studying them, they themselves come across as angry, depressed, pathetic. So for example: here’s one of my most persistent attackers, popping up on a math thread commemorating Michael Atiyah (one of the great mathematicians of the 20th century), just to insult Atiyah—randomly, gratuitously, and a few days after Atiyah had died. Almost everything posted all over Reddit by this individual—who uses the accurate tagline “unpleasantly radical”—has the same flavor. Somehow seeing this made it click for me: wait a second, these are the folks who are lecturing me about my self-centeredness and arrogance and terrible social skills? Like, at least I try to be nice.


Scott Alexander, who writes the world’s best blog and is a more central target of SneerClub than I’ve been, recently announced that he asked the moderators of r/ssc to close its notorious “Culture War” thread, and they’ve done so—moving the thread to a new home on Reddit called “TheMotte.”

For those who don’t know: r/ssc is the place on Reddit to discuss Scott’s SlateStarCodex blog, though Scott himself was never too involved as more than a figurehead.  The Culture War thread was the place within r/ssc to discuss race, gender, immigration, and other hot-button topics.  The thread, which filled up with a bewildering several thousand comments per week (!), attracted the, err … full range of political views, including leftists, libertarians, and moderates but also alt-righters, neoreactionaries, and white nationalists. Predictably, SneerClub treated the thread as a gift from heaven: a constant source of inflammatory material that they could use to smear Scott personally (even if most of the time, Scott hadn’t even seen the offending content, let alone endorsed it).

Four months ago, I was one of the apparently many friends who told Scott that I felt he should dissociate the Culture War thread from his brand. So I congratulate him on his decision, which (despite his eloquently-expressed misgivings) I feel confident was the right one. Think about it this way: nobody’s freedom of speech has been curtailed—the thread continues full steam at TheMotte, for anyone who enjoys it—but meanwhile, the sneerers have been deprived of a golden weapon with which to slime Scott. Meanwhile, while the sneerers themselves might never change their minds about anything, Scott has demonstrated to third parties that he’s open and reasonable and ready to compromise, like the debater who happily switches to his opponent’s terminology. What’s not to like?


A couple weeks ago, while in Albuquerque for the SQuInT conference, I visited the excellent National Museum of Nuclear Science and History.  It was depressing, as it should have been, to tour the detailed exhibits about the murderous events surrounding the birth of the nuclear era: the Holocaust, the Rape of Nanking, the bombings of Hiroshima and Nagasaki. It was depressing in a different way to tour the exhibits about the early Atomic Age, and see the boundless optimism that ‘unleashing the power of the atom’ would finally usher in a near-utopia of space travel and clean energy—and then to compare that vision to where we are now, with climate change ravaging the planet and (in a world-historic irony) the people who care most about the environment having denounced and marginalized the most reliable source of carbon-free energy, the one that probably had the best chance to avert our planet’s terrifying future.

But on the bright side: how wonderful to have been born into a time and place when, for the most part, those who hate you have only the power to destroy your life that you yourself grant them. How wonderful when one can blunt their knives by simply refusing to open a browser tab.

The Winding Road to Quantum Supremacy

Tuesday, January 15th, 2019

Greetings from QIP’2019 in Boulder, Colorado! Obvious highlights of the conference include Urmila Mahadev’s opening plenary talk on her verification protocol for quantum computation (which I blogged about here), and Avishay Tal’s upcoming plenary on his and Ran Raz’s oracle separation between BQP and PH (which I blogged about here). If you care, here are the slides for the talk I just gave, on the paper “Online Learning of Quantum States” by me, Xinyi Chen, Elad Hazan, Satyen Kale, and Ashwin Nayak. Feel free to ask in the comments about what else is going on.

I returned a few days ago from my whirlwind Australia tour, which included Melbourne and Sydney; a Persian wedding that happened to be held next to a pirate ship (the Steve Irwin, used to harass whalers and adorned with a huge Jolly Roger); meetings and lectures graciously arranged by friends at UTS; a quantum computing lab tour personally conducted by 2018 “Australian of the Year” Michelle Simmons; three meetups with readers of this blog (or more often, readers of the other Scott A’s blog who graciously settled for the discount Scott A); and an excursion to Grampians National Park to see wild kangaroos, wallabies, koalas, and emus.

But the thing that happened in Australia that provided the actual occasion for this post is this: I was interviewed by Adam Ford in Carlton Gardens in Melbourne, about quantum supremacy, AI risk, Integrated Information Theory, whether the universe is discrete or continuous, and to be honest I don’t remember what else. You can watch the first segment, the one about the prospects for quantum supremacy, here on YouTube. My only complaint is that Adam’s video camera somehow made me look like an out-of-shape slob who needs to hit the gym or something.

Update (Jan. 16): Adam has now posted a second video on YouTube, wherein I talk about my “Ghost in the Quantum Turing Machine” paper, my critique of Integrated Information Theory, and more.

And now Adam has posted yet a third segment, in which I talk about small, lighthearted things like existential threats to civilization and the prospects for superintelligent AI.

And a fourth, in which I talk about whether reality is discrete or continuous.

Related to the “free will / consciousness” segment of the interview: the biologist Jerry Coyne, whose blog “Why Evolution Is True” I’ve intermittently enjoyed over the years, yesterday announced my existence to his readers, with a post that mostly criticizes my views about free will and predictability, as I expressed them years ago in a clip that’s on YouTube (at the time, Coyne hadn’t seen GIQTM or my other writings on the subject). Coyne also took the opportunity to poke fun at this weird character he just came across whose “life is devoted to computing” and who even mistakes tips for change at airport smoothie stands. Some friends here at QIP had a good laugh over the fact that, for the world beyond theoretical computer science and quantum information, this is what 23 years of research, teaching, and writing apparently boil down to: an 8.5-minute video clip where I spouted about free will, and also my having been arrested once in a comic mix-up at Philadelphia airport. Anyway, since then I had a very pleasant email exchange with Coyne—someone with whom I find myself in agreement much more often than not, and who I’d love to have an extended conversation with sometime despite the odd way our interaction started.

Incompleteness ex machina

Sunday, December 30th, 2018

I have a treat with which to impress your friends at New Year’s Eve parties tomorrow night: a rollicking essay graciously contributed by a reader named Sebastian Oberhoff, about a unified and simplified way to prove all of Gödel’s Incompleteness Theorems, as well as Rosser’s Theorem, directly in terms of computer programs. In particular, this improves over my treatments in Quantum Computing Since Democritus and my Rosser’s Theorem via Turing machines post. While there won’t be anything new here for the experts, I loved the style—indeed, it brings back wistful memories of how I used to write, before I accumulated too many imaginary (and non-imaginary) readers tut-tutting at crass jokes over my shoulder. May 2019 bring us all the time and the courage to express ourselves authentically, even in ways that might be sneered at as incomplete, inconsistent, or unsound.
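For anyone who wants a taste of the “in terms of computer programs” approach before diving into the essay, here is the bare diagonalization trick that these arguments (and my old Rosser post) revolve around, transplanted into Python rather than into any formal system. The toy “predictor” below is my own strawman, not anything from Sebastian’s essay.

```python
# Given ANY claimed halting-predictor, we can build a program it must be wrong
# about, by having that program do the opposite of whatever is predicted for it.
def contrarian(halts_predictor):
    def defiant():
        if halts_predictor(defiant):    # predicted to halt?
            while True:                 # ...then loop forever
                pass
        return                          # predicted to run forever? then halt at once
    return defiant

def optimistic(program):                # a toy predictor: "everything halts"
    return True

d = contrarian(optimistic)
print(optimistic(d))                    # True: the predictor insists d halts...
# ...but by construction d() would loop forever, so the predictor is wrong.
# Replace "predictor" by "consistent proof system that settles every halting
# question," and the same self-referential twist drives the incompleteness arguments.
```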

The NP genie

Tuesday, December 11th, 2018

Hi from the Q2B conference!

Every nerd has surely considered the scenario where an all-knowing genie—or an enlightened guru, or a superintelligent AI, or God—appears and offers to answer any question of your choice.  (Possibly subject to restrictions on the length or complexity of the question, to prevent glomming together every imaginable question.)  What do you ask?

(Standard joke: “What question should I ask, oh wise master, and what is its answer?”  “The question you should ask me is the one you just asked, and its answer is the one I am giving.”)

The other day, it occurred to me that theoretical computer science offers a systematic way to generate interesting variations on the genie scenario, which have been contemplated less—variations where the genie is no longer omniscient, but “merely” more scient than any entity that humankind has ever seen.  One simple example, which I gather is often discussed in the AI-risk and rationality communities, is an oracle for the halting problem: what computer program can you write, such that knowing whether it halts would provide the most useful information to civilization?  Can you solve global warming with such an oracle?  Cure cancer?
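Here is the standard sort of example, in code, of how much a single halting query can buy you. It is my illustration, and Goldbach’s conjecture is just a stand-in for whatever conjecture you care about, as long as counterexamples (if any) could be recognized by a program.

```python
# This program halts if and only if Goldbach's conjecture is false. So a single
# yes/no answer from the halting genie about this program settles Goldbach.
from itertools import count

def is_prime(n):
    return n >= 2 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def has_goldbach_split(n):
    return any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1))

def search_for_counterexample():
    for n in count(4, 2):               # even numbers 4, 6, 8, ...
        if not has_goldbach_split(n):
            return n                    # halts iff some even number has no split

# (Don't hold your breath waiting for search_for_counterexample() to return.)
print(all(has_goldbach_split(n) for n in range(4, 1000, 2)))   # True: so far so good
```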

But there are many other examples.  Here’s one: suppose what pops out of your lamp is a genie for NP questions.  Here I don’t mean NP in the technical sense (that would just be a pared-down version of the halting genie discussed above), but in the human sense.  The genie can only answer questions by pointing you to ordinary evidence that, once you know where to find it, makes the answer to the question clear to every competent person who examines the evidence, with no further need to trust the genie.  Or, of course, the genie could fail to provide such evidence, which itself provides the valuable information that there’s no such evidence out there.

More-or-less equivalently (because of binary search), the genie could do what my parents used to do when my brother and I searched the house for Hanukkah presents, and give us “hotter” or “colder” hints as we searched for the evidence ourselves.
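In code, the binary-search point looks something like this. It is a toy of my own, with a stand-in “genie” that simply knows a hidden string; the real point is that yes/no answers about prefixes are already enough to extract the whole answer.

```python
import string

ALPHABET = string.ascii_lowercase + " "

def make_genie(secret):
    """Stand-in for the genie: it knows the answer but will only say yes or no."""
    def is_there_an_answer_starting_with(prefix):
        return secret.startswith(prefix)
    return is_there_an_answer_starting_with

def extract(genie, max_len=1000):
    answer = ""
    while len(answer) < max_len:
        for ch in ALPHABET:
            if genie(answer + ch):
                answer += ch
                break
        else:
            return answer          # no one-character extension works: we're done
    return answer

genie = make_genie("the ark is under the floor of the third chamber")
print(extract(genie))              # recovers the secret, a few dozen queries per character
```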

To make things concrete, let’s assume that the NP genie will only provide answers of 1000 characters or fewer, in plain English text with no fancy encodings.  Here are the candidates for NP questions that I came up with after about 20 seconds of contemplation:

  • Which pieces of physics beyond the Standard Model and general relativity can be experimentally confirmed with the technology of 2018? What are the experiments we need to do?
  • What’s the current location of the Ark of the Covenant, or its remains, if any still exist?  (Similar: where can we dig to find physical records, if any exist, pertaining to the Exodus from Egypt, or to Jesus of Nazareth?)
  • What’s a sketch of a resolution of P vs. NP, from which experts would stand a good chance of filling in the details?  (Similar for any other famous unsolved math problem.)
  • Where, if anywhere, can we point radio telescopes to get irrefutable evidence for the existence of extraterrestrial life?
  • What happened to Malaysia Flight 370, and where are the remains by which it could be verified?  (Similar for Amelia Earhart.)
  • Where, if anywhere, can we find intact DNA of non-avian dinosaurs?

Which NP questions would you ask the genie?  And what other complexity-theoretic genies would be interesting to consider?  (I thought briefly about a ⊕P genie, but I’m guessing that the yearning to know whether the number of sand grains in the Sahara is even or odd is limited.)


Update: I just read Lenny Susskind’s Y Combinator interview, and found it delightful—pure Lenny, and covering tons of ground that should interest anyone who reads this blog.

Teaching quantum in junior high: special Thanksgiving guest post by Terry Rudolph

Thursday, November 22nd, 2018

Happy Thanksgiving!

People have sometimes asked me: “how do you do it?  how do you do your research, write papers, teach classes, mentor grad students, build up the quantum center at UT, travel and give talks every week or two, serve on program committees, raise two rambunctious young kids, and also blog and also participate in the comments and also get depressed about people saying mean things on social media?”  The answer is that increasingly I don’t.  Something has to give, and this semester, alas, that something has often been blogging.

And that’s why, today, I’m delighted to have a special guest post by my good friend Terry Rudolph.  Terry, who happens to be Erwin Schrödinger’s grandson, has done lots of fascinating work over the years in quantum computing and the foundations of quantum mechanics, and previously came up on this blog in the context of the PBR (Pusey-Barrett-Rudolph) Theorem.  Today, he’s a cofounder and chief architect at PsiQuantum, a startup in Palo Alto that’s trying to build silicon-photonic quantum computers.

Terry’s guest post is about the prospects for teaching quantum theory at the junior high school level—something he thought about a lot in the context of writing his interesting recent book Q is for Quantum.  I should stress that the opinions in this post are Terry’s, and don’t necessarily reflect the official editorial positions of Shtetl-Optimized.  Personally, I have taught the basics of quantum information to sharp junior high and high school students, so I certainly know that it’s possible.  (By undergrad, it’s not only possible, but maybe should become standard for both physics and CS majors.)  But I would also say that, given the current state of junior high and high school education in the US, it would be a huge step up if most students graduated fully understanding what’s a probability, what’s a classical bit, what’s a complex number, and any of dozens of other topics that feed into quantum information—so why not start by teaching the simpler stuff well?  And also, if students don’t learn the rules of classical probability first, then how will they be properly shocked when they come to quantum? 🙂

But without further ado, here’s Terry—who’s also graciously agreed to stick around and answer some comments.


Can we/should we teach Quantum Theory in Junior High?

by Terry Rudolph

Should we?

Reasons which suggest the answer is “yes” include:

Economic: We are apparently facing a labor-market shortage of quantum engineers.  We should not, however, need the recent hype around quantum computing to make the economic case – the frontier of many disparate regions of the modern science and technology landscape is quantum.  Surely if students do decide to drop out of school at 16 they should at least be equipped to get an entry-level job as a quantum physicist?

Educational: If young peoples’ first exposures to science are counterintuitive and “cutting edge,” it could help excite them into STEM.  The strong modern quantum information theoretic connections between quantum physics, computer science and math can help all three subjects constructively generate common interest.

Pseudo-Philosophical: Perhaps our issues with understanding/accepting quantum theory are because we come to it late and have lost the mental plasticity for a “quantum reset” of our brain when we eventually require it late in an undergraduate degree.  It may be easier to achieve fluency in the “language of quantum” with early exposure.

Can we?

There are two distinct aspects to this question: Firstly, is it possible at the level of “fitting it in” – training teachers, adjusting curricula and so on?  Secondly, can a nontrivial, worthwhile fraction of quantum theory even be taught at all to pre-calculus students?

With regards to the first question, as the child of two schoolteachers I am very aware that an academic advocating for such disruption will not be viewed kindly by all.  As I don’t have relevant experience to say anything useful about this aspect, I have to leave it for others to consider.

Let me focus for the remainder of this post on the second aspect, namely whether it is even possible to appropriately simplify the content of the theory.  This month it is exactly 20 years since I lectured the first of many varied quantum courses I have taught at multiple universities. For most of that period I would have said it simply wasn’t possible to teach any but the most precocious of high school students nontrivial technical content of quantum theory – despite some brave attempts like Feynman’s use of arrows in QED: The Strange Theory of Light and Matter (a technique that cannot easily get at the mysteries of two-particle quantum theory, which is where the fun really starts).  I now believe, however, that it is actually possible.

A pedagogical method covering nontrivial quantum theory using only basic arithmetic

My experience talking about quantum theory to 12-15 year olds has only been in the idealized setting of spending a few hours with them at science fairs, camps and similar.  In fact it was on the way to a math camp for very young students, desperately trying to plan something non-trivial to engage them with, that I came up with a pedagogical method which I (and a few colleagues) have found does work.

I eventually wrote the method into a short book Q is for Quantum, but if you don’t want to purchase the book then here is a pdf of Part I, which takes a student knowing only the rules of basic arithmetic through to learning enough quantum computing that they can understand the Deutsch–Jozsa algorithm.  In fact not only can they do a calculation to see how it works in detail, they can appreciate conceptual nuances often under-appreciated in popular expositions, such as why gate speed doesn’t matter – it’s all about the number of steps, why classical computing also can have exponential growth in “possible states” so interference is critical, why quantum computers do not compute the uncomputable, and so on.
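For readers who want to cross-check against the usual vector formalism, here is a bare-bones numpy simulation of the Deutsch–Jozsa algorithm (standard textbook material in the standard formalism, not the book’s ball-diagram calculus).

```python
import numpy as np
from functools import reduce

def hadamard_all(n):
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    return reduce(np.kron, [H] * n)

def oracle(f, n):
    """The unitary |x>|y> -> |x>|y XOR f(x)>, on n input qubits plus one output qubit."""
    dim = 2 ** (n + 1)
    U = np.zeros((dim, dim))
    for x in range(2 ** n):
        for y in (0, 1):
            U[(x << 1) | (y ^ f(x)), (x << 1) | y] = 1
    return U

def deutsch_jozsa(f, n):
    state = np.zeros(2 ** (n + 1))
    state[1] = 1                                           # start in |0...0>|1>
    state = hadamard_all(n + 1) @ state                    # Hadamard every qubit
    state = oracle(f, n) @ state                           # one single query to f
    state = np.kron(hadamard_all(n), np.eye(2)) @ state    # Hadamard the input register
    p_all_zero = state[0] ** 2 + state[1] ** 2             # prob. the inputs all read 0
    return "constant" if np.isclose(p_all_zero, 1) else "balanced"

print(deutsch_jozsa(lambda x: 0, 3))                       # constant
print(deutsch_jozsa(lambda x: bin(x).count("1") % 2, 3))   # balanced (parity of x)
```

Either way the function is queried exactly once, which is the punchline of the algorithm.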

Before pointing out a few features of the approach, here are some rules I set myself while writing the book:

  • No analogies, no jargon – if it can’t be explained quantitatively then leave it out.
  • No math more than basic arithmetic and distribution across brackets.
  • Keep clear the distinction between mathematical objects and the observed physical events they are describing.
  • Be interpretationally neutral.
  • No soap opera: Motivate by intriguing with science, not by regurgitating quasi-mythological stories about the founders of the theory.
  • No using the word “quantum” in the main text! This was partly to amuse myself, but I also thought if I was succeeding in the other points then I should be able to avoid a word almost synonymous with “hard and mysterious.”

One of the main issues to confront is how to represent and explain superposition.  It is typical in popular expositions to draw analogies between a superposition of, say, a cat which is dead and a cat which is alive by saying it is dead “and” alive.  But if superposition was equivalent to logical “and”, or, for that matter, logical “or”, then quantum computing wouldn’t be interesting, and in this and other ways the analogy is ultimately misleading.  The approach I use is closer to the latter – an unordered list of possible states for a system (which is most like an “or”) can be used to represent a superposition. Using a list has some advantages – it is natural to apply a transformation to all elements of a list, for instance doubling the list of ingredients in a recipe.  More critically, given two independent lists of possibilities the new joint list of combined possibilities is a natural concept.  This makes teaching the equivalent of the Kronecker (tensor) product for multiple systems easy, something often a bit tricky even for undergrads to become comfortable with.
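As a quick numerical gloss of the joint-list point (written in the standard vector formalism rather than the book’s diagrams; the W/B labels and the ±1 signs here are just shorthand, not the book’s notation):

```python
import numpy as np

# One ball after a PETE box: an unordered list of possibilities, each tagged
# with a +1 or -1, here written as (sign, colour) pairs.
ball_1 = [(+1, "W"), (+1, "B")]
ball_2 = [(+1, "W"), (-1, "B")]

# The joint list: every possibility for ball 1 paired with every possibility
# for ball 2, signs multiplying.
joint = [(s1 * s2, c1 + c2) for s1, c1 in ball_1 for s2, c2 in ball_2]
print(joint)                      # [(1, 'WW'), (-1, 'WB'), (1, 'BW'), (-1, 'BB')]

# The same combining operation in the usual vector formalism is exactly np.kron:
print(np.kron([1, 1], [1, -1]))   # [ 1 -1  1 -1], matching the signs above
```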

Conceptually the weirdest part of the whole construction, particularly for someone biased by the standard formalism, is that I use a standard mathematical object (a negative or minus sign) applied to a diagram of a physical object (a black or white ball).  Moreover, positive and negative balls in a diagram can cancel out (interfere).  This greatly simplifies the exposition, by removing a whole level of abstraction in the standard theory (we do not need to use a vector containing entries whose specific ordering must be remembered in order to equate them to the physical objects).  While it initially seemed odd to me personally to do this, I have yet to have any young person think of it as any more weird than using the negative sign on a number.  And if it is always kept clear that drawing and manipulating the whole diagram is an abstract thing we do, which may or may not have any correspondence to what is “really going on” in the physical setups we are describing, then there really is no difference.

There are some subtleties about the whole approach – while the formalism is universal for quantum computing, it can only make use of unitary evolution that is proportional to a matrix with integer entries.  Thus the Hadamard gate (PETE box) is ok, the Controlled-NOT and Toffoli likewise, but a seemingly innocuous gate like the controlled-Hadamard is not capable of being incorporated (without adding a whole bunch of unintuitive and unjustified rules).  The fact that the approach covers a universal gate set means some amazing things can be explained in this simple diagrammatic language.  For example, the recent paper Quantum theory cannot consistently describe the use of itself, which led to considerable discussion on this blog, can be fully reproduced.  That is, a high school student can in principle understand the technical details of a contemporary argument between professional physicists.  I find this amazing.
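A quick numerical way to see why the controlled-Hadamard falls outside the scheme while the PETE box does not (a small check in the standard matrix formalism, not something from the book):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)          # the PETE box / Hadamard
CH = np.block([[np.eye(2), np.zeros((2, 2))],         # the controlled-Hadamard
               [np.zeros((2, 2)), H]])

print(np.sqrt(2) * H)      # [[ 1.  1.] [ 1. -1.]]: integer entries up to an overall scale
print(sorted(set(np.round(np.abs(CH[CH != 0]), 6).tolist())))   # [0.707107, 1.0]
# The controlled-Hadamard's nonzero entries come in two sizes whose ratio is
# sqrt(2), an irrational number, so no single overall constant can turn it into
# an integer matrix, which is why it can't be drawn in this ball language.
```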

Based on communication with readers I have come to realize the people at most risk of being confused by the book are actually those already with a little knowledge – someone who has done a year or two’s worth of undergraduate quantum courses, or someone who has taken things they read in pop-sci books too literally.  Initially, as I was developing the method, I thought it would be easy to keep “touching base” with the standard vector space formalism.  But in fact it becomes very messy to do so (and irrelevant for someone learning quantum theory for the first time).  In the end I dropped that goal, but now realize I need to develop some supplementary notes to help someone in that situation.

Q is for Quantum is certainly not designed to be used as a classroom text – if nothing else my particular style and choice of topics will not be to others’ tastes, and I haven’t included all the many, many simple examples and exercises I have students doing along with me in class when I actually teach this stuff.  It should be thought of as more a “proof of principle,” that the expository challenge can be met.  Several colleagues have used parts of these ideas already for teaching, and they have given me some great feedback.  As such I am planning on doing a revised and slightly expanded version at some point, so if you read it and have thoughts for improvement please send them to me.