Born Wednesday March 22, 2017, exactly at noon. 19.5 inches, 7 pounds.

I learned that Dana had gone into labor—unexpectedly early, at 37 weeks—just as I was waiting to board a redeye flight back to Austin from the It from Qubit complexity workshop at Stanford. I made it in time for the birth with a few hours to spare. Mother and baby appear to be in excellent health. So far, Daniel seems to be a relatively easy baby. Lily, his sister, is extremely excited to have a new playmate (though not one who does much yet).

I apologize that I haven’t been answering comments on the is-the-universe-a-simulation thread as promptly as I normally do. This is why.

The immediate impetus for Mandelbaum’s piece was a blog post by Sabine Hossenfelder, a physicist who will likely be familiar to regulars here in the nerdosphere. In her post, Sabine vents about the simulation speculations of philosophers like Nick Bostrom. She writes:

Proclaiming that “the programmer did it” doesn’t only not explain anything – it teleports us back to the age of mythology. The simulation hypothesis annoys me because it intrudes on the terrain of physicists. It’s a bold claim about the laws of nature that however doesn’t pay any attention to what we know about the laws of nature.

After hammering home that point, Sabine goes further, and says that the simulation hypothesis is almost *ruled out*, by (for example) the fact that our universe is Lorentz-invariant, and a simulation of our world by a discrete lattice of bits won’t reproduce Lorentz-invariance or other continuous symmetries.

In writing his post, Ryan Mandelbaum interviewed two people: Sabine and me.

I basically told Ryan that I agree with Sabine insofar as she argues that the simulation hypothesis is *lazy*—that it doesn’t pay its rent by doing real explanatory work, doesn’t even engage much with any of the deep things we’ve learned about the physical world—and disagree insofar as she argues that the simulation hypothesis faces some special difficulty because of Lorentz-invariance or other continuous phenomena in known physics. In short: blame it for being unfalsifiable rather than for being falsified!

Indeed, to whatever extent we believe the Bekenstein bound—and even more pointedly, to whatever extent we think the AdS/CFT correspondence says something about reality—we believe that in quantum gravity, any bounded physical system (with a short-wavelength cutoff, yada yada) lives in a Hilbert space of a finite number of qubits, perhaps ~10^{69} qubits per square meter of surface area. And as a corollary, if the cosmological constant is indeed constant (so that galaxies more than ~20 billion light years away are receding from us faster than light), then our entire observable universe can be described as a system of ~10^{122} qubits. The qubits would in some sense be the fundamental reality, from which Lorentz-invariant spacetime and all the rest would need to be recovered as low-energy effective descriptions. (I hasten to add: there’s of course nothing special about *qubits* here, any more than there is about bits in classical computation, compared to some other unit of information—nothing that says the Hilbert space dimension has to be a power of 2 or anything silly like that.) Anyway, this would mean that our observable universe could be simulated by a quantum computer—or even for that matter by a classical computer, to high precision, using a mere ~2^{10^122} time steps.
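To see where the ~10^{122} figure comes from, here’s a quick back-of-the-envelope check in Python (a sketch of my own; the ~16-billion-light-year horizon radius and the 10^{69}-qubits-per-square-meter figure are the rough values quoted above, not precise constants):

```python
import math

# Rough sanity check of the qubit-count estimate quoted above.
# Assumed inputs: cosmological horizon radius ~16 billion light years,
# and a holographic bound of ~10^69 qubits per square meter of area.

LY_IN_M = 9.46e15                  # meters per light year
r = 16e9 * LY_IN_M                 # horizon radius in meters (~1.5e26 m)
area = 4 * math.pi * r ** 2        # horizon surface area in m^2
qubits = 1e69 * area               # holographic bound on the qubit count

print(f"horizon area ~ {area:.1e} m^2")   # ~2.9e53 m^2
print(f"qubit bound  ~ {qubits:.1e}")     # ~2.9e122, i.e. ~10^122
```

Multiplying the horizon’s surface area by the holographic density lands squarely on ~10^{122}, matching the estimate in the text.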

Sabine might respond that AdS/CFT and other quantum gravity ideas are mere theoretical speculations, not solid and established like special relativity. But crucially, if you believe that the observable universe couldn’t be simulated by a computer even in principle—that it has no mapping to any system of bits or qubits—then at some point the speculative shoe shifts to the other foot. The question becomes: do you reject the Church-Turing Thesis? Or, what amounts to the same thing: do you believe, like Roger Penrose, that it’s possible to build devices in nature that solve the halting problem or other uncomputable problems? If so, how? But if not, then how exactly does the universe *avoid* being computational, in the broad sense of the term?

I’d write more, but by coincidence, right now I’m at an It from Qubit meeting at Stanford, where everyone is talking about how to map quantum theories of gravity to quantum circuits acting on finite sets of qubits, and the questions in quantum circuit complexity that are thereby raised. It’s tremendously exciting—the mixture of attendees is among the most stimulating I’ve ever encountered, from Lenny Susskind and Don Page and Daniel Harlow to Umesh Vazirani and Dorit Aharonov and Mario Szegedy to Google’s Sergey Brin. But it should surprise no one that, amid all the discussion of computation and fundamental physics, the question of whether the universe “really” “is” a simulation has barely come up. Why would it, when there are so many more fruitful things to ask? All I can say with confidence is that, if our world *is* a simulation, then whoever is simulating it (God, or a bored teenager in the metaverse) seems to have a clear preference for the 2-norm over the 1-norm, and for the complex numbers over the reals.

Prof. Aaronson, given your expertise, we’d be incredibly grateful for your feedback on a paper / report / grant proposal about quantum computing. To access the document in question, all you’ll need to do is create an account on our proprietary DigiScholar Portal system, a process that takes no more than 3 hours. If, at the end of that process, you’re told that the account setup failed, it might be because your browser’s certificates are outdated, or because you already have an account with us, or simply because our server is acting up, or some other reason. If you already have an account, you’ll of course need to remember your DigiScholar Portal ID and password, and not confuse them with the 500 other usernames and passwords you’ve created for similar reasons—ours required their own distinctive combination of upper and lowercase letters, numerals, and symbols. After navigating through our site to access the document, you’ll then be able to enter your DigiScholar Review, strictly adhering to our 15-part format, and keeping in mind that our system will log you out and delete all your work after 30 seconds of inactivity. If you have trouble, just call our helpline during normal business hours (excluding Wednesdays and Thursdays) and stay on the line until someone assists you. Most importantly, please understand that we can neither email you the document we want you to read, nor accept any comments about it by email. In fact, all emails to this address will be automatically ignored.

Every day, I seem to grow crustier than the last.

More than a decade ago, I resolved that I would no longer submit to or review for most for-profit journals, as a protest against the exorbitant fees that those journals charge academics in order to buy back access to our own work—work that we turn over to the publishers (copyright and all) and even review for them completely for free, with the publishers typically adding zero or even negative value. I’m happy that I’ve been able to keep that pledge.

Today, I’m proud to announce a new boycott, less politically important but equally consequential for my quality of life, and to recommend it to all of my friends. Namely: **as long as the world gives me any choice in the matter, I will never again struggle to log in to any organization’s website.** I’ll continue to devote a huge fraction of my waking hours to fielding questions from all sorts of people on the Internet, and I’ll do it cheerfully and free of charge. All I ask is that, if you have a question, or a document you want me to read, you email it! Or leave a blog comment, or stop by in person, or whatever—but in any case, don’t make me log in to anything other than Gmail or Facebook or WordPress or a few other sites that remain navigable by a senile 35-year-old who’s increasingly fixed in his ways. Even Google Docs and Dropbox are pushing it: I’ll give up *(on principle)* at the first sight of any login issue, and ask for just a regular URL or an attachment.

Oh, Skype no longer lets me log in either. Could I get to the bottom of that? Probably. But life is too short, and too precious. So if we must, we’ll use the phone, or Google Hangouts.

In related news, I will no longer patronize any haircut place that turns away walk-in customers.

Back when we were discussing the boycott of Elsevier and the other predatory publishers, I wrote that this was a rare case “when laziness and idealism coincide.” But the truth is more general: whenever my deepest beliefs and my desire to get out of work both point in the same direction, from here till the grave there’s not a force in the world that can turn me the opposite way.

The moon still waxes and wanes. Electrons remain bound to their nuclei. P≠NP proofs still fill my inbox. Squirrels still gather acorns. And—of course!—people continue to claim big quantum speedups using D-Wave devices, and those claims still require careful scrutiny.

With that preamble, I hereby offer you eight quantum computing news items.

**Cathy McGeoch Episode II: The Selby Comparison**

On January 17, a group from D-Wave—including Cathy McGeoch, who now works directly for D-Wave—put out a preprint claiming a factor-of-2500 speedup for the D-Wave machine (the new, 2000-qubit one) compared to the best classical algorithms. Notably, they wrote that the speedup persisted when they compared against simulated annealing, quantum Monte Carlo, and even the so-called Hamze-de Freitas-Selby (HFS) algorithm, which was often the classical victor in previous performance comparisons against the D-Wave machine.

Reading this, I was happy to see how far the discussion has advanced since 2013, when McGeoch and Cong Wang reported a factor-of-3600 speedup for the D-Wave machine, but then it turned out that they’d compared only against classical exact solvers rather than heuristics—a choice for which they were heavily criticized on this blog and elsewhere. (And indeed, that particular speedup disappeared once the classical computer’s shackles were removed.)

So, when people asked me this January about the *new* speedup claim—the one even against the HFS algorithm—I replied that, even though we’ve by now been around this carousel several times, I felt like the ball was now firmly in the D-Wave skeptics’ court, to reproduce the observed performance classically. And if, after a year or so, no one could, *that* would be a good time to start taking seriously that a D-Wave speedup might finally be here to stay—and to move on to the next question, of whether this speedup had anything to do with *quantum computation*, or only with the building of a piece of special-purpose optimization hardware.

**A&M: Annealing and Matching**

As it happened, it only took one month. On March 2, Salvatore Mandrà, Helmut Katzgraber, and Creighton Thomas put up a response preprint, pointing out that the instances studied by the D-Wave group in their most recent comparison are actually reducible to the minimum-weight perfect matching problem—and for that reason, are solvable in polynomial time on a classical computer. Much of Mandrà et al.’s paper just consists of graphs, wherein they plot the running times of the D-Wave machine and of a classical heuristic on the relevant instances—clearly all different flavors of exponential—and then Edmonds’ matching algorithm from the 1960s, which breaks away from the pack into polynomiality.

But let me bend over backwards to tell you the full story. Last week, I had the privilege of visiting Texas A&M to give a talk. While there, I got to meet Helmut Katzgraber, a condensed-matter physicist who’s one of the world experts on quantum annealing experiments, to talk to him about their new response paper. Helmut was clear in his prediction that, with only small modifications to the instances considered, one could see similar performance by the D-Wave machine while avoiding the reduction to perfect matching. With those future modifications, it’s possible that one really *might* see a D-Wave speedup that survived serious attempts by skeptics to make it go away.

But Helmut was equally clear in saying that, even in such a case, he sees no evidence at present that the speedup would be *asymptotic* or *quantum-computational* in nature. In other words, he thinks the existing data is well explained by the observation that we’re comparing D-Wave against classical algorithms for Ising spin minimization problems on Chimera graphs, and D-Wave has heroically engineered an expensive piece of hardware *specifically for Ising spin minimization problems on Chimera graphs and basically nothing else*. If so, then the prediction would be that such speedups as can be found are unlikely to extend either to more “practical” optimization problems—which need to be embedded into the Chimera graph with considerable losses—or to better scaling behavior on large instances. (As usual, as long as the comparison is against the *best* classical algorithms, and as long as we grant the classical algorithm the same *non-quantum* advantages that the D-Wave machine enjoys, such as classical parallelism—as Rønnow et al. advocated.)

Incidentally, my visit to Texas A&M was partly an “apology tour.” When I announced on this blog that I was moving from MIT to UT Austin, I talked about the challenge and excitement of setting up a quantum computing research center in a place that currently had little quantum computing for hundreds of miles around. This thoughtless remark inexcusably left out not only my friends at Louisiana State (like Jon Dowling and Mark Wilde), but even closer to home, Katzgraber and the others at Texas A&M. I felt terrible about this for months. So it gives me special satisfaction to have the opportunity to call out Katzgraber’s new work in this post. In football, UT and A&M were longtime arch-rivals, but when it comes to the appropriate level of skepticism to apply to quantum supremacy claims, the Texas Republic seems remarkably unified.

**When 15 MilliKelvin is Toasty**

In other D-Wave-related scientific news, on Monday night Tameem Albash, Victor Martin-Mayor, and Itay Hen put out a preprint arguing that, in order for quantum annealing to have any real chance of yielding a speedup over classical optimization methods, the temperature of the annealer should decrease at least like 1/log(n), where n is the instance size, and more likely like 1/n^{β} (i.e., as an inverse power law).

If this is correct, then cold as the D-Wave machine is, at 0.015 degrees or whatever above absolute zero, it still wouldn’t be cold enough to see a scalable speedup, at least not without quantum fault-tolerance, something that D-Wave has so far eschewed. With no error-correction, *any* constant temperature that’s above zero would cause dangerous level-crossings up to excited states when the instances get large enough. Only a temperature that actually converged to zero as the problems got larger would suffice.
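To make the scaling requirement concrete, here’s a toy numerical sketch (the constant c below is a made-up assumption of mine, not a number from the paper; the point is only that *any* fixed temperature eventually violates a T(n) = c/ln(n) requirement):

```python
import math

# Hypothetical requirement: the annealer must stay below T(n) = c / ln(n).
# A fixed operating temperature T0 then fails once n exceeds exp(c / T0).

T0 = 0.015   # fixed operating temperature in kelvin (~15 mK, as for D-Wave)
c = 0.1      # made-up constant in the T(n) = c/ln(n) requirement

for n in (10, 10**3, 10**6, 10**9):
    required = c / math.log(n)
    verdict = "ok" if T0 <= required else "too hot"
    print(f"n={n:>10}: need T <= {required:.4f} K -> fixed 15 mK is {verdict}")

# Crossover instance size beyond which 15 mK is no longer cold enough:
print(f"crossover at n ~ {math.exp(c / T0):.0f}")
```

With these made-up numbers, the fixed 15 mK temperature suffices for small instances but falls above the required threshold somewhere before n ~ 1000, which is the qualitative phenomenon the paper quantifies.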

Over the last few years, I’ve heard many experts make this exact same point in conversation, but this is the first time I’ve seen the argument spelled out in a paper, with explicit calculations (modulo assumptions) of the *rate* at which the temperature would need to go to zero for uncorrected quantum annealing to be a viable path to a speedup. I lack the expertise to evaluate the calculations myself, but any experts who’d like to share their insight in the comments section are “warmly” (har har) invited.

**“Their Current Numbers Are Still To Be Checked”**

As some of you will have seen, *The Economist* now has a sprawling 10-page cover story about quantum computing and other quantum technologies. I had some contact with the author while the story was in the works.

The piece covers a lot of ground and contains many true statements. It could be much worse.

But I take issue with two things.

First, *The Economist* claims: “What is notable about the effort [to build scalable QCs] now is that the challenges are no longer scientific but have become matters of engineering.” As John Preskill and others pointed out, this is pretty far from true, at least if we interpret the claim in the way most engineers and businesspeople would.

Yes, we know the rules of quantum mechanics, and the theory of quantum fault-tolerance, and a few promising applications; and the basic building blocks of QC have already been demonstrated in several platforms. But if (let’s say) someone were to pony up $100 billion, asking only for a universal quantum computer as soon as possible, I think the rational thing to do would be to spend initially on a frenzy of *basic research*: should we bet on superconducting qubits, trapped ions, nonabelian anyons, photonics, a combination thereof, or something else? (Even that is far from settled.) Can we invent better error-correcting codes and magic state distillation schemes, in order to push the resource requirements for universal QC down by three or four orders of magnitude? Which decoherence mechanisms will be relevant when we try to do this stuff at scale? And of course, which new quantum algorithms can we discover, and which new cryptographic codes resistant to quantum attack?

The second statement I take issue with is this:

“For years experts questioned whether the [D-Wave] devices were actually exploiting quantum mechanics and whether they worked better than traditional computers. Those questions have since been conclusively answered—yes, and sometimes”

I would instead say that the answers are:

- depends on what you mean by “exploit” (yes, there are quantum tunneling effects, but do they help you solve problems faster?), and
- no, the evidence remains weak to nonexistent that the D-Wave machine solves *anything* faster than a traditional computer—certainly if, by “traditional computer,” we mean a device that gets all the advantages of the D-Wave machine (e.g., classical parallelism, hardware heroically specialized to the one type of problem we’re testing on), but no quantum effects.

Shortly afterward, when discussing the race to achieve “quantum supremacy” (i.e., a clear quantum computing speedup for *some* task, not necessarily a useful one), the *Economist* piece hedges: “D-Wave has hinted it has already [achieved quantum supremacy], but has made similar claims in the past; their current numbers are still to be checked.”

To me, “their current numbers are still to be checked” deserves its place alongside “mistakes were made” among the great understatements of the English language—perhaps a fitting honor for *The Economist*.

**Defeat Device**

Some of you might also have seen that D-Wave announced a deal with Volkswagen, to use D-Wave machines for traffic flow. I had some advance warning of this deal, when reporters called asking me to comment on it. At least in the materials I saw, no evidence is discussed that the D-Wave machine actually solves whatever problem VW is interested in faster than it could be solved with a classical computer. Indeed, in a pattern we’ve seen repeatedly for the past decade, the question of such evidence is never even directly confronted or acknowledged.

So I guess I’ll say the same thing here that I said to the journalists. Namely, until there’s a paper or some other technical information, obviously there’s not much I can say about this D-Wave/Volkswagen collaboration. But it would be astonishing if quantum supremacy were achieved on an application problem of interest to a carmaker, at a time when scientists are still struggling to achieve that milestone even on contrived, artificial benchmarks, and when it seems repeatedly to have eluded D-Wave itself. In the previous such partnerships—such as that with Lockheed Martin—we can reasonably guess that no convincing evidence for quantum supremacy was found, because if it had been, it would’ve been trumpeted from the rooftops.

Anyway, I confess that I couldn’t resist adding a tiny snark—something about how, if these claims of amazing performance *were* found not to withstand an examination of the details, it would not be the first time in Volkswagen’s recent history.

**Farewell to a Visionary Leader—One Who Was Trash-Talking Critics on Social Media A Decade Before President Trump**

This isn’t really news, but since it happened since my last D-Wave post, I figured I should share. Apparently D-Wave’s outspoken and inimitable founder, Geordie Rose, left D-Wave to form a machine-learning startup (see D-Wave’s leadership page, where Rose is absent). I wish Geordie the best with his new venture.

**Martinis Visits UT Austin**

On Feb. 22, we were privileged to have John Martinis of Google visit UT Austin for a day and give the physics colloquium. Martinis concentrated on the quest to achieve quantum supremacy, in the near future, using sampling problems inspired by theoretical proposals such as BosonSampling and IQP, but tailored to Google’s architecture. He elaborated on Google’s plan to build a 49-qubit device within the next few years: basically, a 7×7 square array of superconducting qubits with controllable nearest-neighbor couplings. To a layperson, 49 qubits might sound unimpressive compared to D-Wave’s 2000—but the point is that these qubits will hopefully maintain coherence times *thousands of times* longer than the D-Wave qubits, and will also support arbitrary quantum computations (rather than only annealing). Obviously I don’t know whether Google will succeed in its announced plan, but *if* it does, I’m very optimistic about a convincing quantum supremacy demonstration being possible with this sort of device.

Perhaps most memorably, Martinis unveiled some spectacular data, which showed near-perfect agreement between Google’s earlier 9-qubit quantum computer and the theoretical predictions for a simulation of the Hofstadter butterfly (incidentally discovered by Douglas Hofstadter, of *Gödel, Escher, Bach* fame, when he was still a physics graduate student). My colleague Andrew Potter explained to me that the Hofstadter butterfly can’t be used to show quantum supremacy, because it’s mathematically equivalent to a system of non-interacting fermions, and can therefore be simulated in classical polynomial time. But it’s certainly an impressive calibration test for Google’s device.

**2000 Qubits Are Easy, 50 Qubits Are Hard**

Like the Google group, IBM has publicly set itself the ambitious goal of building a 50-qubit superconducting quantum computer in the near future (i.e., the next few years). Here in Austin, IBM held a quantum computing session at South by Southwest, so I went—my first exposure of any kind to SXSW. There were 10 or 15 people in the audience; the purpose of the presentation was to walk through the use of the IBM Quantum Experience in designing 5-qubit quantum circuits and submitting them first to a simulator and then to IBM’s actual superconducting device. (To the end user, of course, the real machine differs from the simulation only in that with the former, you can see the exact effects of decoherence.) Afterward, I chatted with the presenters, who were extremely friendly and knowledgeable, and relieved (they said) that I found nothing substantial to criticize in their summary of quantum computing.
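For the curious, here’s what such a circuit simulator does at its core, as a from-scratch pure-Python sketch of a 2-qubit Bell circuit (my own toy code, not IBM’s actual interface, which works through a graphical circuit composer):

```python
import math

def apply_1q(state, gate, target):
    """Apply a 2x2 gate to qubit `target` of a statevector (list of 2^n amps)."""
    new = [0j] * len(state)
    for i, amp in enumerate(state):
        bit = (i >> target) & 1
        for out in (0, 1):
            j = i ^ ((bit ^ out) << target)   # index with target bit set to `out`
            new[j] += gate[out][bit] * amp
    return new

def apply_cnot(state, control, target):
    """Flip `target` wherever `control` is 1, by swapping amplitude pairs."""
    new = list(state)
    for i in range(len(state)):
        if (i >> control) & 1:
            new[i ^ (1 << target)] = state[i]
    return new

H = [[1 / math.sqrt(2), 1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]

# Bell circuit: H on qubit 0, then CNOT(0 -> 1), starting from |00>
state = [1 + 0j, 0j, 0j, 0j]
state = apply_1q(state, H, 0)
state = apply_cnot(state, 0, 1)
print([round(abs(a) ** 2, 3) for a in state])  # [0.5, 0.0, 0.0, 0.5]
```

The real device runs the same circuit physically, which is why its output distribution differs from the simulator’s only by decoherence and other noise.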

Hope everyone had a great Pi Day and Ides of March.

I don’t expect this petition to have the slightest effect on the regime, but at least we should demonstrate to the world and to history that American academia didn’t take this silently.

I’m sure there were weeks, in February or March 1933, when the educated, liberal Germans commiserated with each other over the latest outrages of their new Chancellor, but consoled themselves that at least none of it was going to affect them *personally*.

This time, it’s taken just five days, since the hostile takeover of the US by its worst elements, for edicts from above to have actually hurt my life and (much more directly) the lives of my students, friends, and colleagues.

Today, we learned that Trump is suspending the issuance of US visas to people from seven majority-Islamic countries, including Iran (but strangely *not* Saudi Arabia, the cradle of Wahhabist terrorism—not that that would be morally justified either). This suspension *might* last just 30 days, but might also continue indefinitely—particularly if, as seems likely, the Iranian government thumbs its nose at whatever Trump demands that it do to get the suspension rescinded.

So the upshot is that, until further notice, science departments at American universities can no longer recruit PhD students from Iran—a country that, along with China, India, and a few others, has long been the source of some of our best talent. This will directly affect this year’s recruiting season, which is just now getting underway. (If Canada and Australia have any brains, they’ll snatch these students, and make the loss America’s.)

But what about the thousands of Iranian students who are already here? So far, no one’s rounding them up and deporting them. But their futures have suddenly been thrown into jeopardy.

Right now, I have an Iranian PhD student who came to MIT on a student visa in 2013. He started working with me two years ago, on the power of a rudimentary quantum computing model inspired by (1+1)-dimensional integrable quantum field theory. You can read our paper about it, with Adam Bouland and Greg Kuperberg, here. It so happens that this week, my student is visiting us in Austin and staying at our home. He’s spent the whole day pacing around, terrified about his future. His original plan, to do a postdoc in the US after he finishes his PhD, now seems impossible (since it would require a visa renewal).

Look: in the 11-year history of this blog, there have been only a few occasions when I felt so strongly about something that I stood my ground, even in the face of widespread attacks from people who I otherwise respected. One, of course, was when I spoke out for shy nerdy males, and for a vision of feminism broad enough to recognize their suffering as a problem. A second was when I was more blunt about D-Wave, and about its and its supporters’ quantum speedup claims, than some of my colleagues were comfortable with. But the remaining occasions almost all involved my defending the values of the United States, Israel, Zionism, or “the West,” or condemning Islamic fundamentalism, radical leftism, or the worldviews of such individuals as Noam Chomsky or my “good friend” Mahmoud Ahmadinejad.

Which is simply to say: I don’t think anyone on earth can accuse me of secret sympathies for the Iranian government.

But when it comes to student visas, I can’t see that my feelings about the mullahs have anything to do with the matter. We’re talking about people who happen to have been born in Iran, who came to the US to do math and science. Would we rather have these young scientists here, filled with gratitude for the opportunities we’ve given them, or back in Iran filled with justified anger over our having expelled them?

To the Trump regime, I make one request: if you ever decide that it’s the policy of the US government to deport my PhD students, then deport me first. I’m practically begging you: come to my house, arrest me, revoke my citizenship, and tear up the awards I’ve accepted at the White House and the State Department. I’d consider that to be the greatest honor of my career.

And to those who cheered Trump’s campaign in the comments of this blog: go ahead, let me hear you defend *this*.

**Update (Jan. 27, 2017):** To everyone who’s praised the “courage” that it took me to say this, thank you so much—but to be perfectly honest, it takes orders of magnitude less courage to say *this*, than to say something that any of your friends or colleagues might actually disagree with! The support has been totally overwhelming, and has reaffirmed my sense that the United States is now effectively two countries, an open and a closed one, locked in a cold Civil War.

Some people have expressed surprise that I’d come out so strongly for Iranian students and researchers, “given that they don’t always agree with my politics,” or given my unapologetic support for the founding principles (if not always the actions) of the United States and of Israel. For my part, I’m surprised that *they’re* surprised! So let me say something that might be clarifying.

I care about the happiness, freedom, and welfare of all the men and women who are actually working to understand the universe and build the technologies of the future, and of all the bright young people who want to join these quests, whatever their backgrounds and wherever they might be found—whether it’s in Iran or Israel, in India or China or right here in the US. The system of science is far from perfect, and we often discuss ways to improve it on this blog. But I have not the slightest interest in tearing down what we have now, or destroying the world’s current pool of scientific talent in some cleansing fire, in order to pursue someone’s mental model of what the scientific community used to look like in Periclean Athens—or for that matter, their fantasy of what it *would* look like in a post-gender post-racial communist utopia. I’m interested in the actual human beings doing actual science who I actually meet, or hope to meet.

Understand that, and a large fraction of all the political views that I’ve ever expressed on this blog, even ones that might seem to be in tension with each other, fall out as immediate corollaries.

(Related to that, some readers might be interested in a further explanation of my views about Zionism. See also my thoughts about liberal democracy, in response to numerous comments here by Curtis Yarvin a.k.a. Mencius Moldbug a.k.a. “Boldmug.”)

**Update (Jan. 29):** Here’s a moving statement from my student Saeed himself, which he asked me to share here.

This is not of my best interest to talk about politics. Not because I am scared but because I know little politics. I am emotionally affected like many other fellow human beings on this planet. But I am still in the US and hopefully I can pursue my degree at MIT. But many other talented friends of mine can’t. Simply because they came back to their hometowns to visit their parents. On this matter, I must say that like many of my friends in Iran I did not have a chance to see my parents in four years, my basic human right, just because I am from a particular nationality; something that I didn’t have any decision on, and that I decided to study in my favorite school, something that I decided when I was 15. When, like many other talented friends of mine, I was teaching myself mathematics and physics hoping to make big impacts in positive ways in the future. And I must say I am proud of my nationality – home is home wherever it is. I came to America to do science in the first place. I still don’t have any other intention, I am a free man, I can do science even in desert, if I have to. If you read history you’ll see scientists even from old ages have always been traveling.

As I said I know little about many things, so I just phrase my own standpoint. You should also talk to the ones who are really affected. A good friend of mine, Ahmad, who studies Mechanical engineering in UC Berkeley, came back to visit his parents in August. He is one of the most talented students I have ever seen in my life. He has been waiting for his student visa since then and now he is ultimately depressed because he cannot finish his degree. The very least the academic society can do is to help students like Ahmad finish their degrees even if it is from abroad. I can’t emphasize enough I know little about many things. But, from a business standpoint, this is a terrible deal for America. Just think about it. All international students in this country have been getting free education until 22, in the American point of reference, and now they are using their knowledge to build technology in the USA. Just do a simple calculation and see how much money this would amount to. In any case my fellow international students should rethink this deal, and don’t take it unless at the least they are treated with respect. Having said all of this I must say I love the people of America, I have had many great friends here, great advisors specially Scott Aaronson and Aram Harrow, with whom I have been talking about life, religion, freedom and my favorite topic the foundations of the universe. I am grateful for the education I received at MIT and I think I have something I didn’t have before. I don’t even hate Mr Trump. I think he would feel different if we have a cup of coffee sometime.

**Update (Jan. 31):** See also this post by Terry Tao.

**Update (Feb. 2):** If you haven’t been checking the comments on this post, come have a look if you’d like to watch me and others doing our best to defend the foundations of Enlightenment and liberal democracy against a regiment of monarchists and neoreactionaries, including the notorious Mencius Moldbug, as well as a guy named Jim who explicitly advocates abolishing democracy and appointing Trump as “God-Emperor” with his sons to succeed him. (Incidentally, which son? Is Ivanka out of contention?)

I find these people to be simply articulating, more clearly and logically than most, the worldview that put Trump into office and where it inevitably leads. And any of us who are horrified by it had better get over our incredulity, fast, and pick up the case for modernity and Enlightenment where Spinoza and Paine and Mill and all the others left off—because that’s what’s actually at stake here, and if we don’t understand that then we’ll continue to be blindsided.

]]>As part of her birthday festivities, and despite her packed schedule, Lily has graciously agreed to field a few questions from readers of this blog. You can ask about her parents, favorite toys, recent trip to Disney World, etc. Just FYI: to the best of my knowledge, Lily doesn’t have any special insight about computational complexity, although she can write the letters ‘N’ and ‘P’ and find them on the keyboard. Nor has she demonstrated much interest in politics, though she’s aware that many people are upset because a very bad man just became the president. Anyway, if you ask questions that are appropriate for a real 4-year-old girl, rather than a blog humor construct, there’s a good chance I’ll let them through moderation and pass them on to her!

Meanwhile, here’s a photo I took of UT Austin students protesting Trump’s inauguration beneath the iconic UT tower.

]]>**Update (Jan. 23)** By request, I’ve prepared a Kindle-friendly edition of this P vs. NP survey—a mere 260 pages!

Two years ago, I learned that John Nash—*that* John Nash—was, together with Michail Rassias, editing a volume about the great open problems in mathematics. And they wanted me to write the chapter about the P versus NP question—a question that Nash himself had come close to raising, in a prescient handwritten letter that he sent to the National Security Agency in 1955.

On the one hand, I knew I didn’t have time for such an undertaking, and am such a terrible procrastinator that, in *both* previous cases where I wrote a book chapter, I singlehandedly delayed the entire volume by months. But on the other hand, John Nash.

So of course I said yes.

What followed was a year in which Michail sent me increasingly panicked emails (and then phone calls) informing me that the whole volume was ready for the printer, *except* for my P vs. NP thing, and is there any chance I’ll have it by the end of the week? Meanwhile, I’m reading yet more papers about Karchmer-Wigderson games, proof complexity, time/space tradeoffs, elusive functions, and small-depth arithmetic circuits. P vs. NP, as it turns out, is now a *big* subject.

And in the middle of it, on May 23, 2015, John Nash and his wife Alicia were tragically killed on the New Jersey Turnpike, on their way back from the airport (Nash had just accepted the Abel Prize in Norway), when their taxi driver slammed into a guardrail.

But while Nash himself sadly wouldn’t be alive to see it, the volume was still going forward. And now we were effectively honoring Nash’s memory, so I *definitely* couldn’t pull out.

So finally, last February, after more months of struggle and delay, I sent Michail what I had, and it duly appeared in the volume Open Problems in Mathematics.

But I knew I wasn’t done: there was still sending my chapter out to experts to solicit their comments. This I did, and massively-helpful feedback started pouring in, creating yet more work for me. The thorniest section, by far, was the one about Geometric Complexity Theory (GCT): the program, initiated by Ketan Mulmuley and carried forward by a dozen or more mathematicians, that seeks to attack P vs. NP and related problems using a fearsome arsenal from algebraic geometry and representation theory. The experts told me, in no uncertain terms, that my section on GCT got things badly wrong—but they didn’t agree with each other about *how* I was wrong. So I set to work trying to make them happy.

And then I got sidetracked with the move to Austin and with other projects, so I set the whole survey aside: a year of sweat and tears down the toilet. Soon after that, Bürgisser, Ikenmeyer, and Panova proved a breakthrough “no-go” theorem, substantially changing the outlook for the GCT program, meaning yet more work for me if and when I ever returned to the survey.

Anyway, today, confined to the house with my sprained ankle, I decided that the perfect was the enemy of the good, and that I should just *finish* the damn survey and put it up on the web, so readers can benefit from it before the march of progress (we can hope!) renders it totally obsolete.

So here it is! All 116 pages, 268 bibliography entries, and 52,000 words.

For your convenience, here’s the abstract:

In 1955, John Nash sent a remarkable letter to the National Security Agency, in which—seeking to build theoretical foundations for cryptography—he all but formulated what today we call the P=?NP problem, considered one of the great open problems of science. Here I survey the status of this problem in 2017, for a broad audience of mathematicians, scientists, and engineers. I offer a personal perspective on what it’s about, why it’s important, why it’s reasonable to conjecture that P≠NP is both true and provable, why proving it is so hard, the landscape of related problems, and crucially, what progress has been made in the last half-century toward solving those problems. The discussion of progress includes diagonalization and circuit lower bounds; the relativization, algebrization, and natural proofs barriers; and the recent works of Ryan Williams and Ketan Mulmuley, which (in different ways) hint at a duality between impossibility proofs and algorithms.

Thanks so much to everyone whose feedback helped improve the survey. If you have additional feedback, feel free to share in the comments section! My plan is to incorporate the *next* round of feedback by the year 2100, if not earlier.

**Update (Jan. 4)** Bill Gasarch writes to tell me that László Babai has posted an announcement scaling back his famous “Graph Isomorphism in Quasipolynomial Time” claim. Specifically, Babai says that, due to an error discovered by Harald Helfgott, his graph isomorphism algorithm actually runs in about 2^{2^{O(√log n)}} time, rather than the originally claimed n^{polylog(n)}. This still beats the best previously-known running time for graph isomorphism (namely, 2^{O(√(n log n))}), and by a large margin, but not quite as large as before.

Babai pointedly writes:

I apologize to those who were drawn to my lectures on this subject solely because of the quasipolynomial claim, prematurely magnified on the internet in spite of my disclaimers.

Alas, my own experience has taught me the hard way that, on the Internet, it is do or do not. There is no disclaim.

In any case, I’ve already updated my P vs. NP survey to reflect this new development.

**Another Update (Jan. 10)** For those who missed it, Babai has another update saying that he’s fixed the problem, and the running time of his graph isomorphism algorithm is back to being quasipolynomial.

**Update (Jan. 19):** This moment—the twilight of the Enlightenment, the eve of the return of the human species back to the rule of thugs—seems like as good a time as any to declare my P vs. NP survey officially **done**. I.e., thanks so much to everyone who sent me suggestions for additions and improvements, I’ve implemented pretty much all of them, and I’m not seeking additional suggestions!

Another year, another annual *Edge* question, with its opportunity for hundreds of scientists and intellectuals (including yours truly) to pontificate, often about why their own field of study is the source of the most important insights and challenges facing humanity. This year’s question was:

**What scientific term or concept ought to be more widely known?**

With the example given of Richard Dawkins’s “meme,” which jumped into the general vernacular, becoming a meme itself.

My entry, about the notion of “state” (yeah, I tried to focus on the basics), is here.

This year’s question presented a particular challenge, which scientists writing for a broad audience might not have faced for generations. Namely: to what extent, if any, should your writing acknowledge the dark shadow of recent events? Does the Putinization of the United States render your little pet debates and hobbyhorses irrelevant? Or is the most defiant thing you can do to *ignore* the unfolding catastrophe, to continue building your intellectual sandcastle even as the tidal wave of populist hatred nears?

In any case, the instructions from *Edge* were clear: ignore politics. Focus on the eternal. But people interpreted that injunction differently.

One of my first ideas was to write about the Second Law of Thermodynamics, and to muse about how one of humanity’s tragic flaws is to take for granted the gargantuan effort needed to create and maintain even little temporary pockets of order. Again and again, people imagine that, if their local pocket of order isn’t working how they want, then they should smash it to pieces, since while admittedly that *might* make things even worse, there’s also at least 50/50 odds that they’ll magically improve. In reasoning thus, people fail to appreciate just how exponentially more numerous are the paths downhill, into barbarism and chaos, than are the few paths further up. So thrashing about randomly, with no knowledge or understanding, is *statistically certain* to make things worse: on this point thermodynamics, common sense, and human history are all in total agreement. The implications of these musings for the present would be left as exercises for the reader.

Anyway, I was then pleased when, in a case of convergent evolution, my friend and hero Steven Pinker wrote exactly that essay, so I didn’t need to.

There are many other essays that are worth a read, some of which allude to recent events but the majority of which don’t. Let me mention a few.

- Nicholas Humphrey on referential opacity: while I didn’t know the term before, this is precisely the reason why, even if P=NP, that still wouldn’t imply P^A=NP^A for all oracles A.
- Rebecca Newberger Goldstein on scientific realism.
- Dawkins himself on “The Genetic Book of the Dead.”
- Jim Holt on invariance, and why Einstein’s *real* greatest blunder was to call it “relativity theory” rather than “invariant theory.” (Holt stole another of my essay ideas!)
- Sean Carroll on Bayes’ Theorem.
- Seth Lloyd on the virial theorem.
- My former algorithms professor Jon Kleinberg on digital representation.
- Peter Norvig on counting.
- Bruce Schneier on class breaks.
- Joichi Ito on neurodiversity.
- Adam Waytz on the illusion of explanatory depth.
- Brian Eno on confirmation bias (maybe the shortest entry, but one of the best!).
- Seth Shostak on Fermi problems.
- Lee Smolin on variety.
- Jennifer Jacquet on the Anthropocene.
- Abigail Marsh on alloparenting.
- Steve Omohundro on costly signalling.
- Chiara Marletto on the notion of “impossible.”
- Elizabeth Wrigley-Field on length-biased sampling.
- Carlo Rovelli on relative information.
- Raphael Bousso on the cosmological constant.
- Max Tegmark on substrate independence.
- Politically incorrect section: Greg Cochran on the breeder’s equation and Helena Cronin on sex.
- Gregory Benford on antagonistic pleiotropy.
- Richard Thaler on premortems.
- Nancy Etcoff on supernormal stimuli.
- John Tooby on coalitional instincts.
- Kurt Gray on relative deprivation.
- Jason Wilkes on functional equations (while the content was fine, the things that he strangely calls “functional equations” should really be called “axioms” or “postulates”).
- Linda Wilbrecht on sleeper sensitive periods.

Let me now discuss some disagreements I had with a few of the essays.

- Donald Hoffman on the holographic principle. For the point he wanted to make, about the mismatch between our intuitions and the physical world, it seems to me that Hoffman could’ve picked pretty much *anything* in physics, from Galileo and Newton onward. What’s new about holography?
- Jerry Coyne on determinism. Coyne, who’s written many things I admire, here offers his version of an old argument that I tear my hair out every time I read. There’s no free will, Coyne says, *and therefore* we should treat criminals more lightly, e.g. by eschewing harsh punishments in favor of rehabilitation. Following tradition, Coyne never engages the obvious reply, which is: “sorry, to *whom* were you addressing that argument? To me, the jailer? To the judge? The jury? Voters? Were you addressing us as moral agents, for whom the concept of ‘should’ is relevant? Then why shouldn’t we address the criminals the same way?”
- Michael Gazzaniga on “The Schnitt.” Yes, it’s possible that things like the hard problem of consciousness, or the measurement problem in quantum mechanics, will never have a satisfactory resolution. But even if so, building a complicated verbal edifice whose sole purpose is to tell people not even to *look* for a solution, to be satisfied with two “non-overlapping magisteria” and a lack of any explanation for how to reconcile them, never struck me as a substantive contribution to knowledge. It wasn’t when Niels Bohr did it, and it’s not when someone today does it either.
- I had a related quibble with Amanda Gefter’s piece on “enactivism”: the view she takes as her starting point, that “physics proves there’s no third-person view of the world,” is controversial to put it mildly among those who know the relevant physics. (And even if we granted that view, surely a third-person perspective exists for the quasi-Newtonian world in which we evolved, and that’s relevant for the cognitive science questions Gefter then discusses.)
- Thomas Bass on information pathology. Bass obliquely discusses the propaganda, conspiracy theories, social-media echo chambers, and unchallenged lies that helped fuel Trump’s rise. He then locates the source of the problem in Shannon’s information theory (!), which told us how to quantify information, but failed to address questions about the information’s meaning or relevance. To me, this is almost *exactly* like blaming arithmetic because it only tells you how to add numbers, without caring whether they’re numbers of rescued orphans or numbers of bombs. Arithmetic is fine; the problem is with us.
- In his piece on “number sense,” Keith Devlin argues that the teaching of “rigid, rule-based” math has been rendered obsolete by computers, leaving only the need to teach high-level conceptual understanding. I partly agree and partly disagree, with the disagreement coming from firsthand knowledge of just how badly that lofty idea gets beaten to mush once it filters down to the grade-school level. I would say that the basic function of math education is to teach *clarity of thought*: does this statement hold for all positive integers, or not? Not how do you feel about it, but does it hold? If it holds, can you prove it? What other statements would it follow from? If it doesn’t hold, can you give a counterexample? (Incidentally, there are plenty of questions of this type for which humans still outperform the best available software!) Admittedly, pencil-and-paper arithmetic *is* both boring and useless—but if you never mastered anything like it, then you certainly wouldn’t be ready for the concept of an algorithm, or for asking higher-level questions about algorithms.
- Daniel Hook on PT-symmetric quantum mechanics. As far as I understand, PT-symmetric Hamiltonians are equivalent to ordinary Hermitian ones under similarity transformations. So this is a mathematical trick, perhaps a useful one—but it’s extremely misleading to talk about it as if it were a new physical theory that differed from quantum mechanics.
- Jared Diamond extols the virtues of common sense, of which there are indeed many—but alas, his example is that if a mathematical proof leads to a conclusion that your common sense tells you is wrong, then you shouldn’t waste time looking for the exact mistake. *Sometimes* that’s good advice, but it’s pretty terrible applied to Goodstein’s Theorem, the muddy children puzzle, the strategy-stealing argument for Go, or anything else that genuinely *is* shocking until your common sense expands to accommodate it. Math, like science in general, is a constant *dialogue* between formal methods and common sense, where sometimes it’s one that needs to get with the program and sometimes it’s the other.
- Hans Halvorson on matter. I take issue with Halvorson’s claim that quantum mechanics had to be discarded in favor of quantum field theory, because QM was inconsistent with special relativity. It seems much better to say: the thing that conflicts with special relativity, and that quantum field theory superseded, was a particular *application* of quantum mechanics, involving wavefunctions of N particles moving around in a non-relativistic space. The general principles of QM—unit vectors in complex Hilbert space, unitary evolution, the Born rule, etc.—survived the transition to QFT without the slightest change.

]]>

**“THE TALK”: My joint cartoon about quantum computing with Zach Weinersmith of SMBC Comics.**

Just to whet your appetite:

In case you’re wondering how this came about: after our mutual friend Sean Carroll introduced me and Zach for a different reason, the idea of a joint quantum computing comic just seemed too good to pass up. The basic premise—“The Talk”—was all Zach. I dutifully drafted some dialogue for him, which he then improved and illustrated. I.e., he did almost all the work (despite having a newborn competing for his attention!). Still, it was an honor for me to collaborate with one of the great visual artists of our time, and I hope you like the result. Beyond that, I’ll let the work speak for itself.

]]>(1) My former student Leonid Grinberg points me to an astonishing art form, which I somehow hadn’t known about: namely, music videos generated by executable files that fit in only 4K of memory. Some of these videos have to be seen to be believed. (See also this one.) Much like, let’s say, a small Turing machine whose behavior is independent of set theory, these videos represent exercises in applied (or, OK, recreational) Kolmogorov complexity: how far out do you need to go in the space of all computer programs before you find beauty and humor and adaptability and surprise?

Admittedly, Leonid explains to me that the rules allow these programs to call DirectX and Visual Studio libraries to handle things like the 3D rendering (with the libraries not counted toward the 4K program size). This makes the programs’ existence *merely* extremely impressive, rather than a sign of alien superintelligence.

In some sense, all the programming enthusiasts over the decades who’ve burned their free time and processor cycles on Conway’s Game of Life and the Mandelbrot set and so forth were captivated by the same eerie beauty showcased by the videos: that of data compression, of the vast unfolding of a simple deterministic rule. But I also feel like the videos add a bit extra: the 3D rendering, the music, the panning across natural or manmade-looking dreamscapes. What we have here is a wonderful resource for either an acid trip or an undergrad computability and complexity course.

(2) A week ago Igor Oliveira, together with my longtime friend Rahul Santhanam, released a striking paper entitled Pseudodeterministic Constructions in Subexponential Time. To understand what this paper does, let’s start with Terry Tao’s 2009 polymath challenge: namely, to *find a fast, deterministic method that provably generates large prime numbers*. Tao’s challenge still stands today: one of the most basic, simplest-to-state unsolved problems in algorithms and number theory.

To be clear, we already have a fast deterministic method to decide whether a *given* number is prime: that was the 2002 breakthrough by Agrawal, Kayal, and Saxena. We also have a fast *probabilistic* method to *generate* large primes: namely, just keep picking n-digit numbers at random, test each one, and stop when you find one that’s prime! And those methods can be made deterministic assuming far-reaching conjectures in number theory, such as Cramér’s Conjecture (though note that even the Riemann Hypothesis wouldn’t lead to a polynomial-time algorithm, but “merely” a faster exponential-time one).
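The randomized generate-and-test recipe really is that simple. Here’s a minimal Python sketch; note one substitution for practicality: the primality test below is Miller–Rabin (probabilistic, with negligible error probability at 40 rounds) rather than the deterministic AKS test mentioned above, and the function names are mine, not from any paper:

```python
import random

def is_probable_prime(n, rounds=40):
    """Miller-Rabin primality test (probabilistic; AKS would make this step deterministic)."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    # Write n - 1 as d * 2^r with d odd.
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)          # a^d mod n, via fast modular exponentiation
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False          # a witnesses that n is composite
    return True

def random_prime(digits):
    """Sample random `digits`-digit odd numbers until one passes the test.
    By the Prime Number Theorem, only ~ln(10^digits) tries are expected."""
    lo, hi = 10 ** (digits - 1), 10 ** digits
    while True:
        candidate = random.randrange(lo, hi) | 1   # force odd
        if is_probable_prime(candidate):
            return candidate
```

The point of the Oliveira-Santhanam result is precisely to tame the two sources of randomness here: which candidates get sampled, and (if you insist on Miller-Rabin rather than AKS) the test itself.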

But, OK, what if you want a 5000-digit prime number, and you want it *now*: provably, deterministically, and fast? That was Tao’s challenge. The new paper by Oliveira and Santhanam doesn’t quite solve it, but it makes some exciting progress. Specifically, it gives a deterministic algorithm to generate n-digit prime numbers, with merely the following four caveats:

- The algorithm isn’t polynomial time, but subexponential (2^{n^{o(1)}}) time.
- The algorithm isn’t deterministic, but *pseudodeterministic* (a concept introduced by Gat and Goldwasser). That is, the algorithm uses randomness, but it almost always succeeds, and it outputs the *same* n-digit prime number in every case where it succeeds.
- The algorithm might not work for all input lengths n, but merely for infinitely many of them.
- Finally, the authors can’t quite say what the algorithm is—they merely prove that it *exists*! If there’s a huge complexity collapse, such as ZPP=PSPACE, then the algorithm is one thing, while if not then the algorithm is something else.

Strikingly, Oliveira and Santhanam’s advance on the polymath problem is pure complexity theory: hitting sets and pseudorandom generators and win-win arguments and stuff like that. Their paper uses absolutely nothing specific to the prime numbers, except the facts that (a) there are lots of them (the Prime Number Theorem), and (b) we can efficiently decide whether a given number is prime (the AKS algorithm). It seems almost certain that one could do better by exploiting more about primes.

(3) I’m in Lyon, France right now, to give three quantum computing and complexity theory talks. I arrived here today from London, where I gave another two lectures. So far, the trip has been phenomenal, my hosts gracious, the audiences bristling with interesting questions.

But getting from London to Lyon also taught me an important life lesson that I wanted to share: **never fly EasyJet.** Or at least, if you fly one of the European “discount” airlines, realize that you get what you pay for (I’m told that Ryanair is even worse). These airlines have a fundamentally dishonest business model, based on selling impossibly cheap tickets, but then forcing passengers to check even tiny bags and charging exorbitant fees for it, counting on snagging enough travelers who just naïvely clicked “yes” to whatever would get them from point A to point B at a certain time, assuming that all airlines followed more-or-less similar rules. Which might not be so bad—it’s only money—if the minuscule, overworked staff of these quasi-airlines didn’t *also* treat the passengers like beef cattle, barking orders and berating people for failing to obey rules that one could log hundreds of thousands of miles on normal airlines without ever once encountering. Anyway, if the airlines won’t warn you, then *Shtetl-Optimized* will.