Archive for the ‘Rage Against Doofosity’ Category

Just says in P

Wednesday, April 17th, 2019

Recently a Twitter account called justsaysinmice started up. The only thing this account does is repost breathless news articles about medical research breakthroughs that fail to mention that the effect in question was observed only in mice, and then add the words “IN MICE” to them. Simple concept, but it already seems to be changing the conversation about science reporting.

It occurred to me that we could do something analogous for quantum computing. While my own deep-seated aversion to Twitter prevents me from doing it myself, which of my readers is up for starting an account that just reposts one overhyped QC article after another, while appending the words “A CLASSICAL COMPUTER COULD ALSO DO THIS” to each one?

Can we reverse time to before this hypefest started?

Friday, March 15th, 2019

The purpose of this post is mostly just to signal-boost Konstantin Kakaes’s article in MIT Technology Review, entitled “No, scientists didn’t just ‘reverse time’ with a quantum computer.” The title pretty much says it all—but if you want more, you should read the piece, which includes the following droll quote from some guy calling himself “Director of the Quantum Information Center at the University of Texas at Austin”:

If you’re simulating a time-reversible process on your computer, then you can ‘reverse the direction of time’ by simply reversing the direction of your simulation. From a quick look at the paper, I confess that I didn’t understand how this becomes more profound if the simulation is being done on IBM’s quantum computer.

Incredibly, the time-reversal claim has now gotten uncritical attention in Newsweek, Discover, Cosmopolitan, my Facebook feed, and elsewhere—hence this blog post, which has basically no content except “the claim to have ‘reversed time,’ by running a simulation backwards, is exactly as true and as earth-shattering as that description makes it sound.”

If there’s anything interesting here, I suppose it’s just that “scientists use a quantum computer to reverse time” is one of the purest examples I’ve ever seen of a scientific claim that basically amounts to a mind-virus or meme optimized for sharing on social media—discarding all nontrivial “science payload” as irrelevant to its propagation.

Airport idiocy

Wednesday, November 28th, 2018

On Sunday, I returned to Austin with Dana and the kids from Thanksgiving in Pennsylvania.  The good news is that I didn’t get arrested this time, didn’t mistake any tips for change, and didn’t even miss the flight!  But I did experience two airports that changed decisively for the worse.

In Newark Terminal C—i.e., one of the most important terminals of one of the most important airports in the world—there’s now a gigantic wing without a single restaurant or concession stand that, quickly and for a sane price, serves the sort of food that a child (say) might plausibly want to eat.  No fast food, not even an Asian place with rice and teriyaki to go.  Just one upscale eatery after the next, with complicated artisanal foods at brain-exploding prices, and—crucially—“servers” who won’t even acknowledge or make eye contact with the customers, because you have to do everything through a digital ordering system that gives you no idea how long the food might take to be ready, and whether your flight is going to board first.  The experience was like waking up in some sci-fi dystopia, where all the people have been removed from a familiar environment and replaced with glassy-eyed cyborgs.  And had we not thought to pack a few snacks with us, our kids would’ve starved.

Based on this and other recent experiences, I propose the following principle: if a customer’s digitally-mediated order to your company is eventually going to need to get processed by a human being anyhow—a fallible human who could screw things up—and if you’re less competent at designing user interfaces than Amazon (which means: anyone other than Amazon), then you must make it easy for the customer to talk to one of the humans behind the curtain.  Besides making the customer happy, such a policy is good business, since when you do screw things up due to miscommunications caused by poor user interfaces—and you will—it will be on you to fix things anyway, which will eat into your profit margin.  To take another example, besides Newark Terminal C, all these comments apply with 3000% force to the delivery service DoorDash.

Returning to airports, though: whichever geniuses ruined Terminal C at Newark are amateurs compared to those in my adopted home city of Austin.  Austin-Bergstrom International Airport (ABIA) chose Thanksgiving break—i.e., the busiest travel time of the year—to roll out a universally despised redesign where you now need to journey for an extra 5-10 minutes (or 15 with screaming kids in tow), up and down elevators and across three parking lots, to reach the place where taxis and Ubers are.  The previous system was that you simply walked out of the terminal, crossed one street, and the line of taxis was there.

Supposedly this is to “reduce congestion” … except that, compared to other airports, ABIA never had any significant congestion caused by taxis.  I’d typically be the only person walking to them at a given time, or I’d join a line of just 3 or 4 people.  Nor does this do anything for the environment, since the city of Austin has no magical alternative, no subway or monorail to whisk you from the airport to downtown.  Just as many people will need a taxi or Uber as before; the only difference is that they’ll need to go ten times further out of their way than they would at an airport ten times busier.  For new visitors, this means their first experience of Austin will be one of confusion and anger; for Austin residents who fly a few times per month, it means that days or weeks have been erased from their lives.  From the conversations I’ve had so far, it appears that every single passenger of ABIA, and every single taxi and Uber driver, is livid about the change.  With one boneheaded decision, ABIA singlehandedly made Austin a less attractive place to live and work.

Postscript I.  But if you’re a prospective grad student, postdoc, or faculty member, you should still come to UT!  The death of reason, and the triumph of the blank-faced bureaucrats, is a worldwide problem, not something in any way unique to Austin.

Postscript II.  No, I don’t harbor any illusions that posts like this, or anything else I can realistically say or do, will change anything for the better, at my local airport let alone in the wider world.  Indeed, I sometimes wonder whether, for the bureaucrats, the point of ruining facilities and services that thousands rely on is precisely to grind down people’s sense of autonomy, to make them realize the futility of argument and protest.  Even so, if someone responsible for the doofus decisions in question happened to come across this post, and if they felt even the tiniest twinge of fear or guilt, felt like their victory over common sense wouldn’t be quite as easy or painless as they’d hoped—well, that would be reason enough for the post.

Review of Vivek Wadhwa’s Washington Post column on quantum computing

Tuesday, February 13th, 2018

Various people pointed me to a Washington Post piece by Vivek Wadhwa, entitled “Quantum computers may be more of an imminent threat than AI.”  I know I’m late to the party, but in the spirit of Pete Wells’ famous New York Times “review” of Guy Fieri’s now-closed Times Square restaurant, I have a few questions that have been gnawing at me:

Mr. Wadhwa, when you decided to use the Traveling Salesman Problem as your go-to example of a problem that quantum computers can solve quickly, did the thought ever cross your mind that maybe you should look this stuff up first—let’s say, on Wikipedia?  Or that you should email one person—just one, anywhere on the planet—who works in quantum algorithms?

When you wrote of the Traveling Salesman Problem that “[i]t would take a laptop computer 1,000 years to compute the most efficient route between 22 cities”—how confident are you about that?  Willing to bet your house?  Your car?  How much would it blow your mind if I told you that a standard laptop, running a halfway decent algorithm, could handle 22 cities in a fraction of a second?
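For concreteness: the classic Held-Karp dynamic program solves TSP exactly in O(n²·2ⁿ) time, which for n = 22 is roughly 2×10⁹ table updates, a couple of seconds for a compiled implementation, while dedicated solvers like Concorde dispatch 22 cities essentially instantly. Here’s a minimal sketch (pure Python, so it crawls beyond ~18 cities; the random 15-city instance is just for illustration):

```python
import itertools, random

def held_karp(dist):
    """Length of the exact shortest tour visiting every city once."""
    n = len(dist)
    # best[(mask, j)] = cheapest path from city 0 through the set of cities
    # encoded by bitmask `mask` (over cities 1..n-1), ending at city j
    best = {(1 << j, j): dist[0][j] for j in range(1, n)}
    for size in range(2, n):
        for S in itertools.combinations(range(1, n), size):
            mask = sum(1 << j for j in S)
            for j in S:
                best[(mask, j)] = min(best[(mask ^ (1 << j), k)] + dist[k][j]
                                      for k in S if k != j)
    full = (1 << n) - 2  # all of cities 1..n-1
    return min(best[(full, j)] + dist[j][0] for j in range(1, n))

random.seed(0)
n = 15  # n = 22 wants a compiled language, but the asymptotics are the same
pts = [(random.random(), random.random()) for _ in range(n)]
dist = [[((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5 for bx, by in pts]
        for ax, ay in pts]
print(held_karp(dist))  # optimal tour length, in a second or two
```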

When you explained that quantum computing is “equivalent to opening a combination lock by trying every possible number and sequence simultaneously,” where did this knowledge come from?  Did it come from the same source you consulted before you pronounced the death of Bitcoin … in January 2016?

Had you wanted to consult someone who knew the first thing about quantum computing, the subject of your column, would you have been able to use a search engine to find one?  Or would you have simply found another “expert,” in the consulting or think-tank worlds, who “knew” the same things about quantum computing that you do?

Incidentally, when you wrote that quantum computing “could pose a greater burden on businesses than the Y2K computer bug did toward the end of the ’90s,” were you trying to communicate how large the burden might be?

And when you wrote that

[T]here is substantial progress in the development of algorithms that are “quantum safe.” One promising field is matrix multiplication, which takes advantage of the techniques that allow quantum computers to be able to analyze so much information.

—were you generating random text using one of those Markov chain programs?  If not, then what were you referring to?

Would you agree that the Washington Post has been a leader in investigative journalism exposing Trump’s malfeasance?  Do you, like me, consider them one of the most important venues on earth for people to be able to trust right now?  How does it happen that the Washington Post publishes a quantum computing piece filled with errors that would embarrass a high-school student doing a term project (and we won’t even count the reference to Stephen “Hawkings”—that’s a freebie)?

Were the fact-checkers home with the flu?  Did they give your column a pass simply because it was “perspective” rather than news?  Or did they trust you as a widely-published technology expert?  How does one become such an expert, anyway?

Thanks!


Update (Feb. 21): For casual readers, Vivek Wadhwa quickly came into the comments section to try to defend himself—before leaving in a huff as a chorus of commenters tried to explain why he was wrong. As far as I know, he has not posted any corrections to his Washington Post piece. Wadhwa’s central defense was that he was simply repeating what Michelle Simmons, a noted quantum computing experimentalist in Australia, said in various YouTube talks—which turns out to be largely true (though Wadhwa said explicitly that quantum computers could efficiently solve TSP, while Simmons mostly left this as an unstated implication). As a result, while Wadhwa should obviously have followed the journalistic practice of checking incredible-sounding claims—on Wikipedia if nowhere else!—before repeating them in the Washington Post, I now feel that Simmons shares in the responsibility for this. As John Preskill tweeted, an excellent lesson to draw from this affair is that everyone in our field needs to be careful to say things that are true when speaking to the public.

Quickies

Monday, December 4th, 2017

Updates (Dec. 5): The US Supreme Court has upheld Trump’s latest travel ban. I’m grateful to all the lawyers who have thrown themselves in front of the train of fascism, desperately trying to slow it down—but I could never, ever have been a lawyer myself. Law is fundamentally a make-believe discipline. Sure, there are times when it involves reason and justice, possibly even resembles mathematics—but then there are times when the only legally correct thing to say is, “I guess that, contrary to what I thought, the Establishment Clause of the First Amendment does let you run for president promising to discriminate against a particular religious group, and then find a pretext under which to do it. The people with the power to decide that question have decided it.” I imagine that I’d last about half a day before tearing up my law-school diploma in disgust, which is surely a personality flaw on my part.

In happier news, many of you may have seen that papers by the groups of Chris Monroe and of Misha Lukin, reporting ~50-qubit experiments with trapped ions and optical lattices respectively, have been published back-to-back in Nature. (See here and here for popular summaries.) As far as I can tell, these papers represent an important step along the road to a clear quantum supremacy demonstration. Ideally, one wants a device to solve a well-defined computational problem (possibly a sampling problem), and also highly-optimized classical algorithms for solving the same problem and for simulating the device, which both let one benchmark the device’s performance and verify that the device is solving the problem correctly. But in a curious convergence, the work of the Monroe and Lukin groups suggests that this can probably be achieved with trapped ions and/or optical lattices at around the same time that Google and IBM are closing in on the goal with superconducting circuits.


As everyone knows, the flaming garbage fire of a tax bill has passed the Senate, thanks to the spinelessness of John McCain, Lisa Murkowski, Susan Collins, and Jeff Flake.  The fate of American higher education will now be decided behind closed doors, in the technical process of “reconciling” the House bill (which includes the crippling new tax on PhD students) with the Senate bill (which doesn’t—that one merely guts a hundred other things).  It’s hard to imagine that this particular line item will occasion more than about 30 seconds of discussion.  But, I dunno, maybe calling your Senator or Representative could help.  Me, I left a voicemail message with the office of Texas Senator Ted Cruz, one that I’m confident Cruz and his staff will carefully consider.

Here’s talk show host Seth Meyers (scroll to 5:00-5:20):

“By 2027, half of all US households would pay more in taxes [under the new bill].  Oh my god.  Cutting taxes was the one thing Republicans were supposed to be good at.  What’s even the point of voting for a Republican if they’re going to raise your taxes?  That’s like tuning in to The Kardashians only to see Kourtney giving a TED talk on quantum computing.”


Speaking of which, you can listen to an interview with me about quantum computing, on a podcast called Data Skeptic. We discuss the basics and then the potential for quantum machine learning algorithms.


I got profoundly annoyed by an article called The Impossibility of Intelligence Explosion by François Chollet.  Citing the “No Free Lunch Theorem”—i.e., the (trivial) statement that you can’t outperform brute-force search on random instances of an optimization problem—to claim anything useful about the limits of AI is not a promising sign.  In this case, Chollet then goes on to argue that most intelligence doesn’t reside in individuals but rather in culture; that there are hard limits to intelligence and to its usefulness; that we know of those limits because people with stratospheric intelligence don’t achieve correspondingly extraordinary results in life [von Neumann? Newton? Einstein? –ed.]; and finally, that recursively self-improving intelligence is impossible because we, humans, don’t recursively improve ourselves.  Scattered throughout the essay are some valuable critiques, but nothing comes anywhere close to establishing the impossibility advertised in the title.  Like, there’s a standard in CS for what it takes to show something’s impossible, and Chollet doesn’t even reach the same galaxy as that standard.  The certainty that he exudes strikes me as wholly unwarranted, just as much as (say) the near-certainty of a Ray Kurzweil on the other side.
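For reference, here (paraphrasing from memory) is roughly what the Wolpert-Macready No Free Lunch theorem actually says: for any two search algorithms a₁ and a₂ that never revisit points,

$$ \sum_{f} P\left(d_m^y \mid f, m, a_1\right) \;=\; \sum_{f} P\left(d_m^y \mid f, m, a_2\right), $$

where the sum ranges over all possible objective functions f : X → Y, and d_m^y is the sequence of m objective values the algorithm observes. Averaged over every conceivable problem, no algorithm beats any other; which by itself says nothing about the highly structured problems that intelligence, artificial or otherwise, actually faces.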

I suppose this is as good a place as any to say that my views on AI risk have evolved.  A decade ago, it was far from obvious that known methods like deep learning and reinforcement learning, merely run with much faster computers and on much bigger datasets, would work as spectacularly well as they’ve turned out to work, on such a wide variety of problems, including beating all humans at Go without needing to be trained on any human game.  But now that we know these things, I think intellectual honesty requires updating on them.  And indeed, when I talk to the AI researchers whose expertise I trust the most, many, though not all, have updated in the direction of “maybe we should start worrying.”  (Related: Eliezer Yudkowsky’s There’s No Fire Alarm for Artificial General Intelligence.)

Who knows how much of the human cognitive fortress might fall to a few more orders of magnitude in processing power?  I don’t—not in the sense of “I basically know but am being coy,” but really in the sense of not knowing.

To be clear, I still think that by far the most urgent challenges facing humanity are things like: resisting Trump and the other forces of authoritarianism, slowing down and responding to climate change and ocean acidification, preventing a nuclear war, preserving what’s left of Enlightenment norms.  But I no longer put AI too far behind that other stuff.  If civilization manages not to destroy itself over the next century—a huge “if”—I now think it’s plausible that we’ll eventually confront questions about intelligences greater than ours: do we want to create them?  Can we even prevent their creation?  If they arise, can we ensure that they’ll show us more regard than we show chimps?  And while I don’t know how much we can say about such questions that’s useful, without way more experience with powerful AI than we have now, I’m glad that a few people are at least trying to say things.

But one more point: given the way civilization seems to be headed, I’m actually mildly in favor of superintelligences coming into being sooner rather than later.  Like, given the choice between a hypothetical paperclip maximizer destroying the galaxy, versus a delusional autocrat burning civilization to the ground while his supporters cheer him on and his opponents fight amongst themselves, I’m just about ready to take my chances with the AI.  Sure, superintelligence is scary, but superstupidity has already been given its chance and been found wanting.


Speaking of superintelligences, I strongly recommend an interview of Ed Witten by Quanta magazine’s Natalie Wolchover: one of the best interviews of Witten I’ve read.  Some of Witten’s pronouncements still tend toward the oracular—i.e., we’re uncovering facets of a magnificent new theoretical structure, but it’s almost impossible to say anything definite about it, because we’re still missing too many pieces—but in this interview, Witten does stick his neck out in some interesting ways.  In particular, he speculates (as Einstein also did, late in life) about whether physics should be reformulated without any continuous quantities.  And he reveals that he’s recently been rereading Wheeler’s old “It from Bit” essay, because: “I’m trying to learn about what people are trying to say with the phrase ‘it from qubit.'”


I’m happy to report that a group based mostly in Rome has carried out the first experimental demonstration of PAC-learning of quantum states, applying my 2006 “Quantum Occam’s Razor Theorem” to reconstruct optical states of up to 6 qubits.  Better yet, they insisted on adding me to their paper!
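For those wondering what that theorem says: suppressing the dependence on the accuracy and confidence parameters ε, γ, δ, it guarantees that an n-qubit state can be PAC-learned, with respect to any probability distribution over two-outcome measurements, from a number of sample measurements that grows only linearly with n,

$$ m \;=\; O\!\left(n \cdot \mathrm{poly}\left(1/\varepsilon,\, 1/\gamma,\, \log(1/\delta)\right)\right), $$

as opposed to the ~4ⁿ parameters that full quantum state tomography needs to pin down.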


I was at Cornell all of last week to give the Messenger Lectures: six talks in all (!!), if you include the informal talks that I gave at student houses (including Telluride House, where I lived as a Cornell undergrad from 1998 to 2000).  The subjects were my usual beat (quantum computing, quantum supremacy, learnability of quantum states, firewalls and AdS/CFT, big numbers).  Intimidatingly, the Messenger Lectures are the series in which Richard Feynman presented The Character of Physical Law in 1964, and in which many others (Eddington, Oppenheimer, Pauling, Weinberg, …) set a standard that my crass humor couldn’t live up to in a trillion years.  Nevertheless, thanks so much to Paul Ginsparg for hosting my visit, and for making it both intellectually stimulating and a trip down memory lane, with meetings with many of the professors from way back when who helped to shape my thinking, including Bart Selman, Jon Kleinberg, and Lillian Lee.  Cornell is much as I remember it from half a lifetime ago, except that they must’ve made the slopes twice as steep, since I don’t recall so much huffing and puffing on my way to class each morning.

At one of the dinners, my hosts asked me about the challenges of writing a blog when people on social media might vilify you for what you say.  I remarked that it hasn’t been too bad lately—indeed that these days, to whatever extent I write anything ‘controversial,’ mostly it’s just inveighing against Trump.  “But that is scary!” someone remarked.  “You live in Texas now!  What if someone with a gun got angry at you?”  I replied that the prospect of enraging such a person doesn’t really keep me awake at night, because it seems like the worst they could do would be to shoot me.  By contrast, if I write something that angers leftists, they can do something far scarier: they can make me feel guilty!


I’ll be giving a CS colloquium at Georgia Tech today, then attending workshops in Princeton and NYC the rest of the week, so my commenting might be lighter than usual … but yours need not be.

The destruction of graduate education in the United States

Friday, November 17th, 2017

If and when you’ve emerged from your happiness bubble to read the news, you’ll have seen (at least if you live in the US) that the cruel and reckless tax bill has passed the House of Representatives, and remains only to be reconciled with an equally-vicious Senate bill and then voted on by the Republican-controlled Senate.  The bill will add about $1.7 trillion to the national debt and raise taxes for about 47.5 million people, all in order to deliver a massive windfall to corporations, and to wealthy estates that already pay some of the lowest taxes in the developed world.

In a still-functioning democracy, those of us against such a policy would have an intellectual obligation to seek out the strongest arguments in favor of the policy and try to refute them.  By now, though, it seems to me that the Republicans hold the public in such contempt, and are so sure of the power of gerrymandering and voter restrictions to protect themselves from consequences, that they didn’t even bother to bring anything to the debate more substantive than the schoolyard bully’s “stop punching yourself.”  I guess some of them still repeat the fairytale about the purpose of tax cuts for the super-rich being to trickle down and help everyone else—but can even they advance that “theory” anymore without stifling giggles?  Mostly, as far as I can tell, they just brazenly deny that they’re doing what they obviously are doing: i.e., gleefully setting on fire anything that anyone, regardless of their ideology, could recognize as the national interest, in order to enrich a small core of supporters.

But none of that is what interests me in this post—because it’s “merely” as bad as, and no worse than, what one knew to expect when a coalition of thugs, kleptocrats, and white-nationalist demagogues seized control of Hamilton’s and Jefferson’s experiment.  My concern here is only with the “kill shot” that the Republicans have now aimed, with terrifying precision, at the system that’s kept American academic science the envy of the world in spite of the growing dysfunction all around it.

As you’ve probably heard, one of the ways Republicans intend to pay for their tax giveaway is to change the tax code so that graduate students will now need to pay taxes on “tuition”—a large sum of money (as much as $50,000/year) that PhD students never actually see, that can easily exceed the stipends they do see, and that’s basically just an accounting trick that serves the internal needs of universities and granting agencies.  Again, to eliminate any chance of misunderstanding: PhD students, who are effectively low-wage employees, already pay taxes on their actual stipends.  The new proposal is that they’ll also have to pay taxes on a whopping, make-believe “X” on their payroll sheet that’s always exactly balanced out by “-X.”

For detailed analyses of the impacts, see, e.g. Luca Trevisan’s post or Inside Higher Ed or the Chronicle of Higher Ed or Vox or NPR.  Briefly, though, the proposal would raise taxes by a few thousand dollars per year, or in some cases as much as $10,000 per year (!), on PhD students who already live hand-to-mouth-to-ramen-bowl, with the largest impact falling on students in STEM fields.  For many students who aren’t independently wealthy, this could push a PhD beyond the realm of affordability, and cause them to leave academia or to do their graduate work in other countries.
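To make the arithmetic concrete, here’s a back-of-the-envelope sketch. Caveats: it uses the 2017 single-filer brackets as I remember them, truncated, ignoring deductions, exemptions, and credits entirely, and the stipend and tuition figures are hypothetical round numbers.

```python
# Simplified 2017 single-filer brackets (from memory; truncated, no deductions)
BRACKETS = [(9_325, 0.10), (37_950, 0.15), (91_900, 0.25), (191_650, 0.28)]

def tax(income):
    """Tax owed under the simplified marginal bracket schedule above."""
    owed, prev = 0.0, 0
    for cap, rate in BRACKETS:
        if income <= prev:
            break
        owed += (min(income, cap) - prev) * rate
        prev = cap
    return owed

stipend, waived_tuition = 30_000, 50_000  # hypothetical round numbers
print(tax(stipend))                     # current law: tax the stipend only
print(tax(stipend + waived_tuition))    # proposal: also tax the phantom "X"
# The gap is on the order of $10,000/year, out of a $30,000 take-home stipend.
```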

“But isn’t there some workaround?”  Indeed, financial ignoramus that I am, my first reaction was to ask: if PhD tuition is basically an accounting fiction anyway, then why can’t the universities just declare that the tuition in question no longer exists, or is now zero dollars?  Feel free to explain further in the comments if you understand this stuff, but as far as I can tell, the answer is: because PhD tuition is used to calculate how much “tax” the universities can take from professors’ grant money.  If universities could no longer take that tax, and they had no other way to make up for it, then except for the richest few universities, they’d have to scale back research and teaching pretty drastically.  To avoid that outcome, the universities would be relying on the granting agencies to let them keep taking the overhead they needed to operate, even though the “PhD tuition” no longer existed.  But the granting agencies aren’t set up for this: you can’t just throw a bomb into one part of a complicated bureaucratic machine built up over decades, and expect the machine to continue working with no disruption to science.

But more ominously: as my friend Daniel Harlow and many others pointed out, it’s hard to look at the indefensible, laser-specific meanness of this policy, without suspecting that for many in Congress, the destruction of American higher education isn’t a regrettable byproduct, but the goal—just another piece of red meat to throw to the base.  If so, then we’d expect Congress to direct federal granting agencies not to loosen their rules about overhead, thereby forcing the students to pay the tax, and achieving the desired destruction.  (Note that the Trump administration has already made tightening overhead rules—i.e., doing the exact opposite of what would be needed to counteract the new tax—a central focus of its attempt to cut federal research funding.)

OK, two concluding thoughts:

  1. When Republicans in Congress defended Trump’s travel ban, they at least had the craven excuse that they were only following the lead of the populist strongman who’d taken over their party.  Here they don’t even have that.  As far as I know, this targeted destruction of American higher education was Congress’s initiative, not Trump’s—which to me, underscores again the feather-thinness of any moral distinction between the Vichy GOP leadership and the administration with which it collaborates.  Trump didn’t emerge from nowhere.  It took decades of effort—George W. Bush, Sarah Palin, Karl Rove, Rush Limbaugh, Mitch McConnell, and all the rest—to transform the GOP into the pure seething cauldron of anti-intellectual resentment and hatred that we know today.
  2. Given the existential risk to American higher education, why didn’t I blog about this earlier?  The answer is embarrassing to admit, and reflects no credit on me.  It’s simply that I didn’t believe it—even given all the other stuff that could “never happen in the US,” until it happened this past year.  I didn’t believe it, not because it was too far from me but because it was too close—because if true, it would mean the crippling of the research world in which I’ve spent most of my life since age 15, so therefore it couldn’t be true.  Surely even the House Republicans would realize they’d screwed up this time, and would take out this crazy provision before the full bill was voted on?  Or surely there’s some workaround that makes the whole thing less awful than it sounds?  There has to be … right?

Anyway, what else is there to say, except to call your representative, if you’re American and still have the faith in the system that such an act implies.

Because you asked: the Simulation Hypothesis has not been falsified; remains unfalsifiable

Tuesday, October 3rd, 2017

By email, by Twitter, even as the world was convulsed by tragedy, the inquiries poured in yesterday about a different topic entirely: Scott, did physicists really just prove that the universe is not a computer simulation—that we can’t be living in the Matrix?

What prompted this was a rash of popular articles like this one (“Researchers claim to have found proof we are NOT living in a simulation”).  The articles were all spurred by a recent paper in Science Advances: Quantized gravitational responses, the sign problem, and quantum complexity, by Zohar Ringel of Hebrew University and Dmitry L. Kovrizhin of Oxford.

I’ll tell you what: before I comment, why don’t I just paste the paper’s abstract here.  I invite you to read it—not the whole paper, just the abstract, but paying special attention to the sentences—and then make up your own mind about whether it supports the interpretation that all the popular articles put on it.

Ready?  Set?

Abstract: It is believed that not all quantum systems can be simulated efficiently using classical computational resources.  This notion is supported by the fact that it is not known how to express the partition function in a sign-free manner in quantum Monte Carlo (QMC) simulations for a large number of important problems.  The answer to the question—whether there is a fundamental obstruction to such a sign-free representation in generic quantum systems—remains unclear.  Focusing on systems with bosonic degrees of freedom, we show that quantized gravitational responses appear as obstructions to local sign-free QMC.  In condensed matter physics settings, these responses, such as thermal Hall conductance, are associated with fractional quantum Hall effects.  We show that similar arguments also hold in the case of spontaneously broken time-reversal (TR) symmetry such as in the chiral phase of a perturbed quantum Kagome antiferromagnet.  The connection between quantized gravitational responses and the sign problem is also manifested in certain vertex models, where TR symmetry is preserved.

For those tuning in from home, the “sign problem” is an issue that arises when, for example, you’re trying to use the clever trick known as Quantum Monte Carlo (QMC) to learn about the ground state of a quantum system using a classical computer—but where you needed probabilities, which are real numbers from 0 to 1, your procedure instead spits out numbers some of which are negative, and which you can therefore no longer use to define a sensible sampling process.  (In some sense, it’s no surprise that this would happen when you’re trying to simulate quantum mechanics, which of course is all about generalizing the rules of probability in a way that involves negative and even complex numbers!  The surprise, rather, is that QMC lets you avoid the sign problem as often as it does.)
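To spell out the standard way of quantifying the difficulty: QMC expresses averages over configurations c with weights w_c. If the w_c are nonnegative, you can sample configurations with probability w_c / ∑ w_c; if not, the usual workaround is to sample from the absolute values and reweight,

$$ \langle O \rangle \;=\; \frac{\sum_c O_c\, w_c}{\sum_c w_c} \;=\; \frac{\left\langle O \cdot \mathrm{sgn}(w) \right\rangle_{|w|}}{\left\langle \mathrm{sgn}(w) \right\rangle_{|w|}}, $$

but the denominator ⟨sgn(w)⟩ generically decays exponentially in the system size and inverse temperature, so the number of samples needed to resolve it blows up exponentially. That blowup is the sign problem.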

Anyway, this is all somewhat far from my expertise, but insofar as I understand the paper, it looks like a serious contribution to our understanding of the sign problem, and why local changes of basis can fail to get rid of it when QMC is used to simulate certain bosonic systems.  It will surely interest QMC experts.

OK, but does any of this prove that the universe isn’t a computer simulation, as the popular articles claim (and as the original paper does not)?

It seems to me that, to get from here to there, you’d need to overcome four huge difficulties, any one of which would be fatal by itself, and which are logically independent of each other.

  1. One thing that leapt out at me, as a computer scientist, is that Ringel and Kovrizhin’s paper is fundamentally about computational complexity—specifically, about which quantum systems can and can’t be simulated in polynomial time on a classical computer—yet it’s entirely innocent of the language and tools of complexity theory.  There’s no BQP, no QMA, no reduction-based hardness argument anywhere in sight, and no clearly-formulated request for one either.  Instead, everything is phrased in terms of the failure of one specific algorithmic framework (namely QMC)—and within that framework, only “local” transformations of the physical degrees of freedom are considered, not nonlocal ones that could still be accessible to polynomial-time algorithms.  Of course, one does whatever one needs to do to get a result.
    To their credit, the authors do seem aware that a language for discussing all possible efficient algorithms exists.  They write, for example, of a “common understanding related to computational complexity classes” that some quantum systems are hard to simulate, and specifically of the existence of systems that support universal quantum computation.  So rather than criticize the authors for this limitation of their work, I view their paper as a welcome invitation for closer collaboration between the quantum complexity theory and quantum Monte Carlo communities, which approach many of the same questions from extremely different angles.  As official ambassador between the two communities, I nominate Matt Hastings.
  2. OK, but even if the paper did address computational complexity head-on, about the most it could’ve said is that computer scientists generally believe that BPP≠BQP (i.e., that quantum computers can solve more decision problems in polynomial time than classical probabilistic ones); and that such separations are provable in the query complexity and communication complexity worlds; and that at any rate, quantum computers can solve exact sampling problems that are classically hard unless the polynomial hierarchy collapses (as pointed out in the BosonSampling paper, and independently by Bremner, Jozsa, Shepherd).  Alas, until someone proves P≠PSPACE, there’s no hope for an unconditional proof that quantum computers can’t be efficiently simulated by classical ones.
    (Incidentally, the paper comments, “Establishing an obstruction to a classical simulation is a rather ill-defined task.”  I beg to differ: it’s not ill-defined; it’s just ridiculously hard!)
  3. OK, but suppose it were proved that BPP≠BQP—and for good measure, suppose it were also experimentally demonstrated that scalable quantum computing is possible in our universe.  Even then, one still wouldn’t by any stretch have ruled out that the universe was a computer simulation!  For as many of the people who emailed me asked themselves (but as the popular articles did not), why not just imagine that the universe is being simulated on a quantum computer?  Like, duh?
  4. Finally: even if, for some reason, we disallowed using a quantum computer to simulate the universe, that still wouldn’t rule out the simulation hypothesis.  For why couldn’t God, using Her classical computer, spend a trillion years to simulate one second as subjectively perceived by us?  After all, what is exponential time to She for whom all eternity is but an eyeblink?
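To make point 4 concrete, here’s a minimal sketch of the brute-force classical simulation in question: an n-qubit state is just a vector of 2ⁿ complex amplitudes, so a classical computer can track it exactly, paying “only” an exponential price in memory and time. (Illustrative code, nothing more; n = 20 already means a million amplitudes.)

```python
import numpy as np

def apply_hadamard(state, target, n):
    """Apply a Hadamard gate to qubit `target` of an n-qubit state vector."""
    psi = np.moveaxis(state.reshape([2] * n), target, 0)
    a, b = psi[0].copy(), psi[1].copy()
    psi[0], psi[1] = (a + b) / np.sqrt(2), (a - b) / np.sqrt(2)
    return np.moveaxis(psi, 0, target).reshape(-1)

n = 20                    # 2**20 ~ 10^6 amplitudes; each new qubit doubles it
state = np.zeros(2**n, dtype=complex)
state[0] = 1.0            # start in |00...0>
for q in range(n):
    state = apply_hadamard(state, q, n)  # build the uniform superposition
print(abs(state[0]) ** 2)  # 1/2**n: all 2**20 outcomes now equally likely
```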

Anyway, if it weren’t for all four separate points above, then sure, physicists would have now proved that we don’t live in the Matrix.

I do have a few questions of my own, for anyone who came here looking for my reaction to the ‘news’: did you really need me to tell you all this?  How much of it would you have figured out on your own, just by comparing the headlines of the popular articles to the descriptions (however garbled) of what was actually done?  How obvious does something need to be, before it no longer requires an ‘expert’ to certify it as such?  If I write 500 posts like this one, will the 501st post basically just write itself?

Asking for a friend.


Comment Policy: I welcome discussion about the Ringel-Kovrizhin paper; what might’ve gone wrong with its popularization; QMC; the sign problem; the computational complexity of condensed-matter problems more generally; and the relevance or irrelevance of work on these topics to broader questions about the simulability of the universe.  But as a little experiment in blog moderation, I won’t allow comments that just philosophize in general about whether or not the universe is a simulation, without making further contact with the actual content of this post.  We’ve already had the latter conversation here—probably, like, every week for the last decade—and I’m ready for something new.

Alex Halderman testifying before the Senate Intelligence Committee

Wednesday, June 21st, 2017

This morning, my childhood best friend Alex Halderman testified before the US Senate about the proven ease of hacking electronic voting machines without leaving any record, the certainty that Russia has the technical capability to hack American elections, and the urgency of three commonsense (and cheap) countermeasures:

  1. a paper trail for every vote cast in every state,
  2. routine statistical sampling of the paper trail—enough to determine whether large-scale tampering occurred (a toy calculation after this list suggests how little sampling that takes), and
  3. cybersecurity audits to instill general best practices (such as firewalling election systems).
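To get a quantitative feel for why item 2 is cheap, here’s that toy calculation (mine, and much cruder than the real risk-limiting-audit literature): if a fraction f of the paper ballots were altered, then a uniformly random sample of n ballots misses every altered one with probability (1−f)ⁿ, so about ln(1/δ)/f samples give confidence 1−δ, independent of the total number of ballots cast.

```python
import math

def sample_size(f, delta):
    """Ballots to sample to detect an altered fraction f with confidence
    1 - delta: solves (1 - f)**n <= delta for n, assuming uniform sampling."""
    return math.ceil(math.log(1 / delta) / -math.log(1 - f))

print(sample_size(0.01, 0.05))  # 1% of ballots altered, 95% confidence: 299
print(sample_size(0.05, 0.01))  # 5% altered, 99% confidence: just 90
```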

You can watch Alex on C-SPAN here—his testimony begins at 2:16:13, and is followed by the Q&A period.  You can also read Alex’s prepared testimony here, as well as his accompanying Washington Post editorial (joint with Justin Talbot-Zorn).

Alex’s testimony—its civic, nonpartisan nature, right down to Alex’s flourish of approvingly quoting President Trump in support of paper ballots—reflects a moving optimism that, even in these dark times for democracy, Congress can be prodded into doing the right thing merely because it’s clearly, overwhelmingly in the national interest.  I wish I could say I shared that optimism.  Nevertheless, when called to testify, what can one do but act on the assumption that such optimism is justified?  Here’s hoping that Alex’s urgent message is heard and acted on.

Me at the Science March today, in front of the Texas Capitol in Austin

Saturday, April 22nd, 2017

I will not log in to your website

Sunday, March 19th, 2017

Two or three times a day, I get an email whose basic structure is as follows:

Prof. Aaronson, given your expertise, we’d be incredibly grateful for your feedback on a paper / report / grant proposal about quantum computing.  To access the document in question, all you’ll need to do is create an account on our proprietary DigiScholar Portal system, a process that takes no more than 3 hours.  If, at the end of that process, you’re told that the account setup failed, it might be because your browser’s certificates are outdated, or because you already have an account with us, or simply because our server is acting up, or some other reason.  If you already have an account, you’ll of course need to remember your DigiScholar Portal ID and password, and not confuse them with the 500 other usernames and passwords you’ve created for similar reasons—ours required their own distinctive combination of upper and lowercase letters, numerals, and symbols.  After navigating through our site to access the document, you’ll then be able to enter your DigiScholar Review, strictly adhering to our 15-part format, and keeping in mind that our system will log you out and delete all your work after 30 seconds of inactivity.  If you have trouble, just call our helpline during normal business hours (excluding Wednesdays and Thursdays) and stay on the line until someone assists you.  Most importantly, please understand that we can neither email you the document we want you to read, nor accept any comments about it by email.  In fact, all emails to this address will be automatically ignored.

Every day, I seem to grow crustier than the last.

More than a decade ago, I resolved that I would no longer submit to or review for most for-profit journals, as a protest against the exorbitant fees that those journals charge academics in order to buy back access to our own work—work that we turn over to the publishers (copyright and all) and even review for them completely for free, with the publishers typically adding zero or even negative value.  I’m happy that I’ve been able to keep that pledge.

Today, I’m proud to announce a new boycott, less politically important but equally consequential for my quality of life, and to recommend it to all of my friends.  Namely: as long as the world gives me any choice in the matter, I will never again struggle to log in to any organization’s website.  I’ll continue to devote a huge fraction of my waking hours to fielding questions from all sorts of people on the Internet, and I’ll do it cheerfully and free of charge.  All I ask is that, if you have a question, or a document you want me to read, you email it!  Or leave a blog comment, or stop by in person, or whatever—but in any case, don’t make me log in to anything other than Gmail or Facebook or WordPress or a few other sites that remain navigable by a senile 35-year-old who’s increasingly fixed in his ways.  Even Google Docs and Dropbox are pushing it: I’ll give up (on principle) at the first sight of any login issue, and ask for just a regular URL or an attachment.

Oh, Skype no longer lets me log in either.  Could I get to the bottom of that?  Probably.  But life is too short, and too precious.  So if we must, we’ll use the phone, or Google Hangouts.

In related news, I will no longer patronize any haircut place that turns away walk-in customers.

Back when we were discussing the boycott of Elsevier and the other predatory publishers, I wrote that this was a rare case “when laziness and idealism coincide.”  But the truth is more general: whenever my deepest beliefs and my desire to get out of work both point in the same direction, from here till the grave there’s not a force in the world that can turn me the opposite way.