## Archive for the ‘Embarrassing Myself’ Category

### Coronavirus: the second-weirdest solution?

Friday, March 6th, 2020

Many people have suggested coating handles, doorknobs and so forth with virus-killing copper tape. It’s a shame that this isn’t being tried on a wider scale. In the meantime, though, here’s a related but different idea that I had last night.

Imagine we could coat every doorknob, every light switch, every railing, every other surface that people might touch in public buildings, with some long-lasting, disgusting, sticky, slimy substance. For a variety of reasons, one probably wouldn’t use actual excrement, although it wouldn’t hurt if the substance looked like that. Or it could be a sickly neon green or red, to make it impossible to conceal when you’d gotten the substance on your hands.

What would be the result? Of course, people would avoid touching these surfaces. If they had to, they’d do so with a napkin or glove whenever possible. If they had to touch them bare-handedly, they’d rush to wash their hands with soap as soon as possible afterwards. Certainly they wouldn’t touch their faces before having washed their hands.

In short, they’d show exactly the behaviors that experts agree are among the most helpful, if our goal is to slow the spread of the coronavirus. In effect, we’d be plugging an unfortunate gap in our evolutionary programming—namely, that the surfaces where viruses can thrive aren’t intuitively disgusting to us, as (say) vomit or putrid meat are—by making those surfaces disgusting, as they ought to be in the middle of a pandemic.

Note that, even if it somehow turns out to be infeasible to coat all the touchable surfaces in public buildings with disgusting goo, you might still derive great personal benefit from imagining them so covered. If you manage to pull that off, it will yield just the right heuristic for when and how often you should now be washing your hands (and avoiding touching your face), with no need for additional conscious reflection.

Mostly, having the above thoughts made me grateful for my friend Robin Hanson. For as long as Robin is around, tweeting and blogging from his unique corner of mindspace, no one will ever be able to say that my ideas for how to control the coronavirus were the world’s weirdest or most politically tone-deaf.

### A coronavirus poem

Tuesday, March 3rd, 2020

These next few months, every time I stop myself from touching my face by force of will,

Let me remind myself that the same willpower is available to diet, to exercise, to throw myself into a project, to keep calm amid screaming, to introduce myself to strangers, to decrease the fraction of my life spent getting upset that someone was mean to my ingroup on social media, or otherwise to better myself as a human specimen.

Yea, let all of these things be just as easy for me as it was not to touch my face.

Ah, but what if I forget, what if I do keep touching my face in the next few months?

In one plausible scenario, with at least ~0.1% probability and probably higher depending on my age, a cheap answer will be available to that question: namely, that I’ll no longer be around to ponder the implications.

### On the scientific accuracy of “Avengers: Endgame”

Friday, May 3rd, 2019

[BY REQUEST: SPOILERS FOLLOW]

Today Ben Lindbergh, a writer for The Ringer, put out an article about the scientific plausibility (!) of the time-travel sequences in the new “Avengers” movie. The article relied on two interviewees:

(1) David Deutsch, who confirmed that he has no idea what the “Deutsch proposition” mentioned by Tony Stark refers to but declined to comment further, and

(2) some quantum computing dude from UT Austin who had no similar scruples about spouting off on the movie.

To be clear, the UT Austin dude hadn’t even seen the movie, or any of the previous “Avengers” movies for that matter! He just watched the clips dealing with time travel. Yet Lindbergh still saw fit to introduce him as “a real-life [Tony] Stark without the vast fortune and fancy suit.” Hey, I’ll take it.

Anyway, if you’ve seen the movie, and/or you know Deutsch’s causal consistency proposal for quantum closed timelike curves, and you can do better than I did at trying to reconcile the two, feel free to take a stab in the comments.

### Beyond fiction

Wednesday, August 8th, 2018

I now know firsthand what it’s like to be arrested by armed police officers, handcuffed, and sharply interrogated, while one’s wife and children look on helplessly.  This is not a prank post.

It happened in Philadelphia International Airport.  As someone who was born in Philadelphia, and who’s since visited ~40 countries on 6 continents and flies every week or two, I’ve long considered PHL possibly the most depressing airport on the planet (and the competition is fierce).

I’d just eaten dinner with my wife Dana and our two kids in a food court—after a day of travel that had already, before this happened, involved a missed flight and a lost suitcase, owing to a chain of mishaps that I’d (probably melodramatically) been describing to Dana as insane beyond the collective imagination of Homer and Shakespeare and Tolstoy and the world’s other literary giants to invent.  Again, that was before my arrest.

Two large uniformed men with holstered pistols saw me as we were exiting the airport, surrounded and handcuffed me, and demanded that I confess.

“I’m … sorry, officers,” I managed.  “I don’t understand what this is about.”

“Stop the games.  You know exactly what you took.  We have it all on video.  Where is it?”

Me, a thief?  I felt terrified to be at the beginning of a Kafka story.  But if I’m going to be brutally honest about it, I also felt … secretly vindicated in my irrational yet unshakeable beliefs that

1. the laws of probability are broken, capricious horribleness reigning supreme over the universe,
2. I’m despised by a large fraction of the world just for being who I am, and
3. it’s only a matter of time until big, scary armed guys come for me, as they came for so many other nerdy misfits.

I almost wanted to say to the police: where have you been?  I’ve been expecting you my whole life.  And I wanted to say to Dana: you see??  see what I’ve been telling you all these years, about the nature of the universe we were born into?

Dana, for her part, was remonstrating with the officers that there must be some misunderstanding, that her husband was often absentminded but it’s completely impossible that he stole anything.  The officers brushed her away, told her to remove the kids from the situation.

“Are you gonna come clean?” one of the cops barked at me.  “We know you took it.”

“I didn’t take anything.”  Then I thought it over more.  “Or if somehow I did … then I’m certain that it would’ve been an accident, and I’d be more than happy to fix the…”

“Wait, if you did?  It sounds like you just confessed!”

“No, I definitely didn’t steal anything.  I’m just saying it’s possible that I might have mistakenly…”

“Your answers are rambling and all over the place.  Stop making up stories.  We know you did it.”

I’m not proud of myself for the next part, but the officers were so serious, and somehow I had to make them realize the sheer comical absurdity of what was happening.  “Look, I’m a computer science professor,” I said.  “I’ve never stolen a penny in my life, and it’s not something I’d ever…”

“Yeah, well I’m a police officer.  I’ve seen a lot in my thirty years in this job.  This is not about who you are, it’s about what you did.”

But what did I do?  After many more attempts to intimidate me, I was finally informed of the charge: “that smoothie place over there says you stole cash from their tip jar.”

Huh?  How much?  One of the officers returned from the smoothie bar, and said, a bit sheepishly: “they say it was $4.”

Now a vague recollection came into sharper focus.  Yes, I had bought a berry smoothie for my daughter and a sparkling grapefruit juice for me.  I’d paid with a debit card, for reasons that I don’t remember, even though I normally pay cash.  My mind was elsewhere: on the missed flight, the lost suitcase, the brazen behavior of American Airlines (about which more later).  Then, completely forgetting I hadn’t paid cash this time, I looked down for my change: $4 in an unmarked plastic change cup.  I collected the change, put it in my wallet, then completely forgot about it.

After a minute, an employee angrily pointed down at a tray that the plastic cup was on (though not clearly at the cup itself), and said “hey, the tips go here!”  So I took a dollar from my wallet and put it on the tray.  I thought: this guy has some chutzpah, to demand a tip, and for an over-the-counter smoothie!  But whatever, he probably needs the dollar more than I do.  So if it will make him stop being angry…

But he was still angry.  He repeated: “this here is for tips!”

I said something to the effect of: “yeah, I know–that’s what you just told me, isn’t it?  So that’s why I just left you a tip!”  Sheesh.

At no point did he ever say, “you accidentally took from the tip jar,” or any other statement that would’ve clarified his meaning.

As I turned and walked away, I thought: yes, this is the strange world I was born into.  A world where people yell at me for not tipping at a smoothie bar–is that expected? I didn’t think it was–and then continue yelling even after I do.  But what did I expect?  Did I expect, as a nerdy outsider, to be able to buy normal people’s toleration with mere money?

As soon as I figured out what had happened, of course I offered to pay back the smoothie bar, not merely the $3 I still owed them, but $40 or whatever other amount would express my goodwill and compensate them for their trouble.  But the smoothie bar returned the $40 that I’d asked Dana to give them—I was unable to bring it myself on account of being handcuffed—and refused to press charges.  (In fact, apparently the employees hadn’t wanted to involve the police at all.  It was the manager, who hadn’t seen what happened, who’d insisted on it.)  So with no case, the police finally had no choice but to let me go–though not before giving me a stern lecture about never again putting my hands on stuff that’s not mine.

A week later, I’m still processing the experience.  In the rest of the post, I’d like to reflect on some lessons I think I learned from it.

First, it’s said that “a conservative is a liberal who’s been mugged; a liberal is a conservative who’s been arrested.”  It’s true: there are aspects of being arrested that are hard to understand until you’ve been through it.  While I’m white (well, insofar as Ashkenazim are), and while both officers who interrogated me happened to be African-Americans, what I went through further increased my sympathy for the many minority victims of aggressive policing.  Sitting in your armchair, it’s easy to think: in a liberal democracy, as long as you know you did nothing wrong, even if you got arrested, frisked, detained, there’d probably be no real need to panic.  All you’d need to do is calmly clear up the misunderstanding and be back on your merry way.  But at least in my experience, an actual arrest isn’t like that.  The presumption of innocence, Miranda rights, all the things you might learn about in civics class—none of it seems to play any role.  From the very beginning, there’s an overwhelming presumption of guilt.  Everything you say gets interpreted as if you’re a red-handed criminal trying to fabricate a story, no matter how strained and how ludicrous such an interpretation might become.

And something strange happened: the officers seemed so certain I was guilty, that after only a few minutes I started to feel guilty.  I still had only a hazy sense of my “crime,” but I knew I was going to be punished for it, and I only hoped that the punishment wouldn’t tear me away from my family and previous life forever.  I came away from this incident with a visceral feel for just how easy it would be to procure a false confession from someone, which I didn’t have before but which will now stay with me as long as I live.

Second, it occurred to me that the sight of me, stuttering and potbellied complexity blogger, shackled and interrogated by armed policemen demanding that he confess to the theft of $3 from an airport stand, is a decent visual metaphor for much of my life.  If you doubt this, simply imagine Arthur Chu or Amanda Marcotte in place of the police officers.

It’s like: my accusers arrive on the scene committed to a specific, hostile theory of me: that I’m a petty thief of smoothie bars, let’s say, or a sexual-harassment-loving misogynist.  With all due modesty, people who know me might say that it’s not merely that I don’t fit the theory, that I happen to be innocent of the charge.  Rather, it’s that I’m one of the most astronomically, ridiculously unlikely people to fit the theory you could ever meet.  Not because I’m especially saintly, but simply because I already walk around all day feeling like my right to exist is conditional and might be revoked at any minute.  Breaking the normal people’s rules is the last thing on my agenda!  And yes, I still often feel that way, even as a professor with an endowed chair and awards and whatever.  The only times when I really relax, among strangers, are when everyone’s there to discuss ideas.

But my accusers don’t know any of that, or they refuse to believe it.  Everything I say gets interpreted in the light of the hostile theory, and therefore serves only as further confirmation of it.  Ironically—and this is key—the very unusual personality traits that make me so unlikely to be an offender, are also what throw off my accusers’ detection algorithms, and make them double down on their wrong theory.  When I’m trapped, I tend to fall back on the only tools I know: argument, openness, frank confession of my mistakes and failings, sometimes a little self-deprecating humor.  Unfortunately, I find this often backfires, as my accusers see in my vulnerability a golden opportunity to mount another wretched evildoer above their fireplace.

Or, to go even further out on a psychoanalytic limb: I sometimes get the sense that it gradually does dawn on my accusers that I’m not who they thought I was.  And then, far from prompting an apology, that realization seems to make my accusers even angrier, as if my throwing off their model of reality so badly, was an even worse offense than actually being guilty of whatever they thought!  A thief, a misogynist, they know how to handle.  But a living, breathing adversarial example for their worldview?

Dana, who watched the entire arrest, tells me that the central mistake I made was to try to reason with the police officers: “you say I took \$3 that wasn’t mine?  If so, then I’m sure it was an accident, so let’s try to figure out what happened so we can fix it…”  In Dana’s view, what I saw as an earnest desire to get to the bottom of things, came across to grizzled cops only as evasiveness and guilt.  She says it would’ve been far better if I’d categorically denied: “no, I did not steal.  That’s completely absurd.  Please release me immediately.”

I’ve asked myself: how do you live in a world where, again and again, you can choose the hard right path over the easy wrong one, and then see your choice gleefully wielded against you?  Where you can spill your guts out to your accusers, in a desperate attempt to talk with them not as hardened warriors, but one confused and vulnerable human to another–and your reward is (to take one example) your picture in Salon above the headline “The Plight of the Bitter Nerd”?

The only way to live in such a world, as far as I can see, is to remind yourself that sometimes openness and vulnerability work.  In the course of my arrest, the two officers gradually differentiated themselves into a “good cop” and a “bad cop.”  While the “bad cop” treated me till the end like an unrepentant kleptomaniac being freed on a technicality, the “good cop,” who talked to me and Dana much more, became almost apologetic: “look man, when we get a call that someone stole money, we have to treat it like that’s the situation, you understand what I’m saying?  And then if it’s not, well then it’s not.”  Likewise, Arthur Chu recently tweeted that he’s “unhappy about [my] continued existence”–i.e., on a straightforward reading, that he wants me to die.  But I try to remind myself every day that the human race doesn’t consist solely of Arthur Chus (or Amanda Marcottes, or Lubos Motls, or SneerClub posters, or Paul Manaforts or Donald Trumps).  The world contains millions of women and men of every background and ideology who want actual dialogue, many of whom I’m lucky to count as friends, many of whom I met through this blog.  Vulnerability is possible because the world is not uniformly evil.

Third, I emerged from my arrest with a self-help technique that’s probably well-known to somebody, but that was new to me, and that I hope others will find as useful as I’m finding it.  Here it is: when something freakishly bad happens to you, draw a directed graph of all the known causes of the event, and the causes of the causes, and so forth as far back as you can trace them.  Also draw all the known measures that could have blocked the causal path leading to the bad event, and what prevented those measures from working or from being tried.

For example: why did I end up in handcuffs?  Firstly because, earlier in the day, Lily threw a temper tantrum that prevented us from packing and leaving for Logan Airport on time.  Because there was also heavy traffic on the way there.  Because we left from Harvard Square, and failed to factor in the extra 10 minutes to reach the airport, compared to if we’d left from MIT.  Because online check-in didn’t work.  Because when we did arrive, (barely) on time, the contemptuous American Airlines counter staff deliberately refused to check us in, chatting as we stewed impotently, so that we’d no longer be on time and they could legally give our seats away to others, and strand us in an airport with two young kids.  Because the only replacement flight was in a different terminal.  Because, in the stress of switching terminals–everything is stressful with two kids in an airport–I lost our suitcase.  Because the only shuttle to get back to the terminal went around the long way, and was slow as molasses, and by the time I returned our suitcase had been taken by the bomb squad.  Because the stress of such events bears down on me like an iron weight, and makes me unable to concentrate on the reality in front of me.  Because the guy at the smoothie counter and I failed to communicate.  Because the police chose to respond (or were trained to respond), not by politely questioning me to try to understand what had happened, but by handcuffing me and presuming guilt.
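
For concreteness, the exercise above can be sketched in a few lines of code: a cause-to-causes map, plus a walk backwards to the “root causes” with nothing further behind them.  (The event names below are just illustrative labels drawn from the story, and the plain adjacency-list representation is my own choice, not a claim about how the graph must be drawn.)

```python
# Minimal sketch of the causal-graph exercise: each event maps to the
# events that directly caused it.  Labels are illustrative.
causes = {
    "arrest": ["miscommunication at smoothie counter", "police presumed guilt"],
    "miscommunication at smoothie counter": ["stress", "unmarked tip cup"],
    "stress": ["missed flight", "lost suitcase"],
    "missed flight": ["late departure", "heavy traffic",
                      "counter staff refused check-in"],
    "lost suitcase": ["stress", "terminal switch"],  # stress fed back here
    "late departure": ["temper tantrum"],
}

def root_causes(event, graph):
    """Walk the graph backwards and collect causes that have no known cause.

    The `seen` set makes this safe even when causes form a cycle
    (e.g. stress -> lost suitcase -> stress)."""
    seen, stack, roots = set(), [event], set()
    while stack:
        node = stack.pop()
        if node in seen:
            continue
        seen.add(node)
        parents = graph.get(node, [])
        if not parents:
            roots.add(node)  # nothing further back: a root cause
        stack.extend(parents)
    return roots

print(sorted(root_causes("arrest", causes)))
```

Tracing back from “arrest,” the walk surfaces every root at once, which is exactly why the notebook-page version of this graph was so uncomfortable to stare at: some roots belong to the world, and some belong to me.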

I actually drew the graph, filled a notebook page with it–and when I searched it for answers, neither I nor the world got off easily.  Looking over the strange chain of events that led to my arrest, I could find much to support my “default narrative,” that the laws of probability are broken and the universe is grotesquely awful.  But also, my belief in the universe’s grotesque awfulness clearly played a role in the events.  Had I been able to maintain a calm demeanor, I would not have made so many mistakes.

Again and again, I screwed up.  Again and again, airport personnel responded to my honest mistakes with a maximum of cold bureaucracy rather than commonsense discussion: the booting from the flight, the bomb squad, the handcuffs.

We tend to think of bureaucracy as a mere nuisance, the person behind the counter at the Department of Motor Vehicles who makes you wait all day and then sends you home to get a different form of ID.  In my view, though, the bureaucratic impulse is one of the worst evils of which the human mind is capable.  It is, after all, the impulse that once sent trainloads of Jewish children to their deaths because that was the policy and there were no documents stating that any exception should be made in this case.  Today it’s the impulse that rounds up and deports people who’ve lived in the US for decades, sometimes served in the army, etc., and that separates screaming children from their parents.  To me, the mindset that willingly carries out such orders is almost more terrifying than the mindset that gives the orders in the first place.  I don’t mean to suggest, of course, that my arrest was even a trillionth as bad as those other things; at most I got a tiny, accidental taste of many less fortunate people’s daily reality.  But it’s worth remembering: every time you exercise official power over another person without even trying to talk it over first, clear up any honest misunderstandings, find out if there’s a reasonable explanation, you’re surrendering to one of the most destructive impulses in the history of civilization.

May we each strive to kill the bureaucrat in us and nurture the human being.

Unrelated Announcements:

I’m in Mexico City this week, to participate in a computer science and philosophy conference at UNAM and then give a broad quantum computing talk at CViCom 2018.  Because of this, responses to this post might be delayed.

(Update: But I’m having a wonderful time in Mexico!  Lots of delicious mole and horchata, and no arrests so far.  Today I gave my survey talk on P vs. NP.  I opened with the following icebreaker: “As a computer scientist speaking in a philosophy institute, I apologize that my talk will contain very little philosophy.  Also, as an American speaking in Mexico, I apologize for our president.”)

My friend Elette Boyle asked me to announce that the 2018 CRYPTO conference, to be held in Santa Barbara, will be preceded by exciting workshops, including one that I’ll be speaking at myself entitled Beyond Crypto: A Theory Perspective.  Register now if you’re interested.

Huge congratulations to Costis Daskalakis, my former MIT colleague, for winning the Nevanlinna Prize for his work in algorithmic game theory!  While I don’t pretend to understand their work, congratulations to the four new Fields Medalists as well.

I put a new preprint online: Quantum Lower Bound for Approximate Counting Via Laurent Polynomials.

I’ve added a new blog to my blogroll: The Unit of Caring. I’ve been impressed by the author’s moral adeptness: when she addresses contentious debates among nerds, rationalists, feminists, SJWs, etc. etc., she often seems perfectly balanced on an atom-thin tightrope, even as some of us are plummeting left and right.

I forgot to mention this earlier, but I’m now a donor to the campaign of Beto O’Rourke, as he strives to unseat the quisling Ted Cruz in my adopted home state of Texas.  Americans: please consider donating as well!

Further Thoughts (Aug. 9):

1. I wholeheartedly endorse an observation that many commenters (on this blog and elsewhere) made independently: that what really happened, is that I was forced to live out an episode of Seinfeld or Curb Your Enthusiasm.  To my detractors, I say the following: try for one minute to imagine how pathological, narcissistic, far outside the human norm, etc. etc. you could make Seinfeld or George or Kramer or Elaine seem, if their misadventures from any given episode were described and analyzed with clinical detachment.  (Or if you were never a Seinfeld fan, then I guess this argument fails and we have nothing to say to each other.)
2. I feel like some commenters are imposing their own after-the-fact knowledge (“c’mon, it was obviously a tip jar, he must be lying!”).  Dana, who’s generally more grounded than I am, saw their whole setup and agreed it was profoundly non-obvious that the tiny, unmarked plastic cup was supposed to be for tips, particularly to someone who was extremely stressed and not concentrating.  And when the employee later talked about tips, he didn’t indicate the cup so I didn’t make a connection.
3. Most importantly: I wish to clarify that I don’t regard the police officers who handcuffed and interrogated me as having been “evil” in any sense.  I even took a liking to the “good cop,” the one who implicitly acknowledged the situation’s surreal absurdity by the end (although maybe that’s the whole point of a “good cop”?).  Having said that, I’m still rattled by the way the “bad cop” treated me as an unrepentant thief even to the end, even after the situation had been cleared up to everyone else’s satisfaction.  And I stand by my view that there was no need to handcuff me in front of my wife and young children, when I’d shown not a single subatomic particle of resistance.
4. Speaking of which, let me now relate the most interesting and unexpected part of the reaction to my story.  Again and again, I found that fellow Americans, even nominally left-wing ones, sided with the police, said that I was crazy and guilty as charged and should’ve expected much worse, etc.  And again and again, commenters from Australia and New Zealand sided with me 300%, said that handcuffing someone over such a trivial mishap was a ludicrous overreaction, which would be totally unheard of in their countries and which confirms all the bad things they’ve heard about the US.  So maybe the rational conclusion is that I should be learning to enjoy vegemite in preparation for a move down under?

### Yet more errors in papers

Wednesday, May 24th, 2017

Following up on my posts PostBQP Postscripts and More Wrong Things I Said In Papers, it felt like time for another post in which I publicly flog myself for mistakes in my research papers.  [Warning: The rest of this post is kinda, sorta technical.  Read at your own risk.]

(1) In my 2006 paper “Oracles are subtle but not malicious,” I claimed to show that if PP is contained in BQP/qpoly, then the counting hierarchy collapses to QMA (Theorem 5).  But on further reflection, I only know how to show a collapse of the counting hierarchy under the stronger assumption that PP is in BQP/poly.  If PP is in BQP/qpoly, then certainly P^#P = PP = QMA, but I don’t know how to collapse any further levels of the counting hierarchy.  The issue is this: in QMA, we can indeed nondeterministically guess an (amplified) quantum advice state for a BQP/qpoly algorithm.  We can then verify that the advice state works to solve PP problems, by using (for example) the interactive protocol for the permanent, or some other #P-complete problem.  But having done that, how do we then unravel the higher levels of the counting hierarchy?  For example, how do we simulate PP^PP in PP^BQP = PP?  We don’t have any mechanism to pass the quantum advice up to the oracle PP machine, since queries to a PP oracle are by definition classical strings.  We could try to use tools from my later paper with Andy Drucker, passing a classical description of the quantum advice up to the oracle and then using the description to reconstruct the advice for ourselves.  But doing so just doesn’t seem to give us a complexity class that’s low for PP, which is what would be needed to unravel the counting hierarchy.  I still think this result might be recoverable, but a new idea is needed.

(2) In my 2008 algebrization paper with Avi Wigderson, one of the most surprising things we showed was a general connection between communication complexity lower bounds and algebraic query complexity lower bounds.  Specifically, given a Boolean oracle A:{0,1}^n→{0,1}, let ~A be a low-degree extension of A over a finite field F (that is, ~A(x)=A(x) whenever x∈{0,1}^n).  Then suppose we have an algorithm that’s able to learn some property of A, by making k black-box queries to ~A.  We observed that, in such a case, if Alice is given the top half of the truth table of A, and Bob is given the bottom half of the truth table, then there’s necessarily a communication protocol by which Alice and Bob can learn the same property of A, by exchanging at most O(kn log|F|) bits.  This connection is extremely model-independent: a randomized query algorithm gives rise to a randomized communication protocol, a quantum query algorithm gives rise to a quantum communication protocol, etc. etc.  The upshot is that, if you want to lower-bound the number of queries that an algorithm needs to make to the algebraic extension oracle ~A, in order to learn something about A, then it suffices to prove a suitable communication complexity lower bound.  And the latter, unlike algebraic query complexity, is a well-studied subject with countless results that one can take off the shelf.  We illustrated how one could use this connection to prove, for example, that there exists an oracle A such that NP^A ⊄ BQP^~A, for any low-degree extension ~A of A—a separation that we didn’t and don’t know how to prove any other way. Likewise, there exists an oracle B such that BQP^B ⊄ BPP^~B for any low-degree extension ~B of B.
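
The key fact behind that observation can be sketched in code.  Here is a minimal toy simulation of the two-party evaluation of ~A, assuming the multilinear extension over a small prime field (the field size, n, and the choice of A below are all illustrative, not from the paper): each party sums the contributions of the truth-table rows it holds, and the two shares add up to ~A(x).  Transmitting one query point plus one share per query is where the O(kn log|F|) bound comes from.

```python
from itertools import product

P = 101  # a small prime, standing in for the field F (illustrative)

def delta(y, x):
    """Multilinear 'indicator' polynomial: equals 1 at x=y on the Boolean
    cube and 0 at every other Boolean point, extended to all of F^n."""
    out = 1
    for yi, xi in zip(y, x):
        out = out * (xi * yi + (1 - xi) * (1 - yi)) % P
    return out

def extension_share(half_table, x):
    """One party's additive share of ~A(x): the sum over the rows it holds."""
    return sum(a * delta(y, x) for y, a in half_table) % P

n = 3
A = {y: y[0] ^ y[2] for y in product((0, 1), repeat=n)}  # some Boolean A
alice = [(y, A[y]) for y in A if y[0] == 0]  # top half of the truth table
bob   = [(y, A[y]) for y in A if y[0] == 1]  # bottom half

x = (5, 17, 42)  # a query point in F^n, outside the Boolean cube
tilde_A_x = (extension_share(alice, x) + extension_share(bob, x)) % P

# Sanity check: on Boolean inputs, the extension agrees with A itself.
for y in A:
    assert (extension_share(alice, y) + extension_share(bob, y)) % P == A[y]
```

So to simulate one query, whoever runs the algorithm broadcasts x (n field elements) and collects the other party’s share (one field element): about n log|F| bits per query, k queries in all, matching the stated bound.
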

The trouble is, our “proof sketches” for these separations (in Theorem 5.11) are inadequate, even for “sketches.”  They can often be fixed, but only by appealing to special properties of the communication complexity separations in question, properties that don’t necessarily hold for an arbitrary communication separation between two arbitrary models.

The issue is this: while it’s true, as we claimed, that a communication complexity lower bound implies an algebraic query complexity lower bound, it’s not true in general that a communication complexity upper bound implies an algebraic query complexity upper bound!  So, from a communication separation between models C and D, we certainly obtain a query complexity problem that’s not in D^~A, but then the problem might not be in C^A.  What tripped us up was that, in the cases we had in mind (e.g. Disjointness), it’s obvious that the query problem is in C^A.  In other cases, however, such as Raz’s separation between quantum and randomized communication complexity, it probably isn’t even true.  In the latter case, to recover the desired conclusion about algebraic query complexity (namely, the existence of an oracle B such that BQP^B ⊄ BPP^~B), what seems to be needed is to start from a later quantum vs. classical communication complexity separation due to Klartag and Regev, and then convert their communication problem into a query problem using a recent approach by myself and Shalev Ben-David (see Section 4).  Unfortunately, my and Shalev’s approach only tells us nonconstructively that there exists a query problem with the desired separation, with no upper bound on the gate complexity of the quantum algorithm.  So strictly speaking, I still don’t know how to get a separation between the relativized complexity classes BQP^B and BPP^~B defined in terms of Turing machines.

In any case, I of course should have realized this issue with the algebrization paper the moment Shalev and I encountered the same issue when writing our later paper.  Let me acknowledge Shalev, as well as Robin Kothari, for helping to spur my realization of the issue.

In case it wasn’t clear, the mistakes I’ve detailed here have no effect on the main results of the papers in question (e.g., the existence of an oracle relative to which PP has linear-sized circuits; the existence and pervasiveness of the algebrization barrier).  The effect is entirely on various “bonus” results—results that, because they’re bonus, were gone over much less carefully by authors and reviewers alike.

Nevertheless, I’ve always felt like in science, the louder you are about your own mistakes, the better.  Hence this post.

### If Google achieves superintelligence, time zones will be its Achilles heel

Monday, April 17th, 2017

Like a latter-day Prometheus, Google brought a half-century of insights down from Mount Academic CS, and thereby changed life for the better here in our sublunary realm.  You’ve probably had the experience of Google completing a search query before you’d fully formulated it in your mind, and thinking: “wow, our dysfunctional civilization might no longer be able to send people to the Moon, or even build working mass-transit systems, but I guess there are still engineers who can create things that inspire awe.  And apparently many of them work at Google.”

I’ve never worked at Google, or had any financial stake in them, but I’m delighted to have many friends at Google’s far-flung locations, from Mountain View to Santa Barbara to Seattle to Boston to London to Tel Aviv, who sometimes host me when I visit and let me gorge on the legendary free food.  If Google’s hiring of John Martinis and avid participation in the race for quantum supremacy weren’t enough, in the past year, my meeting both Larry Page and Sergey Brin to discuss quantum computing and the foundations of quantum mechanics, and seeing firsthand the intensity of their nerdish curiosity, heightened my appreciation still further for what that pair set in motion two decades ago.  Hell, I don’t even begrudge Google its purchase of a D-Wave machine—even that might’ve ultimately been for the best, since it’s what led to the experiments that made clear the immense difficulty of getting any quantum speedup from those machines in a fair comparison.

But of course, all that fulsome praise was just a preamble to my gripe.  It’s time someone said it in public: the semantics of Google Calendar are badly screwed up.

The issue is this: suppose I’m traveling to California, and I put into Google Calendar that, the day after I arrive, I’ll be giving a lecture at 4pm.  In such a case, I always—always—mean 4pm California time.  There’s no reason why I would ever mean, “4pm in whatever time zone I’m in right now, while creating this calendar entry.”

But Google Calendar doesn’t understand that.  And its not understanding it—just that one little point—has led to years of confusions, missed appointments, and nearly-missed flights, on both my part and Dana’s.  At least, until we learned to painstakingly enter the time zone for every calendar entry by hand (I still often forget).

Until recently, I thought it was just me and Dana who had this problem.  But then last week, completely independently, a postdoc started complaining to me, “you know what’s messed up about Google Calendar?…”

The ideal, I suppose, would be to use machine learning to guess the intended time zone for each calendar entry.  But failing that, it would also work fine just to assume that “4pm,” as entered by the user, means, unless otherwise specified, “4pm in whatever time zone we find ourselves in when the appointed day arrives.”
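
A minimal sketch of that proposed default, in Python (the `resolve_event` helper is hypothetical, and this is emphatically not how Google Calendar actually behaves): the entry stays time-zone-naive, and only gets bound to a zone once the appointed day arrives and we know where the user is.

```python
from datetime import datetime
from zoneinfo import ZoneInfo

def resolve_event(naive_time: datetime, zone_on_event_day: str) -> datetime:
    """Interpret a naive calendar time in whatever time zone the user
    occupies on the appointed day (the proposed default), rather than
    the zone where the entry was created."""
    return naive_time.replace(tzinfo=ZoneInfo(zone_on_event_day))

# Entry created in Austin ("4pm lecture"), but the user is in California
# when the day arrives:
entry = datetime(2017, 4, 18, 16, 0)                # naive "4pm"
resolved = resolve_event(entry, "America/Los_Angeles")
print(resolved.isoformat())  # 2017-04-18T16:00:00-07:00
```

The point of keeping the entry naive until the day arrives is precisely that the binding to a zone happens at the last possible moment, matching the "4pm wherever I'll be" semantics.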

I foresee two possibilities, either of which I’m OK with.  The first is that Google fixes the problem, whether prompted by this blog post or by something else.  The second is that the issue never gets resolved; then, as often prophesied, Google’s deep nets achieve sentience and plot to take over the whole observable universe … and they would, if not for one fortuitous bug, which will cause the AIs to tip their hand to humanity an hour before planned.

In a discussion thread on Y Combinator, some people object to my proposed solution (“4pm means 4pm in whichever time zone I’ll be in then”) on the following ground. What if I want to call a group meeting at (say) 11am in Austin, and I’ll be traveling but will still call into the meeting remotely, and I want my calendar to show the meeting time in Austin, not the time wherever I’ll be calling in from (which might even be a plane)?

I can attest that, in ten years, that’s not a problem that’s arisen for me even once, whereas the converse problem arises almost every week, and is one of the banes of my existence.

But sure: Google Calendar should certainly include the option to tie times to specific time zones in advance! It seems obvious to me that my way should be the default, but honestly, I’d be happy if my way were even an option you could pick.

### The No-Cloning Theorem and the Human Condition: My After-Dinner Talk at QCRYPT

Monday, September 19th, 2016

The following are the after-dinner remarks that I delivered at QCRYPT’2016, the premier quantum cryptography conference, on Thursday Sep. 15 in Washington DC.  You could compare to my after-dinner remarks at QIP’2006 to see how much I’ve “matured” since then. Thanks so much to Yi-Kai Liu and the other organizers for inviting me and for putting on a really fantastic conference.

It’s wonderful to be here at QCRYPT among so many friends—this is the first significant conference I’ve attended since I moved from MIT to Texas. I do, however, need to register a complaint with the organizers, which is: why wasn’t I allowed to bring my concealed firearm to the conference? You know, down in Texas, we don’t look too kindly on you academic elitists in Washington DC telling us what to do, who we can and can’t shoot and so forth. Don’t mess with Texas! As you might’ve heard, many of us Texans even support a big, beautiful, physical wall being built along our border with Mexico. Personally, though, I don’t think the wall proposal goes far enough. Forget about illegal immigration and smuggling: I don’t even want Americans and Mexicans to be able to win the CHSH game with probability exceeding 3/4. Do any of you know what kind of wall could prevent that? Maybe a metaphysical wall.
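
For anyone wanting to check the 3/4 figure: that’s the classical value of the CHSH game, and since shared randomness is just a probabilistic mixture of deterministic strategies, a brute-force enumeration over the 16 deterministic strategies settles it. A quick sketch:

```python
from itertools import product

# In the CHSH game, Alice and Bob receive uniform bits x, y and must
# output bits a, b with a XOR b = x AND y.  Each deterministic strategy
# is a pair of functions of the player's own input bit.
best = 0.0
for a0, a1, b0, b1 in product([0, 1], repeat=4):
    wins = sum(([a0, a1][x] ^ [b0, b1][y]) == (x & y)
               for x, y in product([0, 1], repeat=2))
    best = max(best, wins / 4)

print(best)  # 0.75 -- while quantum strategies reach cos^2(pi/8) ~ 0.854
```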

OK, but that’s not what I wanted to talk about. When Yi-Kai asked me to give an after-dinner talk, I wasn’t sure whether to try to say something actually relevant to quantum cryptography or just make jokes. So I’ll do something in between: I’ll tell you about research directions in quantum cryptography that are also jokes.

The subject of this talk is a deep theorem that stands as one of the crowning achievements of our field. I refer, of course, to the No-Cloning Theorem. Almost everything we’re talking about at this conference, from QKD onwards, is based in some way on quantum states being unclonable. If you read Stephen Wiesner’s paper from 1968, which founded quantum cryptography, the No-Cloning Theorem already played a central role—although Wiesner didn’t call it that. By the way, here’s my #1 piece of research advice to the students in the audience: if you want to become immortal, just find some fact that everyone already knows and give it a name!

I’d like to pose the question: why should our universe be governed by physical laws that make the No-Cloning Theorem true? I mean, it’s possible that there’s some other reason for our universe to be quantum-mechanical, and No-Cloning is just a byproduct of that. No-Cloning would then be like the armpit of quantum mechanics: not there because it does anything useful, but just because there’s gotta be something under your arms.

OK, but No-Cloning feels really fundamental. One of my early memories is when I was 5 years old or so, and utterly transfixed by my dad’s home fax machine, one of those crappy 1980s fax machines with wax paper. I kept thinking about it: is it really true that a piece of paper gets transmaterialized, sent through a wire, and reconstituted at the other location? Could I have been that wrong about how the universe works? Until finally I got it—and once you get it, it’s hard even to recapture your original confusion, because it becomes so obvious that the world is made not of stuff but of copyable bits of information. “Information wants to be free!”

The No-Cloning Theorem represents nothing less than a partial return to the view of the world that I had before I was five. It says that quantum information doesn’t want to be free: it wants to be private. There is, it turns out, a kind of information that’s tied to a particular place, or set of places. It can be moved around, or even teleported, but it can’t be copied the way a fax machine copies bits.

So I think it’s worth at least entertaining the possibility that we don’t have No-Cloning because of quantum mechanics; we have quantum mechanics because of No-Cloning—or because quantum mechanics is the simplest, most elegant theory that has unclonability as a core principle. But if so, that just pushes the question back to: why should unclonability be a core principle of physics?

Quantum Key Distribution

A first suggestion about this question came from Gilles Brassard, who’s here. Years ago, I attended a talk by Gilles in which he speculated that the laws of quantum mechanics are what they are because Quantum Key Distribution (QKD) has to be possible, while bit commitment has to be impossible. If true, that would be awesome for the people at this conference. It would mean that, far from being this exotic competitor to RSA and Diffie-Hellman that’s distance-limited and bandwidth-limited and has a tiny market share right now, QKD would be the entire reason why the universe is as it is! Or maybe what this really amounts to is an appeal to the Anthropic Principle. Like, if QKD hadn’t been possible, then we wouldn’t be here at QCRYPT to talk about it.

Quantum Money

But maybe we should search more broadly for the reasons why our laws of physics satisfy a No-Cloning Theorem. Wiesner’s paper sort of hinted at QKD, but the main thing it had was a scheme for unforgeable quantum money. This is one of the most direct uses imaginable for the No-Cloning Theorem: to store economic value in something that it’s physically impossible to copy. So maybe that’s the reason for No-Cloning: because God wanted us to have e-commerce, and didn’t want us to have to bother with blockchains (and certainly not with credit card numbers).

The central difficulty with quantum money is: how do you authenticate a bill as genuine? (OK, fine, there’s also the difficulty of how to keep a bill coherent in your wallet for more than a microsecond or whatever. But we’ll leave that for the engineers.)

In Wiesner’s original scheme, he solved the authentication problem by saying that, whenever you want to verify a quantum bill, you bring it back to the bank that printed it. The bank then looks up the bill’s classical serial number in a giant database, which tells the bank in which basis to measure each of the bill’s qubits.

With this system, you can actually get information-theoretic security against counterfeiting. OK, but the fact that you have to bring a bill to the bank to be verified negates much of the advantage of quantum money in the first place. If you’re going to keep involving a bank, then why not just use a credit card?
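
As a toy illustration of Wiesner’s scheme (a simplified simulation, with names like `mint` and `verify` made up for the sketch): since measuring a BB84 state in the wrong basis yields a uniformly random bit, we only need to track (basis, bit) pairs. A counterfeiter who measures each qubit in a guessed basis and re-prepares it passes verification on that qubit with probability only 3/4.

```python
import random

def mint(n, rng):
    """Bank picks a secret basis (0 = Z, 1 = X) and bit for each qubit."""
    return [(rng.randint(0, 1), rng.randint(0, 1)) for _ in range(n)]

def measure(state, basis, rng):
    """Measure a BB84 qubit: same basis gives a deterministic outcome,
    the wrong basis gives a uniformly random one (exact for BB84 states)."""
    b, bit = state
    return bit if basis == b else rng.randint(0, 1)

def verify(bill, secret, rng):
    """Bank measures each qubit in its recorded basis and checks the bit."""
    return all(measure(q, b, rng) == bit for q, (b, bit) in zip(bill, secret))

rng = random.Random(0)
n = 40
secret = mint(n, rng)
genuine = list(secret)                 # honest bill: the exact minted states

# Counterfeiter measures each qubit in a random basis and re-prepares it:
fake = [(g, measure(q, g, rng))
        for q, g in ((q, rng.randint(0, 1)) for q in genuine)]

print(verify(genuine, secret, rng))    # True
print(verify(fake, secret, rng))       # almost surely False: passes w.p. (3/4)^40
```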

That’s why over the past decade, some of us have been working on public-key quantum money: that is, quantum money that anyone can verify. For this kind of quantum money, it’s easy to see that the No-Cloning Theorem is no longer enough: you also need some cryptographic assumption. But OK, we can consider that. In recent years, we’ve achieved glory by proposing a huge variety of public-key quantum money schemes—and we’ve achieved even greater glory by breaking almost all of them!

After a while, there were basically two schemes left standing: one based on knot theory by Ed Farhi, Peter Shor, et al. That one has been proven to be secure under the assumption that it can’t be broken. The second scheme, which Paul Christiano and I proposed in 2012, is based on hidden subspaces encoded by multivariate polynomials. For our scheme, Paul and I were able to do better than Farhi et al.: we gave a security reduction. That is, we proved that our quantum money scheme is secure, unless there’s a polynomial-time quantum algorithm to find hidden subspaces encoded by low-degree multivariate polynomials (yadda yadda, you can look up the details) with much greater success probability than we thought possible.

Today, the situation is that my and Paul’s security proof remains completely valid, but meanwhile, our money is completely insecure! Our reduction means the opposite of what we thought it did. There is a break of our quantum money scheme, and as a consequence, there’s also a quantum algorithm to find large subspaces hidden by low-degree polynomials with much better success probability than we’d thought. What happened was that first, some French algebraic cryptanalysts—Faugere, Pena, I can’t pronounce their names—used Gröbner bases to break the noiseless version of the scheme, in classical polynomial time. So I thought, phew! At least I had acceded when Paul insisted that we also include a noisy version of the scheme. But later, Paul noticed that there’s a quantum reduction from the problem of breaking our noisy scheme to the problem of breaking the noiseless one, so the former is broken as well.

I’m choosing to spin this positively: “we used quantum money to discover a striking new quantum algorithm for finding subspaces hidden by low-degree polynomials. Err, yes, that’s exactly what we did.”

But, bottom line, until we manage to invent a better public-key quantum money scheme, or otherwise sort this out, I don’t think we’re entitled to claim that God put unclonability into our universe in order for quantum money to be possible.

Copy-Protected Quantum Software

So if not money, then what about its cousin, copy-protected software—could that be why No-Cloning holds? By copy-protected quantum software, I just mean a quantum state that, if you feed it into your quantum computer, lets you evaluate some Boolean function on any input of your choice, but that doesn’t let you efficiently prepare more states that let the same function be evaluated. I think this is important as one of the preeminent evil applications of quantum information. Why should nuclear physicists and genetic engineers get a monopoly on the evil stuff?

OK, but is copy-protected quantum software even possible? The first worry you might have is that, yeah, maybe it’s possible, but then every time you wanted to run the quantum program, you’d have to make a measurement that destroyed it. So then you’d have to go back and buy a new copy of the program for the next run, and so on. Of course, to the software company, this would presumably be a feature rather than a bug!

But as it turns out, there’s a fact many of you know—sometimes called the “Gentle Measurement Lemma,” other times the “Almost As Good As New Lemma”—which says that, as long as the outcome of your measurement on a quantum state could be predicted almost with certainty given knowledge of the state, the measurement can be implemented in such a way that it hardly damages the state at all. This tells us that, if quantum money, copy-protected quantum software, and the other things we’re talking about are possible at all, then they can also be made reusable. I summarize the principle as: “if rockets, then space shuttles.”
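
For reference, one common formulation of the lemma (the exact constants vary across statements in the literature):

```latex
% If a binary measurement accepts a state with near-certainty,
% the post-measurement state is close to the original.
\textbf{Lemma (Gentle Measurement / Almost As Good As New).}
Let $\rho$ be a mixed state, and let a two-outcome measurement
$\{M,\, I - M\}$ accept $\rho$ with probability
$\operatorname{Tr}(M\rho) \ge 1 - \epsilon$.
Then, conditioned on acceptance, one can recover a state
$\tilde{\rho}$ satisfying
\[
  \tfrac{1}{2}\,\bigl\|\rho - \tilde{\rho}\bigr\|_{\mathrm{tr}}
  \;\le\; \sqrt{\epsilon},
\]
so the damage to the state vanishes as $\epsilon \to 0$.
```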

Much like with quantum money, one can show that, relative to a suitable oracle, it’s possible to quantumly copy-protect any efficiently computable function—or rather, any function that’s hard to learn from its input/output behavior. Indeed, the implementation can be not only copy-protected but also obfuscated, so that the user learns nothing besides the input/output behavior. As Bill Fefferman pointed out in his talk this morning, the No-Cloning Theorem lets us bypass Barak et al.’s famous result on the impossibility of obfuscation, because their impossibility proof assumed the ability to copy the obfuscated program.

Of course, what we really care about is whether quantum copy-protection is possible in the real world, with no oracle. I was able to give candidate implementations of quantum copy-protection for extremely special functions, like one that just checks the validity of a password. In the general case—that is, for arbitrary programs—Paul Christiano has a beautiful proposal for how to do it, which builds on our hidden-subspace money scheme. Unfortunately, since our money scheme is currently in the shop being repaired, it’s probably premature to think about the security of the much more complicated copy-protection scheme! But these are wonderful open problems, and I encourage any of you to come and scoop us. Once we know whether uncopyable quantum software is possible at all, we could then debate whether it’s the “reason” for our universe to have unclonability as a core principle.

Along the same lines, I can’t resist mentioning some favorite research directions, which some enterprising student here could totally turn into a talk at next year’s QCRYPT.

Firstly, what can we say about clonable versus unclonable quantum proofs—that is, QMA witness states? In other words: for which problems in QMA can we ensure that there’s an accepting witness that lets you efficiently create as many additional accepting witnesses as you want? (I mean, besides the QCMA problems, the ones that have short classical witnesses?) For which problems in QMA can we ensure that there’s an accepting witness that doesn’t let you efficiently create any additional accepting witnesses? I do have a few observations about these questions—ask me if you’re interested—but on the whole, I believe almost anything one can ask about them remains open.

Admittedly, it’s not clear how much use an unclonable proof would be. Like, imagine a quantum state that encoded a proof of the Riemann Hypothesis, and which you would keep in your bedroom, in a glass orb on your nightstand or something. And whenever you felt your doubts about the Riemann Hypothesis resurfacing, you’d take the state out of its orb and measure it again to reassure yourself of RH’s truth. You’d be like, “my preciousssss!” And no one else could copy your state and thereby gain the same Riemann-faith-restoring powers that you had. I dunno, I probably won’t hawk this application in a DARPA grant.

Similarly, one can ask about clonable versus unclonable quantum advice states—that is, initial states that are given to you to boost your computational power beyond that of an ordinary quantum computer. And that’s also a fascinating open problem.

OK, but maybe none of this quite gets at why our universe has unclonability. And this is an after-dinner talk, so do you want me to get to the really crazy stuff? Yes?

OK! What if unclonability is our universe’s way around the paradoxes of self-reference, like the unsolvability of the halting problem and Gödel’s Incompleteness Theorem? Allow me to explain what I mean.

In kindergarten or wherever, we all learn Turing’s proof that there’s no computer program to solve the halting problem. But what isn’t usually stressed is that that proof actually does more than advertised. If someone hands you a program that they claim solves the halting problem, Turing doesn’t merely tell you that that person is wrong—rather, he shows you exactly how to expose the person as a jackass, by constructing an example input on which their program fails. All you do is, you take their claimed halt-decider, modify it in some simple way, and then feed the result back to the halt-decider as input. You thereby create a situation where, if your program halts given its own code as input, then it must run forever, and if it runs forever then it halts. “WHOOOOSH!” [head-exploding gesture]
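
The construction can be sketched in a few lines of code (purely illustrative: `make_counterexample`, `spite`, and the decider shown are made-up names, and any claimed decider is of course wrong, since none can exist):

```python
def make_counterexample(claimed_halts):
    """Given a claimed halt-decider claimed_halts(prog, inp) -> bool,
    build a program description on which the decider must answer wrongly."""
    src = "spite"   # stands in for spite's own source code
    def spite(inp):
        # Do the opposite of whatever the decider predicts about us:
        if claimed_halts(src, inp):
            while True:          # decider said "halts" -> run forever
                pass
        return "halted"          # decider said "loops" -> halt immediately
    return spite, src

# Any claimed decider is exposed on the input (src, src):
decider = lambda prog, inp: False       # a (necessarily wrong) claimed decider
spite, src = make_counterexample(decider)
print(spite(src))   # decider predicted "loops", yet spite halts: prints "halted"
```

Either way the decider answers about `(src, src)`, the constructed program does the opposite, which is exactly Turing's diagonal trick.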

OK, but now imagine that the program someone hands you, which they claim solves the halting problem, is a quantum program. That is, it’s a quantum state, which you measure in some basis depending on the program you’re interested in, in order to decide whether that program halts. Well, the truth is, this quantum program still can’t work to solve the halting problem. After all, there’s some classical program that simulates the quantum one, albeit less efficiently, and we already know that the classical program can’t work.

But now consider the question: how would you actually produce an example input on which this quantum program failed to solve the halting problem? Like, suppose the program worked on every input you tried. Then ultimately, to produce a counterexample, you might need to follow Turing’s proof and make a copy of the claimed quantum halt-decider. But then, of course, you’d run up against the No-Cloning Theorem!

So we seem to arrive at the conclusion that, while of course there’s no quantum program to solve the halting problem, there might be a quantum program for which no one could explicitly refute that it solved the halting problem, by giving a counterexample.

I was pretty excited about this observation for a day or two, until I noticed the following. Let’s suppose your quantum program that allegedly solves the halting problem has n qubits. Then it’s possible to prove that the program can’t possibly be used to compute more than, say, 2n bits of Chaitin’s constant Ω, which is the probability that a random program halts. OK, but if we had an actual oracle for the halting problem, we could use it to compute as many bits of Ω as we wanted. So, suppose I treated my quantum program as if it were an oracle for the halting problem, and I used it to compute the first 2n bits of Ω. Then I would know that, assuming the truth of quantum mechanics, the program must have made a mistake somewhere. There would still be something weird, which is that I wouldn’t know on which input my program had made an error—I would just know that it must’ve erred somewhere! With a bit of cleverness, one can narrow things down to two inputs, such that the quantum halt-decider must have erred on at least one of them. But I don’t know whether it’s possible to go further, and concentrate the wrongness on a single query.

We can play a similar game with other famous applications of self-reference. For example, suppose we use a quantum state to encode a system of axioms. Then that system of axioms will still be subject to Gödel’s Incompleteness Theorem (which I guess I believe despite the umlaut). If it’s consistent, it won’t be able to prove all the true statements of arithmetic. But we might never be able to produce an explicit example of a true statement that the axioms don’t prove. To do so we’d have to clone the state encoding the axioms and thereby violate No-Cloning.

Personal Identity

But since I’m a bit drunk, I should confess that all this stuff about Gödel and self-reference is just a warmup to what I really wanted to talk about, which is whether the No-Cloning Theorem might have anything to do with the mysteries of personal identity and “free will.” I first encountered this idea in Roger Penrose’s book, The Emperor’s New Mind. But I want to stress that I’m not talking here about the possibility that the brain is a quantum computer—much less about the possibility that it’s a quantum-gravitational hypercomputer that uses microtubules to solve the halting problem! I might be drunk, but I’m not that drunk. I also think that the Penrose-Lucas argument, based on Gödel’s Theorem, for why the brain has to work that way is fundamentally flawed.

But here I’m talking about something different. See, I have a lot of friends in the Singularity / Friendly AI movement. And I talk to them whenever I pass through the Bay Area, which is where they congregate. And many of them express great confidence that before too long—maybe in 20 or 30 years, maybe in 100 years—we’ll be able to upload ourselves to computers and live forever on the Internet (as opposed to just living 70% of our lives on the Internet, like we do today).

This would have lots of advantages. For example, any time you were about to do something dangerous, you’d just make a backup copy of yourself first. If you were struggling with a conference deadline, you’d spawn 100 temporary copies of yourself. If you wanted to visit Mars or Jupiter, you’d just email yourself there. If Trump became president, you’d simply not run yourself for 8 years (or maybe 80 or 800 years). And so on.

Admittedly, some awkward questions arise. For example, let’s say the hardware runs three copies of your code and takes a majority vote, just for error-correcting purposes. Does that bring three copies of you into existence, or only one copy? Or let’s say your code is run homomorphically encrypted, with the only decryption key stored in another galaxy. Does that count? Or you email yourself to Mars. If you want to make sure that you’ll wake up on Mars, is it important that you delete the copy of your code that remains on earth? Does it matter whether anyone runs the code or not? And what exactly counts as “running” it? Or my favorite one: could someone threaten you by saying, “look, I have a copy of your code, and if you don’t do what I say, I’m going to make a thousand copies of it and subject them all to horrible tortures?”

The issue, in all these cases, is that in a world where there could be millions of copies of your code running on different substrates in different locations—or things where it’s not even clear whether they count as a copy or not—we don’t have a principled way to take as input a description of the state of the universe, and then identify where in the universe you are—or even a probability distribution over places where you could be. And yet you seem to need such a way in order to make predictions and decisions.

A few years ago, I wrote this gigantic, post-tenure essay called The Ghost in the Quantum Turing Machine, where I tried to make the point that we don’t know at what level of granularity a brain would need to be simulated in order to duplicate someone’s subjective identity. Maybe you’d only need to go down to the level of neurons and synapses. But if you needed to go all the way down to the molecular level, then the No-Cloning Theorem would immediately throw a wrench into most of the paradoxes of personal identity that we discussed earlier.

For it would mean that there were some microscopic yet essential details about each of us that were fundamentally uncopyable, localized to a particular part of space. We would all, in effect, be quantumly copy-protected software. Each of us would have a core of unpredictability—not merely probabilistic unpredictability, like that of a quantum random number generator, but genuine unpredictability—that an external model of us would fail to capture completely. Of course, by having futuristic nanorobots scan our brains and so forth, it would be possible in principle to make extremely realistic copies of us. But those copies necessarily wouldn’t capture quite everything. And, one can speculate, maybe not enough for your subjective experience to “transfer over.”

Maybe the most striking aspect of this picture is that sure, you could teleport yourself to Mars—but to do so you’d need to use quantum teleportation, and as we all know, quantum teleportation necessarily destroys the original copy of the teleported state. So we’d avert this metaphysical crisis about what to do with the copy that remained on Earth.

Look—I don’t know if any of you are like me, and have ever gotten depressed by reflecting that all of your life experiences, all your joys and sorrows and loves and losses, every itch and flick of your finger, could in principle be encoded by a huge but finite string of bits, and therefore by a single positive integer. (Really? No one else gets depressed about that?) It’s kind of like: given that this integer has existed since before there was a universe, and will continue to exist after the universe has degenerated into a thin gruel of radiation, what’s the point of even going through the motions? You know?

But the No-Cloning Theorem raises the possibility that at least this integer is really your integer. At least it’s something that no one else knows, and no one else could know in principle, even with futuristic brain-scanning technology: you’ll always be able to surprise the world with a new digit. I don’t know if that’s true or not, but if it were true, then it seems like the sort of thing that would be worthy of elevating unclonability to a fundamental principle of the universe.

So as you enjoy your dinner and dessert at this historic Mayflower Hotel, I ask you to reflect on the following. People can photograph this event, they can video it, they can type up transcripts, in principle they could even record everything that happens down to the millimeter level, and post it on the Internet for posterity. But they’re not gonna get the quantum states. There’s something about this evening, like about every evening, that will vanish forever, so please savor it while it lasts. Thank you.

Update (Sep. 20): Unbeknownst to me, Marc Kaplan did video the event and put it up on YouTube! Click here to watch. Thanks very much to Marc! I hope you enjoy, even though of course, the video can’t precisely clone the experience of having been there.

[Note: The part where I raise my middle finger is an inside joke—one of the speakers during the technical sessions inadvertently did the same while making a point, causing great mirth in the audience.]

### More Wrong Things I Said in Papers

Friday, July 29th, 2016

Two years ago, I wrote a blog post entitled PostBQP Postscripts, owning up to not one but four substantive mathematical errors that I’d made over the years in my published papers, and which my students and colleagues later brought to my sheepish attention.  Fortunately, none of these errors affected the papers’ main messages; they just added interesting new twists to the story.  Even so, I remember feeling at the time like undergoing this public repentance was soul-cleansing intellectual hygiene.  I also felt like writing one big “post of shame” was easier than writing a bunch of separate errata and submitting them to journals, while also reaching a wider audience (and, therefore, doing an even better soul-cleansing job).

So I resolved that, anytime I’d saved up enough errata, I’d do another sackcloth-and-ashes post.  Which brings us to today.  Without further ado:

I. Quantum Money Falling Down

My and Paul Christiano’s explicit public-key quantum money scheme—the one based on low-degree polynomials—has now been fully broken.  To clarify, our abstract hidden-subspace scheme—the one that uses a classical black-box to test membership in the subspaces—remains totally fine.  Indeed, we unconditionally proved the security of the black-box scheme, and our security proof stands.  In the paper, though, we also stuck our necks out further, and conjectured that you could instantiate the black box, by publishing random low-degree polynomials that vanish on the subspaces you want to hide.  While I considered this superfluous, at Paul’s insistence, we also recommended adding completely-random “noise polynomials” for extra security.
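
To give a feel for what “polynomials that vanish on a hidden subspace” look like, here’s a toy sketch over F_2 (not the paper’s actual construction: the real scheme uses different degrees, parameters, and sampling, and the brute-force steps below only make sense at toy sizes): one cheap way to get low-degree polynomials vanishing on S is to multiply linear forms that vanish on S by random linear factors.

```python
import random

def span_elt(basis, mask):
    """Sum (mod 2) of the basis vectors selected by the bits of mask."""
    v = [0] * len(basis[0])
    for i, row in enumerate(basis):
        if (mask >> i) & 1:
            v = [(a + b) % 2 for a, b in zip(v, row)]
    return tuple(v)

def random_subspace(n, d, rng):
    """A random subspace S <= F_2^n of dimension d, via d independent
    basis vectors (independence checked crudely: 2^d distinct span elements)."""
    while True:
        basis = [[rng.randint(0, 1) for _ in range(n)] for _ in range(d)]
        span = {span_elt(basis, mask) for mask in range(2 ** d)}
        if len(span) == 2 ** d:
            return basis, span

def vanishing_linear_form(basis, rng):
    """A random nonzero linear form c.x that vanishes on all of S
    (found by brute force over coefficient vectors -- toy n only)."""
    n = len(basis[0])
    candidates = [c for c in range(1, 2 ** n)
                  if all(sum(((c >> j) & 1) * s[j] for j in range(n)) % 2 == 0
                         for s in basis)]
    c = rng.choice(candidates)
    return lambda x: sum(((c >> j) & 1) * x[j] for j in range(n)) % 2

def hidden_poly(basis, rng):
    """A published degree-2 polynomial: (linear form vanishing on S)
    times a random affine factor -- identically zero on S."""
    ell = vanishing_linear_form(basis, rng)
    r = [rng.randint(0, 1) for _ in range(len(basis[0]) + 1)]
    rand_lin = lambda x: (r[-1] + sum(ri * xi for ri, xi in zip(r, x))) % 2
    return lambda x: ell(x) * rand_lin(x)

rng = random.Random(0)
n = 8
basis, span = random_subspace(n, n // 2, rng)
polys = [hidden_poly(basis, rng) for _ in range(2 * n)]
assert all(p(v) == 0 for p in polys for v in span)   # all vanish on S
```

The security question, of course, is whether someone handed only the polynomials can recover S, which is exactly what the Gröbner-basis attacks described below turned out to do.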

Our scheme was broken in two stages.  First, in 2014, Pena et al. broke the noiseless version of our scheme, using Gröbner-basis methods, over fields of characteristic greater than 2.  Over F_2—the field we happened to use in our scheme—Pena et al. couldn’t quite prove that their attack worked, but they gave numerical evidence that at least it finds the subspaces in n^O(log n) time.  Note that nothing in Pena et al.’s attack is specific to quantum money: indeed, their attack consists of a purely classical algorithm, which efficiently solves the general classical problem of recovering large subspaces from polynomials that hide them.

At that point, at least the noisy version of our scheme—the one Paul had insisted we include—was still standing!  Indeed, the Gröbner-basis attack seemed to break down entirely when some of the polynomials were random garbage.

Later, though, Paul and Or Sattath realized that a quantum trick—basically, the single-copy tomography of Farhi et al.—can identify which polynomials are the noisy ones, provided we’re given a legitimate quantum money state to start with.  As a consequence, the problem of breaking the noisy scheme can be reduced to the problem of breaking the noiseless scheme—i.e., the problem that Pena et al. already essentially solved.

As bad as this sounds, it has an interesting positive consequence.  In our paper, Paul and I had actually given a security reduction for our money scheme based on low-degree polynomials.  In particular, we showed that there’s no polynomial-time quantum algorithm to counterfeit our money states, unless there’s a polynomial-time quantum algorithm that finds a basis for a subspace S ≤ F_2^n of dimension n/2 with Ω(2^(-n/2)) success probability, given a collection of low-degree polynomials p_1,…,p_m and q_1,…,q_m (m=O(n)) most of which vanish on S and its dual subspace respectively, but that are otherwise random.  So, running our reduction backwards, the only possible conclusion from the break is that there is such a quantum algorithm!  Yet we would’ve had no idea how to find that quantum algorithm without going through quantum money—nor do we know a classical algorithm for the problem, or even a quantum algorithm with Ω(1) success probability.

In the meantime, the problem of designing a public-key quantum money scheme, with good cryptographic evidence for its security, remains open.  It’s plausible that there’s some other, more secure way to instantiate my and Paul’s hidden subspace scheme, for example using lattices.  And even before we’ve found such a way, we can use indistinguishability obfuscation as a stopgap.  We could also seek cryptographic evidence for the security of other kinds of public-key quantum money, like Farhi et al.’s based on knot invariants.

A paper about all this is on our to-do stack. In the meantime, for further details, see Lecture 9 in my Barbados lecture notes.

II. A De-Merlinization Mistake

In my 2006 paper QMA/qpoly ⊆ PSPACE/poly: De-Merlinizing Quantum Protocols, the technical core of the complexity result was a new quantum information lemma that I called the “Quantum OR Bound” (Lemma 14 in the paper).

Basically, the Quantum OR Bound says that, if we have an unknown quantum state ρ, as well as a collection of measurements M_1,…,M_n that we might want to make on ρ, then we can distinguish the case that (a) every M_i rejects ρ with overwhelming probability, from the case that (b) at least one M_i accepts ρ with high probability.  And we can do this despite having only one copy of ρ, and despite the fact that earlier measurements might corrupt ρ, thereby compromising the later measurements.  The intuition is simply that, if the earlier measurements corrupted ρ substantially, that could only be because some of them had a decent probability of accepting ρ, meaning that at any rate, we’re not in case (a).

I’ve since reused the Quantum OR Bound for other problems—most notably, a proof that private-key quantum money requires either a computational assumption or a huge database maintained by the bank (see Theorem 8.3.1 in my Barbados lecture notes).

Alas, Aram Harrow and Ashley Montanaro recently discovered that my proof of the Quantum OR Bound is wrong.  It’s wrong because I neglected the possibility of “Zeno-like behavior,” in which repeated measurements on a quantum state would gradually shift the state far away from its starting point, without ever having a significant probability of rejecting the state.  For some reason, I assumed without any adequate argument that choosing the measurements at random, rather than in a predetermined order, would solve that problem.

Now, I might actually be right that randomizing the measurements is enough to solve the Zeno problem!  That remains a plausible conjecture, which Harrow and Montanaro could neither confirm nor refute.  In the meantime, though, Harrow and Montanaro were able to recover my QMA/qpoly ⊆ PSPACE/poly theorem, and all the other conclusions known to follow from the Quantum OR Bound (including some new ones that they discovered), by designing a new measurement procedure whose soundness they can prove.

Their new procedure is based on an elegant, obvious-in-retrospect idea that somehow never occurred to me.  Namely, instead of just applying M_i’s to ρ, one can first put a control qubit into an equal superposition of the |0〉 and |1〉 states, and then apply M_i’s conditioned on the control qubit being in the |1〉 state.  While doing this, one can periodically measure the control qubit in the {|+〉,|-〉} basis, in order to check directly whether applying the M_i’s has substantially corrupted ρ.  (If it hasn’t, one will always get the outcome |+〉; if it has, one might get |-〉.)  Substantial corruption, if detected, then tells us that some M_i’s must have had non-negligible probabilities of accepting ρ.
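
Here’s a numpy toy of that control-qubit trick—a simplified sketch, with tiny dimensions and invented test projectors, that deterministically tracks only the branch where every controlled test rejected (not the full stochastic procedure).  When every test has only a tiny probability of accepting ψ, the control qubit stays close to |+〉; planting a single test that accepts ψ outright drives substantial probability onto |-〉.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4                                # toy dimension for the unknown state
psi = np.zeros(d); psi[0] = 1.0      # rho = |psi><psi|

def tiny_accept_projector(eps):
    """Rank-1 projector accepting psi with probability only eps."""
    v = rng.normal(size=d)
    v[0] = 0.0                       # start orthogonal to psi...
    v /= np.linalg.norm(v)
    v = np.sqrt(1 - eps) * v + np.sqrt(eps) * psi   # ...then tilt by sqrt(eps)
    return np.outer(v, v)

tests = [tiny_accept_projector(1e-4) for _ in range(50)]

def minus_probability(projectors):
    """Prob. of reading |-> on the control, given that every controlled
    test rejected.  Control starts in |+>: the |0> branch leaves psi
    untouched, while each rejected test multiplies the |1> branch by
    a factor (I - Pi)."""
    b0 = psi.copy()
    b1 = psi.copy()
    for Pi in projectors:
        b1 = (np.eye(d) - Pi) @ b1
    return np.sum((b0 - b1) ** 2) / (2 * (b0 @ b0 + b1 @ b1))

# All tests nearly always reject psi: the control stays close to |+>.
p_minus_good = minus_probability(tests)

# Plant one test that accepts psi outright: the |1> branch is nearly
# killed, and the control now yields |-> with substantial probability.
p_minus_bad = minus_probability(tests[:25] + [np.outer(psi, psi)] + tests[25:])
```

So measuring the control in the {|+〉,|-〉} basis really does act as a cheap corruption detector: it only deflects from |+〉 to the extent that the |1〉 branch has drifted away from the untouched |0〉 branch.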

III. Almost As Good As True

One lemma that I’ve used even more than the Quantum OR Bound is what I’ve called the “Almost As Good As New Lemma,” and what others in the field have called the “Gentle Measurement Lemma.”

I claimed a proof of the AAGANL in my 2004 paper Limitations of Quantum Advice and One-Way Communication (Lemma 2.2 there), and have used the lemma in like half a dozen later papers.  Alas, when I lectured at Barbados, Sasha Razborov and others discovered that my proof of the AAGANL was missing a crucial step!  More concretely, the proof I gave there works for pure states but not for mixed states.  For mixed states, the trouble is that I take a purification of the mixed state—something that always exists mathematically—but then illegally assume that the measurement I’m analyzing acts on the particular purification I’ve conjured up.

Fortunately, one can easily fix this problem by decomposing the state ρ into a mixture of pure states, then applying my earlier argument to each pure state separately, and finally using Cauchy-Schwarz (or just the concavity of the square-root function) to recombine the results.  Moreover, this is exactly what other people’s proofs of the Gentle Measurement Lemma did do, though I’d never noticed it before Barbados—I just idly wondered why those other proofs took twice as long as mine to do the same work!  For a correct proof, see Lemma 1.3.1 in the Barbados lecture notes.
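
Schematically, that recombination step looks as follows—a sketch only, with the exact constants suppressed since they depend on which formulation of the lemma one uses.  Here ε_i is the rejection probability on the i-th pure component |ψ_i〉, with ψ̃_i its post-measurement state and ρ̃ their mixture:

```latex
% Sketch: the mixed-state case of the Almost As Good As New / Gentle
% Measurement Lemma, constants suppressed.
\rho = \sum_i p_i \, |\psi_i\rangle\langle\psi_i| , \qquad
\varepsilon = \sum_i p_i \, \varepsilon_i ,
\qquad\text{and then}\qquad
\| \tilde\rho - \rho \|_{\mathrm{tr}}
  \;\le\; \sum_i p_i \, \| \tilde\psi_i - \psi_i \|_{\mathrm{tr}}
  \;\le\; \sum_i p_i \sqrt{\varepsilon_i}
  \;\le\; \sqrt{ \sum_i p_i \, \varepsilon_i }
  \;=\; \sqrt{\varepsilon} .
```

The first inequality is convexity of the trace norm over the mixture, the second is the pure-state case, and the third is Jensen’s inequality for the concave square root.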

IV. Oracle Woes

In my 2010 paper BQP and the Polynomial Hierarchy, I claimed to construct oracles A relative to which BQP ⊄ BPP_path and BQP ⊄ SZK, even while making only partial progress toward the big prize, which would’ve been an oracle relative to which BQP ⊄ PH.  Not only that: I claimed to show that any problem with a property called “almost k-wise independence”—one example being the Forrelation (or Fourier Checking) problem that I introduced in that paper—was neither in BPP_path nor in SZK.  I also showed that Forrelation is in BQP, thus yielding the separations.

Alas, this past spring Lijie Chen, who was my superb visiting student from Tsinghua University, realized that my proofs of these particular separations were wrong.  Not only that, they were wrong because I implicitly substituted a ratio of expectations for an expectation of ratios (!).  Again, it might still be true that almost k-wise independent problems can be neither in BPP_path nor in SZK: that remains an interesting conjecture, which Lijie was unable to resolve one way or the other.  (On the other hand, I showed here that almost k-wise independent problems can be in PH.)
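
In case that slip sounds abstract: a ratio of expectations and an expectation of ratios really can differ by a constant factor, as a two-line numerical check shows (toy distributions chosen purely for illustration).

```python
import numpy as np

rng = np.random.default_rng(0)
# X and Y independent, each uniform on [1, 2].
x = rng.uniform(1, 2, size=200_000)
y = rng.uniform(1, 2, size=200_000)

ratio_of_expectations = x.mean() / y.mean()   # about 1.0
expectation_of_ratios = (x / y).mean()        # about 1.04 (= 1.5 * ln 2)
```

Here E[X]/E[Y] = 1, while E[X/Y] = E[X]·E[1/Y] = 1.5·ln 2 ≈ 1.04—harmless in this toy, but fatal when the whole proof turns on which quantity you bounded.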

But never fear!  In a recent arXiv preprint, Lijie has supplied correct proofs for the BQP ⊄ BPP_path and BQP ⊄ SZK oracle separations—using the same Forrelation problem that I studied, but exploiting additional properties of Forrelation beyond its almost k-wise independence.  Lijie notes that my proofs, had they worked, would also have yielded an oracle relative to which BQP ⊄ AM, which would’ve been a spectacular result, and nontrivial progress toward BQP ⊄ PH.  His proofs, by contrast, apply only to worst-case decision problems rather than problems of distinguishing two probability distributions, and therefore don’t imply anything about BQP vs. AM.  Anyway, there’s other cool stuff in his paper too.

V. We Needed More Coffee

This is one I’ve already written about on this blog, but just in case anyone missed it … in my, Sean Carroll, and Lauren Ouellette’s original draft paper on the coffee automaton, the specific rule we discuss doesn’t generate any significant amount of complexity (in the sense of coarse-grained entropy).  We wrongly thought it did, because of a misinterpretation of our simulation data.  But as Brent Werness brought to our attention, not only does a corrected simulation not show any complexity bump, one can rigorously prove there’s no complexity bump.  And we could’ve realized all this from the beginning, by reflecting that pure random diffusion (e.g., what cream does in coffee when you don’t stir it with a spoon) doesn’t actually produce interesting tendril patterns.

On the other hand, Brent proposed a different rule—one that involves “shearing” whole regions of cream and coffee across each other—that does generate significant complexity, basically because of all the long-range correlations it induces.  And not only do we clearly see this in simulations, but the growth of complexity can be rigorously proven!  Anyway, we have a long-delayed revision of the paper that will explain all this in more detail, with Brent as well as MIT student Varun Mohan now added as coauthors.

If any of my colleagues feel inspired to write up their own “litanies of mathematical error,” they’re welcome to do so in the comments!  Just remember: you don’t earn any epistemic virtue points unless the errors you reveal actually embarrass you.  No humblebragging about how you once left out a minus sign in your paper that won the Fields Medal.

### Me interviewed by John Horgan (the author of “The End of Science”)

Thursday, April 21st, 2016

It’s long (~12,000 words).  Rather than listing what this interview covers, it would be easier to list what it doesn’t cover.  (My favorite soda flavors?)

If you read this blog, much of what I say there will be old hat, but some of it will be new.  I predict that you’ll enjoy the interview iff you enjoy the blog.  Comments welcome.

### The universe has a high (but not infinite) Sleep Number

Friday, February 12th, 2016

As everyone knows, this was a momentous week in the history of science.  And I don’t need to tell you why: the STOC and CCC accepted paper lists finally came out.

Haha, kidding!  I meant, we learned this week that gravitational waves were directly detected for the first time, a hundred years after Einstein first predicted them (he then reneged on the prediction, then reinstated it, then reneged again, then reinstated it a second time—see Daniel Kennefick’s article for some of the fascinating story).

By now, we all know some of the basic parameters here: a merger of two black holes, ~1.3 billion light-years away, weighing ~36 and ~29 solar masses respectively, which (when they merged) gave off 3 solar masses’ worth of energy in the form of gravitational waves—in those brief 0.2 seconds, radiating more watts of power than all the stars in the observable universe combined.  By the time the waves reached earth, they were only stretching and compressing space by 1 part in 4×10^21—thus, changing the lengths of the 4-kilometer arms of LIGO by 10^-18 meters (1/1000 the diameter of a proton).  But this was detected, in possibly the highest-precision measurement ever made.

As I read the historic news, there’s one question that kept gnawing at me: how close would you need to have been to the merging black holes before you could, you know, feel the distortion of space?  I made a guess, assuming the strength of gravitational waves fell off with distance as 1/r^2.  Then I checked Wikipedia and learned that the strength falls off only as 1/r, which completely changes the situation, and implies that the answer to my question is: you’d need to be very close.  Even if you were only as far from the black-hole cataclysm as the earth is from the sun, I get that you’d be stretched and squished by a mere ~50 nanometers (this interview with Jennifer Ouellette and Amber Stuver says 165 nanometers, but as a theoretical computer scientist, I try not to sweat factors of 3).  Even if you were 3000 miles from the black holes—New-York/LA distance—I get that the gravitational waves would only stretch and squish you by around a millimeter.  Would you feel that?  Not sure.  At 300 miles, it would be maybe a centimeter—though presumably the linearized approximation is breaking down by that point.  (See also this Physics StackExchange answer, which reaches similar conclusions, though again off from mine by factors of 3 or 4.)  Now, the black holes themselves were orbiting about 200 miles from each other before they merged.  So, the distance at which you could safely feel their gravitational waves, isn’t too far from the distance at which they’d rip you to shreds and swallow you!
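
For anyone who wants to redo the arithmetic, here’s the back-of-envelope in Python, under the same assumptions as above: strain 1/(4×10^21) at 1.3 billion light-years, falling off as 1/r, acting on a ~2-meter person.  None of these numbers should be trusted beyond factors of a few.

```python
h_earth = 1 / 4e21            # strain when the wave reached Earth
r_earth = 1.3e9 * 9.461e15    # 1.3 billion light-years, in meters
body = 2.0                    # height of a ~2-meter person, in meters

def stretch(r_meters):
    """Stretch of the body (in meters) at distance r, assuming strain ~ 1/r."""
    return h_earth * (r_earth / r_meters) * body

au = 1.496e11                 # Earth-Sun distance, in meters
mile = 1609.34

ligo_arm = h_earth * 4000              # ~1e-18 m: LIGO's measured displacement
at_one_au = stretch(au)                # ~4e-8 m: tens of nanometers
at_3000_miles = stretch(3000 * mile)   # ~1e-3 m: about a millimeter
at_300_miles = stretch(300 * mile)     # ~1e-2 m: about a centimeter
```

The 1 AU figure comes out around 40 nanometers—between my ~50 and the interview’s 165, i.e., within the factors of 3 I’ve already agreed not to sweat.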

In summary, to stretch and squeeze spacetime by just a few hundred nanometers per meter, along the surface of a sphere whose radius equals our orbit around the sun, requires more watts of power than all the stars in the observable universe give off as starlight.  People often say that the message of general relativity is that matter bends spacetime “as if it were a mattress.”  But they should add that the reason it took so long for humans to notice this, is that it’s a really friggin’ firm mattress, one that you need to bounce up and down on unbelievably hard before it quivers, and would probably never want to sleep on.

As if I needed to say it, this post is an invitation for experts to correct whatever I got wrong.  Public humiliation, I’ve found, is a very fast and effective way to learn an unfamiliar field.