GapP, Oracles, and Quantum Supremacy

Let me start with a few quick announcements before the main entrée:

First, the website haspvsnpbeensolved.com is now live!  Thanks so much to my friend Adam Chalmers for setting it up.  Please try it out on your favorite P vs. NP solution paper—I think you’ll be impressed by how well our secret validation algorithm performs.

Second, some readers might enjoy a YouTube video of me lecturing about the computability theory of closed timelike curves, from the Workshop on Computational Complexity and High Energy Physics at the University of Maryland a month ago.  Other videos from the workshop—including of talks by John Preskill, Daniel Harlow, Stephen Jordan, and other names known around Shtetl-Optimized, and of a panel discussion in which I participated—are worth checking out as well.  Thanks so much to Stephen for organizing such a great workshop!

Third, thanks to everyone who’s emailed to ask whether I’m holding up OK with Hurricane Harvey, and whether I know how to swim (I do).  As it happens, I haven’t been in Texas for two months—I spent most of the summer visiting NYU and doing other travel, and this year, Dana and I are doing an early sabbatical at Tel Aviv University.  However, I understand from friends that Austin, being several hours’ drive further inland, got nothing compared to what Houston did, and that UT is open on schedule for the fall semester.  Hopefully our house is still standing as well!  Our thoughts go to all those affected by the disaster in Houston.  Eventually, the Earth’s rapidly destabilizing climate almost certainly means that Austin will be threatened as well by “500-year events” happening every year or two, as for that matter will a large portion of the earth’s surface.  For now, though, Austin lives to be weird another day.


GapP, Oracles, and Quantum Supremacy

by Scott Aaronson

Stuart Kurtz 60th Birthday Conference, Columbia, South Carolina

August 20, 2017

It’s great to be here, to celebrate the life and work of Stuart Kurtz, which could never be … eclipsed … by anything.

I wanted to say something about work in structural complexity and counting complexity and oracles that Stuart was involved with “back in the day,” and how that work plays a major role in issues that concern us right now in quantum computing.  A major goal for the next few years is the unfortunately-named Quantum Supremacy.  What this means is to get a clear quantum speedup, for some task: not necessarily a useful task, but something that we can be as confident as possible is classically hard.  For example, consider the 49-qubit superconducting chip that Google is planning to fabricate within the next year or so.  This won’t yet be good enough for running Shor’s algorithm, to factor numbers of any interesting size, but it hopefully will be good enough to sample from a probability distribution over n-bit strings—in this case, 49-bit strings—that’s hard to sample from classically, taking somewhere on the order of 2^49 steps.

Furthermore, the evidence that that sort of thing is indeed classically hard, might actually be stronger than the evidence that factoring is classically hard.  As I like to say, a fast classical factoring algorithm would “merely” collapse the world’s electronic commerce—as far as we know, it wouldn’t collapse the polynomial hierarchy!  By contrast, a fast classical algorithm to simulate quantum sampling would collapse the polynomial hierarchy, assuming the simulation is exact.  Let me first go over the argument for that, and then explain some of the more recent things we’ve learned.

Our starting point will be two fundamental complexity classes, #P and GapP.

#P is the class of all nonnegative integer functions f, for which there exists a nondeterministic polynomial-time Turing machine M such that f(x) equals the number of accepting paths of M(x).  Less formally, #P is the class of problems that boil down to summing up an exponential number of nonnegative terms, each of which is efficiently computable individually.

GapP—introduced by Fenner, Fortnow, and Kurtz in 1992—can be defined as the set {f-g : f,g∈#P}; that is, the closure of #P under subtraction.  Equivalently, GapP is the class of problems that boil down to summing up an exponential number of terms, each of which is efficiently computable individually, but which could be either positive or negative, and which can therefore cancel each other out.  As you can see, GapP is a class that in some sense anticipates quantum computing!
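
To make the distinction concrete, here’s a minimal brute-force sketch in Python.  The predicate phi and the ±1 function s are toy stand-ins for arbitrary polynomial-time computable functions; real #P/GapP instances have exponentially many terms that we couldn’t actually enumerate like this.

```python
from itertools import product

phi = lambda z: z.count(1) % 3 == 0              # toy polynomial-time predicate
s = lambda z: 1 if z.count(1) % 2 == 0 else -1   # toy polynomial-time +/-1 function

n = 12
# A #P function: a sum of exponentially many nonnegative terms
sharp_p = sum(phi(z) for z in product([0, 1], repeat=n))
# A GapP function: the same kind of sum, but the terms can cancel
gap_p = sum(s(z) for z in product([0, 1], repeat=n))
print(sharp_p, gap_p)
```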

For our purposes, the most important difference between #P and GapP is that #P functions can at least be multiplicatively approximated in the class BPP^NP, by using Stockmeyer’s technique of approximate counting with universal hash functions.  By contrast, even if you just want to approximate a GapP function to within (say) a factor of 2—or for that matter, just decide whether a GapP function is positive or negative—it’s not hard to see that that’s already a #P-hard problem.  For, supposing we had an oracle to solve this problem, we could then shift the sum this way and that by adding positive and negative dummy terms, and use binary search, to zero in on the sum’s exact value in polynomial time.
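
Here’s a minimal sketch of that binary-search argument, with the #P-hard threshold queries simulated by brute force on a toy ±1 sum (in the real argument, each query would be answered by the hypothesized oracle, after shifting the sum with dummy terms of the right sign):

```python
import random

terms = [random.choice([-1, 1]) for _ in range(2 ** 10)]  # toy GapP-style terms
true_sum = sum(terms)

def threshold_oracle(t):
    # Stands in for the #P-hard query "is the sum at least t?", which reduces
    # to a sign query after adding |t| dummy terms of the appropriate sign
    return true_sum >= t

lo, hi = -len(terms), len(terms)
while lo < hi:                      # O(n) queries pin down a sum of 2^n terms
    mid = (lo + hi + 1) // 2
    if threshold_oracle(mid):
        lo = mid
    else:
        hi = mid - 1
print(lo == true_sum)               # True: we recovered the exact value
```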

It’s also not hard to see that a quantum computation can encode an arbitrary GapP function in one of its amplitudes.  Indeed, let s:{0,1}^n→{1,-1} be any Boolean function that’s given by a polynomial-size circuit.  Then consider the following Fourier sampling circuit: apply a Hadamard gate to each of n qubits initialized to |0⟩, then a phase of s(z) to each computational basis state |z⟩, then another layer of Hadamards, and finally measure all n qubits in the computational basis.

When we run this circuit, the probability that we see the all-0 string as output is

$$ \left( \frac{1}{2^n} \sum_{z\in \{0,1\}^n} s(z) \right)^2 = \frac{1}{4^n} \sum_{z,w\in \{0,1\}^n} s(z)\, s(w) $$

which is (up to normalization) clearly a GapP function, and clearly #P-hard even to approximate to within a multiplicative factor.
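
For small n, this is easy to check by direct statevector simulation.  Here’s a minimal numpy sketch of the Hadamard–phase–Hadamard circuit just described, with a random s standing in for one computed by a small circuit:

```python
import numpy as np

n = 4
rng = np.random.default_rng(0)
s = rng.choice([1, -1], size=2 ** n)      # toy stand-in for a circuit-computed s

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
Hn = H
for _ in range(n - 1):
    Hn = np.kron(Hn, H)                   # build H^(tensor n)

state = Hn[:, 0]                          # Hadamards applied to |0...0>
state = s * state                         # diagonal phase s(z) on each |z>
state = Hn @ state                        # second layer of Hadamards

# Pr[all-0 outcome] equals the squared, normalized GapP sum
print(abs(state[0]) ** 2, (s.sum() / 2 ** n) ** 2)
```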

By contrast, suppose we had a probabilistic polynomial-time classical algorithm, call it M, to sample the output distribution of the above quantum circuit.  Then we could rewrite the above probability as Pr_r[M(r) outputs 0…0], where r consists of the classical random bits used by M.  This is again an exponentially large sum, with one term for each possible r value—but now it’s a sum of nonnegative terms (probabilities), which is therefore approximable in BPP^NP.

We can state the upshot as follows.  Let ExactSampBPP be the class of sampling problems—that is, families of probability distributions {D_x}_x, one for each input x∈{0,1}^n—for which there exists a randomized algorithm that outputs a sample exactly from D_x in time polynomial in |x|.  Let ExactSampBQP be the same thing except that we allow a polynomial-time quantum algorithm.  Then we have that, if ExactSampBPP = ExactSampBQP, then squared sums of both positive and negative terms could efficiently be rewritten as sums of nonnegative terms only—and hence P^#P = BPP^NP.  This, in turn, would collapse the polynomial hierarchy to the third level, by Toda’s Theorem that PH ⊆ P^#P, together with the result BPP^NP ⊆ Σ3^P.  To summarize:

Theorem 1.  Quantum computers can efficiently solve exact sampling problems that are classically hard unless the polynomial hierarchy collapses.

(In fact, the argument works not only if the classical algorithm exactly samples D_x, but if it samples from any distribution in which the probabilities are multiplicatively close to D_x‘s.  If we really only care about exact sampling, then we can strengthen the conclusion to get that PH collapses to the second level.)

This sort of reasoning was implicit in several early works, including those of Fenner et al. and Terhal and DiVincenzo.  It was made fully explicit in my paper with Alex Arkhipov on BosonSampling in 2011, and in the independent work of Bremner, Jozsa, and Shepherd on the IQP model.  These works actually showed something stronger, which is that we get a collapse of PH, not merely from a fast classical algorithm to simulate arbitrary quantum systems, but from fast classical algorithms to simulate various special quantum systems.  In the case of BosonSampling, that special system is a collection of identical, non-interacting photons passing through a network of beamsplitters, then being measured at the very end to count the number of photons in each mode.  In the case of IQP, the special system is a collection of qubits that are prepared, subjected to some commuting Hamiltonians acting on various subsets of the qubits, and then measured.  These special systems don’t seem to be capable of universal quantum computation (or for that matter, even universal classical computation!)—and correspondingly, many of them seem easier to realize in the lab than a full universal quantum computer.

From an experimental standpoint, though, all these results are unsatisfactory, because they all talk only about the classical hardness of exact (or very nearly exact) sampling—and indeed, the arguments are based around the hardness of estimating just a single, exponentially-small amplitude.  But any real experiment will have tons of noise and inaccuracy, so it seems only fair to let the classical simulation be subject to serious noise and inaccuracy as well—but as soon as we do, the previous argument collapses.

Thus, from the very beginning, Alex Arkhipov and I took it as our “real” goal to show, under some reasonable assumption, that there’s a distribution D that a polynomial-time quantum algorithm can sample from, but such that no polynomial-time classical algorithm can sample from any distribution that’s even ε-close to D in variation distance.  Indeed, this goal is what led us to BosonSampling in the first place: we knew that we needed amplitudes that were not only #P-hard but “robustly” #P-hard; we knew that the permanent of an n×n matrix (at least over finite fields) was the canonical example of a “robustly” #P-hard function; and finally, we knew that systems of identical non-interacting bosons, such as photons, gave rise to amplitudes that were permanents in an extremely natural way.  The fact that photons actually exist in the physical world, and that our friends with quantum optics labs like to do experiments with them, was just a nice bonus!

A bit more formally, let ApproxSampBPP be the class of sampling problems for which there exists a classical algorithm that, given an input x∈{0,1}^n and a parameter ε>0, samples a distribution that’s at most ε away from D_x in variation distance, in time polynomial in n and 1/ε.  Let ApproxSampBQP be the same except that we allow a quantum algorithm.  Then the “dream” result that we’d love to prove—both then and now—is the following.
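
For concreteness, variation distance here is half the ℓ1 distance between the two probability vectors; a minimal helper:

```python
def variation_distance(p, q):
    """Total variation distance between two distributions given as dicts."""
    return 0.5 * sum(abs(p.get(x, 0.0) - q.get(x, 0.0)) for x in set(p) | set(q))

# An approximate sampler may output any q with variation_distance(p, q) <= eps:
print(variation_distance({'00': 0.5, '11': 0.5}, {'00': 0.4, '11': 0.5, '01': 0.1}))
```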

Strong Quantum Supremacy Conjecture.  If ApproxSampBPP = ApproxSampBQP, then the polynomial hierarchy collapses.

Unfortunately, Alex and I were only able to prove this conjecture assuming a further hypothesis, about the permanents of i.i.d. Gaussian matrices.

Theorem 2 (A.-Arkhipov).  Given an n×n matrix X of independent complex Gaussian entries, each of mean 0 and variance 1, assume it’s a #P-hard problem to approximate |Per(X)|^2 to within ±ε⋅n!, with probability at least 1-δ over the choice of X, in time polynomial in n, 1/ε, and 1/δ.  Then the Strong Quantum Supremacy Conjecture holds.  Indeed, more than that: in such a case, even a fast approximate classical simulation of BosonSampling, in particular, would imply P^#P = BPP^NP and hence a collapse of PH.

Alas, after some months of effort, we were unable to prove the needed #P-hardness result for Gaussian permanents, and it remains an outstanding open problem—there’s not even a consensus as to whether it should be true or false.  Note that there is a famous polynomial-time classical algorithm to approximate the permanents of nonnegative matrices, due to Jerrum, Sinclair, and Vigoda, but that algorithm breaks down for matrices with negative or complex entries.  This is once again the power of cancellations, the difference between #P and GapP.
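
For small n, one can compute these permanents exactly with Ryser’s inclusion-exclusion formula, itself a GapP-style signed sum over 2^n terms.  Here’s a minimal numpy sketch, including how to draw the i.i.d. complex Gaussian matrices in question:

```python
import itertools
import numpy as np

def permanent_ryser(A):
    """Permanent via Ryser's formula: Per(A) = (-1)^n * sum over nonempty
    column subsets S of (-1)^|S| * prod_i sum_{j in S} A[i,j].  O(2^n n^2)."""
    n = A.shape[0]
    total = 0.0
    for k in range(1, n + 1):
        for S in itertools.combinations(range(n), k):
            total += (-1) ** k * np.prod(A[:, list(S)].sum(axis=1))
    return (-1) ** n * total

n = 6
rng = np.random.default_rng(0)
# n x n matrix of i.i.d. complex Gaussians with mean 0 and variance 1
X = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
print(abs(permanent_ryser(X)) ** 2)   # the quantity in Theorem 2
```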

Frustratingly, if we want the exact permanents of i.i.d. Gaussian matrices, we were able to prove that that’s #P-hard; and if we want the approximate permanents of arbitrary matrices, we also know that that’s #P-hard—it’s only when we have approximation and random inputs in the same problem that we no longer have the tools to prove #P-hardness.

In the meantime, one can also ask a meta-question.  How hard should it be to prove the Strong Quantum Supremacy Conjecture?  Were we right to look at slightly exotic objects, like the permanents of Gaussian matrices?  Or could Strong Quantum Supremacy have a “pure, abstract complexity theory proof”?

Well, one way to formalize that question is to ask whether Strong Quantum Supremacy has a relativizing proof, a proof that holds in the presence of an arbitrary oracle.  Alex and I explicitly raised that as an open problem in our BosonSampling paper.

Note that “weak” quantum supremacy—i.e., the statement that ExactSampBPP = ExactSampBQP collapses the polynomial hierarchy—has a relativizing proof, namely the proof that I sketched earlier.  All the ingredients that we used—Toda’s Theorem, Stockmeyer approximate counting, simple manipulations of quantum circuits—were relativizing ingredients.  By contrast, all the way back in 1998, Fortnow and Rogers proved the following.

Theorem 3 (Fortnow and Rogers).  There exists an oracle relative to which P=BQP and yet PH is infinite.

In other words, if you want to prove that P=BQP collapses the polynomial hierarchy, the proof can’t be relativizing.  This theorem was subsequently generalized in a paper by Fenner, Fortnow, Kurtz, and Li, which used concepts like “generic oracles” that seem powerful but that I don’t understand.

The trouble is, Fortnow and Rogers’s construction was extremely tailored to making P=BQP.  It didn’t even make PromiseBPP=PromiseBQP (that is, it allowed that quantum computers might still be stronger than classical ones for promise problems), let alone did it collapse quantum with classical for sampling problems.

We can organize the various quantum/classical collapse possibilities as follows:

ExactSampBPP = ExactSampBQP

⇓

ApproxSampBPP = ApproxSampBQP   ⇔   FBPP = FBQP

⇓

PromiseBPP = PromiseBQP

⇓

BPP = BQP

Here FBPP is the class of relation problems solvable in randomized polynomial time—that is, problems where given an input x∈{0,1}^n and a parameter ε>0, the goal is to produce any output in a certain set S_x, with success probability at least 1-ε, in time polynomial in n and 1/ε.  FBQP is the same thing except for quantum polynomial time.

The equivalence between the two equalities ApproxSampBPP = ApproxSampBQP and FBPP = FBQP is not obvious, and was the main result in my 2011 paper The Equivalence of Sampling and Searching.  While it’s easy to see that ApproxSampBPP = ApproxSampBQP implies FBPP = FBQP, the opposite direction requires us to take an arbitrary sampling problem S, and define a relation problem R_S that has “essentially the same difficulty” as S (in the sense that R_S has an efficient classical algorithm iff S does, R_S has an efficient quantum algorithm iff S does, etc.).  This, in turn, we do using Kolmogorov complexity: basically, R_S asks us to output a tuple of samples that have large probabilities according to the requisite probability distribution from the sampling problem; and that also, conditioned on that, are close to algorithmically random.  The key observation is that, if a probabilistic Turing machine of fixed size can solve that relation problem for arbitrarily large inputs, then it must be doing so by sampling from a probability distribution close in variation distance to D_x—since any other approach would lead to outputs that were algorithmically compressible.

Be that as it may, staring at the chain of implications above, a natural question is which equalities in the chain collapse the polynomial hierarchy in a relativizing way, and which equalities collapse PH (if they do) only for deeper, non-relativizing reasons.

This is one of the questions that Lijie Chen and I took up, and settled, in our paper Complexity-Theoretic Foundations of Quantum Supremacy Experiments, which was presented at this summer’s Computational Complexity Conference (CCC) in Riga.  The “main” results in our paper—or at least, the results that the physicists care about—were about how confident we can be in the classical hardness of simulating quantum sampling experiments with random circuits, such as the experiments that the Google group will hopefully be able to do with its 49-qubit device in the near future.  This involved coming up with a new hardness assumption, which was tailored to those sorts of experiments, and giving a reduction from that new assumption, and studying how far existing algorithms come toward breaking the new assumption (tl;dr: not very far).

But our paper also had what I think of as a “back end,” containing results mainly of interest to complexity theorists, about what kinds of quantum supremacy theorems we can and can’t hope for in principle.  When I’m giving talks about our paper to physicists, I never have time to get to this back end—it’s always just “blah, blah, we also did some stuff involving structural complexity and oracles.”  But given that a large fraction of all the people on earth who enjoy those things are probably right here in this room, in the rest of this talk, I’d like to tell you about what was in the back end.

The first thing there was the following result.

Theorem 4 (A.-Chen).  There exists an oracle relative to which ApproxSampBPP = ApproxSampBQP and yet PH is infinite. In other words, any proof of the Strong Quantum Supremacy Conjecture will require non-relativizing techniques.

Theorem 4 represents a substantial generalization of Fortnow and Rogers’s Theorem 3, in that it makes quantum and classical equivalent not only for promise problems, but even for approximate sampling problems.  There’s also a sense in which Theorem 4 is the best possible: as we already saw, there are no oracles relative to which ExactSampBPP = ExactSampBQP and yet PH is infinite, because the proof that ExactSampBPP = ExactSampBQP collapses PH relativizes.

So how did we prove Theorem 4?  Well, we learned at this workshop that Stuart Kurtz pioneered the development of principled ways to prove oracle results just like this one, with multiple “nearly conflicting” requirements.  But, because we didn’t know that at the time, we basically just plunged in and built the oracle we wanted by hand!

In more detail, you can think of our oracle construction as proceeding in three steps.

  1. We throw in an oracle for a PSPACE-complete problem.  This collapses ApproxSampBPP with ApproxSampBQP, which is what we want.  Unfortunately, it also collapses the polynomial hierarchy down to P, which is not what we want!
  2. So then we need to add in a second part of the oracle that makes PH infinite again.  From Håstad’s seminal work in the 1980s until recently, even if we just wanted any oracle that makes PH infinite, without doing anything else at the same time, we only knew how to achieve that with quite special oracles.  But in their 2015 breakthrough, Rossman, Servedio, and Tan have shown that even a random oracle makes PH infinite with probability 1.  So for simplicity, we might as well take this second part of the oracle to be random.  The “only” problem is that, along with making PH infinite, a random oracle will also re-separate ApproxSampBPP and ApproxSampBQP (and for that matter, even ExactSampBPP and ExactSampBQP)—for example, because of the Fourier sampling task performed by the quantum circuit I showed you earlier!  So we once again seem back where we started.
    (To ward off confusion: ever since Fortnow and Rogers posed the problem in 1998, it remains frustratingly open whether BPP and BQP can be separated by a random oracle—that’s a problem that I and others have worked on, making partial progress that makes a query complexity separation look unlikely without definitively ruling one out.  But separating the sampling versions of BPP and BQP by a random oracle is much, much easier.)
  3. So, finally, we need to take the random oracle that makes PH infinite, and “scatter its bits around randomly” in such a way that a PH machine can still find the bits, but an ApproxSampBQP machine can’t.  In other words: given our initial random oracle A, we can make a new oracle B such that B(y,r)=(1,A(y)) if r is equal to a single randomly-chosen “password” r_y, depending on the query y, and B(y,r)=(0,0) otherwise.  In that case, it takes just one more existential quantifier to guess the password r_y, so PH can do it, but a quantum algorithm is stuck, basically because the linearity of quantum mechanics makes the algorithm not very sensitive to tiny random changes to the oracle string (i.e., the same reason why Grover’s algorithm can’t be arbitrarily sped up).  Incidentally, the reason why the password r_y needs to depend on the query y is that otherwise the input x to the quantum algorithm could hardcode a password, and thereby reveal exponentially many bits of the random oracle A.  (A toy model of this oracle appears just after this list.)
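
Here’s a toy, classical-query model of the oracle B from step 3.  The class name and the 16-bit password length are just for illustration; the real construction uses exponentially long passwords and, of course, has to be analyzed against quantum queries.

```python
import secrets
from collections import defaultdict

A = defaultdict(lambda: secrets.randbits(1))    # the random oracle from step 2

class ScatteredOracle:
    """B(y, r) = (1, A(y)) if r is the hidden password r_y, else (0, 0)."""
    def __init__(self, A, password_bits=16):
        self.A = A
        self.r = defaultdict(lambda: secrets.randbits(password_bits))
    def B(self, y, r):
        # One extra existential quantifier guesses r_y, so PH finds A(y);
        # a quantum algorithm barely notices these few hidden 1s.
        return (1, self.A[y]) if r == self.r[y] else (0, 0)

oracle = ScatteredOracle(A)
print(oracle.B(42, 0))    # almost surely (0, 0) without the password
```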

We should now check: why does the above oracle “only” collapse ApproxSampBPP and ApproxSampBQP?  Why doesn’t it also collapse ExactSampBPP and ExactSampBQP—as we know that it can’t, by our previous argument?  The answer is: because a quantum algorithm does have an exponentially small probability of correctly guessing a given password r_y.  And that’s enough to make the distribution sampled by the quantum algorithm differ, by 1/exp(n) in variation distance, from the distribution sampled by any efficient classical simulation of the algorithm—an error that doesn’t matter for approximate sampling, but does matter for exact sampling.

Anyway, it’s then just like seven pages of formalizing the above intuitions and you’re done!

OK, since there seems to be time, I’d like to tell you about one more result from the back end of my and Lijie’s paper.

If we can work relative to whatever oracle A we like, then it’s easy to get quantum supremacy, and indeed BPP^A ≠ BQP^A.  We can, for example, use Simon’s problem, or Shor’s period-finding problem, or Forrelation, or other choices of black-box problems that admit huge, provable quantum speedups.  In the unrelativized world, by contrast, it’s clear that we have to make some complexity assumption for quantum supremacy—even if we just want ExactSampBPP ≠ ExactSampBQP.  For if (say) P = P^#P, then ExactSampBPP and ExactSampBQP would collapse as well.

Lijie and I were wondering: what happens if we try to “interpolate” between the relativized and unrelativized worlds?  More specifically, what happens if our algorithms are allowed to query a black box, but we’re promised that whatever’s inside the black box is efficiently computable (i.e., has a small circuit)?  How hard is it to separate BPP from BQP, or ApproxSampBPP from ApproxSampBQP, relative to an oracle A that’s constrained to lie in P/poly?

Here, we’ll start with a beautiful observation that’s implicit in 2004 work by Servedio and Gortler, as well as 2012 work by Mark Zhandry.  In our formulation, this observation is as follows:

Theorem 5.  Suppose there exist cryptographic one-way functions (even just against classical adversaries).  Then there exists an oracle A∈P/poly such that BPP^A ≠ BQP^A.

While we still need to make a computational hardness assumption here, to separate quantum from classical computing, the surprise is that the assumption is so much weaker than what we’re used to.  We don’t need to assume the hardness of factoring or discrete log—or for that matter, of any “structured” problem that could be a basis for, e.g., public-key cryptography.  Just a one-way function that’s hard to invert, that’s all!

The intuition here is really simple.  Suppose there’s a one-way function; then it’s well-known, by the HILL and GGM Theorems of classical cryptography, that we can bootstrap it to get a cryptographic pseudorandom function family.  This is a family of polynomial-time computable functions f_s:{0,1}^n→{0,1}^n, parameterized by a secret seed s, such that f_s can’t be distinguished from a truly random function f by any polynomial-time algorithm that’s given oracle access to the function and that doesn’t know s.  Then, as our efficiently computable oracle A that separates quantum from classical computing, we take an ensemble of functions like

g_{s,r}(x) = f_s(x mod r),

where r is an exponentially large integer that serves as a “hidden period,” and s and r are both secrets stored by the oracle that are inaccessible to the algorithm that queries it.
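
To make this concrete, here’s a toy sketch in Python, with HMAC-SHA256 standing in for the pseudorandom function f_s—an illustrative substitution, since any PRF bootstrapped from a one-way function would do:

```python
import hashlib, hmac, secrets

N_BITS = 64

def f(s: bytes, x: int) -> int:
    """Toy PRF f_s: HMAC-SHA256 of x, truncated to N_BITS bits (a stand-in)."""
    digest = hmac.new(s, x.to_bytes(32, 'big'), hashlib.sha256).digest()
    return int.from_bytes(digest, 'big') >> (256 - N_BITS)

s = secrets.token_bytes(32)                         # secret seed, hidden in the oracle
r = secrets.randbits(N_BITS) | (1 << (N_BITS - 1))  # secret, exponentially large period
g = lambda x: f(s, x % r)                           # the oracle g_{s,r}(x) = f_s(x mod r)

# The periodicity Shor's algorithm would find; a classical algorithm that found
# it would either beat query lower bounds or distinguish f_s from random:
print(g(5) == g(5 + r) == g(5 + 2 * r))             # True
```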

The reasoning is now as follows: certainly there’s an efficient quantum algorithm to find r, or to solve some decision problem involving r, which we can use to define a language that’s in BQP^A but not in BPP^A.  That algorithm is just Shor’s period-finding algorithm!  (Technically, Shor’s algorithm needs certain assumptions on the starting function f_s to work—e.g., it couldn’t be a constant function—but if those assumptions aren’t satisfied, then f_s wasn’t pseudorandom anyway.)  On the other hand, suppose there were an efficient classical algorithm to find the period r.  In that case, we have a dilemma on our hands: would the classical algorithm still have worked, had we replaced f_s by a truly random function?  If so, then the classical algorithm would violate well-known lower bounds on the classical query complexity of period-finding.  But if not, then by working on pseudorandom functions but not on truly random functions, the algorithm would be distinguishing the two—so f_s wouldn’t have been a cryptographic pseudorandom function at all, contrary to assumption!

This all caused Lijie and me to wonder whether Theorem 5 could be strengthened even further, so that it wouldn’t use any complexity assumption at all.  In other words, why couldn’t we just prove unconditionally that there’s an oracle A∈P/poly such that BPP^A ≠ BQP^A?  By comparison, it’s not hard to see that we can unconditionally construct an oracle A∈P/poly such that P^A ≠ NP^A.

Alas, with the following theorem, we were able to explain why BPP vs. BQP (and even ApproxSampBPP vs. ApproxSampBQP) are different from P vs. NP in this respect, and why some computational assumption is still needed to separate quantum from classical, even if we’re working relative to an efficiently computable oracle.

Theorem 6 (A.-Chen).  Suppose that, in the real world, ApproxSampBPP = ApproxSampBQP and NP⊆BPP (granted, these are big assumptions!).  Then ApproxSampBPP^A = ApproxSampBQP^A for all oracles A∈P/poly.

Taking the contrapositive, this is saying that you can’t separate ApproxSampBPP from ApproxSampBQP relative to an efficiently computable oracle, without separating some complexity classes in the real world.  This contrasts not only with P vs. NP, but even with ExactSampBPP vs. ExactSampBQP, which can be separated unconditionally relative to efficiently computable oracles.

The proof of Theorem 6 is intuitive and appealing.  Not surprisingly, we’re going to heavily exploit the assumptions ApproxSampBPP = ApproxSampBQP and NP⊆BPP.  Let Q be a polynomial-time quantum algorithm that queries an oracle A∈P/poly.  Then we need to simulate Q—and in particular, sample close to the same probability distribution over outputs—using a polynomial-time classical algorithm that queries A.

Let

$$ \sum_{x,w} \alpha_{x,w} \left|x,w\right\rangle $$

be the state of Q immediately before its first query to the oracle A, where x is the input to be submitted to the oracle.  Then our first task is to get a bunch of samples from the probability distribution D = {|α_{x,w}|^2}_{x,w}, or something close to D in variation distance.  But this is easy to do, using the assumption ApproxSampBPP = ApproxSampBQP.

Let x_1,…,x_k be our samples from D, marginalized to the x part.  Then next, our classical algorithm queries A on each of x_1,…,x_k, getting responses A(x_1),…,A(x_k).  The next step is to search for a function f∈P/poly—or more specifically, a function of whatever fixed polynomial size is relevant—that agrees with A on the sample data, i.e. such that f(x_i)=A(x_i) for all i∈[k].  This is where we’ll use the assumption NP⊆BPP (together, of course, with the fact that at least one such f exists, namely A itself!), to make the task of finding f efficient.  We’ll also appeal to a fundamental fact about the sample complexity of PAC-learning: if we find a polynomial-size circuit f that agrees with A on a bunch of sample points drawn independently from a distribution, then f will probably agree with A on most further points drawn from the same distribution as well.
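
Here’s a toy, fully classical illustration of that PAC/Occam fact, with threshold functions on {0,…,N-1} standing in for the polynomial-size circuits (in Theorem 6, the NP⊆BPP assumption is what makes the “find any consistent hypothesis” step efficient):

```python
import random

random.seed(1)
N, k = 10 ** 6, 200
t_secret = random.randrange(N)
A = lambda x: int(x >= t_secret)                 # stands in for the true oracle

samples = [random.randrange(N) for _ in range(k)]
labels = [(x, A(x)) for x in samples]

# Pick *any* threshold hypothesis consistent with the labeled samples
t_hat = max((x + 1 for x, y in labels if y == 0), default=0)
f = lambda x: int(x >= t_hat)

# With high probability, f agrees with A on most fresh points from the same
# distribution, even though it was fit to only k samples
fresh = [random.randrange(N) for _ in range(10 ** 4)]
print(sum(f(x) == A(x) for x in fresh) / len(fresh))
```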

So, OK, we then have a pretty good “mock oracle,” f, that we can substitute for the real oracle on the first query that Q makes.  Of course f and A won’t perfectly agree, but the small fraction of disagreements won’t matter much, again because of the linearity of quantum mechanics (i.e., the same thing that prevents us from speeding up Grover’s algorithm arbitrarily).  So we can basically simulate Q’s first query, and now our classical simulation is good to go until Q’s second query!  But now you can see where this is going: we iterate the same approach, and reuse the same assumptions ApproxSampBPP = ApproxSampBQP and NP⊆BPP, to find a new “mock oracle” that lets us simulate Q’s second query, and so on until all of Q’s queries have been simulated.

OK, I’ll stop there.  I don’t have a clever conclusion or anything.  Thank you.

81 Responses to “GapP, Oracles, and Quantum Supremacy”

  1. Gil Kalai Says:

    Very nice! I do not see good reasons to conjecture that the approximation problem for permanents you mentioned is #P-complete (or even that it is NP-hard, but maybe I miss something). But it looks like all you need is that the problem is not in PH, which is more plausible. (The conjecture resembles a conjecture about NP-hardness of random SAT formulas, for the critical probability, which also I don’t know what people think about but seems implausible to me.)

  2. Scott Says:

    Gil #1: That’s correct; all we need is that Gaussian permanent estimation is not in BPP^NP. So if you believed that it had some intermediate status, not in PH but not #P-hard either, that would suffice for us.

    And also, if GPE were to fall for reasons specific to the permanent, one could simply ask the same question about the problem studied by Bremner, Shepherd, and Montanaro, and other problems for which #P-hardness of average-case approximation is known to imply strong quantum supremacy.

  3. Gil Kalai Says:

    Yes, I agree. Regarding the permanent approximation question I wonder if you think it is NP-hard. This is much weaker than #P-complete but I cannot imagine even a reduction showing this and I would even guess it is not NP-hard. As I said, maybe I miss something obvious here which would be a little fadiha. (Fadiha (פדיחה) is a Hebrew word that I try to teach my blog readership 🙂 . It can be useful for proof ending.)

  4. Atreat Says:

    Thanks for the video on CTCs and implications for computability theory! I was much more ambivalent about whether the universe allows CTCs before this talk, but after watching it I’m left with the feeling that real physical law must rule them out somehow.

    What this has left me wondering is why these questions are not considered as grave for quantum gravity as the firewall problem? It seems to me that GR allows CTCs agnostically, but the actual existence of them would violate conservation of energy if information is physical. How can the universe allow Shakespeare’s output to come into existence without any actual work being performed, without violating conservation of energy?

  5. Florian Cassayre Says:

    Interesting article. I accidentally unplugged my ethernet RJ45 cable (I stepped on it, what a shame!) right after loading the webpage of your tool http://haspvsnpbeensolved.com. I’m very curious to know how it manages to be so accurate without even reading the paper! Perhaps it’s using some probabilistic model based on the given URL. This is fascinating!

    https://i.cassayre.me/20170901174013.png

  6. Noah Stephens-Davidowitz Says:

    #P should be GapP in both spots here: “#P function is positive or negative—it’s not hard to see that that’s already a #P-hard problem.”

  7. Michael Says:

    Well, the web site could also respond to Enter in the text box (not only to click). Too bad checking if URL exists would require additional effort.

  8. Scott Says:

    Gil #3: I don’t know! All I can tell you is that Alex and I didn’t try any approach to proving NP-hardness, that wouldn’t have actually yielded #P-hardness had it worked. But maybe we should have, and maybe that’s a good direction for someone else to try.

  9. Scott Says:

    Atreat #4: I think most physicists—among those with any view at all!—just take the attitude that a quantum theory of gravity should rule out CTCs, and our not understanding the details of that is explained by our not having a quantum theory of gravity, and that’s that. There’s not really a conflict between accepted principles analogous to what there was with the firewall paradox.

    It’s true that classical GR allows CTC solutions—but even there, if you want to create a CTC in a spacetime that initially lacks it, you seem to need matter with negative energy density, so then you just need to accept that physics should rule the latter out (it would have all sorts of other bizarre consequences as well). In quantum gravity—well, it depends on which proposal you’re working with, but from chatting about it with experts, there seems to be a realistic prospect that we could at least more-or-less understand this issue within AdS/CFT. E.g. that someone could explain why, if there are CTCs in the AdS bulk, then the CFT boundary theory must necessarily be “unphysical” in some way. From what little I understand, this seems to me like a great direction—since if a quantum gravity theory can’t even rule out CTCs, then what does it do for us?

  10. Scott Says:

    Noah #6: Thanks!! Fixed the first one; for the second, I really did mean “#P-hard.”

  11. Aula Says:

    First, the website haspvsnpbeensolved.com is now live!

    Does it even bother to download anything from the input URL? 🙂

    a fast classical factoring algorithm would “merely” collapse the world’s electronic commerce

    Not very relevant, but possibly interesting anyway: if integer factoring is classically hard, then it must be strictly harder than breaking the cryptography based on it. The gist of the proof is that if there were a reduction algorithm demonstrating equal hardness, then the reduction algorithm by itself would be sufficient to factor integers (no need to actually have an RSA-breaking oracle, you could just fake it).

    Be that as it may, staring at the chain of implications above, a natural question is which equalities in the chain collapse the polynomial hierarchy in a relativizing, and which equalities collapse PH (if they do) only for deeper, non-relativizing reasons.

    Something seems to be missing from that sentence.

    I don’t a clever conclusion or anything.

    You have to have a “have” there.

  12. Atreat Says:

    Scott #9,

    I’m not sure I understand. The firewall pits the principles of QM and GR against each other and says we have to give up something precious if we want to resolve the problem: accept information loss, abandon the equivalence principle, or say QM is wrong. Likewise, CTC’s seemingly pit some version(s) of Church-Turing against GR, but I guess you’re saying everyone is just happier to wave hands and say quantum gravity will fix it. Interestingly, your results indicate that non-quantum computers are just as deadly to Church-Turing as QM ones inside a CTC. Makes me wonder why people have faith that a QM gravity will somehow resolve the problems of CTC’s. OTOH, if AdS/CFT can somehow rule out CTC’s in the way you describe, perhaps some real QM gravity will do so for our universe. Would give more confidence at least.

    Try my hand at placebomancy here and say that in the future some enterprising computer scientist will start from Church-Turing a priori and derive qm gravity as somehow emergent from constraints of computability. Somehow gravity itself will follow from the universe imposing computability with black holes and the cosmological constant arising as a nature’s “answer” to uncomputable questions 🙂 ie, some external observable of black hole physics will be a function of the kolmogorov complexity of the information it contains or something.

    What a beautiful story that would be… come on universe, go for it! 🙂

  13. asdf Says:

    http://news.sky.com/story/scientists-offer-1m-prize-to-solve-simple-chess-puzzle-11015899

    This seems to be a garbled report of a finding that the generalized 8 queens problem is NP-hard. It says there’s a $1M prize for solving it etc. I.e. without being strictly wrong anywhere as far as I can tell, it gives the impression that a Millennium Prize is being offered for solving a chess puzzle.

  14. Scott Says:

    Aula #11:

      if integer factoring is classically hard, then it must be strictly harder than breaking the cryptography based on it. The gist of the proof is that if there were a reduction algorithm demonstrating equal hardness, then the reduction algorithm by itself would be sufficient to factor integers (no need to actually have an RSA-breaking oracle, you could just fake it).

    I’ve never seen any result to that effect. Do you have a reference? As far as I knew, the equivalence of factoring and inverting the RSA function is a longstanding open problem.

  15. Scott Says:

    Atreat #12: I think the difference is that the principles of GR don’t imply the possibility of creating CTCs of a kind we could actually observe. You only get that from GR plus some additional ingredients (like negative-energy matter or a really bizarre cosmology).

    Also, I don’t see how you could possibly derive quantum gravity from the Church-Turing Thesis alone, because there are so many possible universes that satisfy Church-Turing but that have neither quantum mechanics nor gravity (to take one example: Conway’s Game of Life).

  16. wolfgang Says:

    @Scott

    >> You only get that from GR plus some additional ingredients

    The Kerr metric is a solution to the vacuum equation(s) and contains CTC in its interior.

  17. Scott Says:

    wolfgang #16: But isn’t the CTC in the Kerr metric hidden behind an event horizon? All sorts of shenanigans can happen behind the horizon, but, y’know, whatever happens there stays there…

  18. Or Meir Says:

    Hi Scott,
    Great post! I really like the proof of Theorem 6.

    How severe do you think the limitation A \in P/poly is?

  19. wolfgang Says:

    @Scott

    Yes it is “hidden” behind an event horizon and perhaps not stable in realistic scenarios (e.g. if the rotating black hole was created from some rotating matter).
    But “pure” GR does imply the possibility of CTCs and as far as I know it is an open problem if CTCs are always hidden behind a horizon and/or necessarily unstable.

  20. Atreat Says:

    Scott #15, LOL, no I guess it won’t derive from just that so my inner crackpot amends it to that + thermodynamics + geometry and waves his hands 😉

    Anyway, in what way does Conway’s Game of Life (CGL) “satisfy” Church-Turing? When saying, “constraints imposed by Church-Turing” I’m just saying the universal laws allow no easily accessible oracles or hypercomputers. Has it been *proven* that the rules of CGL allow no oracles? To be sure, I wouldn’t think CGL would allow oracles, but I wonder how this is proven.

    Anyway, I guess many sets of universal law would constrain oracles, but my inner crackpot is a romantic and likes the idea that our universe is capable of answering uncomputable questions, but only via black holes. I like the idea that in 1000 years, our distant descendants will still be physically incapable of building a hypercomputer just as they are incapable of traveling at light speed, but if they wish to know the Kolmogorov complexity of some string or whether a given program halts, they could fly a spaceship near a black hole and throw in some matter that encodes the string and then wait a long time for some physical observable output of the black hole that is directly related to the Kolmogorov complexity of the incoming matter. That would be sweet.

  21. Scott Says:

    Or #18: Thanks! I do think A∈P/poly is a pretty serious limitation: for example, relative to this sort of oracle, you can never make PH infinite unless it’s already infinite in the real world (exercise for you). On the other hand, many lower-down things can still be separated, often using cryptographic constructions. In the appendix of my and Lijie’s paper, we (or mostly Lijie, really 🙂 ) try to begin a wider investigation of P/poly oracles—showing, for example, that P=BPP relative to all oracles of this kind, if and only if E requires exponential-size circuits.

  22. Scott Says:

    Atreat #20: That the rules of Life “allow no oracles” follows immediately from the fact that Life can be simulated on a Turing machine. The more interesting direction is the opposite one, that Life can simulate any Turing machine. But that was shown as well, I believe by Conway himself in 1982.

  23. Atreat Says:

    Ok, sure. Any universe I can simulate on a computer is obviously computable. So this is just a tautology. I guess I was struck by your comment in the video of how it might turn out that gravity has the last laugh on Penrose by constraining uncomputability rather than the reverse.

  24. Bram Cohen Says:

    Coming at it with only classical intuitions, it seems like the important thing for making a sum hard to classically sample is that the expected number of non-cancelling points a subexponential number of samples should hit is less than one. Which means that the number of such things can be at most two minus epsilon. That seems like a lot of room.

    The sum-of-everything explanation of what quantum computing can do makes a lot more sense to me than everything else I’ve seen, which my brain refuses to parse as anything more than gobbledygook. Is it fair to say that all quantum speedups, or at least all superpolynomial quantum speedups, can be characterized as speeded up summation over large numbers of outputs?

  25. asdf Says:

    Scott, is a “generic oracle” something different from a random oracle? That is, take the reals, subtract some set of measure 0, and a generic oracle is any of the remaining ones.

  26. Scott Says:

    Bram #24:

      Is it fair to say that all quantum speedups, or at least all superpolynomial quantum speedups, can be characterized as speeded up summation over large numbers of outputs?

    At a super high level, sure! That’s just a translation into English of the statement that BQP ⊆ P^#P.

    On the other hand, P^#P is just a crude upper bound on BQP, not a characterization of it. Or, translated: not all summations over large numbers of inputs can be speeded up this way, but only the subset that you can actually realize using unitary transformations (with the amount of speedup given by the efficiency of realizing the unitary, as a product of 1- and 2-qubit gates).

  27. Scott Says:

    asdf #25: Yeah, generic oracles are different from random oracles. Generic oracles are closely related to generic sets, which Cohen introduced to prove the independence of the axiom of choice and continuum hypothesis, whereas random oracles are something that I understand. This seems like the best reference for comparing the two, but I don’t know if there’s a non-paywalled version to link to.

  28. Bjørn Kjos-Hanssen Says:

    Generic oracles are to random oracles as Baire category is to probability.

  29. Scott Says:

    Bjørn #28: OK, thanks! Maybe you or someone else who understands this topic could take a stab at explaining it to us, without using terms like “Baire category”?

  30. Bjørn Kjos-Hanssen Says:

    Ah, maybe this instead: Generic oracles are to random oracles as open dense sets of oracles are to high-probability sets of oracles. See also
    https://mathoverflow.net/a/43480/4600
    Really, though, a generic infinite binary sequence is like this: a bunch of bits, followed by 100 zeros, followed by 10^500 ones, followed by a patternless string of length 10^5000, followed by “01” repeated 10^10^10 times and so on.
    Basically, anything that can be ensured using finite action does happen, again and again, but anything requiring discipline (such as the law of large numbers) fails to happen.

  31. Daniel Says:

    So we know that (i) computing exactly the permanent for i.i.d. Gaussians is #P-hard, and (ii) approximating the permanents of arbitrary matrices is #P-hard, but it was necessary to conjecture that (iii) it is #P-hard to approximate Gaussian permanents. How strong an evidence are (i) and (ii) for (iii)? Are there problems for which (i) and (ii) hold but (iii) is easy (maybe easy in the sense that it is “merely” in PH)? Or are (i) and (ii) simply sanity checks for (iii)?

    The point of the question is that, although I know there is no consensus on whether it should be true or not, I personally don’t know enough to even try to have an intuition.

  32. vzn Says:

    tried out this following site url on your haspvsnpbeensolved site, it returned negative, so seems like test case was correct/ validated! congratulations on that!

    https://www.scottaaronson.com/

  33. Aula Says:

    Scott #14:

    Well, the problem of whether breaking RSA is equivalent to factoring (or more generally, whether breaking a cryptosystem is equivalent to solving the underlying hard problem) is a subtle one, because you need to be explicit about just what kind of attacks are allowed in the breaking process. There is a paper from Eurocrypt 2009 by Aggarwal and Maurer titled “Breaking RSA Generically is Equivalent to Factoring” which is not the least bit surprising, because generic attacks are very weak; there are proposed cryptosystems that are provably secure against generic attacks but can be trivially broken by non-generic attacks. However, if you were to prove that factoring is equivalent to breaking RSA using some stronger kinds of attacks (you know, the kinds that actual real-world attackers might be using), you would just break RSA. The result I referred to earlier is Daniel Brown, “What Hashes Make RSA-OAEP Secure?”, and these two sci.crypt posts about it are well worth reading.

  34. Shmi Says:

    Classical GR rules out creating CTCs by the uniqueness of the metric, since you cannot have two different metrics at the same spacetime point. The best one can do is push the Cauchy development to the Cauchy horizon (see Amos Ori’s “time machine” solution, for example: https://arxiv.org/abs/gr-qc/0503077). This also precludes “entering” and “exiting” a CTC, of course. Anything inside a CTC is there forever, and always has been. I cannot speak to QFT on a CTC spacetime background, or the hypothetical self-consistent theory of quantum gravity. The latter would probably be all information- and entropy-based, so all bets are off.

  35. Scott Says:

    Shmi #34: No, you can have CTCs in classical GR; you just need some weird global way to ensure “causal consistency” (i.e., each point in the CTC having a unique metric)—which requirement, indeed, is exactly the source of the vast computational power in situations like the one in my talk. Yes, this might be impossible in our world, but not because of any local violation of Einstein’s field equation.

  36. wolfgang Says:

    @Shmi

    As a very simple example consider flat Minkowski spacetime with coordinates t,x,y,z in a particular frame.
    Now identify the points t=+T and t=-T (i.e. roll up spacetime into a cylinder in t-direction).

    The result is a flat spacetime, i.e. a vacuum solution of GR, and every world line is now a CTC.
    But obviously it does not describe the world we live in.

    As for time machines, there is a famous result of Hawking (Phys Rev D, 1992), which shows that in classical GR there cannot be a time machine of *finite* size, unless there is either one of the following:
    i) a horizon (so the CTCs are “hidden”)
    ii) a violation of the weak energy condition
    iii) a (naked) singularity.

    Unfortunately, there is no proof of cosmic censorship to rule out iii) and indeed several more or less realistic counter examples (e.g. Bonnor 2002).

    Last but not least, there is a paper by Clifford Johnson about an exact string theory model with CTCs (links at my blog).

  37. Shmi Says:

    Scott #35: Yes, you can have eternal CTCs in classical GR, of course; the rotating black hole, the van Stockum/Tipler cylinder and the rotating Gödel universe are some of the better known examples. You just can’t CREATE them, or visit them. So, no time machines in the conventional sense, where one can be used to travel back in time.

  38. Scott Says:

    Bjørn #30: Thanks, that was helpful! If I still have you here, then maybe I can take advantage to ask:

    So, from your comment as well as reading the papers, a “generic oracle” basically just means that you enumerate over all possible arithmetically-definable ways of extending a finite prefix of an oracle to a longer finite prefix of an oracle, and whenever you find such a way, you apply it once?

    If so, then a couple questions:

    – Is it clear that any collapse or separation that holds relative to one generic oracle, must also hold relative to all the others? If so, why?

    – What are the most basic things that happen in the generic world? E.g., is P≠NP relative to a generic oracle? If so, how do we see this? What about PH being infinite?

    – What’s an example of something that’s different for generic oracles than for random oracles?

  39. Bjørn Kjos-Hanssen Says:

    Scott #38:
    You’re describing an arithmetically generic oracle, which indeed is often simply called a “generic oracle”. As for your questions,

    (i) (Is it clear that any collapse or separation that holds relative to one generic oracle, must also hold relative to all the others?): Yes, pretty much.
    If someone proves a separation or collapse result (let’s call it “PNP”) relative to one generic oracle, then it’s almost guaranteed that they do so by proving that PNP holds for all oracles that belong to certain open dense sets U_1,U_2,…
    If those sets happen to be arithmetically definable then PNP has been proved for all arithmetically generic oracles. Great.

    Even if they don’t, PNP has been proved for all “F-generic” oracles where F={U_1,U_2,…} and hence in particular for all sufficiently generic oracles. (As long as F is countable such oracles exist and are plentiful — that’s the “Baire category theorem”.)

    (ii) (Is P≠NP relative to a generic oracle?): Yes, so I’ve read, and I’m thinking the reason is that a generic oracle G may contain hidden “gems” (1s in a sea of 0s) that are easy to verify but hard to find. So the proof should be similar to, and actually easier than, the proof that P≠NP relative to a random oracle. Also I read somewhere that PH is infinite in both the random and generic cases, which I’m hoping comes from elaboration of the same idea.

    (iii) (What’s an example of something that’s different for generic oracles than for random oracles?): I don’t know about separation/collapse results of that kind, but… a random oracle can be used to compute a function violating the Recursion Theorem (a fixed-point free function), and a generic oracle cannot.

  40. Scott Says:

    Bjørn #39: Thanks again!!

    One more question: are these generic oracles, considered as sets of integers, exactly the objects that Cohen forced into models of ZF in order to make CH and AC false? If not, then what are the main ways the latter differed?

  41. wolfgang Says:

    @Shmi

    >> you just can’t CREATE them

    I am not exactly sure what you mean by “create” but if you think about a spacetime with an (initial) hypersurface free of CTCs, with a region containing CTCs later, then Amos Ori provides an example in the paper you cited. There are other examples as well …

    >> or visit them

    An observer jumping into a rotating black hole can in principle visit the CTC around the ring singularity – assuming that the extended Kerr metric is stable e.g. against quantum effects (a big if).
    This is just one example, there is enough literature with examples which you may or may not find more convincing …

  42. Bjørn Kjos-Hanssen Says:

    Scott #40: the same except there F was the set of all open dense sets (of infinite binary sequences) that belong to a given countable model M of ZF, making G more generic than complexity theorists need. And then the extension M[G] was the place where CH failed.

    But sometimes we need a different ‘notion of forcing’ and actually random oracles can be viewed as generic oracles for a different notion of forcing.

  43. Aula Says:

    Scott #38:

    is P≠NP relative to a generic oracle?

    Yes, Cook mentions this in his introduction of PvsNP as one of the Millennium Prize Problems (so I’m kinda surprised that you didn’t know that already).

    Scott #40:
    Uh, what? I don’t know how “Cohen forced objects into models of ZF” is supposed to be related to anything he actually did; as far as I understand, that’s not at all what forcing means.

  44. Scott Says:

    Aula #43: Well, given that I only just now learned the definition of a generic oracle, I guess I was looking for a couple sentences’ intuition for why they separate P and NP, or other pairs of complexity classes that are easy to separate (e.g.) with random oracles. Do you understand that? Would you like to explain it?

    Fine, maybe Cohen isn’t exactly “forcing” objects into models of ZF, but my understanding was that he was “forcing” certain statements to be true by ensuring that a countable infinity of conditions all hold, closely analogous to (but more complicated than) what we do when we construct oracles that put NEXP in P/poly or whatever. Am I wrong?

  45. jonas Says:

    Scott #38: see also https://mathoverflow.net/a/176498/5340 which tells about the same as #30. That tells only about constructing generic functions, not about using them as an oracle for any computation.

  46. Hanan Cohen Says:

    I tried checking http://haspvsnpbeensolved.com/ in http://haspvsnpbeensolved.com/ and got a response that it DOES point to a solution to the P vs. NP problem. (-;

  47. Serge Says:

    Nice checking algorithm, though I can’t figure out the hidden message in it. Have you changed your mind and now believe that P!=NP is undecidable, or don’t you mind that it might eventually reject a valid proof?

  48. Aula Says:

    Scott #44:

    Well, given that I only just now learned the definition of a generic oracle, I guess I was looking for a couple sentences’ intuition for why they separate P and NP, or other pairs of complexity classes that are easy to separate (e.g.) with random oracles.

    If you want that kind of intuition, then I think you should be asking something like “what do generic oracles have in common with these other types of oracles?” (I have no idea.) Instead you asked if generic oracles are exactly (emphasis yours) like something that has no obvious reason to have anything whatsoever to do with them.

    Fine, maybe Cohen isn’t exactly “forcing” objects into models of ZF, but my understanding was that he was “forcing” certain statements to be true by ensuring that a countable infinity of conditions all hold, closely analogous to (but more complicated than) what we do when we construct oracles that put NEXP in P/poly or whatever. Am I wrong?

    I don’t know if you are wrong about the “closely analogous” part, because I know nothing about how you would construct an oracle so that complexity classes relative to that oracle have some desired properties. Other than that, I think the answer is “it depends.” For one thing, you mentioned earlier “subsets of integers” which is probably true in some technical sense (since after all we are dealing with a countable model of ZFC) but most likely merely misleading if you want to understand how forcing works.

  49. Joshua Zelinsky Says:

    If I’m reading http://people.cs.uchicago.edu/~fortnow/papers/gensep.pdf correctly it does seem that Lance Fortnow at least when thinking about generic oracles is thinking about them as closely analogous to forcing.

  50. asdf Says:

    Well I guess this is related to possible quantum supremacy:

    https://hardware.slashdot.org/story/17/09/06/1426201/researchers-find-new-way-to-build-quantum-computers

    No idea if it’s important.

  51. asdf Says:

    If we view an oracle as a real number (i.e. infinite string of 1’s and 0’s) in the unit interval, there are continuum many of them. So is saying “X is true relative to a generic oracle” the same as saying “the set of oracles for which X is false has measure 0 on the interval”?

  52. Scott Says:

    asdf #51: No, that’s a random oracle (at least with the usual notion of “measure”). Generic oracle means something different and more complicated, which is exactly why we’ve been having this discussion.
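
    My understanding of the precise statements (someone correct me if I botch this): “X holds relative to a random oracle” means the set of oracles making X true has measure 1,

        \[ \mu\{A : X^A \text{ holds}\} = 1, \]

    under the uniform measure on infinite 0/1 strings, whereas “X holds relative to a generic oracle” is a Baire-category statement: the oracles making X true should contain a comeager set, i.e., one including a countable intersection of dense open sets. Measure-1 and comeager are incomparable properties; a set can have either one without the other.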

  53. fred Says:

    Lots of interesting news lately.
    Flip-flop qubits :
    https://www.sciencedaily.com/releases/2017/09/170906103507.htm

  54. a Says:

    Any comments on DACA and the coming immigration chop?

  55. Raoul Ohio Says:

    a #54:

    It is hard to know if any of these will actually happen.

    Trump appears to tweet whatever thought has popped into his head, or whatever someone has just told him. So no one knows what is a brainfart, or what will be followed through on. Reality often intrudes.

    The former Marine General that he hired has done a decent job of keeping the more crazy people away from Trump, and he appears to have gotten him to only tweet about 1/4 as much.

    Not obvious if this is a good thing in the long run. On the one hand, it will lessen the chance of Trump starting a war for the heck of it, on the other hand it might prolong the period until Trump takes his ball and goes home.

  56. fred Says:

    Another interesting piece of news for quantum computing:

    https://spectrum.ieee.org/nanoclast/semiconductors/materials/majorana-particles-grab-the-limelight-and-remain-center-stage?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+IeeeSpectrum+%28IEEE+Spectrum%29

  57. Scott Says:

    a #54: I think they’re both absolutely terrible, if they actually happen. And I’m depressed that it now falls to the US Congress, one of the most dysfunctional institutions on the planet, to resist destructive policies motivated by xenophobia and white nationalism.

  58. a Says:

    Game-theoretically speaking, the chances of deportation are pretty high, I think. What do you think?

  59. Raoul Ohio Says:

    a #58:

    My guess is that the chances are not so high. The Trump agenda changes minute by minute. While this is somewhat of a central issue, the friction inherent in a thousand lunatic proposals grinding up against reality is likely to result in a big pile of nothing happening.

    The mental picture I have of the Trump administration is like a tractor pull. In these events, tractors pull a contraption that progressively repositions its mass from over the wheels to over a skid, making it harder to pull, until things grind to a halt. (The tractor that pulls the farthest is the winner.) So I think that every wacky new thing is adding mass over the skids, and before long nothing will move.

    Of course, it is hard to predict the future. For example, Trump got elected.

  60. Joshua Zelinsky Says:

    “Theorem 5. Suppose there exist cryptographic one-way functions (even just against classical adversaries). Then there exists an oracle A∈P/poly such that BPP^A≠BQP^A.”

    Your work with Lijie Chen looked at weakening the one-way function assumption, and showed that that’s essentially not doable without proving serious class separations. Is there any hope of instead strengthening the conclusion, to pin down where the oracle A lives (possibly by strengthening the one-way function assumption)?

  61. fred Says:

    Those attempts to prove Quantum Supremacy are really fascinating (and way over my head), but I do get the feeling that the entire endeavor is like trying to “nail down” the exact position of a fractal shape.

  62. sf Says:

    Off-topic (again) but it was mentioned recently here…

    https://www.the-tls.co.uk/articles/public/voynich-manuscript-solution/

  63. Scott Says:

    sf #62: Apparently the experts are profoundly skeptical, and are surprised that TLS even published this. In particular, the author provides supposed decodings for only two lines of the entire manuscript, and those decodings seem unconvincing (they’re not grammatical Latin); he postulates a missing “index” to the manuscript that there’s no evidence for; and he didn’t consult any experts to see whether the connection between the illustrations and women’s health was already known (it was).

    I myself might have thought, “well, if TLS published this, they must have fact-checked it, run it by a few experts, etc.,” were it not for my experience with quantum computing! 🙂

  64. Scott Says:

    fred #61: You mean, in the sense that they’re both complicated, or both nontrivial? 🙂

  65. Scott Says:

    Joshua #60: Sure, you could do that. As one example, if you make a strong enough cryptographic assumption (e.g., about the hardness of lattice problems, or Learning with Errors), I believe you could strengthen the conclusion to get that the oracle lies in TC0.

    (Except for the one detail that we need to take g(x mod r) rather than just g(x), this is just taking off-the-shelf results from the crypto literature about building pseudorandom functions in small complexity classes.)
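
    If it helps, here’s a toy sketch of the shape of that oracle, with keyed SHA-256 standing in for the pseudorandom function g (my illustration only, with made-up parameters; a hash function is of course not a proof of pseudorandomness):

        import hashlib

        SECRET_KEY = b"prf seed"  # the PRF seed, hidden from the distinguisher
        R = 1000003               # the secret period r (made up for illustration)

        def g(m: int) -> int:
            # Stand-in pseudorandom function: one output bit of keyed SHA-256.
            digest = hashlib.sha256(SECRET_KEY + m.to_bytes(8, "big")).digest()
            return digest[0] & 1

        def oracle(x: int) -> int:
            # The oracle A(x) = g(x mod r): it's r-periodic, so quantum
            # period-finding recovers r, while to any classical polynomial-time
            # distinguisher A is supposed to look like a random function.
            return g(x % R)

        # Sanity check: A really is periodic with period R.
        assert all(oracle(x) == oracle(x + R) for x in range(1000))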

  66. sf Says:

    Scott #63
    Mea culpa, I hadn’t checked for rebuttals, given the reputation of TLS, and thinking that it was too recent anyway. Thanks for the link. It sounds like Gibbs rushed to publish, perhaps nervous about priority. Also quite amazing how quickly the coordinated reaction came together on the web.

  67. Douglas Knight Says:

    Also, the author doesn’t exist. Nonexistent authors have done good work in other fields, but they don’t usually go on at such length about their lives.

  68. Douglas Knight Says:

    Actually, Scott’s link mentions that, although it hardly stresses it. On the other hand, the author gave an interview to TLS.

  69. Aula Says:

    Dick Lipton reports that there is a polynomial-time algorithm for the Asymmetric Traveling Salesman Problem that is guaranteed to return a tour whose length is at most a constant times the optimal length. The only slight problem is that the constant has an upper bound of 5500, which probably isn’t all that useful for most practical purposes.

  70. fred Says:

    Scott #64:

    haha, no, more in the sense of how you once characterized the P vs. NP “boundary” as a sort of impenetrable fence: we know it separates two domains, but somehow the boundary itself seems out of reach. We can get closer and closer to it (by making the complexity hierarchy richer and richer) but never touch it, a bit like a fractal.
    Quantum Supremacy seems to share a bit of that flavor. But I guess in that case the proof will be in the pudding: once QCs become a reality, classical algorithms will fall out of favor?

  71. fred Says:

    The “Dark Side” of QC

    https://www.sciencedaily.com/releases/2017/09/170913192957.htm

  72. fred Says:

    Reversible Computing Supremacy?

    https://spectrum.ieee.org/computing/hardware/the-future-of-computing-depends-on-making-it-reversible

  73. AJ Says:

    If EXP is in BPP then does BQP = BPP hold?

  74. Scott Says:

    AJ #73: Yes. BQP⊆EXP holds unconditionally, so EXP⊆BPP would give BQP⊆BPP; and since BPP⊆BQP always, we’d get BQP=BPP.

  75. AJ Says:

    1. If CH=PP (the counting hierarchy collapses to PP), then is GapP=#P?

    2. Do you have faith that CH=PP and/or GapP=#P holds?

  76. Scott Says:

    AJ #75:

    1. No, GapP≠#P unconditionally, simply because #P is restricted to nonnegative functions whereas GapP isn’t.

    2. I have no faith that CH=PP. I’d guess they’re not equal, but not with any conviction.
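
    For reference, the definitions that make point 1 immediate:

        \[ \#\mathrm{P} = \{ f \mid f(x) = \text{the number of accepting paths of some poly-time NTM on input } x \}, \]
        \[ \mathrm{GapP} = \{ f - g \mid f, g \in \#\mathrm{P} \}, \]

    i.e., GapP consists of differences of #P functions (equivalently, accepting minus rejecting paths). Every #P function is nonnegative, while GapP contains (say) the constant −1 function, so the two classes differ regardless of what collapses elsewhere.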

  77. AJ Says:

    OK, but if CH=PP, would it then suffice to show P^#P=BPP^NP, or is no such collapse known?

  78. Scott Says:

    AJ #77: Suffice for what?

  79. AJ Says:

    In your work you assume that P^#P=BPP^NP is impossible. My query is whether CH=PP would be strong enough to violate that assumption.

    I am looking for minimal conditions that would violate your result; I just want to see how it fits together with classical complexity.

  80. Scott Says:

    AJ #79: CH=PP would be a spectacular collapse, but I don’t know of any result suggesting that it would lead to P^#P=BPP^NP, or otherwise kick out the legs from under our BosonSampling hardness argument.
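
    Just to spell out the shape of the hardness argument, roughly: if a classical algorithm could exactly simulate the quantum sampling, then by Stockmeyer approximate counting, a BPP^NP machine could estimate the output probabilities, which for BosonSampling are proportional to squared permanents, quantities whose estimation at the relevant accuracy is #P-hard. So we’d get

        \[ \mathrm{P}^{\#\mathrm{P}} \subseteq \mathrm{BPP}^{\mathrm{NP}} \subseteq \Sigma_3^{\mathrm{p}}, \]

    which, combined with Toda’s theorem PH⊆P^#P, collapses the polynomial hierarchy to the third level. Note that CH=PP enters nowhere in that chain.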

  81. AJ Says:

    Interesting that the BosonSampling result can’t be taken apart easily by classical means. It’s slightly unbelievable, given that both GapP and #P are in FP^PP=FP^#P, but I’ll take your word for it: we can approximate #P in PH with no collapse of CH, while even with the entire CH reduced to rubble, GapP can’t be approximated. I guess a difference of approximations of the needed functions doesn’t give an approximation of the difference of the needed functions. It looks like a problem with some kind of hidden hook.