One down

Last summer I posed Ten Semi-Grand Challenges for Quantum Computing Theory. Today I’m pleased to report that (part of) one of my challenges has been “solved” — where, as always in this business, the word “solved” is defined broadly so as to include “proven to be not really worth working on, since a solution to it would imply a solution to something else that most of us gave up on years ago.”

Challenge 10 involved finding a polynomial-time quantum algorithm to PAC-learn neural networks (that is, the class TC0 of polynomial-size, constant-depth threshold circuits). In a new ECCC preprint, Adam Klivans and Alex Sherstov show that, if there’s a fast quantum algorithm to learn even depth-2 neural nets, then there’s also a fast quantum algorithm for the ~n^1.5-approximate shortest vector problem. Embarrassingly for me, once you have the idea — to use Oded Regev’s lattice-based public key cryptosystems — the quantum hardness of learning (say) depth 4 or 5 neural nets is immediate, while getting down to depth 2 takes another page. This is one of those results that hangs in the wonderful balance between “you could’ve thought of that” and “nyah nyah, you didn’t.”

Feel free to post your own challenges in the comments section. But please, no “spouter challenges” like “where does the power of quantum computing come from?” or “is there a deeper theoretical framework for quantum algorithms?” In general, if you’re going to pose a scientific challenge, you should (1) indicate some technical problem whose solution would clearly represent progress, and (2) be willing to place at least 25% odds on such progress being made within five years. Or if you’re not a gambler, pick technical problems that you yourself intend to solve — that’s the approach I took with Semi-Grand Challenges 4 and 7.

Theoretical computer science is often disheartening: there are so many open problems, and a week later they’re all still open, and a week after that, they’re all still open. Wait a year, though, or five years, or twenty, and some grad student will have had the insight that’s eluded everyone else: that the problem can’t be solved with any existing technique, unless Blum integers are factorable in 2^(n^ε) time for all ε>0.

28 Responses to “One down”

  1. Dave Bacon Says:

    proven to be not really worth working on, since a solution to it would imply a solution to something else that most of us gave up on years ago.

    Crap, why didn’t anyone tell me this earlier ;)

  2. Anonymous Says:

    proven to be not really worth working on, since a solution to it would imply a solution to something else that most of us gave up on years ago.

    I.e., not solved at all, but an interesting connection has been made.

  3. Bram Cohen Says:

Perhaps someone could generalize this result to any pair of stronger and weaker axiomatic systems. That is, for any such pair of axiomatic systems, there are statements that take polynomial time to prove in the stronger system but exponential (or worse) time in the weaker one.

In my mind, this relates to complexity because it has to do with debunking: some work has been done on showing that weaker systems require a certain amount of time to prove certain statements, and such a result would demonstrate that those results don’t really represent much progress at all (as well as saying some rather disturbing things about the limitations of ZFC relative to stronger systems).

  4. Scott Says:

    I.e., not solved at all, but an interesting connection has been made.

    In the same sense that Max Clique has an “interesting connection” to 3SAT. :)

  5. Nagesh Adluru Says:

    In the same sense that Max Clique has an “interesting connection” to 3SAT. :)

    Can we use the word reduction for this connection or is it something different?

  6. Scott Says:

    Bram: Let S(n) be the statement

    “This statement has no ZF proof at most n symbols long.”

    Clearly S(n) has a short proof in ZF+Con(ZF), and clearly it has a proof of about 2^n symbols in ZF. A first step toward meeting your challenge would be to show that S(n) has no ZF proof with a subexponential number of symbols.

    I discussed this in a comment on This Week’s Finds by John Baez. Alas, there’s no hope of proving an unconditional lower bound — this is related to the hardest open problems in proof complexity. But conceivably one could show that (say) S(n) has no polynomial-size ZF proof unless NE=coNE.

  7. Scott Says:

    Can we use the word reduction for this connection or is it something different?

    It’s a reduction.

  8. Anonymous Says:

    this result, tip of an iceberg connecting quantum computing, AI, and complexity theory?

  9. Scott Says:

    this result, tip of an iceberg connecting quantum computing, AI, and complexity theory?

    Are cryptographic hardness results for PAC-learning part of AI, or are they just straight complexity theory? And which answer reflects more credit on AI? :)

    (AI folks have long complained that, as soon as a branch of “AI” becomes mathematically nontrivial, people want to call it something else! We saw that with automated theorem proving, local search, Bayesian inference, … and I think we also see it with PAC-learning.)

  10. Anonymous Says:

    In the same sense that Max Clique has an “interesting connection” to 3SAT. :)

    Yeah, pretty much. :)

    But I still think that it’s a bit sloppy to call a problem solved because it has a reduction to a well-known problem. Or do you also think that 3SAT is solved? After all, it has a reduction to Max Clique… :)

  11. Anonymous Says:

    It’s a reduction.

    It shouldn’t be called a reduction. You haven’t reduced anything.

  12. Scott Says:

    But I still think that it’s a bit sloppy to call a problem solved because it has a reduction to a well-known problem.

No, it’s solved because a well-known problem has a reduction to it — so if you want to think about algorithms, then you might as well do so for the well-known problem instead!

    This was a novel attitude in the mid-1970’s. Today it’s so routine that complexity theorists barely even think about it (unless of course they’re writing a blog). This can lead to hilarity when someone who’s accepted the Gospel of Reducibility encounters someone who hasn’t.

  13. John Sidles Says:

    There is a very broad class of “sparsification” problems that arise naturally in quantum simulation theory, that also arise naturally in theoretical physics, and that also arise naturally in cryptography.

    But as far as I can tell, no results are known for these problems.

    Here is how generic sparsification problems arise. Someone gives you a quantum trajectory |psi(t)>. And they guarantee that this trajectory is the unraveling of a POVM-type process whose generators are sparse in some “natural” basis. But they don’t tell you the “natural” basis. Finding that natural basis is the sparsification problem.

    Engineers want to solve sparsification-type problems to find the basis that allows efficient model order reduction.

    Physicists want to solve sparsification-type problems to discover the basis that Mother Nature uses for the effective Hamiltonian.

    Cryptographers want to solve sparsification-type problems to discover the secret basis that Alice used to encode the trajectory — there are public key schemes that work by this principle.

  14. Nagesh Adluru Says:

Thanks Scott. I asked because usually connections are reductions, and I was wondering whether there are other kinds, such as non-constructive or non-algorithmic connections.

  15. Scott Says:

    Nagesh: It’s a good question, and there are other kinds of connections. As a simple example, Sigma2P-complete problems (like “is there an x such that for all y, the predicate P(x,y) is true?”) are known to be solvable efficiently if and only if P=NP. But that doesn’t mean they’re NP-complete (indeed they aren’t, unless the polynomial hierarchy collapses).

  16. Nagesh Adluru Says:

Scott, I see how the connections could be different. But doesn’t P=NP imply that the polynomial hierarchy collapses? If it does, then the Sigma2P-complete problems are also NP-complete, right?

  17. Scott Says:

    Yes and yes. That’s consistent with what I said.

  18. Nagesh Adluru Says:

Now I see your last bracketed sentence, and now I clearly understand what you meant! You are a great writer, Scott, only subtler; maybe theorists are like that :)

  19. Anonymous Says:

You are a great writer, Scott, only subtler…

    Nothing more subtle than biting vaginas.

  20. Andy D Says:

    Scott or others–any opinions on the extent to which Valiant’s ‘matchgates/holographic algorithms’ paradigm has fulfilled/could fulfill Challenge 2? Challenge 5?

    (i.e., classical simulation of quantum systems, and new algorithms, quantum or classical)

  21. Scott Says:

    Andy: In Challenge 2, when I referred to “fermionic systems with noninteracting modes,” that’s equivalent to Valiant’s matchgate model (a paper by Terhal and DiVincenzo made the connection). So yes, we’ve known about that for a while, and it would be great to generalize it and/or find more applications.

Holography is a framework for classical algorithms, so it’s outside the scope of my challenge. I see it as a design strategy that was implicit in some early polynomial-time algorithms (for matchings, determinants, etc.), and that, made explicit, has led to some neat new algorithms for things like counting problems on planar graphs.

  22. Johan Richter Says:

A question: How long do you think it will take for P!=NP to be solved, Scott? Or how long will it take before there is any nontrivial result about class separation? (Is that the correct term?)

  23. Luca Says:

    It shouldn’t be called a reduction. You haven’t reduced anything.

My favorite quote ever from a theory paper is in a paper by Papadimitriou and Yannakakis (it was either the one showing that many optimization problems are Max SNP-complete, or the one showing that the VC dimension is LOGNP-complete):

    “As usual in complexity theory, we have decreased the number of questions without, alas, increasing the number of answers.”

  24. Scott Says:

    Johan: Depending on what you consider “nontrivial,” we already have nontrivial class separations. For example, we know that mP doesn’t equal mNP (where m denotes monotone, i.e. only AND and OR gates), and that MA_EXP is not contained in P/poly.

    If you want some spouting about when P!=NP will be solved, check out Bill Gasarch’s P/NP poll. To me, the most striking aspect of this poll is that those who’ve thought about the problem the most are usually willing to speculate the least.

    I believe that P!=NP, I believe this is provable in standard axiom systems, and I believe the proof will require entirely new mathematics. Keep in mind, this is not the first time a mathematical question was asked decades or centuries before anyone had the tools to answer it.

    We know that a non-relativizing, non-naturalizing lower bound technique will be needed, and existing interactive proof results tell us that such techniques are possible, but we don’t have any that are strong enough to answer even much “easier” questions, like NEXP vs. P/poly, or whether SAT has polynomial-size, depth-3 circuits consisting entirely of mod 6 gates.

    Beyond that I won’t speculate. :)

  25. Anonymous Says:

While you’re discussing advances in QC, let me ask you the following related question. You surely know (at least a bit) the paper Totally secure classical communication utilizing Johnson(-like) noise and Kirchoff’s law by Laszlo B. Kish (physics/0509136), claiming that a communication protocol with basically the same properties as quantum encryption is possible within entirely classical systems. Can you comment — or do you know some “quantum colleague” of yours who has done it — on its significance and on whether it has any impact on quantum-related matters?

    Thanks and happy blogging!

  26. Scott Says:

    I actually haven’t seen that paper — anyone who has is welcome to comment.

  27. Anonymous Says:

Scott, what is your opinion of this paper?

    http://arxiv.org/abs/quant-ph/0604079

  28. John Sidles Says:

    Here is a well-posed question for people to chew on — suppose we compute the state psi that is the outer product of four spin 1/2 particles:

    psi = {xa,ya}⊗{xb,yb}⊗{xc,yc}⊗{xd,yd}

    and we specify that the eight coordinates {xa,ya,xb,yb,xc,yc,xd,yd} are all real. Then the above outer product yields a 16-dimensional vector

    psi = {xa*xb*xc*xd, xa*xb*xc*yd, xa*xb*xd*yc, xa*xb*yc*yd, xa*xc*xd*yb, xa*xc*yb*yd, xa*xd*yb*yc, xa*yb*yc*yd, xb*xc*xd*ya, xb*xc*ya*yd, xb*xd*ya*yc, xb*ya*yc*yd, xc*xd*ya*yb, xc*ya*yb*yd, xd*ya*yb*yc, ya*yb*yc*yd}

    such that our eight coordinates define a five-dimensional submanifold in this 16-dimensional space (three of the eight coordinates are redundant, hence five-dimensional).
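The dimension count above is easy to check numerically. Here is a small NumPy sketch (my own illustration, not part of the original comment): it builds the 16-dimensional product vector via Kronecker products and computes the rank of the parameterization’s Jacobian at one random real point. A rank check at a single point illustrates, but of course does not prove, the generic count of 8 − 3 = 5.

```python
import numpy as np

rng = np.random.default_rng(0)
# four real spin-1/2 "coordinate pairs" a=(xa,ya), b=(xb,yb), etc.
a, b, c, d = (rng.standard_normal(2) for _ in range(4))

# the 16-dimensional product state psi = a (x) b (x) c (x) d
psi = np.kron(np.kron(np.kron(a, b), c), d)
assert psi.shape == (16,)

# Jacobian of psi with respect to the eight real coordinates:
# each column replaces one factor by a standard basis vector
e = np.eye(2)
cols = []
for i in range(2):
    cols.append(np.kron(np.kron(np.kron(e[i], b), c), d))  # d psi / d a_i
for i in range(2):
    cols.append(np.kron(np.kron(np.kron(a, e[i]), c), d))  # d psi / d b_i
for i in range(2):
    cols.append(np.kron(np.kron(np.kron(a, b), e[i]), d))  # d psi / d c_i
for i in range(2):
    cols.append(np.kron(np.kron(np.kron(a, b), c), e[i]))  # d psi / d d_i
J = np.column_stack(cols)  # 16 x 8 Jacobian

# three independent rescalings (a -> t*a, b -> b/t, etc.) leave psi fixed,
# so the tangent space is generically 8 - 3 = 5 dimensional
print(np.linalg.matrix_rank(J))  # 5
```

The three rescaling redundancies show up directly as a three-dimensional null space of the Jacobian at a generic point.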

    We imagine that this five-dimensional surface is very bumpy, but here is the surprise — its Weyl tensor vanishes, and hence it is conformally flat.

    And this is true AFAICT for any number of spins, not just four.

    Who can point us engineers in Seattle toward a book that will tell us how to generalize this surprising result (surprising to us) to complex coefficients?

    And be gentle — it will be our first time.