Spouting Buhl

For those who are interested, video of my Buhl Lecture in Physics at Carnegie Mellon is now available on YouTube.  (The lecture was on April 29; the topic was “Quantum Computing and the Limits of the Efficiently Computable.”)  Thanks to everyone at CMU for their amazing hospitality.

27 Responses to “Spouting Buhl”

  1. Jeroen Says:

Great lecture! You should come to Hasselt University (Belgium) for one when you are around. It might warm our theoretical CS group up to some research (or courses!) in quantum complexity, instead of the good ol’ logics and database research, hehe.

    I am joking around of course. Keep up the great blog, love it.

    Cheers,

    A student.

  2. dfphil Says:

    That was a lovely, interesting and understandable lecture. Thanks for sharing it!

    I was surprised by how much like a physics lecture it sounded. I have memories of reading you a few years ago on the importance of taking mathematics and complexity seriously on their own: if mathematicians listen to physical reasoning too much, many interesting problems might be ignored.

    In this lecture, you seemed to propose using the conjecture that hard problems actually are hard to reason about the physical universe. To my physicist ear, it sounded as though the reverse was really going on: for hard problems to be solvable with polynomial resources, lots of non-physical things would have to be possible. While that would be my physicist bias, I didn’t think it was your typical mode of thought.

    I recognize that you were giving a lecture to a physics audience and that colored your subject matter, but I did keep expecting a defense, to the physicists, of taking mathematical problems on their own terms.

  3. Pat Says:

    Two questions:

    1) Where are the end-of-talk questions? (The other Buhl lecture I checked had them at the end.)

    2) You say the exponential separation of state vectors under possible nonlinear Schrödinger equations could be used to quickly solve NP-complete problems (citing Abrams and Lloyd 98), and use this as evidence that QM is linear. I’m reminded of the scientists who, upon learning the weather had sensitive dependence on initial conditions, thought “hey, that means we should be able to control the weather!”. Wouldn’t nonlinear QM have such sensitive dependence on initial state and, therefore, be impossible to control? (Forgive me if this question is naive; my understanding of quantum computing is basically a strict subset of the statements you made in that talk.)

  4. Scott Says:

    Pat: Great questions!

    1) Unfortunately I went way over time, so my hosts decided to skip public Q&A (though lots of people asked questions at the reception afterward).

    2) Yes, it’s a valid point. Indeed, 7 years ago I spent a week or two thinking about whether nonlinear QM can “robustly” or “fault-tolerantly” solve NP-complete problems efficiently. If I remember correctly, I never came up with a noise model that I was happy with, but I still think it’s a great research problem.

  5. Scott Says:

    (cont’d) But your analogy to weather prediction isn’t perfect. In the weather case, we know exactly why sensitive dependence doesn’t imply controllability: because we don’t get to control the whole atmosphere, just one piece of it, and the sensitive dependence on the piece we control is completely washed out by the sensitive dependence on all the pieces that we DON’T control. By contrast, it’s not obvious whether or not one could control all components of a nonlinear QC well enough to solve NP-complete problems.

    Interestingly, even if NP-complete problems turned out not to be “robustly solvable” by a nonlinear QC, it’s still a very plausible conjecture that graph isomorphism and the collision problem WOULD be solvable, since for those problems, one only needs to distinguish sets of states that have small pairwise inner products.

    Stepping back, maybe it’s understandable that there hasn’t been much research on nonlinear quantum computing subject to “real-world constraints”! 🙂 But thanks very much for reminding me of a good problem I’d forgotten about.

  6. John Sidles Says:

    Scott, where your Buhl Lecture says:

    “The No-Supersearch Postulate explains why the Schroedinger equation is linear.”

    wouldn’t it be more correct to say

    “The No-Supersearch Postulate excludes some nonlinear modifications of the Schroedinger equation.”

    Here the point is that the articles of Weinberg and of Abrams and Lloyd (for example) exclude only selected nonlinear dynamical models … there is no article that (to my knowledge) credibly claims to have excluded all nonlinear models.

    The point is that nonlinear dynamical systems are notoriously subtle, and very few broad claims can be rigorously proved about them. As von Neumann once put it:

    “The usual theorems of existence and uniqueness of a solution, that one would like to have, have never been demonstrated and are probably not true in their obvious forms.”

    Isn’t von Neumann’s assertion, about classical dynamics, true too about quantum dynamics?

  7. Scott Says:

    John: That’s another valid point; thanks!

    But on the other hand, the argument that Abrams and Lloyd gave was pretty simple and generic: basically, it works for any nonlinearity that lets you repeatedly increase the angle between vectors.
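
    To see numerically how generic the mechanism is, here’s a toy calculation (mine, purely illustrative, not Abrams and Lloyd’s actual construction): suppose some nonlinear gate simply doubles the angle between any pair of states each time it’s applied. Then two states that start exponentially close, like branches of a search differing in one assignment out of 2^n, become distinguishable after only linearly many steps:

    ```python
    import math

    # Toy model (not Abrams-Lloyd's construction): a nonlinear gate that
    # doubles the angle between any two states per application. Branches
    # differing in one assignment out of 2^n start at angle ~2^-n.
    n = 50
    angle = 2.0 ** (-n)         # initial angle between the two states (radians)
    steps = 0
    while angle < math.pi / 4:  # stop once the states are easily distinguishable
        angle *= 2              # one application of the angle-doubling gate
        steps += 1

    print(f"n = {n}: distinguishable after {steps} steps")  # 50 steps, not ~2^50
    ```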

    So maybe we should turn the question on its head: is there any interesting nonlinear modification of QM that you have reason to believe would not make NP-complete problems efficiently solvable? (Where “interesting” is meant to rule out “nonlinearities” like transposing amplitudes, which become linear if you look at what’s going on in a slightly larger space.)

  8. Micki St. James Says:

    Nonlinear modification of QM. Hmm.
    From an Amazon review of the book “Why not?”:

    >This engaging primer is more insightful than the usual free-associational, brainstorming protocols. Economist Nalebuff and law professor Ayres insist that “innovation is a skill that can be taught,” and distill it into a few rules of thumb, like “where else would it work?” (putting airline data recorders into cars, for example) and “would flipping it work?”, which involves gonzo conceptual inversions like students raising their hands to not be called on or “reverse 900 numbers” where telemarketers pay people to accept calls.

    Unfortunately this only seems to lead to two dead-end ideas: flipping by transposing amplitudes, which you’ve just ruled out, and putting airline data recorders into unitary matrices.

  9. Yatima Says:

    It’s DJ Scott at 300 inferences per minute 🙂

  10. Mike S Says:

    Hi Scott,

    Great lecture! I have a couple of questions for you:

    1. One thing I’ve heard before about NP(-hard) problems is that often certain instances are much harder than others. What are your feelings on the physical practicality of a computer that quickly solves most, but not all, instances of NP(-hard) problems? Also, is determining the ‘difficulty’ of particular instances of NP-complete problems itself NP(-hard)?

    2. Less seriously, you said something along the lines of ‘P != NP keeps mathematicians in business’. If math is so hard computationally, how do WE do it? Or on the other hand, if the computational complexity of certain problems is a fundamental property of the universe, and we are part of the universe, doesn’t it follow that we could make computers that are as good or better at doing math than we are?

  11. Scott Says:

    Mike S: Those are great (and huge) questions! Briefly:

    1. It depends what you mean by “most”! I think it’s almost certainly possible to generate a probability distribution over 3SAT instances almost all of which are hard (indeed, that assumption is central to modern cryptography). As one example, the approximate shortest vector problem is known to be just as hard on average as it is in the worst case, and it can easily be reduced to 3SAT. Another candidate is random k-SAT instances at the “critical ratio” of clauses to variables, for k≥4.
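
    To make the random k-SAT candidate concrete, here’s a minimal generator sketch. (The threshold densities in it, roughly 9.93 clauses per variable for k=4 and 4.27 for k=3, are empirical estimates from the literature, quoted here as assumptions.)

    ```python
    import random

    def random_ksat(n_vars, k=4, density=9.93):
        """Sample a random k-SAT formula at a fixed clause-to-variable ratio.

        Density ~9.93 for k=4 (~4.27 for k=3) is the empirically observed
        satisfiability threshold, where random instances are conjectured to
        be hardest on average. Clauses are lists of signed variable indices.
        """
        formula = []
        for _ in range(int(density * n_vars)):
            chosen = random.sample(range(1, n_vars + 1), k)  # k distinct variables
            formula.append([v if random.random() < 0.5 else -v for v in chosen])
        return formula

    # Example: a critical-ratio 4-SAT instance on 200 variables.
    instance = random_ksat(200)
    print(len(instance), "clauses; first:", instance[0])
    ```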

    But maybe what you meant was those instances of NP-hard problems that “typically arise in real life.” Here all sorts of issues come into play: for example, often the instances that arise in practice have symmetries or other structure that makes them easy. And often your goal is not to find the best solution, but just a better solution than your competitors. And often we terminate trains of thought long before they lead to hard instances of NP-complete problems—we’re usually not even conscious that that’s what we’re doing; we just have an intuition that “such-and-such would require a hopeless search.”

    But at the same time, when we do ask explicitly for optimal solutions, that request for optimality often has a way of finding the hard instances for us.

    2. The short answer is that math (as practiced by humans) is an extremely hit-or-miss business! A billion years of evolution have equipped us with a lot of useful heuristics, as has the much faster evolution of mathematical ideas over the last few thousand years.

    Perhaps even more important, we normally don’t care about arbitrary mathematical questions (does this random Turing machine halt?), but only questions that arise in some explanatory framework. And that criterion tends to select extremely strongly for questions that we can answer! Why it does so is a profound question itself, but whatever the answer, the history of math provides overwhelming evidence that it does. Goldbach’s Conjecture and the Collatz 3x+1 Conjecture are more-or-less “arbitrary” questions (at least in our present state of knowledge), and indeed they haven’t been answered. Fermat’s Last Theorem seemed pretty arbitrary (Gauss regarded it as such), but in the 1980s it was connected to a deep explanatory framework, and less than a decade later it was answered.

    Of course, despite these factors in mathematicians’ favor, they’re very far from having a general-purpose method to solve all the problems they want solved.

    Incidentally, “P≠NP means computers can never replace human mathematicians” is a forehead-bangingly common misunderstanding. Personally, I see no reason at all why the brain couldn’t be simulated by computer (neuron-by-neuron if necessary), and P≠NP does nothing to challenge that belief. All P≠NP suggests is that, once the robots do overtake us, they won’t have a general-purpose way to automate mathematical discovery any more than we do now.

  12. John Sidles Says:

    Scott reminds us: “We normally don’t care about arbitrary mathematical questions.”

    Scott, couldn’t your assertion be phrased even more stringently as the restriction: “We normally don’t care about unverifiable mathematical questions”?

    When we extend this restriction to the informatic and experimental realms we obtain: “We normally don’t care about undecidable mathematical conjectures, irreproducible physical experiments, or unverifiable simulation procedures.”

    To borrow the phrase of your concluding Buhl slide …

    “One can imagine worse research agendas than the following:

    (1) Restrict P to P’ and NP to NP’ such that P’ and NP’ are provably separable and our intuitions regarding efficiency and verifiability are respected.

    (2) Restrict quantum Hamiltonian flow to manifolds such that the flow is efficiently simulable and our intuitions regarding thermodynamics and verifiability are respected.”

    Seriously, Scott … wouldn’t this agenda lead us to a much better world than the one we live in now? Better for mathematicians, better for scientists, better for engineers … better for everyone?

    Because if all three restrictions had been embraced in the 20th century, mightn’t we already be living in a world whose complexity classes P’ and NP’ were provably separable, and whose quantum dynamical models were provably simulatable and verifiable in P?

    That would be a mighty fun world to live in, wouldn’t it?

  13. Scott Says:

    John: What exactly are these classes P’ and NP’, such that P’≠NP’ is so easy to prove? If by P’, you mean something like “those languages that can be proved in ZFC to be in P,” then that isn’t even well-defined: the empty language is easily proved to be in P if I define it as such, but not if I define the empty language in a different way—e.g., “the set of pairs (x,y) such that x is a ZFC proof of 0=1 and y is a YES-instance of 3SAT.”

    Alternatively, you could define P’ to be something like, “the set of languages L for which there exists a ZFC predicate A such that ZFC proves that the language defined by A is in P, and it’s metatheoretically true that the unique language defined by A is L.” And you could define NP’ similarly. But I don’t see the slightest reason to think that P’ and NP’ so defined would be easier to separate than P and NP themselves.

    On the broader issue, I don’t think the fact that P vs. NP is hard gives us the right to dismiss it as somehow a wrong question. It does suggest tackling easier questions first, which is precisely what complexity theory has been doing for the last 40 years!

  14. John Sidles Says:

    Hmmm … isn’t it pretty obvious that P’ (under almost any reasonable definition) will have more structure than P?

    For example, the languages of a “reasonable” P’ might have a decidable runtime ordering (an attribute that P conspicuously lacks).

    The broad point is that our intuitions as to whether the added structure of P’ would help separate P’ from NP’ are dependent upon our having balancing intuitions regarding structures that P lacks.

    What would be helpful in this regard would be a thorough literature review of what is known (and not known) regarding P’s decidable versus undecidable attributes, perhaps along the lines of Juris Hartmanis’ Feasible computations and provable complexity properties (1978) … updated to reflect complexity theory’s immense progress during the past 33 years.

    To make this same point another way, it seems implausible (to me) that we will ever separate P from NP without (somewhere along the way) acquiring a more satisfactory understanding of the surprising, subtle, and even beautiful undecidable properties of these classes.

  15. Scott Says:

    John: I’m not sure what “more structure” even means in this context, but as I said, it’s far from obvious to me why “provable P” and “provable NP” (however you define them) should be easier to separate than P and NP themselves. Indeed, in the history of complexity theory, attempts to make progress using metamathematical ideas like provability in ZFC have almost all been dead ends (which I think is a shame, because I love metamathematics, but unmistakably true). Progress has come, instead, from “meaty math” with nothing meta about it: finite fields, low-degree polynomials, random walks, Fourier analysis. So, while I could be convinced of the program you advocate, I’d need to see evidence first, in the form of new and exciting progress that relied on runtimes being provable.

  16. John Sidles Says:

    Scott, you and I absolutely agree that progress in mathematics is more typically associated to ideas that are more “meaty” than “meta.”

    For me, the role(s) of verifiability in deciding P vs NP are seen most naturally by analogy to geometry. Geometrically speaking, the modern notion of a continuum is broadly stratified from Kähler manifolds to Riemannian manifolds to smooth manifolds to topological manifolds to point sets (with many further distinctions). Needless to say, formalizing these distinctions among point-sets was a centuries-long process that had both “meaty” and “meta” elements … a process that was essential to progress in disciplines that span all of mathematics.

    Similarly, the set of Turing machines in P, and the set of languages these machines accept, includes “wild” languages having undecidable properties, whose role in complexity theory is (perhaps?) broadly analogous to “exotic” point sets in geometry and topology. It therefore seems plausible (to me) that resolving P versus NP will require a lengthy process— both “meaty” and “meta”— of acquiring a better understanding of the “wild kingdoms” of languages within P and NP.

    I was shocked to discover that (AFAICT) no complexity theorist since Juris Hartmanis (in his 1978 monograph) has attempted to summarize our understanding of these fundamental issues. Is it surprising that progress is slow?

  17. Scott Says:

    John, I agree that “taming the wildness” of P and NP (by whatever means) is what this business is about! Complexity is a pretty result-driven field, so any approach to taming the wildness that gets results is likely to get attention.

    Related to that, one alternative explanation for why there’s been so little follow-up to Hartmanis’s 1978 monograph is simply that there hasn’t been much to do in that direction, beyond what Juris (and other people in the 1970s) already did! 🙂 But maybe I’m mistaken—can anyone state any open problems from Juris’s monograph that might be worth revisiting today?

  18. John Sidles Says:

    Scott, the question you ask is a good one, and obviously a good start at an answer would be to quote Hartmanis’ own words verbatim. Very slowly, I am constructing such a quotation-based essay/answer.

    Hartmanis’ monograph argues that (what we would today call) relativization theorems are compatible (obviously) with the undecidability of P vs. NP … and thus constitute (less obviously?) heuristic evidence for it … and even (potentially?) an avenue for proving it.

    Of course, the distillation of concrete open problems from discussions like Hartmanis’ is a notoriously subtle task. Even with the help of decades of hindsight, it was not easy for Gödel, Church, and Turing to appreciate that the open problems associated to Hilbert’s second question (of 1900) encompassed not only consistency but decidability … and Hilbert himself was slow to appreciate this avenue of research.

    Similarly, even with the help of decades of hindsight, it is by no means obvious (to me at least) what worthy open problems might be distilled from Hartmanis’ 1978 monograph … but the decidability of the separation between P and NP surely is one of them.

  19. Amirali Says:

    Really enjoyed your talk. Very clear and interesting lecture.
    Quick question: what is the argument behind the statement that if P was equal to NP, then we could (efficiently) solve all the Clay Institute problems? Could you please make that more precise?

    To make things more concrete and perhaps simpler, let’s take e.g. the (< $1M) Goldbach conjecture. If I have a polynomial time algorithm for 3SAT, how can I solve Goldbach?

    (I'm guessing that one would argue that the Goldbach conjecture should have a proof that is easily recognizable; so it's in NP, and therefore if P=NP, the proof should be easy to find. But how do we even phrase the Goldbach conjecture, which is just one instance of a problem, as a family of problems with a well-defined "input" and "input size"? And once we have done that, how do we show it's in NP?)

  20. Scott Says:

    Amirali: OK, let’s talk about Goldbach’s conjecture for concreteness. Fix some formal system that encompasses all “ordinary” mathematical reasoning, let’s say Zermelo-Fraenkel (ZF) set theory. Certainly Goldbach’s conjecture can be stated in the language of ZF. Furthermore, it’s possible to write a computer program that checks the validity of any purported ZF proof, in time that’s almost linear (and certainly polynomial) in the total number of symbols that the proof contains. But this immediately implies that the problem

      “Given an input n, decide whether Goldbach’s conjecture has a ZF proof with n symbols or fewer”

    is an NP problem (and hence reducible to 3SAT, by the Cook-Levin Theorem). So if you had a polynomial-time algorithm for 3SAT, then it would also be a P problem.

    That would mean that, however long the shortest ZF proof of Goldbach’s Conjecture turned out to be, you could find that proof after a number of steps that was only polynomially greater than the number of steps you’d need to write the proof down.
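
    In code, the NP verifier is almost a one-liner; the sketch below assumes a hypothetical routine check_zf_proof (writing a real one, for some fixed machine-readable proof format, would be tedious but routine, and it runs in time polynomial in the proof length):

    ```python
    GOLDBACH = "every even integer greater than 2 is a sum of two primes"  # as a ZF sentence

    def check_zf_proof(statement, proof):
        """Hypothetical proof-checking kernel: True iff `proof` is a valid
        ZF derivation of `statement`. All that matters here is that such a
        checker exists and runs in time polynomial in len(proof)."""
        raise NotImplementedError  # stand-in; not implemented in this sketch

    def goldbach_proof_verifier(n, witness):
        """NP verifier for: 'Goldbach's Conjecture has a ZF proof with at
        most n symbols.' The witness is the purported proof itself."""
        return len(witness) <= n and check_zf_proof(GOLDBACH, witness)
    ```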

    (Except for the part about 3SAT and the Cook-Levin Theorem, the above argument is essentially the one that Gödel made in his 1956 letter to von Neumann.)

  21. Amirali Says:

    Thanks a lot Scott. That definitely answers my question. I was thinking of the wrong decision problem; so there is obviously nothing special about Goldbach’s conjecture or any other problem. Couple of quick questions/comments:

    Let’s call the decision problem you have stated above “GP” (for Goldbach’s proof). I agree that we can give a poly-length reduction from GP to 3SAT (I’m guessing it would go: GP’s answer is YES iff the 3SAT instance is satisfiable). But let’s say the answer to GP is YES for some n=n*. When we solve the associated 3SAT instance, do we just get to know that the answer to GP_n* is YES, or do we also get a proof of Goldbach’s conjecture? I guess my question is: (i) Is there some argument that says if you can give the correct YES/NO answer to 3SAT in poly time, then with polynomially more work, you can also find a satisfying assignment? and (ii) Will the reduction work in a way that a satisfying assignment would directly translate into a proof of Goldbach?

    My only comment is that the argument for P=NP having consequences for (efficiently) solving Goldbach still needs a little fix. Indeed, Goldbach has a proof of some finite length n*. (It does BTW, right?) The reduction then turns GP_n* into a 3SAT instance of size n**. It could happen that even if P=NP, the best poly time algorithm for 3SAT has running time much larger than 2^n** on the instances of size n**. (In other words, we can’t give the usual “asymptotic” argument for why poly time is better than exp time since GP is only interesting up to n*.)

    BTW, it’s amusing to think that we can turn the P vs. NP question into solving essentially a single instance of 3SAT. Maybe people should try that approach 😉

  22. Scott Says:

    Amirali: I like your questions, since they actually have clear answers!

    (1) Yes, there’s an important result called “the equivalence of search and decision” for NP-complete problems, traditionally given as a homework problem in theory courses. It says that, if you have a polynomial-time algorithm to decide whether (say) a 3SAT formula φ(x1,…,xn) is satisfiable or not, then you can convert it into a polynomial-time algorithm to find a satisfying assignment if one exists. I’ll give you a hint: first you run your decision algorithm on the (n-1)-variable formula

    φ0(x2,…,xn) := φ(FALSE,x2,…,xn);

    then you run it on

    φ1(x2,…,xn) := φ(TRUE,x2,…,xn).

    Then you … OK, the rest is homework for you. 😉
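
    (Spoiler, for anyone who gets stuck: here’s where the hint leads, as a minimal sketch, with decide_sat standing in for the hypothesized polynomial-time decision procedure.)

    ```python
    def find_assignment(phi, n, decide_sat):
        """Search-to-decision for SAT: given a formula phi on variables
        x1..xn and a procedure decide_sat(phi, restriction) that reports
        whether phi is satisfiable under a partial assignment, recover a
        full satisfying assignment with only n+1 calls to decide_sat."""
        if not decide_sat(phi, {}):
            return None                 # phi is unsatisfiable
        assignment = {}
        for i in range(1, n + 1):
            # Try x_i = FALSE; if phi stays satisfiable, keep it. Otherwise
            # x_i = TRUE must work, since the current restriction is satisfiable.
            if decide_sat(phi, {**assignment, i: False}):
                assignment[i] = False
            else:
                assignment[i] = True
        return assignment
    ```

    Each iteration pins down one more variable, so the overhead over the decision algorithm is only a linear number of calls.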

    On top of the above, any standard reduction from the Goldbach problem to 3SAT will have the further property of being a witness mapping. That is, in polynomial time, you can directly convert any n-symbol ZF proof of Goldbach’s Conjecture into a satisfying assignment of the 3SAT formula, and vice versa.

    (2) Yes, the shortest ZF proof of Goldbach’s Conjecture (assuming such a proof exists, which I’m >99% sure of!) has some fixed, finite length n*. But this is only an issue for Gödel’s argument in the same way asymptotics are always and everywhere an issue for complexity theory! It’s clear that a polynomial-time algorithm will “eventually” win massively over an exponential-time one. So the sole remaining question is whether it will “start winning soon enough”—i.e., on those instances that we finite creatures actually want to solve. GP of size n* is just one example of such an instance (albeit one with the interesting feature that we don’t yet know n*).

  23. Amirali Says:

    Thank you very much for your replies. That again covers all of my questions. I’ll ask you one last one, and then I’ll stop taking your time!

    I’m intrigued by the statement that you are “>99%” sure that Goldbach’s conjecture has a finite proof. Is there any yes/no question in mathematics that does not have a finite proof? (and I mean a single yes/no question, not a problem with a family of instances that can be undecidable.) Is this what the incompleteness theorem answers? Do we have a concrete example of such a problem? (I’m familiar with statements of the type: “G: G is not provable”. So, let’s exclude self-reference.)

    Wouldn’t it also be the case that if Goldbach conjecture did not have a finite proof, then it would necessarily be true? (Because if it was false, it would have a finite proof: just the counterexample.)

  24. Scott Says:

    Amirali: Well, there are statements like the Continuum Hypothesis (CH), which are known to be neither provable nor disprovable from the usual axioms of mathematics (ZF set theory), assuming those axioms are consistent. CH doesn’t involve any self-reference, but it does involve transfinite sets, so one could reasonably take the attitude (though not everyone does!) that CH was “too vague to have a definite answer.” In other words, in contrast to (say) Goldbach’s Conjecture, there doesn’t seem to be any imaginable computer program or physical process whose behavior would depend on the truth or falsehood of CH.

    Currently, all the statements that are proven to be independent of ZF set theory involve either transfinite sets or Gödelian self-reference. (There are relatively “natural” mathematical statements, like the Paris-Harrington theorem, that are provably independent of Peano Arithmetic, but those statements are provable in stronger theories like ZF set theory.)

    Logically, we can’t exclude the possibility that “normal” mathematical statements, like (say) Goldbach’s Conjecture, are independent as well and could even be proved to be independent (though only in a theory stronger than ZF itself). Such statements would still be either true or false; it’s just that our current axioms wouldn’t suffice to say which! Indeed, as you correctly pointed out, if Goldbach’s Conjecture is false then it has a finite counterexample, which means that if it’s independent of ZF set theory then it’s metatheoretically true!
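
    To spell out the finiteness: a counterexample would be a single even number, and checking any candidate is a short, mechanical computation. A minimal sketch:

    ```python
    def is_prime(m):
        """Trial division; fine for an illustration."""
        if m < 2:
            return False
        d = 2
        while d * d <= m:
            if m % d == 0:
                return False
            d += 1
        return True

    def violates_goldbach(n):
        """True iff the even n > 2 has NO decomposition as a sum of two
        primes; any such n would be a finite, checkable disproof."""
        return n > 2 and n % 2 == 0 and not any(
            is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1)
        )

    # No counterexamples among the first few thousand even numbers:
    assert not any(violates_goldbach(n) for n in range(4, 10000, 2))
    ```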

    On the other hand, logically it’s possible that Goldbach would be independent of ZF, and that fact itself would be unprovable in ZF, and so on ad infinitum. In that case, even though Goldbach would be true, it’s not clear how we could ever come to know that.

    But such possibilities seem very unlikely given our actual experience. As I said, my guess is that the lack of a proof of Goldbach’s Conjecture reflects the current limitations of human intelligence rather than the ultimate limitations of ZF set theory.

  25. Amirali Says:

    Learned quite a bit from this discussion. Thank you Scott.

  26. John Sidles Says:

    Gauss’ letter of 1816 to Olbers offered the opinion

    “I confess indeed that Fermat’s theorem as an isolated proposition has little interest for me, since a multitude of such propositions, which one can neither prove nor refute, can easily be formulated.”

    Nowadays we can extend Gauss’ intuition by exhibiting Turing machines in P that recognize languages that are lists of such propositions. Lists of mortal/immortal matrices are natural candidates for such languages.

    It is easy to specify Turing machines in P that generate ordered lists of provably mortal matrices, together with a certificate of each matrix’s mortality. Conversely (but less rigorously) we can conceive a class of Turing machines, each of which guesses a list of immortal matrices (on probabilistic grounds) such that most of those Turing machines recognize (in P!) matrix languages whose immortality is Gödel-undecidable.
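
    To make the “certificate of mortality” concrete (my encoding, purely illustrative): a finite set of integer matrices is mortal iff some finite product of them, repetitions allowed, equals the zero matrix, so a certificate is just a sequence of indices, verifiable in time polynomial in its length:

    ```python
    import numpy as np

    def verify_mortality_certificate(matrices, indices):
        """Multiply out matrices[i] in the order given by `indices` and test
        whether the product is the zero matrix. (A careful account would
        also bound the growth of the matrix entries.)"""
        product = np.eye(matrices[0].shape[0], dtype=int)
        for i in indices:
            product = product @ matrices[i]
        return not product.any()

    # Toy example: A is nilpotent, so the set {A} is mortal, with the
    # two-step certificate [0, 0].
    A = np.array([[0, 1], [0, 0]])
    print(verify_mortality_certificate([A], [0, 0]))  # True: A @ A == 0
    ```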

    The intuition here is that the Complexity Zoo’s inhabitants may generically include wild languages that are recognized in P and whose words assert true statements that are Gödel-undecidable—yet not provably so. And the point is that these wild languages in P (if they exist?) are intimately associated to complexity-theoretic questions that are subtle, open, and difficult.

    An essay clarifying these “wildness-in-P” issues would be very welcome to me and many! 🙂

  27. Do the undecidable attributes of P pose an obstruction to deciding P versus NP? (answer: maybe) | Question and Answer Says:

    […] question is open, and because it is being discussed on multiple mathematical weblog threads (1,2,3,4,5,6), this question has been flagged for conversion to Community […]