Valiant’s valiance recognized

Update (March 25): I have a new paper called A Linear-Optical Proof that the Permanent is #P-Hard, which is dedicated to Les Valiant on the occasion of his Turing Award.  Here’s the abstract:

One of the crown jewels of complexity theory is Valiant’s 1979 theorem that computing the permanent of an n*n matrix is #P-hard. Here we show that, by using the model of linear-optical quantum computing—and in particular, a universality theorem due to Knill, Laflamme, and Milburn—one can give a different and arguably more intuitive proof of this theorem.
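
(For readers meeting the permanent for the first time: it is defined exactly like the determinant, except that every permutation is added with a plus sign rather than with its signature. The following minimal Python sketch is mine, not anything from the paper; it computes the permanent straight from that definition, in brute-force exponential time, which is the kind of cost Valiant’s theorem tells us we shouldn’t expect to avoid by any polynomial-time trick.)

    from itertools import permutations
    from math import prod

    def permanent(A):
        # Same formula as the determinant, but every permutation is taken
        # with a plus sign -- no sgn(sigma) factor.
        n = len(A)
        return sum(prod(A[i][sigma[i]] for i in range(n))
                   for sigma in permutations(range(n)))

    A = [[1, 2],
         [3, 4]]
    print(permanent(A))   # 1*4 + 2*3 = 10, whereas det(A) = 1*4 - 2*3 = -2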


For decades, Harvard’s Leslie Valiant has obviously deserved a Turing Award—and today, the ACM most excellently announced its agreement with the obvious.  I have little to add to the prize citation (see also Lance’s post): from launching new fields whose reach extends beyond theory (PAC-learning), to proving epochal results (#P-completeness of the permanent), to asking hugely influential questions (permanent vs. determinant), Valiant has been a creative powerhouse of theoretical computer science for longer than I’ve been alive.

One thing the prize citation doesn’t mention is that Valiant is now the third Turing Award winner (after Andy Yao and Len Adleman) to have made a major contribution to quantum computing theory.  Valiant’s 2001 paper Quantum Computers that can be Simulated Classically in Polynomial Time introduced the beautiful computational model that computer scientists now know as “matchgates,” and that physicists know as “noninteracting fermions.” It still amazes me that Valiant proposed this model for purely mathematical reasons—hitting physical relevance straight between the eyes despite (as far as I can tell) not having that target anywhere in his sights.

To put the point in terms that my physicist friends will understand, that Valiant himself would probably dispute, but that I would defend:

Valiant’s work has shown that, even if our universe hadn’t been made of bosons and fermions, theoretical computer scientists would have had compelling reasons of their own to invent those particles or something equivalent to them—and furthermore, that at least one theoretical computer scientist would have had the imagination to do so.

Certainly Valiant has had a huge influence on me, both through his work and as someone who made time to talk to me as an obscure grad student a decade ago.   Three of my papers—The Learnability of Quantum States, A Full Characterization of Quantum Advice, and The Computational Complexity of Linear Optics—would collapse entirely without Valiant-laid foundations.

Congratulations, Les!

28 Responses to “Valiant’s valiance recognized”

  1. Ross Snider Says:

    Valiant is one of the men who I would love to treat to a meal as an obscure grad student (whenever some stubborn CS department finally decides I deserve an acceptance rather than a rejection letter).

  2. Will Rabjohns Says:

    Scott, this might be a good time to broach the subject of whether awards like the Turing (or the Nobel, the Fields Medal, etc.) should even exist. What legitimate purpose do they actually serve? Surely Valiant (and every other award winner) had received plenty of recognition before getting the prize; if he hadn’t, there’s almost no way he could have won to begin with. Given that the odds are you will one day win the Turing (assuming you keep on as you have been), would you feel cheated if the prize were abolished before you had a realistic chance to win it? Shouldn’t the energy and the money go to a better purpose (that is, one which directly helps advance science or philanthropy)?

  3. Philip White Says:

    Why don’t you think awards directly advance science? I’m not a scientist or professional mathematician, but awards do many things, including: incentivize, reward, publicize, and glorify (in a good way) progress, whether it’s scientific or of another variety. I can tell you from personal experience that there are few things worse or more demoralizing than a community (e.g., a workplace) that doesn’t celebrate its successes adequately. And besides, reading about great computer scientists and mathematicians being recognized for their work is inspiring to many people.

  4. Scott Says:

    Will: Within the theory community, Oded Goldreich is well-known for sharing your anti-award sentiments.

    On the one hand, I hate the unfair and arbitrary aspects of awards: there are clearly-deserving theorists who were denied the Nevanlinna simply because they had the misfortune of turning 40 before the committee got around to them, just as there were great scientists (Hubble, Rosalind Franklin) who had the misfortune of dying before the Nobel committee got around to them. (Of course, the 40-and-under rule of the Fields and Nevanlinna makes things even more unfair and arbitrary than they’d otherwise have to be.)

    On the other hand, I also find it grossly unfair that (with a few exceptions, like Einstein and Hawking) the world’s greatest scientists probably get less than 1% of the fame, glory, etc. of a mediocre NFL player. And that, in turn, affects what kids dream about being when they grow up, and probably even the relative social status of nerds vs. other kids. To whatever tiny extent the Nobel Prize, Fields Medal, and Turing Award rectify that situation, I find myself supporting their existence despite my reservations about their fairness.

  5. John Sidles Says:

    Oded’s essay was not familiar to me; thank you for providing a link to it.

    The poet whom Oded quotes, Robinson Jeffers, *is* familiar, and his famous poem The Purse-Seine speaks directly to 21st century global scientific endeavors in the following lines:

    … Lately I was looking from a night mountain-top
    On a wide city, the colored splendor, galaxies of light.

    How could I help but recall the seine-net
      Gathering the luminous fish?

    I cannot tell you how beautiful the city appeared,
       and a little terrible.

    I thought, We have geared the machines and locked all together
    into inter-dependence; we have built the great cities;
    now there is no escape.

    We have gathered vast populations incapable
      of free survival, insulated
    From the strong earth, each person in himself helpless,
      on all dependent.

    The circle is closed, and the net
    Is being hauled in …

    It seems to me that the mathematicians, scientists, and engineers (and physicians, artists, politicians, teachers, …) most worthy of our appreciation include, in particular, those whose work helps us appreciate that the space inside our global purse-seine is (hopefully) more resource-rich, and also vastly more beautiful, than it appears at first sight.

  6. jonas Says:

    Note however that the very few scientists who do get really famous get fame for centuries, whereas any NFL player will only be famous for a few decades at most.

    By the way, while we’re on the subject of famous scientists, the readers of this blog may want to look at “http://www2.stetson.edu/~efriedma/periodictable/” which has biographies of 111 famous mathematicians.

  7. Lou Scheffer Says:

    Per Scott’s note above, when integrated over time, the most famous scientists are famous in ways no athlete or movie star has yet achieved – for example, Charles Darwin is referenced by name in 2% of all books ever published in the English language. See “The Science Hall of Fame” from the AAAS:

    http://www.sciencemag.org/content/331/6014/143.3.full

  8. matt Says:

    I give enormous respect to Valiant, and especially to the work on permanent vs. determinant. But let me take the opportunity to put forth a contrarian viewpoint on the matchgates work (again, with absolutely the highest respect for everything else!): to a physicist, it really is just non-interacting fermions! IMO, when I see people in CS talk about how holographic algorithms might be a key to understanding deep problems in complexity theory, etc., I really think they miss the point. There are multiple instances in physics where one can make enormous progress on a particular point, and yet a minor modification of the model makes that machinery much, much less effective (compare the power of CFT in 2d vs. CFT in 3d), and noninteracting fermions are one example.

    Further, physicists already did invent the noninteracting fermion model multiple times, at least once for essentially the same reasons as Valiant, without any motivation by actual fermions. Namely, Onsager’s solution of the 2d Ising model is now usually presented in a much simpler way using noninteracting fermions than via its original, more complicated construction. Bruria Kaufman was responsible for that breakthrough, but she didn’t, as far as I understand it, immediately leap to noninteracting fermions using physical knowledge. Rather, she found a way of using spinor representations of the orthogonal group, and then turned that into fermions, essentially following the math. Then this was re-invented in the non-interacting fermion mapping for counting dimer covers by Kasteleyn and company (interestingly, while they did it directly, one can find frustrated Ising models on the dual lattice which also count the dimer covers, so given the fermion mapping of the Ising model one can also derive the fermion counting of dimers this way… I don’t know when this was realized, whether before or after).

    Just to add an interesting complexity-theoretic remark: one interesting outcome of this work has been the ability to count dimer covers on nonplanar graphs at a complexity which scales polynomially in the graph size but exponentially in the genus.

    Anyway, again, huge respect to Valiant, but I think many physicists, if given the paper to referee, would have pointed out references in the earlier literature, and only accepted it in a journal which was not top rank. IMO, this was an area where CS didn’t know the literature. Hope this isn’t considered offensive!

  9. Scott Says:

    Matt: Thanks for the interesting perspective!

    Indeed, I strongly disagree with the view Valiant has sometimes expressed in talks that there might well be an “accidental algorithm” proving NC = P^{#P}. My own view on holographic algorithms is much closer to yours—that what they reveal are surprising weaknesses in very specific models.

    While I agree that physicists had been interested in noninteracting fermion systems for some time (to put it mildly), I’m curious: can you give me an earlier source than Valiant for the claim that such systems can be simulated in P?

    (That claim is more than just the trivial-in-retrospect observation that fermions = determinant = easy, since you also need to know how to sum the exponentially-many squared determinants that arise when you measure. And for whatever it’s worth, almost all the physicists to whom I’ve explained the “noninteracting fermions in P” claim have been incredulous about it! Their intuition comes from quantum Monte Carlo, where bosons are the easy case and fermions are the hard case.)

    Whatever the answer to the above, I agree that the matchcircuits∈P result is much less impressive by itself than when seen as an outgrowth of Valiant’s whole oeuvre. Without Valiant, we wouldn’t even have the basis for believing permanent is harder than determinant (and as a consequence, noninteracting bosons are harder than noninteracting fermions) in the first place…
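
    To make the asymmetry concrete, here is a toy sketch (my own illustration, not taken from any of the papers above, with an arbitrary random unitary standing in for a real network): for particles that enter and leave in distinct modes, the noninteracting-fermion transition amplitude is the determinant of a submatrix of the mode unitary (up to an overall sign from mode ordering), while the noninteracting-boson amplitude is the permanent of the same submatrix.

        import numpy as np
        from itertools import permutations

        def permanent(A):
            # Brute-force permanent of a (small) square numpy array.
            n = A.shape[0]
            return sum(np.prod([A[i, s[i]] for i in range(n)])
                       for s in permutations(range(n)))

        rng = np.random.default_rng(0)
        M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
        U, _ = np.linalg.qr(M)              # a random 4-mode unitary

        ins, outs = [0, 1], [2, 3]          # two particles: input modes -> output modes
        A = U[np.ix_(outs, ins)]            # the relevant 2x2 submatrix

        fermion_amp = np.linalg.det(A)      # noninteracting fermions: a determinant
        boson_amp   = permanent(A)          # noninteracting bosons: a permanent

    Determinants scale gracefully as the number of particles grows; permanents are exactly the #P-hard objects this post is about, and that asymmetry is the boson/fermion gap in miniature.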

  10. matt Says:

    People can get confused about the difference between real and imaginary time, yes (QMC pretty much works well for imaginary time and can in some cases be analytically continued to real time, but the continuation is often very hard to do without encountering numerical instability). I’m not sure why you say it’s different from the observation that fermions = determinant, since if the Hamiltonian is quadratic, then any expectation value which is a product of fermion operators can be calculated via Wick’s theorem as a determinant (going to the Heisenberg representation). That also gives you the ability to sample, if desired, with a little more work, but that is a separate discussion.

    Agreed that matchgates are an example of particular weaknesses in certain models. I see the theory of NP-completeness (or any other theory which works using reductions from one problem to another) as basically a way of saying that “this problem almost certainly does not have a very large weakness, as if it did then so would all these other problems”, which is indeed opposite to the viewpoint that one might find a weakness in a large class of models.

    Interestingly, Onsager’s work was not actually the first example of solving an integrable model. Bethe beforehand had solved the Heisenberg spin-1/2 chain (although many steps were done later by others to further develop the Bethe ansatz, and I don’t know much about the history or actual results). And it wasn’t the last example: there are many interesting models with names like Baxter, Lieb, etc… which have been exactly solved. So, of course one can ask whether these solutions give rise to nontrivial algorithms, just as Onsager’s work gives rise to certain polynomial time algorithms for computing partition functions. Alas, it seems that (as far as I can tell from talking to my colleagues who actually understand integrable systems in 2d), these models are only exactly solvable with uniform interactions (or perhaps interactions which vary in space in certain particularly simple ways), not with arbitrary spatially varying interactions, so they cannot be viewed in the same way as corresponding to a network of gates where each gate can be chosen arbitrarily so long as it belongs to a particular class. But perhaps there are some examples where these tools do give rise to new algorithms that I just don’t know about yet.

  11. Raoul Ohio Says:

    Jonas,

    Great lead about bios of mathematicians, arranged as a periodic table. I picked the last element, Lr, which turned out to be about Lagrange. Here is a snippet:

    “In 1771, he proved Wilson’s theorem, that n is prime if and only if (n-1)! + 1 is divisible by n.”

    I agree with Lagrange that 1 is a prime number.
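
    For fun, a minimal check of the criterion exactly as quoted (my own throwaway snippet): taken literally, it does let 1 through, which is presumably the sense in which Lagrange and I agree.

        from math import factorial

        def wilson_prime(n):
            # Wilson's criterion exactly as quoted: n divides (n-1)! + 1.
            return (factorial(n - 1) + 1) % n == 0

        print([n for n in range(1, 20) if wilson_prime(n)])
        # -> [1, 2, 3, 5, 7, 11, 13, 17, 19]  (1 passes; every other entry is prime)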

  12. A number theorist Says:

    Raoul Ohio, mathematical definitions can change over time.
    According to the nowadays-usual definition(s) of a prime number, 1 is not a prime number. This is not a matter of opinion, but a fact. (A matter of opinion is whether the current definitions are better or worse than alternatives.)

    There are various concrete reasons why the current definition is the way it is. Yet, in the end, it comes down to this: the general theory, essentially nonexistent in Lagrange’s time, is ‘nicer’ that way, as I believe somebody explained to you recently.

    For example, it is overall simply more convenient to classify the elements of a unique factorization domain, such as the integers, into: 0, the multiplicatively invertible elements (for the integers, 1 and -1), prime elements (defined so that they do not include invertible elements), …

  13. Ungrateful_person Says:

    My understanding of holographic algorithms is limited, but I still feel Valiant’s question makes sense.

    After all, these (allow me the word) crazy-looking algorithms can solve problems whose minor variations are #P-complete. Typically, we come very close to solving the actual #P-complete problem and hit a dead end only when we discover that no matchgate can realize the required signature.

    But still, from what I have gathered so far, I guess the theory of signatures of these matchgates is not yet complete, and certainly, IMHO, it can be a fruitful avenue for research.

    So my question is: why is the theory community not very excited about the theory of realizable signatures? Moreover, what is wrong with Valiant’s belief that any proof of P \neq NP may need to explain, via this approach, why holographic algorithms fall short?

    I would like to know the reason (in quantum-computing-free jargon, perhaps) why we are not inclined to believe Valiant’s assertion.

  14. matt Says:

    Sure, minor variants of matchgates are #P-complete. But 2-SAT is in P (actually, it can be done in linear time, if you are allowed to manipulate integers in O(1) time), while MAX-2-SAT is a “minor variation” and is NP-complete. And it is “surprising” that 2-SAT is in P, since if you just tried every possible solution it would take exponential time. Similarly, Dijkstra’s shortest path algorithm is a fast way to find the shortest path between 2 points (again, “surprising” given that trying every path takes exponential time; see the sketch at the end of this comment). But if we make the “minor variation” that you look for the shortest path going to every point once, we are at travelling salesman. So it is actually a very common phenomenon that if you solve a problem by exploiting structure, then any minor variant of that structure can kill your solution. The same happens all the time in physics, too, where a minor change to an exactly solvable problem can be hopeless. So matchgates (ahem, the “Onsager/Kaufman/Kasteleyn/Fisher solution of dimer covering and the Ising model”) are just another case of this, most likely.

    I agree any correct proof of P \neq NP would need to avoid proving that the problems solved by matchgates are hard. Similarly, a correct proof of P \neq NP can’t prove that 2-SAT is hard (and it seems like a lot of proofs fail this test). But again, this is just another case of the fact that there are lots of fast, clever algorithms and proving that a problem is hard is a hard thing to do.
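
    Here is the Dijkstra example from above as an actual sketch (standard textbook code of my own, nothing specific to matchgates): the greedy priority-queue structure it exploits is exactly the kind of structure that evaporates the moment you ask instead for a shortest tour through every vertex.

        import heapq

        def dijkstra(graph, source):
            # graph maps u -> list of (v, weight) edges with nonnegative weights
            dist = {source: 0}
            heap = [(0, source)]
            while heap:
                d, u = heapq.heappop(heap)
                if d > dist.get(u, float("inf")):
                    continue                      # stale queue entry
                for v, w in graph.get(u, []):
                    nd = d + w
                    if nd < dist.get(v, float("inf")):
                        dist[v] = nd
                        heapq.heappush(heap, (nd, v))
            return dist

        g = {"a": [("b", 1), ("c", 4)], "b": [("c", 1)], "c": []}
        print(dijkstra(g, "a"))   # {'a': 0, 'b': 1, 'c': 2}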

  15. Ungrateful_person Says:

    @matt

    Thanks for sharing your thoughts. I guess I see your point. However, I still feel that holographic algorithms are kinda cool, maybe because of the surprising way in which they work (and I would also accept that their being cool is not the point you take issue with; you do not buy into their being any help as far as NP-hard problems are concerned, and I see why).

    Use matchgates with appropriate signatures, invoke the Holant theorem, and game over.

  16. Mitchell Porter Says:

    Speaking of spin lattices and complexity theory… arxiv:1102.3452 looks deep. Let me see if I can summarize it:

    They (Aluffi and Marcolli) bound the algorithmic complexity of the set of real zeroes of the partition function of a Potts model on a graph (and show that this complexity increases exponentially with the size of the graph), by using high-powered motivic mathematics (recently employed in the formal theory of QFT renormalization) to compute a topological property of the set of zeroes. This will then have implications for understanding phase transitions in these Potts models (but they don’t say exactly what, e.g. they don’t say that calculating the phase diagram will also be exponentially hard).

    In the present context, the paper caught my eye because the partition function of the Potts model is the Tutte polynomial, and this is one of those functions which is #P-hard to compute except for a few special cases, like Onsager’s algorithm. Also because the paper is employing very advanced algebraic concepts. There may be a “motivic complexity theory” waiting to be discovered, somewhere above and beyond Mulmuley’s “geometric complexity theory”.

  17. Elmer Gantry Says:

    Hey Scott, that new paper of yours isn’t half bad. But what I’m wondering about is why you say “we” instead of “I” when you are the sole author. Isn’t that convention a little outdated? Forget modesty. Take the glory that is yours by right.

  18. Scott Says:

    Elmer: It’s funny, we hadn’t even consciously realized we were using that convention! 🙂

    If you look at (say) my thesis, you’ll see that I used to hate that convention and ignore it whenever possible—moved by Mark Twain’s remark that the only individuals who should call themselves “we” are royalty and people with tapeworms. Over time, though, I came to find it charming, much like the related tradition of referring to yourself by last initial only when giving a talk (so that, for example, I’d mention a result of [A.-Gottesman 2004] on a slide). There are so few antiquated traditions we still keep, we might as well cherish the ones we have. And if the traditions seem too modest to you, I assure you that the modesty is almost never sincere. 🙂 (Though sometimes even false modesty is better than no modesty.)

  19. KnowledgeSeeker Says:

    I am from electrical engineering, so I do not know much about the jargon. But I do know the permanent is hard to calculate. Supposing someone finds a way to calculate the permanent efficiently, does that mean P=NP? What are the consequences of such a result for complexity theory and cryptography?

  20. KnowledgeSeeker Says:

    And of course for quantum computing. If you can give some toy examples for each case, it would be great :)!!

  21. Scott Says:

    KnowledgeSeeker: Yes, Valiant proved that the permanent is #P-complete, and NP reduces to #P. So if someone finds a way to calculate the permanent efficiently, that would immediately imply P=NP. The consequences of such a result for complexity theory and cryptography would be to render both fields obsolete. 🙂
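
    (If it helps, here is a toy illustration, purely my own, of the easy link in that chain—the fact that counting subsumes deciding: given the power to count satisfying assignments of a formula, the NP question “is there at least one?” answers itself. The counting here is brute force, of course; its only purpose is to show the logical relationship, not to be an efficient algorithm.)

        from itertools import product

        def count_sat(clauses, n_vars):
            # #SAT by brute force.  A clause is a list of nonzero ints:
            # literal k means variable |k| is True if k > 0, False if k < 0.
            count = 0
            for assignment in product([False, True], repeat=n_vars):
                if all(any((lit > 0) == assignment[abs(lit) - 1] for lit in clause)
                       for clause in clauses):
                    count += 1
            return count

        clauses = [[1, 2], [-1, 2]]          # (x1 OR x2) AND (NOT x1 OR x2)
        print(count_sat(clauses, 2))         # 2 satisfying assignments
        print(count_sat(clauses, 2) > 0)     # True: the NP question, answered by counting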

  22. KnowledgeSeeker Says:

    That is what I don’t get. The permanent of an all-ones square matrix of size n is n!. Say someone computes this in polynomial time, which is n^k time. Then how can one factor a number that has 100-300 digits? Wouldn’t that still be 10^{100k} time?

  23. Scott Says:

    KnowledgeSeeker: No. If you had a fast algorithm for the permanent, then you could use it to get a fast algorithm for factoring also! That statement is the end result of combining two reductions: from factoring to satisfiability, and from satisfiability to the permanent. You shouldn’t expect to understand it right away (that’s what makes the #P-completeness of the permanent a great result, and not just some trivial observation).

    If you want to understand why the permanent is #P-complete (and what that means), then google for course lecture notes, read the Wikipedia page, read the book Computational Complexity by Christos Papadimitriou, or (for an alternative, quantum computing proof) try my recent paper.

  24. KnowledgeSeeker Says:

    Thank you very much. I will read the most accessible material that you have quoted given my limited knowledge in the field.

  25. KnowledgeSeeker Says:

    I was looking at your paper. It makes more sense now with the introduction you have given :)!

    BTW, someone (a math prof) told me that if RSA is broken, then people may go for ECC. So does that mean “permanent is easy” does not imply “ECC is easy”? In that case one has ECC to fall back on, correct?

  26. Scott Says:

    No, if permanent is easy then every public-key cryptosystem is breakable. Do you see the logical flaw in what you wrote? Breaking RSA wouldn’t necessarily imply a break of ECC, but breaking RSA by computing the permanent would certainly imply a break of ECC.

  27. KnowledgeSeeker Says:

    Thank you. It is very enlightening. So if the permanent is easy, then no one-way functions exist.

    So is the converse known?

    For example, if someone breaks RSA (through factoring) or discrete log, would that provide evidence that the permanent is easy?

    I had the following question in mind as well. I found that the permanent is easy to approximate. Does that provide any help in finding an easier way to invert one-way functions such as multiplication of two primes?

  28. Braydon Says:

    A good question, and yes, there is an even better answer, and it is yes. A mismatch between the number of lepton generations and quark generations would mean that there are uncancelled “gauge anomalies”. This is a fancy mathematical object describing quantum corrections to symmetries – quantum effects that would inevitably violate the SU(3) x SU(2) x U(1) gauge symmetry even though it naively “has to hold”. These anomalies are given by triangle Feynman diagrams. For example, the sum of Q^3, the cubed electric charge, over all 2-component left-handed spinors in your theory has to vanish. It vanishes if you add up one generation of quarks and one of leptons (or 3+3), but it doesn’t cancel for leptons or quarks separately. If gauge anomalies were nonzero and survived, it would mean that the timelike components of the photon or other gauge bosons couldn’t be “decoupled”, and they could be created out of initial particles – with probabilities that would be negative in many cases.