For weeks, I’ve been meaning to blog about an important recent paper by Stephen Jordan, Keith Lee, and John Preskill, entitled Quantum Algorithms for Quantum Field Theories. So I’m now doing so.
As long as I’ve been in quantum computing, people have been wondering aloud about the computational power of realistic quantum field theories (for example, the Standard Model of elementary particles). But no one seemed to have any detailed analysis of this question (if there’s something I missed, surely commenters will let me know). The “obvious” guess would be that realistic quantum field theories should provide exactly the same computational power as “ordinary,” nonrelativistic quantum mechanics—in other words, the power of BQP (the class of problems solvable in polynomial time by a quantum computer). That would be analogous to the situation in classical physics, where bringing in special relativity dramatically changes our understanding of space, time, matter, and energy, but seems (unlike quantum mechanics) to have little or no effect on which computational problems can be solved efficiently. Analogously, it would seem strange if quantum field theories (QFTs)—which tie together quantum mechanics, special relativity, and detailed knowledge about the elementary particles and their interactions, but seen from far enough away are “just” quantum mechanics—forced any major revision to quantum computing theory.
Until now, though, there seems to have been only one detailed analysis supporting that conclusion, and it applied to (2+1)-dimensional topological QFTs (TQFTs) only, rather than “realistic” (3+1)-dimensional QFTs. This was the seminal work of Freedman, Kitaev, and Wang and Freedman, Larsen, and Wang in 2000. (Six years later, Aharonov, Jones, and Landau gave a more computer-science-friendly version, by directly proving the BQP-completeness of approximating the Jones polynomial at roots of unity. The latter problem was known to be closely related to simulating TQFTs, from the celebrated work of Witten and others in the 1980s.) To a theoretical computer scientist, dropping from three to two spatial dimensions might not sound like a big deal, but what’s important is that the relevant degrees of freedom become “topological,” making possible a clean, simple model of computation. For “realistic” QFTs, by contrast, it wasn’t even obvious how to define a model of computation; putting realistic QFTs on a rigorous mathematical footing remains a notorious open problem.
In their new paper, Jordan, Lee, and Preskill say that they give an algorithm, running on a “conventional” quantum computer, to estimate scattering probabilities in a class of QFTs called “continuum φ⁴ theories.” Their algorithm uses time polynomial in the number of incoming particles in the scattering experiment and in their total energy, and inversely polynomial in the desired precision ε and in the distance |λ − λc| between the QFT’s coupling constant λ and a phase transition λc. (In d=2 spatial dimensions, they say the dependence on the precision scales like (1/ε)^2.376, the 2.376 coming from matrix multiplication. Naturally, that should now be amended to (1/ε)^2.373.) To develop their algorithm, Jordan et al. apparently had to introduce some new techniques for coping with the error incurred by discretizing QFTs. No classical algorithm is known with similar scaling—so when suitably formalized, the “QFT simulation problem” might indeed be in BQP \ BPP, matching the uninformed doofus intuition of complexity theorists like me. Jordan et al. don’t say whether the problem they’re solving is also BQP-complete; I imagine that could be a topic for future research. They also don’t say whether their precision parameter ε bounds the variation distance between the real and simulated output distributions (rather than just the differences between probabilities of individual scattering outcomes); I hope they or someone else will be able to clarify that point.
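For readers who haven’t internalized why that last distinction matters: when there are many possible scattering outcomes, bounding each outcome’s probability error by ε is a far weaker guarantee than bounding the total variation distance by ε. Here’s a toy illustration (entirely mine, not from the paper — the distributions are made up for the example):

```python
def total_variation(p, q):
    """Total variation distance: half the L1 distance between two distributions."""
    return 0.5 * sum(abs(pi - qi) for pi, qi in zip(p, q))

n = 1000      # number of possible outcomes
eps = 0.001   # per-outcome error bound

# "True" distribution: uniform over n outcomes.
p = [1.0 / n] * n

# "Simulated" distribution: every outcome is within eps of the truth,
# but the errors all conspire in the same direction.
q = [1.0 / n + eps if i < n // 2 else 1.0 / n - eps for i in range(n)]

max_outcome_err = max(abs(pi - qi) for pi, qi in zip(p, q))
tv = total_variation(p, q)
print(max_outcome_err)  # 0.001 -- each individual probability is estimated well
print(tv)               # 0.5   -- yet the distributions are wildly different overall
```

So if ε only bounds individual outcome probabilities, the simulated experiment’s overall statistics could still be badly off — which is why one would want the variation-distance guarantee.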
In case it isn’t obvious yet, let me make it crystal-clear that I lack the physics background to evaluate Jordan et al.’s work in a serious technical way. All I can say with confidence is that the small number of people who (1) have the requisite background and (2) care about computational complexity will probably spend non-negligible time discussing and understanding this paper in the weeks and months to come.
Conflict-of-Interest Warning: At a deep, subconscious level, I probably chose to blog about Jordan et al.’s paper not for any legitimate scientific reason, but simply because I know John Preskill and Stephen Jordan personally, and, despite being physicists, they’re both tremendously respected colleagues who’ve made many outstanding contributions to quantum computing theory besides this one. Then again, everything I’ve ever done—and everything you’ve ever done—has probably had such unsavory hidden motives as well, so who’s counting? In all of history, there have only been ten or twenty people whose commitment to scientific objectivity has been absolute and pure, and since they comment on complexity blogs anonymously, we’ll probably never even know their names…