The Blog of Scott Aaronson If you take just one piece of information from this blog: Quantum computers would not solve hard search problems instantaneously by simply trying all the possible solutions at once.

I arrived in Tempe, Arizona yesterday for a workshop on “The Nature of the Laws of Physics,” kindly hosted by Paul Davies’ Beyond Center. I’m treating this as a much-needed end-of-semester vacation—with warm desert air, eccentric personalities, talks without theorems, and the sort of meandering philosophical debate I get inexplicably cranky if I haven’t had for a month. Just one problem: I was hoping Cosmic Variance’s Sean Carroll would arrive to provide much-needed positivist reinforcement against the gangs of metaphysical ruffians, but the California Clarifier backed out—leaving the remaining skeptics to dodge relentless volleys of ill-posed questions only three hours’ drive from the O.K. Corral.

My graduate course 6.896 Quantum Complexity Theory ended last week, with ten amazing student project presentations. Thanks so much to the students, and to my TA Yinmeng Zhang, for making this a great course (at least for me). Almost all of the scribe notes are now available on the course website. But be warned: not only did I not write these notes, not only did I not edit them, for the most part I haven’t read them yet. Use entirely at your own risk.

Want to do graduate study in quantum information at MIT? Yes? Then my colleague Jeff Shapiro asks me to point you to the new website of iQUiSE, our Interdisciplinary Quantum Information Science & Engineering program (motto: “Further Depleting the Supply of Quantum Funding-Related Acronyms Containing the Letters Q and I”). If you’re interested, you apply to a traditional department (such as physics, math, EECS, or mechanical engineering), but specify in your application that you’re interested in iQUiSE. The application deadline is today—but if for some strange reason 17 hours isn’t enough to write your application, there’s always another year.

Dmitry Gavinsky asks me to throw the following piece of meat to the comment-wolves: What exactly should count as a “new” quantum algorithm?

This entry was posted
on Monday, December 15th, 2008 at 10:10 am and is filed under Complexity, Mirrored on CSAIL Blog, Quantum.
You can follow any responses to this entry through the RSS 2.0 feed.
Both comments and pings are currently closed.

Dmitry Gavinsky asks me to throw the following piece of meat to the comment-wolves: What exactly should count as a “new” quantum algorithm?

This is a very good question that in my view is not asked often enough. Newness is never quite rigorous for any type of result in mathematics or computer science; the best you can do is general principles. The basic principle in this case is that your ideal audience is already reasonably well-versed in classical algorithms. A quantum algorithm is then “new” if (a) it is faster than what is classically available or possible; and (b) it isn’t a variation that a classically versed audience will easily think of.

It’s easy to feel that you have a new quantum algorithm when it could just be another use of Grover search, for example. I’ve seen papers that read that way. That said, it is also common enough not to publish a good result just because you mistakenly thought that it was well-known. It doesn’t hurt to ask the right people whether in fact you have a new algorithm.

Now and then someone finds a clever new application of a standard quantum algorithm. For instance, there is a paper by Kedlaya which I rather like in which he points out that Shor’s algorithm can count the points on any algebraic curve over a finite field if the curve is in a standard form. This was known for elliptic curves because those are abelian groups. Even though the quantum part is not new, the algorithm makes the point that the implications of the QFT-style quantum algorithms go beyond analyzing groups.

I am being perfectly serious, Scott. The fact that you vituperate my answer is very telling of a serious weakness in the field of quantum algorithms. If pure philosophers had been entrusted with the task of building modern personal computers, they would still be sighing about the amazing complexity issues of the abacus.

Supposing a quantum algorithm has been written up as software, why does that make it new? You can code up old algorithms too! It’s a completely different question, and you know that as well as I do.

On the other hand, the ability to actually code up quantum algorithms would have a huge effect on the development of quantum algorithms. Is the quantum adiabatic algorithm actually useful? Who knows? Some special cases yes, some no, but for many cases there are powerful classical heuristics too. The ability to test it out would clear a lot of things up. Loopy belief propagation is a powerful classical heuristic, and I don’t think it would have been developed if not for fast computers. LDPC codes, which rely on loopy BP for decoding, were initially ignored because even in the ’60s computers weren’t fast enough.
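The point above about decoders and compute can be made concrete with a toy sketch. The parity-check matrix and parameters below are made up for illustration (a real LDPC matrix is large and sparse), and the decoder shown is Gallager-style bit flipping, a cheaper cousin of loopy BP; even this simple decoder needs iterated per-bit work, which is exactly what 1960s hardware couldn’t afford.

```python
import numpy as np

# Toy parity-check matrix for a 6-bit code (invented for illustration;
# not a real LDPC code).
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])

def bit_flip_decode(received, H, max_iters=10):
    """Gallager-style bit flipping: repeatedly flip the bit that sits in
    the most unsatisfied parity checks (a cheap cousin of loopy BP)."""
    word = received.copy()
    for _ in range(max_iters):
        syndrome = H @ word % 2        # which parity checks are violated
        if not syndrome.any():
            break                      # all checks satisfied: done
        votes = syndrome @ H           # per-bit count of violated checks
        word[np.argmax(votes)] ^= 1    # flip the worst offender
    return word

# All-zeros codeword with bit 0 flipped by the channel:
print(bit_flip_decode(np.array([1, 0, 0, 0, 0, 0]), H))  # -> [0 0 0 0 0 0]
```

The same message-passing structure, with probabilities in place of votes, gives the sum-product (loopy BP) decoder.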

“Supposing a quantum algorithm has been written up as software, why does that make it new? You can code up old algorithms too! It’s a completely different question, and you know that as well as I do.”

Obviously, I meant it is new the moment it is *first* written into software.

This reminds me of the question of how we define a new human. Is the embryo a human the day the husband and wife decide to have a baby, or is it a human at least a few weeks after fertilization?

Greg says that a new quantum algorithm is something that (a) is faster than what is classically available or possible; and (b) isn’t a variation that a classically versed audience will easily think of.
Let me be radical and suggest that only superpolynomially faster should count; this would avoid some “implementation issues” mentioned in later posts.

Suppose I find a new class of groups where the Hidden Subgroup Problem (HSP) can be solved using new classical postprocessing of some previously used quantum measurement (in fact, I already did so once). Would that be a new quantum algorithm?

If not, on the grounds that the problem is classically reducible to a previously known quantum algorithm, then there has been no new quantum algorithm since the universality of the quantum Turing machine was established.

If it is, on the grounds that the problem cannot be solved classically, then right away I can give new quantum algorithms: for any natural k>0, let SHOR+k be the algorithm that, given N, efficiently computes p+k, where p is the smallest prime factor of N.
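Spelling the SHOR+k construction out in code also makes its triviality plain: all the quantum content lives in the factoring call, and the “+k” is classical postprocessing. A sketch, with a slow classical trial-division stand-in where Shor’s algorithm would go:

```python
def smallest_prime_factor(N):
    """Classical trial-division stand-in for the factoring subroutine;
    on a quantum computer, Shor's algorithm would supply this step
    efficiently."""
    d = 2
    while d * d <= N:
        if N % d == 0:
            return d
        d += 1
    return N  # N itself is prime

def shor_plus_k(N, k):
    """Dmitry's tongue-in-cheek 'SHOR+k': the smallest prime factor of N,
    plus k. The '+ k' is trivial classical postprocessing -- which is
    exactly the point of the example."""
    return smallest_prime_factor(N) + k

print(shor_plus_k(15, 3))  # -> 6  (smallest prime factor 3, plus 3)
```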

Let me be radical and suggest that only superpolynomially faster should count; this would avoid some “implementation issues” mentioned in later posts.

That seems too strict to me. The fastest classical algorithm for the hidden subgroup problem on the symmetric group runs in time O(n!^{1/2}). The fastest known quantum algorithm is, I think, O(n!^{1/3}), although you might be able to shave the exponent to 1/3 - o(1) by using a large abelian subgroup of the symmetric group. (Not sure about that.) If you found an algorithm that runs in time O(n!^{1/4}), that would be a good quantum algorithm in my view. At the very least, I would find the result interesting.
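To see how much daylight there is between the two scalings quoted above, here is a quick numerical comparison (pure query-count asymptotics with the exponents as stated in the comment; constants and polynomial factors are ignored):

```python
import math

# Compare the n!^(1/2) and n!^(1/3) scalings quoted above.
for n in (8, 12, 16):
    fact = math.factorial(n)
    half, third = fact ** 0.5, fact ** (1 / 3)
    print(f"n={n:2d}  n!^(1/2) ~ {half:9.3g}  n!^(1/3) ~ {third:9.3g}  "
          f"ratio ~ {half / third:.3g}")
```

The ratio between the two grows like n!^{1/6}, so even a constant improvement in the exponent is a superpolynomial speedup over the previous algorithm.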

I agree that the phrase “runs in time” needs to be clarified. If you like, you can use circuit size. (Not circuit depth.) Circuit size is related to running time on a RAM machine or running time on a tree-tape machine.

If it is, on the grounds that the problem cannot be solved classically, then right away I can give new quantum algorithms: for any natural k>0, let SHOR+k be the algorithm that, given N, efficiently computes p+k, where p is the smallest prime factor of N.

Well yes, that’s the sort of silly variation that we can all think of that doesn’t count as a new quantum algorithm.

But see Kedlaya’s paper. That’s a much more interesting example where it is just Shor’s algorithm, but employed for a new purpose.

I hope those 17 hours assume the statement of purpose is already written… otherwise I’ll conclude that either nobody reads those essays, or there is a magic way of writing good statements.

This is how Paul Davies thinks quantum information works (from New Scientist, 30 June 2007, available online):

As the number of particles goes up, so the number of defining parameters escalates. A state with 400 entangled particles blows the Lloyd limit – it requires more bits of information to specify it than exist in the entire observable universe. If one takes seriously the inherent uncertainty in the laws implied by Lloyd’s limit, then a noticeable breakdown in fidelity should manifest itself at the level of 400 entangled particles.

Surely Lloyd’s limit is entirely compatible with conventional QM, and it’s a misunderstanding to suggest that it implies that anything unusual is going to occur with a highly entangled many-particle state?
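For concreteness, the arithmetic behind the “400 entangled particles” figure is a one-liner, taking Lloyd’s oft-quoted ~10^120 as the universe’s total computational capacity. Note that needing 2^400 amplitudes to *describe* a state classically is entirely consistent with conventional QM; nothing in the formalism requires the description to fit in the universe.

```python
from math import log10

# A pure state of 400 entangled two-level particles is specified by
# 2**400 complex amplitudes.
amplitudes = 2 ** 400
print(f"2^400 ~ 10^{log10(amplitudes):.1f}")  # ~ 10^120.4
print(amplitudes > 10 ** 120)                 # -> True: past Lloyd's ~10^120
```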

I certainly agree that it’s worth empirically testing what happens to a state with 400 entangled particles. Maybe the reality is that something new and interesting will take place, in violation of the predictions of conventional QM. But that’s a completely separate issue.

Dmitry and Scott ask: What exactly should count as a “new” quantum algorithm?

That’s a perfect question to ask on a blog.

Let’s consider whether a good answer—with “good” meaning “helps create good math, good science, good engineering, and good jobs”—might be the following: “A quantum algorithm is any mathematical operation on complex state-spaces that returns more information, for less time and space resources, than one would expect.”

The above answer is not original; it is adapted from the Wikipedia article “Rigidity (mathematics)”.

From this expanded point of view, a fine example of a new quantum algorithm is Plenio and Virmani’s Upper bounds on fault tolerance thresholds of noisy Clifford-based quantum computers (arXiv:0810.4340). Gee … who wouldn’t want to call their interesting simulation method a “new quantum algorithm”?

The practical advantage of embracing this broad definition of “quantum algorithm” is that it encourages quantum information researchers to explore the still-mysterious mathematics of quantum simulation, which the chemists and condensed matter physicists have been pioneering, without always understanding (in any deep mathematical sense) why it is that their simulation methods work so well.

If anyone has got a definition of “quantum algorithm” that is more likely to create good mathematics and good jobs than the above … well … please post it!

Also, best wishes for happy holidays to all, and special thanks to Scott for hosting this fine weblog. 🙂

I think of a distinct algorithm more as an approach or strategy than as a specific implementation, where a new algorithm is one whose “evolution” at runtime on a set of inputs isn’t parallel to that of an existing one, even if it’s slower when applied to a particular problem. One where, if we could graph the execution of each algorithm on a set of inputs, the graphs of the new and the old would have nothing to do with each other.

Identifying whether an algorithm is distinct from or equivalent (in approach) to another across a range of inputs (i.e., that it gives not only the same answer but operates in an equivalent way) sounds like an NP-complete problem (but I use this comparison pretty freely).

Greg E.: Paul is, of course, flat-out wrong about the holographic entropy bound being incompatible with conventional QM. I had a long email debate with him about this, which I won’t recapitulate here. 🙂

I certainly agree that it’s worth empirically testing what happens to a state with 400 entangled particles.

But there have already been many tests with many kinds of entanglement, not only with 400 entangled particles but with billions of particles. It just isn’t computationally useful entanglement.

When you ponder the credibility of quantum computation, it is very important to understand that: (a) Entanglement is description-dependent. (b) It is a long-established practice to seek a quantum-mechanical change of basis that describes physical systems without entanglement, and sometimes this change of basis is highly creative. (c) Anyonic systems are a moral exception. The standard description is somewhat analogous to no entanglement, but entanglement is never really eliminated.

In other words, when people ask “What would massive entanglement look like?”, the question that they may think they are asking is, “What does Sanskrit text that is not English look like?”, but the question that they are actually asking is, “What does Sanskrit text that cannot be translated to English look like?” To analogize the other points above, (b) the translation can be highly non-trivial, and (c) there are a few cases where the only credible translation is to a variation of English.

Also, let me add a remark about Tree-BQP, something I tried to explain to Scott many years ago but never managed to state clearly. Any reasonable censorship hypothesis that disallows massive entanglement, or that otherwise rules out QC, would have to be stable with respect to a plausible class of changes of basis in quantum mechanics. As far as I can tell, Tree-BQP does not have that stability property: A tree state in one commonly used basis in condensed matter physics does not have to be a tree state in another commonly used basis in condensed matter physics. So in that respect, Tree-BQP has arguably been a straw man all along.

In fact, this is a general reason that skepticism of QC is a tough game. You would like to say that certain quantum states are impossible (without prohibitive noise), not just certain descriptions of quantum states. Outlawing descriptions doesn’t make sense. I have never seen a viable proposal to outlaw some description-independent class of quantum states, enough to prevent QC, and yet keep states that people have already made.

Greg Egan says: I certainly agree that it’s worth empirically testing what happens to a state with 400 entangled particles.

… and Greg Kuperberg replies: But there have already been many tests with many kinds of entanglement, not only with 400 entangled particles but with billions of particles. It just isn’t computationally useful entanglement.

I agree with both comments, and it seems to me that they are well-suited to blog discussion. After all, in the words of the Federalist Papers:

The plan offered to our deliberations affects too many particular interests, innovates upon too many local institutions, not to involve in its discussion a variety of objects foreign to its merits, and of views, passions and prejudices little favorable to the discovery of truth.

… To judge from the conduct of the opposite parties, we shall be led to conclude that they will mutually hope to evince the justness of their opinions, and to increase the number of their converts, by the loudness of their declamations and the bitterness of their invectives.

Inspired by the above words of Hamilton, Madison, and Jay, here are some remarks relating to the “passions and prejudices” of quantum entanglement.

Chemists and condensed matter physicists have little doubt as to the reality of quantum entanglement, and nowadays they don’t scruple to numerically simulate systems of 400 (or more) thoroughly quantum-entangled particles (e.g., the electrons in a medium-sized drug molecule).

In doing so, they confidently expect to predict physical quantities (e.g., ground-state energies) to a precision of order 10^-4. This is strong evidence for the physical reality of large-N quantum-entangled systems (as Greg Kuperberg notes).

However, chemists and condensed matter physicists don’t compute their simulations on large-dimension Hilbert spaces, but rather compute on small-dimension tensor network state-spaces (here “tensor networks” is the name that is being assigned, by an emerging impromptu consensus, to a class of mathematical objects that is known to many disciplines by many names)—this is how they evade the “curse of dimensionality”.
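A minimal sketch of the core compression move may help here; the sizes and the toy “weakly entangled” state below are invented for illustration. Split a state into two halves, take the SVD of its amplitude matrix (i.e., the Schmidt decomposition), and keep only a few Schmidt vectors. Tensor-network methods essentially iterate this step across a chain of particles.

```python
import numpy as np

rng = np.random.default_rng(0)
dA = dB = 64  # dimensions of the two halves (say, 6 qubits each)

# Toy weakly entangled state: a rank-3 piece plus a little noise.
psi = sum(rng.normal() * np.outer(rng.normal(size=dA), rng.normal(size=dB))
          for _ in range(3))
psi = psi + 1e-3 * rng.normal(size=(dA, dB))
psi /= np.linalg.norm(psi)

# The Schmidt decomposition across the A|B cut is just the SVD of the
# amplitude matrix.
U, s, Vh = np.linalg.svd(psi, full_matrices=False)

chi = 3  # "bond dimension": number of Schmidt vectors to keep
approx = (U[:, :chi] * s[:chi]) @ Vh[:chi]
overlap = abs(np.vdot(psi, approx)) / np.linalg.norm(approx)
print(f"kept {chi} of {len(s)} Schmidt values; overlap with psi = {overlap:.6f}")
```

Storage drops from dA·dB amplitudes to chi·(dA+dB+1) numbers, which is the evasion of the “curse of dimensionality” mentioned above, and it works precisely when the Schmidt spectrum decays fast.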

It is not at present understood why tensor network spaces work so well … there are recent reviews by (e.g.) Charles van Loan and Frank Wilczek that discuss this challenge.

This leads us to ask, from a physics point of view: Are (linear) Hilbert state-spaces really necessary? Perhaps Nature herself is using tensor network spaces, for the same reason as chemists … namely, computational efficiency. After all, is string theory even defined on a Hilbert state-space? (hey, don’t ask me!)

From a mathematical point of view, there are several schools of thought. People who like to wire together qubit gates are fond of large-dimension Hilbert state-spaces (it being infeasible to analyze qubit algorithm performance on any other state-space). On the other hand, algebraic geometers are fond of low-dimension compact manifolds (particularly algebraic Kahler manifolds), and are perfectly willing to regard them as dynamical state-spaces.

For this reason, there is a growing trend for algebraic geometers to make common cause with chemists, condensed matter physicists, and string theorists, and all of these communities are now working—in a more-or-less unified way—on non-linear tensor network quantum state-spaces.

IMHO, the quantum computing people will eventually make common cause with the tensor network state-space coalition, for the simple reason that the most interesting mathematical feature of Hilbert space is that it has room for so many nonlinear state-spaces to be immersed within it.

What will happen to the notion of a “quantum algorithm”? Very likely, it will be generalized in ways that we cannot as yet easily foresee, but that, hopefully, we will all welcome.

Guys, let me clarify. I wasn’t talking about some deep philosophy when I asked my question.

In classical CS, the last (and only) time I’ve seen somebody looking for “new algorithms” per se was myself at the age of 11-12. Back then I had learnt some basics of programming, which made me so excited that I was readily reading virtually anything related to programming.

These days many researchers of quantum computing behave just the way I did. Almost every speaker on the subject keeps asking for “new quantum algorithms”. Unlike classical computing, which is, to a certain degree, run by goals and therefore develops top-down, modern quantum computing is going pure bottom-up, where everything starts from “hey, I have a new quantum algorithm” — Sorry, you have what??

Dmitry Gavinsky says: Modern quantum computing is going pure bottom-up, where everything starts from “hey, I have a new quantum algorithm” — Sorry, you have what??

Dmitry, it is true that in some venues “I have a new quantum algorithm” means “I have proved a mathematical theorem about computation on quantum state-spaces.” But in other venues, when system engineers and chemists say “I have a new quantum algorithm” they mean “I have working code and am using it to design hardware and molecules.”

This relates to the slang phrase “pure-itician” that appeared recently on Lance Fortnow’s web-log: what “pure-iticians” mean by “quantum algorithm” is more specialized than what system engineers and chemists typically mean.

Fundamentally, though, everyone is talking about the same things: compressed representations and simulations—and so we all have a lot to learn from one another. 🙂

Dmitry, you’re putting your finger precisely on why I went into quantum complexity theory rather than quantum algorithms. Personally, I’ve always been terrible at the research strategy “first look for hammers, then look for nails to hit with them.” I always, always start with the nails. That is, I start with the problem I want to solve, then look for a way to solve it. If you asked me to invent a “new” algorithm—doesn’t matter what problem it solves, all that matters is that the algorithm is “new”—I wouldn’t know where to start. This is not to say others haven’t had great success with the hammers-first strategy (have they? what are some good examples?); it’s just a statement about me.

Scott, what you’re saying is, I believe, the only possible meaning of doing science, much less ambiguous than that of “new quantum algorithms” 🙂

There is an old joke in Russian, quite common among scientists; let me take the risk of translating it: Science is a way to satisfy one’s curiosity at the government’s expense. A clever government (are there any today?) should be looking for parasites like that, more than anything else.

Scott asks: This is not to say others haven’t had great success with the hammers-first strategy (have they? what are some good examples?)

Grothendieck famously declined to hit any nail for which he had not yet created a sufficiently beautiful hammer:

One does not attack a problem head-on, but one envelopes and dissolves it in a rising tide of general theories … (Serre quoting Grothendieck)

This philosophy led Grothendieck to levels of abstraction (in tensor product topology, for example) that are exceedingly challenging for mere mortals (like me) even to glimpse.

I would argue that statistical mechanics already provides an excellent test of the existence of highly entangled states. The basic idea of stat mech is that when you can’t solve for what a system is doing, assume it’s doing something random. So, the gas molecules in a room become randomly located, with some slight modification of their positions due to interactions. If you give up the tensor product structure of Hilbert spaces, this is going to change all of the basic rules of stat mech, simply because the space of allowed states has changed dramatically. It’s not clear that there is a class of states with limited entanglement that would let even, for example, the ideal gas law come out right.

Matt Says: The basic idea of stat mech is that when you can’t solve for what a system is doing, assume it’s doing something random.

For classical systems, and also for closed quantum systems, that idea leads to ergodic theorems.

But for open quantum systems the Liouville-type invariant measures that define ergodic theory generalize to become Kraus representations, which possess a gauge-type invariance.

Thus in place of ergodic theorems, we have “litodic” theorems (from the Greek litos+odo “simple path”, compare to ergo+odo “work path”). Physically speaking, these theorems describe quantum dynamical systems for which simpler-than-expected descriptions can be found by exploiting this informatic gauge invariance.

For several weeks I have been collecting litodic-type theorems, which, if you keep your eyes open for them, turn out to exist already in large numbers. The above-referenced Plenio and Virmani article is a good example, and it seems likely that many more litodic theorems remain to be discovered.

Indeed, from an informatic point of view, doing quantum physics would scarcely be feasible without these theorems.

I would assume that Shor’s factoring algorithm came hammers first. My guess is that the thought sequence was: Simon’s algorithm -> order finding -> discrete log -> factoring. Someone should ask Peter. If so this would obviously be an excellent example of good science that came from a hammers first approach.

The quantum algorithm for approximating Jones polynomials seems like another good example. It must have started from the idea that quantum computers can simulate topological quantum field theories, and then progressed via the connection between Jones polynomials and Chern-Simons theory.

Based on common sense it is clear that these two algorithms are interesting and worthwhile. So I can’t agree that the nails first approach is “the only possible meaning of doing science.”

Stephen, I am not sure regarding the evolutionary processes behind the examples you mentioned, but in general, there is no doubt that the “hammer approach” can be fruitful.

But we all should decide for ourselves, right? Unfortunately, I am a very selfish person; to me the only point in doing research is enjoying myself. Good results are rare for everybody — of course, good researchers hold higher standards for “good results” than I do, but whatever they are, such standards must guarantee that good results are not a daily business.

So, looking for fun and, at the same time, keeping the right self-measures, we have no choice but to enjoy the process, and not only the good results at the end. Somehow, I enjoy myself pursuing a quest, and much less wandering around with a big hammer looking for a new victim 🙂

It is not at present understood why tensor network spaces work so well … there are recent reviews by (e.g.) Charles van Loan and Frank Wilczek that discuss this challenge.

Could you please post more detailed references? I would be interested in reading up on this topic.

Greg: Could you elaborate a little on “A tree state in one commonly used basis in condensed matter physics does not have to be a tree state in another commonly used basis in condensed matter physics.”? I thought that tree states were simply those elements of (C^2)^{\otimes N} whose tensor rank was polynomial in N. That’s clearly invariant under any basis change in S_N semidirect U_2^N. (I.e., any transformation which sends particles to particles.)

If I’m wrong about what tree states are, might I suggest polynomial-rank tensors as an alternate Sure/Shor separator?

Martin says: Could you please post more detailed references [on tensor product networks in large-scale quantum simulation]

A good start is an older Charles van Loan article, The ubiquitous Kronecker product, J. Comput. Appl. Math., 123(1-2), pp. 85-100 (2000).

More recently, the SIAM Journal on Matrix Analysis and Applications, Volume 30, Issue 3 (2008), is a special issue on tensor product decompositions.

Van Loan’s web site also has a good on-line talk Tensor Network Computations in Quantum Chemistry; this is readily found via Google.

Frank Wilczek’s recent review Particle physics: Mass by numbers, Nature 456, 27 November 2008, p. 449-50, references Verstraete and Cirac Renormalization algorithms for quantum-many body systems in two and higher dimensions, arXiv:cond-mat/0407066.

In most of the above work, the reliance on Choi-type invariance in Kraus representations is implicit rather than explicit; nonetheless, that is the mathematical foundation upon which all of this work depends. It is well covered in Ch. 8 of Nielsen and Chuang.

Our quantum system engineering web-site (www.mrfm.org) has a one-page abstract for nonspecialists, prepared for the 50th Annual ENC meeting, titled Analyzing the Quantum Limits to Magnetic Resonance Microscopy: Insights from Claude Shannon, John von Neumann, and Richard Feynman, that shows some practical applications of these ideas.

Bottom line: there is a flood of work in large-scale quantum simulation, but so far, nothing comparably rigorous to the conceptual foundations of ergodic theory—due largely to Birkhoff and von Neumann—has been created. Heuristically, though, the conceptual foundations that everyone uses are Choi’s theorem, tensor-network states, and choosing the right informatic gauge. This approach often works amazingly well, for reasons that no one, as yet, fully understands.

Stephen, I think we need to consider a third situation: there are certain famous nails that lots of people have been trying to hit and agree are worth hitting. A new hammer is invented. Someone tries it out on the famous nails and, after considerable effort, succeeds in hitting one.

Factoring seems like an excellent example of this (though less so the Jones polynomial). I think it’s safe to say that, if we’re holding a brand-new hammer that might work on a famous nail, any of us (even dedicated hammers-firsters like me) are likely to take a swing.

(Likewise for nails that we failed to hit on earlier occasions and that still annoy us. In both cases, the key point is that we already knew about these nails—we didn’t go out looking for them.)

Scott says: I think we need to consider a third situation: there are certain famous nails that lots of people have been trying to hit and agree are worth hitting.

… and this of course suggests an (exceedingly fun) fourth class of nails: nails that lots of people agree are impossible to hit, but that subsequently turn out, with new approaches, to be eminently “hittable.”

The history of mathematics and science affords plenty of celebrated examples of “class-four nails.”

One example is the chemical composition of distant stars, which for centuries was offered by philosophers as an example of the kind of question that science would never be able to answer … until this “fourth-class nail” was pounded down by the advent of spectroscopy.

In more recent times, an example of a class-four nail is the opinion that large-scale quantum systems are generically intractable to simulate by classical means. For example, a superficial reading of Feynman’s 1982 lecture Simulating physics with computers seems to assert that classical simulation of large-scale quantum systems is an “un-hittable nail.”

But to paraphrase Gen. Turgidson: “Mr. President, I would say that the quantum chemists and condensed matter physicists have already invalidated that policy.”

Of course, a more careful reading of Feynman’s 1982 lecture shows that he never asserts that the large-scale quantum simulation nail is unhittable. A more carefully nuanced assessment was given by Nielsen and Chuang in 2000:

Sometimes, insightful approximations can be made that reduce the effective number of equations to be solved, making classical simulation of the quantum system feasible. However, there are many interesting quantum systems for which no approximations are known.

The subsequent eight years have seen remarkable progress in quantum simulation science—the above-referenced Plenio and Virmani article being just one example—which has revealed that the class of efficiently simulatable quantum systems is far larger than was previously appreciated. This is the point of view that the closing paragraphs of Frank Wilczek’s review espouse, for example.

An important question is, is this progress in quantum simulation the result of pounding in a lot of small, independent nails? Or are we all pounding on the same big nail?

In the 20th century, ergodic theory turned out to be (in retrospect) one big nail. My own opinion is that in the 21st century, large-scale quantum simulation will similarly come to be regarded as one big nail, one that will likewise rest upon elegant mathematical foundations.

People sometimes ask “Is quantum information theory going to deliver big, practical breakthroughs?” A good answer is that pounding down the nail of large-scale quantum simulation is one such big breakthrough, and it is ongoing right now.

Martin, one more (really fun) reference is the one that launched my own interest: a Mathematical Intelligencer article by Mohlenkamp and Monzon, titled Trigonometric identities and sums of separable functions, v27(2) pp. 65-69 (2005).

I was reading this article for fun—pretty much every article in the Mathematical Intelligencer being fun to read—and was dumbfounded to discover that Mohlenkamp-Monzon computer codes were isomorphic to our QSE Group’s quantum simulation codes.

“Why is this”, I wondered, “when Mohlenkamp and Monzon are searching the state-space of algebraic identities, and our QSE Group is searching the state-space of simulated quantum trajectories?”

Of course, Mohlenkamp and Monzon (and Beylkin) were asking themselves similar “big nail” questions, with the result that they are presently applying their ideas in large-scale ab initio quantum chemistry calculations.
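The flavor of the Mohlenkamp–Monzon observation fits in a few lines: a trigonometric addition formula *is* a low-rank “sum of separable functions,” which is exactly what the rank of a sampled-function matrix detects. (The grid sizes below are arbitrary.)

```python
import numpy as np

# sin(x + y) = sin(x)cos(y) + cos(x)sin(y): a rank-2 sum of separable
# functions, of the kind Mohlenkamp and Monzon study.
x = np.linspace(0, 2 * np.pi, 50)
y = np.linspace(0, 2 * np.pi, 60)

full = np.sin(x[:, None] + y[None, :])      # sample sin(x+y) on a grid
separable = (np.outer(np.sin(x), np.cos(y))
             + np.outer(np.cos(x), np.sin(y)))

print(np.allclose(full, separable))          # -> True
print(np.linalg.matrix_rank(full))           # -> 2
```

This is the same structure the quantum-simulation codes exploit: a two-variable function (or bipartite state) stored as a short sum of products of one-variable factors.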

Matrix product/tensor product algorithms in 2d are one of the most important things going on in science right now. However, it’s not accurate to claim that the accuracy in 2d is anywhere close to that in 1d. These algorithms are all extremely promising, but there’s a big difference between that situation and the situation in the early days of DMRG. One could just tell, without any math, that White’s algorithm was _right_ because the numbers were ridiculously accurate. In 2d, there is nothing at the same level of accuracy (14 decimal places in the energy on a ’90s workstation!) for general strongly interacting systems.

On the other hand, it’s also not correct to claim that in 1d we lack understanding of the math underlying the success of DMRG. The accuracy is related to bounds on the entanglement as expressed via Renyi entropy (Verstraete and Cirac), CFT gives accurate calculations of these bounds in many situations (many people), and we have rigorous general upper bounds for ground states of gapped systems (Hastings). For time-dependent situations, CFT explains the slow growth of entanglement entropy with time under local perturbations, while under global quenches the entanglement entropy grows linearly and everything becomes intractable fairly rapidly. There are a few interesting open math questions in 1d: is finding a ground state of a gapped system NP-complete or not (we know it is in NP)? What exactly is the worst possible asymptotic scaling of entanglement entropy with the gap? Etc. But basically, in 1d the math is pretty much completely understood, and is on as good a footing as classical stat mech is.

Now, in 2d, we need better algorithms and better math and they will probably arise together.
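To make concrete why the 1d case is so tractable: an n-site matrix product state stores one small matrix per site per local basis state, and any single amplitude is just an ordered product of n of those matrices, computable in O(n) small matrix multiplications rather than by touching 2^n amplitudes. A minimal sketch (random, unnormalized tensors; the bond dimension D and the helper name are made up for illustration):

```python
import numpy as np

def mps_amplitude(tensors, bits):
    """Amplitude <bits|psi> of a matrix product state.

    tensors[i] has shape (2, D_left, D_right): one matrix per local
    basis state of site i.  The amplitude is the ordered product of
    the n selected matrices -- O(n) small matrix multiplications,
    never a 2^n-dimensional vector.
    """
    acc = tensors[0][bits[0]]
    for t, b in zip(tensors[1:], bits[1:]):
        acc = acc @ t[b]
    return acc[0, 0]  # boundary bond dimensions are 1

# Example: n = 20 sites, bond dimension D = 4
rng = np.random.default_rng(0)
D, n = 4, 20
tensors = ([rng.normal(size=(2, 1, D))]
           + [rng.normal(size=(2, D, D)) for _ in range(n - 2)]
           + [rng.normal(size=(2, D, 1))])
amp = mps_amplitude(tensors, [0] * n)
```

The 2d analogues (PEPS and friends) lose exactly this property: contracting the network for a single amplitude is no longer a cheap one-dimensional sweep.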

That’s clearly invariant under any basis change in S_N semidirect U_2^N. (I.e., any transformation which sends particles to particles.)

That’s to the point, David. Many commonly used transformations in condensed matter physics do not send particles to particles. Otherwise, there would never be such a thing as a quasiparticle.

A good example of this is a phonon, which you see all the time in solid-state physics. If you have a crystal, the change of basis from (H_atom)^{tensor N} to the phonon basis is very far away from the transformation group that you write.

It’s difficult to reject phonons as artificial (ghost of Kronecker) and yet accept photons as natural.

Another set of examples is that certain types of existing qubits, for example a Josephson-junction qubit, require a highly creative change of basis from ab initio particles in order to express the qubit states.

Because of physical examples such as these, the class of tree states does seem like something of a straw man.

Constructing a straw man was exactly my goal! 🙂 If people don’t like it, I welcome them to construct a flesh-and-blood man, or at least a man of sturdier straw. The motivation was that I needed some man (straw or otherwise) to put in the “skeptical” chair for the debate I wanted to start about the validity of QM for “complicated” systems. Quantum computing skeptics kept telling me they were sure quantum mechanics would break down for “complicated” states of hundreds of entangled particles (as opposed to the “simple” states for which QM has already been confirmed). (Indeed, Paul Davies just repeated that same argument to me yesterday.) But then when I challenged them to define “complicated,” they always retreated into word soup. Hence the need for a straw man to argue against. Think of it like Ralph Nader debating a parrot.

Scott: First of all I think that the paper is a reasonable result in complexity theory, even if I think that Tree-BQP is a straw man at the level of physics. But I thought that in this respect, your paper (or maybe your talk that I attended) was incomplete. It could have been said that a plausible sure-Shor separator is one that outlaws a class of states, and not a class of descriptions of states, because that would be erratic.

It’s fine to debate a straw man, but for context you should summarize every major reason that you see straw, in addition to making your personal debate points.

I have been a little bit baffled by this talk of tree-states, straw-men, and sure-Shor separators. Finally I figured out that these terms (seemingly) refer to Scott’s abstract Are Quantum States Exponentially Long Vectors? (which a Google search will find on Scott’s web site). Is this right?

Assuming so, our system engineering experience suggests a reasonably robust sure-Shor separator ansatz can be constructed as follows.

In the first equation of Scott’s abstract (the equation that defines a tree state) simply modify the constraint “no Σ are allowed” to the more lax constraint “N Σ are allowed”. We then take the point of view that N is a constant of nature.
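As I read John’s proposal (the notation below is mine, not Scott’s or John’s), allowing N Σ’s amounts to restricting Nature’s states to Kronecker rank at most N:

```latex
|\psi\rangle \;=\; \sum_{i=1}^{N} c_i\,
  |v_{i,1}\rangle \otimes |v_{i,2}\rangle \otimes \cdots \otimes |v_{i,n}\rangle ,
```

where each |v_{i,j}⟩ is a single-qubit state; N = 1 gives unentangled product states, and sufficiently large N recovers all of Hilbert space.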

In the language of algebraic geometry—Joe Harris’ Algebraic Geometry being a suitable text—the quantum state-space of Nature is then a join of monomial varieties having Kronecker rank N and an induced Kähler metric.

Because we can permute the ⊗ symbols freely, Nature’s quantum state-space has no particular dimensionality or spatial ordering built into it; it is therefore at the same time less efficient and more general than the matrix product states discussed earlier in this thread.

Now we suppose that Bob lives in a universe containing (say) 400 qubit-spins, all coupled via random dipole-dipole magnetic interactions, and without loss of generality we take the mean-square interaction energy to be unity per-spin.

Crucially, we suppose in addition that Bob’s universe is noisy (maybe from magnetic radiation left over from the big-bang), and that the associated relaxation time is (say) 10. Physically speaking, the purpose of this latter condition is to ensure that Bob’s 400 spins are not carrying out a Shor factorization (or any other nontrivial quantum computation).

On the other hand, low-order quantum entanglement is perfectly feasible for Bob to generate and observe, since each spin is associated with (it can be shown) 2N real-valued dynamical coordinates.

Then we will assert that Bob cannot readily distinguish whether he lives in a large-dimension (linear) Hilbert space or in a low-dimension rank-N tensor product space. The physical reason is that the ambient magnetic noise left over from the big bang prevents Bob from ever implementing the quantum computation algorithms that would reveal the difference. And the mathematical reason is that the universal noise can be modeled (via Choi’s theorem) as an equivalent continuous measurement process that compresses Hilbert trajectories onto algebraic manifolds, so that most of a large-dimension Hilbert space need not be used anyway.

Now suppose we turn off the noise, but leave Bob stuck in his low-dimension state-space. It seems plausible (to me at least), that Bob will never notice the difference—he will continue to ascribe the failure of his quantum computations to technical noise.

In this world, it might take a long time for people to realize that unanticipated noise in their quantum computers was a law of nature induced by the nontrivial geometry of quantum state-space.

Of course, even if Bob’s state-space is Hilbert, it might be very useful, for system engineering purposes, that Bob is able to efficiently simulate (noisy) spin dynamics, with high fidelity, on low-dimension algebraic state-spaces.

Perhaps Bob is building quantum spin microscopes (like we are) and wishes to reduce the technical risks, and speed the pace, of this enterprise. In such cases the ability to efficiently simulate noisy systems has great practical value.

This is why we quantum system engineers take an interest in these fundamental issues, and seek to learn all we can of the mathematics and information theory involved.

Plus, it’s fun! Deep math, amazing physics, great engineering, profound information theory. If we are lucky, we live in a world in which atomic-resolution spin microscopy and large-scale quantum computing are both becoming feasible.

As a brief followup to the above—and to save Scott the trouble—let me say that Scott has a (really excellent and highly readable) preprint on his web site, titled Multilinear Formulas and Skepticism of Quantum Computing, that is to appear in the SIAM Journal on Computing.

Fortunately for my peace of mind, Scott’s Multilinear Formulas analysis doesn’t consider noise (at least, the word “noise” doesn’t appear), while in contrast, our QSE simulation methods rely (crucially) on the presence of noise.

Aye, lasses and laddies, there you have a major difference between a theoretician and an engineer!

Scott’s article closes with a wonderfully thought-provoking list of topics for further investigation. I would suggest, as an addition to Scott’s list, the noise-related fundamental question: What generalized formulation(s) of Choi’s Theorem (specifically Theorem 8.2 of Nielsen & Chuang) apply in a tensor network universe?
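For readers without Nielsen & Chuang at hand, the theorem being invoked says that every completely positive trace-preserving map admits an operator-sum (Kraus) form E(ρ) = Σ_k E_k ρ E_k†, with Σ_k E_k†E_k = I. A minimal numerical sketch for the standard single-qubit depolarizing channel (textbook Kraus operators; nothing here is specific to tensor networks):

```python
import numpy as np

# Pauli matrices
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def depolarizing_kraus(p):
    """Kraus operators of the single-qubit depolarizing channel."""
    return [np.sqrt(1 - 3 * p / 4) * I2,
            np.sqrt(p / 4) * X,
            np.sqrt(p / 4) * Y,
            np.sqrt(p / 4) * Z]

def apply_channel(kraus, rho):
    """Operator-sum form: E(rho) = sum_k E_k rho E_k^dagger."""
    return sum(E @ rho @ E.conj().T for E in kraus)

kraus = depolarizing_kraus(0.3)
# Trace preservation: sum_k E_k^dagger E_k must equal the identity.
completeness = sum(E.conj().T @ E for E in kraus)

rho_in = np.array([[1, 0], [0, 0]], dtype=complex)   # |0><0|
rho_out = apply_channel(kraus, rho_in)
```

The open question above is what replaces this operator-sum structure when the state-space is not a linear Hilbert space but a tensor network manifold.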

This boils down to a physical question: does the usual quantum prohibition against non-causal communication still apply in tensor network state-spaces?

Enforcing this prohibition would seem to require delicate adjustments in both relativity and quantum physics (which is what everyone seems to think is required anyway).

The point being, it’s one thing to simulate non-causal communication, quite another thing to implement it! 🙂

Since comments here are slowing, I’ll close with a couple of references relating to the concluding challenge of Scott’s article:

Is there a practical method to compute the tree size of, say, 10-qubit states? Such a method would have great value in interpreting experimental results.

This is a problem that the ab initio quantum chemists have been routinely solving since the 1970s. See the recent review by Chinnamsetty, Espig, Khoromskij, Hackbusch, and Flad, Tensor product approximation with optimal rank in quantum chemistry (BibTeX appended).

Appreciating this work is not easy, since the chemists often are more focused on binding energies than qubits, and on code performance than theorem-proving.

For the upcoming SQuINT Conference in Seattle, we translated these ideas from quantum chemistry into the language of QIT and coordinate-free geometry, with special attention to the fundamental role of Choi’s Theorem.

The resulting writeup appears on our http://www.mrfm.org web site as Five promises from geometric quantum mechanics for efficient quantum simulation.

A code library is included, which even on a small computer can handle tree-states for quantum systems of order 500 spin-qubits having Kronecker rank 100. This library exploits the factorization structure of n-spin tree states to implement matrix-vector multiplications in O(n) operations instead of the naive O(n^2).

Historically speaking, O(n) matrix-vector multiplication algorithms have always played a major role in large-scale computation; these algorithms typically exploit either natural sparsity or natural factorization of a state-space; the latter is the case with respect to tree-states.
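As a toy illustration of the factorization point (this is just the generic Kronecker-product identity, not the actual library from the writeup): for A of size m×m and B of size p×p, the product (A⊗B)x never requires forming the mp×mp matrix:

```python
import numpy as np

def kron_matvec(A, B, x):
    """Compute (A kron B) @ x without forming the Kronecker product.

    With x of length m*p reshaped (row-major) into an m x p matrix X,
    (A kron B) x = vec(A X B^T).  Cost O(m*p*(m+p)) instead of the
    naive O(m^2 * p^2) of an explicit dense matvec.
    """
    m, p = A.shape[0], B.shape[0]
    X = x.reshape(m, p)
    return (A @ X @ B.T).reshape(-1)

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3))
B = rng.normal(size=(5, 5))
x = rng.normal(size=15)
y = kron_matvec(A, B, x)
```

The same trick, applied factor by factor, is what makes matvecs cheap on n-fold tensor product structures.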

Geometrically speaking, we have the idea that the musical isomorphism between vectors and one-forms is generically easy to compute on tree states, and this fast isomorphism greatly speeds practical quantum simulation calculations.

Even with these results in hand, I think it won’t be easy, and will take quite a while, for the quantum chemists, the QIT theorists, and the quantum system engineers to accept that fundamentally we are all pounding on the same “big nail”, and that mathematical tools from algebra, geometry, information theory, and complexity theory are all going to play more-or-less equal roles.

This broadening means more work for everyone, but it also means more fun, and more mathematical depth, and more opportunities for young people. Which is good!

——

@article{citename, Title = {Tensor product approximation with optimal rank in quantum chemistry}, Author = {S. R. Chinnamsetty and M. Espig and B. N. Khoromskij and W. Hackbusch and H.-J. Flad}, Journal = {The Journal of Chemical Physics}, Number = {8}, Pages = {084110}, Volume = {127}, Year = {2007}}

anonymous: I took it out. Somehow, I sleep better knowing people at least have to click a link to find out what others are saying about me. 🙂 I apologize for any inconvenience.

I do not know how this thread also got into QC skepticism:

Scott wrote: “The motivation was that I needed some man (straw or otherwise) to put in the “skeptical” chair for the debate I wanted to start about the validity of QM for “complicated” systems. Quantum computing skeptics kept telling me they were sure quantum mechanics would break down for “complicated” states of hundreds of entangled particles (as opposed to the “simple” states for which QM has already been confirmed). ”

Is this really true? I know that some skeptics made this claim, but isn’t it a more common view among skeptics that complicated states cannot be created because of noise?

(Also, why “hundreds”? Some skeptical claims will come into play for a handful of qubits…)

“(Indeed, Paul Davies just repeated that same argument to me yesterday.) But then when I challenged them to define “complicated,” they always retreated into word soup.”

My own views are very much aligned with Gil Kalai’s.

Perhaps a relevant lesson from history is that Gauss—working as a civil engineer—attempted (seriously) to observe departures from Euclidean geometry in large-scale surveying, just as we presently attempt to observe departures from Euclidean quantum mechanics in large-scale physical dynamics.

In both cases, the looked-for phenomenon was defined only vaguely: tiny departures from spherical geometry in Gauss’ case, tiny departures from linear quantum superposition in the present case.

In both cases, the immediate practical challenge was noise in measurements, sufficient to mask the (tiny) expected effects.

In both cases, the required revision of physical theory was far more radical than was originally foreseen: the geometry of classical state-space needed to be treated dynamically in Gauss’ case, and (perhaps) the geometry of quantum state-spaces needs to be treated dynamically in the present case.

In both cases, it was not obvious that departures from classical values could occur in a logically consistent way (geometric consistency was the tough issue in Gauss’ case; causal and informatic consistency are the tough issues in the present case).

In both cases, the development of stronger physical theories awaited the development of stronger physical insights and stronger mathematical tools. In Gauss’ case, progress awaited the physical principle of relativity as embodied in Riemannian geometry, and in the present case, progress awaits the physical principle of (????) as embodied in the mathematics of (????).

So just fill in the (????), and your achievements surely will be as celebrated as those of Gauss and Einstein.

Thus, in asking us to fill in the (????), Scott has set the bar *very* high … which is why this is a truly great weblog topic!

Another (less dramatic) lesson from history is that the physical principle of (????) and the mathematical principle of (????)—whatever they turn out to be—are likely to have considerable value in practical engineering. So perhaps it is not a complete waste of time, to try to do some practical engineering along the way (as Gauss did).

My own best candidate for the physical principle of (????) is Choi’s Theorem (which is Thm. 8.2 of Nielsen & Chuang), but as for its generalized mathematical expression in (????), there I have no clear ideas (except “just start coding”) … but perhaps some other Shtetl Optimized reader does. 🙂

John, I am sure this is not needed for most readers, but can you elaborate on the Gauss story you refer to? (I am not familiar with it.) Scott, I see you are really serious about causality just going from the past to the future 🙂

On the main topic of the thread, regarding new quantum algorithms: 1) In most of scientific computing, moving from an n log n algorithm to a linear algorithm would be sensational progress, theoretically and practically. Is there some work on quantum speed-ups for these and other “low-complexity” algorithms? 2) One quantum algorithm fantasy that I (and others) have (which never got off the ground) would be to come up with a quicker quantum algorithm for linear programming or LP-type problems. For example: consider the discrete n-dimensional cube and a unique-sink acyclic orientation of its edges. (This means that for every subcube there is a unique sink.) The question is how quickly you can find the sink. Classical random-walk-based algorithms are exponential (the best known is exp(sqrt n)), and any quantum or classical improvement would be great. 3) I am sure that the quantum subroutine allowing sampling of the Fourier transform of an n-variable function in polynomial time will have many additional applications.
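To make Gil’s item (2) concrete, here is a toy sketch of the unique-sink problem (entirely my own illustration: the function names are made up, and the linear special case below is precisely the easy case; the hard open question concerns general unique-sink acyclic orientations accessed only through edge-orientation queries):

```python
import random

def linear_uso(n, seed=0):
    """Weights of a generic linear objective f(v) = sum_i w[i]*v[i].
    Orienting each cube edge toward the endpoint with smaller f gives
    the simplest acyclic unique-sink orientation: every subcube's sink
    is its f-minimum, unique for generic weights."""
    rng = random.Random(seed)
    return [rng.choice([-1, 1]) * rng.uniform(0.5, 1.5) for _ in range(n)]

def f(w, v):
    return sum(wi * vi for wi, vi in zip(w, v))

def out_neighbors(w, v):
    """Neighbors reachable along outgoing (f-decreasing) edges."""
    out = []
    for i in range(len(v)):
        u = list(v)
        u[i] = 1 - u[i]
        if f(w, u) < f(w, v):
            out.append(tuple(u))
    return out

def find_sink(w, v):
    """Follow any outgoing edge until none remain; that vertex is the
    sink.  For this linear toy the walk takes at most n steps; the open
    question is whether anything (quantum or classical) beats
    exponential time when only the orientation itself can be queried."""
    nbrs = out_neighbors(w, v)
    while nbrs:
        v = nbrs[0]
        nbrs = out_neighbors(w, v)
    return v

n = 10
w = linear_uso(n)
start = tuple(random.Random(1).choice([0, 1]) for _ in range(n))
sink = find_sink(w, start)
```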

Hi Gil! A very thorough and readable account of Gauss’ practical work in surveying and geodesy, and its relation to his fundamental investigations in the foundations of geometry, is given in Dunnington’s Carl Friedrich Gauss: Titan of Science (recently reprinted by the Mathematical Association of America).

Specially for this post, I did a literature search and found that there are diverse opinions regarding the role that these practical geodetic investigations played in Gauss’ research (see the appended BibTeX references.)

IMHO, the best single article to read is Gauss’ own article Disquisitiones generales circa superficies curvas, which is available on-line in English translation as General Investigations of Curved Surfaces.

This celebrated article introduced the concept of what is now called Gaussian curvature, and established by Gauss’ Theorema Egregium (his name for the theorem) that this curvature is an intrinsic property of surfaces. Gauss thought so highly of the Theorema Egregium that he published it multiple times.

An additional reason that the Disquisitiones generales is well worth reading for students is that in it Gauss pretty much invents the modern style and idioms of academic writing (“it follows”, “we then have”, “it can be shown”, etc.). Hey, someone had to invent the idioms that nowadays every mathematician, scientist, and engineer uses … students might as well learn these idioms by studying the classics.

Oh yeah … did Gauss’ practical surveying work strongly influence his mathematical work in non-Euclidean geometry? Undoubtedly. Did Gauss envision, not only that the earth’s surface was curved, but that 3D space might be curved too? And did he experimentally test this idea? There is no definitive evidence either way. My own opinion, based on reading the Disquisitiones generales, is that Gauss found no evidence for 3D spatial curvature in his geodetic data, and in keeping with his general conservatism in such matters, declined to publish, as a mere speculation, a mathematical possibility of which he was well-aware.

Nowadays, of course, things are different, and vast herds of speculative theories roam the landscape. 🙂

To summarize: very roughly speaking, attempts to construct highly entangled states can be viewed as analogous to Gauss’ geodetic triangulations. In both cases, the technical difficulties are immense. In both cases, the practical applications are important (and pay the bills). And in both cases, the local structure always looks Euclidean/Hilbert, but it is mathematically and physically possible for the large-scale structure to significantly depart from linear extrapolation of the small-scale structure.

—-

@article{Breitenberger:1984qy, Author = {Breitenberger, Ernst}, Journal = {Archive for History of Exact Sciences}, Number = {3}, Pages = {273–289}, Title = {Gauss’s geodesy and the axiom of parallels}, Volume = {31}, Year = {1984}}

@article{1972, Author = {Miller, Arthur I.}, Journal = {Isis}, Number = {3}, Pages = {345–348}, Title = {The Myth of Gauss’ Experiment on the Euclidean Nature of Physical Space}, Volume = {63}, Year = {1972}}

Greg: Alright, I reinstated the Recent Comments widget, just further down on the sidebar. That seemed like a reasonable compromise between readers’ desire to stick any sign they want on my front lawn, and my desire to sleep. 🙂

Grothendieck describes two styles in mathematics. If you think of a theorem to be proved as a nut to be opened, so as to reach “the nourishing flesh protected by the shell”, then the hammer and chisel principle is: “put the cutting edge of the chisel against the shell and strike hard. If needed, begin again at many different points until the shell cracks–and you are satisfied”. He says:

“I can illustrate the second approach with the same image of a nut to be opened. The first analogy that came to my mind is of immersing the nut in some softening liquid, and why not simply water? From time to time you rub so the liquid penetrates better, and otherwise you let time pass. The shell becomes more flexible through weeks and months–when the time is ripe, hand pressure is enough, the shell opens like a perfectly ripened avocado!
“A different image came to me a few weeks ago. The unknown thing to be known appeared to me as some stretch of earth or hard marl, resisting penetration. . . the sea advances insensibly in silence, nothing seems to happen, nothing moves, the water is so far off you hardly hear it. . . yet it finally surrounds the resistant substance.” [Grothendieck 1985–1987, pp. 552–3]

“Softening the nut with water” referred to Grothendieck’s approach of building new abstractions to study a problem with.

Hi John, thanks for the further details regarding Gauss. Maybe a more direct analog for a hypothetical (????) of your post is the notion of NP-completeness, which put earlier intuitions and insights on formal grounds.

Gee, it must be “thank you” day, because I want to say a heartfelt “thank you” to ASDF for that reference to Grothendieck’s work. I was familiar with Grothendieck’s “nutshell” analogy, but Colin McLarty’s account of it is by *far* the best that I have seen. Outstanding!

One approach to establishing the relevance of Grothendieck-style mathematics to complexity theory is by way of tensor networks (a.k.a., tree states, product-sum states, Kronecker states, etc.). These networks can be alternatively understood in algebraic terms, or combinatoric terms, or geometric terms, or even (via sparse reconstruction theory, for example) information-theoretic terms.

Nowadays, we increasingly appreciate that there are strong connections among these points-of-view (for example, that the manifold of tensor network states has negative Ricci-Kähler curvature is important to the numerical tractability of quantum-state reconstruction from sparse random projections).

Just to mention, Grothendieck did write at considerable length on tensor product spaces … albeit without the advantageous informatic perspective on product spaces that modern QIT is providing. Hey … this means that our generation has at its disposal a crucial mathematical tool that Grothendieck’s generation lacked!

The Grothendieck point of view is that all of the preceding ideas can be understood as different aspects of one rising tide of theory. And perhaps this happy conception will even turn out to be true. 🙂

Comment #1 December 15th, 2008 at 11:09 am

Two things…

1. How do you pronounce “iQUiSE”?

2. I know you’re not a fan of this particular game, but it’s the Pontiff who is playing it this time.

Comment #2 December 15th, 2008 at 11:22 am

Kurt:

1. “eye-kwize” (rhymes with tie-dyes)

2. Haha, nice try! Read abstract, declined to download paper. Mood still obamalike, blood pressure normal, error left for others to find.

Comment #3 December 15th, 2008 at 11:53 am

The author doesn’t use TeX, and starts off by defining a group. (Signs 1 and 8 from Ten Signs a Claimed Mathematical Breakthrough is Wrong)

The paper also has several colorful pictures. Maybe this should also be on the list.

Comment #4 December 15th, 2008 at 11:57 am

Dmitry Gavinsky asks me to throw the following piece of meat to the comment-wolves: What exactly should count as a “new” quantum algorithm?

This is a very good question that in my view is not asked often enough. Newness is never quite rigorous for any type of result in mathematics or computer science; the best you can do is general principles. The basic principle in this case is that your ideal audience is already reasonably well-versed in classical algorithms. A quantum algorithm is then “new” if (a) it is faster than what is classically available or possible; and (b) it isn’t a variation that a classically versed audience will easily think of.

It’s easy to feel that you have a new quantum algorithm when it could just be another use of Grover search, for example. I’ve seen papers that read that way. That said, it is also common enough not to publish a good result just because you mistakenly thought that it was well-known. It doesn’t hurt to ask the right people whether in fact you have a new algorithm.

Now and then someone finds a clever new application of a standard quantum algorithm. For instance, there is a paper by Kedlaya which I rather like in which he points out that Shor’s algorithm can count the points on any algebraic curve over a finite field if the curve is in a standard form. This was known for elliptic curves because those are abelian groups. Even though the quantum part is not new, the algorithm makes the point that the implications of the QFT-style quantum algorithms go beyond analyzing groups.

Comment #5 December 15th, 2008 at 12:26 pm

I totally wimped out, it’s true. Happily, the foundations of clear-eyed empiricism are sturdy enough to be erected on a single stone.

Someone must be that stone. Why not you?

Comment #6 December 15th, 2008 at 12:38 pm

“What exactly should count as a “new” quantum algorithm?”

No brainer. One that has been written up as software. Anything else is pure philosophy.

Comment #7 December 15th, 2008 at 1:05 pm

rr, stop trolling.

Comment #8 December 15th, 2008 at 1:25 pm

I am being perfectly serious Scott. The fact that you vituperate my answer is very telling of a serious weakness in the field of quantum algorithms. If pure philosophers had been entrusted with the task of building modern personal computers, they would still be sighing about the amazing complexity issues of the abacus.

Comment #9 December 15th, 2008 at 2:06 pm

Supposing a quantum algorithm has been written up as software, why does that make it new? You can code up old algorithms too! It’s a completely different question, and you know that as well as I do.

Comment #10 December 15th, 2008 at 2:55 pm

On the other hand, the ability to actually code up quantum algorithms would have a huge effect on the development of quantum algorithms. Is the quantum adiabatic algorithm actually useful? Who knows; some special cases yes, some no, but for many cases there are powerful classical heuristics too. The ability to test it out would clear a lot of things up. Loopy belief propagation is a powerful classical heuristic, and I don’t think it would have been developed if not for fast computers: LDPC codes, which rely on loopy BP for decoding, were initially ignored, since even in the ’60s computers weren’t fast enough.

Comment #11 December 15th, 2008 at 3:49 pm

“Supposing a quantum algorithm has been written up as software, why does that make it new? You can code up old algorithms too! It’s a completely different question, and you know that as well as I do.”

Obviously, I meant it is new the moment it is *first* written into software.

This reminds me of the question of how do we define a new human. Is the embryo a human the day the husband and wife decide to have a baby, or is it a human at least a few weeks after fertilization?

Comment #12 December 15th, 2008 at 4:01 pm

So “Shor’s algorithm” should actually be called X’s algorithm, where X is the first person who will implement it?

Comment #13 December 15th, 2008 at 4:05 pm

An algorithm in its abstract form is already software, written in the language of logic, it’s just not written to target a specific architecture.

Comment #14 December 15th, 2008 at 6:26 pm

Greg says that a NQA is something that “(a) is faster than what is classically available or possible; and (b) isn’t a variation that a classically versed audience will easily think of”. Let me be radical and suggest that only superpolynomially faster counts; this would avoid some “implementation issues” mentioned in later posts.

Suppose I find a new class of groups where the Hidden Subgroup Problem (HSP) can be solved using new classical postprocessing of some previously used quantum measurement (in fact, I already did so once). Would that be a new quantum algorithm?

If not, on the grounds that the problem is classically reducible to a previously known quantum algorithm, then there has been no new quantum algorithm since the universality of the quantum Turing machine was established.

If it is, on the grounds that classically the problem cannot be solved, then right away I can give new quantum algorithms: for any natural k>0, let SHOR+k be the algorithm that, given N, efficiently computes p+k, where p is the smallest prime factor of N.

Comment #15 December 15th, 2008 at 6:32 pm

In the above post (last paragraph) I meant aleph-zero new quantum algorithms, but the aleph-zero image has not made its way to the post.

Comment #16 December 15th, 2008 at 8:08 pm

“Let me be radical and suggest that only superpolynomially faster counts, this would avoid some ‘implementation issues’ mentioned in later posts.”

That seems too strict to me. The fastest classical algorithm for the hidden subgroup problem for the symmetric group is O(n!^{1/2}). The fastest known quantum algorithm is, I think, O(n!^{1/3}), although you might be able to shave the exponent to 1/3 − o(1) by using a large abelian subgroup of the symmetric group. (Not sure about that.) If you found an algorithm that runs in time O(n!^{1/4}), that would be a good quantum algorithm in my view. At the very least, I would find the result interesting.

I agree that the phrase “runs in time” needs to be clarified. If you like, you can use circuit size. (Not circuit depth.) Circuit size is related to running time on a RAM machine or running time on a tree-tape machine.

“If it is, due to the fact that classically the problem cannot be solved, then right away I can give new quantum algorithms: for any natural k>0, let SHOR+k be the algorithm that given N, efficiently computes p+k, where p is the smallest prime factor of N.”

Well yes, that’s the sort of silly variation that we can all think of that doesn’t count as a new quantum algorithm.

But see Kedlaya’s paper. That’s a much more interesting example where it is just Shor’s algorithm, but employed for a new purpose.

Comment #17 December 15th, 2008 at 8:55 pm

“But see Kedlaya’s paper. That’s a much more interesting example where it is just Shor’s algorithm, but employed for a new purpose.”

I will. Are you talking about his “Quantum computation of zeta functions of curves”, or something else?

Comment #18 December 15th, 2008 at 10:14 pm

Hi, I have a question about iQUiSE. Does it provide summer research opportunities for graduate students? Thanks

Comment #19 December 15th, 2008 at 11:23 pm

I hope 17 hours is enough, assuming the statement of purpose is already written… otherwise I’ll assume either that nobody reads those essays, or that there is a magic way of writing good statements.

Comment #20 December 16th, 2008 at 2:32 am

Xiaodi: Yes.

Comment #21 December 16th, 2008 at 3:27 am

This is how Paul Davies thinks quantum information works (from New Scientist, 30 June 2007, available online):

Surely Lloyd’s limit is entirely compatible with conventional QM, and it’s a misunderstanding to suggest that it implies that anything unusual is going to occur with a highly entangled many-particle state?

I certainly agree that it’s worth empirically testing what happens to a state with 400 entangled particles. Maybe the reality is that something new and interesting will take place, in violation of the predictions of conventional QM. But that’s a completely separate issue.

Comment #22 December 16th, 2008 at 7:36 am

Dmitry and Scott ask: What exactly should count as a “new” quantum algorithm?

That’s a perfect question to ask on a blog.

Let’s consider whether a good answer—with “good” meaning “helps create good math, good science, good engineering, and good jobs”—might be the following: “A quantum algorithm is any mathematical operation on complex state-spaces that returns more information, for less time and space resources, than one would expect.”

The above answer is not original; it is adapted from the Wikipedia article on “Rigidity_(mathematics)”.

From this expanded point of view, a fine example of a new quantum algorithm is Plenio and Virmani's Upper bounds on fault tolerance thresholds of noisy Clifford-based quantum computers (arXiv:0810.4340). Gee … who wouldn't want to call their interesting simulation method a "new quantum algorithm"?

The practical advantage of embracing this broad definition of "quantum algorithm" is that it encourages quantum information researchers to explore the still-mysterious mathematics of quantum simulation, which the chemists and condensed matter physicists have been pioneering, without always understanding (in any deep mathematical sense) why it is that their simulation methods work so well.

If anyone has got a definition of “quantum algorithm” that is more likely to create good mathematics and good jobs than the above … well … please post it!

Also, best wishes for happy holidays to all, and special thanks to Scott for hosting this fine weblog. 🙂

Comment #23 December 16th, 2008 at 11:20 am

I think of a distinct algorithm more as an approach or strategy than as a specific implementation, where a new algorithm is one whose “evolution” at runtime on a set of inputs isn’t parallel to that of an existing one, even if it’s slower when applied to solving a particular problem. One where, if we could graph the execution of an algorithm on a set of inputs, the graph of the new and old would have nothing to do with each other.

Identifying whether an algorithm is distinct from or equivalent (in approach) to another on a range of inputs (i.e., that it gives not only the same answer but operates in an equivalent way) sounds like an NP-complete problem (but I use this comparison pretty freely).

Comment #24 December 16th, 2008 at 12:08 pm

Greg E.: Paul is, of course, flat-out wrong about the holographic entropy bound being incompatible with conventional QM. I had a long email debate with him about this, which I won’t recapitulate here. 🙂

Comment #25 December 16th, 2008 at 1:35 pm

"I certainly agree that it's worth empirically testing what happens to a state with 400 entangled particles."

But there have already been many tests with many kinds of entanglement, not only with 400 entangled particles but with billions of particles. It just isn't computationally useful entanglement.

When you ponder the credibility of quantum computation, it is very important to understand that: (a) Entanglement is description-dependent. (b) It is a long-established practice to seek a quantum-mechanical change of basis that describes a physical system without entanglement, and sometimes this change of basis is highly creative. (c) Anyonic systems are a moral exception: the standard description is somewhat analogous to no entanglement, but entanglement is never really eliminated.

In other words, when people ask "What would massive entanglement look like?", the question they may think they are asking is, "What does Sanskrit text that is not English look like?", but the question they are actually asking is, "What does Sanskrit text that cannot be translated to English look like?" To extend the analogy to the other points above: (b) the translation can be highly non-trivial, and (c) there are a few cases where the only credible translation is to a variant of English.

Comment #26 December 16th, 2008 at 3:32 pm

Also, let me add a remark about Tree-BQP, since I tried to make this point to Scott many years ago but never managed to explain it clearly. Any reasonable censorship hypothesis that disallows massive entanglement, or that otherwise rules out QC, would have to be stable with respect to a plausible class of changes of basis in quantum mechanics. As far as I can tell, Tree-BQP does not have that stability property: a tree state in one commonly used basis in condensed matter physics does not have to be a tree state in another commonly used basis in condensed matter physics. So in that respect, Tree-BQP has arguably been a straw man all along.

In fact, this is a general reason that skepticism of QC is a tough game. You would like to say that certain quantum states are impossible (without prohibitive noise), not just certain descriptions of quantum states. Outlawing descriptions doesn’t make sense. I have never seen a viable proposal to outlaw some description-independent class of quantum states, enough to prevent QC, and yet keep states that people have already made.

Comment #27 December 16th, 2008 at 3:33 pm

“What would massive entanglement look like?”

Just look out the window.

Comment #28 December 16th, 2008 at 4:17 pm

Greg Egan says:

"I certainly agree that it's worth empirically testing what happens to a state with 400 entangled particles."

… and Greg Kuperberg replies:

"But there have already been many tests with many kinds of entanglement, not only with 400 entangled particles but with billions of particles. It just isn't computationally useful entanglement."

I agree with both comments, and it seems to me that they are well suited to blog discussion. After all, in the words of the Federalist Papers:

Inspired by the above words of Hamilton, Madison, and Jay, here are some remarks relating to the "passions and prejudices" of quantum entanglement.

Chemists and condensed matter physicists have little doubt as to the reality of quantum entanglement, and nowadays they don't scruple to numerically simulate systems of 400 (or more) thoroughly quantum-entangled particles (e.g., the electrons in a medium-sized drug molecule).

In doing so, they confidently expect to predict physical quantities (e.g., ground-state energies) to a precision of order 10^-4. This is strong evidence for the physical reality of large-N quantum-entangled systems (as Greg Kuperberg notes).

However, chemists and condensed matter physicists don't compute their simulations on large-dimension Hilbert spaces, but rather compute on small-dimension tensor network state-spaces (here "tensor networks" is the name that is being assigned, by an emerging impromptu consensus, to a class of mathematical objects that is known to many disciplines by many names)—this is how they evade the "curse of dimensionality".
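The "curse of dimensionality" evasion can be made concrete with a toy sketch. The following is illustrative numpy code (my own, not from any tensor-network library mentioned in this thread): a product state of N qubits needs 2^N amplitudes as a full Hilbert-space vector but only 2N numbers in factored form, and overlaps between product states can be computed factor by factor.

```python
import numpy as np

def full_state(factors):
    """Expand per-qubit factors into the full 2**N-amplitude vector."""
    psi = np.array([1.0])
    for f in factors:
        psi = np.kron(psi, f)
    return psi

N = 20
# A product state of N qubits, stored as N two-component factors:
# 2*N numbers instead of 2**N amplitudes.
factors = [np.array([np.cos(0.1 * k), np.sin(0.1 * k)]) for k in range(N)]
other = [np.array([1.0, 0.0]) for _ in range(N)]

compressed = sum(f.size for f in factors)  # 2*N = 40 numbers
full = 2 ** N                              # 1,048,576 amplitudes

# Overlap of two product states: product of per-site overlaps, O(N) work,
# agreeing with the O(2**N) computation on the fully expanded vectors.
overlap_fast = np.prod([f @ g for f, g in zip(factors, other)])
overlap_full = full_state(factors) @ full_state(other)
print(compressed, full, np.isclose(overlap_fast, overlap_full))
```

Genuinely entangled states need more than one term in such a factorization, which is exactly where tensor-network ranks enter.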

It is not at present understood why tensor network spaces work so well … there are recent reviews by (e.g.) Charles van Loan and Frank Wilczek that discuss this challenge.

This leads us to ask, from a physics point of view: are (linear) Hilbert state-spaces really necessary? Perhaps Nature herself is using tensor network spaces, for the same reason as the chemists … namely, computational efficiency. After all, is string theory even defined on a Hilbert state-space? (Hey, don't ask me!)

From a mathematical point of view, there are several schools of thought. People who like to wire together qubit gates are fond of large-dimension Hilbert state-spaces (it being infeasible to analyze qubit algorithm performance on any other state-space). On the other hand, algebraic geometers are fond of low-dimension compact manifolds (particularly algebraic Kahler manifolds), and are perfectly willing to regard them as dynamical state-spaces.

For this reason, there is a growing trend for algebraic geometers to make common cause with chemists, condensed matter physicists, and string theorists, and all of these communities are now working—in a more-or-less unified way—on non-linear tensor network quantum state-spaces.

IMHO, the quantum computing people will eventually make common cause with the tensor network state-space coalition, for the simple reason that the most interesting mathematical feature of Hilbert space is that it has room for so many nonlinear state-spaces to be immersed within it.

What will happen to the notion of a "quantum algorithm"? Very likely, it will be generalized in ways that we cannot as yet easily foresee, but that hopefully we will all welcome.

Comment #29 December 16th, 2008 at 4:49 pm

Guys, let me clarify. I wasn’t talking about some deep philosophy when I asked my question.

In classical CS, the last (and only) time I’ve seen somebody looking for “new algorithms” per se was myself at the age of 11-12. Back then I had learnt some basics of programming, which made me so excited that I was readily reading virtually anything related to programming.

These days many researchers of quantum computing behave just the way I did. Almost every speaker on the subject keeps asking for "new quantum algorithms". Unlike classical computing, which is, to a certain degree, run by goals and therefore develops top-down, modern quantum computing is going pure bottom-up, where everything starts from "hey, I have a new quantum algorithm" — sorry, you have what??

Comment #30 December 16th, 2008 at 5:33 pm

Dmitry Gavinsky says:

"Modern quantum computing is going pure bottom-up, where everything starts from 'hey, I have a new quantum algorithm' — sorry, you have what??"

Dmitry, it is true that in some venues "I have a new quantum algorithm" means "I have proved a mathematical theorem about computation on quantum state-spaces." But in other venues, when system engineers and chemists say "I have a new quantum algorithm" they mean "I have working code and am using it to design hardware and molecules."

This relates to the slang phrase “pure-itician” that appeared recently on Lance Fortnow’s web-log: what “pure-iticians” mean by “quantum algorithm” is more specialized than what system engineers and chemists typically mean.

Fundamentally, though, everyone is talking about the same things: compressed representations and simulations—and so we all have a lot to learn from one another. 🙂

Comment #31 December 16th, 2008 at 6:05 pm

Dmitry, you're putting your finger precisely on why I went into quantum complexity theory rather than quantum algorithms. Personally, I've always been terrible at the research strategy "first look for hammers, then look for nails to hit with them." I always, always start with the nails. That is, I start with the problem I want to solve, then look for a way to solve it. If you asked me to invent a "new" algorithm—doesn't matter what problem it solves, all that matters is that the algorithm is "new"—I wouldn't know where to start. This is not to say others haven't had great success with the hammers-first strategy (have they? what are some good examples?); it's just a statement about me.

Comment #32 December 16th, 2008 at 6:47 pm

Scott, what you're saying is, I believe, the only possible meaning of doing science, and one much less ambiguous than "new quantum algorithms" 🙂

There is an old joke in Russian, quite common among scientists; let me take the risk of translating it:

"Science is a way to satisfy one's curiosity at the government's expense. A clever government (are there any today?) should be seeking out parasites like that, more than anything else."

Comment #33 December 16th, 2008 at 6:59 pm

Scott asks:

"This is not to say others haven't had great success with the hammers-first strategy (have they? what are some good examples?)"

Grothendieck famously declined to hit any nail for which he had not yet created a sufficiently beautiful hammer:

This philosophy led Grothendieck to levels of abstraction (in tensor product topology, for example) that are exceedingly challenging for mere mortals (like me) even to glimpse.

Comment #34 December 16th, 2008 at 7:00 pm

Dmitry: Joke? What’s the joke?

Comment #35 December 16th, 2008 at 11:02 pm

I would argue that statistical mechanics already provides an excellent test of the existence of highly entangled states. The basic idea of stat mech is that when you can’t solve for what a system is doing, assume it’s doing something random. So, a collection of gas molecules in a room become randomly located, with some slight modification of their positions due to interaction. If you give up the tensor product structure of Hilbert spaces, this is going to change all of the basic rules of stat mech, simply because the space of allowed states has now changed dramatically. It’s not clear that there is a class of states with limited entanglement that would let even, for example, the ideal gas law come out right.

Comment #36 December 17th, 2008 at 12:48 am

Matt Says:

"The basic idea of stat mech is that when you can't solve for what a system is doing, assume it's doing something random."

For classical systems, and also for closed quantum systems, that idea leads to ergodic theorems.

But for open quantum systems the Liouville-type invariant measures that define ergodic theory generalize to become Kraus representations, which possess a gauge-type invariance.

Thus in place of ergodic theorems, we have “litodic” theorems (from the Greek litos+odo “simple path”, compare to ergo+odo “work path”). Physically speaking, these theorems describe quantum dynamical systems for which simpler-than-expected descriptions can be found by exploiting this informatic gauge invariance.
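The "gauge-type invariance" of Kraus representations invoked above is a standard fact (Nielsen & Chuang, Ch. 8): mixing the Kraus operators by any unitary matrix describes the exact same physical channel. A minimal numpy sketch, using the textbook amplitude-damping channel:

```python
import numpy as np

def apply_channel(kraus, rho):
    """Apply a channel in Kraus form: rho -> sum_i K_i rho K_i^dagger."""
    return sum(K @ rho @ K.conj().T for K in kraus)

# Amplitude-damping channel with decay probability gamma
# (the standard example from Nielsen & Chuang, Ch. 8).
gamma = 0.3
K0 = np.array([[1.0, 0.0], [0.0, np.sqrt(1.0 - gamma)]])
K1 = np.array([[0.0, np.sqrt(gamma)], [0.0, 0.0]])

# The Kraus "gauge" freedom: for any unitary U, the mixed operators
# K_i' = sum_j U[i, j] * K_j represent the same channel.
U = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)
K0p = U[0, 0] * K0 + U[0, 1] * K1
K1p = U[1, 0] * K0 + U[1, 1] * K1

rho = np.array([[0.5, 0.5], [0.5, 0.5]])  # the pure state |+><+|
out_a = apply_channel([K0, K1], rho)
out_b = apply_channel([K0p, K1p], rho)

assert np.allclose(out_a, out_b)         # same physical evolution
assert np.isclose(np.trace(out_a), 1.0)  # trace preserved
```

Exploiting this freedom to pick a convenient Kraus representation is, as I understand the comment above, what "choosing the right informatic gauge" amounts to.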

For several weeks I have been collecting litodic-type theorems, which if you keep your eyes open for them, turn out to exist already in large numbers. The above-referenced Plenio and Virmani article is a good example, and it seems likely that many more litodic theorems remain to be discovered.

Indeed, from an informatic point of view, doing quantum physics would scarcely be feasible without these theorems.

Comment #37 December 17th, 2008 at 2:02 am

I would assume that Shor’s factoring algorithm came hammers first. My guess is that the thought sequence was: Simon’s algorithm -> order finding -> discrete log -> factoring. Someone should ask Peter. If so this would obviously be an excellent example of good science that came from a hammers first approach.

The quantum algorithm for approximating Jones polynomials seems like another good example. It must have started from the idea that quantum computers can simulate topological quantum field theories, and then progressed via the connection between Jones polynomials and Chern-Simons theory.

Based on common sense it is clear that these two algorithms are interesting and worthwhile. So I can’t agree that the nails first approach is “the only possible meaning of doing science.”

Comment #38 December 17th, 2008 at 2:58 am

Stephen, I am not sure about the evolutionary processes behind the examples you mentioned, but in general there is no doubt that the "hammer approach" can be fruitful.

But we should all decide for ourselves, right? Unfortunately, I am a very selfish person; to me the only point in doing research is enjoying myself. Good results are rare for everybody — of course, good researchers hold higher standards for "good results" than I do, but whatever those standards are, they must guarantee that good results are not a daily business.

So, looking for fun and, at the same time, having the right self-measures, we have no choice but to enjoy the process, and not only the good results at the end. Somehow, I enjoy myself more pursuing a quest, and much less wandering around with a big hammer looking for a new victim 🙂

Comment #39 December 17th, 2008 at 3:00 am

John Sidles says:

Could you please post more detailed references? I would be interested in reading up on this topic.

thanks,

Martin

Comment #40 December 17th, 2008 at 6:04 am

Greg: Could you elaborate a little on "A tree state in one commonly used basis in condensed matter physics does not have to be a tree state in another commonly used basis in condensed matter physics"? I thought that tree states were simply those elements of (C^2)^{\otimes N} whose tensor rank was polynomial in N. That's clearly invariant under any basis change in S_N semidirect U_2^N. (I.e., any transformation which sends particles to particles.)

If I'm wrong about what tree states are, might I suggest polynomial-rank tensors as an alternative sure-Shor separator?

Comment #41 December 17th, 2008 at 9:43 am

Martin says:

"Could you please post more detailed references [on tensor product networks in large-scale quantum simulation]?"

A good start is an older Charles van Loan article, The ubiquitous Kronecker product, J. Comput. Appl. Math., 123(1-2), pp. 85-100 (2000).

More recently, the SIAM Journal on Matrix Analysis and Applications, Volume 30, Issue 3 (2008), is a special issue on tensor product decompositions.
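One of the identities at the heart of van Loan's survey can be checked in a few lines: a Kronecker-product matrix never needs to be formed explicitly, since (with numpy's row-major flattening) (A ⊗ B) vec(X) = vec(A X Bᵀ). A small illustrative check:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))
B = rng.standard_normal((5, 6))
X = rng.standard_normal((4, 6))  # shaped so A acts on rows, B on columns

# Naive: materialize the 15 x 24 Kronecker product, then multiply.
y_naive = np.kron(A, B) @ X.reshape(-1)

# Fast: with row-major flattening, (A kron B) vec(X) = vec(A X B^T),
# so the big matrix is never formed.
y_fast = (A @ X @ B.T).reshape(-1)

assert np.allclose(y_naive, y_fast)
```

Chaining this trick across many tensor factors is what lets structured matrix-vector products scale to state-spaces whose dimension would otherwise be prohibitive.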

Van Loan's web site also has a good on-line talk, Tensor Network Computations in Quantum Chemistry; this is readily found via Google.

Frank Wilczek's recent review, Particle physics: Mass by numbers, Nature 456, 27 November 2008, pp. 449-50, references Verstraete and Cirac's Renormalization algorithms for quantum many-body systems in two and higher dimensions, arXiv:cond-mat/0407066.

In most of the above work, the reliance on Choi-type invariance in Kraus representations is implicit rather than explicit; nonetheless that is the mathematical foundation upon which all of this work depends. It is well covered in Ch. 8 of Nielsen and Chuang.

Our quantum system engineering web site (www.mrfm.org) has a one-page abstract for nonspecialists, prepared for the 50th Annual ENC meeting, titled Analyzing the Quantum Limits to Magnetic Resonance Microscopy: Insights from Claude Shannon, John von Neumann, and Richard Feynman, that shows some practical applications of these ideas.

Bottom line: there is a flood of work in large-scale quantum simulation, but so far nothing comparably rigorous to the conceptual foundations of ergodic theory—due largely to Birkhoff and von Neumann—has been created. Heuristically, though, the conceptual foundations that everyone uses are Choi's theorem, tensor-network states, and choosing the right informatic gauge. This approach often works amazingly well, for reasons that no one, as yet, fully understands.

Comment #42 December 17th, 2008 at 9:52 am

Martin, your requested references are in the spam-check queue — hopefully they’ll emerge in a day or so.

Comment #43 December 17th, 2008 at 10:10 am

Stephen, I think we need to consider a third situation: there are certain famous nails that lots of people have been trying to hit and agree are worth hitting. A new hammer is invented. Someone tries it out on the famous nails and, after considerable effort, succeeds in hitting one.

Factoring seems like an excellent example of this (though less so the Jones polynomial). I think it's safe to say that, if we're holding a brand-new hammer that might work on a famous nail, any of us (even dedicated nails-firsters like me) are likely to take a swing.

(Likewise for nails that we failed to hit on earlier occasions and that still annoy us. In both cases, the key point is that we already knew about these nails—we didn't go out looking for them.)

Comment #44 December 17th, 2008 at 12:30 pm

Scott says:

"I think we need to consider a third situation: there are certain famous nails that lots of people have been trying to hit and agree are worth hitting."

… and this of course suggests an (exceedingly fun) fourth class of nails: nails that lots of people agree are impossible to hit, but that subsequently turn out—with new approaches—to be eminently "hittable."

The history of mathematics and science affords plenty of celebrated examples of such "class-four nails."

One example is the chemical composition of distant stars, which for centuries was offered by philosophers as an example of the kind of question that science would never be able to answer … until this "class-four nail" was pounded down by the advent of spectroscopy.

In more recent times, an example of a class-four nail is the opinion that large-scale quantum systems are generically intractable to simulate by classical means. For example, a superficial reading of Feynman's 1982 lecture Simulating physics with computers seems to assert that classical simulation of large-scale quantum systems is an "un-hittable nail."

But to paraphrase Gen. Turgidson: "Mr. President, I would say that the quantum chemists and condensed matter physicists have already invalidated that policy."

Of course, a more careful reading of Feynman's 1982 lecture shows that he never asserts that the large-scale quantum simulation nail is unhittable. A more carefully nuanced assessment was given by Nielsen and Chuang in 2000:

The subsequent eight years have seen remarkable progress in quantum simulation science—the above-referenced Plenio and Virmani article being just one example—which has revealed that the class of efficiently simulatable quantum systems is far larger than was previously appreciated. This is the point of view that the closing paragraphs of Frank Wilczek's review espouse, for example.

An important question is: is this progress in quantum simulation the result of pounding in a lot of small, independent nails? Or are we all pounding on the same big nail?

In the 20th century, ergodic theory turned out to be (in retrospect) one big nail. My own opinion is that in the 21st century, large-scale quantum simulation will similarly come to be regarded as one big nail, and will similarly come to rest upon elegant mathematical foundations.

People sometimes ask, "Is quantum information theory going to deliver big, practical breakthroughs?" A good answer: pounding down the nail of large-scale quantum simulation is one such big breakthrough, and it is ongoing right now.

Comment #45 December 17th, 2008 at 12:55 pm

John, thanks for the references!

Comment #46 December 17th, 2008 at 1:24 pm

Martin, one more (really fun) reference is the one that launched my own interest: a Mathematical Intelligencer article by Mohlenkamp and Monzon, titled Trigonometric identities and sums of separable functions, 27(2), pp. 65-69 (2005).

I was reading this article for fun—pretty much every article in the Mathematical Intelligencer being fun to read—and was dumbfounded to discover that the Mohlenkamp-Monzon computer codes were isomorphic to our QSE Group's quantum simulation codes.

"Why is this," I wondered, "when Mohlenkamp and Monzon are searching the state-space of algebraic identities, and our QSE Group is searching the state-space of simulated quantum trajectories?"

Of course, Mohlenkamp and Monzon (and Beylkin) were asking themselves similar "big nail" questions, with the result that they are presently applying their ideas in large-scale ab initio quantum chemistry calculations.

Comment #47 December 17th, 2008 at 1:39 pm

Matrix product/tensor product algorithms in 2d are one of the most important things going on in science right now. However, it’s not accurate to claim that the accuracy in 2d is anywhere close to that in 1d. These algorithms are all extremely promising, but there’s a big difference between that situation and the situation in the early days of DMRG. One could just tell, without any math, that White’s algorithm was _right_ because the numbers were ridiculously accurate. In 2d, there is nothing at the same level of accuracy (14 decimal places in the energy on a ’90s workstation!) for general strongly interacting systems.

On the other hand, it's also not correct to claim that in 1d we lack understanding of the math underlying the success of DMRG. The accuracy is related to bounds on the entanglement as expressed via Renyi entropy (Verstraete and Cirac), CFT gives us accurate calculations of these bounds in many situations (many people), and we have rigorous general upper bounds for ground states of gapped systems (Hastings). For time-dependent situations, CFT explains the slow growth of entanglement entropy with time under local perturbations, while under global quenches the entanglement entropy grows linearly and everything becomes intractable fairly rapidly. There are a few interesting open math questions in 1d: is finding a ground state of a gapped system NP-complete or not? (We know it is in NP.) What exactly is the worst possible asymptotic scaling of entanglement entropy with gap? Etc. But basically, in 1d the math is pretty much completely understood and is on as good a footing as classical stat mech.

Now, in 2d, we need better algorithms and better math and they will probably arise together.
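The entanglement entropy discussed above is a property of the Schmidt spectrum across a cut, which for a small pure state can be computed directly by SVD. A minimal numpy sketch (my own illustration, not code from any of the cited papers):

```python
import numpy as np

def entanglement_entropy(psi, dim_left):
    """Von Neumann entropy (in bits) of the left block of a pure state."""
    # Reshape the state vector into a (left x right) matrix and take its
    # SVD; the squared singular values are the Schmidt coefficients.
    s = np.linalg.svd(psi.reshape(dim_left, -1), compute_uv=False)
    p = s**2
    p = p[p > 1e-12]  # drop numerically zero Schmidt coefficients
    return -np.sum(p * np.log2(p))

# Product state |00>: no entanglement across the cut.
prod = np.kron([1.0, 0.0], [1.0, 0.0])
e_prod = entanglement_entropy(prod, 2)  # zero

# Bell state (|00> + |11>)/sqrt(2): one ebit across the cut.
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2.0)
e_bell = entanglement_entropy(bell, 2)  # one bit
```

The matrix-product-state bond dimension needed across a cut is governed by how fast this Schmidt spectrum decays, which is why the entropy bounds above translate into statements about DMRG's accuracy.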

Comment #48 December 17th, 2008 at 2:22 pm

"That's clearly invariant under any basis change in S_N semidirect U_2^N. (I.e., any transformation which sends particles to particles.)"

That's to the point, David. Many commonly used transformations in condensed matter physics do not send particles to particles. Otherwise, there would never be such a thing as a quasiparticle.

A good example of this is a phonon, which you see all the time in solid-state physics. If you have a crystal, the change of basis from (H_atom)^{tensor N} to the phonon basis is very far away from the transformation group that you write.

It’s difficult to reject phonons as artificial (ghost of Kronecker) and yet accept photons as natural.

Another set of examples is that certain types of existing qubits, for example a Josephson-junction qubit, require a highly creative change of basis from ab initio particles in order to express the qubit states.

Because of physical examples such as these, the class of tree states does seem like something of a straw man.

Comment #49 December 17th, 2008 at 2:44 pm

"Because of physical examples such as these, the class of tree states does seem like something of a straw man."

Constructing a straw man was exactly my goal! 🙂 If people don't like it, I welcome them to construct a flesh-and-blood man, or at least a man of sturdier straw. The motivation was that I needed some man (straw or otherwise) to put in the "skeptical" chair for the debate I wanted to start about the validity of QM for "complicated" systems. Quantum computing skeptics kept telling me they were sure quantum mechanics would break down for "complicated" states of hundreds of entangled particles (as opposed to the "simple" states for which QM has already been confirmed). (Indeed, Paul Davies just repeated that same argument to me yesterday.) But then when I challenged them to define "complicated," they always retreated into word soup. Hence the need for a straw man to argue against. Think of it like Ralph Nader debating a parrot.

Comment #50 December 17th, 2008 at 3:03 pm

Scott: First of all I think that the paper is a reasonable result in complexity theory, even if I think that Tree-BQP is a straw man at the level of physics. But I thought that in this respect, your paper (or maybe your talk that I attended) was incomplete. It could have been said that a plausible sure-Shor separator is one that outlaws a class of states, and not a class of descriptions of states, because that would be erratic.

It’s fine to debate a straw man, but for context you should summarize every major reason that you see straw, in addition to making your personal debate points.

Comment #51 December 17th, 2008 at 4:58 pm

I have been a little bit baffled by this talk of tree states, straw men, and sure-Shor separators. Finally I figured out that these terms (seemingly) refer to Scott's abstract Are Quantum States Exponentially Long Vectors? (which a Google search will find on Scott's web site). Is this right?

Assuming so, our system engineering experience suggests a reasonably robust sure-Shor separator ansatz can be constructed as follows.

In the first equation of Scott's abstract (the equation that defines a tree state), simply modify the constraint "no Σ are allowed" to the more lax constraint "N Σ are allowed". We then take the point of view that N is a constant of nature.

In the language of algebraic geometry—Joe Harris' Algebraic Geometry being a suitable text—the quantum state-space of Nature is then a join of monomial varieties having Kronecker rank N and an induced Kähler metric.

Because we can permute the ⊗ symbols freely, Nature's quantum state-space has no particular dimensionality or spatial ordering built into it; it is therefore at the same time less efficient and more general than the matrix product states discussed earlier in this thread.

Now we suppose that Bob lives in a universe containing (say) 400 qubit-spins, all coupled via random dipole-dipole magnetic interactions, and without loss of generality we take the mean-square interaction energy to be unity per-spin.

Crucially, we suppose in addition that Bob's universe is noisy (maybe from magnetic radiation left over from the big bang), and that the associated relaxation time is (say) 10. Physically speaking, the purpose of this latter condition is to ensure that Bob's 400 spins are not carrying out a Shor factorization (or any other nontrivial quantum computation).

On the other hand, low-order quantum entanglement is perfectly feasible for Bob to generate and observe, since each spin is associated with (it can be shown) 2N real-valued dynamical coordinates.

Then we assert that Bob cannot readily distinguish whether he lives in a large-dimension (linear) Hilbert space or a (low-dimension) rank-N tensor product space. The physical reason is that the ambient magnetic noise left over from the big bang prevents Bob from ever implementing the quantum computing algorithms that would reveal the difference. The mathematical reason is that the universal noise can be modeled (via Choi's theorem) as an equivalent continuous measurement process that compresses Hilbert trajectories onto algebraic manifolds, such that most of a large-dimension Hilbert space need not be used anyway.

Now suppose we turn off the noise, but leave Bob stuck in his low-dimension state-space. It seems plausible (to me at least) that Bob will never notice the difference—he will continue to ascribe the failure of his quantum computations to technical noise.

In this world, it might take a long time for people to realize that unanticipated noise in their quantum computers was a law of nature induced by the nontrivial geometry of quantum state-space.

Of course, even if Bob's state-space is Hilbert, it might be very useful, for system engineering purposes, that Bob is able to efficiently simulate (noisy) spin dynamics, with high fidelity, on low-dimension algebraic state-spaces.

Perhaps Bob is building quantum spin microscopes (like we are) and wishes to reduce the technical risks, and speed the pace, of this enterprise. In such cases the ability to efficiently simulate noisy systems has great practical value.

This is why we quantum system engineers take an interest in these fundamental issues, and seek to learn all we can of the mathematics and information theory involved.

Plus, it's fun! Deep math, amazing physics, great engineering, profound information theory. If we are lucky, we live in a world in which atomic-resolution spin microscopy and large-scale quantum computing are both becoming feasible.

Comment #52 December 17th, 2008 at 6:46 pm

As a brief followup to the above—and to save Scott the trouble—let me say that Scott has a (really excellent and highly readable) preprint on his web site, titled Multilinear Formulas and Skepticism of Quantum Computing, that will appear in a forthcoming SIAM Journal on Computing.

Fortunately for my peace of mind, Scott's Multilinear Formulas analysis doesn't consider noise (at least, the word "noise" doesn't appear), while in contrast, our QSE simulation methods rely (crucially) on the presence of noise.

Aye, lasses and laddies, there you have a major difference between a theoretician and an engineer!

Scott’s article closes with a wonderfully thought-provoking list of topics for further investigation. I would suggest, as an addition to Scott’s list, the noise-related fundamental question: What generalized formulation(s) of Choi’s Theorem (specifically Theorem 8.2 of Nielsen & Chuang) apply in a tensor network universe?

This boils down to a physical question: does the usual quantum prohibition against non-causal communication still apply in tensor network state-spaces?

Enforcing this prohibition would seem to require delicate adjustments in both relativity and quantum physics (which is what everyone seems to think is required anyway).
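For readers who want to see Choi’s Theorem concretely: it says a linear map on density matrices is completely positive exactly when its Choi matrix is positive semidefinite. A minimal numerical illustration in Python follows — the depolarizing channel, the transpose map, and p = 0.25 are arbitrary demo choices, not anything from the discussion above:

```python
import numpy as np

def choi_matrix(channel, d=2):
    """Choi matrix J(Phi) = sum_{i,j} |i><j| (x) Phi(|i><j|)."""
    J = np.zeros((d * d, d * d), dtype=complex)
    for i in range(d):
        for j in range(d):
            Eij = np.zeros((d, d), dtype=complex)
            Eij[i, j] = 1.0
            J += np.kron(Eij, channel(Eij))
    return J

def depolarizing(rho, p=0.25):
    """Depolarizing channel: mix rho with the maximally mixed state."""
    d = rho.shape[0]
    return (1 - p) * rho + p * np.trace(rho) * np.eye(d) / d

def transpose_map(rho):
    """Positive but not completely positive."""
    return rho.T

J_dep = choi_matrix(depolarizing)
J_T = choi_matrix(transpose_map)

# CP channel: Choi matrix is PSD; transpose map: it is not.
print(np.min(np.linalg.eigvalsh(J_dep)) >= -1e-12)  # True
print(np.min(np.linalg.eigvalsh(J_T)) >= -1e-12)    # False
```

The transpose map is the textbook example of a positive-but-not-CP map, which is exactly the distinction Choi’s criterion detects.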

The point being, it’s one thing to simulate non-causal communication, quite another thing to implement it! 🙂

Comment #53 December 18th, 2008 at 7:53 am

Since comments here are slowing, I’ll close with a couple of references relating to the concluding challenge of Scott’s article:

This is a problem that the ab initio quantum chemists have been routinely solving since the 1970s. See the recent review by Chinnamsetty, Espig, Khoromskij, Hackbusch, and Flad, Tensor product approximation with optimal rank in quantum chemistry (BibTeX appended). Appreciating this work is not easy, since the chemists are often more focused on binding energies than qubits, and on code performance than theorem-proving.
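In its simplest two-factor (matrix) form, the core of such tensor-product approximation is best rank-r approximation via the truncated SVD (the Eckart–Young theorem). A toy sketch with arbitrary demo sizes, not the chemists’ actual codes:

```python
import numpy as np

# Build a matrix of known rank 8 (generic product of thin factors).
rng = np.random.default_rng(1)
M = rng.standard_normal((50, 8)) @ rng.standard_normal((8, 50))

# Truncated SVD: keep the r largest singular triples.
U, s, Vt = np.linalg.svd(M, full_matrices=False)
r = 8
M_r = (U[:, :r] * s[:r]) @ Vt[:r]

# At r equal to the true rank, the approximation is exact
# (up to floating-point error).
print(np.allclose(M, M_r))  # True
```

Eckart–Young guarantees this truncation is optimal in both the spectral and Frobenius norms, which is why low-rank tensor formats can compress high-dimensional problems so effectively.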

For the upcoming SQuINT Conference in Seattle, we translated these ideas from quantum chemistry into the language of QIT and coordinate-free geometry, with special attention to the fundamental role of Choi’s Theorem.

The resulting writeup appears on our http://www.mrfm.org web site as Five promises from geometric quantum mechanics for efficient quantum simulation. A code library is included, which even on a small computer can handle tree-states for quantum systems of order 500 spin-qubits having Kronecker rank 100. This library exploits the factorization structure of n-spin tree states to implement matrix-vector multiplications in O(n) operations instead of the (naive) O(n²).

Historically speaking, O(n) matrix-vector multiplication algorithms have always played a major role in large-scale computation; these algorithms typically exploit either natural sparsity or natural factorization of a state-space; the latter is the case with respect to tree-states.

Geometrically speaking, we have the idea that the musical isomorphism between vectors and one-forms is generically easy to compute on tree states, and this fast isomorphism greatly speeds practical quantum simulation calculations.

Even with these results in hand, I think it won’t be easy, and will take quite a while, for the quantum chemists, the QIT theorists, and the quantum system engineers to accept that fundamentally we are all pounding upon the same “big nail”, and that mathematical tools from algebra, geometry, information theory, and complexity theory are all going to play more-or-less equal roles.
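The flavor of the factorization trick can be shown in miniature with the standard Kronecker-product identity (A ⊗ B) vec(X) = vec(B X Aᵀ), which applies a tensor-product operator without ever forming the big matrix. This is a generic toy sketch with arbitrary sizes, not the mrfm.org library:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((4, 4))
X = rng.standard_normal((4, 3))  # vec(X), column-major, has length 12

# Naive: explicitly form the 12x12 Kronecker product, then multiply.
naive = np.kron(A, B) @ X.flatten(order="F")

# Factored: never build A (x) B; for m x m and n x n factors the cost
# drops from O((mn)^2) to O(mn(m + n)).
factored = (B @ X @ A.T).flatten(order="F")

print(np.allclose(naive, factored))  # True
```

Chaining this identity across every factor of a tensor-product operator is what makes matrix-free matvecs cheap on factored state-spaces.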

This broadening means more work for everyone, but it also means more fun, and more mathematical depth, and more opportunities for young people. Which is good!

——

@article{citename,
  Title   = {Tensor product approximation with optimal rank in quantum chemistry},
  Author  = {S. R. Chinnamsetty and M. Espig and B. N. Khoromskij and W. Hackbusch and H.-J. Flad},
  Journal = {The Journal of Chemical Physics},
  Number  = {8},
  Pages   = {084110},
  Volume  = {127},
  Year    = {2007}}

Comment #54 December 18th, 2008 at 6:43 pm

Scott, where did the “Recent Comments” widget go?

Comment #55 December 18th, 2008 at 10:44 pm

anonymous: I took it out. Somehow, I sleep better knowing people at least have to click a link to find out what others are saying about me. 🙂 I apologize for any inconvenience.

Comment #56 December 19th, 2008 at 4:25 am

I do not know how this thread also got onto QC skepticism:

Scott wrote: “The motivation was that I needed some man (straw or otherwise) to put in the “skeptical” chair for the debate I wanted to start about the validity of QM for “complicated” systems. Quantum computing skeptics kept telling me they were sure quantum mechanics would break down for “complicated” states of hundreds of entangled particles (as opposed to the “simple” states for which QM has already been confirmed). ”

Is this really true? I know that some skeptics made this claim, but isn’t it a more common view among skeptics that complicated states cannot be created because of noise?

(Also, why “hundreds”? Some skeptical claims will come into play for a handful of qubits…)

“(Indeed, Paul Davies just repeated that same argument to me yesterday.) But then when I challenged them to define “complicated,” they always retreated into word soup.”

Always?

Comment #57 December 19th, 2008 at 10:36 am

My own views are very much aligned with Gil Kalai’s.

Perhaps a relevant lesson from history is that Gauss—working as a civil engineer—attempted (seriously) to observe departures from Euclidean geometry in large-scale surveying, just as we presently attempt to observe departures from Euclidean quantum mechanics in large-scale physical dynamics.

In both cases, the looked-for phenomenon was defined only vaguely: tiny departures from spherical geometry in Gauss’ case, tiny departures from linear quantum superposition in the present case.

In both cases, the immediate practical challenge was noise in measurements, sufficient to mask the (tiny) expected effects.

In both cases, the required revision of physical theory was far more radical than was originally foreseen: the geometry of classical state-space needed to be treated dynamically in Gauss’ case, and (perhaps) the geometry of quantum state-spaces needs to be treated dynamically in the present case.

In both cases, it was not obvious that departures from classical values could occur in a logically consistent way (geometric consistency was the tough issue in Gauss’ case; causal and informatic consistency are the tough issues in the present case).

In both cases, the development of stronger physical theories awaited the development of stronger physical insights and stronger mathematical tools. In Gauss’ case, progress awaited the physical principle of relativity as embodied in Riemannian geometry, and in the present case, progress awaits the physical principle of (????) as embodied in the mathematics of (????).

So just fill in the (????), and your achievements surely will be as celebrated as those of Gauss and Einstein.

Thus, in asking us to fill in the (????), Scott has set the bar *very* high … which is why this is a truly great weblog topic!

Another (less dramatic) lesson from history is that the physical principle of (????) and the mathematical principle of (????)—whatever they turn out to be—are likely to have considerable value in practical engineering. So perhaps it is not a complete waste of time, to try to do some practical engineering along the way (as Gauss did).

My own best candidate for the physical principle of (????) is Choi’s Theorem (which is Thm. 8.2 of Nielsen & Chuang), but as for its generalized mathematical expression in (????), there I have no clear ideas (except “just start coding”) … but perhaps some other Shtetl Optimized reader does. 🙂

Comment #58 December 19th, 2008 at 1:50 pm

Always?

Gil, I hadn’t met you at the time I wrote that paper. 🙂

Comment #59 December 19th, 2008 at 8:04 pm

I have to say, Scott, that I also liked the recent comments list.

Comment #60 December 20th, 2008 at 8:42 am

John, I am sure this is not needed for most readers, but can you elaborate on the Gauss story you refer to? (I am not familiar with it.) Scott, I see you are really serious about causality just going from the past to the future 🙂

On the main topic of the thread, regarding new quantum algorithms:

1) In most of scientific computing, moving from an n log n time algorithm to a linear algorithm would be sensational progress, theoretically and practically. Is there some work on quantum speed-ups for these and other “low-complexity” algorithms?

2) One quantum algorithm fantasy that I (and others) have (which never got off the ground) would be to come up with a quicker quantum algorithm for linear programming or LP-type problems. For example: consider the discrete n-dimensional cube and a unique-sink acyclic orientation of its edges. (This means that for every subcube there is a unique sink.) The question is how quickly you can find the sink. Classical random-walk-based algorithms are exponential (the best known is exp(√n)), and any quantum or classical improvement would be great.

3) I am sure that the quantum subroutine allowing sampling of the Fourier transform of an n-variable function in polynomial time will have many additional applications.
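For readers unfamiliar with unique-sink orientations, here is a toy sketch of the problem Gil describes, using the simplest possible USO family — every edge oriented toward agreement with a hidden target vertex, so the target is the sink of every subcube. Real instances need not have this structure, and naive path-following like this is exponential on hard orientations; the target vertex and n are arbitrary demo values:

```python
def make_uso(n, target):
    """Toy USO oracle: in coordinate i, each edge points toward the
    vertex that agrees with `target` in coordinate i.  The outgoing
    edges of v are then the coordinates where v differs from target,
    so every subcube has a unique sink."""
    def outgoing(v):
        return frozenset(i for i in range(n) if v[i] != target[i])
    return outgoing

def find_sink(n, outgoing):
    """Follow outgoing edges (lowest coordinate first) until none
    remain; return the sink and the number of oracle queries."""
    v, queries = (0,) * n, 0
    while True:
        out = outgoing(v)
        queries += 1
        if not out:
            return v, queries
        i = min(out)
        v = v[:i] + (1 - v[i],) + v[i + 1:]

n, target = 5, (1, 0, 1, 1, 0)
sink, q = find_sink(n, make_uso(n, target))
print(sink == target)  # True
```

The algorithmic question is how few oracle queries suffice in the worst case over all USOs (classically or quantumly); on this easy family, path-following needs only Hamming-distance-many steps.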

Comment #61 December 20th, 2008 at 11:03 am

Hi Gil! A very thorough and readable account of Gauss’ practical work in surveying and geodesy, and its relation to his fundamental investigations in the foundations of geometry, is given in Dunnington’s Carl Friedrich Gauss: Titan of Science (recently reprinted by the Mathematical Association of America).

Especially for this post, I did a literature search and found that there are diverse opinions regarding the role that these practical geodetic investigations played in Gauss’ research (see the appended BibTeX references).

IMHO, the best single article to read is Gauss’ own article Disquisitiones generales circa superficies curvas, which is available on-line in English translation as General Investigations of Curved Surfaces. This celebrated article introduced the concept of what is now called Gaussian curvature, and established, via Gauss’ Theorema Egregium (his name for the theorem), that this curvature is an intrinsic property of surfaces. Gauss thought so highly of the Theorema Egregium that he published it multiple times.

An additional reason that the Disquisitiones generales is well worth reading for students is that in it Gauss pretty much invents the modern academic style and idioms of academic writing (“it follows”, “we then have”, “it can be shown”, etc.). Hey, someone had to invent the idioms that nowadays every mathematician, scientist, and engineer uses … students might as well learn these idioms by studying the classics.

Oh yeah … did Gauss’ practical surveying work strongly influence his mathematical work in non-Euclidean geometry? Undoubtedly. Did Gauss envision, not only that the earth’s surface was curved, but that 3D space might be curved too? And did he experimentally test this idea? There is no definitive evidence either way. My own opinion, based on reading the Disquisitiones generales, is that Gauss found no evidence for 3D spatial curvature in his geodetic data, and in keeping with his general conservatism in such matters, declined to publish, as a mere speculation, a mathematical possibility of which he was well aware.

Nowadays, of course, things are different, and vast herds of speculative theories roam the landscape. 🙂

To summarize: very roughly speaking, attempts to construct highly entangled states can be viewed as analogous to Gauss’ geodetic triangulations. In both cases, the technical difficulties are immense. In both cases, the practical applications are important (and pay the bills). And in both cases, the local structure always looks Euclidean/Hilbert, but it is mathematically and physically possible for the large-scale structure to significantly depart from linear extrapolation of the small-scale structure.

—-

@article{Breitenberger:1984qy,
  Author  = {Breitenberger, Ernst},
  Journal = {Archive for History of Exact Sciences},
  Number  = {3},
  Pages   = {273–289},
  Title   = {Gauss’s geodesy and the axiom of parallels},
  Volume  = {31},
  Year    = {1984}}

@article{1972,
  Author  = {Miller, Arthur I.},
  Journal = {Isis},
  Number  = {3},
  Pages   = {345–348},
  Title   = {The Myth of Gauss’ Experiment on the Euclidean Nature of Physical Space},
  Volume  = {63},
  Year    = {1972}}

Comment #62 December 20th, 2008 at 11:43 am

Greg: Alright, I reinstated the Recent Comments widget, just further down on the sidebar. That seemed like a reasonable compromise between readers’ desire to stick any sign they want on my front lawn, and my desire to sleep. 🙂

Comment #63 December 21st, 2008 at 3:07 am

From Colin McLarty, The Rising Sea:

Grothendieck describes two styles in mathematics. If you think of a theorem to be proved as a nut to be opened, so as to reach “the nourishing flesh protected by the shell”, then the hammer and chisel principle is: “put the cutting edge of the chisel against the shell and strike hard. If needed, begin again at many different points until the shell cracks–and you are satisfied”. He says:

“Softening the nut with water” referred to Grothendieck’s approach of building new abstractions to study a problem with.

Comment #64 December 21st, 2008 at 3:09 am

(Last paragraph of previous was added by me, not part of the McClarty excerpt.)

Comment #65 December 21st, 2008 at 10:18 am

Hi John, thanks for the further details regarding Gauss. Maybe a more direct analog for a hypothetical (????) of your post is the notion of NP-completeness, which put earlier intuitions and insights on formal grounds.

Comment #66 December 21st, 2008 at 11:09 am

Gee, it must be “thank you” day, because I want to say a heartfelt “thank you” to ASDF for that reference to Grothendieck’s work. I was familiar with Grothendieck’s “nutshell” analogy, but Colin McLarty’s account of it is by *far* the best that I have seen. Outstanding!

One approach to establishing the relevance of Grothendieck-style mathematics to complexity theory is by way of tensor networks (a.k.a., tree states, product-sum states, Kronecker states, etc.). These networks can be alternatively understood in algebraic terms, or combinatoric terms, or geometric terms, or even (via sparse reconstruction theory, for example) information-theoretic terms.

Nowadays, we increasingly appreciate that there are strong connections among these points-of-view (for example, that the manifold of tensor network states has negative Ricci-Kähler curvature is important to the numerical tractability of quantum-state reconstruction from sparse random projections).

Just to mention, Grothendieck did write at considerable length on tensor product spaces … albeit without the advantageous informatic perspective on product spaces that modern QIT is providing. Hey … this means that our generation has at its disposal a crucial mathematical tool that Grothendieck’s generation lacked!

The Grothendieck point of view is that all of the preceding ideas can be understood as different aspects of one rising tide of theory. And perhaps this happy conception will even turn out to be true. 🙂