As promised, here’s my report on the Quantum Information Science Workshop in Virginia, only a week or so behind schedule.

I *tried* to be cynical—really I did. But despite my best efforts, somehow I went home more excited about quantum than I’ve been in a long time.

The highlight of the workshop was of course the closed, invitation-only, late-night meeting in the basement of NSF headquarters, at which a group of us hidebound quantum computing reactionaries plotted to keep the field focused on irrelevant mathematical abstractions, and to ostracize the paradigm-smashing entrepreneurial innovators who threaten our status. I don’t think I’ve ever heard so much cackling in the space of a single evening, or so much clinking of bone goblets. Stuff like that is why I entered the field in the first place.

But there were some other highlights as well:

1. In his talk on quantum algorithms with polynomial speedups, Andris Ambainis called attention to a spectacular recent paper by Ben Reichardt, which *characterizes* the quantum query complexity of any partial or total Boolean function f (up to a logarithmic factor) as the optimal witness size of a span program for f, and also as the negative-weight quantum adversary lower bound for f. Assuming this result is correct, it seems possible that the remaining open problems in quantum query complexity will be pulverized, one after another, by solving the associated SDPs for the optimal span programs. (Incidentally, using Reichardt’s result, it must be possible to prove, e.g., a Ω(n^{1/3}/log(n)) lower bound for the quantum query complexity of the collision problem using the adversary method. This was a longstanding open problem. Can one say, explicitly, what the adversary matrices are in this case?) Alas, it also seems possible that span programs will turn out to be almost as hard to analyze as quantum algorithms were…
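For reference (my paraphrase of the quantity involved; see Reichardt's paper for the precise statement), the negative-weight adversary bound that the theorem characterizes is the value of the following optimization, itself expressible as a semidefinite program:

```latex
\mathrm{ADV}^{\pm}(f) \;=\; \max_{\Gamma \neq 0}
  \frac{\lVert \Gamma \rVert}{\max_{i} \lVert \Gamma \circ \Delta_i \rVert}
```

Here Γ ranges over real symmetric matrices indexed by inputs, with Γ[x,y] = 0 whenever f(x) = f(y); Δ_i[x,y] = 1 if x_i ≠ y_i and 0 otherwise; ∘ is the entrywise product; and ‖·‖ is the spectral norm. Reichardt's result says Q(f) = Θ(ADV^±(f)) up to the logarithmic factor mentioned above.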

(1+√5)/2. Despite the obvious danger to the future funding of the entire field, by some clerical error I was released from my padded cell to speak about “Quantum Complexity and Fundamental Physics”. My “talk,” if it can be called that, was in my opinion neither rational nor integral to the workshop.

2. In her talk on blind quantum computation, Anne Broadbent (who’s also visiting MIT this week) described some beautiful new results that partly answer my Aaronson $25.00 Challenge from a year and a half ago. The Challenge, if you recall, was whether a quantum computer can always “prove its work” to a classical skeptic who doesn’t believe quantum mechanics—or more formally, whether every problem in **BQP** admits an interactive protocol where the prover is in **BQP** and the verifier is in **BPP**. Anne, Joe Fitzsimons, and Elham Kashefi haven’t quite answered this question, but in a recent paper they’ve come close: they’ve shown that a quantum computer can prove its work to someone who’s *almost* completely classical, her only “quantum” power being to prepare individual polarized photons and send them over to the quantum computer. Furthermore, their protocol has the amazing property that the quantum computer learns *nothing whatsoever* about which particular quantum computation it’s performing! (Aharonov, Ben-Or, and Eban independently gave a protocol with the same amazing properties, except theirs requires the “classical” verifier to have a constant-sized quantum computer.) Anne et al. also show that *two* quantum computers, who share entanglement but can’t communicate with each other, can prove their work to a completely classical verifier (while, again, remaining completely oblivious to what they computed).

On top of everything else, these results appear to be the first complexity-theoretic application of the measurement-based quantum computing paradigm, as well as the first “inherently quantum” non-relativizing results. (Admittedly, we don’t yet have an oracle relative to which the blind quantum computing protocols *don’t* work—but the protocols rely essentially on the gate structure of the quantum circuits, and I conjecture that such an oracle exists.)

Rereading my Challenge, I noticed that “the [one-member] Committee may also choose to award smaller prizes for partial results.” And thus, yesterday I had the pleasure of awarding Anne a crumpled $10 bill, with an additional $5 contributed by Seth Lloyd, for a grand total of $15.00 to be shared equally among Anne, Joe, and Elham. (**Update:** Since I wrote that, Anne has elected to trade in for three signed and doodled-upon $5 bills.) (**Another Update:** A $12, or $15-$O(1), prize shall be awarded to Dorit Aharonov, Michael Ben-Or, and Elad Eban the next time I see them.) This is, I believe, the first time a monetary reward offered on *Shtetl-Optimized* has actually been paid out.

3. In a talk that was so good, you almost forgot it involved *chemistry*, Alán Aspuru-Guzik discussed applications of quantum complexity theory to understanding photosynthesis and the design of efficient solar cells (!). To give you a sense of how mindblowing that is, it briefly made me wonder whether I should reread some of John Sidles’ cheerful ramblings about the coming merger of quantum systems engineering with biology in the 21st century (of which, I predict, this very sentence will inspire dozens more).

So what then is the connection between quantum complexity theory and photosynthesis? Well, a few of you might remember my post “Low-Hanging Fruit from Two Conjoined Trees” from years ago, which discussed the lovely result of Childs et al. that a quantum walk on two conjoined binary trees can reach a designated end vertex exponentially faster than a classical walk on the same graph. That result interested me, among other things, because it can be shown to lead to an oracle relative to which **BQP** ⊄ **SZK**, which at the time I didn’t know how to find otherwise. But especially given the bizarre nature of the graph needed to produce the oracle separation, I thought of this result as pretty much the *prototype* of an irrelevant complexity-theoretic curiosity (which, naturally, made me like it all the more).
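To make this concrete, here’s a minimal numeric sketch (my own toy code, not from the talk) comparing a continuous-time quantum walk with a lazy classical random walk on a pair of glued binary trees. Note this uses the simplest leaf-to-leaf gluing rather than the random-cycle gluing of the actual Childs et al. construction, but it already shows the interference-driven transport from one root to the other:

```python
import numpy as np
from scipy.linalg import expm

def glued_trees(depth):
    """Adjacency matrix of two depth-`depth` binary trees glued leaf-to-leaf.
    (Toy version: the Childs et al. hard instance glues leaves by a random cycle.)"""
    n = 2 ** (depth + 1) - 1                    # vertices per tree
    A = np.zeros((2 * n, 2 * n))
    for i in range(n):
        for c in (2 * i + 1, 2 * i + 2):        # children in heap order
            if c < n:
                A[i, c] = A[c, i] = 1           # tree 1 edges
                A[n + i, n + c] = A[n + c, n + i] = 1  # tree 2 edges
    for leaf in range(n // 2, n):               # glue corresponding leaves
        A[leaf, n + leaf] = A[n + leaf, leaf] = 1
    return A

A = glued_trees(4)
N = A.shape[0]
entrance, exit_ = 0, N // 2                     # the two roots
psi0 = np.zeros(N)
psi0[entrance] = 1.0

# Continuous-time quantum walk: peak probability at the exit root over a time window.
quantum = max(
    abs(expm(-1j * A * t) @ psi0)[exit_] ** 2 for t in np.linspace(0.1, 30, 150)
)

# Lazy classical random walk started at the entrance, comparable number of steps.
P = 0.5 * np.eye(N) + 0.5 * A / A.sum(axis=1, keepdims=True)
p = psi0.copy()
for _ in range(150):
    p = P.T @ p
classical = p[exit_]

print(f"quantum peak: {quantum:.3f}, classical after 150 steps: {classical:.3f}")
```

Already at depth 4 the quantum walk’s peak probability at the exit root is noticeably above the classical walk’s long-run value, and the gap widens with depth.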

You can probably guess where this is going.

Shown above is a light-harvesting molecule (image snagged from Alán’s slides), which apparently is efficient at concentrating light at its center for essentially the same reason the Childs et al. quantum walk reaches the target vertex exponentially faster than a classical walk: namely, because of destructive interference between the paths that point backward, toward the leaves. According to Alán, what plants do to harvest sunlight is not entirely unrelated either (it also involves quantum coherence), and fully understanding these mechanisms in quantum information terms might conceivably be useful in designing better solar cells. To be fair, a part of me always *did* suspect that quantum oracle separations would turn out to be the key to solving the world energy crisis. I’ll point you here or here if you want to know more.

Incidentally, Alán’s talk had another, also extremely interesting part, which was about coming up with precise numerical estimates of the number of qubits you’d need to simulate the wavefunctions of (say) benzene, caffeine, and cholesterol. (Many of us have long thought that simulating physics and chemistry will be the real application for scalable quantum computers if we ever build them, practical long before breaking RSA and ultimately more useful too. But it’s not something we often talk about—ostensibly for lack of meaty things to say, really because we don’t know chemistry.)

4. In her talk, Dorit Aharonov posed an open problem that I now have no choice but to inflict on others, if I don’t want to feel forced to think about it myself. So here’s her problem: how hard is it to find the ground state of a local Hamiltonian H=H_{1}+…+H_{m} (that is, a sum of k-qubit interactions, for some constant k), if we impose the constraint that the H_{i}‘s all commute with each other? Clearly it’s somewhere between **NP** and **QMA**. It might seem obvious that this problem should be in **NP**—to which I can only respond, prove it!

There were also lots of great talks by the experimentalists. Having attended them, I can report with confidence that (1) they’re still trying to build a quantum computer but (2) decoherence is still a big problem. If you want to know *even more* detail than I’ve just provided—or you want to know about the theory talks I didn’t mention, or more about the ones I did mention—ask away in the comments. I can’t promise that no one will know the answer.

That molecule looks suspiciously like a parabolic mirror. It does not look, at least to me, like the Childs et al. graph. Could you please explain your statement:

“which apparently is efficient at concentrating light at its center for essentially the same reason the Childs et al. quantum walk reaches the target vertex exponentially faster than a classical walk:”

I think Dorit’s problem is really a decision problem: “does there exist a ground state which minimizes the energy of every H_i separately?”

matt:

I think the question of finding the ground state is much harder. As to your question, “does there exist a ground state which minimizes the energy of every H_i separately?”, the answer is known to be, “Yes.”

Because the H_i’s commute with each other and with H, a common eigenbasis can be found. The ground state will then be an eigenstate of each H_i with eigenvalue e_i, so the energy of the ground state is e = e_1 + e_2 + … + e_m. The lowest possible value of e is achieved when the e_i’s are all as low as they can be. So, the ground state of H is also the ground state of each H_i.
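The first step of this argument, that commuting Hermitian terms can be simultaneously diagonalized, is easy to check numerically. A minimal sketch (my own toy example, not from the thread): the two-qubit terms X⊗X and Z⊗Z commute, and the eigenvectors of a generic linear combination give a common eigenbasis for both:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

H1 = np.kron(X, X)   # X tensor X
H2 = np.kron(Z, Z)   # Z tensor Z -- commutes with H1

# Commutation check.
assert np.allclose(H1 @ H2 - H2 @ H1, 0)

# A generic linear combination breaks all degeneracies, so its
# eigenvectors form a common eigenbasis for both terms (the Bell basis).
_, V = np.linalg.eigh(H1 + np.pi * H2)

for H in (H1, H2):
    D = V.T @ H @ V
    assert np.allclose(D - np.diag(np.diag(D)), 0)   # V diagonalizes each term
print("common eigenbasis found")
```

The subtlety, of course, is in which *combinations* of the joint eigenvalues are actually achievable.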

Matt: Yes, formally it’s a decision problem (just like the ordinary local Hamiltonians problem).

Kevin: Can’t we encode 3SAT using commuting local Hamiltonians? If so, then finding the energy of the ground state can’t be that easy…

rrtucci: More accurately, it looks isomorphic to the right half of the Childs et al. graph–that is to say, it’s a tree.

What, no experimental details? Were there many experimentalists? From a glance at the schedule, the workshop seems heavily theory-oriented (nothing wrong with that, just saying).

I’m also missing why the commuting Hamiltonian problem isn’t obviously in NP. How are the H_i’s given to you?

Diagonalize each H_i: it has basis (v_i)_j with eigenvalue (E_i)_j (diagonalize only over the k qubits where H_i acts nontrivially). One is tempted to define variables x_i \in {0,1}^k and then say the total energy is the sum of these x_i’s, but this is not true, since there are dependencies between the x_i’s. Further, the (E_i)_j’s might be degenerate (in which case the basis is not completely defined). But isn’t there an easy algorithm for resolving this?

Start with H_1 and H_2. If H_1 and H_2 do not act on the same qubits, ignore them. If they do overlap, then simultaneously diagonalize H_1 and H_2 (max size of matrix 2^(2k)). From this simultaneous diagonalization you can match the different energy values, i.e. you can put a constraint between x_i and x_j. Further, you still need to deal with degeneracy. But you can keep track of this locally: for degenerate energy levels, you can use the simultaneous diagonalization to say whether this degeneracy is split or not, and if it is split, keep track of the new basis under which it is split (i.e. you will update your local bases (v_1)_j and (v_2)_j appropriately). Proceed to do this for every pair H_i and H_j. At the end of the day you will have a bunch of x_i’s with associated energies and a bunch of constraints between the x_i’s.

I must be missing something. I did just eat lunch so all the blood is in my tummy.

“rrtucci: More accurately, it looks isomorphic to the right half of the Childs et al. graph–that is to say, it’s a tree.”

Doh!

But Childs et al are looking for a traveling wave solution, so their wave bounces only once off the perimeter of that molecule. Besides, I never had to use complexity theory to understand a parabolic mirror (or a round trampoline). How is the **exponentially faster** result of Childs et al relevant? Does their model predict the data?

Dave: here is an example will show why this commuting problem is tricky. Consider a toric code Hamiltonian on a sphere. There is a ground state (in fact, 1 such state). Now change the Hamiltonian so that on some given plaquette the system wants \pi flux instead of zero flux. Now there are no ground states. Even though locally everything can be satisfied, and in fact every subsystem of size

Oops, part of my comment got dropped cause I used a less-than symbol.

continued: everything can be satisfied on every system of size less than L, but there is no global solution. This example is easy since we can count whether there are an even or odd number of such plaquettes, but maybe there are harder examples.

More to the point:

Are you claiming that there is quantum coherence across this huge molecule that encompasses thousands of atoms and is embedded in a thermal environment? I can show you a tree in back yard. Does it also illustrate the Childs et al result?

Scott: Thanks for bigging up the blind QC protocol (and for my share of the $15). By the way, blind quantum computing in the sense we describe seems to become impossible if quantum mechanics is non-linear (or if you have closed time-like curves which follow Deutsch’s model). I’ve been working on the communication complexity of blind QIP, but I am limited to processing actual quantum states rather than dealing with a BQP oracle, since the structure of BQP seems to make it impossible to exploit Feigenbaum’s no-go results, so I don’t think there is much chance of proving BQP != IP_BQP directly.

Hey!

Scott told me about this discussion going on about the commuting Hamiltonians problem, so I am chipping in. Itai Arad and I kind of bumped into this problem when trying to prove a quantum PCP theorem for the non-commuting case, thinking that the commuting case is o b v i o u s l y true because it is, well, classical….

But we should have been smarter… That the commuting Hamiltonians case cannot be viewed as simply classical is known to anyone familiar with toric codes. There we get ground states of local commuting stabilizers which are highly entangled!

Which also explains why the problem of approximating the ground energy of the commuting Hamiltonian problem is not “obviously” in NP – the witnesses may be terribly complicated!

One might think that since all terms in the Hamiltonian are simultaneously diagonalizable, we can rotate the basis of eigenstates to the standard computational basis by _local_ transformations, and so everything is locally equivalent to classical. Well, we do not know how to do this…

At this point we do not know how to give an efficient classical description of ground states of commuting Hamiltonians, which we can also use for efficient classical estimation of the energy.

I think this is a very interesting problem, the computational power of commuting Hamiltonians. Especially if it turns out that the commuting Hamiltonian case is in fact much harder than it seems at first sight… It might be as hard as the non-commuting case! (which might seem unlikely, but who knows…)

And on a different note – congratulations to Anne, Joe and Elham for receiving the honorable $10 (+$5!) first Shtetl award for their progress on the beautiful question of proving BQP to a BPP verifier! However, I have to express my protest. How come we, the other team (that is, Elad and Michael as well as myself), who heroically and independently attacked this question and arrived at similar results, did not get even one dollar?

I think the Shtetl-Optimized prize committee should correct this injustice.

Actually, while we are at it, it might be interesting to stir up the discussion here with the provocative motivations we had in mind when we approached the problem. Our team attacked this question of proving quantum dynamics to a classical observer not from the cryptographic angle, but rather from more foundational questions: the question of testing quantum mechanics.

It all starts with the following striking observation about experiments on quantum mechanics (striking to me, at least):

If we view physical dynamics as computations – inputs lead to outputs, then one can talk about quantum dynamics that are within BPP. But general quantum dynamics are, as we now believe, not simulatable in BPP. Of course, our interest in quantum computation is due to the fact that we believe that such dynamics that are outside of BPP are realizable in nature.

However, when looking for experimental evidence for quantum dynamics outside of BPP, you will find that they are impossible to find! (I discovered this when, during a conversation with Oded Goldreich, I wanted to come up with such evidence for why we so strongly believe in quantum computation…)

In fact, no property of quantum mechanics that is outside of BPP was ever tested experimentally! The reason is simply that such properties _cannot_ be tested by the usual scientific paradigm. The usual paradigm is this: perform an experiment and compare its results to the predictions of the theory, which are computed on the side. However, if we assume the modern Church-Turing thesis, all such predictions must be computed in BPP, and so if we are talking about dynamics outside of BPP, we simply have no predictions to compare to…

…so, as far as we know, all experiments on quantum mechanics could be fully consistent with an alternative theory to QM which lies within BPP!

Shor’s algorithm is striking for many reasons, but one of them is that it is the first example of a physical experiment outside the usual experimental paradigm: it is able to test quantum mechanical dynamics outside of BPP! This is because it uses _verification_, rather than prediction, to compare its results to the theory. It verifies that the product of the outcomes is the integer it needs to factor, even though the classical computer cannot predict what the factors (the outcome of the experiment) would be.
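The verification-versus-prediction distinction is worth seeing in miniature (a toy sketch of mine, not from any of the papers discussed): checking a claimed factorization takes a few multiplications, even though no known classical algorithm can produce the factors efficiently:

```python
def verify_factoring(N, factors):
    """Accept iff `factors` is a nontrivial factorization of N.
    Runs in polynomial time; the verifier never needs to predict the factors."""
    prod = 1
    for f in factors:
        if not (1 < f < N):      # reject trivial factors like 1 and N
            return False
        prod *= f
    return prod == N

# The "experiment" (e.g. Shor's algorithm on a quantum computer) outputs
# candidate factors; the classical verifier only multiplies and compares.
print(verify_factoring(15, [3, 5]))    # True
print(verify_factoring(15, [1, 15]))   # False: trivial factorization rejected
```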

So we asked whether it is even possible to test _general_ quantum dynamics despite the fact that in general they cannot be simulated in BPP, and thus there are no predictions for them.

And the result is: yes. We can test general dynamics of QM experimentally. But we need to change the notion of a physical experiment: we allow interactions between the verifier and the experimentalist.

Our answer is partial – our interactions must involve sending a few qubits at a time, and in particular, they involve quantum interactions. So the verifier is not entirely classical – it should be able to handle a few qubits at a time. Whether or not general quantum behavior can be tested using solely classical interactions, so that the verifier can be fully classical, is exactly Scott’s original question.

Which I think is a great question!

dorit

Matt, Scott, Dave: I see that I was completely neglecting the constraints on the states. Matt, very good example, it helped a lot.

Hi Dorit,

Since the Hamiltonians are simultaneously diagonalizable, in the case where all terms can be simultaneously minimized the problem would seem to essentially be a SAT instance. If the terms cannot simultaneously be minimized then the problem would seem to be in coNP rather than NP (though I could easily be wrong).

matt: I think I’m missing your point. Classically I can change a local term in a 3-SAT problem so that it can no longer be satisfied. I guess I’m missing what is different between your example and the following. Take the Ising model with ferromagnetic interactions and an external magnetic field. There is one ground state; it is (say) the all-zeros vector. Now change one of the interactions to antiferromagnetic. Of course the model does not have a way to satisfy all interactions, but this doesn’t have anything to do with quantum; it just has to do with properties of changing one of the energy terms. (I guess I also lose you when you say “doesn’t have a ground state.” Don’t all quantum systems have a ground state?)

If a Hamiltonian is made up of a sum of stabilizer elements from a stabilizer group, there is an even simpler way to look at the energy levels. Let S_1, …, S_k be the stabilizer generators. Then there is a basis |s_1, …, s_k, X> where s_i \in {+1,-1}. Every term in your Hamiltonian will be a possible product of stabilizer generators (since it is a sum of stabilizer elements). Thus the energy can be expressed as a sum of products of the s_i. Finding the minimal value for the energy is then a classical minimization problem, which is in NP.

For the toric code this just means that E = \sum_{p \neq p’} s_p + \sum_{v \neq v’} s_v + \prod_{p \neq p’} s_p + \prod_{v \neq v’} s_v. Each s_p is associated with a plaquette and each s_v is associated with a vertex. A special plaquette p’ and a special vertex v’ are singled out, because each is the product of the remaining stabilizer generators (changing which one you use doesn’t change the energy function’s form). (Of course the space of a logical qubit doesn’t enter into the energy function, but tells you that each energy level is at least this degenerate.)

Now I have no excuse about my tummy just my dense head!
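Dave’s recipe above can be run on a 3-qubit toy case (my own example, with generators S_1 = Z_1Z_2 and S_2 = Z_2Z_3, and a Hamiltonian containing the product term S_1S_2 = Z_1Z_3): the ground energy of the 8×8 matrix matches the minimum of the classical ±1 optimization over (s_1, s_2):

```python
import itertools
import numpy as np

Z = np.array([[1, 0], [0, -1]])
I2 = np.eye(2)

def kron(*ops):
    """Tensor product of a list of single-qubit operators."""
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

# Generators S1 = Z1 Z2, S2 = Z2 Z3; Hamiltonian H = S1 + S2 + S1*S2.
S1 = kron(Z, Z, I2)
S2 = kron(I2, Z, Z)
H = S1 + S2 + S1 @ S2          # note S1*S2 = Z1 Z3

# Classical minimization over generator eigenvalues s1, s2 in {+1, -1}:
# E(s1, s2) = s1 + s2 + s1*s2.
classical_min = min(s1 + s2 + s1 * s2
                    for s1, s2 in itertools.product([1, -1], repeat=2))

# Compare against brute-force diagonalization of the 8x8 matrix.
quantum_min = np.linalg.eigvalsh(H).min()
print(classical_min, quantum_min)   # both equal -1
```

Note how the product term forces frustration: no assignment makes all three terms −1 simultaneously, which is exactly the joint-eigenvalue constraint discussed above.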

“However, when looking for experimental evidence for quantum dynamics outside of BPP, you will find that they are impossible to find!”

I don’t entirely agree. You can confirm that a system performed a calculation that exceeds its computational resources, even if some larger classical computer could also perform that calculation.

Of course a skeptic could argue that your bounds on the system’s computational resources are not true. Maybe you think that the system only has room for 4 qubits (say), but it actually has room for 200 classical bits. But if you accept quantum dynamics itself, you can then conclude that the system computed more than it has a “right” to compute.

The demonstration is perhaps clearer with communication complexity than with computational complexity. I suspect that it is already feasible to demonstrate violation of classical communication complexity bounds.

The demonstration is clearer still if, in the spirit of communication complexity, you violate statistical bounds that hold classically with zero allowed communication. I.e., if you violate Bell-type inequalities. This is already related to communication complexity and communication complexity is related to computational complexity. As I see it, once you witness violation of Bell-type inequalities, the strong Church-Turing thesis is already on shaky ground as a scientific hypothesis. Why wouldn’t such a violation lead to quantum accelerations? At best, there would have to be a new reason for it not to happen.

I think perhaps the phrase quantum “dynamics” is a bit misleading. I like to call the precepts of quantum computation quantum probability. Quantum probability is clearly different from classical probability, and quantum probability can be tested and has been amply verified. Either quantum probability is not completely true (which would be new science), or there is some statistical censorship principle that prevents full access to quantum probability (which would also be new science), or quantum computation is possible in principle.

Dorit: Thanks for the interesting comment!

The reason why you haven’t received a share of the prize is extremely simple: because I haven’t seen you (or Elad, or Michael) since remembering about my offer of smaller prizes for partial results. But don’t worry, I haven’t forgotten you! Next we meet, you shall receive the proud sum of $12.00 ($12 being $15-O(1), representing the O(1) qubits held by the verifier in your protocol … but $12 also being divisible by 3, so that you and your two coauthors can share the winnings equally).

Dorit,

I gave a talk on blind quantum computing in Bristol a month or so ago, and was asked whether it was possible to perform blind computation with a constant number of rounds of computation. I’ve been thinking about this, and the answer would seem to be that it is possible. As far as I can see we can reduce our protocol to only two rounds of communication. Have you done any thinking about the number of rounds? I am wondering whether there exists a blind QC protocol which requires only one round.

“Are you claiming that there is quantum coherence across this huge molecule that encompasses thousands of atoms and is embedded in a thermal environment? I can show you a tree in back yard. Does it also illustrate the Childs et al result?”

rr: Firstly, I’m not claiming it, Alán was—I’m just reporting what he said and that it seemed pretty exciting to me. But yes, I did ask around among the physicists, and they told me it’s a well-accepted fact that chloroplasts do use quantum coherence to harvest light—indeed, that the disagreement is just about whether that’s already so well known that quantum information has little more to contribute. Apparently, what makes it possible—despite the fact that chloroplasts are relatively “large,” “thermal” systems—is that the coherence is extremely short-lived. Again, I’ll refer you to the papers for details, or maybe I can get Alán to contribute to this thread…

Hi all! First of all, I am honored to be part of the Shtetl, being a happy Guzik, whose great-grandparents moved from the Polish shtetls all the way to Mexico City a few generations ago.

Now, off to science. I am going to answer R. Tucci’s questions:

– The theory that we are working on is based on recent ultrafast experiments carried out by the Fleming group at UC Berkeley on photosynthetic complexes. These experiments show long-lived (for molecules, ~600 fs) coherent energy transfer amongst the chlorophyll molecules. The paper of Engel et al. (Nature, 2007) is noteworthy and interested us originally, as it suggested that the system was carrying out a quantum search.

http://www.lbl.gov/Science-Articles/Archive/PBD-quantum-secrets.html

We showed that this was not the case, but we still wanted to analyze the role of coherence further. Masoud Mohseni, Patrick Rebentrost, Seth Lloyd and myself started thinking about this in the context of quantum walks. We extended the idea of quantum walks to open systems (as most of the action happens while the system decoheres to the ground state after the initial light pulse). We proposed the idea of environment-assisted quantum walks to explain the role of the environment in this context.

– The next problem to tackle was understanding the role of coherence in the EFFICIENCY of energy transfer of the system. It is not enough to see nice coherent oscillations; you want to partition the quantum process into its different components (dephasing, relaxation, dissipation, coherent evolution) and ask what percentage of the process is due to each one of them. Our results show that the role of coherence in the efficiency of the complex is between 10-30% for the Fenna-Matthews-Olson complex (the molecule studied by Engel et al.).

– Finally, we thought of the Childs et al. exponential speedup (a lovely paper) and wondered if it was possible to see this exponential speedup in a chemical structure. Dendrimers (branching molecules) come to mind, as it is not hard to imagine having an excitation start at the leaves of the tree and go to the root (imagine just carrying out the final half of Childs’s algorithm). Of course, I told you that things decohere quickly, so you could ask yourself the question: does one gain any speedup under decoherence? In the third paper (Environment-Assisted Quantum Transport), we found that indeed if the dendrimer (or tree structure for the conjoined-tree case) had static disorder (mismatch in the site energies of the different nodes of the graph), the destructive interference that resulted from these imperfections could be “fought” by introducing dephasing. The interplay between dephasing and static disorder can lead to a regime where the dendrimer or tree “works again” (albeit less efficiently than if the evolution were fully coherent).

– The story became very exciting recently: Greg Scholes’s lab in Toronto found long-lived coherent transfer in a conducting polymer at room temperature (Science, 2009). This basically tells us that this is more ubiquitous than one thought, in the sense that even a polymer in solution can have these hundreds of femtosecond coherent effects.

There are many open questions still:

– To what extent can one “engineer” physical systems to benefit from these short-lived coherent effects?

– Can we learn from quantum algorithms, quantum information, etc. to help have for example, better light-harvesting systems, such as the dendrimer that Scott pasted here (the image is originally from Professor Takuzo Aida, from Japan)?

– We need better understanding of the reasons for these long-lived coherences. (The main consensus right now is correlated protein environments).

Many quantum info. people are working on this now, such as Martin Plenio, Alexandra Olaya-Castro, etc. You can check out a conference that Masoud and Yasser Omar are organizing in Portugal about the subject:

http://sqig.math.ist.utl.pt/lqcil/quebs09/home/

To get our papers on this, look at #24, 25, 27 on my publications page: http://aspuru.chem.harvard.edu/Publications/

Finally, if you are up to it, you can read a recent Discover article. It was fun to appear there, but more fun to appear here.

http://discovermagazine.com/2009/feb/13-is-quantum-mechanics-controlling-your-thoughts/article_view?b_start:int=1&-C

Well, off to sleep guys. I will check tomorrow to see if there are more comments.

Scott, your answer is a total red herring. It’s no great revelation that chemistry is based on quantum mechanics.

You said:

“which apparently is efficient at concentrating light at its center for essentially the same reason the Childs et al. quantum walk reaches the target vertex exponentially faster than a classical walk:”

That is totally false. The Childs et al. quantum walk result is for a pure state with quantum coherence over all nodes. There is no way that there is quantum coherence (for any significant amount of time) across thousands of atoms sitting inside a stew at room temperature or hotter. The fact that a moron like me has to point this out, and that nobody at the QIS conference figured it out (or you would never have said it here), speaks volumes about the competence of the QIS scientists.

Scott,

Unless I’m mistaken, your powerpoint link seems broken–I can’t open the linked file successfully.

As a side note, why don’t you use Beamer? Just curious as to your opinion.

Scott, the vienna.ppt presentation you linked to doesn’t open in Open Office 3. It says something like “general i/o error” suggesting that the file is corrupted somehow. The file I downloaded (twice) has sha1 checksum:

be21b36b83ef25f57a6f4ca561a6bfb7aca94c44

Scott,

Thanks for posting the link to Alán Aspuru-Guzik’s slides – looks like a pretty excellent talk. As a side note, these molecular graphs, and ones considerably more complex than what he’s presenting, are easily within synthetic reach of modern dendrimer chemistry. I’d bet the turn-around time for experimental verification of a particularly intriguing or efficient structure would be no more than a year or two.

Where does his classical cost for factoring come from, though?

Shouldn’t it be O(e^((ln(n))^(1/3)*(ln(ln(n)))^(2/3))) for the number field sieve algorithm?

Very nice post. Can you say something on Leggett’s lecture and, in particular: what is the definition of the “disconnectivity” measure D, and what is the mysterious K defined in the last slide? And a technical question: how does one open Freedman’s presentation?

Sorry, everyone! My presentation file was indeed corrupted. I can’t figure out what the problem is: it opens fine on my computer, then I upload it and download it again and it doesn’t work. Anyway, I changed the link to point to the version of my presentation on Caltech’s site, which works fine.

KaoriBlue: In his chart, n is the number of bits in the number; it’s not the number itself.

Dave, everywhere I said “ground state”, I meant “zero energy ground state”.

The classical example you mention shows a case where again you can satisfy the problem locally, but not globally. The difference, though, is that in the classical case there is the very important property that the set of all the local reduced density matrices is globally consistent, and that this global consistency of density matrices is easy to check since the local reduced density matrices are antiferromagnets. In the quantum cases, the local reduced density matrices are not globally consistent and this is harder to check. That is, you can’t just give the local reduced density matrices as a witness.

The case I mention is too easy, of course, as would be any stabilizer Hamiltonian, but maybe there are harder cases.

sorry, total typo in that…replace “are antiferromagnets” with “are pure”

“As a side note, why don’t you use Beamer? Just curious as to your opinion.”

Can Beamer make a giant hammer fall onto the slide, smashing papers that violate the BBBV theorem? Can it make a baseball encyclopedia fall into a black hole and then bounce back out? Or make the black hole expand until it swallows up the slide? Can it make an Uncle Sam with a ψ in his hat rise onto the page, saying “That’s why YOU should care about quantum computing”? Does Beamer easily let you create a binary tree with copies of Gwyneth Paltrow at the bottom? What about a spiky-tailed hydra whose two heads are Razborov and Rudich?

If Beamer can do all these things, then yes, I’ll consider switching from PowerPoint.

Ah, stupid mistake – I should have thought before posting on that.

Concerning the local Hamiltonian problem for commuting Hamiltonians, it was actually studied some time ago by Bravyi and Vyalyi in quant-ph/0308021. They not only mentioned that the problem might not be in NP in general, but actually showed that for 2-local Hamiltonians it is: they proved that there is always an efficient classical description for the ground state of a 2-local commuting Hamiltonian (see corollary 1 of their paper). That’s of course in contrast to the general case, by the QMA-completeness of 2-local Hamiltonians proved by Kempe, Kitaev and Regev. It would be great to find out what happens in the general case!

Changing topic, I would like to add a remark to the discussion about interactive proofs for quantum computation. In my opinion there are two main reasons why such a result would be extremely interesting (not considering the cryptographic applications):

First, we, believers of quantum computation and hence of the correctness of quantum theory in the large scale, would like to have an efficient way to test whether a very large quantum system (which we cannot simulate on our desktop computer) behaves in the way we think it does. Here we are happy to assume that quantum mechanics is correct, but are skeptical about whether e.g. we have full knowledge of the system’s Hamiltonian. In this setting – which would be very relevant if we manage to build a quantum computer – the two great papers by Broadbent/Fitzsimons/Kashefi and Aharonov/Ben-Or/Eban essentially solve the question (positively!). We could of course ask for a protocol with a classical verifier, but in my opinion that’s a not-so-important detail…

But there is also another reason why interactive proof systems for quantum computation would be great, more in the spirit of Scott’s $25 question: can we convince a skeptic of quantum mechanics that we are able to solve any problem in BQP, if we only have access to BQP? In this situation, a protocol whose soundness proof assumes the correctness of quantum theory is not good enough: the skeptic might think you have access to more than what is possible by quantum mechanics, an extra power which you could use to cheat him. In this context, the interactive proofs presented in the two papers are not good enough, as they assume the correctness of quantum theory (please correct me if I’m wrong :-). Of course, the full $25 shtetl question would be satisfactory also in this context.

What I think is a nice question is whether there are intermediate settings: for example, could the soundness proofs of either of the two papers (which assume quantum theory is correct) be generalized to a broader class of theories (of course I’m being too sloppy; I guess that finding a meaningful set of such theories would also be part of the problem :-)?

Scott writes:

> Assuming this result is correct, it seems possible that the remaining open problems in quantum query complexity will be pulverized, one after another, by solving the associated SDPs for the optimal span programs.

I would love to see whether this method yields nontrivial lower bounds for hidden subgroup problems. As of now, such bounds could be derived for Abelian groups only (using the polynomial method, as in the collision lower bound).

Greg: “I suspect that it is already feasible to demonstrate violation of classical communication complexity bounds.”

Greg, This is very interesting and I would be quite surprised if it is true. What would be the simplest way to demonstrate it?

Just a quick comment on Alán Aspuru-Guzik’s talk, which was indeed fantastic and inspirational.

A common misconception is that the graph on which a (continuous-time) quantum walk occurs has a direct realization in physical space. In fact, the graph on which the walk occurs is an abstract graph, whose vertices are labeled by states in a Hilbert space, not locations in physical space. From a complexity point of view, quantum walks are most interesting when they occur on an exponentially large graph realized using only polynomially many qubits. Certainly (continuous-time) random walks also occur on exponentially large graphs realized by only polynomially many “pbits,” but such walks can’t exhibit interference effects. On the other hand, classical wave dynamics can exhibit interference effects, but not efficiently on exponentially large graphs.
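To make the “exponentially large graph on polynomially many qubits” point concrete, here is a small numpy sketch (my own illustration, not from Alán’s talk) of a continuous-time quantum walk on the n-dimensional hypercube: the graph has 2^n vertices, but its adjacency matrix is just a sum of n single-qubit Pauli-X terms, so the evolution factorizes into single-qubit rotations:

```python
import numpy as np

n = 10                     # qubits -> hypercube graph with 2**n vertices
t = np.pi / 2              # walk time

# The hypercube adjacency matrix is H = sum_i X_i, so the walk operator
# exp(-iHt) is a tensor product of single-qubit rotations exp(-iXt).
u = np.array([[np.cos(t), -1j * np.sin(t)],
              [-1j * np.sin(t), np.cos(t)]])
U = np.array([[1.0 + 0j]])
for _ in range(n):
    U = np.kron(U, u)

psi0 = np.zeros(2 ** n, dtype=complex)
psi0[0] = 1.0              # start the walker at vertex 00...0
psi = U @ psi0

# At t = pi/2 the walker is at the antipodal vertex 11...1 with
# probability 1 ("perfect state transfer" across the hypercube).
print(abs(psi[-1]) ** 2)
```

The graph here lives in Hilbert space, not physical space: ten qubits suffice to walk on a 1024-vertex graph, which is exactly the distinction the paragraph above is drawing.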

Alán’s picture of a light-harvesting molecule has the unfortunate side-effect of potentially luring folks into the state graph = physical layout fallacy. I think the main point that Alán and his collaborators are making is that quantum coherence plays a non-negligible role in transport in such structures, contrary to (at least my) intuition that decoherence would totally wash out quantum coherence effects.

I think it’s too strong to say that Alán and his collaborators have found evidence that Nature has found a way to harness exponential speedups in the quantum query model à la the welded binary tree problem. Instead, they’ve pointed out the (surprising) importance of quantum coherence in a room-temperature process. That said, perhaps a sufficiently advanced civilization will figure out a way to leverage exponential quantum speedups to improve device design, such as for photovoltaics. The jury is still definitely out, in my opinion.

I’m waiting for the day when we find a natural process that implements quantum error correction!!!

“I would love to see whether this method yields nontrivial lower bounds for hidden subgroup problems.”

Unfortunately no, because it is already known (Ettinger, Hoyer, Knill, quant-ph/0401083) that the query complexity is very low.

“I suspect that it is already feasible to demonstrate violation of classical communication complexity bounds.”

“Greg, This is very interesting and I would be quite surprised if it is true. What would be the simplest way to demonstrate it?”

Gil: At the moment this is no more than a suspicion, and I think that it would be good to figure out how to test any such communication bounds, or determine that implicitly they have already been tested.

Again, my intuition is that the violation of Bell’s inequalities is similar in spirit to the violation of communication complexity bounds. But note that for the latter there is a certain apples-vs-oranges aspect, because quantum communication complexity is based on sending qubits, while classical communication complexity is based on sending bits. Although you can reduce the former to the latter (with dense coding) if you allow shared prior entanglement.

In fact — I need to think this through better — I wonder if the known violation of Bell’s inequalities is already a violation of a classical bound in the “refereed simultaneous messages” model. It seems quite possible that if Alice and Bob share prior entanglement, they can reduce the amount that they tell the referee, and that there are examples that can already be demonstrated with existing quantum devices.
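Since the discussion keeps invoking Bell-inequality violations as the prototype, here is a quick numerical check (my own sketch, not from any of the papers mentioned) that the singlet state beats the CHSH bound of 2, which constrains any classical local-hidden-variable strategy:

```python
import numpy as np

def obs(theta):
    """Spin observable at angle theta in the X-Z plane: cos(t) Z + sin(t) X."""
    return np.array([[np.cos(theta), np.sin(theta)],
                     [np.sin(theta), -np.cos(theta)]])

# Singlet state (|01> - |10>)/sqrt(2), basis order |00>, |01>, |10>, |11>
psi = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)

def corr(a, b):
    """Correlation <psi| A(a) (x) B(b) |psi>; equals -cos(a - b) here."""
    return float(np.real(psi.conj() @ np.kron(obs(a), obs(b)) @ psi))

a1, a2 = 0.0, np.pi / 2          # Alice's two settings
b1, b2 = np.pi / 4, 3 * np.pi / 4  # Bob's two settings
S = corr(a1, b1) + corr(a2, b1) + corr(a2, b2) - corr(a1, b2)
print(abs(S))   # ~2.828 = 2*sqrt(2), above the classical bound of 2
```

Whether the analogous violation of a classical *communication complexity* bound is already experimentally demonstrable, as Greg suspects, is of course a separate and harder question.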

Greg: No, he wants a (polynomial, not exponential) lower bound on the query complexity anyway. We actually have an Ω(n) lower bound on the quantum query complexity of Simon’s problem, and it’s quite beautiful (an unexpected use of the polynomial method for a highly non-symmetric problem).

Thank you Scott. May I add that the same beautiful method yields a tight lower bound for all Abelian groups.

Basically no lower bound is known for non-Abelian groups.

In particular, the method seems to break down completely for dihedral groups.

There are some lovers of dihedral groups among the regular readers of this blog, so my challenge is: prove a nonconstant lower bound on the query complexity of the hidden subgroup problem in dihedral groups! Note that I’m not even asking for an optimal lower bound.

By the way, it looks like it might be possible to adapt the Abelian lower bound method to some extraspecial groups (see quant-ph/0701235 for some upper bounds).

But I’ve never taken the time to think about it carefully.

Gil: Sorry for the delay!

“Can you say something on Leggett’s lecture and, in particular: What is the definition of the ‘disconnectivity’ measure D; and what is the mysterious K defined in the last slide?”

Leggett’s lecture was extremely interesting as usual (I’ve heard many previous lectures by him on similar themes). Alas, I don’t remember the definition of “disconnectivity,” but I do remember Leggett saying in the lecture that he’s no longer happy with it. (Nor, alas, do I remember what K is.)

“And a technical question: how to open Freedman’s presentation?”

I was able to open it—but it’s a pptx file, which means that it’s not going to open in older versions of PowerPoint. (Complain to Freedman’s employer about that!) Special for you, I’ve converted his presentation into the old PowerPoint format and uploaded it here.

Thanks a lot Scott & Greg.

Hi!

Sorry for the delay, was out of the loop for a few days… Answering Fernando first.

My own motivation for thinking about those interactive proofs with quantum computers was really the question of whether and to what extent it is possible to test quantum mechanics if we don’t (yet?) believe in it fully (and it _is_ a question of “belief” and “religion” at this stage, regarding the aspects of quantum mechanics that are outside of BPP, given the (striking!) realization that these non-BPP aspects were never in fact tested, and we have no direct experimental evidence for them – except for the fact that the theory extrapolates nicely from the things we do have direct experimental evidence for).

Obviously physically realizing Shor’s algorithm would be some kind of such a test for the non-BPP aspects of QM, and quite a convincing one. But it seems that it will take some time to get such experiments actually performed… One can ask whether we can run tests on smaller physical systems, tests that we can already run today. Also, we can ask whether we could, in the future, run tests that would really test the full quantumness of quantum mechanics – its BQP nature – after all, Shor’s algorithm is not quantum-complete.

And it turns out that the protocols we suggest (and those of Broadbent-Fitzsimons-Kashefi too) in fact do provide a quite satisfying, though still somewhat partial, positive answer to this question. Yes, we can test quantum mechanics while assuming its correctness only on very small systems, plus some very weak assumptions on larger systems. So you do not have to believe in quantum mechanics to get convinced – but you do have to assume SOMETHING – you do have to assume something that generalizes both known quantum _and_ classical dynamics.

And then you can get convinced that the system is really quantum.

I am in fact writing an article for a philosophy of science anthology which explains this whole line of argument: starting from what I mean by the fact that the non-BPP aspects of quantum mechanics were not tested so far, and how (and what aspects of them, and in what sense, and under what assumptions) we can in fact test them if we use interactive experiments – namely, interactive proofs with Nature.

It is very interesting to try to say these things more formally, though the essential ideas are explained already in the original A-Ben-Or-Eban paper.

dorit

Hey, now about Greg’s comment – I am not sure I agree with your disagreement…

“I don’t entirely agree. You can confirm that a system performed a calculation that exceeds its computational resources, even if some larger classical computer could also perform that calculation.”

The point is that you cannot confirm it in the _usual paradigm_ (i.e., without interactions or verifications, but just by comparing predictions to experimental outcomes), _unless_ a classical computer can compute it…

Perhaps I am missing your point, in this argument?

anyway, I think I disagree with what I thought was your main point:

“I think perhaps the phrase quantum “dynamics” is a bit misleading. I like to call the precepts of quantum computation quantum probability. Quantum probability is clearly different from classical probability, and quantum probability can be tested and has been amply verified. Either quantum probability is not completely true (which would be new science), or there is some statistical censorship principle that prevents full access to quantum probability (which would also be new science), or quantum computation is possible in principle.”

You are right about the three possibilities. The point is that though quantum probability has been amply tested and verified, it was NOT tested, and in principle CANNOT be tested in its non-BPP aspects, using the usual predict-and-compare scientific paradigm. So there may still be a theory which is an alternative to QM – new science indeed – that is simulatable within BPP and which agrees with all known experiments.

BTW – I am not sure why you moved to the static probabilities point of view rather than talking about dynamics. After all, we are talking about computations, which have everything to do with dynamics. The question of which states can exist in quantum mechanics is a related though somewhat different question… How do you translate BPP into your probability language?

dorit

Fernando, thanks for pointing out that paper of Bravyi and Vyalyi (quant-ph/0308021) which defined the commuting Hamiltonian problem. It looks beautiful; I was not aware of it!

I think this is a beautiful and very counter intuitive problem…

I am wondering if there would be some brave people here who would be willing to bet regarding its computational complexity for large constant k, QMA-complete or in NP or somewhere in between???

dorit

Sure, Dorit. My bet: the commuting Hamiltonian problem is the same complexity class as quantum PCP, and they both are NP^{DQC_1}-complete.

Hey Scott (or anyone): do physics whizzes these days believe (and does QC require) that there is such a thing as true randomness in the physical universe? That is, suppose I flip a coin 2 million times and interpret the result as a bit string X. Since it’s a real physical coin, there are likely to be some small sources of bias and correlation in the flips, and there’s a tiny chance that I might get a low-complexity string like two million zeros, but can I generally say with very high confidence that X has Kolmogorov complexity at least 1 million? (I.e., X cannot be compressed to less than half its length.)
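(For what it’s worth, the “very high confidence” is just a counting argument: there are fewer than 2^k programs shorter than k bits, so at most a 2^(k-n) fraction of n-bit strings can be compressed below k bits. A toy check, using zlib as a crude stand-in for Kolmogorov complexity, which it of course is not:)

```python
import os
import zlib

def log2_fraction_bound(n_bits: int, k_bits: int) -> int:
    """log2 of an upper bound on the fraction of n-bit strings that are
    compressible below k bits: fewer than 2**k programs exist, so the
    fraction is under 2**(k - n)."""
    return k_bits - n_bits

# For 2 million coin flips and the claim K(X) >= 10**6:
print(log2_fraction_bound(2_000_000, 1_000_000))   # -> -1000000

# Empirical proxy: random data doesn't compress, structured data does.
random_bytes = os.urandom(10_000)
structured = b"0" * 10_000
print(len(zlib.compress(random_bytes)))   # slightly MORE than 10000
print(len(zlib.compress(structured)))     # a few dozen bytes
```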

I’m asking this because if X is really random, the arithmetic proposition “K(X) > 10^6” (let’s call this proposition P) is almost certainly true, and yet (per Chaitin’s version of Gödel’s incompleteness theorem), P is surely independent of any reasonable axiomatic theory of arithmetic comprehensible by humans. So where does the universe get access to these extremely complex and utterly unprovable arithmetic truths? Is that question interesting from a logic point of view? It sounds that way to me, but maybe I’m wrong.

Also, am I correct that P is a Pi-0-1 proposition? I think it says: “for all n, the Turing machine T1 has not halted leaving X on the output tape, same for T2, same for T3, …” where T1,T2,… is the (finite) enumeration of all Turing machines with low enough complexity to refute P. I think P being Pi-0-1 would put it in reach of some reflected version of Peano arithmetic.

This question was inspired by an article by Leonid Levin: http://www.cs.bu.edu/fac/lnd/expo/gdl.htm

Matt, seems you have been working too much with computational complexity scientists lately! Be careful, you have no idea what dark alleys it can take you to. BTW, DQC is already taken… by too many:

DQC Digital Quaker Collection

DQC Data Quality Control

DQC Double Quarter Column (print advertising)

DQC Double Quantum Coherence

DQC Design Quality Control

DQC Data Quality Center

DQC Delta Quadrant Command (game)

DQC Delaware Quality Conference

DQC Definite Quantity Contract

DQC Directional Quasi-Convexity

DQC Dame Quaker’s Club (gaming)

DQC Dream Quest Communications

DQC Dosimetric Quality Control

DQC Domain Quick Connect (Apollo)

DQC Disk Quota Control (Microsoft Windows NT)

DQC Deemed QC

DQC Division Quality Council

DQC Dose and Quality Control

DQC Departmental Qualification Course

DQC Doomed Quantum Computation

Dorit: Well, I am not sure whether we truly disagree or not. I could say more, but maybe we would partly be arguing over semantics. It is difficult to clear up a semantic confusion in a blog thread; but maybe when we next meet in person…?

Okay, I will say a little more, and try to steer the discussion away from semantics.

My position is that yes, while technically you cannot test whether any dynamics exists outside of BPP without actually building a quantum computer, such a test is only impossible if you are partly skeptical of quantum mechanics itself. If you believe quantum mechanics, then you can do a test in which a very small physical system computed more than is classically possible using its own limited computational resources, even if it can be simulated by some other, much larger computer.

But again, if you do not entirely believe quantum mechanics, then who is to say how much computational power is available to ten trapped ions, say. Maybe they have twenty megabytes of storage and the equivalent of a megahertz processor. That would be plenty to simulate ten qubits, and it would also be easy to simulate with modern computers, so it would prove nothing that the ten ions can do a Grover search.

“Do physics whizzes these days believe (and does QC require) that there is such a thing as true randomness in the physical universe?”

Quantum computation does not separately require “true randomness”. However, true randomness is a consequence (not an assumption) of quantum probability, which is my name for the axiomatic foundation of quantum mechanics. In other words, you can construct a coin flip using a measurement protocol with some qubits. That coin flip may or may not be “random” in some philosophical sense; but if you or any other physical actor can predict this quantumly produced coin flip, even with the aid of a large computer, then quantum mechanics is wrong.
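Concretely, the measurement protocol I have in mind is just: prepare (|0> + |1>)/sqrt(2) and measure in the computational basis. A toy classical simulation (which, of course, has to fall back on a pseudorandom generator; the whole point is that the real thing would not):

```python
import numpy as np

rng = np.random.default_rng()

def quantum_coin_flip() -> int:
    """Simulate measuring H|0> = (|0> + |1>)/sqrt(2).
    The Born rule assigns each outcome probability exactly 1/2; quantum
    probability asserts no physical observer can do better than that.
    (This simulation, being classical, substitutes a PRNG.)"""
    psi = np.array([1.0, 1.0]) / np.sqrt(2)   # state after the Hadamard
    probs = np.abs(psi) ** 2                  # Born rule: [0.5, 0.5]
    return int(rng.choice(2, p=probs))

flips = [quantum_coin_flip() for _ in range(10_000)]
print(sum(flips) / len(flips))   # close to 0.5
```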

Dorit, thanks for the explanations!

“Yes, we can test quantum mechanics while assuming its correctness only on very small systems, plus some very weak assumptions on larger systems. So you do not have to believe in quantum mechanics to get convinced – but you do have to assume SOMETHING – you do have to assume something that generalizes both known quantum _and_ classical dynamics.”

That’s really cool. Is that in your paper (perhaps it’s implicit and I missed it)? If it’s not too complicated, could you tell what would be the assumptions you need on larger systems?

I’d bet the commuting problem is in NP, or at most in QCMA: this paper, arXiv:0803.1447, has some nice results concerning frustration-free commuting Hamiltonians which seem to point in this direction, modulo the fact that I don’t understand the argument.

Correction, meant to say “for all n, after running for n steps, Turing machine T1 has not halted…”

It also turns out that Levin has an arxiv paper linked from the article I linked to, where he considers a question sort of like mine. He hypothesizes that these physics experiments won’t reveal anything mathematically interesting, and presents that a bit more formally, as sort of an extended Church-Turing thesis.

Greg, thanks for the response (I didn’t notice it at first, deep down in your comment). I’m not sure what it means to be random in a “philosophical sense”. Certainly, the proposition that a string is compressible to half its length is a purely mathematical property of the string, which is provable if true (though maybe not by any feasible computation). If the proposition is false, its falsehood is generally unprovable using any reasonable mathematical axioms. It is algorithmically undecidable whether a given string is compressible like that or not. So the rather curious thing is that if there is randomness, our creaky old physical universe is able to supply us with endless supplies of almost-certain mathematical truths that we could never generate with any recursive algorithms.

I wonder if you have any idea how many transistors are in the simplest calculator (I mean one without sine, cosine and so on, with only +, -, /, * and a precision of about 6-8 decimal digits). How many NAND gates (each consisting of about 4 or 6 transistors), or other gates, would be needed to build such a simple calculator? Maybe there is a drawn schematic somewhere on the internet; it seems very interesting and useful. Why don’t you analyze, or even teach, this in university, with such great examples (a schematic of the simplest calculator should be in every textbook)?
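For what it’s worth, the arithmetic core of such a calculator is built from adders, and a 1-bit full adder has a classic construction from 9 NAND gates. A quick sanity check in Python (a sketch, using the usual rule of thumb of about 4 transistors per CMOS NAND gate):

```python
def nand(a: int, b: int) -> int:
    return 1 - (a & b)

def full_adder(a: int, b: int, cin: int):
    """Classic 9-NAND full adder; returns (sum_bit, carry_out).
    At ~4 transistors per CMOS NAND, that's roughly 36 transistors per
    bit of addition, before registers and control logic."""
    n1 = nand(a, b)
    n2 = nand(a, n1)
    n3 = nand(b, n1)
    n4 = nand(n2, n3)        # a XOR b
    n5 = nand(n4, cin)
    n6 = nand(n4, n5)
    n7 = nand(cin, n5)
    s = nand(n6, n7)         # (a XOR b) XOR cin
    cout = nand(n5, n1)      # majority(a, b, cin)
    return s, cout

# Exhaustively verify the gate network against ordinary addition.
for a in (0, 1):
    for b in (0, 1):
        for cin in (0, 1):
            s, cout = full_adder(a, b, cin)
            assert 2 * cout + s == a + b + cin
print("full adder ok")
```

Chaining such full adders gives a ripple-carry adder; a real calculator also needs BCD conversion, registers and a control unit, which is where most of the transistor budget actually goes.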

“I’m not sure what it means to be random in a ‘philosophical sense’. Certainly, the proposition that a string is compressible to half its length is a purely mathematical property of the string, which is provable if true (though maybe not by any feasible computation).”

All I meant by “philosophical” was that a quantumly generated coin flip might possibly not be random to “God”. But it cannot be predicted by observers that live within the universe.

Fernando,

I’m not sure exactly what assumptions are made in Dorit’s paper, but in ours the only necessary requirement for the larger system is that operations are linear. If there is a way to perform non-linear operations deterministically then the privacy theorem does not hold.

There is another point worth mentioning about cases where digital computation is computationally sufficient for simulating quantum physics. The difference between simulating quantum physics with digital computers and simulating it with quantum computers is like the difference between a computer simulation of a human heart and an operating artificial heart. Creating an EPR pair is not the same as describing its properties on your laptop.

Another thought about it is that many physics experiments can be regarded as “depth one quantum circuits,” so moving from depth one to depth two can already be powerful. (Again, I am not talking about computational complexity but about creating interesting quantum states.)

Thank you for the PPT