PHYS771 Lecture 13: How Big are Quantum States?

Scott Aaronson

Scribe: Chris Granade

I've received some complaints that we've done too much complexity and not enough physics, so today we're going to talk about physics. In particular, we're going to talk about QMA, BQP/qpoly and many other physics topics. Maybe I should explain my point of view, which is that I've been bending over backwards to give you as much physics as possible. Here's where I'm coming from: there's this traditional hierarchy where you have biology on top, chemistry underlying it, and physics underlying chemistry. If the physicists are in a generous mood, they'll say that math underlies physics. Then, computer science is off somewhere with soil engineering or some other non-science.

Now, my point of view is a bit different: computer science is what mediates between the physical world and the Platonic world. With that in mind, "computer science" is a bit of a misnomer; maybe it should be called "quantitative epistemology." It's sort of the study of the capacity of finite beings such as ourselves to learn mathematical truths. I hope I've been showing you some of that.

Gus: How do you reconcile this with the notion that any actual implementation of a computer must be based on physics, so wouldn't the order of physics and CS be reversed?

This is not a very well-defined question. One could also say that any mathematical proof has to be written on paper, and so math should therefore go above physics. You could also say that math is basically the field that studies whether particular kinds of Turing machines will halt or not, and so CS is the ground that everything else sits on. Math is then just the special case where the Turing machines deal with topological spaces or something. But then, the strange thing is that physics has been kind of seeping down towards math and CS. This is kind of how I think of quantum computing: physics isn't staying where it's supposed to! If you like, I'm professionally interested in physics precisely to the extent that it seeps down into CS!

I think that it's helpful to classify interpretations of quantum mechanics, or at least to reframe debates about them, by asking where various interpretations come down on the question of the exponentiality of quantum states. To describe the state of a hundred or a thousand atoms, do you really need more classical bits of information than you could write down in the universe?

Roughly speaking, the Many-Worlds interpretation would say "absolutely." This is a view that David Deutsch defends very explicitly: if the different universes (or components of the wave function) used in Shor's algorithm are not physically there, then where was the number factored?

We also talked about Bohmian mechanics, which says "yes," but that one component of the vector is "more real" than the rest. Then, there is the view that used to be called the Copenhagen view, but is now called the Bayesian view, the information-theoretic view or one of a host of other names.

On the Bayesian view, a quantum state is an exponentially long vector of amplitudes in more-or-less the same sense that a classical probability distribution is an exponentially long vector of probabilities. If you were to take a coin and flip it 1000 times, you would have some set of 2^1000 possible outcomes, but we don't, on account of that, decide to regard all of those outcomes as physically real.
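To make the analogy concrete, here is a minimal Python sketch (my own illustration, with small numbers) of a classical probability distribution over coin flips as an exponentially long vector:

```python
import numpy as np

# Even 10 flips give a 1024-entry vector; 1000 flips would need a vector
# of length 2**1000 -- more entries than we could ever write down.
n = 10
dist = np.full(2**n, 0.5**n)   # uniform distribution over all n-flip outcomes

print(len(dist))                      # 1024
print(np.isclose(dist.sum(), 1.0))    # True: it's a valid distribution
```

Nobody regards all 1024 outcomes as simultaneously real; the vector just encodes our uncertainty about which single outcome will occur.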

At this point, I should clarify that I'm not talking about the formalism of quantum mechanics; that's something that (almost) everyone agrees about. What I'm asking is whether quantum mechanics describes an actual, real "exponential-sized object" existing in the physical world. So, the move that you make when you take the Copenhagen view is to say that the exponentially-long vector is "just in our heads."

Niel: In the sense of a classical probability distribution, isn't this just a different way of putting what you've said about the Bohmian view?
Scott: So the Bohmian view is this strange kind of intermediate position. In the Bohmian view, you do sort of see these exponential numbers of possibilities as somehow real; they're the guiding field, but there's this one "more real" thing that they're guiding. In the Copenhagen interpretation, these exponentially many possibilities really are just in your head. Presumably, they correspond to something in the external world, but what that something is, we either don't know or aren't allowed to ask. Chris Fuchs says that there's some physical context to quantum mechanics -- something outside of our heads -- but that we don't know what that context is. Niels Bohr tended to make the move towards "you aren't allowed to ask."

Now that we have quantum computing, can we bring the intellectual arsenal of computational complexity theory to bear on this sort of question? I hate to disappoint you, but we can't resolve this debate using computational complexity. It's not well-defined enough. Although we can't declare one of these views to be the ultimate victor, what we can do is to put them into various "staged battles" with each other and see which one comes out the winner. To me, this is sort of the motivation for studying all sorts of questions about quantum proofs, advice, and communication. The question that all of these are trying to get at is whether a quantum state of n qubits acts more like n classical bits, or more like 2^n bits. Note that there's this kind of exponentiality in our formal description of a quantum state, but we want to know to what extent we can actually get at it, or root it out.

We have all these complexity classes, and they seem kind of esoteric. Maybe it's just a bad historical accident that we use all of these acronyms. It's like the joke about the prisoners, where one of them calls out "37" and all of them fall on the floor laughing, then another calls out "22" and no one laughs, because it's all in the telling. There are all of these enormous concepts and ideas, and we encapsulate them in these sequences of three or four capital letters, and maybe we shouldn't do that.

QMA. You can think of it as the set of truths such that, if you had a quantum computer, you could be convinced of the answer by being given a quantum state. More formally, it's the set of problems that admit a polynomial-time quantum algorithm Q such that for every input x the following holds: if the answer is "yes," then there exists a quantum state |φ⟩ on poly(n) qubits such that Q accepts (x, |φ⟩) with probability at least 2/3; while if the answer is "no," then Q accepts (x, |φ⟩) with probability at most 1/3 for every state |φ⟩.

What I mean is that the number of qubits of |φ⟩ should be bounded by a polynomial in the length n of x. You can't be given some state of 2^n qubits. If you could, then that would sort of trivialize the problem.

We want there to be a quantum state of reasonable size that convinces you of a "yes" answer. So when the answer is "yes," there's a state that convinces you, and when the answer is "no," there's no such state. QMA is sort of the quantum analogue of NP. Recall that we have the Cook-Levin Theorem which gives us that the Boolean satisfiability problem (SAT) is NP-complete. There is also a Quantum Cook-Levin Theorem -- which is a great name, since both Cook and Levin are quantum computing skeptics (though Levin much more so than Cook). The Quantum Cook-Levin theorem tells us that we can define a quantum version of the 3SAT problem, which turns out to be QMA-complete as a promise problem.

A promise problem is a problem where you only have to get the right answer if some promise on the input holds. If you, as the algorithm, have been screwed over by crappy input, then any court is going to rule in your favor, and you can do whatever you want. It may even be very hard computationally to decide whether the promise holds, but that's not your department. There are certain complexity classes for which we don't really believe there are complete problems, but for which there are complete promise problems. QMA is one such class. The basic reason we need a promise is the gap between 1/3 and 2/3. Maybe you'd be given some state that you'd accept with a probability that is neither greater than 2/3 nor less than 1/3. In that case, the state has done something illegal, and so we assume that you aren't given such a state.

So what is this quantum 3SAT problem? Basically, think of n qubits stuck in an ion trap (since I'm supposed to be bringing in some physics), and now we describe a bunch of measurements, each of which involves at most three of the qubits. Each measurement i accepts with some probability p_i. These measurements are not hard to describe, since they each involve at most three qubits. Let's say that we add up the acceptance probabilities of all of the measurements. Then, the promise is that either there is a state for which this sum is very large, or else for all states the sum is much smaller. The problem is to decide which of the two conditions holds. This problem is QMA-complete in the same sense that its classical analogue, 3SAT, is NP-complete. This was first proved by Kitaev, and was later improved by many others.
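For intuition, here is a toy numpy sketch (my own illustration, not Kitaev's actual construction) of the key linear-algebra fact behind the problem: the maximum over all states of the total acceptance probability of a set of 3-local measurements is the largest eigenvalue of the sum of the corresponding 3-qubit operators.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4                       # toy number of qubits; dimension 2**n = 16

def three_local_projector(q):
    """Rank-1 projector onto a random state of qubits q, q+1, q+2
    (taken adjacent purely to keep the tensor products simple),
    embedded into the full n-qubit space."""
    v = rng.normal(size=8) + 1j * rng.normal(size=8)
    v /= np.linalg.norm(v)
    P3 = np.outer(v, v.conj())
    return np.kron(np.kron(np.eye(2**q), P3), np.eye(2**(n - q - 3)))

# Sum the acceptance operators of m three-local "measurements".
m = 5
M = sum(three_local_projector(rng.integers(0, n - 2)) for _ in range(m))

# Best achievable total acceptance probability = top eigenvalue of the sum.
best = np.linalg.eigvalsh(M)[-1]
print(0 <= best <= m)       # True: each projector contributes at most 1
```

The promise in the real problem is that this top eigenvalue is either large or small, with a gap between the two cases.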

The real interest comes with the question of how powerful the QMA class is. Are there truths that you can verify in a reasonable amount of time with quantum computers, but which you can't verify with a classical computer? This is an example of what we talked about earlier, where we're trying to put realistic and subjective views of quantum states into "staged battle" with each other and see which one comes out the winner.

There's a result of John Watrous which gives an example where it seems that being given an exponentially long vector really does give you some sort of power. The problem is called group non-membership. You're given a finite group G. We think of this as being exponentially large, so that you can't be given it explicitly by a giant multiplication table. You're given it in some more subtle way. We will think of it as a black-box group, which means that we have some sort of black box which will perform the group operations for you. That is, it will multiply and invert group elements for you. You're also given a list of generators of the group.

From the floor: How long is this list?
Scott: Polynomially long. Good question.

Each element of the group is encoded in some way by some n-bit string, though you have no idea how it's encoded.

Gus: So what's n here?
Scott: Well, you can define n to be the number of bits needed to write down one of the group elements.
Gus: It seems like the whole point, though, is that the number of elements in the group is exponential in the number of generators.
Scott: Yes. You're right.

So now we're given a subgroup H ≤ G, which can also be specified by a list of generators. Now the problem is an extremely simple one: we're given an element x of the group, and we want to know whether or not it's in the subgroup. I've specified this problem abstractly in terms of these black boxes, but you can instantiate it if you have a specific example of a group. For example, the generators could be matrices over some finite field, and you're given some other matrix and asked whether you can get to it from your generators. It's a very natural question.

Let's say the answer is "yes." Then, could that be proven to you?

Devin: Show how x was generated.
Scott: Yes. There's one thing you need to say (not a very hard thing), which is that if x ∈ H, then there is some "short" way of getting to it. Not necessarily by multiplying the generators you started with, but by recursively generating new elements and adding those to your list, and using those to generate new elements, and so on.

For example, suppose we start with the group ℤ_n, the additive group of integers modulo n, and the single starting element 1. We could just keep adding 1 to itself, but it would take us a while to get to 2^5000. But if we recursively build 2 = 1 + 1, 4 = 2 + 2 and so on, repeatedly applying the group operation to our newly generated elements, we'll get to whatever element we want quickly.
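As a sketch of this recursive-doubling idea (my own toy illustration in ℤ_n, with made-up numbers), here is how to reach a huge element using only logarithmically many group operations:

```python
def reach(target, n):
    """Reach `target` in the additive group Z_n starting from the single
    generator 1, using only the group operation (addition mod n).
    We recursively build 2 = 1+1, 4 = 2+2, ..., then add together the
    powers of two in target's binary expansion.
    Returns (element_reached, number_of_group_operations_used)."""
    ops = 0
    power = 1        # current power-of-two element, starting at the generator
    acc = None       # running sum of the selected powers
    t = target
    while t:
        if t & 1:
            if acc is None:
                acc = power
            else:
                acc = (acc + power) % n   # one group operation
                ops += 1
        power = (power + power) % n       # one doubling = one group operation
        ops += 1
        t >>= 1
    return acc, ops

elem, ops = reach(123456789012345678, 10**18 + 9)
print(elem == 123456789012345678)   # True: we reached the target...
print(ops < 150)                    # ...in far fewer than `target` operations
```

Naive repeated incrementing would need about 10^17 additions here; doubling needs about two group operations per bit of the target.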

Q: Is it always possible to do it in polynomial time?
Scott: Yes. For any group. The way to see that is to construct a chain of subgroups from the one you started with. It takes a little work to show, but it's a theorem of Babai and Szemerédi, which holds whether or not the group is solvable.

Now here's the question: what if x ∉ H? Could you demonstrate that to someone? Sure, if you had exponential time, you could give them an exponentially long proof, but this isn't feasible. We still don't quite know how to deal with this even in the case where you're given a classical proof and allowed to check it via quantum computation, though we do have some conjectures about that case.

Watrous showed that you can prove non-membership if you're given a certain quantum state, which is a superposition over all the elements of the subgroup. Now this state might be very hard to prepare. Why?

Q: It's exponentially large?
Scott: Well, yeah, but there are other exponentially large superposition states which are easy to prepare, so that can't be the whole answer.
Q: There are too many of them.
Scott: But we can prepare a superposition over all n-bit strings, and there's a lot of those.
Gus: You could efficiently sample uniformly at random, but to create them all in a coherent superposition, you have to somehow get rid of the garbage.
Scott: Yes. That's exactly right. The problem is one of uncomputing garbage.

So we know how to take a random walk on a group, and therefore we know how to sample a random element of the group. But here, we're asked for something more: a coherent superposition over the group's elements. It's not hard to prepare a state of the form ∑_g |g⟩|garbage_g⟩. But then how do you get rid of the garbage? That's the question. Basically, this garbage records the random walk or whatever process you used to get to g; how do you forget how you got to that element?

But what Watrous said is to suppose we had an omniscient prover, and suppose that prover was able to prepare that state and give it to you. Well then, you could verify that an element is not in the subgroup H. We can do this in two steps:

  1. Verify that we really were given the state we needed (we'll just assume this part for now).
  2. Use the state |H⟩ to prove that x ∉ H by using controlled left-multiplication: prepare a control qubit in the state (|0⟩ + |1⟩)/√2, and, conditioned on the control, left-multiply the group register by x, yielding (|0⟩|H⟩ + |1⟩|xH⟩)/√2.
    Then, do a Hadamard and measure the first qubit. This is basically like a SWAP-test. You have the left qubit act as the control qubit. If x ∈ H, then xH is a permutation of H, and so we get interference fringes (the light went both through the x slit and the xH slit). If x ∉ H, then we have that xH is a coset, and thus shares no elements in common with H. Hence, ⟨H|xH⟩ = 0, and so we measure random bits. We can tell these two cases apart.
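Here is a toy numpy sketch of this interference test (my own illustration, using the abelian group ℤ_8 with subgroup H = {0, 2, 4, 6}; Watrous's result is for general black-box groups). After the controlled left-multiplication and the Hadamard, the probability of measuring 0 on the control qubit works out to (1 + Re⟨H|xH⟩)/2:

```python
import numpy as np

G = 8                              # toy group: Z_8 under addition mod 8
H_elems = [0, 2, 4, 6]             # the subgroup H

psi_H = np.zeros(G)
psi_H[H_elems] = 1 / np.sqrt(len(H_elems))   # |H>: uniform superposition over H

def prob_control_is_0(x):
    """Controlled left-multiplication by x, then a Hadamard on the control:
    P(control measures 0) = (1 + Re<H|xH>)/2."""
    xH = np.roll(psi_H, x)         # adding x shifts each amplitude g -> g + x
    return (1 + np.real(np.vdot(psi_H, xH))) / 2

print(prob_control_is_0(4))        # x in H: xH = H, full interference -> 1.0
print(prob_control_is_0(3))        # x not in H: <H|xH> = 0 -> 0.5
```

When x ∈ H the two branches interfere perfectly and the control always reads 0; when x ∉ H the coset xH is disjoint from H, so the control is a fair coin, and the two cases can be told apart.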

You also have to verify that the state |H⟩ really was what we were given. To do this, we do a test like the one we just did, except that now we pick the element x by taking a classical random walk on the subgroup H itself. If |H⟩ really was the superposition over the subgroup, then |xH⟩ is just |H⟩ with its elements shifted around by x (in other words, the same state), whereas if we were given some other state, the test would detect the difference. You have to prove that this is not only a necessary test, but a sufficient one as well. That's basically what Watrous proved.

This gives us one example where it seems like having a quantum state actually helps you, as if you could really get at the exponentiality of the state. Maybe this isn't a staggering example, but it's something.

An obvious question is whether, in all of those cases where a quantum proof seems to help you, you could do just as well if you were given a classical proof that you then verified via quantum computation. Are we really getting mileage from having the quantum state, or is our mileage coming from the fact that we have a quantum computer to do the checking? We can phrase the question by asking whether QMA is equal to QCMA, where QCMA is like QMA except that the proof now has to be classical. Greg Kuperberg and I wrote a paper where we tried to look at this question directly. One thing we showed looks kind of bad for the realistic view of quantum states (at least in this particular battle): if the Normal Hidden Subgroup Problem (what the problem is isn't important right now) can be solved in quantum polynomial time, and it seems like it can, and if we make some other group-theoretic assumptions that seem plausible according to all the group theorists we asked, then the Group Non-Membership Problem is actually in QCMA. That is, you can dequantize the proof and replace it with a classical one.

On the other hand, we showed that there exists a quantum oracle A relative to which QMA^A ≠ QCMA^A. This is a really simple thing to describe. To start with, what is a quantum oracle? Quantum oracles are just quantum subroutines to which we imagine both a QMA and a QCMA machine have access. To see the idea behind the oracle that we used, let's say that you're given some n-qubit unitary operation U. Moreover, let's say that you're promised that either U is the identity matrix I, or that there exists some secret "marked state" |ψ⟩ such that U|ψ⟩ = −|ψ⟩; that is, U has some secret eigenvector corresponding to an eigenvalue of −1. The problem is then to decide which of these conditions holds.

It's not hard to see that this problem, as an oracle problem, is in QMA. Why is it in QMA?

Q: Why can you verify that the answer is yes if you were given a quantum state as a proof?
Scott: Because the prover would just have to give the verifier |ψ⟩, and the verifier would apply U to |ψ⟩ to verify that, yes, U|ψ⟩ = −|ψ⟩. So that's not saying a whole lot.

What we proved is that this problem, as an oracle problem, is not in QCMA. So even if you had both of the resources of this unitary operation U and some polynomial-sized classical string to kind of guide you to this secret negative eigenvector, you'd still need exponentially many queries to find |ψ⟩.

This gives some evidence in the other direction, that maybe QMA is more powerful than QCMA. If they were equivalent in power, then that would have to be shown using a quantumly non-relativizing technique. That is, a technique that is sensitive to the presence of quantum oracles. We don't really know of such a technique right now, besides techniques that are also classically nonrelativizing and don't seem applicable to this problem.

Gus: This whole idea of quantum oracles is relatively new to me, so I wanted to make sure that I'm clear on the difference between them and classical oracles. For example, that black box that performs the group operation...
Scott: That's a classical oracle.
Gus: Which we can apply in superposition. So if we can already apply a classical oracle in superposition, then it seems like the only difference is that a quantum oracle can apply some phase or something.
Scott: It's not just that, because a quantum oracle can act in an arbitrary basis. Classical oracles always work in the computational basis. That's really the key difference.

So there's really another sort of metaquestion here, which is if there's some kind of separation between quantum and classical oracles. That is, if there's some kind of question that we can only answer with quantum oracles. Could we get a classical oracle separation between QMA and QCMA? All I can tell you is that Greg and I tried for a while and couldn't do it. If you can, that'd be great.

Devin: Has anyone found a separation between classical and quantum oracles for anything?
Scott: No. It's something we thought about. It's sort of a very new set of questions, and the jury is still out. These are not necessarily "hard" problems; they aren't ones that have been attacked for twenty years. Some people, mainly me and a few others, thought about them for several months, which is not a very good certificate of their hardness.

OK. So that was quantum proofs. There are other ways we can try to get at the question of how much stuff there is to be extracted from a quantum state. Holevo's Theorem deals with the following question: if Alice wants to send some classical information to Bob, and she has access to a quantum channel, can she use this to her advantage? If quantum states are these exponentially long vectors, then intuitively, we might expect that if Alice could send some n-qubit state, then maybe she could use it to send Bob 2^n classical bits. We can arrive at this hope from a simple counting argument: the number of quantum states of n qubits, any pair of which have almost zero inner product with each other, is doubly exponential in n, so to specify such a state you need exponentially many bits. Thus, we might hope for some kind of exponential compression of information. Alas, Holevo's Theorem tells us that it is not to be. You need about n qubits to reliably transmit n classical bits, up to a constant factor reflecting whatever probability of error you're willing to tolerate, but really nothing better than you would get with a classical probabilistic encoding.

Scott: Intuitively, why is this? Does anyone want to give me a handwaving "proof"?
Devin: You can only measure it once.
Scott: Thank you. That's it. Each bit of information you extract cuts in half the dimensionality of the Hilbert space. Sure, in some sense, you can encode more than n bits, but then you can't reliably retrieve them.

This theorem was actually known in the 70's, and was ahead of its time.

It was only recently that anyone asked a very natural and closely related question: what if Bob doesn't want to retrieve the whole string? We know from Holevo's Theorem that getting the whole string is impossible, but what if Bob only wants to retrieve one bit (where Alice doesn't know ahead of time which one)? Can Alice create a quantum state |ψ_x⟩ such that for whichever bit x_i Bob wants to know, he can measure |ψ_x⟩ in an appropriate basis and learn that particular bit? After he's learned x_i, he's destroyed the state and can't learn any more, but that's OK. Alice wants to send Bob a quantum phonebook, and Bob only wants to look up one number. It turns out that, by a result of Ambainis, Nayak, et al., this is still not possible. What they proved is that to encode n bits in this manner, so that any one of them can be read out, you need at least n/log n qubits.

Maybe you could get some small savings, but certainly not an exponential savings. Shortly after, Nayak proved that actually, if you want to encode n bits, you need n qubits. If we're willing to lose a logarithmic factor or two, I can show rather easily how this is a consequence of Holevo's Theorem. The reason that it's true illustrates a technique that I've gotten a lot of mileage out of recently, and there might be more mileage that can still be gotten out of it.

Suppose, by way of contradiction, that we had a protocol that reliably encoded n bits into no more than log n qubits, in such a way that any one bit could then be retrieved with high probability, say with error at most one-third. Then what we could do is take a bunch of copies of the state. We just want to push down the error probability, so we take a tensor product of, say, log n copies. Given this state, what Bob can do is run the original protocol on each copy to get x_i and then take the majority vote. For some sufficiently large constant times log n copies, this pushes the error rate down to at most n^{-2}. So for any particular bit i, Bob will be able to output a bit y_i such that Pr[y_i = x_i] ≥ 1 − n^{-2}. Now, since Bob can do that, what else can he do? He can keep repeating this, and get greedy. I'm going to run this process and get x_1. But now, because the outcome of this measurement could be predicted almost with certainty given the state, you can prove that you aren't getting much information, and hence that the state is only slightly disturbed by the measurement. This is just a general fact about quantum measurements: if you could predict the outcome with certainty, then the state wouldn't be disturbed at all by the measurement.

So this is what we do. We've learned x_1, and the state has been damaged only slightly. When we run the protocol again, we learn what x_2 is with only small damage. Since small damage plus small damage is still small damage, we can find x_3 and so on. So we can recover all of the bits of the original string using fewer qubits than Holevo's bound allows. Based on this, we conclude that no such protocol can exist.
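The amplification step can be checked numerically. Here is a small Python sketch (the numbers are illustrative, not part of the original argument) that computes exactly how many copies a majority vote needs before the per-bit error drops below n^{-2}:

```python
from math import comb

def majority_error(p, k):
    """Exact probability that a majority vote over k independent copies errs,
    when each copy errs independently with probability p (k odd)."""
    return sum(comb(k, j) * p**j * (1 - p)**(k - j)
               for j in range(k // 2 + 1, k + 1))

n = 1000            # length of Alice's string
p = 1 / 3           # error of a single run of the original protocol
target = n ** -2    # we want each bit to come out wrong w.p. at most 1/n^2

k = 1
while majority_error(p, k) > target:
    k += 2          # keep k odd so the vote can't tie
print(k)            # the number of copies needed grows like a constant times log n
```

By a Chernoff bound the error decays exponentially in the number of copies, so the k found here is only a constant times log n, just as the argument requires.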

Why do we care about any of this? Well, maybe we don't, but I can tell you how this stuff entered my radar screen. Now, we're not going to ask about quantum proofs, but about a closely-related concept called quantum advice. So we'll bring in a class called BQP/qpoly: the set of problems efficiently solvable by a quantum computer, given a polynomially-sized quantum advice state. What's the difference between advice and proof? The first difference is that advice doesn't depend on the input itself, but only on the input length n. The second difference is that advice comes from an advisor, and advisors (as we know) are inherently trustworthy beings. As we also know, advisors don't really know what problem their students are working on, so they're going to always give you the same advice no matter what problem you're working on. All joking aside, a proof is not trustworthy; that's why the entity who receives the proof is called the verifier.

So the advantage of advice is that you can trust it, but the disadvantage is that it might not be as useful, since it isn't tailored to the particular problem instance that you're trying to solve. So we can imagine that maybe it's hard for quantum computers to solve NP-complete problems, but only if the quantum computer has to start in some all-zero initial state. Maybe there are some very special states that were created in the Big Bang and that have been sitting around in some nebula ever since (somehow not decohering), and if we get on a spaceship and find these states, they obviously can't anticipate what particular instance of SAT we wanted to solve, but they sort of anticipated that we would want to solve some instance of SAT. Could there be this one generic SAT-solving state |ψ_n⟩, such that given any Boolean formula P of size n, we could, by performing some quantum computation on |ψ_n⟩, figure out whether P is satisfiable? What we're really asking here is whether NP ⊆ BQP/qpoly.

Bill: Are you allowed to destroy the advice?
Scott: Yes. There's a bunch of these things sitting around in the nebula. It's like oil; there's an infinite supply of advice. Though it turns out that you only have to gather polynomially many advice states, and then you can use them on an exponential number of inputs, because of the observation we made before that an exponentially small error probability means the state is disturbed only slightly.

What can we say about the power of BQP/qpoly? We can adapt Watrous's result about quantum proofs to this setting of quantum advice. Returning to the Group Non-Membership Problem, if the Big Bang anticipated what subgroup we wanted to test membership in, but not what element we wanted to test, then it could provide us with the state |H⟩ that's a superposition over all the elements of H, and then whatever element we wanted to test for membership in H, we could do it. This shows that a version of the Group Non-Membership Problem is in BQP/qpoly.

I didn't mention this earlier, but we can prove that QMA is contained in PP, so there's evidently some limit on the power of QMA. You can see that, in the worst case, all you would have to do is search through all possible quantum proofs (all possible states of n qubits), and see if there's one that causes our machine to accept. You can do better than that, and that's where the bound of PP comes from.

What about BQP/qpoly? Can anyone see any upper bound on the power of this class? That is, can anyone see any way of arguing what it can't do?

Gus: BQEXP/qpoly?
Scott: Right, sure, but how do we know that both BQP/qpoly and BQEXP/qpoly aren't equal to ALL, the set of all languages whatsoever (including uncomputable languages)! Let's say you were given an exponentially long classical advice string. Well, then, it's not hard to see that you could solve any kind of problem whatsoever. Why? Because say that f is the Boolean function we want to compute. Then, we just let the advice be the entire truth table for f, and then we just need to look up the appropriate entry in the truth table, and we've solved any problem of size n we want to solve. The halting problem, you name it.
It would sort of be like having access to exponentially many bits of Chaitin's Ω string. You could certainly take that as your advice. Though actually, while the bits of Ω are very hard to compute, if you were given them, they'd be useless. Almost by definition, they'd just look like a random string. You couldn't extract anything useful from them. But you could certainly be given them if you wanted.

Intuitively, it seems a bit implausible that BQP/qpoly = ALL, because being given a polynomial number of qubits really isn't like being given an exponentially long string of classical bits. The question is, how much can this "sea" of exponentially many classical bits that are needed to describe a quantum state determine what we get out?

I guess I'll cut to the chase and tell you that at a workshop years ago, Harry Buhrman asked me this question, and it was obvious to me that BQP/qpoly wasn't everything, and he told me to prove it. And eventually I realized that anything you could do with polynomially-sized quantum advice, you could do with polynomially-sized classical advice, provided that you can make a measurement and then postselect on its outcome. That is, I proved that BQP/qpoly ⊆ PostBQP/poly. In particular, this implies that BQP/qpoly ⊆ PSPACE/poly. Anything you can be told by quantum advice, you can also be told by classical advice of a comparable size, provided that you're willing to spend exponentially more computational effort to extract what that advice is trying to tell you.

Gus: You seem really fascinated with the whole idea of advice, and I'm not convinced that it's entirely relevant other than to the extent that it develops tools that are useful to other, more relevant aspects of complexity theory.
Scott: Why do I care about advice? First of all, yes: it does show up again and again, even if, for example, all we want to know about is uniform computation. Even if all we want to know is if we can derandomize BPP, it turns out to be a question about advice. So it's very connected to the rest of complexity. Basically, you can think of an algorithm with advice as being no different than an infinite sequence of algorithms, just like what we saw with the Blum Speedup Theorem. It's just an algorithm, where as you go to larger and larger input lengths, you get to keep using new ideas and get more speedup. This is one way to think about advice.
Gus: Classes based on proofs capture the difficulty of natural problems, but I don't see that there are any natural problems based on advice.
Scott: I think that we have as many natural problems in BQP/qpoly as we have in QMA. QMA-complete problems are kind of in another category, since you wouldn't have thought of them without quantum, whereas Group Non-Membership is something you might think of just on its own without any connection to quantum.
I can give you another argument. We really don't know the initial conditions of the universe. We make the assumption that a quantum computer should always start in this all-zero state, but the question is if that's a justified assumption. The usual argument that it's a justified assumption is that, for whatever other state your quantum computer might start in, there's some physical process that gave rise to that state. Presumably, this is only a polynomial-time physical process. So you could simulate the whole process that gave rise to that state, tracing it back to the Big Bang if needed. But is this really reasonable?
Gus: But when we're thinking about advice, we're thinking about states that may not be preparable in polynomial time. It may not be possible for them to actually ever exist in the universe.
Niel: If there were some kind of fundamental process which gave rise to those kinds of states, wouldn't that motivate changing our elementary gate system?
Scott: It might. You can think of advice as -- I keep changing my metaphors here -- freeze-dried computation. There's some great, enormous computational effort that we then encapsulate in this convenient polynomially-sized string over in the frozen foods section, and that you can then heat up in the microwave whenever you need to get work out of it.
Nick: If you were given an advice string, how could you trust it?
Scott: Well, we rolled trust into the definition of advice; otherwise, it would be a proof. You could study advice that isn't trustworthy, and in fact, I have: I've defined some complexity classes based on untrustworthy advice. But in the usual definition of advice, we assume that it's trustworthy. The question, then, is how much computation we can freeze-dry into this polynomially-sized state.
Gus: I agree that it's an interesting theoretical question, but how do you convince the NSF to fund BQP/qpoly studies?
Scott: If I'm ever cast out into the street, I'll have to think of something else, but right now, Ray [Laflamme] has me on board. Would he fire me because BQP/qpoly isn't a "realistic" class? He seems to have not made up his mind, so while I'm not starving, I'll pursue what interests me. That's advice, and it leads pretty directly into the interpretational question that we started with. We're trying to get at what quantum states really are. It's clear that, to try and learn about this question, we could do a billion experiments and maybe they'd all be consistent with the predictions of quantum mechanics. On the other hand, we could sit around and argue like philosophers do, and it's not very clear that such an approach would get us very far, either. What I'm suggesting is a third approach, which is that we put the different views of what quantum states are into various staged battles with each other, in some sort of concrete mathematical or computational setting, and see which ones come out the winner.
Gus: So, you could say that by studying advice, you're studying one way in which quantum states could be different from classical states.
Scott: Yes.

Returning to the question of an upper bound for BQP/qpoly, it's again a two-minute endeavor to give a handwaving proof that BQP/qpoly ⊆ PSPACE/poly. I like the way Greg Kuperberg described the proof: if we have some quantum advice and we want to simulate it using classical advice and postselection, we use a "Darwinian training set" of inputs. We've got this machine that takes classical advice, and we want to describe to it some quantum advice state using only classical advice. To do so, we consider some test inputs X1, X2, ..., XT. Note, by the way, that our classical advice machine doesn't know the true quantum advice state |ψ⟩. The classical advice machine starts by guessing that the quantum advice is the maximally mixed state, since without any a priori knowledge, that's the natural guess. Then X1 is an input such that, if the maximally mixed state is used in place of the quantum advice, the algorithm produces the wrong answer with probability greater than one-third. If we postselect on the algorithm nevertheless answering correctly, the measurement collapses the maximally mixed state to some new state ρ1. So why is this process described as "Darwinian"? The next part of the classical advice, X2, describes an input on which the algorithm produces the wrong answer with probability greater than one-third if ρ1 is used in place of the actual quantum advice. If, despite the high chance of getting the wrong answer when run on X1 and X2, the algorithm still produces two correct answers, then we use the resulting state ρ2 to pick the next part of the classical advice, X3. Basically, we're trying to teach our classical advice machine the quantum state by repeatedly telling it: "supposing you got all the previous lessons right, here's a new test you're still going to fail. Go and learn, my child."

The point is that if we let |ψ⟩ be the true quantum advice, then since we can decompose the maximally mixed state in whatever basis we want, we can imagine it as a mixture of the true advice state we're trying to learn and a bunch of states orthogonal to it. Each time the current state gives a wrong answer with probability greater than one-third and we postselect on succeeding anyway, it's like we're lopping off a constant fraction of the part of the state that behaves wrongly. We also know that if we were to start with the true advice state, then we would succeed, so this process has to bottom out somewhere: since the true state's weight in the maximally mixed state is only exponentially small to begin with, and each round of postselection boosts it by a constant factor, polynomially many test inputs suffice before we winnow away all the chaff and run out of examples where the algorithm fails.
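To make the winnowing concrete, here's a minimal toy simulation -- not the actual complexity-theoretic construction, just an illustration under simplifying assumptions. The hypothetical "true" advice state is |0⟩ in an 8-dimensional space, each test input is modeled as a projective measurement onto a subspace that contains |0⟩ (so the true advice always passes), and we watch postselection drive the classical machine's guess from the maximally mixed state toward the true state. The dimension, the projectors, and all names are illustrative choices.

```python
import numpy as np

d = 8                              # toy dimension of the advice Hilbert space
psi = np.zeros(d)
psi[0] = 1.0                       # hypothetical "true" advice state |psi> = |0>

def projector(k):
    """Projector onto span{|0>, ..., |k-1>}, a subspace containing |psi>."""
    P = np.zeros((d, d))
    P[:k, :k] = np.eye(k)
    return P

def fidelity(rho):
    """Overlap <psi| rho |psi> of the current guess with the true state."""
    return float(psi @ rho @ psi)

rho = np.eye(d) / d                # initial guess: the maximally mixed state
print(f"initial fidelity with |psi>: {fidelity(rho):.3f}")

for k in (4, 2, 1):                # test inputs X1, X2, X3: each halves the live subspace
    P = projector(k)
    p_success = float(np.trace(P @ rho))
    assert p_success < 2 / 3       # the current guess fails with probability > 1/3
    # Postselect on the algorithm nevertheless answering correctly;
    # the measurement collapses the guess onto the surviving subspace.
    rho = P @ rho @ P / p_success
    print(f"after test k={k}: success prob was {p_success:.3f}, "
          f"fidelity now {fidelity(rho):.3f}")
```

Each round the chaff orthogonal to |ψ⟩ is cut down, so the fidelity climbs from 1/8 to 1/4 to 1/2 to 1 -- reaching the true state after about log(d) tests, mirroring why polynomially many test inputs suffice for a state of exponential dimension.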

So, in this setting, quantum states are not acting like exponentially long vectors. They're acting like they only encode some polynomial amount of information, although extracting what you want to know might be exponentially more efficient than if the same information were presented to you classically. Again, we're getting ambiguous answers, but that's what we expected. We knew that quantum states occupy this weird kind of middle realm between probability distributions and exponentially long strings. It's nice to see exactly how this intuition plays out, though, in each of these concrete scenarios. I guess this is what attracts me to quantum complexity theory. In some sense, this is the same stuff that Bohr and Heisenberg argued about, but we're now able to ask the questions in a much more concrete way -- and sometimes even answer them.
