## The Collision Lower Bound After 12 Years

Streaming video is now available for the talks at the QStart conference, held a couple of weeks ago at Hebrew University in Jerusalem. If you’re the sort of person who likes watching quantum information talks, then check out the excellent ones by Ray Laflamme, John Martinis, Umesh Vazirani, Thomas Vidick, Jacob Bekenstein, and many others.

My own contribution—the first “backwards-facing, crusty, retrospective” talk I’ve ever given—was called The Collision Lower Bound After 12 Years (click here for the slides—and to answer the inevitable question, no, I have no idea how to open PowerPoint files in your favorite free-range, organic computing platform). Briefly, the collision lower bound is the theorem that even a quantum computer needs at least ~n^{1/3} steps to find a duplicate in a long list of random numbers between 1 and n, even assuming the list is long enough that there are many, many duplicates to be found. (Moreover, ~n^{1/3} steps are known to *suffice*, by the BHT algorithm, a clever adaptation of Grover’s search algorithm. Also, for simplicity a “step” means a single access to the list, though of course a quantum algorithm can access multiple list elements in superposition and it still counts as one step.)

By comparison, for classical algorithms, ~√n steps are necessary and sufficient to find a collision, by the famous Birthday Paradox. So, just like for Grover’s search problem, a quantum computer could give you a modest speedup over classical for the collision problem, but *only* a modest one. The reason this is interesting is that, because of the abundance of collisions to be found, the collision problem has a great deal more structure than Grover’s search problem (though it has less structure than Shor’s period-finding problem, where there famously *is* an exponential quantum speedup).
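As a quick sanity check on the classical side, here’s a toy simulation (purely illustrative, not part of any proof): query random positions of a list of n random numbers until you see a repeated value, and the average query count indeed hovers around √n, up to a small constant.

```python
import random

def queries_until_collision(n, rng):
    """Query random list positions until two queried values collide.

    The list holds n random numbers drawn from {0, ..., n-1}, so duplicates
    are plentiful (a constant fraction of values appear more than once).
    """
    lst = [rng.randrange(n) for _ in range(n)]
    seen = set()
    for t, i in enumerate(rng.sample(range(n), n), start=1):
        if lst[i] in seen:
            return t  # t-th query revealed a value we'd already seen
        seen.add(lst[i])
    return n  # vanishingly unlikely fallback: no duplicate values at all

rng = random.Random(0)
n = 10_000
trials = [queries_until_collision(n, rng) for _ in range(200)]
avg = sum(trials) / len(trials)
print(f"average queries: {avg:.0f}, sqrt(n) = {n ** 0.5:.0f}")
```

(The birthday-paradox heuristic predicts roughly √(πn/2) ≈ 1.25√n queries on average, which is what the simulation shows.)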

One “obvious” motivation for the collision problem is that it models the problem of breaking collision-resistant hash functions (like SHA-256) in cryptography. In particular, if there *were* a superfast (e.g., log(n)-time) quantum algorithm for the collision problem, then there could be no CRHFs secure against quantum attack. So the fact that there’s no such algorithm at least opens up the possibility of quantum-secure CRHFs. However, there are many other motivations. For example, the collision lower bound rules out the most “simpleminded” approach to a polynomial-time quantum algorithm for the Graph Isomorphism problem (though, I hasten to add, it says nothing about more sophisticated approaches). The collision problem is also closely related to Statistical Zero Knowledge (SZK) proof protocols, so that the collision lower bound leads to an oracle relative to which SZK is not in BQP.

Probably the most bizarre motivation to other people, but for some reason the most important one to me back in 2001, is that the collision problem is closely related to the problem of sampling the *entire trajectories of hidden variables*, in hidden-variable theories such as Bohmian mechanics. The collision lower bound provides strong evidence that this trajectory-sampling problem is hard even for a quantum computer—intuitively because a QC can’t keep track of the *correlations* between the hidden-variable positions at different times. The way I like to put it is that if, at the moment of your death, your entire life history flashed before you in an instant (and if a suitable hidden-variable theory were true, and if you’d performed an appropriate quantum interference experiment on your own brain during your life), then you really *could* solve the collision problem in only O(1) steps. Interestingly, you *still* might not be able to solve NP-complete problems—I don’t know! But you could at least do *something* that we think is hard for a quantum computer.

I proved the first collision lower bound in 2001 (actually, a week or so after the 9/11 attacks), after four months of sleepless nights and failed attempts. (Well actually, I only got the weaker lower bound of ~n^{1/5}; the ~n^{1/3} was a subsequent improvement due to Yaoyun Shi. Before ~n^{1/5}, no one could even rule out that a quantum computer could solve the collision problem with a *constant* number of steps (!!), independent of n—say, 4 steps.) It was the first thing I’d proved of any significance, and probably the most important thing I did while in grad school. I knew it was one of the favorite problems of my adviser, Umesh Vazirani, so I didn’t even tell Umesh I was working on it until I’d already spent the whole summer on it. I figured he’d think I was nuts.

**Bonus Proof Explanation!**

The technique that ultimately worked was the polynomial method, which was introduced to quantum computing four years prior in a seminal paper of Beals et al. In this technique, you first suppose by contradiction that a quantum algorithm exists to solve your problem that makes very few accesses to the input bits—say, T. Then you write out the quantum algorithm’s acceptance probability (e.g., the probability that the algorithm outputs “yes, I found what I was looking for”) as a multivariate polynomial p in the input bits. It’s not hard to prove that p has degree at most 2T, since the amplitudes in the quantum algorithm can be written as degree-T polynomials (each input access increases the degree by at most 1, and unitary transformations in between input accesses don’t increase the degree at all); then squaring the amplitudes to get probabilities doubles the degree. (This is the only part of the method that uses anything specific to quantum mechanics!)
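In symbols, the degree-counting goes like this (standard polynomial-method bookkeeping; the notation, with w denoting workspace registers, is mine):

```latex
% One oracle query O_X |i,b,w> = |i, b XOR x_i, w> updates amplitudes as
\alpha^{(t+1)}_{i,b,w}
  = (1 - x_i)\,\alpha^{(t)}_{i,b,w} + x_i\,\alpha^{(t)}_{i,\,b\oplus 1,\,w},
% so each query raises the degree (in the input bits x_i) by at most 1,
% while the input-independent unitaries in between leave it unchanged.
% After T queries every amplitude has degree at most T, and therefore
p(X) \;=\; \sum_{z\ \text{accepting}} \bigl|\alpha^{(T)}_{z}(X)\bigr|^{2}
\quad\Longrightarrow\quad \deg p \;\le\; 2T.
```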

Next, you choose some parameter k related to the problem of interest, and you let q(k) be the expectation of p(X) over all inputs X with the parameter equal to k. For example, with the collision problem, it turns out that the “right” choice to make is to set k=1 if each number appears exactly once in your input list, k=2 if each number appears exactly twice, k=3 if each number appears exactly three times, and so on. Then—here comes the “magic” part—you show that *q(k) itself is a univariate polynomial in k, again of degree at most 2T*. This magical step is called “symmetrization”; it can be traced at least as far back as the famous 1969 book *Perceptrons* by Marvin Minsky and Seymour Papert. In the case of the collision problem, I still have no explanation, 12 years later, for why symmetrization works: all I can say is that you do the calculation, and you cancel lots of things from both the numerator and the denominator, and what comes out at the end is a low-degree polynomial in k. (It’s precisely because I would never have predicted such a “zany coincidence” that I had to stumble around in the dark for 4 months before I finally discovered by chance that the polynomial method worked.)

Anyway, after applying symmetrization, you’re left with a low-degree univariate polynomial q with some very interesting properties: for example, you need 0≤q(k)≤1 for positive integers k, since then q(k) represents an averaged *probability* that your quantum algorithm does something. You also need q(1) to be close to 0, since if k=1 then there are no collisions to be found, and you need q(2) to be close to 1, since if k=2 then there are lots of collisions and you’d like your algorithm to find one. But now, you can appeal to a theorem of A. A. Markov from the 1890s, which implies that no low-degree polynomial exists with those properties! Hence your original efficient quantum algorithm can’t have existed either: indeed, you get a quantitative lower bound (a tight one, if you’re careful) on the number of input accesses your algorithm must have made. And that, modulo some nasty technicalities (e.g., what if k doesn’t evenly divide the size of your list?), is how the collision lower bound works.
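For intuition only, here’s a back-of-the-envelope version of the Markov step (my sketch; it ignores the crucial technicality that q is constrained only at integer points, which the real proof must handle):

```latex
% Markov's inequality (1890s): if |q(x)| <= c on an interval of length L,
% then |q'(x)| <= 2 deg(q)^2 c / L on that interval.
% Pretend q is bounded by a constant on all of [1, K], with q(1) <= 1/10
% and q(2) >= 9/10.  The mean value theorem gives a point c in (1,2) with
q'(c) \;\ge\; \frac{q(2) - q(1)}{2 - 1} \;\ge\; \frac{4}{5},
% while Markov's inequality caps the derivative everywhere on [1, K]:
\frac{4}{5} \;\le\; \max_{1 \le x \le K} |q'(x)|
  \;\le\; \frac{2\,(2T)^{2}\cdot O(1)}{K - 1}
\quad\Longrightarrow\quad T \;=\; \Omega\bigl(\sqrt{K}\bigr).
```

The real argument is more delicate (q is bounded only at integers, and relating the range of k to n is where the exponent 1/3 ultimately comes from), but this shows why a low-degree polynomial can’t jump from near 0 to near 1 while staying bounded over a long range.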

So, in the first half of my QStart talk, I explain the collision lower bound and its original motivations (and a little about the proof, but no more than what I said above). Then in the second half, I survey lots of extensions and applications between 2002 and the present, as well as the many remaining open problems. For example, I discuss the tight lower bound of Ambainis et al. for the “index erasure” problem, Belovs’s proof of the element distinctness lower bound using the adversary method, and my and Ambainis’s generalization of the collision lower bound to arbitrary symmetric problems. I also talk about Mark Zhandry’s recent breakthrough (sorry, am I not allowed to use that word?) showing that the GGM construction of pseudorandom functions is secure against quantum adversaries, and how Zhandry’s result can be seen—in retrospect, anyway—as yet another application of the collision lower bound.

Probably of the most general interest, I discuss how Daniel Harlow and Patrick Hayden invoked the collision lower bound in their striking recent paper on the AMPS black hole “firewall” paradox. In particular they argued that, in order to uncover the apparent violation of local quantum field theory at the heart of the paradox, an observer falling into a black hole would probably need to solve a QSZK-complete computational problem. And of course, the collision lower bound furnishes our main piece of evidence that QSZK-complete problems really should require exponential time even for quantum computers. So, Harlow and Hayden argue, the black hole would already have evaporated before the observer had even made a dent in the requisite computation.

Now, the Harlow-Hayden paper, and the AMPS paradox more generally, really deserve posts of their own—just as soon as I learn enough to decide what I think about them. For now, I’ll simply say that, regardless of how convinced you are by Harlow and Hayden’s argument (and, a bit like with my free-will essay, it’s not clear how convinced the authors themselves are!), it’s one of the most ambitious syntheses of computational complexity and physics I’ve ever seen. You can disagree with it, but to read the paper (or watch the talk, streaming video from Strings’2013 here) is to experience the thrill of seeing black hole physics related to complexity theory by authors who really know both.

(In my own talk on the collision lower bound, the short segment about Harlow-Hayden generated more questions and discussion than the rest of the talk combined—with *me* being challenged to defend their argument, even with Patrick Hayden right there in the audience! I remarked later that that portion of the talk was itself a black hole for audience interest.)

In totally unrelated news, *Quantum Computing Since Democritus* made Scientific American’s list of best summer books! I can’t think of a more appropriate honor, since if there’s *any* phrase that captures what QCSD is all about, “sizzling summer beach read” would be it. Apparently there will even be an online poll soon, where y’all can go and vote for QCSD as your favorite. Vote early and often, and from multiple IP addresses!

Comment #1 July 7th, 2013 at 9:31 pm

While I’ll naturally vote for Quantum Computing Since Democritus, I don’t envy the challenge of it having to go up against “Zombie Tits, Astronaut Fish and Other Weird Animals” – at least in the catchy titles department.

Comment #2 July 7th, 2013 at 10:18 pm

Two non-collision-lower-bound points:

1) SlideShare (http://www.slideshare.net/) is an easy way to put powerpoint slides on the web.

2) I once read a brilliant piece of advice from Russell Baker, which could be roughly paraphrased as: “The best choice for summer beach reading is a big, classic book like *The Brothers Karamazov*. You can take it to the beach hoping to impress an attractive member of the appropriate gender. Chances are you won’t end up impressing anybody, but you’ll have read a really good book.”

Comment #3 July 8th, 2013 at 3:44 am

It’s funny how I understand a bit more of this blog by simply reading stuff I don’t understand at all.

But I will have to read QCSD again – at least twice – to get a bit closer to understanding quantum computing and complexity in general.

Comment #4 July 8th, 2013 at 5:16 am

“The way I like to put it is that if, at the moment of your death, your entire life history (and if a suitable hidden-variable theory were true, and if you’d performed an appropriate quantum interference experiment on your own brain during your life), then you really could solve the collision problem in only O(1) steps.”

I do not understand this sentence. Did a part of it get lost?

Comment #5 July 8th, 2013 at 6:03 am

“The way I like to put it is that if, at the moment of your death, your entire life history (and if a suitable hidden-variable theory were true, and if you’d performed an appropriate quantum interference experiment on your own brain during your life), then you really could solve the collision problem in only O(1) steps.”

Scott I am so gutted. I think you missed a part of the sentence and turns out I couldn’t continue reading without thinking what will happen at the moment of my death.

Comment #6 July 8th, 2013 at 9:10 am

Jr #4 and Suren #5: Gasp, sorry! I not only completed that sentence, but in the hopes of making it up to you, I added a special bonus section about how the actual proof works. (Hopefully that section doesn’t *itself* have unfinished sentences, thereby leading to an infinite regress…)

Comment #7 July 8th, 2013 at 9:56 am

Sean #2: Thanks for the pointer to Slideshare! I hadn’t seen that, and will try putting my PowerPoints on it as soon as I have time (I’m at a conference now). I’ll be deeply impressed if it actually renders all the zany animations (and less importantly, equations) correctly… 🙂

Comment #8 July 8th, 2013 at 12:55 pm

“Vote early and often, and from multiple IP addresses!”

Tsk tsk. Bad boy. 🙂

Comment #9 July 8th, 2013 at 1:08 pm

I am curious: how does one prove that square-root-of-n is a lower bound for classical algorithms?

Comment #10 July 8th, 2013 at 5:06 pm

john #9: Good question! One uses the union bound. Suppose your classical algorithm has queried T numbers in the list—doesn’t matter in what order. The probability that any two given numbers collide is 1/n. Therefore, the probability that there exist two queried numbers that collide is at most choose(T,2)/n ~ T^{2}/n. So, if you want a constant probability of detecting a collision, then T needs to be at least ~√n.

Comment #11 July 8th, 2013 at 6:33 pm

Why are black-box models so prominent? I heard there is even a sqrt(n) lower bound for discrete log for classical computing in the black-box model, while one has only an O(log(n)) lower bound for much more general models and circuits. Why is the black-box model good for your collision problem here? Could you have tried something more general without it? Is there a sneaky way around it? (Pardon me if I am mistaken.)

Comment #12 July 8th, 2013 at 8:27 pm

curious #11: The short answer to your question is that, when you leave the black-box model for (say) the circuit model, you immediately leap into the abyss of P versus NP and similarly difficult questions.

For example, suppose a function f were explicitly described to you by a polynomial-size Boolean circuit, and you wanted to prove a lower bound—even just in the classical case—on the number of computation steps needed to find an x,y pair such that f(x)=f(y). Well then, as the very *first* step (though not the last), you’d better prove P≠NP! For if P=NP then the problem would be trivial. We can prove that √n evaluations of f are needed to find a collision, but only if f is arbitrary and accessible exclusively via black-box queries. As soon as the algorithm knows something about f’s internal structure, understanding what’s the optimal thing to do becomes orders of magnitude harder.

If this is so, you might ask, then why do we even spend time wading around in the “kiddie pool” of black-box complexity, rather than jumping into the deep end of circuit complexity? Well, a few reasons:

First, *even the black-box model* can sometimes be extremely challenging to understand, as witnessed by the huge number of easy-to-state open problems about that model (both quantum and classical). So, if we don’t *even* understand the black-box model all that well, then it seems like an obvious starting point before moving on to more powerful models.

Second, especially in quantum computing, a huge fraction of what we actually know fits very nicely into the black-box model. For example, both Shor’s and Grover’s algorithms are black-box algorithms at their core (Shor’s for period-finding, Grover’s for unordered search). And if you peruse the arXiv, you’ll find maybe a dozen papers falsely claiming to present quantum algorithms to solve NP-complete problems in polynomial time. Interestingly, while this isn’t logically necessary, as far as I know *all* of those papers can be rejected on the ground that if they worked, they would violate the known optimality of Grover’s algorithm within the black-box model!

In other words, black-box lower bounds can actually have “teeth”; they don’t *only* rule out ideas that no one would ever take seriously in the first place. (Here I should mention that the months I spent proving the collision lower bound actually saved me a few hours one time, when I was able to summarily reject a paper claiming a fast quantum collision-finding algorithm! If the authors had claimed there was a mistake in the lower bound, or that they had some subtle way to evade it, then I would’ve read further, but they were totally unaware of it.)

Comment #13 July 9th, 2013 at 12:26 pm

“As soon as the algorithm knows something about f’s internal structure, understanding what’s the optimal thing to do becomes orders of magnitude harder.”

Is the above statement true for Grover’s algorithm as well? It seems that Grover’s function searches for an element and its index in an unstructured list. What apparent structure can a search function have when it peers into unstructured data? In this case the black-box model seems strong enough, and seems to capture the real world. Am I wrong?

For Shor’s problem I can understand what you are stating.

“And if you peruse the arXiv, ….” I just looked. You are correct. There are claims.

Comment #14 July 9th, 2013 at 1:07 pm

curious #13: Grover’s algorithm can be seen as an algorithm that, given black-box access to a Boolean function f:{1,…,n}→{0,1}, searches for an i such that f(i)=1. Now, if you had an actual physical string of bits x_{1},…,x_{n}, then f(i) would just be x_{i}, and indeed there would be no further structure to exploit. But f(i) itself could also involve a computation—e.g., it might output 1 if i encodes a valid solution to an NP-complete problem instance, and 0 if it doesn’t. In that case, knowing something about f’s internal structure (e.g., the actual clauses and variables, if it’s a SAT instance) could indeed help you find i considerably faster.

Comment #15 July 9th, 2013 at 5:21 pm

Thank you, Professor! I feel I got at least $500 worth of lesson today (estimating from MIT’s tuition and your possible pay + bonus) :)

Comment #16 July 10th, 2013 at 4:02 pm

Scott, thank you for directing everyone to the QSTART talks. Particularly enjoyable (for me) were Elon Lindenstrauss’ and Yaron Silberberg’s talks. In regard to the latter, Silberberg’s interferometric hardware is evolving in the direction of ever-larger *n*-photon BosonSampling capabilities (as he remarks during his talk), and so his group’s research bears directly upon your recent (and thought-provoking) proclamation here on *Shtetl-Optimized*:

P: “I see no good reason why BosonSampling couldn’t be scaled to (say) 30 photons with no explicit error-correction.”

This proclamation was emphasized by your subsequent declaration: “At the scientific level—i.e., at level (b)—I stand by everything I wrote in the previous post and the comments therein.”

Thus a stronger, Silberberg-inspired, follow-on declaration might be:

P’: “At QSTART we discussed concrete technical paths for scaling BosonSampling experiments to (say) 30 photons with no explicit error-correction.”

Scott, if it happened that you and Yaron (and any other colleagues) had in-depth discussions along the lines of P’, then the various technical paths considered, and the conclusions reached, and the resultant strengthening (or weakening) of your pre-QSTART convictions in regard to BosonSampling feasibility, all would be of considerable interest to many readers of *Shtetl-Optimized* (including me).

Comment #17 July 10th, 2013 at 8:12 pm

Can I ask a few questions about the whole “looking at the trajectories of hidden variables” thing? I’m still a bit confused about some stuff with that.

I seem to recall you mentioning in an earlier talk that the “look at the history of the hidden variables” model is actually the same as the “once you prepare a state you can ‘measure’ it nondestructively multiple times” model? It’s actually the latter I wanted to ask about.

Like I seem to recall you mentioning that graph isomorphism can be solved in polynomial time in such a model? So what I’m confused about is: if you can do it in polynomial time when you can measure repeatedly, why can’t you just run the algorithm to set up the state repeatedly? So long as it’s only polynomially many times it shouldn’t be a problem, right?

Or does the gain come from being able to measure more than polynomially-many times, and that isn’t counted in the time? In which case it seems kind of like a quantum analogue to PP? Is that an accurate analogy? Does this complexity class have a name, anyway? I was looking for it in the Zoo and couldn’t find it…

(Hopefully the later questions actually make sense…)

Comment #18 July 11th, 2013 at 7:42 am

John Sidles #16: I thought of giving my talk at QStart about BosonSampling, in which case there would’ve been much more discussion about it, but I decided to give the collision lower bound talk instead, simply because I gave a BosonSampling talk at Hebrew University just a couple years ago.

Having said that, I *did* discuss BosonSampling a bit with Silberberg, but nothing new emerged about technical paths to scaling it up. I think that, for now, the experimentalists who are interested in this subject understand perfectly well what they need to work on (e.g., better photon sources), while we theorists also understand perfectly well what’s needed from us (hardness results that more closely match what the current technology can do—incorporating photon losses, Gaussian-state rather than single-photon inputs, etc.). Hopefully there will be progress on both of these fronts, but science doesn’t move at blog speed.

Comment #19 July 11th, 2013 at 8:20 am

Sniffnoy #17: Regarding graph isomorphism, the answer to your question is interesting. What the hidden-variables model lets you do, and standard QM does *not* let you do, is to measure *part* of a state in the ordinary way, and then perform repeated measurements on the remaining, unmeasured part. For example, suppose you’re trying to solve the collision problem, given a superposition over basis states of the form |x⟩|f(x)⟩, where f is a 2-to-1 function. Then you can easily measure the |f(x)⟩ register, in which case you’ll be left with

(|x⟩+|y⟩)/√2

in the first register, where f(x)=f(y). If you can now measure *this* state multiple times, then you’ve found your collision. And why can this not be simulated by a standard quantum computer? Simply because, if you repeat the whole computation from scratch—preparing a new superposition over |x⟩|f(x)⟩’s and then measuring |f(x)⟩ a second time—then you’ll almost certainly get a *different* f(x) value, corresponding to a *different* (x,y) pair! That might sound like just a technical problem, but what the collision lower bound shows is that there’s no way around it with a standard QC.

Now, while we’re on the topic of the sampling-trajectories model and its relationship to the measure-multiple-times model … I should tell you that just last week, my PhD student Adam Bouland and MIT undergrad Mitchell Lee brought an important point to my attention that I’d glossed over in my paper and hadn’t clearly understood before now. Namely: while it’s entirely correct that the sampling-trajectories model can simulate the measure-multiple-times model, the converse is not at all clear! Furthermore, this is not just a technical issue about our current proof techniques: it’s entirely plausible that there exist hidden-variable theories for which sampling a trajectory would let you do much more than you can do even in the measure-multiple-times model.

More concretely: in my paper, I gave an algorithm for solving Grover search in ~N^{1/3} steps in the trajectory-sampling model, and I also briefly sketched a “proof” that that algorithm is optimal. Well, the algorithm is correct, but the argument for its optimality is wrong. I still think ~N^{1/3} is probably the right answer—simply because I defined the complexity class DQP in such a way that an algorithm has to work for *any* hidden-variable theory that satisfies two axioms (called “indifference” and “robustness”), and it seems extremely plausible that one could construct a theory satisfying those axioms in which Grover search really does require ~N^{1/3} steps (indeed, the “flow theory” and/or the “Schrödinger theory” from my paper might already work). But at the least, it will require a new, more complicated proof. Moreover, it now looks very likely to me that one could construct hidden-variable theories for which the ~N^{1/3} lower bound is false. Such theories, for example, might “examine the whole wavefunction” to find the marked item, then “diabolically” encode the information about where the marked item is into the multi-time correlations between hidden-variable values, even though none of those values will reveal the marked item when considered individually. *That’s* the sort of thing you can do when you get to make up your own hidden-variable theory and then sample its trajectories, that you can’t do in the measure-multiple-times model (for which the ~N^{1/3} lower bound *does* hold, by the simple argument I gave in my paper).

Comment #20 July 11th, 2013 at 8:34 am

Two loosely related AMPS questions:

Is there any point in urging (nay, clamoring for!) you to write a post on Harlow-Hayden / AMPS+QC / … ? I for one would be thrilled to read it.

Have you seen the Maldacena-Susskind ER=EPR paper? Does it seem deep to you? Do you think it has anything to do with the AMPS paradox, given the Harlow-Hayden bounds?

Thanks!

Comment #21 July 11th, 2013 at 10:45 am

Clayton #20: Alright, I’ll put an AMPS/Harlow-Hayden post on my to-do list! Thanks for the nudge.

I read about the Maldacena-Susskind paper on blogs, but I haven’t read the paper itself yet. I’ll also re-add that one to my stack. 🙂

Comment #22 July 11th, 2013 at 4:52 pm

Scott #20: may one say “bodacious” on such an august site as this?

Comment #23 July 11th, 2013 at 5:56 pm

I have wondered about the following situation in discrete log calculation. Here we seek r: g^r = h mod p, where g, h, p are known and p is prime. Note that g^((p^i)r) = g^r = h mod p for all i.

Say someone discovers a way to extract r in base q up to log_{q}(p) places in complexity log_{q}(p), but still has not recovered r fully, since the algorithm at that stage does not know at which power multiple of p^i it should terminate. Since the algorithm outputs in base q and looks for some multiple of p^i times r, we cannot decide if the algorithm terminates.

Would this still be a polynomial-time algorithm, since we got log_{q}(p) places to base q?

Comment #24 July 11th, 2013 at 6:42 pm

Huh, I see. Thanks! Yes OK that clears up just where the gain comes from. Now I can ask further questions more sensibly. 🙂

Specifically:

So, just to be clear, hidden variables might not be the same as multiple-measurements, but DQP has extra requirements so that it is the same as multiple measurements? Or what?

(Sorry, I know, you care about hidden variables, but multiple measurements seems easier to talk about and think about, so I hope you don’t mind if I focus on that instead.)

The other thing I notice is, the example you give, with detecting collisions, doesn’t seem to use anything particularly quantum. Like, if this were just a classical probability distribution over the x’s, and then we mapped x to (x,f(x)), the sort of “partial measurement followed by repeated measurements of what’s left” you describe is still physically impossible, and so we could gain some extra power by allowing it (for this particular problem, it seems like we get all the extra power of the quantum version).

Does that model/complexity class have a name? (Is it just the same as some more well-known complexity class?) Do you know how well it does on, say, the search problem, as an example?

Thank you again for taking the time to answer!

Comment #25 July 11th, 2013 at 6:56 pm

cubee #3

For a general introduction to the power of complexity theory, read Mark Buchanan’s *Nexus*.

This is too pedantic as a starting point; complexity theory has many applications, and Mark uses household examples (basically social reality, ecology, microbiology, physics) to explain complexity theory.

Comment #26 July 11th, 2013 at 8:45 pm

Sniffnoy #24: I’m glad you’re interested in this paper—not so many people were! 😉

Yes, the situation is this:

(1) I proved in the paper that multiple measurements can be simulated by *any* hidden-variable theory satisfying the indifference and robustness axioms.

(2) The converse is false: one can construct (artificial) hidden-variable theories satisfying the axioms that are arbitrarily more powerful than multiple measurements, by simply hard-coding some absurdly hard problem into the hidden-variable transition rule.

(3) On the other hand, my definition of DQP requires an algorithm to succeed on *every* theory satisfying the indifference and robustness axioms. So, that probably brings the power of the model back down much closer to the multiple-measurements model—but there still might be a gap between the two; I don’t know.

Now, regarding the role of “quantum” in the model: yes, you raise an excellent point. If you’re talking about the multiple-measurements model only, then you can perfectly well define a classical analogue of the model, and that model will *also* solve the collision problem in O(1) steps. The only difference is philosophical: many people choose to regard probability distributions as representing subjective ignorance only, but regard quantum states as more “out there” in the external world, so that it would make more sense to discuss hypothetical physical laws that let you measure them twice. (Other people disagree with those people, however.)

Anyway, the role of quantum is a lot clearer when you consider the hidden-variable model. First of all, if we were dealing with classical computation only, then the computation itself would give us the transition probabilities, so no one would even be tempted to make up “hidden-variable theories” to supply those probabilities (theories that turn out to have the consequence of letting us simulate the multiple-measurements model). Even more importantly, at a technical level, the way I manage to “juggle” the hidden variable between the two classical states |x⟩ and |y⟩ depends essentially on interference, and in particular on a quantum Fourier transform. (See the paper for details.)
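To make the classical analogue concrete, here’s a toy simulation (mine, purely illustrative): condition once on a “measured” value of f, then repeatedly resample the conditional distribution over x. A collision shows up after a constant number of re-measurements, regardless of n.

```python
import random

def collision_via_remeasurement(n, rng):
    """Toy classical 'measure-multiple-times' model.

    f is a random 2-to-1 function on {0, ..., n-1}. We hold a uniform
    distribution over (x, f(x)) pairs, 'measure' the f-register once, and
    then resample the conditional distribution over x until both preimages
    of the measured value have appeared.
    """
    # Build a random 2-to-1 function by pairing up the domain points.
    domain = list(range(n))
    rng.shuffle(domain)
    f = {}
    for v, (x, y) in enumerate(zip(domain[::2], domain[1::2])):
        f[x] = v
        f[y] = v

    # "Measure" the second register: condition on one value v.
    v = f[rng.randrange(n)]
    preimages = [x for x in range(n) if f[x] == v]  # exactly two of them

    # Repeatedly "measure" the conditioned first register.
    seen = set()
    samples = 0
    while len(seen) < 2:
        seen.add(rng.choice(preimages))
        samples += 1
    return samples  # a constant on average, independent of n

rng = random.Random(1)
counts = [collision_via_remeasurement(1000, rng) for _ in range(200)]
print(f"average re-measurements to see a collision: {sum(counts)/len(counts):.1f}")
```

(Of course, no efficient algorithm could actually build the conditional distribution by scanning all of f, as this toy does; the scan just plays the role of the hypothetical physics that hands you the post-measurement state.)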

Comment #27 July 12th, 2013 at 4:02 am

Ah, thanks for clarifying that. Honestly, the hidden-variable thing doesn’t interest me so much, but the multiple-measurements model sounds neat. 🙂 The hidden-variable-trajectory model just seems complicated; the multiple-measurements model seems more natural to me.

(These things and their associated complexity classes need better/formal names so we can talk about them more easily! Or at least, I think they do. I suppose I’m not actually going and proving any theorems about them…)

“The only difference is philosophical: many people choose to regard probability distributions as representing subjective ignorance only, but regard quantum states as more ‘out there’ in the external world, so that it would make more sense to discuss hypothetical physical laws that let you measure them twice.”

Yes, and you can probably count me among them! But hey, we’re just defining some models of computation, so who cares, right? 🙂

Search seems like a neat benchmark… classical (with or without randomness) is N, quantum is N^(1/2), quantum-multiple-measurements is N^(1/3), classical-multiple-measurements is ???. Hm, maybe I should think about that if I have any free time…

Comment #28 July 12th, 2013 at 4:59 am

Scott #27

Sniffnoy #24 should be Sniffnoy #25

Curious is #24, in case you missed it (I do not know if it is a reasonable question)

Comment #29 July 14th, 2013 at 3:52 pm

Scott, would you give some details on the Markov result? I’d like to look it up, though I don’t read Russian so a later compilation would be better than the original citation.

Comment #30 July 14th, 2013 at 3:55 pm

Is it http://en.wikipedia.org/wiki/Markov_brothers'_inequality ?

Comment #31 July 14th, 2013 at 4:16 pm

Yes, that’s the one! I don’t read Russian either. But the papers by Nisan and Szegedy and by Beals et al, or this talk I gave 5 years ago, should give you some places to start.

Comment #32 July 14th, 2013 at 6:48 pm

Thanks Scott!