## Schrödinger’s cat is hunting masked chickens

A commenter on my last post — who, since he or she chose not to provide a name, I’ll take the liberty of calling Dr. Doofus McRoofus — offers the following prediction about quantum computing:

> [U]nless quantum computing can deliver something practical within the next five to ten years it will be as popular then as, say, PRAMs are today.

Four reactions:

- String theory has been immensely popular for over 20 years, among a much larger community, with zero prospects for delivering anything practical (or even any contact with experiment, which — ahem — some of us have had for a decade). Reasoning by analogy, if quantum computing became popular around 1995, that should at least put us in the upper range of McRoofus’s “five to ten years.”
- For better or worse, the funding outlook for quantum computing is much less depressing right now than for classical theoretical computer science. Many of us have been making the case to DARPA and NSF that classical complexity should continue to be funded in part because of its relevance for quantum computing.
- The right analogy is not between quantum computing and PRAMs; it’s between quantum computing and parallel computing. Specific architectures, like linear optics and PRAMs, have gone in and out of fashion. Modes of computation, like nondeterminism, randomness, parallelism, and quantumness, have instead just gotten agglomerated onto the giant rolling snowball of complexity. As long as the snowball itself continues to tumble down the hill (shoot — bad metaphor?), I don’t see any reason for this to change.
- I’m no good at predicting social trends, so perhaps time will prove me wrong and Dr. McRoofus right. But speaking for myself, I’d go insane if I had to pick research topics based on popularity. I became interested in quantum computing because of a simple trilemma: either (i) the Extended Church-Turing Thesis is false, (ii) quantum mechanics is false, or (iii) factoring is in classical polynomial time. As I put it in my dissertation, all three possibilities seem like wild, crackpot speculations, but at least one of them is true! The question of which is true will remain until it’s answered.

Comment #1 June 12th, 2006 at 1:15 am

> String theory has been immensely popular for over 20 years, among a much larger community, with zero prospects for delivering anything practical.

As I’m sure you are aware, there is a backlash against string theory as we speak due to the lack of results.

> For better or worse, the funding outlook for quantum computing is much less depressing right now than for classical theoretical computer science.

Currently the funding situation for quantum computing is enviable, just as it was for some other now out-of-fashion areas of computing. If quantum computing delivers, its researchers will have nothing to worry about; if it doesn’t, funding will decrease substantially.

I’m no expert on QC but I was under the impression that there is a distinct possibility that QM is right but error/noise issues make QC theoretically feasible but practically non-realizable. Is that not a fourth alternative to your predictions?

By the by, the comment was not meant as a diss of QC. The same applies to any subfield of theory. Off the top of my head I can think of at least two subfields for which NSF funding was substantially reduced (above and beyond the general decline of theory) due to the lack of practical results.

Comment #2 June 12th, 2006 at 1:30 am

> As I’m sure you are aware, there is a backlash against string theory as we speak due to the lack of results.

Yes, I’m aware — my point was that it took more than 20 years for the backlash to start! That still gives us some time. 🙂

> I’m no expert on QC but I was under the impression that there is a distinct possibility that QM is right but error/noise issues make QC theoretically feasible but practically non-realizable. Is that not a fourth alternative to your predictions?

“Theoretically feasible but practically non-realizable” always strikes me as a weasel phrase — either quantum computing is compatible with physical law or it’s not! But maybe what you meant is “compatible with physical law, but too hard to build in the next (say) 5,000 years.” Yes, that’s a possibility, but it’s not a fourth alternative. It falls squarely under alternative (i).

Anyway, thanks for providing an occasion to comment on these issues!

Comment #3 June 12th, 2006 at 8:20 am

Is it not a possibility that the QC computer produces the right answer, in accordance with the laws of QM, but that reading of said solution cannot be achieved within an error bound of less than epsilon, thus effectively making QC into a randomized computation model?

Would this not be a different alternative to the three ones you listed?

Comment #4 June 12th, 2006 at 9:08 am

anonymous: The key question you haven’t confronted is, *why* can’t the solution be read within epsilon? Is it because the requisite measurements are fundamentally impossible to perform? Because the probabilities don’t satisfy the Born rule? Because a new law of physics makes it impossible to get the decoherence rate below .2 per qubit per time step? Any of these possibilities would involve dramatic changes to textbook quantum mechanics.

In short, it’s easy to accept both QM and the impossibility of QC, until one considers any specific hypotheses about *where* QC breaks down! This observation is what led to my paper quant-ph/0311039, and in particular the notion of a “Sure/Shor separator.”

Comment #5 June 12th, 2006 at 11:57 am

> For better or worse, the funding outlook for quantum computing is much less depressing right now than for classical theoretical computer science. Many of us have been making the case to DARPA and NSF that classical complexity should continue to be funded in part because of its relevance for quantum computing.

For worse. Of course I think that QC should have healthy funding. But if it’s a funding fad that has rocketed past the rest of theoretical CS — not just complexity theory but algorithms too — then it makes me feel dishonest to work in this area.

> Off the top of my head I can think of at least two subfields for which NSF funding was substantially reduced (above and beyond the general decline of theory) due to the lack of practical results.

The NSF should fund intellectual merit, not practical results. I’ll grant you that they are related: practical results are a sound argument for intellectual merit. They are just not the only argument.

What worries me is when NSF, like NIH and NASA, funds neither practical results nor intellectual merit, but glitter and colossi. It would be sad if quantum computing became like “Chaos: The Making of a New Science”.

Comment #6 June 12th, 2006 at 12:03 pm

> Is it not a possibility that the QC computer produces the right answer, in accordance with the laws of QM, but that reading of said solution cannot be achieved within an error bound of less than epsilon, thus effectively making QC into a randomized computation model?

It’s not just a possibility, it’s a certainty. QC *is* a randomized computation model. Classical randomized computation is already quite useful.(*) Since QC is based on non-commutative probability, it’s randomization on steroids.

(*) Even though the conjecture is that P = BPP, the exponents of the polynomial complexity bounds almost certainly differ. In other words, the conjecture is that randomized computation can buy you a quadratic acceleration.
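(A concrete illustration of this footnote — mine, not the commenter’s: the classic case where randomness buys a polynomial speedup is Freivalds’ algorithm, which verifies an n×n matrix product in O(n²) randomized time per trial, versus the obvious deterministic route of redoing the whole multiplication. A minimal sketch:)

```python
import random

def freivalds(A, B, C, trials=20):
    """Probabilistically check whether A @ B == C for n x n matrices.

    Each trial multiplies by a random 0/1 vector r and compares
    A(Br) with Cr, using O(n^2) arithmetic instead of a full matrix
    multiplication. If A @ B != C, a single trial misses the
    discrepancy with probability at most 1/2, so `trials` rounds
    push the error probability below 2^-trials."""
    n = len(A)

    def matvec(M, v):
        return [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]

    for _ in range(trials):
        r = [random.randint(0, 1) for _ in range(n)]
        if matvec(A, matvec(B, r)) != matvec(C, r):
            return False  # definitely A @ B != C
    return True  # A @ B == C with high probability
```

For example, passing the identity matrix for all three arguments returns True, while perturbing one entry of C is detected with overwhelming probability.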

Comment #7 June 12th, 2006 at 1:16 pm

> In other words, the conjecture is that randomized computation can buy you a quadratic acceleration.

Maybe a lot more than that! But do we have any particular problem for which a quadratic speedup is *conjectured*? (The best provable speedup in the RAM model should just be the query complexity gap — n versus n^0.753… for the AND-OR tree with log n levels.)

Comment #8 June 12th, 2006 at 1:39 pm

We engineers feel that fundamental research in both string theory and complexity theory is absolutely vital to progress in engineering, without regard for whether string theory is the correct physical theory, or whether quantum computers are ever built.

But rather than post another long, bloviating essay about quantum system engineering — for which past sins I apologise! — the UW QSE Group has launched a daily QSE Journal.

By the end of the century there will either be many engineers and scientists, or few of either. So we all have a vested interest in each other’s prosperity.

Comment #9 June 12th, 2006 at 2:38 pm

Suppose one could not attain decoherence past some function of Planck’s constant. It doesn’t seem to me that this would be totally earth-shattering. After all, we already know that the minimum amount of energy required to perform an irreversible computation is a function of Planck’s constant.

Comment #10 June 12th, 2006 at 2:54 pm

> We engineers feel that fundamental research in both string theory and complexity theory is absolutely vital to progress in engineering, without regard for whether string theory is the correct physical theory, or whether quantum computers are ever built.

But this does not mean that they should be funded at any arbitrarily high level. The amount of money is finite and thus has to be invested in those areas that produce the highest scientific returns. Certain theoretical studies have such wide-ranging implications that they deserve funding even if the results are unlikely to be of direct applicability. Others have proven to be sterile and devoid of promise, and a curtailing of funding can be the catalyst for a healthy re-examination of the so-far fruitless methods and techniques in use.

I can think of several areas in different sciences that would benefit from a reexamination of methods and techniques, including, in particular, multibillion-dollar bevatrons.

Comment #11 June 12th, 2006 at 2:59 pm

> For worse. Of course I think that QC should have healthy funding. But if it’s a funding fad that has rocketed past the rest of theoretical CS — not just complexity theory but algorithms too — then it makes me feel dishonest to work in this area.

Well put.

> The NSF should fund intellectual merit, not practical results. I’ll grant you that they are related: practical results are a sound argument for intellectual merit. They are just not the only argument.

I agree. I quoted the NSF cases simply to illustrate that curtailed funding in the absence of results is not just a theoretical possibility.

> What worries me is when NSF, like NIH and NASA, funds neither practical results nor intellectual merit, but glitter and colossi. It would be sad if quantum computing became like “Chaos: The Making of a New Science”.

And then there would be a huge backlash, with funding being curtailed well past its just point.

Comment #12 June 12th, 2006 at 4:25 pm

> Suppose one could not attain decoherence past some function of Planck’s constant. It doesn’t seem to me that this would be totally earth-shattering. After all, we already know that the minimum amount of energy required to perform an irreversible computation is a function of Planck’s constant.

Minimal amount of energy? My understanding is that there is a tradeoff between energy and gate speed, but no minimal amount. Certainly there is no theoretical reason that we can’t use nuclear levels for our quantum computation, and these will have gigantic gate speeds. Similarly, we don’t know of any hard limits on decoherence, only of tradeoffs (after all, decoherence is nothing more than a system and its environment performing an information-processing task).

Further, it does appear that, for example, ion traps are well below the threshold for quantum computation (at least when it comes to decoherence; for control it is a bit dicier). So the limit can’t just be a simple single-quantum-system effect, but has to come from some sort of limit arising from the collective behavior of many qubits. It’s hard to do this without smashing quantum theory to smithereens, i.e., possibility (ii).

Comment #13 June 12th, 2006 at 5:17 pm

Scott, I’m not sure why you dismissed the analogy to PRAMs/parallel computation so quickly. How many papers studying parallel complexity classes have you seen lately? Maybe a better question: how many *courses* teaching parallel computation are you aware of?

Comment #14 June 12th, 2006 at 5:46 pm

Scott:

> Maybe a lot more than that! But do we have any particular problem for which a quadratic speedup is conjectured?

I suspect that you know more about this business than I do. My intuition is that Adleman’s proto-derandomization argument is close to the truth. If it were the truth, would that be a quadratic speedup over determinism? That is what I (perhaps falsely) inferred from an aside in some paper.

If no one can even conjecture a scenario in which this upper bound on acceleration is optimal, then that’s interesting. Maybe a quadratic speedup is more than you get.

Do you know what is considered the likely conjecture for the time complexity of primality?

Comment #15 June 12th, 2006 at 8:22 pm

Another comment:

> I became interested in quantum computing because of a simple trilemma: either (i) the Extended Church-Turing Thesis is false, (ii) quantum mechanics is false, or (iii) factoring is in classical polynomial time.

Concerning (ii), quantum probability has thousands of tons of evidence in its favor (to quote Ed Witten). (iii) is too timid. It’s not just that factoring would be in classical polynomial time; integer period-finding in general would be in classical polynomial time.

That leaves (i). In my view, the extended Church-Turing Thesis is the weak spot. I do not see why it has to be true, except to assuage our egos that neurological computing is as good as it gets, up to a constant factor.

After all, even without quantum computing, we have strong (if not quite complete) evidence that the constant factor is extremely bad.

Actually I agree with anonymous 1:15 that your trichotomy doesn’t address a fourth possibility: that QC is censored by some unknown failure of fault tolerance. I don’t intend this as the weasel answer that it’s just not practical; rather, it seems conceivable that some unknown law of thermodynamics stops the fun. I think that this is also a wild possibility, but not as wild as either (ii) or (iii), nor as weak as the abject dismissal of (i).

Comment #16 June 12th, 2006 at 8:35 pm

Scott, are you sure your trilemma has no errors in it? If it is valid then I would say that factoring is in polynomial time and we just don’t know it.

Comment #17 June 12th, 2006 at 9:05 pm

> I would say that factoring is in polynomial time and we just don’t know it.

What a wonderful hypothesis. It leads us to ask ourselves, under what general circumstances is a theorem true, and yet we *can’t* prove it?

The obvious answer is, when the proof is too long to find by a mixture of human intuition and exhaustive search. Like, for example, the endgame tablebases in chess, which now contain forced wins of more than 500 moves — no human player can fathom these endgames!

If we believe that the intellectual market is just as efficient as the economic market (which is to say, surprisingly efficient), then the Theory of Intellectual Efficiency makes two predictions:

(1) none of the Clay Institute Millennium Prizes will be claimed by a short proof.

(2) if a polynomial-time factoring algorithm is found, the code won’t be short.

Can someone (better conversant with mathematical history than I am) provide counterexamples, in the form of a much-studied problem, outstanding for 50 years or more, for which a short proof was eventually found?

I’m sure these counterexamples exist, and it would be illuminating (especially for younger mathematicians) to know how the obstruction was eventually overcome.

Comment #18 June 12th, 2006 at 10:04 pm

Primality.

Comment #19 June 12th, 2006 at 10:07 pm

> Can someone (better conversant with mathematical history than I am) provide counterexamples, in the form of a much-studied problem, outstanding for 50 years or more, for which a short proof was eventually found?

Well, the impossibility of duplicating the cube was open for 2,000 years, and the proof is only a couple of pages.

A deterministic polytime primality algorithm was open for 50 years, and the proof is only ~6 pages (and the algorithm only ~10 lines of pseudocode).
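(The algorithm in question is the Agrawal–Kayal–Saxena test. The following is my own rough, unoptimized Python transcription of its standard presentation — not the authors’ pseudocode — included just to show how compact it is:)

```python
from math import gcd, isqrt, log2

def is_perfect_power(n):
    # Is n = a^b for some integers a > 1, b > 1?
    for b in range(2, n.bit_length() + 1):
        a = round(n ** (1.0 / b))
        if any(c > 1 and c ** b == n for c in (a - 1, a, a + 1)):
            return True
    return False

def euler_phi(r):
    result, m, p = r, r, 2
    while p * p <= m:
        if m % p == 0:
            while m % p == 0:
                m //= p
            result -= result // p
        p += 1
    if m > 1:
        result -= result // m
    return result

def poly_mul(a, b, r, n):
    # Multiply polynomials modulo (x^r - 1, n).
    out = [0] * r
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                out[(i + j) % r] = (out[(i + j) % r] + ai * bj) % n
    return out

def poly_pow(base, e, r, n):
    result = [1] + [0] * (r - 1)
    while e:
        if e & 1:
            result = poly_mul(result, base, r, n)
        base = poly_mul(base, base, r, n)
        e >>= 1
    return result

def aks(n):
    if n < 2 or is_perfect_power(n):
        return False
    # Find the smallest r with multiplicative order of n mod r > (log2 n)^2.
    maxk = int(log2(n) ** 2)
    r = 2
    while True:
        if gcd(r, n) == 1:
            val, ok = n % r, True
            for _ in range(maxk):
                if val == 1:
                    ok = False
                    break
                val = val * n % r
            if ok:
                break
        r += 1
    for a in range(2, min(r, n)):
        if 1 < gcd(a, n) < n:
            return False
    if n <= r:
        return True
    # Check (x + a)^n == x^n + a  mod (x^r - 1, n) for sufficiently many a.
    for a in range(1, int(isqrt(euler_phi(r)) * log2(n)) + 1):
        lhs = poly_pow([a % n, 1] + [0] * (r - 2), n, r, n)
        rhs = [0] * r
        rhs[n % r] = 1
        rhs[0] = (rhs[0] + a) % n
        if lhs != rhs:
            return False
    return True
```

This naive version is far too slow for cryptographic sizes; the point is only that the whole deterministic test fits on a page.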

But for the most part, your “Theory of Intellectual Efficiency” does match experience. Though it does immediately raise the question of whether no short proofs exist for hard theorems, or whether short proofs exist and we’re not finding them.

(Complicating matters further is that proofs of the same theorem get shorter over time, since people increasingly take for granted things that once had to be spelled out in detail.)

Comment #20 June 12th, 2006 at 10:10 pm

> A deterministic polytime primality algorithm was open for 50 years, and the proof is only ~6 pages (and the algorithm only ~10 lines of pseudocode).

OK cheshire cat, you win priority for this observation (by 2 minutes 37 seconds). 🙂

Comment #21 June 12th, 2006 at 10:11 pm

> Scott, are you sure your trilemma has no errors in it? If it is valid then I would say that factoring is in polynomial time and we just don’t know it.

LOL. To each his own.

Comment #22 June 12th, 2006 at 10:17 pm

> Rather, it seems conceivable that some unknown law of thermodynamics stops the fun. I think that this is also a wild possibility, but not as wild as either (ii) or (iii), nor as weak as the abject dismissal of (i).

Yeah, alright. So I ought to amend (ii) from “quantum mechanics is false” to “current low-energy physical theories are wrong.”

Comment #23 June 12th, 2006 at 10:47 pm

> So I ought to amend (ii) from “quantum mechanics is false” to “current low-energy physical theories are wrong.”

That depends on what you mean by low-energy physical theories. If you mean what physicists call few-body theory, in this case the Schrödinger equation or QED, then it’s monumentally unlikely that it’s wrong. If you mean many-body theory, the extrapolation of the Schrödinger equation to relevant physical systems, then everyone knows that it’s not a finished business. It’s still extremely unlikely (in my view) that there is some sweeping new thermodynamic principle, but it seems just barely possible.

Comment #24 June 12th, 2006 at 11:19 pm

> How many papers studying parallel complexity classes have you seen lately? Maybe a better question: how many courses teaching parallel computation are you aware of?

Hey, I took one at Berkeley. 🙂 But seriously: what about Ishai et al.’s results on pseudorandom generators in NC0, or the whole sequence of breakthroughs on arithmetic in TC0? In general, any time we talk about circuit depth (which we do a lot), we’re implicitly talking about the resource of parallelism.

Comment #25 June 12th, 2006 at 11:56 pm

For those who just plain enjoy complexity for its own sake (like me), here is the recently discovered 517-move longest line in chess, from the Computer Chess forum.

That a community of people exists that *cares* about chess endgames is just as pleasing to me as the endgames themselves.

Comment #26 June 13th, 2006 at 12:24 am

I wasn’t concerned so much with answering the question as with providing evidence *for* the Theory of Intellectual Efficiency 🙂

Comment #27 June 13th, 2006 at 12:59 am

> For those who just plain enjoy complexity for its own sake (like me), here is the recently discovered 517-move longest line in chess, from the Computer Chess forum.

If you really enjoy complexity, you should see how long a string of maximal Kolmogorov complexity you can create in a day. Then the next day, beat your record. All you need is a coin and a chair!
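(An empirical footnote to the coin-and-chair proposal — mine, not the commenter’s: coin-flip strings really are incompressible in practice. Feed a general-purpose compressor like zlib a block of random bytes and it hands back slightly more than it was given:)

```python
import random
import zlib

# Simulate a day at the chair: 80,000 fair coin flips,
# packed into 10,000 bytes. Seeded for reproducibility.
random.seed(2006)
coin_flips = bytes(random.getrandbits(8) for _ in range(10_000))

compressed = zlib.compress(coin_flips, level=9)

# A random string has no structure for the compressor to exploit,
# so the "compressed" version is at least as long as the original
# (zlib falls back to stored blocks plus a few bytes of header).
print(len(coin_flips), len(compressed))
```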

Comment #28 June 13th, 2006 at 3:55 am

> In my view, the extended Church-Turing Thesis is the weak spot.

Indeed. The regular Church-Turing Thesis has a lot of support in reality. Every hare-brained model of computation we have devised so far (and we have lots of them) is equivalent to the Turing machine.

On the other hand, we have devised models that are exponentially slower than others; it just so happens that the current local max is only polynomially faster than the single-tape Turing machine… wait a minute… the PRAM (with a sufficiently large number of processors) is exponentially faster than the Turing machine. So unless you start adding arbitrary restrictions to what counts as a valid computation model, the ECTH isn’t even true as we speak.

Comment #29 June 13th, 2006 at 8:00 am

> …or the whole sequence of breakthroughs on arithmetic in TC0?

For the unenlightened, can you say which results you are referring to?

Comment #30 June 13th, 2006 at 8:23 am

I was thinking mostly of (1) circuits for division in TC0 (see this survey by Eric Allender), and (2) the Naor-Reingold pseudorandom function in TC0 (with its consequences for natural proofs, the intractability of learning neural networks, etc.). But there have been other exciting TC0-related results, for instance that of Allender and Gore that permanent is not in uniform TC0.

Comment #31 June 13th, 2006 at 9:58 am

> If you really enjoy complexity, you should see how long a string of maximal Kolmogorov complexity you can create in a day.

We engineers do this all the time! And, Kolmogorov-wise, our strings are terrifically long … no joking.

A few years ago, I had an exchange of email with Chaitin, asking whether it was not true, that for an experiment to be random, it was sufficient for it to have many possible outcomes.

For example, in our quantum microscope experiment, we perform about 10^13 binary measurements per second (as photons pass through our interferometer).

The number of possible outcomes in one second of data is therefore 2^(10^13) — a number so large as to ensure that almost all of our experimental data records are algorithmically incompressible.
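(The “almost all” here is the standard counting argument, added for reference: fewer than a 2^(-k) fraction of n-bit strings have descriptions shorter than n − k bits, simply because there aren’t enough short descriptions to go around. The arithmetic, not tied to any particular experiment:)

```python
from fractions import Fraction

def compressible_fraction(k):
    """Upper bound on the fraction of n-bit strings that have a
    description shorter than n - k bits (for any n).

    There are at most 2^0 + 2^1 + ... + 2^(n-k-1) < 2^(n-k)
    descriptions shorter than n - k bits, versus 2^n strings,
    so the fraction is below 2^(n-k) / 2^n = 2^-k."""
    return Fraction(1, 2 ** k)

# Fewer than one string in a million can be compressed by more
# than 20 bits, no matter how clever the compressor is.
print(compressible_fraction(20))  # 1/1048576
```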

We use this example to explain to our engineering students “why” quantum mechanics is random. The point being that quantum measurement is *necessarily* random, by Chaitin’s and Kolmogorov’s algorithmic definition of randomness.

For our young engineering students, the sole philosophical mystery of quantum mechanics then becomes: why is Hilbert state space so large? They are much more comfortable with this mystery of scale than with a mystery of randomness (even though they are the same mystery).

The practical question for our young engineers then becomes, how can we compress this exponentially large state space? Such compression is a technique that we engineers call model order reduction (MOR), and it is becoming a very useful tool in quantum system engineering.

Chaitin regarded this line of reasoning as completely obvious, and not harmful to students. I wonder how many Shtetl Optimized fans would agree?

Comment #32 June 13th, 2006 at 10:56 am

John: In future comments, I respectfully ask that you stop using the phrase “we engineers,” and instead use “I.” Thanks! 🙂

Comment #33 June 13th, 2006 at 11:05 am

Regarding the issue of whether factoring is in P or BPP but the proof is intrinsically difficult (with the proviso that I’m not at all knowledgeable about number theory, nor is what I’m going to say very novel):

A lot of simple statements in number theory have really complex or unknown proofs. Take (i) Goldbach’s conjecture and (ii) the Green-Tao theorem.

What’s notable is that these theorems have interesting heuristic arguments in their favor:

We would expect Green-Tao to hold, and Goldbach to hold at least for all but finitely many values of n, if we replaced the set of prime numbers in their statement with a random subset of the natural numbers where each m is independently included with probability 1/ln(m). This gives a sequence of numbers having roughly the density of the primes, according to the Prime Number Theorem.

So Green-Tao and Goldbach can be interpreted as saying that the primes are ‘not too atypical’ or ‘pseudorandom’ for a certain testing procedure on sequences.

I wouldn’t then claim that such proofs are necessarily as hard as say P vs BPP, but it seems like the difficulty of derandomizing all algorithms via pseudorandom sequences is at least supported cumulatively by the difficulty of proving that individual sequences are pseudorandom against individual tests.

As far as factoring goes, effective heuristic algorithms may rest on similar heuristic arguments about the ‘pseudorandom’ structure of numbers. E.g., Pollard’s Rho heuristic does, at least in arguing about its expected efficiency. If some such heuristic worked in polytime for all numbers, the proof might be way longer than the pseudocode.
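(To make the Pollard example concrete — my sketch of the textbook rho heuristic, not anything from the thread: its expected running time of roughly n^(1/4) rests precisely on treating the iterates of x ↦ x² + c mod n as if they were random, i.e., on a birthday-paradox assumption about the ‘pseudorandom’ structure of numbers:)

```python
from math import gcd

def pollard_rho(n, c=1):
    """Find a nontrivial factor of a composite n.

    Iterates x -> x^2 + c (mod n) with Floyd cycle detection.
    The running-time analysis treats the iterates as random, so a
    collision modulo a prime factor p is expected after about
    sqrt(p) steps -- a heuristic, not a theorem."""
    if n % 2 == 0:
        return 2
    x = y = 2
    d = 1
    while d == 1:
        x = (x * x + c) % n          # tortoise: one step
        y = (y * y + c) % n          # hare: two steps
        y = (y * y + c) % n
        d = gcd(abs(x - y), n)
    # d == n means the walk degenerated; retry with a new constant.
    return d if d != n else pollard_rho(n, c + 1)

print(pollard_rho(8051))
```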

Comment #34 June 13th, 2006 at 1:50 pm

Hi Greg,

> My intuition is that Adleman’s proto-derandomization argument is close to the truth. If it were the truth, would that be a quadratic speedup over determinism? That is what I (perhaps falsely) inferred from an aside in some paper.

I know Adleman’s trick, but I don’t know how to use it to do derandomization of the sort we’re talking about (i.e., simulating BPP in P). If you look at, say, the Nisan-Wigderson derandomization (the one that works if E requires exponential circuits), I’m pretty sure the polynomial blowup will be *huge* (though I’m not sure exactly what it is).

> Do you know what is considered the likely conjecture for the time complexity of primality?

I don’t know if anyone’s even willing to conjecture about such things. 🙂 I think the best algorithms we have are n^3.

Comment #35 June 13th, 2006 at 2:02 pm

> I know Adleman’s trick, but I don’t know how to use it to do derandomization of the sort we’re talking about (i.e., simulating BPP in P).

One of the constructions, more properly due to Bennett and Gill I think, is that BPP = P relative to a random oracle. You replace the random choices in the BPP algorithm by oracle data, you run a linear number of trials, and you accept a majority-rule answer. A.s. (with respect to the choice of oracle), the process yields only a finite number of wrong answers. These can be patched as special cases.

By this argument, O(n^alpha) BPP complexity becomes O(n^{alpha+1}) deterministic complexity. Calling it a quadratic speedup is perhaps misleading.

My suggestion is to replace the random oracle by a cryptographically strong pseudo-random sequence (if those exist). This is a non-rigorous step, of course.
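(A side note on the majority-vote step above — my illustration, not the commenter’s: taking the majority of independent trials drives the error down exponentially, by a Chernoff-style bound. The exact binomial computation shows how fast a per-trial error of 1/3 shrinks:)

```python
from fractions import Fraction
from math import comb

def majority_error(t, p_wrong=Fraction(1, 3)):
    """Exact probability that the majority vote of t independent
    trials is wrong, when each trial errs with probability p_wrong.

    This is the upper tail of a Binomial(t, p_wrong): the majority
    is wrong iff more than t/2 of the trials err."""
    return sum(
        comb(t, i) * p_wrong**i * (1 - p_wrong)**(t - i)
        for i in range(t // 2 + 1, t + 1)
    )

# Per-trial error 1/3; the voted answer's error collapses quickly.
for t in (1, 11, 51):
    print(t, float(majority_error(t)))
```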

Comment #36 June 13th, 2006 at 2:41 pm

> John: In future comments, I respectfully ask that you stop using the phrase “we engineers,” and instead use “I.” Thanks! 🙂

I’ve been meaning to say the same thing for several threads now, but wasn’t sure if it would be considered rude.

Comment #37 June 13th, 2006 at 2:46 pm

> I was thinking mostly of (1) circuits for division in TC0…

But these results are all at least 10 years old!

Going back to the original motivation for the question: although circuit depth and parallelism are fundamentally “the same thing,” I think it’s still fair to say that research on PRAMs has died out while research on circuit depth has not (though results are difficult to obtain).

Comment #38 June 13th, 2006 at 5:41 pm

Scott said:

> I respectfully ask that you stop using the phrase “we engineers.”

No problem, and Scott, thank you especially for your very kindly phrased reply. Your social grace & good manners are a big reason this blog is so lively.

I confess I had been hoping for a reply along the lines of “Hey, I’m an engineer too!” But it never happened … sometimes I feel like either the last Bachman’s warbler, or the first.

But, oh well, we all belong to “the same ancient Catholic Church to which every mother’s son and soul of us belong; the great and everlasting First Congregation of this whole worshipping world; we all belong to that.”

Comment #39 June 13th, 2006 at 11:33 pm

> Your social grace & good manners are a big reason this blog is so lively.

Personally, this is always the sentiment that strikes me when I see Scott. Man, he is so socially graceful.

Comment #40 June 14th, 2006 at 12:32 am

I can think of a couple of problems whose solutions ultimately turned out to be short. De Branges settled the Bieberbach conjecture with a 100-page proof, but there are now two-page proofs. The Quillen-Suslin theorem was originally conjectured by Serre, and basically led people to create the field of algebraic K-theory in trying to settle it, but the actual solution is short enough that it appears in Lang’s Algebra.

Comment #41 June 14th, 2006 at 4:18 am

I wish I had the qualifications to pick out the schnoods among you.

Comment #42 June 14th, 2006 at 1:21 pm

Anonymous said …

> Scott. Man, he is so socially graceful.

Hey, Dave Bacon and Scott and I (and several other people) shared lunch at the faculty club last month, and I have to say, it was a pretty darn enjoyable lunch. We talked about the interface between complexity theory, quantum computing, the “discipline-that-must-not-be-named”, and the history of ideas. It was much superior to the usual faculty club lunch discussions, IMHO.

Comment #43 June 14th, 2006 at 3:46 pm

> Hey, Dave Bacon and Scott and I (and several other people) shared lunch at the faculty club last month, and I have to say, it was a pretty darn enjoyable lunch.

I saw Scott in the hallway once. I almost squealed, but at the last moment I was able to collect myself, at least long enough for him to pass. I couldn’t concentrate for the rest of the day.

They say he used to take the stairs three at a time, and his urine smelled like butterflies.

Comment #44 June 14th, 2006 at 4:42 pm

If I may slip in a few not-very-related technical complexity theory questions:

1) As the Complexity Zoo explains (in the entry for NLOG), NL with an oracle is defined by having separate input and witness tapes. The L machine is not allowed to read the witness tape until all oracle calls are done. In addition, the oracle tape is replaced by a non-storing data port to the oracle.

The second rule actually weakens the correct rule for a large space class such as PSPACE, because then the oracle tape counts towards the space limit. The first rule makes sense for an unrealistic class such as NL or PL. My question is, is the rule consistently applied to RL and BPL as well, or are they sometimes treated as part of a hierarchy below BPP rather than below P?

Likewise, would the right rule for BQL (which is not in the Zoo and may not yet be in any paper) be that all quantum superpositions should come after oracle calls?

2) On the same theme, an NLOG machine is still allowed to make oracle calls after reading the witness. So I gather that its oracle calls are not space limited? (If they were, then NLOG would translate to NLINSPACE under simple exponential padding of the input.)

3) Is there a natural way to extend the grammar classes REGEXP, CFL, and CSL to include oracle “calls”? There is a theorem that CSL = NLINSPACE, but I do not know if this should be viewed as a non-relativizing result.

Comment #45 June 14th, 2006 at 11:33 pm

> They say he used to take the stairs three at a time, and his urine smelled like butterflies.

I *did* used to take the stairs three at a time sometimes.

Comment #46 June 15th, 2006 at 8:45 am

“his urine smelled like butterflies” Interesting. Could you please cite some journal articles on this topic? I Googled “smelled like a butterfly” and could not find any previous use of this phrase. Did you invent it? Scott, do you have any evidence that your urine smells like a butterfly? And if so, is it always the case, or only after you eat something odd? Scott, do you eat odd things? Damn, this blog really makes you think.

Comment #47 June 15th, 2006 at 12:31 pm

Butterflies are actually notoriously foul smelling.

Comment #48 June 15th, 2006 at 12:49 pm

> Scott, do you have any evidence that your urine smells like a butterfly?

I do not, and (modulo a reasonable assumption about butterflies) indeed have some evidence to the contrary.

Comment #49 June 15th, 2006 at 5:18 pm

Whenever rudeness breaks out on the USMC open discussion forum, a soothing picture of bikinis and/or tattoos and/or weapons-firing (preferably all three in one photo) serves to quiet things down.

For the present audience, perhaps this animation (sound recommended) may suit the case.

Comment #50 June 22nd, 2006 at 7:07 pm

I think that quantum computation is a good thing. Building the quantum computer is only a detail. We can see these days lots of nice mathematics motivated directly or indirectly by quantum computation. Not to mention the huge advances in experimental physics propelled by quantum computation (e.g., manipulation of entanglement).