## Barriers to proving P!=NP and moving this blog

Thanks, everyone, for your patience, and your numerous complaints about the *Technology Review* site! Currently, the folks at *TR* say they can do all the minor things people asked for, like adding permalinks to the titles and letting people include their URLs with their comments. On the other hand, they can’t make it so you can post comments without logging in, and they can’t decrease the size of the ad bar. (I suggested that they at least turn my sidebar into drop-down menus, thereby increasing the screen width available for the entries; they said they’d look into that.) Also, they can’t provide the full text in RSS (since God forbid, that might let people read the blog without seeing ads), although they *can* give the first 150 words or so.

As you can imagine, TR’s response has put me in a difficult position. From their perspective, they’ve been bending over backwards to accommodate me; from my perspective (and I gather from most readers’), their offer still falls short of acceptable. When I originally agreed to let them host me, I imagined that the blog would look just as it does now, with maybe a few unobtrusive ads here or there. I didn’t even think to ask about the RSS feed or the screen width available for entries.

And so, after weeks of introspection (well, mostly being tied up with other work), I’ve reached a decision: *I will continue to host my blog right here, on Bluehost, until TR comes up with something that both parties can live with.* I *like* the TR people and appreciate their interest, but I’m not in any particular hurry to move, especially if it means crippling this blog so that no one will read it. It’s true that Bluehost sucks, and that I no longer have time to be a webmaster — but once I get grant money, maybe I can pay someone to take care of these things for me.

Finally, since all this self-referentiality gets tiresome, here are the PowerPoint slides for a talk I gave at MIT last week, about recent joint work with Avi Wigderson on a new barrier to proving P≠NP. (Note: The day before the talk, PowerPoint trashed my file, and I had to recreate the entire presentation from memory. Always make backup copies! Excellent advice, in my opinion.)

Abstract:

Algebrization: A New Barrier in Complexity Theory

Any proof of P≠NP will have to overcome two barriers: relativization and natural proofs. Yet over the last decade, we have seen circuit lower bounds (for example, that PP does not have linear-size circuits) that overcome both barriers simultaneously. So the question arises of whether there is a third barrier to progress on the central questions in complexity theory.

In this talk we present such a barrier, which we call “algebraic relativization” or “algebrization.” The idea is that, when we relativize some complexity class inclusion, we should give the simulating machine access not only to an oracle A, but also to the low-degree extension of A over a finite field or ring.

We systematically go through basic results and open problems in complexity theory to delineate the power of the new algebrization barrier. We first show that all known non-relativizing results — both inclusions such as IP=PSPACE and MIP=NEXP, and separations such as MA_EXP ⊄ P/poly — do indeed algebrize. We next show that most open problems — including P versus NP, P versus BPP, and NEXP versus P/poly — will require non-algebrizing techniques, of which we currently have not a single example. In some cases algebrization seems to explain exactly why progress stopped where it did: for example, why we have superlinear circuit lower bounds for PromiseMA but not for NP.

We also exhibit a surprising connection between algebrization and communication complexity. Using this connection, we give an MA-protocol for the Inner Product function with O(√n log n) communication (essentially matching a lower bound of Klauck), and describe a pure communication complexity conjecture whose truth would imply P≠NP.

Comments welcome. We’ll hopefully have a writeup soon.

Comment #1 September 16th, 2007 at 4:25 pm

I’m not a theorist, but this appears to be quite an important result.

It’s too bad that ‘progress’ on P vs. NP generally consists of one step backwards followed by two steps back.

Comment #2 September 16th, 2007 at 5:06 pm

> I’m not a theorist, but this appears to be quite an important result.

Well, *I* think so! 🙂

> It’s too bad that ‘progress’ on P vs. NP generally consists of one step backwards followed by two steps back.

I think we should be proud of this. Those working on e.g. the Riemann hypothesis haven’t taken *nearly* as many steps back as we have.

Comment #3 September 16th, 2007 at 8:12 pm

> “It’s too bad that ‘progress’ on P vs. NP generally consists of one step backwards followed by two steps back.”

I guess that’s sort of like how, in golf, a lower score is a higher score for every other sport. In math and comp sci it appears that finding more barriers illuminates the problem, while in other areas (e.g., quantum gravity), that doesn’t seem to be true.

Comment #4 September 16th, 2007 at 8:51 pm

What another blog did in a similar situation:

http://freakonomics.blogs.nytimes.com/2007/08/22/the-last-word-for-now-on-our-rss-feed-an-excruciatingly-long-and-boring-post-that-will-please-exactly-no-one/

Also: wow, very exciting result!

Comment #5 September 16th, 2007 at 9:37 pm

Glad you’re keeping the site where it is and open. The possibility of user registration was making Shtetl-Optimized dangerously close to becoming Ghetto-Optimized.

Comment #6 September 16th, 2007 at 9:43 pm

Finding barriers or negative results is not taking steps backwards. It’s actually stepping forward, since the answer they have been looking for is a *proof*.

Comment #7 September 17th, 2007 at 1:12 am

I was going to refer to the freakonomics blog as to how the readers were exactly right about how horrible it would turn out. The quality of the comments has plummeted. And that’s pretty much the only reason to read blogs–Digg and Fark can give you links, but the aggregation of critical and supporting comments, by people motivated enough to contribute is crucial. Then there’s the maddening one paragraph teaser feed. I hate to rag on something that could earn you cash, or possibly educate the public (hmm, …) but I am glad you are staying here.

Comment #8 September 17th, 2007 at 8:22 am

Does this new result give some new connections between ordinary and the Blum-Shub-Smale type of complexity theory?

Comment #9 September 17th, 2007 at 8:24 am

Scott’s result, that proving P≠NP meets more barriers, is unusual to me, because mathematicians usually report that a conjecture has been fully or partially proved. They don’t often report meeting more barriers in their attempts to prove a conjecture. Anyway, I believe Scott’s result is an advance toward proving this famously hard conjecture.

Comment #10 September 17th, 2007 at 8:37 am

Apt: Interesting question! I suppose you could say we study “Blum-Shub-Smale query complexity” — i.e., that model which is to BSS as ordinary query complexity is to Turing machines — and show that for many problems it’s not that different from ordinary query complexity. (Technically, we’re mostly interested in finite field extensions, but most of our query complexity lower bounds by “designer polynomials” also work over the reals or complex numbers. Also, we don’t allow division whereas BSS do.)

Maybe our results say something deeper regarding BSS, but if so I can’t think of it offhand.

Comment #11 September 17th, 2007 at 11:32 pm

“Those working on e.g. the Riemann hypothesis haven’t taken nearly as many steps back as we have.”

Probably because they are going in circles, and speaking of circles they should throw out the real analysis and use cycles.

Landau’s function g(n) is the largest order of any permutation on n elements. That’s equivalent to finding an integer partition of n whose parts have the largest least common multiple.

Giving a tight upper bound on g(n) is equivalent to the Riemann Hypothesis.
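For concreteness, g(n) is easy to compute for small n by dynamic programming over partitions. This is a toy sketch of mine (the function name `landau` is illustrative, not from the comment):

```python
from math import gcd
from functools import lru_cache

def landau(n):
    """Landau's function g(n): the maximum lcm of the parts over all
    integer partitions of n (equivalently, the largest order of any
    permutation of n elements)."""
    @lru_cache(maxsize=None)
    def best(k, m):
        # Max lcm achievable by partitioning k into parts of size <= m.
        if k == 0:
            return 1
        result = 1
        for part in range(1, min(k, m) + 1):
            sub = best(k - part, part)
            result = max(result, part * sub // gcd(part, sub))
        return result
    return best(n, n)

print([landau(n) for n in range(1, 11)])
# [1, 2, 3, 4, 6, 6, 12, 15, 20, 30]
```

For example, g(7) = 12 comes from the partition 3 + 4, whose parts have lcm 12.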

Comment #12 September 18th, 2007 at 3:36 pm

I’d stay clear of the adverts just so’s i could use this cool logo 😀

http://www.adfreeblog.org/

Comment #13 September 18th, 2007 at 6:22 pm

How does this reformulation into comm complexity question help, knowing that all the comm complexity proofs are natural? Any insights?

Comment #14 September 18th, 2007 at 7:43 pm

anonymous: You mean the communication complexity conjecture implying P≠NP? I have no idea how it helps, and expect that it doesn’t. I just thought it was neat. 🙂

Comment #15 September 19th, 2007 at 8:53 am

Scott, is there any connection to algebraic circuit complexity lower bounds? Is there any barrier there? Does this work relate to algebraic circuits?

Comment #16 September 19th, 2007 at 10:42 am

Resolventeer: Good questions! We know much less about barriers in the algebraic setting than in the Boolean one. In particular, people have tried for a while to extend Razborov-Rudich to algebraic circuits and haven’t managed to do it.

Now that I think about it, though, I think that our algebrization barrier *does* say something about algebraic circuit lower bounds. In particular, it’s known that if we could prove superpolynomial algebraic circuit lower bounds for certain explicit functions, then we could derandomize polynomial identity testing. (Why? Because in that case we could find an explicit set of points such that, if two polynomials were equal on all of those points but unequal elsewhere, then at least one of the polynomials could not have small algebraic circuits.)

But my and Avi’s results imply that any derandomization of (multivariate) polynomial identity testing will require non-algebrizing techniques. In other words, it can’t proceed by treating the polynomials as black boxes, using only the fact that they have low degree and not anything else about how they were computed.
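For context, the black-box randomized identity test being discussed can be sketched as the standard Schwartz-Zippel test. This is my own illustrative code, not the paper’s construction; it treats the two polynomials purely as black boxes, which is exactly the kind of access the result says cannot be derandomized by algebrizing techniques:

```python
import random

def polynomials_equal(p, q, num_vars, field_prime, trials=20):
    """Randomized black-box polynomial identity test (Schwartz-Zippel).

    p and q are black boxes: callables taking a tuple of field elements.
    If p == q identically we always accept; if p != q and both have
    degree <= d, each trial catches the difference with probability
    at least 1 - d/field_prime.
    """
    for _ in range(trials):
        point = tuple(random.randrange(field_prime) for _ in range(num_vars))
        if p(point) % field_prime != q(point) % field_prime:
            return False
    return True

# Toy example over GF(101): (x+y)^2 equals x^2 + 2xy + y^2,
# but differs from x^2 + y^2.
P = 101
f = lambda v: (v[0] + v[1]) ** 2
g = lambda v: v[0] ** 2 + 2 * v[0] * v[1] + v[1] ** 2
h = lambda v: v[0] ** 2 + v[1] ** 2

print(polynomials_equal(f, g, 2, P))  # True
print(polynomials_equal(f, h, 2, P))  # False, with overwhelming probability
```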

Comment #17 September 19th, 2007 at 11:47 am

Scott, you are da man.

An announcement of a great result, to start off your new job with a bang.

Comment #18 September 19th, 2007 at 3:54 pm

What does it mean to algebrize an oracle, say, A? Is that a polynomial approximation of A? You say it is an extension, but that is not clear… perhaps I am dense.

Also, this is my understanding of the proof techniques; I was wondering if my understanding is correct, or off track. Cheers,

Simon (part-time complexity theorist)

1. propositional logic = the actual machine instance

2. algebrize = reduce #1 to an algebra that is a machine description rather than an enumeration… we could say a sequence s is in language L, for instance… introduce combinatorics to say, does a machine exist?

3. relativize: to provide an oracle for #1.

4. naturalize: to provide a sufficiently large truth table to the boolean functions defined in #2.

5. algebrization: provide an algebra of low-degree polynomials derived from the oracle?

Comment #19 September 19th, 2007 at 3:54 pm

Hey Scott, some more questions –

What’s up with you finding a result in purely classical computation?

Do you think my unrelated xors conjecture might have algebrization limits? That would explain why such a seemingly trivial conjecture is unapproachable.

Comment #20 September 19th, 2007 at 9:44 pm

Simon, I’m afraid I don’t understand most of your question.

Algebrizing is something you do to a result or conjecture: for example, “class C is contained in class D.” What it means (basically) is that you feed the C machines an oracle A, and feed the D machines an oracle ~A, and ask whether the result or conjecture is still true. Here ~A denotes a *low-degree extension* of A: that is, a multivariate polynomial over some field or ring that coincides with A when restricted to the Boolean cube.

For more details, you can either (1) reread slides 9-12 where I define these things, (2) read the paper (which will hopefully be out in a month or two!), or (3) ask Amit Sahai in your department.
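To make the “low-degree extension” concrete, here is a toy sketch of mine (not code from the paper) of one standard choice, the multilinear extension: the unique multilinear polynomial over GF(p) that agrees with a Boolean function on the cube but can be queried at non-Boolean points.

```python
from itertools import product

def multilinear_extension(f, n, p):
    """Return the multilinear extension ~f of f: {0,1}^n -> {0,1},
    as a function over GF(p).

    ~f(x) = sum over Boolean z of f(z) * prod_i chi_{z_i}(x_i),
    where chi_1(x) = x and chi_0(x) = 1 - x.  This is the unique
    multilinear polynomial agreeing with f on the Boolean cube.
    """
    def extension(x):
        total = 0
        for z in product((0, 1), repeat=n):
            term = f(z)
            for xi, zi in zip(x, z):
                term = term * (xi if zi == 1 else (1 - xi)) % p
            total = (total + term) % p
        return total
    return extension

# Example: f = AND on 2 bits; its extension is just x1*x2 mod p.
p = 17
AND = lambda z: z[0] & z[1]
ext = multilinear_extension(AND, 2, p)
print(ext((1, 1)))   # 1: agrees with AND on the Boolean cube
print(ext((3, 5)))   # 15 = 3*5 mod 17: a non-Boolean query
```

An oracle machine with access to ~A can make exactly these kinds of non-Boolean queries, which is what distinguishes algebrization from ordinary relativization.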

Comment #21 September 19th, 2007 at 10:54 pm

Hey Bram,

> What’s up with you finding a result in purely classical computation?

Yeah, I know — this so-called “classical” model of computation is bizarre and counterintuitive, but once in a while I like to stick my neck out and study something crazy.

> Do you think my unrelated xors conjecture might have algebrization limits? That would explain why such a seemingly trivial conjecture is unapproachable.

I hadn’t seen your conjecture, but I googled it and found this page, where you give not one but two beautiful conjectures about linear circuits over GF[2]!

For proving lower bounds on the number of XOR gates, I think the relevant barrier is not relativization or algebrization but natural proofs. More concretely, I conjecture that there exists a set of Boolean functions f:{0,1}^n→{0,1}^n such that

(i) every function in the set can be computed with O(n) XOR gates, but

(ii) a random function from the set can’t be efficiently distinguished from a random linear function over GF[2].

(To construct such a set, one might use the “linear-size superconcentrators” that were discovered in the 1970’s, with noise added in various places.)

If you’ll allow me O(n log n) XOR gates instead of O(n), then I’m even more confident of my conjecture, since it seems like the set of *all* functions computable with ~n log n XOR gates has the properties I want.

Now, if my conjecture is true, then it immediately implies that *no natural proof can ever give a superlinear lower bound on the number of XOR gates*. (Or, if you only assume the n log n version of my conjecture, then any lower bound better than Ω(n log n).) And *that*, I think, is the main barrier to solving your unrelated XORs problem.

Comment #22 September 19th, 2007 at 11:23 pm

Bram, a few further comments:

(1) By a counting argument, it’s clear that *most* linear functions f:{0,1}^n→{0,1}^n require Ω(n^2/log n) XOR gates, and this can also be shown to be tight (i.e., every function f:{0,1}^n→{0,1}^n that’s computable with XOR gates is computable with O(n^2/log n) of them). But proving a superlinear lower bound on the number of XOR gates for *any* explicit function has been a notorious open problem since the 70’s.

(2) Before today I hadn’t even seen a *candidate* for an explicit linear function requiring more than ~n log n XOR gates — that’s what I love about your unrelated XOR’s problem. (It’s possible, though, that such problems were known to others.) I did know of some explicit linear functions that seem to require Ω(n log n) XOR gates — for example the Fourier Transform, and the linear transform whose matrix looks like the Sierpinski gasket.

(3) I’m much more skeptical of your first conjecture: that every linear function has a minimal circuit consisting only of XOR gates. An extremely analogous conjecture — that every monotone function has a minimal circuit consisting only of AND and OR gates — was found in the 80’s to be badly false (when Razborov proved an exponential lower bound on monotone circuits for bipartite matching, a problem that’s known to have polynomial-size non-monotone circuits).

(4) Alas, even if your conjecture is false (I mean false in an asymptotic sense — I won’t sweat the constants 🙂 ), it will be extremely hard to disprove, since that would require proving a superlinear lower bound on the number of XOR gates. On the other hand, supposing your conjecture were true, I don’t see any barrier to proving it: one could imagine an explicit algorithm that converts any circuit computing a linear function into an equivalent and no larger circuit consisting only of XOR gates.

(5) Regarding what you call “splitters,” the usual convention is to assume these are available for free — i.e. that every gate has unbounded fanout. It’s easy to see that this changes the circuit size by at most a constant factor, at least in the case where every gate has bounded fanin.

Comment #23 September 20th, 2007 at 2:48 am

Scott, glad you found my conjectures insightful. I’d like to think that I had the skills to have been able to hack it as a theoretician if certain aspects of my personality hadn’t made it inevitable that I’d wind up in industry.

My own hand-waving intuition about natural proofs is that the argument is sort of like ‘If there’s a decent symmetric cipher [which I of course believe] then you can’t show lower bounds on circuit complexity by induction, because you could always take a circuit and make it encrypt then decrypt the intermediate value, thus totally obliterating the invariant.’ Of course natural proofs aren’t exactly like that — the argument is rigorous ‘n’ stuff, and has a clever trick to relate a problem to itself — but it’s close enough that I think the intuition about what it says is handy.

I basically had this realization while trying to demonstrate my first conjecture, and completely gave up. I simply can’t see how to make any progress without either having an invariant which is extremely large and difficult to compute, of which I have no examples, or having some kind of universal deobfuscation algorithm. While it’s intuitive to me that in practice software deobfuscation is easy, coming up with completely universal algorithms for it is a non-starter (although there is that paper on the impossibility of obfuscation which I don’t understand; dunno if it might in any way be helpful).

My vague understanding of algebrization is that it’s a superset of naturalization, but I could be totally wrong on that.

In any case, I’d be fascinated with either a positive or negative result to my first conjecture – positive because it would likely be a theoretically important result, and negative because I’d really like to know what problem defeats it. (It isn’t obvious to me at a glance that bipartite matching is monotone, by the way, although I know the polynomial time algorithm for it.)

It isn’t at all obvious to me that nearly quadratic gates are necessary for any linear mapping at all, mostly because it was quite a bit of work for me to come up with an explicit example which appears to require more than quasilinear. It seems to make sense that random functions would have that property, although it’s kind of odd that almost all functions should have the property of being more difficult than most of the functions you can actually name. In any case, the unrelated xors problem has a whole load of special structure which it seems like ought to be helpful, and using only xor gates makes the sort of obfuscation in natural proofs not as big of an issue because xor gates can’t encrypt. So it seems like my second conjecture might be approachable, but no related result has thus far been proven, which isn’t an encouraging sign.

If my intuitions about what naturalization says are wrong and it can demonstrate difficulties in gates limited to only using xors please let me know.

Those stupid log(n)s which keep turning up in complexity are really annoying. It would be so much more convenient if things with a log(n)^c were part of the same complexity class, then asymptotics would be a lot more robust. Certainly if I’m conjecturing a lower bound of n^1.5, then any result which doesn’t demonstrate at least a lower bound of n*log(n)^c for all c doesn’t really count as progress.

Comment #24 September 20th, 2007 at 3:17 am

Thanks for keeping the site open. I don’t understand why anyone would expect anyone else to log in to comment on a blog; to me it seems to be against the entire spirit of the enterprise. I’m sorry Bluehost sucks, but I hope you can hold out until TR is willing to meet you 100% of the way, or until the heat death of the universe is back in vogue, whichever comes first.

Comment #25 September 20th, 2007 at 4:02 am

> My vague understanding of algebrization is that it’s a superset of naturalization, but I could be totally wrong on that.

You are. 🙂 Algebrization is a superset of relativization, and seems incomparable with naturalization. (More precisely, we know relativizing results that don’t naturalize, and naturalizing results for which it’s not even clear what it would mean for them to relativize or algebrize.)

> It isn’t obvious to me at a glance that bipartite matching is monotone, by the way

If you add more edges to a graph, you can only increase the number of matchings.

> It isn’t at all obvious to me that nearly quadratic gates are necessary for any linear mapping at all

Here’s a proof: there are ~2^(n^2) distinct linear mappings on n bits, but only ~((T+n)^2)^T circuits with T XOR gates. Setting these equal yields T = Ω(n^2/log n).

> it’s kind of odd that almost all functions should have the property of being more difficult than most of the functions you can actually name.

Tell me about it! That’s exactly the situation we’ve been in since Shannon’s work in the 1940’s. There’s nothing specific here to XOR circuits: we know, for example, that almost every n-bit Boolean function requires circuits of size Ω(2^n/n), yet no one can “name” a single such function. (Well, strictly speaking, it depends what you mean by “name”: you could, for example, talk about “the lexicographically first Boolean function with maximal circuit complexity.”)
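The counting argument is easy to sanity-check numerically. This is a back-of-the-envelope sketch of mine (the function names are illustrative, and the circuit count is the same crude over-estimate used in the argument): compare log-counts and find the smallest T at which the number of circuits can cover all linear maps.

```python
import math

def log2_num_linear_maps(n):
    """log2 of the number of linear maps {0,1}^n -> {0,1}^n:
    one n x n matrix over GF(2) each, so 2^(n^2) maps in total."""
    return n * n

def log2_num_circuits(n, T):
    """log2 of a crude upper bound on XOR circuits with T gates:
    each gate picks a pair of the at most T + n available wires,
    giving at most ((T+n)^2)^T circuits."""
    return T * 2 * math.log2(T + n)

# Smallest T at which the circuit count can cover all linear maps,
# shown next to n^2 / log2(n):
for n in (8, 16, 32, 64):
    T = 1
    while log2_num_circuits(n, T) < log2_num_linear_maps(n):
        T += 1
    print(f"n={n}: T={T}, n^2/log2(n) ~ {n * n / math.log2(n):.0f}")
```

The threshold T grows like n^2/log n (up to a constant factor), matching the stated bound: below that, most linear maps simply have no circuit.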

> using only xor gates makes the sort of obfuscation in natural proofs not as big of an issue because xor gates can’t encrypt.

That’s incorrect: in the sense relevant for natural proofs, I think there’s every reason to believe xor gates *can* encrypt. The question is this: by starting from the n-by-n identity matrix, and performing O(n) row operations of the form

row i := row i + row j (mod 2)

(chosen by whatever randomized strategy you want), can you produce a matrix which no polynomial-time adversary can distinguish from a random invertible matrix over GF[2]?

If the answer is yes, then there can be no natural proof of a superlinear lower bound for XOR circuits — for such a proof would yield a distinguishing strategy contrary to assumption.
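For anyone who wants to experiment with this candidate, here is a minimal sketch of mine (illustrative code, not from any paper): build a matrix by random row operations over GF(2), and confirm it stays invertible. Whether the result can be distinguished from a uniformly random invertible matrix is, of course, the open question.

```python
import random

def row_op_matrix(n, num_ops, seed=None):
    """Start from the n x n identity over GF(2) and apply num_ops random
    row operations  row i := row i + row j (mod 2), with i != j.

    With num_ops = O(n), each resulting matrix is invertible and is
    computable with O(n) XOR gates; the question above is whether a
    polynomial-time adversary can tell it apart from a uniformly
    random invertible matrix."""
    rng = random.Random(seed)
    M = [[1 if r == c else 0 for c in range(n)] for r in range(n)]
    for _ in range(num_ops):
        i, j = rng.sample(range(n), 2)
        M[i] = [a ^ b for a, b in zip(M[i], M[j])]
    return M

def is_invertible_gf2(M):
    """Check invertibility by Gaussian elimination over GF(2)."""
    n = len(M)
    A = [row[:] for row in M]
    for col in range(n):
        pivot = next((r for r in range(col, n) if A[r][col]), None)
        if pivot is None:
            return False
        A[col], A[pivot] = A[pivot], A[col]
        for r in range(n):
            if r != col and A[r][col]:
                A[r] = [a ^ b for a, b in zip(A[r], A[col])]
    return True

M = row_op_matrix(8, 3 * 8, seed=0)
print(is_invertible_gf2(M))  # True: row operations preserve invertibility
```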

(Keep in mind that, as long as you’re talking about natural proofs, it doesn’t matter how much structure there is in the function you’re trying to prove is hard. Of course a non-natural proof might take advantage of that structure — indeed, that’s almost what it *means* for a proof to be non-natural.)

> Certainly if I’m conjecturing a lower bound of n^1.5, then any result which doesn’t demonstrate at least a lower bound of n*log(n)^c for all c doesn’t really count as progress.

I generally agree with you about the annoyingness of log factors, but in this particular case you’re completely, utterly wrong. If someone managed to prove an Ω(n log n) — or even Ω(n log log log log n) — lower bound on the number of XOR gates needed to compute an explicit function, it would be universally (and rightly) considered one of the biggest breakthroughs in the history of the field.

Comment #26 September 20th, 2007 at 4:52 am

Is it known where Geometric Complexity (i.e. Mulmuley) techniques are placed, in terms of the Algebra/Natural/Relativ-izing world ?

Comment #27 September 20th, 2007 at 5:16 am

Carlo: No. (At this point, I don’t think anyone really knows what the *techniques* are, let alone what their limitations are!)

Comment #28 September 21st, 2007 at 1:44 pm

Bram: Avi Wigderson tells me that your first conjecture has been “well-known” since the 70’s — the question being phrased as, do multiplication gates ever help in evaluating linear functions over a finite field? I’d never heard of it though.

Comment #29 September 21st, 2007 at 8:32 pm

Scott, thanks for letting me know that the conjecture is “well-known”. That is indeed a slightly more general statement of the same problem. I wonder if my unrelated xors problem specifically is original.

Thank you for pointing out that the problem naturalizes. Odd how insidious that concept is.

My thought on how one might be able to bust naturalization is to make an invariant which is too large and hard to calculate for naturalization limits to apply, but whose value it’s still somehow possible to calculate for the very specific output you’re trying to provide a lower limit to. Basically, the problem of intermediate gates being able to encrypt then decrypt the value in question could be overcome by having a value whose computation implicitly involves brute forcing all that encryption and decryption.

The tricky part is finding such a function which actually is computable for a special value. Here’s a function which works, and doesn’t even have to grow fast, but has some obvious problems: ‘number of gates necessary to produce this data’ 🙂

Comment #30 September 21st, 2007 at 10:50 pm

> My thought on how one might be able to bust naturalization is to make an invariant which is too large and hard to calculate for naturalization limits to apply, but which it’s still somehow possible to calculate its value for the very specific output you’re trying to provide a lower limit to.

Yep! 🙂

Comment #31 September 22nd, 2007 at 5:09 am

Wow, that’s really cool, Scott. I’ll have to sink my teeth into this when I get the chance. I remember groping blindly towards something vaguely like what I think I’ve understood from the abstract. I love when papers come out like that. Some people might consider their efforts scooped, but I just accept such papers as free work. I seem to be as much interested in answers to my questions as in answering them myself…

Comment #32 September 22nd, 2007 at 7:24 pm

I may be totally off the hook here, but is there any tie-in to Robinson/Matiyasevich’s results? Or are the polynomials involved there not low-order enough?

I should have paid more attention in that extra-credit class…

Comment #33 September 22nd, 2007 at 8:39 pm

Hey Scott,

Good to see that your idea has gained so much heft since you told me about it. I do think this is the “right” approach to formalizing the distinction between what we’re capable of proving at present, and what we’d like to prove.

One aspect of the definition that might cause some issues is the asymmetry between Boolean oracle access and algebraic oracle access. For instance, is it true that if A in B and B in C algebrize, then so does A in C? The fact that relativization composes is very convenient.

Is there some conceptual reason why you went with this definition rather than the one where both sides get access to the algebraic oracle? Or is it simply that the analogues of your results in the alternative model seem harder to prove?

Not sure if you know this, but Lance Fortnow defined a very similar model in his paper “The Role of Relativization in Complexity Theory”. He defines a notion of “algebraic extension” and shows (Theorem 5.6) that IP = PSPACE relative to the algebraic extension of any Boolean oracle. I wonder how that result relates to the question in your paper of whether IP = PSPACE algebrizes when both sides are given access to the algebraic oracle.

There are advantages to your asymmetric definition – Madhu’s question is a very nice one. In general, it might be interesting to study how “structurally complex” A can get as a function of B so that NP^A in P^B still holds for some B. But the most useful thing about your work, for me, is the (implicit) identification of a “grey zone” where we still have a chance with the old techniques…

Comment #34 September 22nd, 2007 at 11:11 pm

> I may be totally off the hook here, but is there any tie-in to Robinson/Matiyasevich’s results?

Not that I can see, although Peter Sarnak asked the same question when I gave the talk at IAS…

Comment #35 September 23rd, 2007 at 3:20 am

Hey Rahul,

Sorry your interesting message got held up in my spam queue!

Yes, Russ Impagliazzo and Lance himself both pointed me to that section of Lance’s survey paper. Avi and I will be sure to credit Lance with having the basic idea of a low-degree oracle — even though (1) he didn’t adopt the asymmetric definition that seems necessary for making statements about *all* oracles rather than just some of them, and (2) he didn’t seem to be interested in algebraic oracle *separations*, which are our main results. (We’ll also, of course, credit you with asking the question that led directly to our work! 🙂 )

As for the asymmetry in oracle access, the “conceptual” reasons to go this way are:

(1) It’s clear that C^A ⊆ D^A implies C^A ⊆ D^~A, but it’s not clear that it implies C^~A ⊆ D^~A.

(2) What we’re trying to model is starting from a Boolean circuit C (possibly involving oracle gates), and then reinterpreting C as an arithmetic circuit over a larger field or ring. If C can *already* query the low-degree extension, then it might do so in non-arithmetic ways (i.e., it might access individual bits in the binary representation of ~A(x) for various x’s). And in that case, we’d have to take a low-degree extension of a Boolean function derived from the original low-degree extension, and so on ad infinitum. Indeed, that’s essentially what Lance does, and is the reason why his definition is more complicated than ours. The more complicated definition models a sort of “recursive arithmetization,” which is possible in principle but which isn’t used in any existing result that I’m aware of. My results with Avi don’t say anything about the limits of this recursive arithmetization; it would be extremely interesting if one could do so.

You raise a good point about composition. What one can say is the following: if C^A ⊆ D^~A and D^A ⊆ E^~A for all A, ~A, then C^A ⊆ E^~~A, where ~~A denotes the “double algebrization” of A as above. However, I can’t think of a single instance in our paper where we actually need this fact (i.e., where we have to compose two algebrizing inclusions, neither of which is also relativizing).

Lastly: I completely agree that one of the most useful applications of this work is to indicate which open problems *might* still be resolved with algebrizing techniques! Here’s one of my favorite examples: is PH ⊆ PP?

Comment #36 September 24th, 2007 at 8:34 pm

Apologies in advance for what may be a silly question, but does the proof that SL=L algebrize?

Comment #37 September 25th, 2007 at 8:39 am

Jonathan: It’s not a silly question; in fact I’m grateful you forced me to think about it. The trouble is that with certain zoo animals (L, SL, PSPACE, …), there’s ambiguity in what it even *means* to feed them an oracle.

Having said that, there’s a natural oracle access model under which L=SL not only algebrizes but also relativizes. The model is the following: an L^A machine gets a polynomially-long, write-only oracle tape; and whenever it gets an answer, the oracle tape is reset to 0. An SL^A machine gets the same, and also is not allowed to make nondeterministic transitions while the oracle tape is being written to. Under this model, the problem just boils down to searching an undirected graph in logspace, but where you can use the oracle to tell you what the neighbors of a given vertex are. It’s clear that Reingold’s proof carries through to this case with no changes.

It’s an interesting question what happens if the SL^A machine is allowed to make nondeterministic transitions while the oracle tape is being written to.

Comment #38 September 26th, 2007 at 1:09 pm

One thing that has always bothered me about relativization is the notation. It’s pretty annoying when you can write both “A = B” and also “A^C \neq B^C”. (E.g., when A = IP, B = PSPACE.)

When one sees a relativization statement including the expression “A^B”, is A to be interpreted as a set of languages or as a class of machines?

Comment #39 September 26th, 2007 at 1:14 pm

Ryan: A class of machines. The way to think about it is that relativization is not something you do to a complexity class — it’s something you do to a *definition* of a class.

Comment #40 September 27th, 2007 at 10:01 am

Hi Scott and Everyone,

I showed your O(sqrt(n)logn)-bit AM-communication protocol for the inner product function today at Tsinghua. It went nicely. I think it’s a nice result, and a simple and cool example for how polynomials can help “spread” an error.

By the way, I’m sure you know it, but it might be worth noting that (if I’m not wrong) your protocol generalizes such that for any two numbers b, m with b*m ≥ Ω(n log^2 n), Merlin can send m bits to Alice, and Bob can send b bits to Alice, such that Alice can compute the inner product or prove Merlin false (whp).

By the way, does Klauck’s result prove an Ω(n) lower bound on the value of b*m? What do you think the truth is: n, or n log^2 n (or, most likely in my eyes, n log n)?

Comment #41 September 27th, 2007 at 10:44 am

Elad: Yes, the protocol generalizes in the way you describe. (Incidentally, it also generalizes to an O(n^{1/3} log n) MAM protocol, an O(n^{1/4} log n) MAMAM protocol, and so on.)

I *think* Klauck’s lower bound can also be generalized as you describe, but I’d have to check. I would conjecture that the truth is simply n — my only evidence being that, in the past, it’s often been possible to get rid of log factors in communication protocols.

Comment #42 October 3rd, 2007 at 2:37 am

Ah, so you’re still here. That’s why there haven’t been any new blogposts on Technology Review for a while. Perhaps you should post an item over _there_ telling folk to come back over _here_.