## Projects aplenty

*When ambitious students ask me for projects to work on, I usually kick myself for not having a list of favorite open problems that I can simply point them to. Sure, half a year ago I listed some of my favorite open problems in quantum complexity theory—but what can I give the majority of students who are more classically-inclined? The following haphazard list is my attempt at an answer. Some of the “problems” are open-ended or ill-defined, some are actually implementation projects, some are no doubt trivial or solved, others are no doubt hopelessly difficult, and a couple are shamelessly filched from MathOverflow or CS Theory StackExchange. Almost all are missing motivation and context. Without further apologies…*

1. Create a zoo of cryptographic primitives (one-way functions, one-way permutations, pseudorandom generators, etc.) and the relationships between them, paralleling the Complexity Zoo.

2. Build a public library of 3SAT instances, with as few variables and clauses as possible, that would have noteworthy consequences if solved. (For example, instances encoding the RSA factoring challenges.) Investigate the performance of the best current SAT-solvers on this library.

3. Find an explicit n (the smaller the better) for which you can prove that the value of BB(n) (the n^{th} Busy Beaver number) is independent of ZF set theory. More generally, find a way to enumerate the proofs of ZF set theory that requires a shorter or simpler program than the “obvious” proof-enumerating program.

4. Call a cellular automaton “physically universal” if any polynomial-time transformation on any subset of n bits can be implemented by choosing a suitable initial configuration of the surrounding poly(n) bits. (Note that my definition is potentially more inclusive than Janzing’s.) Find interesting examples of cellular automata that you can prove are or are not physically universal.

5. Prove explicit lower bounds on the number of arithmetic operations needed to compute the permanents and determinants of 3×3 and 4×4 matrices. In the 4×4 case, can you obtain a separation between the permanent and the determinant?

6. Are there proofs with (say) n^{2} symbols, in a proof system of your choice, for which (a) there exist proofs of the same statements with n symbols, but (b) finding the shorter proofs is computationally intractable?

7. Call a set of k-by-k unitary matrices U_{1},…,U_{k} “linear-optics universal,” if for any n, any n-by-n unitary matrix U, and any ε>0, it’s possible to approximate U to within error ε by applying some finite set of U_{i}’s to various ordered lists of k of the n indices. Give necessary and sufficient conditions for a set of unitaries to be linear-optics universal.

8. How hard is it to sample a (nearly) uniformly-random n-by-n invertible matrix over GF(2)? Clearly it can be done in matrix multiplication time, but can we give evidence that it can’t be done in less?

9. Is “collinearity logic” in NP? In other words: given a collection of n points in the Euclidean plane, together with a list of triples of points that should be collinear and a list of triples that should *not* be collinear, is the problem of deciding whether the requirements are consistent in NP? (It follows from known results about the existential theory of reals that this problem is in PSPACE; I thank Peter Shor for that observation.)

10. Given a weighted bipartite graph, is there a polynomial-time algorithm to decide whether or not there are two perfect matchings with the same weight?

11. Give nontrivial examples of problems that are complete for PromiseBPP. Could approximating the permanent of a nonnegative matrix be an example of such a problem? Alternatively, can that problem be solved in randomized NC?

12. Given an explicit description of a Boolean circuit C of size (say) n^{3}, and promised there exists a circuit of size (say) n^{2} that computes almost the same function as C, how hard is it to find the smaller approximating circuit? Can we give cryptographic evidence that this problem is hard? What additional assumptions about C make the problem easy (ideally, easy for reasons that require looking at the structure of C, rather than just treating it as a black box)?

13. What is the randomized one-way communication complexity of the Group Membership Problem (in which, given a finite group G known to both players, Alice knows a subgroup H≤G, Bob knows an element x of G, and Alice’s goal is to send Bob a short message that enables him to decide whether x is in H)?

14. Study the lower bounds on Manifestly-Orthogonal Tree Size in my paper Multilinear Formulas and Skepticism of Quantum Computing. In particular, do these lower bounds evade the Razborov-Rudich natural proofs barrier?

15. Prove an oracle separation between BPP and P^{BPNC}. (Likewise, prove an oracle separation between BQP and BPP^{BQNC}.)

16. Are there plausible pseudorandom functions computable by ACC^{0} circuits?

17. Prove a strong anti-concentration theorem for the permanent of a matrix of iid Gaussian entries.

18. Given the truth table of a Boolean function f:{0,1}^{n}×{0,1}^{m}→{0,1}, are there efficient algorithms to compute (or approximate) the randomized and quantum one-way communication complexities of f?

19. Classify the possible sets of classical reversible gates acting on bits, by the sets of transformations that they generate. (I.e., what is the analogue of Post’s lattice in this setting?) As a warmup, classify the possible sets of classical reversible gates that act linearly over GF(2) (like the NOT and CNOT gates).

20. Do there exist probability distributions D_{1},D_{2} over n-bit strings such that (D_{1}^{2}+D_{2}^{2})/2 (an equal mixture of two independent samples from D_{1} and two independent samples from D_{2}) is efficiently samplable, even though D_{1} and D_{2} themselves are *not* efficiently samplable? (This is closely-related to a beautiful question posed by Daniel Roy, of whether the de Finetti Theorem has a “polynomial-time analogue.”)

21. Is BB(n) (the n^{th} Busy Beaver number) odd infinitely often? Is it decidable whether BB(n) is odd?

22. Show there are tasks that Turing machines with (d+1)-dimensional tapes can solve polynomially faster than Turing machines with d-dimensional tapes.

23. Extend my results from A Counterexample to the Generalized Linial-Nisan Conjecture to show that Σ_{2}P^{A} ≠ Π_{2}P^{A} with probability 1, relative to a random oracle A.

24. Given a function f:[N]→[N], which is promised to be either one-to-one or two-to-one, what’s the optimal MA-protocol for proving f is one-to-one (i.e., what’s the optimal tradeoff between the size of the witness and the number of queries needed to verify it)?
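For problem 8, a natural baseline (far from matrix-multiplication time, but a useful starting point) is rejection sampling: a uniformly random matrix over GF(2) is invertible with probability ∏_{i≥1}(1−2^{−i}) ≈ 0.289 independent of n, so drawing uniform matrices and keeping the first invertible one yields exactly the uniform distribution on GL(n,2) after about 3.5 tries in expectation. A sketch in Python, with rows stored as n-bit masks and the rank check done by Gaussian elimination over GF(2) (all function names here are mine, not from the post):

```python
import random

def rank_gf2(rows, n):
    """Rank over GF(2) of a matrix given as a list of n-bit row bitmasks."""
    rank = 0
    for col in range(n - 1, -1, -1):
        # Find a pivot row with a 1 in this column.
        pivot = next((i for i in range(rank, len(rows)) if rows[i] >> col & 1), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i] >> col & 1:
                rows[i] ^= rows[rank]  # clear this column in every other row
        rank += 1
    return rank

def random_invertible_gf2(n, rng=random):
    """Rejection-sample a uniformly random invertible n-by-n matrix over GF(2).

    Each accepted matrix is exactly uniform over GL(n, 2); the expected
    number of trials is about 1/0.289 < 4, independent of n."""
    while True:
        rows = [rng.getrandbits(n) for _ in range(n)]
        if rank_gf2(rows[:], n) == n:
            return rows
```

The rank check costs O(n^3) bit operations here (or matrix-multiplication time with fast elimination); the open question is whether any evidence rules out going below that.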

Comment #1 May 26th, 2011 at 7:53 pm

You ask for a separation between arithmetic circuit size of perm and det for 4×4. Why not 3×3 (is it known for 3×3)?

Comment #2 May 26th, 2011 at 8:14 pm

Thanks for the list, Scott, but I’m looking for questions a little more challenging–ones that might take a week or two of serious thought to solve, rather than just a few hours over a long weekend. Is there any chance you could post some serious problems for those of us in this position?

Comment #3 May 26th, 2011 at 8:14 pm

> Show there are tasks that Turing machines with (d+1)-dimensional tapes can solve polynomially faster than Turing machines with d-dimensional tapes.

This would be pretty huge. If NTIME(n) is contained in DTIME(n^c) (say, for 1-dimensional multitape TMs) then there is a fixed b (dependent on c, but independent of d!) such that every d-dimensional TM running in time t can be simulated by a 1-dimensional TM running in time t^b. (This follows from an O(t)-time 1-dimensional simulation of t-time d-dimensional TMs that uses a constant number of alternations.) So a nontrivial polynomial separation of d+1 and d would at least imply a nontrivial polynomial time lower bound on SAT.

Anyway, let me not discourage students from working on P versus NP — by all means, do it!

Comment #4 May 26th, 2011 at 8:19 pm

Josh: I did an estimate years ago, and n=4 was the smallest value of n for which a circuit to compute Det using Gaussian elimination had fewer gates than a circuit to compute Det using an exponential-time algorithm (37 vs. 45). (See slides 6 and 7 in the PowerPoint presentation that I linked to.) So a permanent vs. determinant separation for n=3, even if true, wouldn’t seem to say anything of greater interest…

Comment #5 May 26th, 2011 at 8:25 pm

Thanks, Ryan! I wasn’t aware of that simulation. (I suppose it only works for multitape TMs, though, so one could still hope for a separation for single-tape TMs? Or is a single-tape separation trivial for this problem?)

Comment #6 May 26th, 2011 at 8:29 pm

Buzz Meeks #2: Uh, Ryan Williams already pointed out that one of my problems (under at least one interpretation) would require proving superlinear lower bounds for SAT. If you can do that over a few hours this weekend, please do!

Comment #7 May 26th, 2011 at 8:36 pm

The collinearity problem is exactly as hard as the existential theory of the reals (this easily follows from a closer look at Shor’s reduction of the existential theory of the reals to stretchability). So determining whether the collinearity problem lies in NP is equivalent to placing any of a large (and growing) number of other problems into NP: rectilinear crossing number (Bienstock), intersection graphs of segments (Kratochvil, Matousek), Brouwer’s fixed point theorem (Schaefer, Stefankovic) to name just some examples. I’ve been trying to argue that asking whether one of these problems is in NP is misleading, since we’re really asking whether a whole complexity class lies in NP. And the answer will probably require some heavy real algebraic geometry. Even the much much simpler sum of square roots problem is not known to be in NP.

Comment #8 May 26th, 2011 at 8:45 pm

Thanks so much, Marcus! Alex Arkhipov suggested the collinearity problem to me, and I’m sure he’ll be interested to learn it’s yet another *Monty Python and the Holy Grail* rabbit.

Comment #9 May 26th, 2011 at 9:36 pm

I am pretty sure that some lower bounds (for d+1 versus d dimensions) are known for the one-tape version of the question, using communication complexity / crossing sequence arguments. There are definitely nontrivial simulations between the two, and I would think those’d be tight in the one-tape case.

Comment #10 May 26th, 2011 at 10:03 pm

Wow, nice list of problems.

One thing about the odd busy beaver numbers problem: that’s highly dependent on the model of computation. One could contrive models of computation where every machine which halts halts in an odd number of steps.

Comment #11 May 26th, 2011 at 10:22 pm

The abstract of “Simulations among multidimensional Turing machines” by Michael Loui seems to cite a known lower bound for how well a d-dimensional multi-tape TM can simulate a (d+1)-dimensional multi-tape TM, and then gives a very closely-matching upper bound. I can’t access the full version to provide any more details than that.

Comment #12 May 26th, 2011 at 10:25 pm

Xamuel: Yes, and that model-dependence is *precisely* why I got interested in this question! Turing-universality is wonderful, but it’s also incredibly *coarse*: the very fact that so many models are Turing-equivalent seems to be inviting us to make finer distinctions among those models. Of course complexity theory is one way to make such distinctions, but I’ve become very interested lately in what *other* ways there are. In other words: what questions with a “computability” flavor can we ask, for which the answer actually requires delving into the mechanics of a specific machine model (as one silly example, whether or not programs can halt after an even number of steps)?

Comment #13 May 27th, 2011 at 12:37 am

Actually, it should be called a cryptography jungle, not a “cryptography zoo.”

Comment #14 May 27th, 2011 at 12:56 am

@Marcus: I think the collinearity problem would have two versions: one where every triple is specified, and another where only some of the triples are restricted. Are both versions ∃ℝ-complete?

Comment #15 May 27th, 2011 at 6:28 am

Scott, numbering the problems would be very helpful.

Comment #16 May 27th, 2011 at 9:48 am

Hi Scott,

From reading your blog I’ve certainly gleaned that if P equals NP then creativity can be automated, in some sense. On the other hand, if P doesn’t equal NP, then how do we seemingly have creative leaps at all? Would you suggest that “creative leaps” are simply “brute-forced” by the brain, or perhaps “dumb-lucked” by the brain? I ask this not with the tone and intent of suggesting an answer but with genuine curiosity.

Thanks,

Erick

Comment #17 May 27th, 2011 at 10:26 am

numbers: Thanks, done!

Comment #18 May 27th, 2011 at 10:51 am

E. Tudiant:

First of all, if P=NP (and the constants were small enough, yadda yadda yadda), then “mathematical” or “puzzle-solving” creativity could be automated, in the sense that discovering the solutions to puzzles could be done in not much more time than recognizing those solutions as correct. But P=NP wouldn’t imply the automation of (say) artistic or literary creativity, *unless* you had polynomial-time algorithms to *recognize* great art or literature.

Second of all, if P≠NP, that doesn’t mean specific instances of NP-complete problems can’t be solved! Indeed, often extremely nontrivial instances, like the one that encodes “prove Fermat’s Last Theorem.” The way to do so will generally be some combination of: backtracking; exploiting unusual structure, symmetry, or regularities in the instance; evolutionary improvement; adapting the solutions to related instances; other tricks of the brain that are no doubt yet to be elucidated; and (yes) brute-force search. All P≠NP tells us is that there’s no general-purpose polynomial-time classical algorithm that works for *every* instance—or in other words, that “instance-specific creativity,” whether implemented by humans or machines, is indeed necessary!

Comment #19 May 27th, 2011 at 11:13 am

@domotorp: Yes, both versions are complete; it is much easier to see in the case where not all triples are specified (essentially, take the two von Staudt gadgets to simulate arithmetic over the reals). For the completely specified version you really need to follow Shor’s (or Mnev’s or Richter-Gebert’s) proof for stretchability closely.

Comment #20 May 27th, 2011 at 11:26 am

Scott,

I read your Who Can Name the Bigger Number essay and I’m curious about something. You said BB(3)=21 was determined by individually analyzing all possible 3-rule TM’s to ensure they don’t halt (or if they do, in how many steps). My understanding of the halting problem must be off, because I thought this was the exact thing it disallowed. I can see how some TM’s could enter an infinite loop where they keep repeating states and going back-and-forth over the same section of tape, and it would be obvious to an observer that it had no hope of exiting the loop, but are all non-halting TM’s like this, or do some TM’s continue calculating forever without *ever* entering an obvious loop? To me it seems like the latter type must exist; I imagine a TM executing an algorithm for writing out on its tape the decimal expansion of an irrational number would be like this.

I understand that an algorithm for solving the halting problem would have to solve it generally (for any TM, any input), but is it always possible to develop an algorithm to determine whether a particular machine on a particular input halts (so that any BB number could be discovered if given enough time)? If that’s the case, how come we (our DNA, etc.) aren’t seen as a TM that can solve the halting problem, since our general algorithm is to develop an algorithm for any specific instance?

Comment #21 May 27th, 2011 at 11:40 am

> But P=NP wouldn’t imply the automation of (say) artistic or literary creativity, unless you had polynomial-time algorithms to recognize great art or literature.

Now this will offend many people, and is also off-topic, but I suspect that there are moderate-sized push-down automata for recognizing great literature.

Comment #22 May 27th, 2011 at 11:42 am

> 8. How hard is it to sample a (nearly) uniformly-random n-by-n invertible matrix over GF(2)? Clearly it can be done in matrix multiplication time, but can we give evidence that it can’t be done in less?

Unless I’m failing to understand, you’re assuming here that matrix multiplication is not O(n^2), which is contrary to current strongly-supported conjecture. If the conjecture is true then the answer to your question is trivially “no”. In other words, the best angle of attack for this problem is to prove that matrix multiplication is O(n^2).

Comment #23 May 27th, 2011 at 11:47 am

@mikero: The lower bound in Loui’s paper is for the on-line setting, which is quite restrictive. The tape head can make only one pass over the input (once you read the ith input bit, you must proceed to the (i+1)th bit, and you can never read the ith input again).

What a great list of questions. I like the question about pseudorandom functions in ACC^{0}.

Comment #24 May 27th, 2011 at 12:05 pm

> I suspect that there are moderate-sized push-down automata for recognizing great literature.

Dániel #21: LOL! I’m not sure about that, although we do know that there are regular expressions generating profound postmodernist essays.

The problem, of course, is that finding a work of literature accepted by a PDA is just as hard (NP-complete) as finding one accepted by an arbitrary polynomial-time verifier…

Comment #25 May 27th, 2011 at 12:18 pm

JKU #22: No, I made no such assumption! Given the state of our actual knowledge, it’s extremely common for people to prove that various problems are “matrix-multiplication-complete”: that is, they take Θ(n^{ω}) time, where 2≤ω≤2.376 is the matrix multiplication exponent. Obviously, if matrix multiplication were solvable in O(n^{2}) time then all these other problems would be as well. But, not knowing ω, it can be extremely helpful to know whether, in working on a faster algorithm for some problem, you’re also necessarily working on a faster matrix-multiplication algorithm.

Incidentally, yes, some people conjecture that matrix multiplication should be doable in n^{2+ε} time, for any ε>0. I don’t know if they’re right, but even if they are, doing it in O(n^{2}) time would be a taller order still.

Comment #26 May 27th, 2011 at 12:27 pm

Ryan #23: Yeah, I thought you’d like that question. 😉

Just for you, here’s the variant of the d-dimensional Turing machine question that *really* motivated me:

Given as input a list of n arrows, each pointing either “up,” “down,” “left,” or “right,” consider the problem of deciding whether a “LOGO turtle” that follows the arrows sequentially, moving one unit for each arrow, will ever revisit a point it’s previously been to. It’s obvious that this problem is solvable in linear time on a 2D Turing machine. Can we show it’s not solvable in linear time on a 1D Turing machine?

It’s possible that a communication complexity / crossing sequence argument easily answers this question; all I know is that I didn’t immediately notice such an argument when I last thought about the problem 10 years ago…
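For reference, the “easy” direction of the turtle problem — linear time given random access, or equivalently a 2D tape — is just a walk with a visited set. A minimal sketch (the U/D/L/R string encoding is my own choice):

```python
def turtle_revisits(arrows):
    """Return True iff a turtle following the arrows revisits any point.

    arrows: a string over 'U', 'D', 'L', 'R'. Runs in linear time given
    random access (here, a hash set); the open question is whether a
    1-dimensional Turing machine can match this."""
    step = {'U': (0, 1), 'D': (0, -1), 'L': (-1, 0), 'R': (1, 0)}
    x, y = 0, 0
    visited = {(0, 0)}
    for a in arrows:
        dx, dy = step[a]
        x, y = x + dx, y + dy
        if (x, y) in visited:
            return True
        visited.add((x, y))
    return False
```

On a 2D tape the hash set is unnecessary: the machine just walks its head along with the turtle and marks each visited cell, which is what makes the problem linear-time there.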

Comment #27 May 27th, 2011 at 1:05 pm

Regarding complexity of matrix operations:

Most complexity statements are given as functions of “n”. What is n? In most cases, n is the size of the data set.

Consider now an m-by-m matrix. The size of the data is n = m^2. Thus the traditional method of matrix multiplication is Theta(m^3) = Theta(n^1.5), the Strassen product is Theta(n^1.4037), etc.

I consider it a historical accident that matrix size is traditionally given as n = m, rather than n = m^2, which would put matrix algorithms on the same footing as other algorithms. It is no doubt too late to correct this, but it should be pointed out where appropriate, such as in beginner-level discussions.
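Raoul’s rescaling is just a change of variable: with n = m^2, an exponent e measured in the matrix dimension m becomes e/2 measured in the input size n. A quick check (the exponent values are the standard ones circa 2011):

```python
import math

# Convert matrix-multiplication exponents from "matrix dimension m"
# to "input size n = m^2": Theta(m^e) = Theta(n^(e/2)).
exponents_in_m = {
    "schoolbook": 3.0,
    "Strassen": math.log2(7),          # ~2.8074
    "Coppersmith-Winograd": 2.376,
}
exponents_in_n = {name: e / 2 for name, e in exponents_in_m.items()}
```

This gives Θ(n^1.5) for schoolbook multiplication and Θ(n^1.4037) for Strassen, matching Raoul’s figures up to rounding.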

Comment #28 May 27th, 2011 at 1:26 pm

Raoul: Yes, thanks

Comment #29 May 27th, 2011 at 2:29 pm

“In the 4×4 case, can you obtain a separation between the permanent and the determinant?” By that you mean prove #(det(4×4)) < #(per(4×4)) where #(f(.)) is the minimum number of arithmetic operations to calculate f(.) ? What is the current best number for #(per(4×4))?

Comment #30 May 27th, 2011 at 3:46 pm

.: Yes, that’s what I mean. The best upper bound on #(det(4×4)) that I was able to prove was 37—see my PowerPoint slides for the explicit straight-line program.

Meanwhile, the best upper bound on #(per(4×4)) that I know is 63 (unless I’m mistaken), which follows easily from the recursive definition of per(A) in terms of permanents of (n-1)x(n-1) submatrices (i.e., “Cramer’s rule” for the permanent).

There *is* an asymptotically faster algorithm for the permanent: Ryser’s algorithm, which uses ~2^{n+1}n^{2} arithmetic operations. However, a back-of-the-envelope calculation indicates that Ryser’s algorithm doesn’t start doing better than Cramer’s rule until n=7.

Comment #31 May 27th, 2011 at 3:55 pm

I have a technique to do per(4×4) in 45 arithmetic operations. I am pretty sure that is the best. I am looking into 5×5 now. It seems monstrously hard.

Comment #32 May 27th, 2011 at 4:02 pm

Prof: If I make progress (or stall) at 5×5, could I send a note to your email? (Beating my head on it right now.) At least the 4×4 seems to be right.

Comment #33 May 27th, 2011 at 4:28 pm

.: Yes, could you please give details on your #(per(4×4))≤45 upper bound? In particular: if you generalize the algorithm that you’re using, what upper bound do you get on #(per(n×n))?

Looking at my slides from four years ago, I see that I obtained #(det(4×4))≤45 by some “dynamic programming” method (whose details I’ve forgotten), which in general gave #(det(n×n)) ≤ (n-1)2^{n} − n + 1.

However, I’m extremely suspicious that this asymptotic complexity can also be achieved for the permanent! For if so, then it would improve on Ryser’s algorithm by a factor of n. But unless I’m mistaken, Ryser’s algorithm has been the fastest known permanent algorithm for 48 years. So then maybe it’s just a coincidence that your #(per(4×4))≤45 upper bound matches the #(det(4×4))≤45 upper bound that I got from dynamic programming?

Comment #34 May 27th, 2011 at 5:49 pm

Vadim #20:

The central point that’s confusing you is this: it doesn’t make sense to talk about an “algorithm” for answering one specific yes/no question. For *any* specific yes/no question has an algorithm, obtained by simply hardwiring the answer (yes or no) into the algorithm’s description! This is why theoretical computer science only talks about algorithms for answering large (typically, infinite) *sets* of questions.

In the case of Busy Beaver, it took a very tedious case analysis to work out the values of BB(3) and BB(4)—I invite you to read the papers if you want to see what went into it. Basically, for each 3- or 4-state TM, either you need to run it until it halts, or (the hard case) you need to understand its behavior well enough to give a proof that it doesn’t halt. Obviously, there’s no general algorithm for finding such proofs—if there were, we could use it to solve the halting problem! People were just *lucky* that all the 3- and 4-state TMs turned out to be simple enough that such proofs could be given.

On the other hand, need I remind you that BB(5) is still not known to this day? And many people suspect that BB(6) and BB(7) will *never* be known (the lower bounds on them are astronomical). So the idea that humans have some magical ability to compute BB(n) for arbitrary values of n seems almost certainly false.

Comment #35 May 27th, 2011 at 6:20 pm

Thank you for the encouraging comment. Ok. I will work on it. But it will take a while. It is notoriously hard. I don’t seem to see any obvious pattern to the 45 number.

Comment #36 May 28th, 2011 at 1:11 am

For (5), do you think it might be possible to compute exactly the number of gates needed to compute the permanent of a 3×3 over GF(2) using just plain brute force? There would be nontrivial algorithmic and operational aspects of getting that to work, but I think it would be on the border of achievability given enough cleverness.

For (6), wouldn’t a proof that there is such a proof constitute a short proof itself? Ignoring that issue, if the proof is of a statement along the lines of ‘there is a structure with property X’ and the proof basically consisted of giving the structure, then I think max-clique on random graphs has this property, although there of course the proofs are very small compared to the questions.

For (21), although the answers are obvious, approaching them appears to be completely hopeless, but that doesn’t seem to have stopped Chaitin from making such assertions as if they’re deep insights and unquestionable fact.

Comment #37 May 28th, 2011 at 3:24 am

Hi Bram,

Yes, I think brute-forcing combined with cleverness might actually lead somewhere for (5)—otherwise, I wouldn’t have suggested it as a student project!

For (6), when I say “proof” I really mean “family of proofs.” For example: give a way to take an arbitrary proof P of size n (in some formal system), and “obfuscate” it into a proof P’ of the same statement of size n^{2}, so that given P’, it’s computationally intractable to find P or any other proof of comparable size. If you knew how P’ was generated, then of course you would also know P. However, it’s entirely possible *a priori* that the mapping from P to P’ could be a one-way function—in which case, knowing P’ would *not* need to imply knowing (a proof that there’s) a short proof of the same statement.

I don’t understand your max-clique example, since the proof that a random graph has a clique of size 2log(n) w.h.p. is *constant-sized*, certainly not long and obfuscated! (Also, that proof doesn’t imply the existence of such a clique in any *specific* graph.)

For (21), I’m not sure I agree *either* that the answer is obvious or that the question is hopeless! I just don’t have any good intuition at all about the parity (or other “low-order properties”) of BB(n). For all I know, there’s some simple structural fact about 1-tape Turing machines that forces BB(n) to be even for all n≥5.

I’ve also taken issue with some of Chaitin’s views on “randomness in mathematics,” but can you point me to anything he actually wrote that suggests that he would regard the uncomputability of BB(n) mod 2 (for example) as an “unquestionable fact”?

Comment #38 May 28th, 2011 at 3:45 am

Scott, the O(2^n n^2) upper bound for the permanent is for arithmetic formulas. But it is known that using Gray codes this can be improved to O(n 2^n) for circuits. So your suspicion that it could be improved to what you have for the determinant was correct.

Comment #39 May 28th, 2011 at 4:27 am

Scott,

I don’t understand your challenge 3)

>> Find an explicit n (the smaller the better) for which you can prove

>> that the value of BB(n) (the nth Busy Beaver number) is

>> independent of ZF set theory.

BB(0) = 0 and BB(1) = 1, independent of ZF as far as I can see, and you can’t get n any smaller 😎

Comment #40 May 28th, 2011 at 8:24 am

Yes, but what about D-Wave’s selling a Quantum comp. to LM ??

Comment #41 May 28th, 2011 at 10:49 am

anonymous #38: thanks!

Comment #42 May 28th, 2011 at 10:52 am

wolfgang #39: No, the statement “BB(1)=1” is easily provable in ZF—what would make you think it was independent??

We know that the values of BB(n) are independent of ZF for all *sufficiently large* n (by a Gödelian argument), but we also know that the values of BB(1), BB(2), BB(3), and BB(4) are *not* independent of ZF, since they’ve been determined (by methods that could be easily formalized in ZF).

Comment #43 May 28th, 2011 at 10:54 am

dwave #40: GOD DAMMIT. Can we keep the D-Wave discussion on the D-Wave thread?

Comment #44 May 28th, 2011 at 12:25 pm

@Bram:

> For (5), do you think it might be possible to compute exactly the number of gates needed to compute the permanent of a 3×3 over GF(2) using just plain brute force? There would be nontrivial algorithmic and operational aspects of getting that to work, but I think it would be on the border of achievability given enough cleverness.

Careful! The Permanent and Determinant are actually the same function over GF(2). (Just switch to GF(3) instead.)
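The warning is easy to verify numerically: per and det differ only in the ±1 signs attached to the n! permutation terms, and signs vanish mod 2. A throwaway check (all helper names are mine):

```python
from itertools import permutations
import random

def sign(p):
    """Parity sign of a permutation given as a tuple."""
    s, p = 1, list(p)
    for i in range(len(p)):
        while p[i] != i:  # cycle-sort, flipping the sign per transposition
            j = p[i]
            p[i], p[j] = p[j], p[i]
            s = -s
    return s

def prod_entries(A, p):
    out = 1
    for i, j in enumerate(p):
        out *= A[i][j]
    return out

def det(A):
    n = len(A)
    return sum(sign(p) * prod_entries(A, p) for p in permutations(range(n)))

def per(A):
    n = len(A)
    return sum(prod_entries(A, p) for p in permutations(range(n)))

# Over GF(2) the signs vanish: per(A) = det(A) (mod 2) for every 0/1 matrix.
for _ in range(100):
    A = [[random.randint(0, 1) for _ in range(3)] for _ in range(3)]
    assert per(A) % 2 == det(A) % 2
```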

Comment #45 May 28th, 2011 at 1:06 pm

Well, doing (5) totally naively only takes 2^(2^9) operations, which is merely a constant, so it should be completely tractable. Seriously though, I’ve long wondered what the results would be if you just plain brute forced finding out what the most difficult function to calculate with n bits was for small n, and if I were trying to get a degree it would be an appealing thing to work on because I’m really an engineer and not a theorist. I have this crazy dream that if you did it for a large enough n and someone did the painstaking work of figuring out what that function really did it would directly lead to an important insight in the P vs NP question.

I understand your phrasing of (6) in terms of obfuscating proofs much better, and honestly can’t come up with an opinion of whether the answer is yes or no. If it’s yes then there’s probably a clever trick, if it’s no then proving it is probably hopeless.

On reflection about (21), Chaitin actually makes a claim about a subtly but importantly different statement: whether the number of beavers of size n which terminate is even or odd. I think your question still has the obvious answers and is still completely unapproachable, but perhaps there’s some f such that you can make a statement about f(BB(n)).

Comment #46 May 28th, 2011 at 2:46 pm

Bram: I share your crazy dream that computational experiments will someday lead to important insights into the P vs. NP question—enough that I gave a talk about it once. Unfortunately, because of the gargantuan constants involved (as you say), it would need to be computational experiments combined with a great deal of cleverness—and the cleverness part is still missing!

Comment #47 May 28th, 2011 at 2:49 pm

Scott,

ZF consists of several axioms and I think it is obvious that one does not need all of them to prove BB(0) = 0 or BB(1) = 1

e.g. “ZF minus axiom of infinity” should be sufficient I would think.

Is this not what one would call ‘independent of ZF’ ?

Comment #48 May 28th, 2011 at 3:10 pm

wolfgang: No, that’s not what “independent” means. A statement is *independent* of a formal system F if it’s neither provable nor disprovable in F. You’re asserting (correctly) that the value of BB(1) is “provable in ZF with plenty of room to spare”!

Comment #49 May 28th, 2011 at 3:21 pm

oh man, yes, I am getting old.

thank you for clearing it up.

Comment #50 May 28th, 2011 at 10:04 pm

Scott, now I’m spending brain cycles figuring out how to handle that particular optimization problem, as if I don’t have too many crazy projects already. Maybe restricting it to just xor gates could make things easier, and maybe there are different techniques for quickly wading through ‘short and fat’ circuits vs. ‘long and thin’ circuits…

Comment #51 May 29th, 2011 at 3:47 pm

Could you give a definition (or a source for one) for ‘efficiently samplable’, as used in question 20? That is, I assume there are some sources of randomness and operations that are ‘allowed’ when doing sampling (and that ‘efficient’ means something like ‘polynomially many operations or random bits’).

Also, it would be nice to hear Daniel Roy’s question!

Comment #52 May 29th, 2011 at 5:18 pm

AMS: A probability distribution D_{n} over n-bit strings is “efficiently samplable” if there exists a poly(n)-time algorithm that takes as input poly(n) uniformly random bits, and outputs a sample from D_{n}.

(As usual, when we talk about this, really we implicitly mean an *infinite family* of such distributions, one for each positive integer n. Also, it’s usually convenient to relax the definition, to let the algorithm sample from any distribution that’s *sufficiently close* to D_{n} in total variation distance.)

Daniel Roy’s question was basically the following: suppose we can sample in polynomial time from a distribution D over k-tuples that satisfies de Finetti’s exchangeability condition. Then must we also be able to sample in polynomial time from the individual product distributions that the de Finetti Theorem tells us D is a mixture of?

(Daniel recently proved, in beautiful work with Cameron Freer, that the answer is “yes” if you remove the “polynomial time” condition and just ask about computability theory. Full disclosure: I was on Daniel’s thesis committee.)

The question that I posed was just my attempt to boil Daniel’s question down to a complexity-theoretic essence.
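As a toy illustration of the “efficiently samplable” definition (my own example, not from the thread): the uniform distribution over n-bit strings of even parity is efficiently samplable, using n−1 uniformly random bits and linear time.

```python
import random

def sample_even_parity(n, rng=random):
    """Sample uniformly from D_n = {n-bit strings with even parity}.
    Uses n-1 uniformly random bits and O(n) time: choose the first n-1
    bits freely, then set the last bit so the XOR of all bits is 0."""
    bits = [rng.randrange(2) for _ in range(n - 1)]
    bits.append(sum(bits) % 2)  # parity-fixing bit
    return bits
```

Here the sampler is trivial; the interesting cases for the question above are distributions whose mixtures are samplable even though (apparently) their components are not.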

Comment #53 May 29th, 2011 at 5:52 pm

Bram Cohen says:

“I’ve long wondered what the results would be if you just plain brute forced finding out what the most difficult function to calculate with n bits was for small n”

Isn’t that just the mapping of the 2^n input values into Z that is most incompressible? In that case, your machine would have to carry more or less the whole mapping (which needs exponential space). Access to f(x) on a Turing machine would be terrible, unless every value were on a separate tape. Then you could try to compress the function lossily until you needed only polynomial space or access time…

Comment #54 May 30th, 2011 at 3:38 am

Scott – I think that at least a quantum oracle separation between BQP and BPP^BQNC shouldn’t be too difficult. You should be able to make a random quantum oracle f that requires a polynomial-length composition f(f(f(…|00…0>…))): a “hunt” that leads to an “easter egg,” i.e., the output. Then in the BPP^BQNC model, the BQNC slave cannot explain to its BPP master what it learned from the oracle. (And note also that the correct meaning of exponentiation in this case is not the usual one for decision problems, but rather a modification for classes that communicate probabilistic messages to each other.)

Comment #55 May 30th, 2011 at 9:38 am

Yatima, the question has to do with circuit complexity, not an actual machine; so the question becomes: what’s the smallest subset of all 2^(2^n) possible functions that includes the goal and has the property that everything in the set is either an input function or can be formed as a combination of two other functions in the set?
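The search Bram describes can be sketched as a dynamic program over truth tables, here for minimum formula size (connective count) over AND/OR/XOR. One assumption of mine: the constant 1 is allowed as a free leaf, so that NOT x = x XOR 1 is expressible — other conventions will give different numbers. This naive version is feasible only up to about n = 3.

```python
def formula_sizes(n):
    """Minimum number of AND/OR/XOR connectives needed for each n-bit
    Boolean function, by dynamic programming over truth tables.
    A truth table is a bitmask of length 2**n; the leaves (cost 0) are
    the input variables plus the constant 1 (an assumption on my part,
    so that negation is expressible as XOR with 1)."""
    size = 2 ** n
    ones = (1 << size) - 1
    leaves = [ones]  # constant-1 leaf
    for i in range(n):
        # truth table of variable x_i: bit j is the i-th bit of j
        leaves.append(sum(1 << j for j in range(size) if (j >> i) & 1))
    cost = {tt: 0 for tt in leaves}
    changed = True
    while changed:  # relax until no cost improves (Bellman-Ford style)
        changed = False
        snapshot = list(cost.items())
        for a, ca in snapshot:
            for b, cb in snapshot:
                for f in (a & b, a | b, a ^ b):
                    c = ca + cb + 1
                    if cost.get(f, c + 1) > c:
                        cost[f] = c
                        changed = True
    return cost
```

For n = 2 this finds all 16 functions, the hardest needing 2 connectives; the “hardest function” is then just `max(cost, key=cost.get)`.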

Comment #56 May 30th, 2011 at 11:23 am

Come to think of it, 2^(2^5) isn’t all that big a number, so it should be eminently brute-forceable, and 5 bits is plenty for the result to be very interesting.

Comment #57 May 30th, 2011 at 12:27 pm

Greg #54: I agree—I’ve been meaning to write that separation up for 8 years now! But I’ve “generously” thrown the problem out, in case someone wants to do it for me…

Comment #58 May 30th, 2011 at 1:13 pm

News Flash! D-Wave actually sold a machine:

http://www.theregister.co.uk/2011/05/30/dwave_lockheed_martin/

The “El Reg” report quotes Scott A and Dave B on the prospects of the machine actually doing anything.

Comment #59 May 30th, 2011 at 3:10 pm

“A quantum leap in sales, from zero to one” — That’s really good!

Comment #60 May 30th, 2011 at 3:38 pm

The relative increase in sales is infinite!

Comment #61 May 30th, 2011 at 4:39 pm

Scott #43: “GOD DAMMIT. Can we keep the D-Wave discussion on the D-Wave thread?”

That’s NP-hard, it seems ^^

Comment #62 May 31st, 2011 at 2:33 am

@Bram:

For Boolean formulas over AND, OR, and XOR, the hardest Boolean function on 5 bits is already known to be

f(x1,x2,x3,x4,x5) = [(x1 = x2 = x3 = x4 = x5 = 0) OR (x1 + x2 + x3 + x4 + x5 = 3)]

where + is the usual sum over integers. This f takes 12 connectives (AND, OR, XORs) to describe.

See http://boolean-oracle.swtch.com/
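As a quick sanity check (mine, not ryan’s): f’s truth table can be packed into a 32-bit mask, and it outputs 1 on exactly 11 of the 32 inputs — the all-zeros input plus the C(5,3) = 10 inputs of Hamming weight 3.

```python
# Truth table of the claimed hardest function, as a 32-bit mask:
# bit x of tt is f(x1,...,x5), where x_i is the i-th bit of x.
tt = 0
for x in range(32):
    if x == 0 or bin(x).count("1") == 3:  # all-zero input, or exactly three 1s
        tt |= 1 << x
```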

But it is still an interesting question (and perhaps open?) to find the hardest 5-bit Boolean function over AND/OR/XOR circuits, rather than formulas. From the above, we may assume that the total number of gates is at most 12…

Comment #63 May 31st, 2011 at 3:16 am

… but it looks like Knuth already did that in Volume 4. Check out the first 16 pages of:

http://cs.utsa.edu/~wagner/knuth/fasc0c.pdf

Comment #64 May 31st, 2011 at 9:24 am

Thanks ryan! It looks like 5 bits is just barely where anything interesting starts happening. Knuth’s idea of simply enumerating all possible circuit structures up front might make 6 bits attainable…

Comment #65 May 31st, 2011 at 12:14 pm

BB(BB(BB(n))) > BB(BB(n)) > BB(n) which seems to imply a hierarchical order of incomputability. Is there a notion of finite ordinal theory based on this?

Comment #66 May 31st, 2011 at 2:24 pm

“Build a public library of 3SAT instances, with as few variables and clauses as possible, that would have noteworthy consequences if solved. ” This surprising addition to an otherwise theoretical list intrigued me. What problems do you have in mind other than factoring?

Comment #67 May 31st, 2011 at 5:01 pm

(^.^): Although BB(BB(n)) is indeed much larger than BB(n), it’s computable given a BB(n) oracle. If you want something really big, you could take (for example) the Busy Beaver function for Turing machines with BB(n) oracles, which is uncomputable even given a BB(n) oracle.

Or, let BB_0(n) = BB(n), and let BB_k(n) be the Busy Beaver function for Turing machines with BB_{k-1}(n) oracles. Then we can define the Busy Beaver function for Turing machines with BB_k(n) oracles for every positive integer k, which exceeds all the BB_k(n)’s … and then we can continue to proceed through the computable ordinals, if we want still bigger numbers …

Comment #68 May 31st, 2011 at 5:04 pm

Raphael #66: The “practical” motivation for such a library is that, every few months, I get an email from someone who claims to have an O(n^2) (or whatever) algorithm for 3SAT. Often they’ve even implemented it and tried it and claim that it works. I would like to have a public library of hard instances to point these people to, so that they stop bothering me.

(Note: Besides factoring, another source of hard instances that’s been suggested is to take random 4SAT instances near the threshold, then reduce them to 3SAT. Random 3SAT itself near the threshold doesn’t seem to yield hard enough instances, because of the survey propagation algorithm.)

Comment #69 May 31st, 2011 at 5:51 pm

What a great well-informed meta-list! [*returns to lurking and completing the writing of a second novel with lots of Quantum Computing in it… *]

Comment #70 May 31st, 2011 at 7:43 pm

Thank you regarding BB(n). Is BB(n) the only such function that grows faster than any computable one or is there a whole class of such functions?

BTW, regarding the library connecting factoring and 3SAT: there are hard instances (from a number-theoretic perspective) among the RSA numbers. Could encoding them in 3SAT provide hard instances? (I am throwing this in, and I warn that I am not a complexity theorist.)

Comment #71 May 31st, 2011 at 7:57 pm

(^.^): You can define many other functions that are similar in kind to the Busy Beaver function: for example,

Z(n) := the maximum, over all theorems S of ZF set theory with ≤ n symbols, of the length of the shortest ZF proof of S.

Then Z(n) will also dominate every computable function, by a similar argument: if you could computably upper-bound Z(n), you could use that to solve the halting problem.

Yes, encoding “hard” RSA challenges in 3SAT is exactly what I suggested to do! Whether you reduce from factoring, discrete log, discrete cube root, or some other number-theoretic crypto problem is mostly just a question of convenience (which choice gives you the most hypothesized computational hardness for your instance size?).

Comment #72 May 31st, 2011 at 8:14 pm

Factoring to SAT has been done for a while:

http://arxiv.org/abs/cs.CC/9809117

According to the paper, a SAT problem with 63,652 variables and 406,860 clauses is equivalent to factoring a 512-bit number. Since factoring RSA-512 with a fancy factoring algorithm took 8400 MIPS-years of CPU time ( http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.28.1800 ), this is likely to provide an easy counterexample to any O(N^2) claims for SAT.

Comment #73 May 31st, 2011 at 8:32 pm

Lou: Thanks! I hadn’t seen that tech report, and it gives exactly the first thing that I was looking for: an optimized reduction from factoring to SAT that produces instances of manageable size, not just asymptotically but in practice.

So, I can now make my proposed student project more concrete:

(1) implement Horie and Watanabe’s reduction,

(2) measure the performance of today’s best SAT-solvers on the resulting instances,

(3) make the instances (or better yet, an implementation of the reduction algorithm itself) easily-accessible on a nice website that I can link to from my blog.

(Or maybe someone has already done those things, as well?)
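A crude end-to-end sketch of step (1) — my own toy version, far less optimized than Horie and Watanabe’s reduction: Tseitin-encode a ripple-carry multiplier and pin its output to N, so that any satisfying assignment reveals a factorization. The `dpll` solver here is only for smoke-testing tiny instances; a real project would emit DIMACS for MiniSat or another modern solver.

```python
def factoring_to_cnf(N, p_bits, q_bits):
    """Reduce 'find p (p_bits wide) and q (q_bits wide) with p*q = N'
    to CNF, via Tseitin encoding of a ripple-carry multiplier.
    Returns (clauses, p_vars, q_vars); literals use DIMACS sign convention."""
    clauses, nvars = [], 0
    def fresh():
        nonlocal nvars
        nvars += 1
        return nvars
    zero = fresh()
    clauses.append([-zero])            # a variable forced to 0
    def AND(a, b):                     # c <-> (a AND b)
        c = fresh()
        clauses.extend([[-a, -b, c], [a, -c], [b, -c]])
        return c
    def XOR(a, b):                     # c <-> (a XOR b)
        c = fresh()
        clauses.extend([[-a, -b, -c], [a, b, -c], [a, -b, c], [-a, b, c]])
        return c
    def full_adder(a, b, cin):         # returns (sum, carry-out)
        s = XOR(XOR(a, b), cin)
        cout = fresh()                 # cout <-> majority(a, b, cin)
        clauses.extend([[-a, -b, cout], [-a, -cin, cout], [-b, -cin, cout],
                        [a, b, -cout], [a, cin, -cout], [b, cin, -cout]])
        return s, cout
    p = [fresh() for _ in range(p_bits)]
    q = [fresh() for _ in range(q_bits)]
    width = p_bits + q_bits
    acc = [zero] * width               # running sum of shifted partial products
    for i in range(p_bits):
        row = [zero] * i + [AND(p[i], qj) for qj in q]
        row += [zero] * (width - len(row))
        carry, new_acc = zero, []
        for a, b in zip(acc, row):
            s, carry = full_adder(a, b, carry)
            new_acc.append(s)
        clauses.append([-carry])       # no overflow: p*q fits in `width` bits
        acc = new_acc
    for k, bit in enumerate(acc):      # pin the product to N
        clauses.append([bit] if (N >> k) & 1 else [-bit])
    return clauses, p, q

def dpll(clauses):
    """Toy DPLL SAT solver (unit propagation + splitting); tiny instances only."""
    if any(len(c) == 0 for c in clauses):
        return None                    # empty clause: conflict
    if not clauses:
        return {}                      # everything satisfied
    unit = next((c[0] for c in clauses if len(c) == 1), None)
    for lit in ([unit] if unit is not None else [clauses[0][0], -clauses[0][0]]):
        rest = [[l for l in c if l != -lit] for c in clauses if lit not in c]
        model = dpll(rest)
        if model is not None:
            model[abs(lit)] = lit > 0
            return model
    return None
```

Choosing the bit-widths of p and q (here 2 and 3 for N = 15) conveniently rules out the trivial factorization 1 × N.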

Comment #74 June 1st, 2011 at 7:36 am

Another nice set of hard 3SAT instances can be found by considering the search for faster matrix multiplication algorithms (over F_2, to make life easier (and harder)). This undergrad blog http://matrixjames.wordpress.com/2010/07/27/faster-matrix-multiplication/ (full disclosure: he was my undergrad student) sets out the equations nicely. You basically end up with a lot of (XOR, AND) clauses which don’t translate nicely to CNF at all. Solving these for, say, 21 multiplications and a 3-by-3 matrix would be hugely significant.

Comment #75 June 1st, 2011 at 7:47 am

For the RSA reduction, http://cstheory.stackexchange.com/questions/6755/fast-reduction-from-rsa-to-sat has a link to a web page which produces CNF.

Comment #76 June 1st, 2011 at 5:46 pm

Isn’t #16 solved by a circuit implementation of https://secure.wikimedia.org/wikipedia/en/wiki/Xorshift ? Or am I misunderstanding the definition of ACC0 and that function exceeds it somehow?

Comment #77 June 1st, 2011 at 7:11 pm

Kevin C: Based on the description, that generator should certainly be implementable in ACC, but there are two serious problems:

1. I don’t see evidence that the generator has cryptographic security (i.e., that it’s indistinguishable from random even by a polynomial-time adversary trying to distinguish it)—do you know if people have studied that?

2. I was looking for a pseudorandom function family f_s: {0,1}^n → {0,1}, which is stronger than a pseudorandom generator, in that each f_s has exponentially many inputs, but is still indistinguishable from random by a polynomial-time adversary who can evaluate f_s on any inputs x of its choice. That’s what would be needed to show that Ryan Williams’s recent NEXP⊄ACC breakthrough evades the Natural Proofs barrier.

If you just want a pseudorandom generator in ACC, that’s relatively easy (in fact, today we even have pseudorandom generators computable in NC^0, i.e., where each output bit depends on only a constant number of input bits!). But a pseudorandom function family in ACC seems harder. Right now, the pseudorandom functions that we know how to construct based on known cryptographic assumptions all seem to require at least the power of TC^0—a slightly larger class than ACC, which includes threshold gates.

Comment #78 June 1st, 2011 at 11:34 pm

Scott, while factoring is an appealing problem for encoding, a much smaller and harder problem would be finding a secure hash preimage.

Comment #79 June 2nd, 2011 at 3:23 am

Bram makes a very good point. One would do much better by directly designing a hash function in 3-SAT language. Factoring can be interpreted as a hash function, but it is a target of attention for many entirely orthogonal reasons and it is not a particularly good choice for this purpose.

Comment #80 June 2nd, 2011 at 10:17 am

Although not explicitly designed as hash functions, crypto functions such as DES or AES can be interpreted this way, since the key is supposed to be hard to compute from the image. At least DES has been encoded into SAT—see, for example, Massacci, F. and Marraro, L. (2000), “Logical cryptanalysis as a SAT problem,” Journal of Automated Reasoning 24(1): 165–203, doi:10.1023/A:1006326723002, in which an instance of DES is encoded as a SAT problem with 10,336 variables and 61,935 clauses.

Comment #81 June 2nd, 2011 at 10:49 am

In the paper “A Case Against Currently Used Hash Functions in RFID Protocols,” Martin Feldhofer and Christian Rechberger show that traditional crypto-quality hash functions such as MD5, SHA-2, etc. take more hardware to implement than DES- or AES-like encryption. So they probably have more variables and clauses if expressed in SAT terms.

As implied by their paper, a good place to look might be hash functions designed for use with RFID, where hardware resources are extremely limited. They are probably not as high-quality as the super-crypto hashes, but they should be a lot smaller. I suspect they are quite strong enough to give any SAT solver a major headache.

Comment #82 June 2nd, 2011 at 12:15 pm

So DES is moderately better than factorization, but it’s still not all that great, because it wasn’t optimized for 3-SAT either.

Comment #83 June 2nd, 2011 at 1:22 pm

It’s all the rage to try to use SAT solvers in cryptography. See http://www.springerlink.com/content/u7k586m2jvl33687/ for example and quite a few others in the last few years. There is even an automated tool to take a lot of the work out of it for you http://www.msoos.org/wordpress/wp-content/uploads/2010/12/GrainofSalt-desc-1.0.pdf .

Comment #84 June 2nd, 2011 at 4:42 pm

So is there really such a cardinal theory? We are dealing with only finite numbers. It would be interesting to have one, I would think.

Comment #85 June 2nd, 2011 at 5:28 pm

For #12, why can’t you just use a PRF?

Let F be a family of PRFs with circuits of size n^2 (where n is the input size, right?). These probably exist, because the number of rounds of a cipher can scale linearly or less with its block size and key size (this is why wide-block SIMD ciphers tend to be faster than narrow-block scalar ones). To generate C(lambda), choose a random key k and compute F_k(1..n^3). Let C be the circuit which looks its input up in this truth table; its size is O~(n^3), but the PRF’s circuit has size n^2.

Now, if we replace F_k with a random function, there is no small circuit that emulates or even approximates C. Therefore finding such a circuit for F_k constitutes a break of F.

The second half of #12 looks to be harder, though…

Comment #86 June 2nd, 2011 at 10:44 pm

How is the least upper bound of the countable ordinals obtained from the BB(n)s related to http://en.wikipedia.org/wiki/Large_countable_ordinal ?

Comment #87 June 3rd, 2011 at 5:02 am

Two more comments on small but hard SAT instances:

The full DES crypto algorithm does 16 rounds of scrambling. In cryptanalysis, a standard practice is to attack reduced-round variants to get insight, and the DES-to-SAT folks did exactly this. At the time they could solve 3 rounds, but not 4. The sizes were:

rounds   clauses   variables
   1       1,645        243   (solved in paper)
   2       3,304        489
   3       9,064      1,430
   4      14,811      2,368   (smallest instance not solved in paper)
   5      18,738      3,032
   6      22,665      3,696
   …
  16      61,935     10,336

So this provides a way to make a family of larger but harder problems.

Also, there is a yearly SAT competition, this year at http://www.satcompetition.org/2011/ . These folks surely still keep track of small SAT instances that have not yet been solved. In 2003 at least (see http://www.satcompetition.org/2003/comp03report.pdf , section 4.2), there were prizes given for the smallest unsolved but satisfiable problem; at that time it was 1,400 clauses, 400 variables, and 4,200 literals. I don’t know how much progress has been made since then.

Comment #88 June 3rd, 2011 at 11:08 pm

Anyone know if this is interesting, from a QC point of view?

http://www.bbc.co.uk/news/science-environment-13626587

Comment #89 June 4th, 2011 at 6:39 am

Layman here.

>Anyone know if this is interesting, from a QC point of view?

At first look, this does not sound particularly interesting even from a QM point of view. A null experiment? Paper is behind a paywall though and the BBC is not enlightening.

The Beeb quotes Professor Steinberg as saying “while the uncertainty principle does indeed forbid one from knowing the position and momentum of a particle exactly at the same time, it turns out that it is possible to ask ‘what was the average momentum of the particles which reached this position? You can’t know the exact value for any single particle, but you can talk about the average.”

Well, I suppose that’s to be expected. I’m sure the QM math works out beautifully.

Comment #90 June 4th, 2011 at 7:05 am

“I’m sure the QM math works out beautifully.”

Yes! I know Aephraim Steinberg; he’s a great guy and a brilliant experimentalist. But the fundamental problem here is that no journalist wants to run a story with the (accurate) headline:

“QM predictions upheld yet again, like in every single experiment since 1926″

Comment #91 June 4th, 2011 at 12:02 pm

“QM predictions upheld yet again, like in every single experiment since 1926″

Or

“New QM experiment that has nothing to do with quantum computing”

Comment #92 June 4th, 2011 at 10:02 pm

3. Find an explicit n (the smaller the better) for which you can prove that the value of BB(n) (the nth Busy Beaver number) is independent of ZF set theory.

This one puzzles me: can this number be computable? Or do you say “explicit” as in “provide an explicit non-constructive definition”?

Comment #93 June 5th, 2011 at 7:35 am

Simple mind: Yes, the BB function is uncomputable, but we can still prove stuff about it! For example, we can give specific positive integers n_0 such that the value of BB(n) is provably independent of ZF set theory for all n ≥ n_0. To do so, it suffices to describe a Turing machine with n_0 states that enumerates the theorems of ZF and halts if and only if it finds an inconsistency. To be concrete: I’m confident that n_0 = 10^20 would work (not because I’ve written down the TM — I’m just confident that it exists!); the interesting question is how small we can make n_0.

Comment #94 June 5th, 2011 at 2:00 pm

Scott, it puzzles me because it sounds as if an angel (something powerful enough that it could compute BB(10^20) in the equivalent of its head, but still of limited power) could know whether ZF is inconsistent. Or does that mean that BB(n) is uncomputable for every n > n_0?

Comment #95 June 5th, 2011 at 2:38 pm

Simple mind: Yes, if you knew BB(10^20), then you would also know whether ZF is consistent. But you don’t know BB(10^20)! More to the point, no human or angel that restricts itself to what can be proven in ZF can ever know the value of BB(10^20). The reason is precisely that, if it did, then it could prove ZF’s consistency within ZF, and ZF would be inconsistent by Gödel’s Theorem! (This is what the argument that the value of BB(10^20) is independent of ZF amounts to.)

As a side note, the notion of “computability” doesn’t apply to individual numbers, only to infinite sequences of numbers. (Or to put it differently: every positive integer n is computable, because there exists a program that prints n out!) So it doesn’t even make sense to ask whether BB(n) is computable for a specific value of n. One can, however, ask for a specific n whether the value of BB(n) can be proven in ZF set theory.

Comment #96 June 5th, 2011 at 8:22 pm

Thanks for these clarifications 😉

Comment #97 June 7th, 2011 at 8:05 am

I have a related question. Define a sequence of increasingly strong systems by ZF(0) = ZF, ZF(alpha+1) = ZF(alpha) ∪ {Consis(ZF(alpha))}, and, for gamma a limit ordinal, ZF(gamma) = the union of all ZF(alpha) with alpha < gamma.

Is it known whether for each n, the value of BB(n) is determined by ZF(alpha) for some ordinal alpha? (By "determined" of course I mean "provably takes a certain value")

I ask because I imagine that if one is confident that ZF _is_ consistent – perhaps because one is confident that it is actually _true_ of something – one will presumably be equally confident that all the ZF(alpha) are also consistent (is that correct?). So the fact that BB(n) is not determined by ZF may not be so alarming, at least if one might still hope that it is determined by some ZF(alpha).

Comment #98 June 7th, 2011 at 12:56 pm

Professor: Could you give me the reference for the exact arithmetic-operation complexity of Ryser’s formula and Cramer’s rule for permanent calculation?

I think #(per(5×5)) < 300, although I don’t have an explicit formula; that is the number my initial analysis shows.

From your slides, Cramer’s rule would give 324 operations. How about Ryser’s?

Comment #99 June 7th, 2011 at 1:16 pm

The complexity is definitely different from Ryser’s or Cramer’s. That I can say from my analysis.

Comment #100 June 7th, 2011 at 2:12 pm

Schulz #97: That’s a PHENOMENAL question!

I haven’t been able to figure out the answer, sitting through a talk at FCRC — anyone else want to take a stab?

Comment #101 June 7th, 2011 at 3:11 pm

It’s struck me that a slight correction to my #97 is required. It’s not (is it?) the assumed _consistency_ of ZF which should lead us to suppose that all the ZF(alpha)’s are also consistent. It’s rather the assumed _soundness_ of ZF which should lead us to suppose that all the ZF(alpha)’s are also sound (and hence consistent).

(Just trying to get this stuff sorted out in my own head)

Also, just to note that my question could be about any statement known to be independent of ZF, e.g., (G)CH. It’s not (in spirit) specifically about the BB(n)s.

Apologies for the excursus away from quantum computation BTW.

Comment #102 June 7th, 2011 at 4:56 pm

Schulz: Your question is not only interesting but manifestly on-topic!

It occurs to me that one can construct a k-state TM M that halts if and only if ZF+LC is inconsistent, where LC is some large-cardinal axiom. For a suitable choice of LC, perhaps one could show that whether M halts (and hence, the value of BB(k)) is independent of ZF(alpha) for EVERY computable ordinal alpha. Can any logicians comment on this?

One simple observation (which you probably made also) is that, for every computable ordinal alpha, there exists a k such that the value of BB(k) is independent of ZF(alpha). But of course, to answer your question we’d have to reverse the quantifiers!

Comment #103 June 7th, 2011 at 5:42 pm

#(per(nxn)) ~ O((n/2e)^(n+3.5)) from my method. The improvement is over brute force for all n (but shows up as less than Cramer’s for less than 5).

The improvement is because of (n/2e) instead of (n/e).

On another note, what is the best reference for the permanent as a projection of a determinant?

I have an idea that could reduce it to (n/ke) for some constant k, which could help the complexity for some moderate-sized matrices. I want to see if that idea would pan out. For that I need to understand projections (possibly an easy-to-read but substantial introduction would help). Thank you.

Comment #104 June 7th, 2011 at 5:54 pm

Actually more like:

#(per(nxn)) ~ O((n/2e)^(n+w(n)+.5)) where w(n) varies between 1 and 3 and increases as n increases. Don’t know if it goes to 3 though.

Comment #105 June 7th, 2011 at 11:27 pm

.: Unfortunately, I don’t know a reference for these exact complexity calculations — I just did the calculations myself.

If the complexity of what you’re looking at is (n/2e)^n, then Ryser (which is n^2 2^n) asymptotically beats it.

As for projection to determinant: if you find a good ref, let me know! I’d like to read it too.

Comment #106 June 8th, 2011 at 6:36 am

Schulz: I posted your question on MathOverflow, and you should check out the very interesting discussion currently taking place there.

Comment #107 June 8th, 2011 at 9:43 am

I’m not sure exactly what Schulz #97 is asking, but I’m pretty sure that the n in ZF(n) doesn’t simply scale up linearly with the size of the Turing machine. For example, the proof of Goodstein’s theorem requires transfinite induction even though it can be stated reasonably concisely, and can easily be used to create unimaginably large numbers.

Whether BB(n) grows in some sense faster than can be shown using transfinite induction on infinities expressible using cn symbols, for some constant c, is an interesting question.

Comment #108 June 8th, 2011 at 10:56 am

Professor: Yes I observed it. Though mine beats Ryser at n < 5. That is the important point. (But loses to Ryser miserably later). I know no other formula which does that.

I guess it will be hard to project the determinant to the permanent.

But projection of a ‘larger’ permanent to a ‘smaller’ permanent might be easy. If I knew how to do that, I have a bit of hope that the number (n/2e) can be brought down to (n/k_t e), where k_t is a constant that varies with a parameter t that depends implicitly on a common property of the sequence of ‘larger’ permanents that I choose in my formula.

So would you know of a good reference for projection of perm to perm?

Comment #109 June 8th, 2011 at 11:41 am

I mean non-trivial projection of perm to perm. Trivial ones of large (exponential) complexity are easier. But for fixed n one might be able to get something non-trivial (of course, with fixed n we cannot speak of complexity w.r.t. n).

Comment #110 June 9th, 2011 at 5:01 am

Scott: many thanks for the MathOverflow link. I’m afraid a lot of it went over my head though. I hesitate to take up more of your time, but is it possible for you (or anybody) to briefly explain – in terms an interested layman might understand – why my ZF(omega + 1) is not as well-defined as I had hoped? (Since I take it that all the Consis(ZF(n)) statements are well-defined for finite n, I suppose this must be because Consis(ZF(omega)) is not well-defined. Why not though?)

Comment #111 June 10th, 2011 at 2:27 pm

Schulz: Alas, I don’t yet understand it well enough myself to give you a clear explanation.

However, I ordered the book Inexhaustibility by Torkel Franzén in an attempt to rectify the situation—let me read it and then get back to you!

(And in the meantime, anyone else who wants to explain it to both of us is more than welcome to do so.)

Comment #112 June 18th, 2011 at 4:10 pm

We seem to be betting our entire civilization on P!=NP, on a speculation. There is no evidence whatsoever either way. Isn’t that foolish of us? Speculation is what leads to progress, because speculation and instinct are what lead to great works. However, that is at the individual level; civilization can handle individual failures. But today the whole of humanity is betting on the unproven P!=NP for technological advances. Jobs are being created around it, and every inch of our lives is moving that way. The disaster in 2008 was possibly due to all financial firms making correlated bets on speculation. The bets now are more interlinked than ‘statistical’ correlation, while the chances are still 50/50 on P!=NP. So is it wise for technological advances to move this way, resting on the P!=NP conjecture?

Comment #113 June 19th, 2011 at 1:36 pm

.: Firstly, why are the odds 50/50 specifically, and not (say) 70/30 or 99/1? Are they 50/50 for every question to which we don’t already know the answer with absolute certainty (e.g., whether aliens landed at Roswell)?

Secondly, betting our entire civilization?? What exactly are the decisions we’ve taken as a civilization that we’d take differently if we believed that P=NP (but still didn’t have a proof or a candidate algorithm)? Even within theoretical computer science, it’s not clear to me that the research agenda would or should develop so much differently in that case.

Comment #114 June 19th, 2011 at 3:56 pm

50/50 because even Wigderson writes that he does not know, and he has probably been working on this longer than you have :)

Well, for Roswell and the moon landing there are proofs. Whether one believes them or not is up to them. Just as when you say the center of mass exists for any triangle, there are adults who still try to disprove it.

Look at the jobs that are being created: ecommerce, and the hardware, software, and consumers around it. What good is a computer these days without the internet and email? Many electrical engineering and computer engineering jobs could vanish if someone settles P=NP with a constructive proof providing an efficient algorithm. The best bounds we get don’t seem to beat quadratic complexity for the permanent. Why is that taken as a limitation of our knowledge rather than evidence of the possibility of an efficient algorithm? Well, it may sound like a fairy tale… but in science, when we don’t know, we have to keep our options open.

Comment #115 June 19th, 2011 at 8:20 pm

.: You don’t understand Bayesian reasoning. The fact that we don’t know something doesn’t mean it has 50/50 odds. There’s a whole continuum between knowing something with certainty and not being able to guess better than chance. For example, I don’t know if a meteor will land on me tomorrow, and have no proof that it won’t (so “proof” can’t be the issue), but any sane person would agree that it’s much less likely to happen than not.

You also misunderstood the hypothetical I was talking about. I said: suppose we believed P=NP, but still didn’t know either the proof or the algorithm. How would that change our research priorities? You might say more research should go into finding the algorithm, but there’s already much more research in algorithms than in lower bounds, and it’s not at all obvious to me what algorithms people would or should do differently.

It’s obvious that if we knew a proof of P=NP, or even knew a polytime SAT algorithm with no proof of correctness, that would change everything. But that’s not the relevant question here.

Comment #116 June 19th, 2011 at 11:19 pm

I was not familiar with that reasoning, although it seems intuitively clear. My point of comparison was only about age, and nothing related to either of your works or anything.

My point was that even though it is possible that P=NP would have broad negative implications, at least in the short to medium run, we seem to have no worries. We as a society have been building our future on an unproven hypothesis. It is near certain that our economy would have been much different if RSA had not been invented and marketed as a technology that we can rely on. There is no alternative but collapse if P=NP. My point was not to advise you on your research (I am in no way qualified for that), but to suggest that no one seems to propose an optimal exit plan (maybe scientists like you could suggest one to economists?) in case P=NP, so that our future will not end in catastrophe. We don’t seem to have a buffer. If RSA had not been as successful, we still would have a great economy, but the path by which it got there would have been different.

Comment #117 July 12th, 2011 at 5:06 pm

.: I don’t see the disaster scenario you’re talking about. P = NP would be fantastic — the good would surely outweigh the bad. Even in the worst short-term case (P = NP is discovered, an efficient constructive algorithm is discovered, sample code is written for it, and all are released on the same day) for RSA, all that would happen is that credit card transactions would be done (laboriously!) for a few months while new methods of information transfer are created.

But the scenario seems much less likely than, say, a massive asteroid hitting the Earth, so I would prepare for it less.

Comment #118 October 2nd, 2011 at 8:27 pm

I think I have a solution to #20, though whether it’s a valid solution or not depends strongly on uniformity assumptions. Let p and q be Eisenstein primes with |p| less than |q|, and let N = pq. I write (a/b) for the cubic Jacobi symbol, and w for the principal cube root of unity. Let r be an Eisenstein integer mod N with (r/N) = 1 but such that r doesn’t have a cube root mod N. Let D0 be the uniform distribution over all Eisenstein integers x mod N with (x/p) = w and (x/q) = w^2, and let D1 be the uniform distribution over all Eisenstein integers x mod N with (x/p) = w^2 and (x/q) = w. The distribution of r*a^3 mod N for random a is equivalent to either D0 or D1, but it’s impossible to tell which; r^2*a^3 mod N for random a is the other of the two. Now (r*n^3, r*m^3)/2 + (r^2*n^3, r^2*m^3)/2 is a concrete way to sample D0^2/2 + D1^2/2, but there is no concrete way to sample D0, and there is no concrete way to sample D1.

Comment #119 December 24th, 2011 at 11:27 pm

[…] Aaronson’s blog post nowadays gave a report of fascinating available problems/tasks in complexity. just one in […]

Comment #120 March 15th, 2012 at 12:10 pm

[…] Do concrete tabulations of combinatorial objects with certain properties, such as kinds of graphs or knots, enlighten us about the existence of concretely small circuits for substantial tasks? Is there a guided way through the depressingly large search spaces outlined in the slides that Scott Aaronson linked here, and more recently in item 5 here? […]

Comment #121 September 29th, 2013 at 6:17 pm

[…] Aaronson has a post in his blog, Projects aplenty, where he proposes many research problems for students. One of them […]