In our own work, we are presently computing (non-sparse) Riemann tensors on Kähler manifolds of complex dimension up to 99. Each tensor thus requires 99^4 ≈ 10^8 complex entries, which is about the limit of what can be computed on a desktop-scale machine. These tensors are showing us remarkable mathematical regularities that we are far from understanding analytically.

We would dearly love to move to larger dimensions, say Riemann tensors with 1000^4 entries. This would let us extend the Kählerian geometry to protein-scale spin simulations. IBM’s Blue Gene racks seem ideally suited for this scale of computation.
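A quick back-of-the-envelope sketch of why 99 is roughly the desktop limit and 1000 needs Blue Gene-class hardware, assuming dense storage of 16-byte complex doubles and ignoring the index symmetries of the Riemann tensor (which could shave roughly an order of magnitude off these figures):

```python
# Storage for a dense n x n x n x n tensor of complex doubles.
# The 16-byte entry size and dense (symmetry-ignoring) layout are
# illustrative assumptions, not a description of the authors' code.

def tensor_bytes(n: int, bytes_per_entry: int = 16) -> int:
    """Memory footprint of a dense rank-4 tensor with n^4 entries."""
    return n**4 * bytes_per_entry

gib = 2**30
tib = 2**40
print(f"n =   99: {tensor_bytes(99) / gib:.1f} GiB")    # ~1.4 GiB: fits in desktop RAM
print(f"n = 1000: {tensor_bytes(1000) / tib:.1f} TiB")  # ~14.6 TiB: cluster territory
```

The jump from 99 to 1000 is a factor of about 10^4 in storage alone, before counting the arithmetic to fill the tensor.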

Upon reviewing the literature, we presently believe that our 99^4 Kählerian Riemann tensors are the largest ever computed in explicit numerical form. Does anyone know of a comparable or larger geometric computation effort?

Whatever your motivations, I’m a big fan of your work, and it made me feel pretty shitty to be competing with you on the job market this year. I mean, yes, the dearth of quantum complexity theorems *is* one of the most urgent crises facing our civilization … but more urgent than third-world poverty? That’s a tough one.

So I’m delighted to read on your homepage that you ended up at Berkeley. I loved it there, and I don’t think you could’ve made a better choice. If you get lucky, you *might* even encounter others there who share your interest in social justice.

Also, I highly recommend the mango smoothies at Hummingbird on Euclid Avenue.

What?! And all these years, I’ve been doing this for the sex and the drugs…

No, wait, that’s Bono.

http://www.win.tue.nl/~gwoegi/CO2a/exact2.pdf

Page 9, Open Problem 4.3.

It’s all related to k-CLIQUE complexity with fixed k. If you let me know about any progress in this direction, I would greatly appreciate it.
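For readers unfamiliar with the problem: the question is how far one can beat the naive O(n^k)-time algorithm for detecting a k-clique when k is fixed (for k = 3, fast matrix multiplication gives roughly n^ω time). A minimal sketch of the naive baseline, with an adjacency-set graph encoding chosen purely for illustration:

```python
# Naive fixed-k clique detection: try every k-subset of vertices.
# Runs in O(n^k * k^2) time; the open problem asks how much this
# exponent can be improved for each fixed k.
from itertools import combinations

def has_k_clique(adj: dict[int, set[int]], k: int) -> bool:
    """Return True iff some k vertices are pairwise adjacent."""
    return any(
        all(v in adj[u] for u, v in combinations(subset, 2))
        for subset in combinations(adj, k)
    )

# A 4-cycle: its largest clique is a single edge (k = 2).
cycle4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(has_k_clique(cycle4, 2))  # True
print(has_k_clique(cycle4, 3))  # False
```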

On the one hand, experimental complexity theory seems much more difficult than, say, experimental number theory, since it seems to require a much larger search space, ruling out many naive approaches. Thus, it’s clear that designing and executing useful experiments would require not just strong computers but a combination of theoretical understanding and engineering/programming skills of the kind many theoreticians do not possess (even those who can program in languages other than BASIC :). But of course, this is also true for experimental physicists, biologists, etc.

On the other hand, I think that this difficulty just demonstrates why experimental complexity theory could be so important. We don’t really understand the space of algorithms, and I’m sure that such experiments could come up with surprising results. (Probably good candidates are arithmetic problems with relatively efficient algorithms such as matrix multiplication and Fourier transform, since then we still have a good notion of what are the elementary operations but the search space is not doubly exponential.)
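Matrix multiplication is a good illustration of why these search spaces are tractable in principle: a candidate bilinear algorithm is just a set of identities, and checking one is cheap. As a toy example (not part of any actual search tool), Strassen’s 7-multiplication scheme for 2×2 matrices can be verified exhaustively over small integers, which is exactly the kind of inner-loop correctness check an experimental search would need:

```python
# Strassen's identities: a 2x2 matrix product in 7 scalar
# multiplications instead of the naive 8. An automated search over
# bilinear algorithms would use a cheap check like this to filter
# candidate identity sets.
from itertools import product

def strassen_2x2(a, b):
    """2x2 matrix product via Strassen's 7 multiplications."""
    (a11, a12), (a21, a22) = a
    (b11, b12), (b21, b22) = b
    m1 = (a11 + a22) * (b11 + b22)
    m2 = (a21 + a22) * b11
    m3 = a11 * (b12 - b22)
    m4 = a22 * (b21 - b11)
    m5 = (a11 + a12) * b22
    m6 = (a21 - a11) * (b11 + b12)
    m7 = (a12 - a22) * (b21 + b22)
    return ((m1 + m4 - m5 + m7, m3 + m5),
            (m2 + m4, m1 - m2 + m3 + m6))

def naive_2x2(a, b):
    return tuple(
        tuple(sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2))
        for i in range(2)
    )

# Exhaustively check agreement on all matrices with entries in {-1, 0, 1}.
for e in product(range(-1, 2), repeat=8):
    a, b = (e[0:2], e[2:4]), (e[4:6], e[6:8])
    assert strassen_2x2(a, b) == naive_2x2(a, b)
print("Strassen identities verified on all 3^8 small integer matrix pairs")
```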

Probably the first step, before asking for funding, is for us complexity theorists to be receptive and supportive of such works.

It depends a little on exactly what you mean by ‘similar’. We can’t get time bounds in such tradeoffs to be even as large as n^{1.00001} for problems in P, but Mike Saks, Xiaodong Sun, Erik Vee and I, following work of Miki Ajtai, did prove that a particular simple problem in P can’t be computed simultaneously in n·(log n/log log n)^{1/2} time and n^{0.99999} space, even non-uniformly.

Also, though Ryan Williams was too modest to note it, his recent CCC 2007 paper improved that golden ratio to a number above 1.80.
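For concreteness, the two exponents being referred to are the golden ratio φ (the positive root of c² = c + 1, from the Fortnow–van Melkebeek time-space tradeoff for SAT) and 2cos(π/7) ≈ 1.8019 from Williams’ CCC 2007 improvement:

```python
# The two SAT time-space lower-bound exponents mentioned above.
# phi is the positive root of c^2 = c + 1.
import math

phi = (1 + math.sqrt(5)) / 2
williams = 2 * math.cos(math.pi / 7)
print(f"phi        = {phi:.4f}")       # 1.6180
print(f"2cos(pi/7) = {williams:.4f}")  # 1.8019
assert abs(phi**2 - (phi + 1)) < 1e-12  # phi really solves c^2 = c + 1
```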

I don’t think that the hardness of the permanent alone is what makes it so appealing. For low-level complexity lower bounds, it is often easier to prove that a simple function is hard than a complex one. (The alternation-trading arguments are notable exceptions.) The appeal of the permanent is not only that it is #P-hard but also that it has many symmetries and is very similar to the determinant, which is simple to compute.
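The contrast is stark in code: the determinant takes O(n³) arithmetic, while the best known exact algorithm for the permanent, Ryser’s inclusion-exclusion formula (used below), takes roughly O(2ⁿ·n²), consistent with its #P-hardness. A minimal sketch:

```python
# Permanent vs. determinant: nearly identical formulas (the permanent
# just drops the sign of each permutation), wildly different complexity.
from itertools import combinations

def permanent(a):
    """Ryser's inclusion-exclusion formula, O(2^n * n^2) arithmetic."""
    n = len(a)
    total = 0
    for r in range(1, n + 1):
        for cols in combinations(range(n), r):
            prod = 1
            for row in a:
                prod *= sum(row[j] for j in cols)
            total += (-1) ** r * prod
    return (-1) ** n * total

def det_2x2(a):
    return a[0][0] * a[1][1] - a[0][1] * a[1][0]

a = [[1, 2], [3, 4]]
print(permanent(a))  # 1*4 + 2*3 = 10
print(det_2x2(a))    # 1*4 - 2*3 = -2
```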

“SAT requires n^c time and n^{o(1)} space”

but also things like

“Pi_4 SAT requires n^d time on a one-tape Turing machine.”

So far, my implementations have just been written in Maple. Given that, they’re surprisingly powerful – enough to have checked all possible alternation-trading proofs with at most 20 “lines” for the best possible time lower bound for SAT on n^{o(1)}-space algorithms (for a natural definition of “line”).

In some settings, this automated approach has found no better time lower bound than the current best – a strong suggestion that the current best is “optimal” in some sense. In other, more technically annoying settings (where humans fear to tread?), it has managed to yield new improvements. Eventually, when my thesis is fully revised and all that, I’ll post a preprint.