See the following article:

http://www.nature.com/nature/journal/v407/n6804/abs/407630a0.html

“One defines something and proves results about it. I don’t think such papers increase our knowledge. Both conceptual and technical papers should answer why the paper is worth reading.”

This would place an unrealistic burden on the author(s). (If they had all the answers, it wouldn’t be called research.) The problem is knowing when something they have defined or discovered is “worth” anything. For example, when imaginary numbers were discovered (or invented, depending on your POV), were their ramifications and applications known? One can think of many other mathematical concepts with the same history.

I think the underlying issue may be that it’s becoming harder to review papers, because they either cover topics beyond the PC’s expertise or are technically harder. (Though this is just my intuition – I have no evidence for it.) If this is the case, then PCs are bound to make mistakes, and whether you favor conceptual/technical/practical papers, you’ll probably feel that the mistakes hurt your favorite kind more than the others.

I don’t have a good solution, though. A larger PC may be hard to assemble, and also harder to manage. Maybe a 2-tier PC? (Where a second-tier reviewer only needs to commit in advance to reviewing 3–5 papers or so – something people could do on a more or less regular basis.)

I am not sure a larger conference is a good idea, though I am not sure it’s a bad one either. The one thing I do know is that STOC/FOCS are pretty good conferences, so any change to them should be incremental, not revolutionary.

the blog on Turing, and the congratulations to the STOC’08 PC (which surprised me by picking out the one of my submissions least likely to have lasting impact on the real world).

All this stuff in the blog is cleverly written so that everything said is vacuously true. Obviously I agree that for any problem we want the simplest possible solution, and that it would have been a mistake to reject Turing’s article. The question is: what conclusions should we draw?

Should we all be posing new computational models and problems with a couple of simple algorithms, in the hope that one magically turns out to be a new Turing (Award)?

I was myself extremely impressed when I read some of Turing’s work. He did not just present a simple new model. He made a very impressive philosophical argument for the far-reaching power of his simple model. If Turing had not provided such a convincing argument, then I think a PC would have been rightly justified in rejecting his paper, for it is the author’s job to convince the PC of the lasting merits of a contribution. My guess is that Turing produced such strong support for his model precisely because he knew that getting a new model or problem accepted by the scientific community required substantial support, and I think the support he produced is a main reason his work had such impact.

Ultimately, I think TCS should be dealing with problems and issues of lasting relevance. Broadening the field by identifying new such problems is of course extremely important. However, I think there is too much game playing out there, and many of those games are based on assumptions that will never come true in reality. Anything can claim to be a possible direction of the future, but the future will not go in every possible direction. One can always maintain that if clever people work on a problem, then even if the problem proves irrelevant, the ideas may later find relevant applications elsewhere, but I do think we need more focus than that.

I would hope that when a STOC/FOCS PC accepts a paper based on a brand new model/problem, it is because they truly believe the problem has lasting relevance and is worth working on, with the intent of accepting later technical progress on it.

Finally, we have classic problems out there that have already proved their continued relevance over a long period of time, and hence are known to be worth serious research. Of course, progress on these problems is hard to come by. The simple general ideas have already been exhausted, and progress then often depends on a deeper understanding of the nature of the concrete problem at hand. This is the point where some superficial referees start objecting that the methods are not general. Superficial referees will also complain if the solution looks standard, not realizing that if the problem has been open for a long time, it is because it is hard to figure out how to apply the standard techniques. A beautiful example is the work on the k-median problem, where it took a long time before the first constant-factor LP-based approximation was found, and even longer to find out how to apply other standard techniques like “primal-dual” and “local search”. While the progress has gone through many complicated solutions, the end result is a good understanding of the problem, with a multiplicity of simple solutions whose approximation factors are not too far from the lower bounds. That’s a good theory story.

A great non-trivial concept story is that of NP-completeness, which has spread far outside our own community. What makes this story particularly interesting is that NP-completeness is NOT a simple concept. It is quite hard to explain to an outsider what NP-completeness really means. However, the theory community managed to prove that all kinds of relevant problems are NP-complete, demonstrating the ubiquity of the concept. It is this hard-earned, demonstrated power that has carried NP-completeness out to the rest of the world. Looking at neighboring physics, we see plenty of hard-to-grasp concepts of amazing power.

What I am trying to say is that I think the good theory stories are those that involve substantial amounts of work, eventually converging on a deeper understanding, possibly even yielding simple solutions once things have been properly digested. That’s what makes me proud to be a theoretician.

I think it is fine to have some slots at STOC/FOCS devoted to new problems and models that broaden the field, but not at a rate that sacrifices the depth of our field. If we aim at a depth of d (papers per problem), then generally speaking we should only devote a fraction 1/d of the slots to such starters (e.g., with d = 3, at most a third of the slots would go to brand-new models, leaving the rest for follow-up progress). I prefer d > 2.

That was the joy of the Artificial Intelligence conferences (AAAI and IJCAI) of the 1970s, the Artificial Life conferences at the Santa Fe Institute, the invitation-only Hackers conferences in Silicon Valley, and the International Conference on Complex Systems series run by NECSI.

In fact, dipping back further, that’s how Cybernetics and Artificial Intelligence were born as disciplines.

Or maybe better to consider early meetings of The Royal Society.

Let Aleph-null flowers bloom!

Also, I agree with Paul: incrementality is the way to go – no point rocking the boat too much.

The TCS field is already mature, diverse, and large enough to consider dropping the conference-proceedings system and moving to a multicultural structure without authoritative, fashion-setting twice-yearly meetings, as in math, physics, and, I guess, other fields.