I’m convinced that the following diagram means something precise:

My question is, what does it mean?

Intuitively, it means that if your software package can solve SDPs, then you can easily use it to solve LPs; if it can solve LPs, you can easily use it to invert matrices, and so on, but not vice versa. But it can’t mean (for example) that SDPs are harder than LPs in the usual complexity-theoretic sense, since both problems are P-complete!
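To make the SDP-to-LP arrow concrete: the standard embedding rewrites the LP constraint Ax ≤ b as the requirement that the diagonal matrix diag(b − Ax) be positive semidefinite, so any SDP solver can express an LP directly. A minimal sketch (Python with NumPy; the function names and toy data are my own, not from the post):

```python
import numpy as np

def lp_constraint_as_psd(A, b, x):
    """Return the diagonal matrix diag(b - A x); its positive
    semidefiniteness is equivalent to the LP constraint A x <= b."""
    return np.diag(b - A @ x)

def is_psd(M, tol=1e-9):
    """A symmetric matrix is PSD iff its smallest eigenvalue is >= 0."""
    return np.linalg.eigvalsh(M).min() >= -tol

# Toy LP constraints: x1 + x2 <= 2 and x1 <= 1
A = np.array([[1.0, 1.0],
              [1.0, 0.0]])
b = np.array([2.0, 1.0])

feasible = np.array([0.5, 0.5])    # satisfies A x <= b
infeasible = np.array([3.0, 0.0])  # violates both rows

print(is_psd(lp_constraint_as_psd(A, b, feasible)))    # True
print(is_psd(lp_constraint_as_psd(A, b, infeasible)))  # False
```

The point is purely syntactic: diagonal PSD constraints are exactly linear inequality constraints, which is why the reduction goes one way so easily.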

Maybe it means that, if your axiom system is strong enough to prove SDP is in P, then it’s also strong enough to prove LP is in P, and so on — but not necessarily vice versa. But how would we show such a separation?

(Sorry, no money this time. We’ll see if it makes any difference — I’m guessing that it doesn’t.)

did you make this diagram up yourself? just wondering..

Yes.

Well, perhaps diagrams like that can motivate the study of NC0 reductions, i.e. where every bit in the converted input depends on a bounded number of bits in the original input (I’m not claiming that such reductions exist here). Especially since NC0 has been making quite an appearance (reappearance?) in relatively recent works (e.g. NC0 cryptography).

If you indeed think about all of these things (including SVD and matrix inversion) as convex programs, then one obvious meaning of the diagram is in terms of allowable syntax, i.e., the type of constraints you are allowed. (Even though, in principle, ML and x86 assembly language are both universal, I think people prefer high-level languages for most coding tasks.)

Practically, we can solve LPs much, much faster than SDPs, and we can do matrix inversion and SVD faster than solving a generic LP. We are talking about two orders of magnitude in the size of program that can be feasibly solved (LP vs. SDP).

But for your aesthetic, it’s probably best not to allow arbitrary P-time or log-space reductions; if you restrict your effort to syntactic translations, then you will get an actual hierarchy.

NC0 is a little drastic. It seems possible that this diagram of problems becomes an inclusion diagram for complexity classes with respect to L reductions, say. Maybe even NC or SC.

Making the edges directed would have made it much clearer.

Just to bring both a note of practicality and an extra intellectual dimension into the discussion (and speaking as someone who presently devotes about 3 GFlops to SVDs, all day, every day), the Mathematica implementation of SVD is pretty seriously broken, in the sense that Mathematica’s SingularValueDecomposition[] sporadically fails, returning a grossly incorrect result without generating an error message. Needless to say, from a purely algorithmic point of view, this should never happen. So why does it happen?
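Incidentally, silent failures of that kind are cheap to detect after the fact, since an SVD result can always be verified by reconstruction and orthogonality checks. A minimal sanity-check sketch (in Python with NumPy rather than Mathematica; the function name is my own):

```python
import numpy as np

def svd_is_sane(A, tol=1e-8):
    """Verify an SVD after the fact: reconstruct A from its factors
    and check that U and V have orthonormal columns. A grossly wrong
    result (as described above) would fail this cheap test."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    reconstruction_ok = np.allclose(U * s @ Vt, A, atol=tol)
    u_orthonormal = np.allclose(U.T @ U, np.eye(U.shape[1]), atol=tol)
    v_orthonormal = np.allclose(Vt @ Vt.T, np.eye(Vt.shape[0]), atol=tol)
    return reconstruction_ok and u_orthonormal and v_orthonormal

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 30))
print(svd_is_sane(A))  # True for a correct SVD implementation
```

The check costs one extra matrix multiply and would catch exactly the "grossly incorrect result without an error message" failure mode.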

The link with complexity theory has to do with social complexity… Wolfram Research has known about and acknowledged their SVD problem for almost a year, but is disinclined to fix it (it being unprofitable to do so).

This raises the practical question: as a citizen of a Jeffersonian democracy, should I meekly submit to market forces, or should I seek to obtain the four freedoms of Free-as-in-Freedom software, and therefore abandon Mathematica for less-convenient free software packages?

Up until recently, I would have considered this as a dilemma in economics and/or moral philosophy. And pragmatically, I am in the “meekly submitting” category … although I grit my teeth daily at the ugliness of the resulting code circumlocutions!

But nowadays, and increasingly, it seems that “sociocomplexity” (which is broadly analogous to sociobiology) is emerging as the most creative intellectual venue for analyzing these problems.

The economist Amartya Sen is a leader in considering these issues. A recent essay of Sen’s ends with these words:

As Buddha said, “there are some questions that can be asked of which there are no answer” and while I’ve given several answers, they are not final answers, and I very much hope to have more discussion on these topics.

Am I the only one who sees, narrowly in dealing with bugs in SingularValueDecomposition[] and broadly in Sen’s article, an emerging challenge in complexity theory?

John Sidles said:

The link with complexity theory has to do with social complexity … [snip]

Are you practicing to do a Feynman? Or maybe the other way around: publish a social-science paper in a scientific journal. It certainly seems that you are practicing their gobbledygook writing style, being neither here nor there, untrue to both fields.

John Sidles, I mean this in the least offensive way possible, but maybe you should consider getting your own blog? You appear to have much to say, and quite frequently it’s only vaguely, or not at all, related to the posting by Scott that you’re commenting on.

Just an idea.

I’ll hold it down to one comment per week … with links instead of prose! E.g., your post called to mind the mathematician John Lighton Synge’s preface to his (very hard to find) 1957 mathematical fantasy Kandelman’s Krim.

I agree with other posters that a sharper notion of reduction is likely useful to separate SDP from LP.

However, isn’t there another distinction between the problems? In the case of LP with integral (or rational) inputs, isn’t it the case that the polynomial-time algorithms can be modified to produce an exact rational expression for the optimum? In the case of SDP with the same inputs, all we have is an approximation scheme whose running time is polynomial in log(1/epsilon). Now, if one wants a binary/decimal expansion of the optimum, then the two produce the same quality of answer, but I don’t know of an efficient SDP algorithm that produces a closed-form expression. (Obviously the optimum is not rational in the SDP case in general, but any closed-form expression, say in radicals, would do.)
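To make that last point concrete: even with all-integer input data, an SDP optimum can be irrational. For instance, maximizing x subject to the matrix [[1, x], [x, 2]] being positive semidefinite forces x² ≤ 2, so the optimum is √2. A quick numerical sketch (Python with NumPy; bisection on the PSD condition stands in for a real SDP solver, and the names are my own):

```python
import math
import numpy as np

def psd(x):
    """Is [[1, x], [x, 2]] positive semidefinite?
    For this 2x2 matrix that is equivalent to det = 2 - x^2 >= 0."""
    return np.linalg.eigvalsh(np.array([[1.0, x], [x, 2.0]])).min() >= -1e-12

# Bisect for the largest feasible x: every matrix entry is an integer,
# yet the optimum is sqrt(2).
lo, hi = 0.0, 2.0          # psd(0.0) holds, psd(2.0) fails
for _ in range(60):
    mid = (lo + hi) / 2
    if psd(mid):
        lo = mid
    else:
        hi = mid

print(lo)               # ~1.41421356..., i.e. sqrt(2)
print(math.sqrt(2.0))
```

So an exact rational output is impossible for SDP in general, which is consistent with the point above that only a log(1/epsilon) approximation scheme is available.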