## Back to safe territory

When you see me getting chased across the blogosphere by livid, paintbrush-wielding artistes, it can only mean one thing: that I’ve once again abandoned my “promise” to stick to the Serious Complexity Theory That I’m Actually Qualified To Discuss. So I’ll tell you what, whistling-pig-lovers: answer me the following; then come back for more.

Given a set of n-by-n real matrices, is there a nonzero real linear combination of those matrices with rank 1? Equivalently, given a subspace S of the tensor product of two n-dimensional Hilbert spaces, does S contain a separable (i.e., product) pure state?

I’d like (1) a proof of NP-hardness for the exact version, and (2) more importantly, an answer to whether there’s a decent approximation algorithm. (For instance, an algorithm that finds a rank-1 matrix, with the sum of the squares of its entries equal to 1, whose L_2-distance from the subspace of interest is at most a small additive constant more than optimal.)
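
To make the approximation objective concrete, here is a minimal numpy sketch (my own construction, not anything from the post) of the natural local-search heuristic: alternately optimize the two factors of a candidate rank-1 matrix against the projector onto the given span. Each half-step is a smallest-eigenvector computation, and the method can get stuck in local optima, which is exactly why this doesn't settle question (2).

```python
import numpy as np

def nearest_product_state(mats, iters=50, restarts=10, seed=0):
    """Heuristic: find unit vectors u, v minimizing the L2 distance from
    vec(u v^T) (Frobenius norm 1) to the span of vec(M_1), ..., vec(M_m).

    Alternating minimization: vec(u v^T) = kron(u, v) is linear in u for
    fixed v (and vice versa), so each half-step is a bottom-eigenvector
    problem.  Local optima are possible, hence the random restarts.
    """
    n = mats[0].shape[0]
    rng = np.random.default_rng(seed)
    # Orthonormal basis for span{vec(M_k)}; Q projects onto its complement.
    B, _ = np.linalg.qr(np.stack([M.flatten() for M in mats], axis=1))
    Q = np.eye(n * n) - B @ B.T
    best = (np.inf, None, None)
    for _ in range(restarts):
        v = rng.standard_normal(n)
        v /= np.linalg.norm(v)
        for _ in range(iters):
            # With v fixed, vec(u v^T) = A_v @ u where A_v = kron(I, v).
            A_v = np.kron(np.eye(n), v.reshape(-1, 1))
            u = np.linalg.eigh(A_v.T @ Q @ A_v)[1][:, 0]
            # With u fixed, vec(u v^T) = A_u @ v where A_u = kron(u, I).
            A_u = np.kron(u.reshape(-1, 1), np.eye(n))
            v = np.linalg.eigh(A_u.T @ Q @ A_u)[1][:, 0]
        res = np.linalg.norm(Q @ np.kron(u, v))  # distance to the span
        if res < best[0]:
            best = (res, u, v)
    return best
```

The catch, of course, is proving anything about how far the returned local optimum can be from the true optimum.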

So get cracking! If you do find a decent approximation algorithm, you’ll have shown, among other things, that QMA(2) (QMA with two unentangled yes-provers) is in EXP. Incredibly, right now we don’t have any upper bound on QMA(2) better than NEXP.

Oh, yes: while my brain is closed, and while I can barely turn theorems into coffee, I will offer $20 for a solution to (2).

American.

Comment #1 June 17th, 2006 at 12:45 pm

$20 for any solution, or just for a positive solution?

I have most of an argument that (1) is hard, and I suspect that (2) is hard too. You can invert the problem to look for a pair of vectors whose tensor product satisfies linear relations. In that guise, the problem is just to solve a general set of bilinear equations in two sets of variables. That looks equivalent to solving a general set of homogeneous quadratic equations, which in turn looks equivalent to solving a general set of inhomogeneous polynomial equations. That is well-known to be hard, see, e.g., 0-1-NP_C in the Zoo.
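
Spelled out, the inversion described above reads as follows (my notation; C^{(1)}, ..., C^{(k)} denotes any basis for the orthogonal complement of the span of the given matrices):

```latex
% Seek a rank-1 matrix x y^T in span{M_1, ..., M_m}.  Membership in the
% span is equivalent to orthogonality (in the trace inner product) to a
% basis C^{(1)}, ..., C^{(k)} of its complement, i.e. to the bilinear system
\[
  \sum_{i,j=1}^{n} C^{(l)}_{ij}\, x_i\, y_j = 0,
  \qquad l = 1, \dots, k, \qquad x \neq 0,\; y \neq 0.
\]
% This is a general system of bilinear equations in the two blocks of
% variables x and y, i.e. homogeneous quadratics in the variables (x, y).
```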

I would conjecture that approximation below a certain threshold is also hard, using all of the work on PCP and approximate solutions to TSP and related problems. But for the problem in question, I’m not sure that you care about better and better approximations. It seems like even a rough approximation is good enough.

Comment #2 June 17th, 2006 at 1:00 pm

“$20 for any solution, or just for a positive solution?”

I want an approximation algorithm, but being a complexity theorist, I’ll pay for an inapproximability result too.

I had a similar thought about (1), which is why I’m morally certain that it’s NP-hard.

For (2), you’re right that I don’t need better and better approximations — within a fixed additive constant would be fine.

Comment #3 June 17th, 2006 at 4:41 pm

Dear Scott,

You may already know this, and/or I may be misunderstanding your problem, but to me there seem to be two very pertinent problems that in the control systems community generally go by the names of “Robust Nonsingularity Checking” and “Robust Stability Checking.” Namely,

**********

Consider an N x N “symmetric interval matrix” mathcal{A} defined by the N^2 intervals [alpha_{ij}, beta_{ij}], where i and j are integers between 1 and N. That is, consider the set mathcal{A} of N x N real, symmetric matrices whose (i,j)th entries are bounded to lie in the closed intervals [alpha_{ij}, beta_{ij}].

The “checking robust nonsingularity” problem asks: “Does mathcal{A} have a member possessing a zero eigenvalue?”

The “checking robust stability” problem asks: “Does every member of mathcal{A} have only negative eigenvalues?”

********************

[NB: I believe these problems are just as hard when, instead of defining mathcal{A} by entrywise bounding intervals, you have your case of linear combinations of given matrices. Indeed, some papers on this problem take this formulation.]
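
For concreteness, here is a minimal Monte-Carlo sketch (my own, with hypothetical function and parameter names) of the kind of randomized estimation such algorithms perform: sample symmetric matrices from the entrywise intervals and tally eigenvalue statistics.

```python
import numpy as np

def sample_interval_checks(alpha, beta, samples=1000, tol=1e-9, seed=0):
    """Monte-Carlo sketch for the interval-matrix questions above: draw
    symmetric matrices whose upper-triangular entries are uniform in
    [alpha_ij, beta_ij], then estimate (a) the fraction with a (nearly)
    zero eigenvalue and (b) the fraction whose eigenvalues are all
    negative.  A Chernoff bound controls the estimation error; note this
    estimates fractions, it is NOT an exact decision procedure for the
    NP-hard worst-case questions.
    """
    rng = np.random.default_rng(seed)
    cnt_singular = cnt_stable = 0
    for _ in range(samples):
        A = rng.uniform(alpha, beta)           # entrywise uniform draw
        A = np.triu(A) + np.triu(A, 1).T       # mirror the upper triangle
        w = np.linalg.eigvalsh(A)
        cnt_singular += np.min(np.abs(w)) < tol
        cnt_stable += np.max(w) < 0
    return cnt_singular / samples, cnt_stable / samples
```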

As can be found in the reference cited below (and references therein), these two problems are NP-hard to solve exactly.

However, as the key result of that reference shows, there exist polynomial-time randomized algorithms that allow you to answer Yes or No to the “robust nonsingularity” and “robust stability” questions to arbitrary accuracy (i.e., how close to a zero eigenvalue) and confidence level.

REFERENCE: M. Vidyasagar and V. D. Blondel, “Probabilistic solutions to some NP-hard matrix problems,” Automatica 37(9), September 2001, pp. 1397–1405.

Available at Blondel’s Homepage:

http://www.inma.ucl.ac.be/%7Eblondel/publications/01VB-prob.pdf

Regards,

Bill

Comment #4 June 17th, 2006 at 5:16 pm

Bill: Thanks for the reference! I just had a look.

From page 2: “The objective of the present paper is to show that it is possible to develop polynomial-time (often abbreviated as ‘poly-time’) algorithms for each of the above NP-hard problems.”

Wow, that’s sloppy terminology! It turns out that what they mean (in Section 3 at least) is that they can estimate the fraction of solutions using a Chernoff bound. (Admittedly, Section 4 looks less trivial.)

I don’t think this is applicable to my problem, since I’m not satisfied to know (say) that most states close to the subspace S are far from separable.

Comment #5 June 17th, 2006 at 8:33 pm

??? You seem to identify rank-1 density matrices with separable states. If I_2 is the 2-dimensional identity matrix, and rho_A = I_2/2 and rho_B = I_2/2, then rho_A otimes rho_B is a separable state of rank 4.

Comment #6 June 17th, 2006 at 8:53 pm

“Equivalently, given a subspace S of a bipartite n-by-n Hilbert space, does S contain a separable state?”

I’m probably all wet but here it goes.

If one had a functional L(rho) which vanished on separable states, one could minimize L over all rho in S using calculus, and then check whether the minimum is zero, i.e., whether a separable state belongs to S. I suggest for that magical functional the Conditional Mutual Information, as used in squashed entanglement.

Comment #7 June 17th, 2006 at 9:01 pm

“You seem to identify rank 1 density matrices with separable states.”

No. The problem only asks about pure states. Given a separable pure state

sum_{ij} alpha_i beta_j |i>|j>,

the corresponding rank-1 matrix is the one whose (i,j) entry is alpha_i*beta_j.
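
A quick numerical sanity check of this pure-state correspondence (a sketch; the names are mine):

```python
import numpy as np

n = 4
rng = np.random.default_rng(0)
a = rng.standard_normal(n)   # amplitudes alpha_i
b = rng.standard_normal(n)   # amplitudes beta_j

# Coefficient matrix of the product state sum_{ij} a_i b_j |i>|j>:
# entry (i, j) is a_i * b_j, i.e. the outer product a b^T.
M = np.outer(a, b)
assert np.linalg.matrix_rank(M) == 1   # product states give rank 1

# By contrast, the entangled state (|0>|0> + |1>|1>)/sqrt(2) has a
# coefficient matrix of rank 2 (two nonzero Schmidt coefficients).
E = np.zeros((n, n))
E[0, 0] = E[1, 1] = 1 / np.sqrt(2)
assert np.linalg.matrix_rank(E) == 2
```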

Comment #8 June 17th, 2006 at 9:15 pm

“If one had a functional L(rho) which vanished on separable states…”

We’d also want that

(1) L doesn’t vanish on entangled pure states,

(2) L can be computed in time polynomial in the Hilbert space dimension, and

(3) [the kicker] L can be minimized over a subspace in time polynomial in the Hilbert space dimension.

I’m not sure about CMI, but for almost any natural measure (1) and (2) will be true and (3) will be false. The reason (3) will be false is that there can be local optima — if L corresponds to entanglement, then it’s not a linear functional that can be minimized using semidefinite programming! That’s why I’m only asking for an approximation algorithm.

Comment #9 June 18th, 2006 at 1:24 am

Oops. Sorry I didn’t notice that the “polynomial-time randomized algorithms” for robust nonsingularity and stability checking in the reference I cited previously were simply of the form “generate matrices uniformly at random and see if they’re singular or unstable”.

I guess it’s like my momma always said, “You can’t judge a mathematical paper—especially one written by control systems people—by its first page.”

Nevertheless, the reference gave the impression that the NP-hardness of such problems is largely understood by mapping them onto the problem of satisfiability of sets of polynomial inequalities.

Do you have any educated intuition about whether this could help in finding a nontrivial approximation, either for your specific problem or for the robust nonsingularity and stability checking problems in the reference I cited? At the risk of sounding like a total fool and once again disappointing momma, aren’t interior point methods and the like useful for this?

Comment #10 June 18th, 2006 at 2:21 am

“aren’t interior point methods and the like useful for this?”

Again, the problem is those local optima.

You can use an interior point method to solve an LP or SDP relaxation, but then you have to prove that the solution obtained from the relaxation is not too far from optimal.

Comment #11 June 18th, 2006 at 2:58 am

Silly me. Quite so. I guess that in addition to reading past the first page of journal articles, I should read comment threads like this in their entirety.

Comment #12 June 18th, 2006 at 11:46 pm

“Again, the problem is those local optima.”

Easier said than done, but could perchance the set of local optima have a symmetry which allows you to spot the global ones?

An inspirational quote:

“A guiding principle in modern mathematics is this lesson: Whenever you have to do with a structure-endowed entity S, try to determine its group of automorphisms, the group of those element-wise transformations which leave all structural relations undisturbed. You can expect to gain a deep insight into the constitution of S in this way.” – Hermann Weyl

Comment #13 June 19th, 2006 at 12:07 am

Even better, have you tried using induction? I hear that induction is a very useful tool.

Comment #14 June 19th, 2006 at 6:28 am

I must disagree with that last comment. Induction is not mathematically valid. But deduction is; some form of deduction always works…

Comment #15 June 19th, 2006 at 9:43 am

cheshire cat, was that a joke?

Comment #16 June 19th, 2006 at 4:22 pm

>> cheshire cat, was that a joke?

One certainly hopes so.

Comment #17 June 19th, 2006 at 9:59 pm

“One certainly hopes so.”

Since you posted the comment, one would expect you to do better than merely “hope”. Did you intend your comment as a joke or not? It’s a straightforward question; I expect the answer to be as well.

Comment #18 June 20th, 2006 at 3:03 am

Sigh… OK, to be painfully literal, I wasn’t being serious. In case you haven’t noticed, this is Scott’s blog, a space where irony is rampant. This discussion had stopped being serious possibly by the Anon 11:46 comment, and certainly by the one after that.

Comment #19 June 20th, 2006 at 5:51 am

“In case you haven’t noticed, this is Scott’s blog, a space where irony is rampant.”

I have no idea what you’re talking about.

Comment #20 June 20th, 2006 at 3:05 pm

BREAKING NEWS: Leonid Gurvits just phoned me to explain how to prove the problem NP-hard. Basically, you use the same techniques as in his proof that separability testing for mixed states is NP-hard. (Leonid says the two problems are “sort of dual to each other.”) Ultimately, you’d be reducing from Max Clique or something similar.

Unfortunately, Leonid doesn’t get the $20, since he hasn’t yet answered the approximability question. His feeling is that the problem might be shown inapproximable via an approximability-preserving reduction from Max Clique.

Comment #21 June 21st, 2006 at 4:41 am

OK, fine, irony cancels itself, I was being serious all along. I believe Hilbert was right –

If one keeps making deductions, there is nothing left to prove.

As a side note, all this interactive problem solving is very exciting. Hope Leonid pays due tribute to Scott’s philistinism when he writes up the result…

Comment #22 June 21st, 2006 at 3:47 pm

The Collaborative Problem Solving here is very interesting to me as well.

There happens to be a parallel discussion going on about open source journalism over at Jay Rosen’s Pressthink blog. I left a number of comments about it there, but what you’re doing here is both interesting and exciting — even though I can’t even begin to understand what you’re talking about.

Comment #23 July 5th, 2006 at 9:49 pm

Scott, I don’t know how much further progress you’ve made with this after Leonid’s comment, but I just happened to notice a paper by Adams, Lax, and Phillips, “On matrices whose real linear combinations are non-singular,” Proc. Amer. Math. Soc. 16 (1965), 318–322. It may be worth checking out.

Michael Nielsen