## Ask me (almost) anything

**Update (8/19):** I’ve answered most of the remaining questions and closed this thread. If your question wasn’t answered earlier, please check now—sorry for the delay! And thanks to everyone who asked.

This blog was born, in part, out of existential anguish. My starting axioms, reflected in the blog’s title, were that

- nerds like me are hothouse plants, requiring a bizarre, historically-improbable social environment to thrive in life;
- if such an environment ever existed, then it didn’t survive one or more major upheavals of the twentieth century, such as the sexual revolution, the Holocaust, or the end of the Cold War;
- I and other nerds were therefore essentially walking fossils, absurdly maladapted for the civilization in which we found ourselves (even, ironically, as that civilization relied more than ever on nerdly skills); and
- all that being the case, I might as well kill some time by proving quantum complexity theorems and writing a blog full of crass jokes.

And therein lies the problem: this summer, I’ve simply been *enjoying* life too much to want to take time out to blog about it. Happiness, it seems, is terrible for my literary productivity.

Still, enough people now rely on this blog for their procrastination needs that I feel a moral obligation to continue serving them. So to overcome my own procrastination barrier, from now on I’m going to try writing entries that are basically just “requests for comment”: stones in a stone soup, with the intellectual barley, discursive salt, argumentative carrots, and dialectical beef chunks to be supplied by you, my readers.

(To a few commenters: thanks so much for the plywood, rotting raccoon carcasses, and used syringes, but the soup should be fine without them…)

To start things off, today we’re going to have another open thread. You can ask pretty much anything; my one request is that you don’t ask for grad school or job application advice, since we already covered those things ad nauseam in two previous open threads.

Here are a few examples of things to ask me about:

1. My recent trip to the Azores for the FQXi Conference on Foundational Questions in Physics and Cosmology

2. My recent trip to Paris for the Complexity’2009 conference

3. My recent trip to Lexington, Kentucky for the Quantum Theory and Symmetries conference

4. The recent breakthrough paper by Jain, Ji, Upadhyay, and Watrous, finally proving what many in the quantum complexity world long suspected: that QIP=IP=PSPACE. That is, *quantum interactive proof systems provide no more computational power than classical ones*. (For more see this post from Lance and Steve Fenner, or this one from the Pontiff.)

5. The exciting new Polymath Project, to find (under some number-theoretic assumption) a deterministic polynomial-time algorithm for generating n-bit primes. (Hat tip to Ryan O’Donnell.)

Oh, one other thing: while you’re welcome to ask personal questions, they’ll most likely be answered not by me but by Pablo the PSPACE Pirate.

**Update (7/31):** One question per person, please!

Comment #1 July 30th, 2009 at 10:19 pm

Scott: See my comment in the previous thread! And then answer it here. Oh, what the hell, I’ll ctrl+c/ctrl+v:

“Hey Scott, Dave Bacon says (re: Jain et al.’s proof of QIP = PSPACE):

What does this say about the power of quantum computers? Well on the one hand, I’m not sure it does tell us too much (ask Scott, damnit!) since these complexity classes are, off the top, most interesting for the intractable problems they represent. So I’m asking: What does this say about the power of quantum computers? 😛 Personally, I like to think that this result essentially tells us that God (the SF, whatever) can’t prove anything to D-Wave that he can’t prove to the rest of us, too.”

I guess the more “technical” question is, are the semidefinite programming methods used here likely to be useful in studying other quantum complexity classes? This result is pretty spectacular (I did not expect on January 1 to see it resolved this decade), but it’s certainly not important for the reasons IP = PSPACE was 20 years back. So from the viewpoint of the low-level, bound-farming technicians, how much of a breakthrough is QIP = PSPACE?

Comment #2 July 30th, 2009 at 11:00 pm

Harrison: For me, this result actually says more about PSPACE than it does about quantum computing. It reaffirms our view that PSPACE is an incredibly robust class (“like a rock,” as Watrous once put it to me)—even more so than P. Nondeterminism? Randomness? Quantum? “Bring it on,” says PSPACE mockingly. “What else you got?”

We’ve seen some previous results putting “large-looking” quantum classes in PSPACE: for example, that of Jain and Watrous (in the last CCC) putting 1-round zero-sum quantum games in PSPACE, and that of myself and Watrous putting quantum computing with closed timelike curves in PSPACE.

The Jain-Watrous result also used a parallel SDP algorithm—though I’m not sure what the relationship is to the QIP=PSPACE result—while A.-Watrous used parallel linear algebra algorithms, but didn’t need SDP at all.

So the common denominator in these results would seem to be not so much semidefinite programming as

(1) the amazing power of PSPACE algorithms—or equivalently, of NC algorithms “scaled up by an exponential,” and

(2) John Watrous. 🙂

Comment #3 July 30th, 2009 at 11:18 pm

“nerds like me are hothouse plants, requiring a bizarre, historically-improbable social environment to thrive in life; if such an environment ever existed, then it didn’t survive one or more major upheavals of the twentieth century”

I don’t understand what motivates this declaration. Don’t you have tons of nerdy people to hang out with at MIT?

Comment #4 July 30th, 2009 at 11:20 pm

Scott,

Who are your heroes?

Pig

Comment #5 July 30th, 2009 at 11:21 pm

I was wondering what your opinion is on the valid role of side projects, if any, in keeping academics sane. As it’s no secret that I look up to you, I was especially curious about your projects like the Worldview Manager. What are some of your motivations in taking on those kinds of nifty side projects? Thanks for answering your adoring public’s questions.

Comment #6 July 30th, 2009 at 11:22 pm

Some 3d puzzles are based on having a few pieces which require coordinate motion to take apart – instead of coming apart in two halves, they come apart in, for example, three pieces, and all three need to be moved at once. If you have a bunch of 3d pieces with all flat surfaces put together, what would you guess is the computational complexity of figuring out whether there’s a simultaneous-movement way of getting them to move at all, or if they’re just locked immovably?

(Okay, so that’s more a computing it question than a computational complexity question. I ask about the asymptotic because I suspect that a puzzle based on something which hit the bad case of the asymptotic would be interesting – I’m not even sure that movement of subsets of pieces in only two directions is even subexponential).

Not really a question, but you might be interested, if you haven’t seen it already, in Bob Hearn’s thesis on the complexity of puzzles and games – http://groups.csail.mit.edu/mac/users/bob/hearn-thesis-final.pdf

Comment #7 July 30th, 2009 at 11:31 pm

Not a question but an answer: Bram, see Natarajan, “On planning assemblies”, SoCG 1988, DOI:10.1145/73393.73424

Comment #8 July 31st, 2009 at 12:08 am

Sweet, all my questions answered in one spot!

Do you know of any crazy models of physics that naturally lead one to be able to solve problems in EXP? What about NP? I still have problems with NP as it seems very hard to build some sort of consistent laws of physics out of it.

Why do the universal adiabatic quantum computer constructions use the funny clock trick while the classical equivalent doesn’t?

How do we solve the problem of the future cost of healthcare?

What’s more likely to lead to progress in quantum foundations: entanglement (Bell), contextuality (Kochen Specker), or none of the above?

🙂

Comment #9 July 31st, 2009 at 12:13 am

Is there anything interesting about self-replicating computers from a computational complexity standpoint? It seems like a self-replicating computer could produce exponentially many copies of itself in linear time, which could run simultaneously, and trivially solve NP problems in polynomial time. Or does that just shift the bottleneck to the exponential amount of time it takes to gather the resources to build the computers?

Comment #10 July 31st, 2009 at 12:13 am

Pablo, this one is for you: Is Scott happy because of all this lovely conference hopping, or is the true reason that he’s found a boss for himself? 😀

Comment #11 July 31st, 2009 at 12:48 am

Z. M. Davis:

YARRRRRRRRRR! Sure as me parrot’s named TQBF, methinks Cap’n Arrrronson made clear a’whiskey th’ he wa’ discussin’ what he

usedto think, not what hethink now!The question o’ th’ nerd’s place ‘n society be a complex an’ multifaceted one. Take it from an ol’ scallywag who’s sailed th’ poly(n) seas ‘n bases h’words on exp’rience, not harrrrsay: tharrr be no way to state the truth ’bout these matters’n a blog post, w’thout causin’ eitharrr offense arrr embarrrrassment arrr both, all o-which me be loathe to do. Someday some brave buccaneer sh’write a book th’makes i’ all plain as a treasure-map. ‘Till then, tharrr be naught to do but carrrrve the picture in th’ broadest of cutlass-strokes, as ’twere, an’ trust that those who can w’fill th’ holes themselves.

YARRRRRRRRRRRRRRRRRR!

Comment #12 July 31st, 2009 at 1:23 am

“Who are your heroes?”

Turing, Feynman, Galois, Galileo, Erdös, Sagan, Dawkins, Darwin, Bertrand Russell, Ramanujan, Einstein, Spinoza, Theodor Herzl, Sarah Aaronsohn, Twain, Churchill, Jefferson, Franklin, Lincoln, Obama, Al Gore, Tuvia Bielski, Alan Sokal, Hypatia, Sofia Kovalevskaya, Lise Meitner, Archimedes, …

(in the order in which they occurred to me; and with no implication that, e.g., Sokal’s achievements are comparable to Churchill’s; and with full acknowledgment of these people’s many shortcomings; and not listing anyone I know, for fear of embarrassing them)

Comment #13 July 31st, 2009 at 1:33 am

parallax:

“Is there anything interesting about self-replicating computers from a computational complexity standpoint?”

What *isn’t* there something interesting about from a computational complexity standpoint? 🙂

“It seems like a self-replicating computer could produce exponentially many copies of itself in linear time, which could run simultaneously, and trivially solve NP problems in polynomial time. Or does that just shift the bottleneck to the exponential amount of time it takes to gather the resources to build the computers?”

I’m always happy when people answer their own questions! 🙂

Except that not only do you have to worry about resources, you also have to worry about the fact that a bounded region has only a finite information capacity (proportional to its surface area, according to the holographic principle). And therefore, to solve an NP-complete problem using self-replicating computers you’d presumably need an exponential volume of space, which in turn implies exponential time because of the finiteness of the speed of light.
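The packing argument above can be made concrete with a toy calculation (illustrative units and function names are mine, not from the thread): in flat 3-space, 2^n machines at fixed spacing fill a ball whose radius—and hence the light-travel time needed to collect their answers—grows like 2^(n/3), which is still exponential in n.

```python
import math

def collection_time_flat(n_bits, spacing=1.0, c=1.0):
    """Toy estimate: light-travel time needed to gather answers from
    2**n_bits machines packed at fixed spacing in flat 3-d space."""
    machines = 2.0 ** n_bits
    # Volume scales linearly with the number of machines, so radius ~ N**(1/3).
    radius = (3.0 * machines * spacing**3 / (4.0 * math.pi)) ** (1.0 / 3.0)
    return radius / c

# Going from 30-bit to 60-bit instances multiplies the gathering time
# by (2**30)**(1/3) = 2**10 = 1024: exponential in the input size.
print(round(collection_time_flat(60) / collection_time_flat(30)))  # → 1024
```

So even granting free exponential parallelism, the geometry of flat space (never mind the holographic bound, which is stricter still) already forces exponential communication time.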

(Also, in a universe with a positive cosmological constant Λ>0, as ours seems to be, the self-replicating computers will eventually get out of causal contact with one another—which means that this approach to solving NP-complete problems wouldn’t work *at all* asymptotically, not even assuming an exponential amount of time.)

Comment #14 July 31st, 2009 at 1:58 am

Vladimir: YARRRRRRRRRRRR! Many faults hast Cap’n Arrrrrrrrronson, but a love fer repeatin’ mistakes be not among them, so ‘spect ye not t’hornswoggle ‘im into ‘splaying hi’ personal ‘ffairs like th’ Jolly Roger f’rall th’world t’see. S’long as factarrrring integarrrs be harrrd, matters o’land sh’stay o-land, a’ matters o’sea a’ sea, a’ Arthur’s private doubloon-tosses be not reveal’d t’Merlin. YARRRRRRRRRRR!

Comment #15 July 31st, 2009 at 2:13 am

Elder Clown:

“Do you know of any crazy models of physics that naturally lead one to be able to solve problems in EXP?”

Interactive proofs with two competing provers? Or maybe classical physics, but where you have, like, exponential time? 🙂

“What about NP?”

So you want a physics-y model that can do NP and nothing more than that? I doubt you’ll find one, for the reason we discussed years ago: NP is neither known nor believed to be closed under subroutine calls.

“Why do the universal adiabatic quantum computer constructions use the funny clock trick while the classical equivalent doesn’t?”

Uhhhh, the same reason the quantum Cook-Levin Theorem uses the funny clock trick while the classical equivalent doesn’t?

“How do we solve the problem of the future cost of healthcare?”

Triage.

“What’s more likely to lead to progress in quantum foundations: entanglement (Bell), contextuality (Kochen Specker), or none of the above?”

Progress? In quantum foundations?

Comment #16 July 31st, 2009 at 2:36 am

Chris: Thanks! I wouldn’t want to guess whether “side projects” are a net benefit or detriment to sanity. Personally, I’ve never once felt like I was in the driver’s seat of motivation, thinking: “OK, enough quantum complexity for today. Now I’m going to write a blog entry, or start a side project, etc.” It’s just that one day I’m obsessed with one thing and another day I’m obsessed with something else. And if I end up obsessing about one topic (say, complexity) for a large enough fraction of days, it becomes a “research interest.” This is often a real problem for me (e.g. if the STOC deadline is tomorrow and my current obsession is WWII books). So I don’t think I have much to offer here—indeed, I’d love to learn from others who have better techniques for choosing what to obsess about each day.

Comment #17 July 31st, 2009 at 3:54 am

Was there any recent progress on the QMA(k) / QMA_log(k) front?

Comment #18 July 31st, 2009 at 6:26 am

What do you think of France, Paris, the French, and Parisians?

Comment #19 July 31st, 2009 at 6:27 am

My question is a Fermi question that asks for an integer and a real number:

By the way, that was a *wonderful* list of heroes … and I think that all of your heroes regarded questions like the above as being well within the purview of science/math/engineering.

Jane Goodall might possibly have been added to that list: “She saw what everyone had seen, and thought what no-one had thought” … and changed our conception of ourselves and the planet forever.

Comment #20 July 31st, 2009 at 8:01 am

“Jefferson”

Seriously?

What did he do that was great? And sufficient to outweigh the bad things? (Slaveholding and supporting the unnecessary revolutionary war.)

Comment #21 July 31st, 2009 at 8:08 am

Do you think there exists a neat characterization of problems the human brain can solve in polynomial time? I.e., if F(p, s) is a function which takes in a problem description p and the current state of the brain s, and outputs whether the brain can solve it in polynomial time or not, then the output depends solely on the problem p and not on the problems the brain has previously solved (the state of the brain s).

In other words do you think that the hard problems we solve (math proofs, etc.) are largely a matter of luck (if we represent all candidate solutions as an exponential tree, then being in the right branch by luck)?

Comment #22 July 31st, 2009 at 8:26 am

Hi Scott,

Do you believe that Jesus Christ came and saved us by dying and coming to life again?

Comment #23 July 31st, 2009 at 9:28 am

math idiot:

Dude. I don’t even believe my own religion.

Comment #24 July 31st, 2009 at 9:36 am

“Do you think there exists a neat characterization of problems the human brain can solve in polynomial time…”

If you model the brain as a classical Turing machine (which seems reasonable), then P.

If you want a more detailed complexity characterization that depends on specific properties of the brain (rather than general properties of computers), then no, I don’t think it’s going to be “neat.”

Comment #25 July 31st, 2009 at 9:44 am

What do you think of polymath (http://polymathprojects.org/)? Are you planning to take part in the next project (deterministic way to find primes)? Wouldn’t your blog be an excellent place to start such a project?

Comment #26 July 31st, 2009 at 9:49 am

“What do you think of France, Paris, the French, and Parisians?”

This YouTube clip explains it better than I could. The food there is delicious. Baguette and croissant.

Comment #27 July 31st, 2009 at 9:57 am

Martin:

“Was there any recent progress on the QMA(k) / QMA_log(k) front?”

The most recent I know about is this paper by Salman Beigi.

Comment #28 July 31st, 2009 at 10:01 am

“What did [Jefferson] do that was great?”

That’s easy! Jefferson regarded his founding of the University of Virginia as more important than his Presidency. Aye laddie … that’s true greatness!

I have examined Jefferson’s personal copies of Spinoza’s works for marginalia, but regrettably found none. It was interesting, though, that Jefferson’s English-language editions of Spinoza’s works are well-thumbed … almost worn out in fact … (which will surprise no one who has studied the moral and philosophical foundations of the US Constitution) … but the Latin editions are pristine … these books have apparently never been read, by anyone, in the 320 years since they were printed.

Comment #29 July 31st, 2009 at 10:03 am

Pablo, matey, that’s why I asked you, not Scotty-boy. You’re a pirate, what do you care? 😀 But seriously, since work doesn’t seem to lower Scott’s blogging output, I will simply draw my own conclusions from the evidence at hand! 😀

Ok, I have another question: Scott, do you think that writing blogs is detrimental or beneficial to your life and work, and mental health in general? Is it more a brain-damaging invitation to constantly flip over to the blog to see the latest comments, or a sanity-preserving refuge that lets you harmlessly blow off steam?

Comment #30 July 31st, 2009 at 10:26 am

Vladimir: This blog has both helped *and* harmed my sanity so extensively that I wouldn’t want to have to estimate which effect is greater…

Comment #31 July 31st, 2009 at 10:37 am

I will preface this comment with the disclaimer that I have only a shallow grasp (read: mostly from “pop lit”) of how quantum computers and quantum mechanics work.

I was interested by the earlier reply concerning self-replicating machines, and the increased amount of time needed to communicate in between them.

Could this be handled through entanglement? I must admit I have a very fuzzy idea of what actually “goes on” with entangled states, though, so this is a likewise fuzzy question that goes with it. Entangled particles *do* give “instantaneous” simultaneous knowledge, correct? Or is this already used in current models of quantum computation?

As another question:

In terms of interactive proofs with quantum circuits, it seems that you are still resorting to flipping coins for randomness. Is there any conceivable additional power if one allows (I don’t know how natural this is) the production of a random quantum state? That is, if A is talking to B, instead of A passing random bits that B can generate states with, A flips coins itself and generates its own quantum state, which it passes to B for measurement.

As a last note, the language of a PSPACE pirate seems mighty hard to type out. It takes me long enough to translate it back into English as it is!

Comment #32 July 31st, 2009 at 10:52 am

How about an updated list of the 10 most important and 10 most annoying problems in QC/QIT?

Comment #33 July 31st, 2009 at 11:07 am

Hi! I’m a beginner in TCS so I can’t ask anything “interesting”. And TCS is in very bad shape in my country (Croatia). I’m an undergrad student. Btw., sorry if my English is bad :-/

I bought and read Sipser’s book, which really impressed me. Now I’m trying to get through all the problems. (Although I get stuck on some “starred” ones, like 1.45 in the beginning—which I saw on a practice final in Luca’s course, and was a little demotivated because I couldn’t solve it :)).

Anyway, my question is: what do you (or your readers) suggest for further reading/studying to someone who got interested in TCS?

You already (in previous open threads) suggested “Computational Complexity” by Papadimitriou and “Computational Complexity: A Modern Approach” for complexity. And “Gems of Theoretical Computer Science” seems like a great book! What about quantum complexity, QIS…? What are good introductory books? You say that knowing quantum mechanics isn’t crucial, but can it be helpful, and how much? Do you recommend Feynman’s Lectures on Physics (part III)? Or some other book for a good introduction to quantum mechanics?

I skimmed through some historically important papers like Turing’s (1936), Cook’s (1971) and Karp’s (1972). And looked over Sipser’s paper about the status of P vs. NP. Are there some good papers worth reading which I could understand with my current knowledge?

Also, this fall I will attend a theory of information course. Do you have any suggestions about literature here?

Thanks!

Comment #34 July 31st, 2009 at 11:09 am

Hi all,

Re: QIP = PSPACE,

which if either of the two inclusions are relativizing and/or algebrizing?

Comment #35 July 31st, 2009 at 11:09 am

Another question: on the practical side of things, how close are experimentalists to creating something that resembles a quantum computer? I remember hearing a few years ago that upwards of (was it 7?) photons had been entangled in the lab—what other breakthroughs need to fall into place before something primitive can be demoed?

Comment #36 July 31st, 2009 at 11:27 am

Joe C,

I recommend learning about query/decision tree complexity. It’s in certain ways a simpler and more tractable subject than Turing machine complexity, where we have unconditional lower bounds as well as elegant theorems relating various complexity measures (deterministic, nondeterministic, randomized, quantum). Some of these theorems are good introductions to the use of polynomial approximation, which is a major theme in TCS.

The best place to start learning about this subject is the survey by Buhrman and De Wolf.

The books you mentioned are good sources, and I think combing them for exercises is a good way to learn (though you’ll have to do some reading too). I would also recommend ‘Randomized Algorithms’ by Motwani and Raghavan, which has a ton of exercises and teaches tools pertinent both to algorithms and complexity.

I also wouldn’t overlook math books that teach many of the tools used in TCS. Good examples include ‘The Probabilistic Method’ by Alon and Spencer, and ‘Extremal Combinatorics’ by Jukna. Both have explicit applications to TCS (Jukna’s book has many, and deserves to be more widely read).

Comment #37 July 31st, 2009 at 11:33 am

Andy:

“which if either of the two inclusions are relativizing and/or algebrizing?”

PSPACE ⊆ QIP is non-relativizing but algebrizing (same as PSPACE ⊆ IP).

If I’m not mistaken, QIP ⊆ PSPACE is both relativizing and algebrizing (same as IP ⊆ PSPACE).

Comment #38 July 31st, 2009 at 11:36 am

“on the practical side of things, how close are experimentalists to creating something that resembles a quantum computer?”

Sorry, I should have added that to my list of questions you’re not allowed to ask. 🙂 But if others want to argue about it that’s fine.

Comment #39 July 31st, 2009 at 11:42 am

dear pablo,

can u simulate blog-Scott and real-Scott merging? P

or should I do it for u ?

Comment #40 July 31st, 2009 at 12:17 pm

Would you mind telling Pablo to walk the plank?

Comment #41 July 31st, 2009 at 12:18 pm

Damnit! I used up my one question…

Comment #42 July 31st, 2009 at 12:24 pm

In your intro, you said “if such an environment ever existed, then it didn’t survive one or more major upheavals of the twentieth century, such as the sexual revolution, the Holocaust, or the end of the Cold War.” I’m not sure what you mean by this. How did each of those events damage the nerdy environment?

Comment #43 July 31st, 2009 at 12:25 pm

When you pick a problem to work on, how long do you think it will take to solve? And on average, how much do you overshoot/undershoot?

Comment #44 July 31st, 2009 at 12:42 pm

History question: I left that as an exercise for the reader.

Comment #45 July 31st, 2009 at 12:48 pm

I figure the Cold War affected funding (and public perception of usefulness) of science research, the Holocaust personally affected lots of scientists and caused quite a bit of migration, but what about the sexual revolution?

Comment #46 July 31st, 2009 at 12:53 pm

Thanks, Scott!

Comment #47 July 31st, 2009 at 12:58 pm

YARRRRRRR^{YARRRRRRR}/poly!

Cap’n John Preskill: Avast! Sure as plunder an’ oracle separrrrations be me first loves, YE shall walk the plank long b’fore Pablo does. An’ mayhap then we’ll learn ‘tlast whetharrrr th’qubits needed t’reconstitute ye radiate back from Davy Jones’ locker as shark-meat!

An’ me poor ill parrot TQBF: I feel yer scurvy-sickness as tho’ ’twere me own. I beseech ye, take some Advil along with yer grog an’ seeds.

BQYARRRRRRR!

Comment #48 July 31st, 2009 at 1:04 pm

Is there any well-formulated problem which the human brain can perform efficiently but which is currently intractable (not in P)?

Comment #49 July 31st, 2009 at 1:06 pm

“When you pick a problem to work on, how long do you think it will take to solve? And on average, how much do you overshoot/undershoot?”

GM: Generally speaking, I only *ever* work on problems that I think I “basically” know how to solve already, or could solve with half a day’s work. And I generally find that I undershot by at least 5000%.

Comment #50 July 31st, 2009 at 1:07 pm

Alex: If such a problem was known to humankind, don’t you think you would have heard about it?

Comment #51 July 31st, 2009 at 1:17 pm

Please forgive a complete neophyte. I was thinking, for example, that humans are much better than computers at certain types of pattern recognition, but I don’t know anything about the complexity of PR to be able to make any judgment, hence my question.

Comment #52 July 31st, 2009 at 1:43 pm

Alex: Humans do have the advantage of a billion years of natural selection, which designed incredibly clever pattern-recognition algorithms (at least for recognizing small, edible mammals, and other typical human pattern-recognition tasks) that go way beyond anything we currently know how to program a computer to do.

But crucially, the computational complexity of a problem is not about the best algorithm we know, it’s about the best algorithm that *exists*. And current science, at least, knows no reason of principle why the human brain couldn’t someday be simulated efficiently on a classical computer.

If you think there’s such a reason, the question you have to confront is what it is about the physical substrate of the brain that could *possibly* prevent its being efficiently simulated by a classical computer. For example, is the brain a quantum computer? Or can it store real numbers to unlimited precision? While I find Roger Penrose’s speculations about microtubules and quantum gravity outlandish, it’s to his credit that he at least understands that’s the sort of thing it would take to put the brain outside P.

Comment #53 July 31st, 2009 at 1:50 pm

David:

(1) No, one of the fundamental facts about entanglement (which popular writers conspire among themselves to keep from the masses) is that it does *not* allow faster-than-light communication. It lets you win certain multi-player games with higher probability than you could in the classical world (which is how we can verify it exists), but not send signals faster than light. I’ll explain this to you offline if you’re interested.

(2) The standard definition of QIP also allows for private quantumness of the verifier. Kitaev and Watrous proved (nontrivially) that any QIP protocol can be put into a special 3-round form, and the new result shows that any QIP protocol in that form can be simulated in PSPACE.

Comment #54 July 31st, 2009 at 1:59 pm

If you could preserve one book while every other book in the world was destroyed, which one would you save and why?

Comment #55 July 31st, 2009 at 2:10 pm

Zaphod: I’d preserve a book explaining how to get on the Internet. 🙂

Comment #56 July 31st, 2009 at 2:16 pm

In response to:

“Generally speaking, I only ever work on problems that I think I “basically” know how to solve already, or could solve with half a day’s work.”

Dear Pablo,

You wouldn’t want to discourage your readers from going beyond the goals that a rather pathetic, very lovable, 80-year-old pirate sets for himself now, would you?

One ought to aspire more!

Comment #57 July 31st, 2009 at 4:13 pm

I hope this question is clear enough to be understood:

What level of the various hierarchies do you think is the lowest one strong enough to contain most practical engineering problems?

Is it something low down like 3SAT or is it something high up like the top of the analytical hierarchy?

I think this is a least vaguely related to John Sidles’ concerns…

TIA

Jim Graber

Comment #58 July 31st, 2009 at 4:17 pm

Scott says in comment #16:

“[…] indeed, I’d love to learn from others who have better techniques for choosing what to obsess about each day.”

That’s easy:

(1) Implement “Getting Things Done” by David Allen. There are good techniques there for choosing what to obsess about next. Also for parallelizing all your obsessions.

(2) Implement “k minutes every day” for the long running obsessions, e.g. research interests, where k is a suitably chosen constant.

(3) Ask the I Ching if you still can’t figure out what to obsess about next. True, you have to forget about being a scientist for a while but IMHO that’s a Good Thing(tm).

I must admit that it doesn’t work perfectly unless you’re super-humanly disciplined (which I gather you’re not). I still curse the day I stumbled across your blog AND subscribed to the RSS feed because it regularly makes me ignore (1) and (2) and read half your colleagues’ blogs and most of Wikipedia instead.

Comment #59 July 31st, 2009 at 4:48 pm

Jim: Just for you, I made a handy chart that maps the sorts of computations engineers typically care about (at least, that I *imagine* they care about) to the complexity classes they naturally fit in.

- Basic arithmetic, matrix and vector multiplication, “arbitrary” operations on tiny amounts of data: TC^0
- Matrix inversion, eigenvalues and eigenvectors, SVD and related decompositions, root-finding: NC
- Linear programming, semidefinite programming, convex optimization: P
- Nonconvex optimization: NP

Based on this chart, my guess is that the vast majority of the computations engineers *actually* do today are best thought of as lying in TC^0.

That’s not to say that if P=NP (i.e., if there were a fast general method for nonconvex optimization), the consequences for engineering wouldn’t be enormous. For in that case, engineers could do all sorts of computations that they presumably don’t even *try* to do now, since it’s so obvious they’d be hopelessly intractable.

Comment #60 July 31st, 2009 at 4:57 pm

Good thing I got in before the one-question limit was imposed or I’d have to resort to sock puppets (you’d get to meet my friend Chris Paterson Bacon). Glad to hear you’re enjoying your summer and thanks(?) for the answers. Laugh of the week: google “recursion”

Comment #61 July 31st, 2009 at 5:13 pm

Thanks very much Scott

I have now hustled over to the complexity zoo and am reading the class definitions.

Jim Graber

Comment #62 July 31st, 2009 at 5:31 pm

What happens to BQP under Wick rotation?

Comment #63 July 31st, 2009 at 5:39 pm

What is your impression of the importance of the work of Adleman on DNA / membrane / natural computing? They also have a complexity theory of their own…

Comment #64 July 31st, 2009 at 5:43 pm

Scott,

A metaphysical question. Kurt Gödel believed the independence of the Continuum Hypothesis from ZF indicated that its axioms were inadequate for describing the physical world, and that sooner or later someone would find a better set of axioms that would both resolve the matter of its validity and provide a consistent basis for the rest of mathematics.

Q: Do you have an opinion on the subject?

Comment #65 July 31st, 2009 at 7:09 pm

For those that need translation from pirate language (like me), see:

http://en.wikipedia.org/wiki/Stone_soup

[Otherwise it’s just lost in translation]

Comment #66 July 31st, 2009 at 7:14 pm

Dear Scott,

You shouldn’t blame the girl showing you a good time this summer for the consequences to your blog.

Your readers should know that your literary productivity has never been better, only they don’t get to enjoy its fruits.

Yours truly,

TQBF

Comment #67 July 31st, 2009 at 7:42 pm

A positive cosmological constant does wreak havoc on physical asymptotics–in fact, if I understand correctly, it means that eventually our local group of galaxies will be causally isolated, implying a universal constant bound on all physical computations.

Blithely ignoring the cosmological constant, a separate issue is the curvature of space, which is measurably very close to flat (with theoretical predictions that it is exactly flat) but for which a slightly positive or negative value cannot yet be ruled out observationally. Negative curvature would be computationally interesting (still ignoring the cosmological constant*, and also ignoring the possibility of a negatively curved but compact topology**) because it allows you to reach an exponential volume of space in linear time, which in turn does allow you to solve NP-complete problems in polynomial time using self-replicating computers. (It also means that multi-level marketing could succeed in the sense of everyone making money.)
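The "exponential volume in linear time" point above can be sanity-checked with the standard formula for the volume of a geodesic ball in 3-dimensional hyperbolic space of curvature -1, V(r) = π(sinh 2r - 2r). A minimal sketch (the radii are arbitrary illustrative values):

```python
import math

def hyperbolic_ball_volume(r):
    # Geodesic ball in H^3 with curvature -1: V(r) = pi * (sinh(2r) - 2r)
    return math.pi * (math.sinh(2 * r) - 2 * r)

def euclidean_ball_volume(r):
    return 4.0 / 3.0 * math.pi * r ** 3

for r in [1, 5, 10, 20]:
    print(r, f"{hyperbolic_ball_volume(r):.2e}", f"{euclidean_ball_volume(r):.2e}")
# Hyperbolic volume grows like e^{2r}, so a linear travel radius reaches
# exponentially much room for self-replicating computers, while the
# Euclidean volume grows only like r^3.
```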

Scott, I think I’ve heard you suggest that the empirically observed infeasibility of NP-hard computations ought to be given equal footing with, for example, the second law of thermodynamics and the lightspeed signaling limit, in the sense that it has held up so well to repeated attacks that one should use it as a criterion for doubting theories that would circumvent it. Does this give you a basis for predicting that future observations of the density parameter Ω will continue to yield a measurement of at least 1, within the margin of error?

(I’m personally willing to make such a prediction based on the empirically observed failure of MLM as an individual business endeavor–the handful of exceptions being embarrassingly concentrated in my home county.)

* putting us in the good company of Einstein, at least after he had renounced his rather brilliant “greatest blunder”

** putting us in the good company of the cosmologists who coined the name “open” for a negatively curved universe

Comment #68 July 31st, 2009 at 7:46 pm

Just to extend Scott’s remarks regarding practical engineering, among the most significant complexity hierarchies for engineers is O(n) versus O(n^2) versus O(n^3), where n is the number of degrees of freedom in the input.

Nowadays a good rule of thumb is that practical problems have state-space dimensionality n ~ 10^7 … in the sense that plenty of interesting systems have ten million or so degrees of freedom … in which case this polynomial hierarchy broadly corresponds to problems that are “trivial” versus “feasible” versus “infeasible” to solve or simulate … which is a hierarchy of great practical concern to engineers.

This (much-studied) polynomial hierarchy seems kind of stodgy and old-fashioned if we think about it algebraically—perhaps because it’s not readily apparent that there are any exciting new algorithmic worlds to conquer?—and yet it gains new lustre if we reconceive it in coordinate-free terms, e.g., if we ask, what is the computational complexity of computing vector/1-form duals, via symplectic and/or Riemannian state-space structures, as a function of dimensionality n?

Or to turn the problem around, what geometric and/or dynamical properties does a (quantum or classical) state-space need to have, for “the musical isomorphism” (as it is sometimes called) to be computationally efficient?

The above remarks are not intended as a question for Scott, but I would be *very* grateful if someone could point to reference(s) that discuss these complexity classes from an abstract point of view.
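The trivial/feasible/infeasible reading of the O(n)/O(n^2)/O(n^3) hierarchy at n ~ 10^7 can be checked with back-of-envelope arithmetic. A minimal sketch, assuming (as a round number of my own, not from the comment) a machine doing 10^9 elementary operations per second:

```python
# Rough running times for the O(n) / O(n^2) / O(n^3) hierarchy at
# n ~ 10^7 degrees of freedom, assuming a machine that performs
# 10^9 elementary operations per second (an illustrative round number).
n = 10**7
ops_per_sec = 1e9
for name, ops in [("O(n)", n), ("O(n^2)", n**2), ("O(n^3)", n**3)]:
    secs = ops / ops_per_sec
    print(f"{name}: {ops:.0e} ops, about {secs:.1e} seconds")
# O(n)  : ~0.01 seconds        -- "trivial"
# O(n^2): ~1e5 seconds, about a day          -- "feasible"
# O(n^3): ~1e12 seconds, tens of thousands of years -- "infeasible"
```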

Comment #69 July 31st, 2009 at 8:10 pm

@wolfgang Isn’t the real question: what happens to physics when you can’t do a wick rotation? 🙂

Comment #70 July 31st, 2009 at 8:18 pm

Adam Wolbach (63): Good question! See this post for my thoughts.

Comment #71 July 31st, 2009 at 8:26 pm

“What happens to BQP under Wick rotation?”

wolfgang: As Dave alluded to, quantum computing becomes interesting precisely in the situations where Wick rotation and other similar tricks don’t work.

Comment #72 July 31st, 2009 at 8:38 pm

Tracy:

“Does this give you a basis for predicting that future observations of the density parameter Ω will continue to yield a measurement of at least 1, within the margin of error?”

Yes, and it’s an excellent point!

(Note: here I’m assuming that (1) Ω<1 implies negative curvature, regardless of the value of Λ, and (2) not only can you reach an exponential volume in polynomial time in a negatively curved space, but that volume remains causally connected. If either assumption is wrong, then the conclusion about NP-complete problems no longer follows.)

Comment #73 July 31st, 2009 at 9:06 pm

“What is your impression of the importance of the work of Adleman on dna / membrane / natural computing? They also have a complexity theory of their own ….”

If you only care what’s computable in polynomial time, then the complexity theory of DNA is exactly the same as the complexity theory of ordinary classical computers, i.e. of P or BPP. (Since DNA itself is just another classical computing model, albeit an unusually interesting one… 🙂 )

That’s not to say there aren’t great complexity questions related to DNA computing—I’m guessing there are, but it’s outside my expertise. Maybe someone else could point us to such questions?

Comment #74 August 1st, 2009 at 3:48 am

being a prof at a top cs dept hardly qualifies you to be called/call yourself a nerd. you are too far up to be nerdcore anymore.

Comment #75 August 1st, 2009 at 6:08 am

Dave and Scott,

I am too slow to understand this remark

“quantum computing becomes interesting precisely in the situations where Wick rotation and other similar tricks don’t work.”

I would have thought quantum computers will be mostly based on QED but not quantum gravity (where Wick rotation may indeed be problematic).

Comment #76 August 1st, 2009 at 6:46 am

By the way, speaking about quantum gravity and Wick rotation, another good question would have been

“does it bother you that the ‘wave function of the universe’ a la Hartle & Hawking is real-valued and not complex?”,

but you told us that only 1 question per person is allowed

(but it is somehow the same question, in very different form, anyway).

Comment #77 August 1st, 2009 at 6:49 am

wolfgang: Yes, of course quantum computing is based on standard 1920s QM (not quantum gravity or anything crazy like that).

My doofus understanding of Wick rotation is that it’s a calculational trick that sometimes lets you calculate ground-state properties of quantum systems—basically by removing the i from e^{-iHt}, solving the classical statistical mechanics problem you thereby get, and then doing analytic continuation on the answer. However, a quantum computer would be a highly dynamic quantum system, one that hasn’t by any means settled into a ground state, so this sort of trick can’t be expected to work in general.

(Indeed, if it could, then presumably quantum computers could be simulated in classical polynomial time, and there’d be a fast classical factoring algorithm.)

I’m sure Dave can elaborate and/or correct me where I’m wrong.
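The projection trick described here can be seen in a small numpy sketch (an illustration only; the two-level Hamiltonian and starting state are arbitrary made-up values): dropping the i turns oscillation into damping, so every excited-state component decays and long imaginary-time evolution lands on the ground state, while real-time evolution just rotates.

```python
import numpy as np

# Arbitrary (made-up) two-level Hamiltonian for illustration.
H = np.array([[1.0, 0.3],
              [0.3, 2.0]])
evals, evecs = np.linalg.eigh(H)
ground = evecs[:, 0]                      # lowest-energy eigenstate

psi = np.array([0.6, 0.8])                # some normalized start state

# Imaginary-time evolution e^{-H tau} via the eigendecomposition,
# renormalized: excited components decay like e^{-(E_k - E_0) tau}.
tau = 10.0
psi_imag = evecs @ (np.exp(-evals * tau) * (evecs.T @ psi))
psi_imag /= np.linalg.norm(psi_imag)
print(abs(ground @ psi_imag))             # -> essentially 1

# Real-time evolution e^{-iHt} damps nothing: the state just rotates,
# and its norm stays 1 (up to rounding) forever.
t = 10.0
psi_real = evecs @ (np.exp(-1j * evals * t) * (evecs.T @ psi))
print(np.linalg.norm(psi_real))
```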

Comment #78 August 1st, 2009 at 7:27 am

>> My doofus understanding of Wick rotation is that it’s a calculational trick

Begin with QCD; its very (non-perturbative) definition is basically provided on a lattice in the Euclidean sector (i.e. after Wick rotation). This is not about a trick which works in some cases but not in others.

I admit that QED is a bit more tricky, because the continuum limit of the lattice models does not exist due to the Landau pole, but I don’t expect quantum computing to depend on the Landau pole. We have to assume that QED is an effective theory and again Wick rotation should be well defined for all situations concerning quantum computers.

I think there is no immediate contradiction here (of course), because in a simple example Wick rotation maps a 1-particle Schroedinger equation to a diffusion equation, basically describing n classical particles.

But it would still be interesting to better understand this map from quantum theory to statistical mechanics from the perspective of complexity theory.

As my question (and if I may say so, your answer) suggests, it is currently not understood at all.

Comment #79 August 1st, 2009 at 7:57 am

see also: arxiv.org/abs/hep-ph/0302228

and arxiv.org/abs/0805.4145

Comment #80 August 1st, 2009 at 8:00 am

Wolfgang: It’s funny, I’ve overheard this conversation again and again among physicists over the past decade.

Physicist 1 (who knows nothing about quantum computing): But I don’t understand—why can’t you always simulate a quantum computer classically, by Wick-rotating it to reduce to a classical statistical mechanics problem?

Physicist 2 (who knows quantum computing): No, you don’t get it at all, because ____.

Physicist 1: Oh, right, I guess that’s true.

The trouble is that I don’t remember ____ (or clearly not well enough), and I also don’t understand the thought process that led to the original question. From my perspective, the conversation might as well have been:

Herpetologist 1: But isn’t it obvious that snakes will always prevent quantum computers from getting an exponential speedup?

Herpetologist 2: No, the key point you’ve forgotten about snake behavior is ____.

Herpetologist 1: Oh right, of course!

Comment #81 August 1st, 2009 at 9:49 am

“If you only care what’s computable in polynomial time, then the complexity theory of DNA is exactly the same as the complexity theory of ordinary classical computers, i.e. of P or BPP.”

I’m no expert, but I think I know enough to say that this statement is false. In fact, my analysis of the field of DNA Computing in the 1990s is that several researchers went down dead-end roads because, despite their brilliance, they assumed that such a similarity existed, for example that “feasible” and “P” were similar in the biomolecular setting. They’re not.

I recently guest-posted on Computational Complexity about a paper by Seelig and Soloveichik in which they propose a new theory of circuit complexity, because of extensive experimental evidence that chemical circuits do not behave like electronic circuits. I just looked for an online version of their paper and could not find one, but in brief: a circuit built from chemical species that diffuse in a solution inhabited by other species behaves fundamentally differently from a circuit that transmits information from point to point using dedicated wires.

A complexity theory of biomolecular computation does not currently exist — or at least, there are no models or complexity measures that satisfy the oh-so-glorious me. There is a physical cost to information propagation that Turing machine models do not capture. To get a sense of this, one might consider a two-head Turing machine, and a time step at which the heads are very far apart. The machine can still make a decision based on the info it is looking at in one time step. That is not a good model of physical reality.

I believe that an important step to “rehabilitating” the field of DNA Computing in the eyes of many theoretical computer scientists, will be the creation of a complexity theory that accurately captures the computational behavior of molecules.

Comment #82 August 1st, 2009 at 10:20 am

Re Wick rotation: while it is correct that the highly non-equilibrium situation of a quantum computer is not amenable to being Wick rotated, it’s a lot worse than that. Physicist 1 in Scott’s example wasn’t thinking at all. First, even if you’re only interested in properties of the system in the ground state or at low temperature, the corresponding classical statistical mechanics problem you get after Wick rotating may have a “sign problem”: it may not correspond to a problem with positive weights. Then, simulating that problem is not tractable. Second, even if you don’t have a sign problem, the analytic continuation back to real time can be tricky for even very simple correlation functions. Forget about the complicated dynamics of a quantum computer, and just ask about the correlation of two operators at two different times in the ground state. This can be computed if you have access to the correlation function at different imaginary times, but the calculation can be delicate and can be unstable to error (as would occur if you calculated the imaginary time correlation function by Monte Carlo sampling).

Again, Scott: an updated “10 most annoying and important problems” list would be nice. Or how about: what will be the 10 most important problems 10 years from now?
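The instability described above can be made concrete with a tiny numerical sketch (an illustration of my own, not drawn from any specific physics problem): recovering a real-frequency spectrum from imaginary-time data means inverting a Laplace-type kernel of decaying exponentials, and the condition number of even a small discretization of that kernel is astronomical, so Monte Carlo noise in G(τ) gets amplified enormously.

```python
import numpy as np

# Discretized Laplace-type kernel K[i, j] = exp(-tau_i * omega_j), so that
# G(tau_i) = sum_j K[i, j] * A(omega_j).  Analytic continuation amounts to
# inverting K; its condition number measures how much noise in the
# imaginary-time data is amplified in the recovered spectrum A(omega).
taus = np.linspace(0.1, 5.0, 20)
omegas = np.linspace(0.1, 5.0, 20)
K = np.exp(-np.outer(taus, omegas))

print(f"condition number of K: {np.linalg.cond(K):.2e}")
# Enormous (far above 1e10): even tiny statistical errors in G(tau)
# translate into wildly different spectra.
```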

Comment #83 August 1st, 2009 at 11:25 am

Scott,

Is analog computing of no interest at all to CS researchers? I rarely run across even a passing mention of the topic.

Comment #84 August 1st, 2009 at 1:06 pm

Wick rotation is a key tool in calculating Casimir interactions, which play a central role in many aspects of quantum systems engineering. A clear, thorough introduction to Wick rotation in this context is given in Section 37, “Temperature Greens Functions”, of Landau and Lifshitz’s Statistical Physics—highly recommended.

Matt’s physical explanation (above) for why Wick rotation is not a computational panacea is correct. A mathematical principle that makes the same point is “Asymptotic expansion and Wick rotation are not commutative operations.”

Suppose (as a toy mathematical example) that a physical system has an (exact) correlation function f(z) = 1 + Γ(z), where Γ is the gamma function (so that Γ(n+1) = n!). However, in general physics problems we don’t know f(z) exactly, and so we resort to approximating f(z) by methods like summing diagrams, stationary-phase path integration, etc.

After a thorough study of the real-axis behavior of f(z), we might deduce (correctly) that for large real z, f(z) is given to high accuracy by Stirling’s asymptotic approximation. Then we might optimistically Wick-rotate Stirling’s approximation, hoping thereby to obtain a high-accuracy approximation for imaginary z. But this strategy fails … that darn “1” term, which we could reasonably neglect in our real-axis asymptotic expansion, becomes dominant on the imaginary axis.

The bottom line is that Wick rotation methods require a thorough understanding of global analytic structure … which for complicated systems like quantum computers we usually don’t have.
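The toy example above can be checked numerically using the exact identity |Γ(iy)|² = π/(y·sinh(πy)). A minimal sketch (the evaluation points are arbitrary choices of mine):

```python
import math

# On the real axis, the Gamma term in f(z) = 1 + Gamma(z) dwarfs the 1 ...
print(math.gamma(10.0))      # 362880.0 -- the "1" is negligible here

# ... but on the imaginary axis, |Gamma(iy)| is exponentially small,
# by the exact identity |Gamma(iy)|^2 = pi / (y * sinh(pi * y)).
def gamma_abs_on_imag_axis(y):
    return math.sqrt(math.pi / (y * math.sinh(math.pi * y)))

print(gamma_abs_on_imag_axis(10.0))   # ~1e-7 -- now the "1" dominates f
```

So an approximation that keeps only the Stirling term is excellent for large real z, but misses essentially all of f at z = 10i.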

A terrific article (IMHO) that explores the general question “What are the informatic properties of the analytic structures that arise in large-scale quantum computation and quantum simulation?” is Schuch and Verstraete’s Interacting electrons, Density Functional Theory, and Quantum Merlin Arthur.

For sure, there are a lot of interesting questions in this area that have both practical and fundamental consequences.

Comment #85 August 1st, 2009 at 2:16 pm

Tracy wrote:

“a separate issue is the curvature of space, which is measurably very close to flat (with theoretical predictions that it is exactly flat) but for which a slightly positive or negative value cannot yet be ruled out observationally”

Theoretical predictions for exact flatness? I know of no such thing, nor can I imagine what would motivate them. The usual understanding is that inflation led to approximate flatness. It’s a dynamical thing, so there’s no reason to expect exactness.

Comment #86 August 1st, 2009 at 3:58 pm

Aaron (#80): You’re absolutely right that DNA computers would work differently from Turing machines, in the same way that (say) Turing machines work differently from cellular automata work differently from RAM machines.

However, your comment conspicuously failed to make the case that switching to DNA could change the notion of polynomial time. Sure, we lack a good theory of how to model chemical circuits. But we do know what sort of thing it takes to have a chance of going beyond P: basically, you need to change the underlying framework of physics (from deterministic to probabilistic, or classical to quantum, or discrete to analog).

Changing your model of spatial locality, or the number of spacetime dimensions, or the types of parallelism allowed, or the type of communication between computing elements (electronic vs. chemical): the class P is incredibly robust to all of those things.

(One exception: As Tracy pointed out, a massively parallel computer in a hyperbolic spacetime geometry really could do an exponential amount of computation in a polynomial amount of time—indeed, it could solve any problem in PSPACE. But rather than pointing to any brittleness of P, that strikes me as another illustration of just how much you need to mess with the underlying physics to go beyond it.)

Comment #87 August 1st, 2009 at 3:59 pm

John Sidles,

>> The bottom line is that Wick rotation methods require a thorough understanding of global analytic structure

you will find that the 2nd ref. I provided above deals with this issue and suggests a way to perform the analytical continuation without assuming global information on the function in the entire complex plane.

But my original question was not if/when/how Wick rotation can be a useful tool.

If one deals with a quantum computer of finite size, performing a computation in finite time (so that we don’t have to worry about subtleties of the asymptotics), and if we can neglect quantum gravity, there is in principle a 1:1 map between quantum theory and stat. mech., as explained in the 1st ref I provided.

My simple question is what this 1:1 map does to BQP (and I should make it clear that I am not claiming that this disproves quantum computing or anything like that).

Comment #88 August 1st, 2009 at 4:00 pm

@Aaron Sterling,

Do you have a blog? I am extremely interested in Natural Computing (DNA/membrane/molecular etc.) and the ideas that you mention in your post.

I have seen some dissertations on solving hard problems in such models. Is there experimental evidence for this? And is each step “fast”?

Comment #89 August 1st, 2009 at 4:05 pm

@ Scott and others:

Is there a formulation of problems related to programming? For example, dividing a program into the best possible components/objects must be hard. Can commenting a program actually be useful? In other words, is it possible to write a succinct description of a complicated string? What assumptions would one need to make it possible?

These strike me as nice questions … have they been explored?

Comment #90 August 1st, 2009 at 4:18 pm

Jack:

“Is analog computing of no interest at all to CS researchers?”

As a practical engineering proposal, analog computing largely failed decades ago, because of sensitive dependence on initial conditions (i.e., chaos) causing tiny errors to build up over time and swamp the computation. It still does have a niche: any time anyone does an experiment in a situation where all the underlying physical laws and the initial state are known, and the question is just how they play themselves out (e.g. a wind tunnel), you can think of them as doing an analog computation. But digital computers got so good that they very often replaced analog simulations even for that sort of problem.

If, hypothetically, you had an analog computer with no error at all, which could store and manipulate real numbers to infinite precision and also read out individual bits of their binary representations, then it’s known that you could solve PSPACE-complete problems in polynomial time, or (in certain continuous-time models) even uncomputable problems. However, there are quantum gravity reasons to believe that such a “perfect” analog computer is fundamentally impossible (for more see my survey on NP-complete problems and physical reality).

On the other hand, as a useful theoretical notion, analog computing lives on—just not under that name. Try googling “numerical analysis,” “algebraic complexity,” or “Blum-Shub-Smale model,” and you’ll see that computation over the reals and complex numbers is a thriving part of CS with fundamental open problems (which even include analogues of the P vs. NP question over R and C).

Comment #91 August 1st, 2009 at 4:32 pm

In case I gave the impression that I thought biomolecular computation could feasibly do more than P, I’d like to follow up and say that I think it can do strictly less. M. Cook et al.’s recent paper, “Programmability of Chemical Reaction Networks” makes the case that the class LOGSPACE captures what can be feasibly computed by stochastic chemical reaction networks. For hand-wavy “intuitive” (i.e., dopey) reasons of my own, I believe more is possible, and “feasible” computation lies somewhere strictly between LOGSPACE and P.

The notion of “work” in distributed computing captures part of what is missing from the Turing machine model, as it counts how much information needs to be transmitted from point to point through channels. Combining some variant of “work” with more classical time and space complexity measures would be an advance over the tools we are currently using to model molecular systems. And, in particular, since it would cost more resources to perform certain computations, we could feasibly compute less than if we charged for time and space alone.

I don’t have a blog. I’d suggest looking at the publications of Erik Winfree and Damien Woods if you are interested in molecular programming and membrane computing.

And, Scott, I’ll be at the Barriers workshop. It will be a pleasure to meet you.

Comment #92 August 1st, 2009 at 4:35 pm

“If one deals with a quantum computer of finite size, performing a computation in finite time (so that we don’t have to worry about subtleties of the asymptotics) … there is in principle a 1:1 map between quantum theory and stat. mech. … My simple question is what this 1:1 map does to BQP”

Wolfgang: We might have our problem right there. If we’re talking about quantum computing at all, then “subtleties of the asymptotics” (i.e. how things scale as the input size / number of particles goes to infinity) are the entire game. Sure, a quantum computer of size O(1) can be mapped to a statistical mechanics problem of size O(1). But it can also be mapped to a classical computation of size O(1), by just writing out the whole wavefunction!

In other words, throw away asymptotic considerations, and you throw away the entire subject matter—and in particular, the concepts needed to define a complexity class like BQP.

Comment #93 August 1st, 2009 at 4:37 pm

Aaron: Ah! Thanks for clarifying. Yes, I can certainly believe that biomolecular computing (or at least, some reasonable idealization thereof) could do strictly less than P. (I was just trying to be generous, and take the closure of whatever it can do under P-reductions. 🙂 ) I look forward to meeting you at the Barriers workshop, and would be happy to talk about it then!

Comment #94 August 1st, 2009 at 4:44 pm

… and there is even a (highly entertaining, remarkably scholarly, and pretty darn funny) science-fiction novel that is set in (what amounts to) the Blum-Shub-Smale universe … Rudy Rucker’s White Light.

If your idea of good science fiction involves lengthy verbatim quotations from Kafka’s, Gödel’s, and Cantor’s more obscure writings, then White Light is the novel for you. 🙂

Comment #95 August 1st, 2009 at 5:07 pm

Wolfgang asks:

“My simple question is what [the stat-mech / Hamiltonian] 1:1 map does to BQP?”

My simple guess at the answer is pretty much the same as Scott’s (as I understood it): the exact map is so fine-grained as to constitute (in effect) a look-up table oracle for BQP.

An interesting middle ground is to ask whether some coarse-grained approximation to this map—a coarse-grained map that a PTIME computation could implement—might serve useful purpose(s)?

Pursuing this line of thought—and working through immensely many technical details—might lead us to re-discover the Kohn-Sham theorem and the density functional theory (DFT) framework … where the condensed matter physicists and ab initio quantum chemists are presently hard at work … for details see the above-mentioned Schuch/Verstraete article Interacting electrons, Density Functional Theory, and Quantum Merlin Arthur.

Comment #96 August 1st, 2009 at 8:26 pm

Scott,

Could you comment on the complexity of the following question?:

What is the computational complexity of X?

Where X is: What is the computational complexity of Y?

Thanks!

Comment #97 August 1st, 2009 at 8:37 pm

@RobertW:

That is probably undecidable in general by Rice’s theorem.

Comment #98 August 1st, 2009 at 8:57 pm

Joe,

And what if Y is an interesting and open problem, that has been widely studied? Is that a useful question to ask?

Comment #99 August 1st, 2009 at 8:57 pm

“Sure, we lack a good theory of how to model chemical circuits. But we do know what sort of thing it takes to have a chance of going beyond P: basically, you need to change the underlying framework of physics (from deterministic to probabilistic, or classical to quantum, or discrete to analog).”

Just for my own clarification would you comment on the following:

With a different type of computer (molecular, membrane, DNA, whatever else), although the notion of polytime obviously won’t change, it is possible that the meaning of feasible, for all practical purposes, could change. For example, it could so happen that practically all instances of SAT that one cares about could be solved by a giant vat of chemical goo, in a few days, because we can now afford to have a really large number of “processing elements.”

Comment #100 August 1st, 2009 at 11:26 pm

My limited understanding is that when you replace t by i*t, what you are doing is replacing Quantum Mechanics by a classical Markov process (not just Statistical Mechanics, or a Lattice Gauge theory). The replacement theory obviously doesn’t have interference, but it does retain certain quantum aspects like discrete energy levels. Miraculously, you can “often” rotate time from the real axis to the imaginary one, and if you take into account all the poles and branch cuts that you meet along the way, you can “often” retrieve QM. As long as you don’t perform any infinite limits in the interim, I think you have a “good chance” (I’m so rigorous :)) of getting QM back. Of course deforming a contour of integration from the real to the imaginary axis involves infinite limits already.

Comment #101 August 2nd, 2009 at 5:15 am

Joe (#98): Well, I care about SAT instances with many millions of clauses and variables, that encode (for example) that there exists a proof of the Riemann hypothesis in ZFC with at most 10^7 symbols. And I believe that, if you want a general-purpose method to solve instances of that size, then your vat of goo will have to be larger than the observable universe.

But I admit it’s just a belief, which might be hard to prove even supposing we had a proof of P≠NP (since you’d have to prove NP⊄P/poly, and by a circuit lower bound with reasonable constants).

Comment #102 August 2nd, 2009 at 12:27 pm

Joe, chemical computers transfer information by diffusion, which is far slower than the motion of electrons. It seems unreasonable to expect any speedup from such technology. It’s true that, in the 1990s, several researchers claimed this might provide an avenue to solve NP-Complete problems. Those claims now seem exuberant. The importance of molecular programming comes not from speedup but from the scale at which objects can (potentially) be designed.

Here’s a link to the video of a plenary talk Erik Winfree gave at ASPLOS 2008 about the current state of molecular programming, and his vision for the future. I think it’s an excellent introduction to the subject.

http://dmcc.acm.org/pres/?query=/dmcc///confdata/asplos2008/2008-03-03_09h06

Comment #103 August 2nd, 2009 at 1:16 pm

Hi Scott,

I recently watched some History channel documentary about “The Universe”: an episode titled “Beyond Big Bang”. It was quite nice for non-experts to understand the history of human efforts in theorizing physics of the universe.

Since life evolved after Earth formed, I would like to know how you would distinguish the efforts of biologists from the efforts of physicists, in a broad sense. I mean they are certainly quite different on specific levels, but what would the broadest-level distinction be?

Thanks Scott!

Best,

Nagesh

Comment #104 August 2nd, 2009 at 3:03 pm

If NP were in P/poly, polynomial-sized circuits would exist that solve SAT instances up to given lengths. Suppose we had a hardwired circuit for some ten thousand variables. Would that already be very useful? Obviously it wouldn’t be enough to find proofs of significant lengths. Put differently, SAT instances of what length would need to be solvable for what (amazing) applications?

Comment #105 August 2nd, 2009 at 3:09 pm

Nagesh: Physicists study the “systems code” of the universe, while biologists study one of the most interesting higher-level phenomena that that code produces: namely, natural selection of self-reproducing entities. (On earth, all naturally-occurring such entities seem to be based on DNA/RNA, but there’s nothing inherently necessary about that.)

You could remove all life from the universe and physics would still be interesting. Conversely, you could completely change the underlying physical laws and probably still have natural selection that would be interesting for biology. In that sense, neither subject is obviously “deeper” than the other—and I say that despite being a hardcore reductionist.

Comment #106 August 2nd, 2009 at 5:23 pm

At his blog, Richard Lipton suggests that Andrey Kolmogorov may have believed that every problem in P can be described by a family of O(n)-sized circuits: http://rjlipton.wordpress.com/2009/02/12/is-np-too-big-or-p-too-small/#more-86

Leaving aside whether K. actually believed that, is it completely crazy? Are there any obvious problems that would be resolved if you assumed the statement was true?

Thanks!

Comment #107 August 2nd, 2009 at 5:34 pm

>Conversely, you could completely change the underlying physical laws and probably still have natural selection that would be interesting for biology. In that sense, neither subject is obviously “deeper” than the other—and I say that despite being a hardcore reductionist.

I think you may have captured the essence of why my appreciation for biology went up so dramatically after reading Dawkins — from whose point of view biology is a sort of abstract science of organized complexity, rather than merely a discipline that studies the actual furry things and such that contingently happen to be here.

Comment #108 August 2nd, 2009 at 5:42 pm

“You could remove all life from the universe and physics would still be interesting.” No it wouldn’t, unless rocks have interests 🙂

Comment #109 August 2nd, 2009 at 8:03 pm

“is it completely crazy?”

Ed (#105): In my opinion, yes. But it’s a measure of our ignorance that we still can’t prove Kolmogorov’s conjecture false.

Comment #110 August 2nd, 2009 at 8:50 pm

Thanks Scott. So I can probably conclude Biology is a subset of Physics. I know it’s only one question per person but I just want to ask if my conclusion of your answer is right 🙂

Comment #111 August 2nd, 2009 at 10:48 pm

Hi Scott, I’m just wondering if you could tell us which interpretation of QM you favor, and why. In particular, I’d like to know what you have to say about the arguments for many worlds (I’m thinking of David Deutsch here … I thought Ockham’s razor did away with many worlds until I read his book … now I don’t know what to think)

Comment #112 August 2nd, 2009 at 11:44 pm

Scott,

Maybe this will be a more interesting question for you:

What is the role of information loss in one-way functions? Intuitively, I perceive that information is just getting “thrown away” when we put a string into a function and get a result that is not easily invertible.

Comment #113 August 3rd, 2009 at 12:39 am

@Wolfgang: “does it bother you that the ‘wave function of the universe’ a la Hartle&Hawing is real-valued and not complex?”

It doesn’t bother me because real-valued quantum computing is equivalent in power to complex quantum computing! But it bothers me because complex numbers appear so much more natural than real numbers 😉

(I think your question about Wick rotations is interesting…part of my response would be what Matt said. On the other hand I’d love to understand better the exact relationship. See “stoquastic Hamiltonians” for some interesting work in how the sign problem might be related to quantum computational complexity.)

Comment #114 August 3rd, 2009 at 1:47 am

Ed points out that

“Richard Lipton suggests that Andrey Kolmogorov believed that every problem in P can be described by a family of O(n)-sized circuits”

… and this nicely relates to the current topic of the Fortnow/Gasarch blog, namely, “Is strong AI feasible?” Because if Kolmogorov is right, then mightn’t neural-net AI’s be generically more powerful than any Turing/von Neumann machines that had an algorithmically compressible program? … which would be a big deal for n ~ 10^11 (the number of neurons in a human brain).

Of course, no one would know how to program these neural nets … precisely because their operations would not be rule-based … so the net-bots would have to blunder around, programming themselves by trial-and-error heuristics … hmmm … it’s by no means clear that these blundering, non-logical life forms would deserve the name “sapient”.

For example, they could probably be trained to check mathematical proofs … but as for creating mathematical proofs, they would have to do that by heuristic-driven trial-and-error … behavior that would hardly be worthy of the name “rational”. 🙂

Comment #115 August 3rd, 2009 at 2:25 am

@RobertW:

Don’t know if it is a useful question to ask. But you’d have to define the decision problem (a class of instances) carefully. In your formulation, if X = “SAT is in P”, then it is not clear what the class of instances Y is.

Comment #116 August 3rd, 2009 at 2:28 am

@Aaron Sterling:

Thanks for the link. I am very interested in membrane and cellular computing as a way of studying biological systems and exploiting them computationally. The model is slightly different and there are the equivalent complexity classes (PMC and NPMC if I remember correctly).

If I wanted to read about molecular computing, what paper/book/journal outlines the basic definitions?

Comment #117 August 3rd, 2009 at 6:37 am

Dave suggests:

“See ‘stoquastic Hamiltonians’ for some interesting work in how the sign problem might be related to quantum computational complexity.”

Dave, thank you for that terrific tip … it led me to an excellent review/presentation by Barbara Terhal titled “The complexity of simulating quantum systems on classical computers” (which can be found on-line) … this talk is the single best short summary I have seen of our present (rapidly evolving) complexity-theoretic understanding of issues like stoquasticity … I will mention also that pretty much any talk or article with Terhal’s name on it is likely to be excellent! 🙂

One key physical idea that is not mentioned in Terhal’s talk, but that has been mentioned on this forum by Tracy Hall and by Scott, is the idea that negatively-curved physical state-spaces offer significant computational advantages, in the sense that (as Scott says) “not only can you reach an exponential volume in polynomial time in a negatively curved space, but that volume remains causally connected.” A mathematical link between Hall’s and Scott’s comments and Terhal’s stoquastic review is that the manifold of tensor network quantum states—states that are “physically relevant” by Terhal’s definition—has precisely this feature of having everywhere-negative Ricci-Kähler curvature.

Thus, we may or may not live in a physical universe whose negatively-curved state-space geometry is well-adapted to efficient computation (this will be determined by cosmological observation). But very conveniently, our simulations of quantum systems do take place in a mathematical state-space that has precisely this geometric property.

Comment #118 August 3rd, 2009 at 7:50 am

“I’m just wondering if you could tell us which interpretation of QM you favor”

I don’t like any of them. However, I’m happy to admit that Many-Worlds was a big improvement over the deliberate obscurantism that held sway beforehand.

The idea that Many-Worlds is ruled out by Occam’s Razor is wrong. Occam’s Razor, properly understood, is about the Kolmogorov complexity of a theory, not about how many entities the theory postulates.

Still, there’s a real question about what the goal of an interpretation is. We all know how to calculate with QM. If the goal is to stimulate new research ideas, then one could argue that Copenhagen (obscure as it was) “led to” things like the uncertainty relation, Bohmian mechanics “led to” the Bell Inequality, and Many-Worlds “led to” quantum computing, so they were all successful in that respect. If the goal is to enlighten everyone about the true nature of QM so they stop arguing about interpretations, then clearly none of them were successful.

Comment #119 August 3rd, 2009 at 8:59 am

Hi Joe,

Paul Rothemund’s Ph.D. thesis is probably the best one-stop-shop for algorithmic DNA self-assembly; it is the closest you’ll get to a set of agreed-upon basic definitions. If your interest is primarily in computational models of cellular interactions, though, I’d suggest the work of Luca Cardelli.

Comment #120 August 3rd, 2009 at 9:43 am

Dear Scott,

If it is found that P=NP, do you think the governments of the world should ban computers so as to ensure the continued relevance of human creativity?

Comment #121 August 3rd, 2009 at 1:47 pm

Charles: No.

If P=NP, humans could run the algorithm (albeit much more laboriously), not just computers.

And even if P≠NP, human creativity will probably still become obsolete eventually.

Comment #122 August 3rd, 2009 at 4:19 pm

I’m curious what you think about Lance Fortnow’s recent claim that human brains can’t be simulated by Turing machines. It seems to me that a reasonable stance to take is that the power of human reasoning lies in a small set of robust and flexible heuristics honed by evolution that are not yet understood. So what do you think Lance means by “self-awareness” and how does it translate into a complexity assumption / testable hypothesis?

(I’m not usually one to ramble and speculate, but it occurs to me that evolution could be thought of as a pre-computation that equips the human brain with algorithms with good constants and that it is infeasible to find such algorithms otherwise, in some sense; in other words the heuristics of the human brain may not be strong enough to rediscover themselves. But I’m having trouble making this precise.)

Comment #123 August 3rd, 2009 at 4:27 pm

In their STOC99 paper “Undecidability on Quantum Finite Automata”, Amano and Iwama show that the emptiness problem is not solvable for 1.5-way Quantum Finite Automata (1.5QFA’s). This problem is solvable for 1-way nondeterministic stack automata (1SA’s). They say “Our unsolvability result means that 1.5QFA’s can accept some languages that cannot be recognized by any 1SA’s.”

Why does the unsolvability result mean that?

Comment #124 August 3rd, 2009 at 4:47 pm

Maybe this is a stupid question, but I can’t figure it out. I haven’t asked around, since I don’t want to appear stupid; so now, under the cover of anonymity, I ask, since you are entertaining any kind of question…

Why are the weights of physical objects additive? I mean, all we know is that if we lump two objects together, the whole is heavier than the part. But what allows us to say that for objects assigned weights a and b, the weight of their composite is w(a,b) = a + b, and not some other function satisfying merely w(a,b) > a and w(a,b) > b? We can’t appeal to measuring devices, since they are constructed assuming that this is the case.

Comment #125 August 3rd, 2009 at 5:01 pm

Keep in mind, as mysterious as human consciousness seems to be, there is nothing so far that shows that the human mind can’t be simulated on a Turing machine.

Comment #126 August 3rd, 2009 at 5:02 pm

Anon: No, measuring devices are not constructed assuming weights are additive. Try a little experiment: put object A on a scale, then object B, then both of them. I believe you’ll find that w(A+B)=w(A)+w(B), even though logically it could have been otherwise. That weights are additive is not an assumption; it’s an empirical finding about how the world works.

(One can of course explain this finding within the framework of a physical theory, such as Newtonian mechanics or general relativity.)

Comment #127 August 3rd, 2009 at 5:03 pm

Reader from Istanbul: Your question is way too technical for a forum like this one … why don’t you email the authors?

Comment #128 August 3rd, 2009 at 5:53 pm

Hi Scott,

Thanks for the reply. I haven’t really understood your answer. Suppose the device we are using is a spring balance. When we put on the two weights, we get two corresponding distances, and of course the distances are additive. But to infer that the weights are additive, the relationship between the two has to be linear (Hooke’s Law). But how do we establish Hooke’s Law in the first place if we don’t assume the weights to be additive?

Comment #129 August 3rd, 2009 at 6:01 pm

Anon, the first—and still the most famous—experiment on additivity of mass (and its parent law, conservation of mass) is Jan Baptist von Helmont’s “Willow Tree Experiment” of around 1620 … the account of how Helmont’s experiment led to the scientific concept of “mass” as an additively conserved quantity is well worth reading.

Like many people on Scott’s “heroes” list, von Helmont encountered significant opposition from the religious and secular authorities of his era …

It is interesting that, in comparison to von Helmont’s prosecution for heresy by the Spanish Inquisition, today’s mathematicians and scientists face minimal persecution … it’s not clear (to me) whether this is because the ideas of 21st century mathematics and science are less threatening, or because today’s authorities are more enlightened.

Nowadays, what does a scientist have to do to get persecuted? Is the era of Lysenkoism over for good? Have we entirely run out of dangerous ideas?

Comment #130 August 3rd, 2009 at 6:04 pm

OK, here’s another try at using the one-question ticket:

Can there be a quantum Turing machine with a one-way tape head? If so, what would this do to reversibility, unitarity, etc.? Would it be a quantum-mechanically valid model?

Comment #131 August 3rd, 2009 at 6:19 pm

One other historical fact worth mentioning, Anon, is that lever balances are a much older technology than spring balances (and even today lever balances are far more accurate than any spring balance).

So how could mass additivity be checked (empirically, and to high accuracy) with a lever balance having variable arm lengths? The answer is not too hard to work out.
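One way the worked-out answer might go, as a sketch assuming only the standard lever law (equal products of weight and arm length at equilibrium); all symbols here are introduced purely for illustration:

```latex
% Lever law: at equilibrium, w_1 \ell_1 = w_2 \ell_2.
% Balance A, then B, then A and B together against one reference R,
% recording the arm lengths at equilibrium:
\begin{align*}
w(A)\,\ell_A &= w(R)\,\ell_1, &
w(B)\,\ell_B &= w(R)\,\ell_2, &
w(A{+}B)\,\ell_{AB} &= w(R)\,\ell_3.
\end{align*}
% Eliminating the unknown w(R), additivity w(A+B) = w(A) + w(B)
% becomes a prediction purely about measured arm lengths:
\begin{equation*}
\frac{\ell_3}{\ell_{AB}} \;=\; \frac{\ell_1}{\ell_A} + \frac{\ell_2}{\ell_B}.
\end{equation*}
```

The lever law itself can be checked beforehand with a single object moved between different arm positions, so no additivity assumption sneaks in; additivity then becomes a falsifiable relation among six measured lengths.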

It’s amazing to conceive that (very plausibly) there are readers of this blog who have never even seen a lever balance—except as the doohickey that statues of Justice wave around—much less used one.

Comment #132 August 3rd, 2009 at 6:24 pm

What is the ratio of hard vs. “known easy” instances for an NP-complete problem like SAT? For example, in factoring I can say that 90+ percent of the time I =know= I can solve the problem in polynomial time (wherever the dividing line is nowadays). That 90 percent number converges to some limit as the numbers get larger, and depends on what kind of cutoff I make for how long I let my algorithm run. Let’s just set it arbitrarily. The other instances of factoring I might or might not be able to do inside my cutoff with a different algorithm, but since I can get over 90% easily, for this purpose I don’t care.

Is there a similar limit for SAT or 3-SAT? If so, do all NP problems have a similar ratio using some arbitrary hardness level (e.g. n^2)? I thought at first they would, since one could always transform the problem to the better formulation, but in general would the lookup time to FIND the best NP formulation screw that up?

This is motivated by the people with SAT solvers saying that they can basically solve all realistic problems fast anyway. If so, it seems like the density of hard instances among NP-complete problem instances is very low. It seems like this is almost an experimental topic in TCS…
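A toy version of the experiment gestured at above might look like this: random 3-SAT instances at a fixed clause-to-variable ratio, with an arbitrary backtracking-node budget as the “easiness” cutoff. All parameters here are illustrative choices, not anyone’s established setup.

```python
import random

def dpll(clauses, budget):
    """Naive backtracking SAT search on a list of integer-literal clauses
    (DIMACS style: 3 means x3, -3 means NOT x3). Returns 'sat', 'unsat',
    or 'gave up' once more than `budget` search nodes are explored."""
    state = {'nodes': 0}

    def recurse(cls):
        state['nodes'] += 1
        if state['nodes'] > budget:
            return 'gave up'
        if not cls:
            return 'sat'            # no clauses left: satisfied
        if any(not c for c in cls):
            return 'unsat'          # empty clause: contradiction
        lit = cls[0][0]             # branch on first literal of first clause
        for choice in (lit, -lit):
            reduced = [[l for l in c if l != -choice]
                       for c in cls if choice not in c]
            result = recurse(reduced)
            if result in ('sat', 'gave up'):
                return result
        return 'unsat'

    return recurse(clauses)

def random_3sat(n, m, rng):
    """m random 3-clauses over n variables."""
    return [[v if rng.random() < 0.5 else -v
             for v in rng.sample(range(1, n + 1), 3)]
            for _ in range(m)]

rng = random.Random(0)
n, ratio, budget, trials = 16, 4.2, 500, 25
easy = sum(dpll(random_3sat(n, int(ratio * n), rng), budget) != 'gave up'
           for _ in range(trials))
print(f"decided within budget: {easy}/{trials}")
```

With a real solver (unit propagation, clause learning) the “easy” fraction would of course be far higher; the point is only that, once a cutoff is fixed, the ratio becomes a measurable quantity.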

Comment #133 August 3rd, 2009 at 8:34 pm

Hi Scott,

It has been known for over a decade that if the time-evolution in quantum mechanics were nonlinear, then this nonlinearity could be exploited to solve NP problems in polynomial time (Abrams & Lloyd, 1998).

My question is whether various aspects of physical systems whose best description is in terms of a nonlinear Quantum Field Theory could be exploited to solve NP problems in polynomial time. Examples: the classical Yang-Mills equations for non-abelian gauge fields in the standard model involve nonlinear time evolution, and (in a substantially different vein) nonlinearity is also an important emergent property of weakly interacting Boson systems, e.g. the Gross-Pitaevskii equation.

I am aware that, at least on some mathematical level that I cannot articulate (I’m just a grad student), none of these “nonlinear Schrödinger equations” or any QFTs contradict the underlying linearity of quantum mechanical time-evolution — any comments or references you have to illuminate this point would be greatly appreciated — I still can’t get over the crude connection between QFT, quantum nonlinearity, and NP. Even if the nonlinearity is only an emergent property, as in the GP equation, perhaps this could be somehow exploited as a practical method of solution to NP problems, if not a theoretically/asymptotically valid solution.

Comment #134 August 3rd, 2009 at 9:20 pm

Reader from Istanbul (#128):

“Can there be a quantum Turing machine with a one-way tape head?”

Sure, why not? Reversibility means that the transformations preserve information (and hence could be reversed in principle)—not that they actually will be reversed. A one-way quantum (and unitary) Turing machine would simply start in a special initial state and proceed from there.

Comment #135 August 3rd, 2009 at 9:36 pm

“Sure, why not? Reversibility means that the transformations preserve information (and hence could be reversed in principle)—not that they actually will be reversed. A one-way quantum (and unitary) Turing machine would simply start in a special initial state and proceed from there.”

Another answer is that reversibility is not essential to the definition of a quantum computer or a quantum Turing machine. Reversibility, which implies unitarity, is important for many algorithms, but that’s not the same thing. You can define a QTM using quantum operations instead. Indeed, if you want to examine the space complexity of a quantum algorithm, then quantum operations are the correct model and unitary operators are not.

Also in the quantum operations formalism, BQP contains BPP basically by definition, the theorem that BQP is self-low has a two-line proof, and the theorem that BQP is in PP is also shorter and clearer.

Comment #136 August 3rd, 2009 at 11:48 pm

Scott, why does Pablo have a Spanish name but speaks with an Irish accent?

Comment #137 August 4th, 2009 at 12:28 am

In a model consisting of a machine composed of multiple TMs where each TM is allowed to read a subset of the input bits and either output an answer and terminate or allow the process to continue to another random TM, and where the TMs are not allowed to share information, what happens to P, NP and EXP? For example, in a search problem, each TM would be allowed to look only at a subset of the bits and either output that it found a solution or let the process repeat. For NP-Complete problems does this restriction (for example that of looking at only ~ (n/2)(n/2-1)/2 bits at a time) automatically impose an exponential run time requirement or would there still be a chance of solving NP-Complete problems in polynomial time in this model? In this model how much of P would now require exponential time?

I know these are very broad questions; I’m just putting them out there in case there are some answers.

Comment #138 August 4th, 2009 at 1:39 am

“why does Pablo have a Spanish name but speaks with an Irish accent?”

Sharrrrrrron: Avast ye, insolent wench! Is’t not plain a’ th’ skies ovarrrrr Oberwolfach that, sur’ as a nondeterministic Tarrrrring machine be a rarrrrrr sight on th’open seas, so too Cap’n Pablo be a figment o’ th’rum-soaked complexity-theoret’c piratical imag’nation, creat’d w’out a great deal o’ farrrrr’thought? ARRRRRRRRR … I mean, adiós, señorita!

Comment #139 August 4th, 2009 at 6:21 am

Scott, thanks for creating and maintaining this blog. Whatever you choose to do, I really hope you will write some new posts from time to time.

Q:

Let’s say you have a deterministic evolution equation for a certain system, but there is continual information loss (for instance, dissipation in fluids), so the initial condition of the system is eventually forgotten by the system itself. The system is still deterministic in the usual sense, but you cannot reverse the calculation, i.e., take the current state as initial condition, track backwards in time using the evolution equation, and get back to the original initial condition. (An irreversible process.)

What level of randomness is implied by irreversibility? After a long time, is the value of the initial condition important at all?

To some extent it seems to be: if the system has invariants like conserved quantities, then the order of magnitude of the initial condition surely is important. But could the details of the initial condition be important too, if you think of non-linear effects? Does one have to apply statistics in order to say something useful about a system of this nature?

I don’t know if this question is related to anything other than physics, but if it is in any way related to complexity and you feel you can comment, then great.

Comment #140 August 4th, 2009 at 12:21 pm

Why does Pablo have a Spanish name but speaks with an Irish accent?

I always thought that it’s because he’s a confused old pirate with the super-ego of a Latin lover…

Comment #141 August 4th, 2009 at 6:25 pm

Is there a good description of what a quantum computer is that is accessible to someone whose knowledge of quantum mechanics has mostly rusted out, but whose knowledge of what a Turing Machine is has not rusted out? If so, where?

Comment #142 August 4th, 2009 at 7:39 pm

Did you read any of Neal Stephenson’s Baroque Cycle? If so, what did you think? I read and did not much enjoy the first volume, Quicksilver. Why? It was too long, the actual story meandered and lacked direction, and the detail was overwhelming. Plus too much politics. But I’m guessing you enjoyed it a lot more than I did if only because of its treatment of the Royal Society.

Comment #143 August 4th, 2009 at 10:23 pm

What the heck is quantum logic and is it interesting from a CS or mathematical point of view? I couldn’t make any sense of the wikipedia article about it.

Comment #144 August 4th, 2009 at 10:57 pm

To what extent do you think recreational substance use can affect someone’s output as a scientist? You have been in academia for a while now … how prevalent would you say drug/alcohol use is in the top tier of researchers? Are they all abstainers? I’m just curious because of something I heard about Von Neumann, that he was a big drinker … and then there’s that Einstein quote: “Beer makes a man stupid and lazy” … but I don’t know if he ever actually said that.

Comment #145 August 4th, 2009 at 11:18 pm

Hey Scott, you claim you’ve been “enjoying” life of late; I wonder in exactly what ways you have done so; and whether this recreation has become possible because you’ve learned something in particular about yourself or the world.

Comment #146 August 5th, 2009 at 12:34 am

It is now the case (and has been for many years) that even if you only care about deterministic complexity classes and algorithms, you need to KNOW about randomness. Nobody serious would dispute this. I don’t think even people that are unserious would dispute it.

Fill in the following sentence: By the year XXXX nobody serious will dispute the claim that quantum complexity is important for your work even if you only work with classical complexity classes and algorithms.

Don’t just give me a year; give me your thoughts on this.

bil g.

Comment #147 August 5th, 2009 at 4:07 am

Isaac says:

“It has been known for over a decade that if the time-evolution in quantum mechanics were nonlinear, then this nonlinearity could be exploited to solve NP problems in polynomial time (Abrams & Lloyd, 1998).”

Isaac, that statement as written is far too strong … it holds only for models of quantum mechanics (QM) in which the state-space is linear and the equations of motion are nonlinear … it does not hold for QM models in which the state-space manifold has compatible symplectic, Riemannian, and complex structures, but is not a vector space … in other words, it does not hold for Kählerian QM (as expounded by Ashtekar and Schilling, for example).

No one knows whether the “true” QM state-space of the universe is Kählerian or not … this is mainly an experimental question … made harder because (at present) we have only a very incomplete mathematical understanding of how Lindblad-Choi processes manifest themselves on non-vector QM state-spaces.

It is empirically the case, however, that the computational properties of Kählerian QM manifolds (which include tensor network manifolds, a.k.a. matrix product states) are sufficiently advantageous that almost all practical QM simulations are carried through on Kählerian manifolds.

Heuristically, mightn’t we hypothesize that if computation on Kählerian manifolds is efficient, then Nature might well prefer that QM model?

Comment #148 August 5th, 2009 at 5:21 am

Tom Spencer (#141): My blogroll on the right has a list of good quantum computing primers. You might start with this essay by Michael Nielsen, then try Lance Fortnow’s survey.

Comment #149 August 5th, 2009 at 5:29 am

Vincent Price (#142): No. I did start Cryptonomicon and enjoyed it, up until the part where it talked about the difficulty of factoring huge prime numbers. I know it was just a trivial goof, but after that I found it hard to stay in the story.

Comment #150 August 5th, 2009 at 5:54 am

“What the heck is quantum logic and is it interesting from a CS or mathematical point of view?”

asdf: Quantum logic is basically the study of the lattice of subspaces of Hilbert space, under “logic-like” operations like intersection and complement—in other words, what you’re left with if you take quantum mechanics and throw away the probability part. It’s mostly interesting for infinite-dimensional Hilbert spaces; the one interesting “quantum-logical fact” I’m aware of in finite-dimensional Hilbert spaces is the Kochen-Specker theorem.

Some of the claims that used to be made for quantum logic, like that it supersedes classical logic or shows that the laws of logic have to be determined by experiment, are as fatuous as they sound.

Quantum logic was a neat idea when proposed by Birkhoff and von Neumann in 1936, but the results seem a little disappointing from my perspective. I attended a whole workshop at Perimeter called “Quantum logic meets quantum information”, and I think a fair summary is that quantum logic has had no impact whatsoever on quantum information science. That, by itself, might make one question whether quantum logic was really asking the right questions.
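To make the lattice concrete in the simplest case: the subspace lattice of R^2 already violates the distributive law that classical propositional logic obeys. A minimal numerical sketch (my own illustration, not from the thread), with a the line spanned by (1,1), and b, c the coordinate axes:

```python
import numpy as np

def rank(M):
    # dimension of the column space (0 for an empty matrix)
    return int(np.linalg.matrix_rank(M)) if M.size else 0

def join(A, B):
    # smallest subspace containing both: span of all columns
    return np.hstack([A, B])

def complement(M, dim=2):
    # orthogonal complement of the column space, via full SVD
    if M.size == 0 or rank(M) == 0:
        return np.eye(dim)
    U, _, _ = np.linalg.svd(M, full_matrices=True)
    return U[:, rank(M):]

def meet(A, B, dim=2):
    # intersection = complement of the join of the complements
    return complement(join(complement(A, dim), complement(B, dim)), dim)

a = np.array([[1.0], [1.0]])   # line spanned by (1,1)
b = np.array([[1.0], [0.0]])   # x-axis
c = np.array([[0.0], [1.0]])   # y-axis

lhs = meet(a, join(b, c))             # a AND (b OR c) = a, dimension 1
rhs = join(meet(a, b), meet(a, c))    # (a AND b) OR (a AND c) = {0}, dimension 0
print(rank(lhs), rank(rhs))  # prints: 1 0
```

So a ∧ (b ∨ c) = a while (a ∧ b) ∨ (a ∧ c) = {0}: exactly the failure of distributivity that quantum logic takes as its starting point.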

Comment #151 August 5th, 2009 at 5:58 am

In your ZFC/CH post, you make a distinction between “1. questions that are ultimately about Turing machines and finite sets of integers”, and “2. questions that aren’t.” More precisely, we can ask: given a statement, is there a TM that halts iff the statement is true?

It’s easy to find examples for the first class and prove that they belong to it (Goldbach, P/NP, Riemann, etc.).

It seems very likely that CH belongs to the second class. But can you give a proof of this fact? There *could* be a very strange TM that actually halts iff CH is true.

Do you have other examples of problems in class 2? (Provably in the best case, or by intuition only.)
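The first class can be made concrete with Goldbach’s conjecture: the statement is equivalent to the non-halting of a simple search program. A hedged Python sketch (the `limit` parameter is my addition, so the search can be run in bounded form):

```python
def is_prime(n):
    """Trial-division primality test; fine for small n."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def violates_goldbach(n):
    """True iff even n >= 4 is NOT a sum of two primes."""
    return (n >= 4 and n % 2 == 0 and
            not any(is_prime(p) and is_prime(n - p)
                    for p in range(2, n // 2 + 1)))

def first_counterexample(limit=None):
    """With limit=None this is the 'Turing machine' for Goldbach:
    it halts (returning n) iff the conjecture is false."""
    n = 4
    while limit is None or n < limit:
        if violates_goldbach(n):
            return n
        n += 2
    return None

print(first_counterexample(10000))  # prints None: no counterexample below 10000
```

CH, by contrast, is believed to admit no such search program, which is what would put it in the second class.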

Comment #152 August 5th, 2009 at 6:20 am

martiss (#144): Most scientists I know drink beer and wine at conferences (sometimes in copious quantities), but others don’t. I haven’t observed any productivity gap between the two, but I also haven’t thought about it much. Physicists do seem to drink more than computer scientists, and Europeans more than Americans; you can interpret that however you want.

I’ve never known of any real alcoholism problem in a productive scientist, but I’m sure such cases must exist.

It will surprise no one that, like other intellectual tasks, theoretical research is difficult though not impossible to perform while intoxicated.

Like other nerds, a significant minority of physicists and computer scientists seem to have tried pot. But while pot can be good for meditating about life in general, or the beautiful hues of the objects in front of you, it’s less useful for theoretical computer science research (or “so I’m told”).

Harder drugs, I really don’t know about.

With one exception: many, many theoretical computer scientists have a destructive addiction to PCP—the ultimate consciousness “expander.” Some heavy users even report fantasies involving Arthur and Merlin.

Comment #153 August 5th, 2009 at 6:25 am

John Paul Vann (#145): ARRRRRRRRRRRR.

Incident’ly, ye hast a fine pirate name!

Comment #154 August 5th, 2009 at 7:29 am

“Fill in the following sentence: By the year XXXX nobody serious will dispute the claim that quantum complexity is important for your work even if you only work with classical complexity classes and algorithms.”

Bill: Alas, till the Singularity comes or we all die, I expect you can find someone “serious” (i.e., someone who did serious research in the past, or even still does it now) to dispute anything. Even if P≠NP were to be proved using quantum techniques, the entire world economy were to become quantum-computer-based, and God Herself were to descend from the sky and roar “BQP IS THE CORRECT MODEL!”, there would still be computer scientists who’d say “but all this quantum stuff is irrelevant, and I don’t understand it!”

Personally, I’d say that the recent breakthroughs of Gentry and Peikert, on top of the dozen or so previous examples we knew of the “quantum method” in classical computer science, already add up to a strong case that you should learn quantum even if you only work with classical complexity classes and algorithms.

Comment #155 August 5th, 2009 at 8:06 am

Timo (#151): I’m pretty sure it follows from Gödel and Cohen’s results that, assuming ZF is consistent, CH can’t be proven in ZF to be equivalent to a Σ_k or Π_k sentence for any k (that is, to a sentence about Turing machines). So if CH were equivalent to such a sentence, then that fact itself would be unprovable in ZF.

Other examples of statements “not ultimately about Turing machines and integers” are the Axiom of Choice, the Axiom of Determinacy, and large cardinal axioms.

Comment #156 August 5th, 2009 at 8:58 am

I noticed from the program posted on the CCC09 website that apparently there were no invited talks! Is that a deliberate policy of CCC?

Comment #157 August 5th, 2009 at 9:04 am

“I’ve never known of any real alcoholism problem in a productive scientist, but I’m sure such cases must exist.”

George Gamow is one — he died of alcohol-induced liver failure.

There is a Swedish saying that applies in cases like Gamow’s: “The man takes a drink, the drink takes a drink, the drink takes the man.”

Comment #158 August 5th, 2009 at 1:02 pm

Follow-up on the WHEN WILL QUANTUM TECHNIQUES BE ACCEPTED AS NORMAL question.

Scott, you say that there is always some serious person who will dispute anything. Okay, then let me rephrase:

When will MOST PEOPLE agree that you need to know quantum techniques? When will MOST PEOPLE actually LEARN those techniques?

You indicate that MOST PEOPLE SHOULD learn these techniques, which I agree with. But that’s not the same as having them do it.

Comment #159 August 5th, 2009 at 2:15 pm

What do you think of Aubrey de Grey’s Strategies for Engineered Negligible Senescence (SENS.org) ?

Comment #160 August 5th, 2009 at 2:24 pm

Jan (#156): No, I think it was just logistics (they didn’t have room for invited talks in the schedule this year).

Comment #161 August 5th, 2009 at 2:25 pm

Bill G. (#158): I have no powers of prognostication over and above what I already told you. And in any case, you already used up your question. 🙂

Comment #162 August 5th, 2009 at 2:33 pm

“What do you think of Aubrey de Grey’s Strategies for Engineered Negligible Senescence (SENS.org)?”

Michael G. R.: I have no moral objections whatsoever. As soon as it can be made to work, sign me up! 🙂

Comment #163 August 5th, 2009 at 3:13 pm

“Follow up on WHEN WILL QUANTUM TECHNIQUES BE ACCEPTED AS NORMAL question.”

Here is one answer to that. Even if quantum computers don’t exist, quantum techniques could be accepted as normal when they help you prove things about non-quantum complexity classes. One example is Scott’s paper that PP = PostBQP. This result immediately establishes properties of PP that were considered non-trivial.

Another example is my new paper in the arXiv, “How hard is it to approximate the Jones polynomial?” I show that under a conservative interpretation of approximation (not one that gets exponentially worse with the length of the input), certain values of the Jones polynomial are #P-hard to approximate. If the Jones polynomial is too quantum for you, a slightly weaker version of the result (multiplicative approximation) also applies to certain values of the Tutte polynomial in the planar case.

Actually it’s not even mostly my result; it follows very quickly from Scott’s theorem and quantum universality of the Jones polynomial.

“When will MOST PEOPLE agree that you need to know Quantum Techniques?”

Obviously, there are too many amazing topics in mathematics and CS theory for any one person to learn even half of them. You “need” to know all of them, but you don’t really need to know any of them. You’re free to take your pick.

Comment #164 August 5th, 2009 at 3:55 pm

The above GASARCH/Kuperberg remarks seem (to me) to point to yet another avenue to answering the question “When will quantum techniques be accepted as normal?”

And that is to recognize that quantum models of computation are already accepted as normal … both mathematically and physically … and furthermore, to foresee that in the future, even more models of computation will come to be accepted as normal.

Now, I’m no expert … but isn’t the “Computational Hardware Zoo” growing to be similarly diverse to the “Computational Complexity Zoo”?

Because (roughly in order of increasing hardware goofiness) we have finite-depth circuits … which can be simulated (AFAIK) by Turing machines … which can be simulated (AFAIK) by quantum Turing machines … or alternatively (AFAIK) by adiabatic quantum solvers … which (AFAIK) can be simulated by Blum-Shub-Smale machines.

Interesting questions (to me) include: (1) Are there intermediate “Hardware Zoo” levels to the above (like spin-glass networks, perhaps), and if so, how many levels does the Hardware Zoo have? and (2) Which of the “Hardware Zoo” devices can be feasibly fabricated in our physical world?

Or perhaps it is a theorem—maybe even an already well-known construction—that the “Hardware Zoo” is isomorphic to the “Complexity Zoo”? … that models for physical computation “obviously” map one-to-one onto complexity classes?

There are plenty of real-world problems here … for example the ubiquitous problem of computing the “musical isomorphism” on n-dimensional spaces in O(n log n) operations instead of O(n^2).

Comment #165 August 6th, 2009 at 1:39 am

Hi Scott,

What do you think of K. Mulmuley’s (of UChicago) approach to P vs. NP via GCT?

Do you think P =/= NP is a question worthy of attack by the best minds around? (Some giants famously disagree, e.g. Fortnow.)

Comment #166 August 6th, 2009 at 3:08 am

Scott, thanks for the answer about quantum logic, which clears things up helpfully.

I saw a profile of Robert Wilson in which he said his ambition on retiring from physics was to move to Santa Fe, become an alcoholic, and hang around the rail yards, or something like that. But he ended up building Fermilab instead.

Timo, there are many questions about arithmetic that can’t be decided by Turing machines–any unprovable statement in Pi-2 or higher, I think. For example, it could be that the Twin Primes conjecture is undecidable, and that this can’t be encoded as a halting problem.

Comment #167 August 6th, 2009 at 10:11 am

I am surprised that more examples aren’t offered of high-functioning alcoholics and substance-users in science. The mathematician Paul Erdös was an amphetamine user … the surgeon William Stewart Halsted was a morphine user … the molecular biologist Kary Mullis used hallucinogens … and of course innumerable scientists use coffee and tobacco … these examples pretty much span the spectrum of neuropharmacology.

Paul Erdös’ case is especially interesting, since he professed to be unable to sustain his fast-paced creative mathematical output without the daily use of stimulants.

It appears that considerable numbers of younger scientists and mathematicians are following Erdös’ example … a recent Nature study estimated that the prevalence among scientists of ‘brain doping’—via performance-enhancing drugs such as Adderall, Provigil, and Ritalin—may be as high as 20%.

Although I come from a culture that is vehemently anti-drug, and have never used them myself, the evidence is convincingly strong that—as with steroids in athletics—these drugs work well for many (most?) people.

Comment #168 August 6th, 2009 at 12:43 pm

@asdf: “Timo, there are many questions about arithmetic that can’t be decided by Turing machines–any unprovable statement in Pi-2 or higher, I think. For example, it could be that the Twin Primes conjecture is undecidable, and that this can’t be encoded as a halting problem.”

What does it mean for the Twin Primes conjecture to be undecidable? The definition of decidability involves a decision problem, and the algorithm for solving it has to decide correctly in all cases. If one is given a FINITE problem involving the truth or falsity of a single statement, it is trivially decidable: build a machine that gives the right answer.

Can you explain precisely what you mean?

Comment #169 August 6th, 2009 at 1:57 pm

A bit more about Wick rotations. In my understanding, the map through analytic continuation between quantum time-ordered correlation functions and correlation functions in classical statistical mechanics exists only for systems admitting stationary states. Essentially this restricts one to time-independent Hamiltonians. Although this condition could probably be relaxed to adiabatic settings, there is a clear-cut class of time-dependent Hamiltonians that do not allow for Wick rotation of correlation functions. A simple example is a Hamiltonian with discontinuous dependence on time (an interaction instantaneously being switched on or off).

So, even if computing the classical statistical mechanical correlation functions and analytically continuing them were easy, that alone does not give you all of quantum mechanics. I’m no expert in quantum computing, but I imagine that such time dependent Hamiltonians would be important, thus preventing complete equivalence between quantum mechanics and classical statistical mechanics for the purposes of computational complexity theory.

Comment #170 August 6th, 2009 at 2:25 pm

I’m no expert in quantum computing, but I imagine that such time dependent Hamiltonians would be important

First of all, it’s a bit silly to shoehorn all of quantum computing into Hamiltonians. Hamiltonians are important for certain algorithms and methods in quantum computing. However, the most flexible general model has discrete time, local unitary operators at each time step, and no explicit Hamiltonians.

That does underscore your point, if maybe in a slightly different way. Maybe there could be a Hamiltonian and maybe it could depend continuously on time. However, Wick rotation is a method to classically compute what a quantum system does. Regardless of formalism, the whole point of quantum computing is to compute things that are classically intractable.
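The discrete-time model just described can be made concrete in a few lines. A sketch (plain Python, single qubit; the specific circuit H, S, H is an arbitrary illustrative choice, not from the thread): each time step simply multiplies the state vector by one unitary, with no Hamiltonian in sight.

```python
import math

# Single-qubit unitaries as 2x2 matrices (rows of complex entries).
H = [[1 / math.sqrt(2),  1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]   # Hadamard
S = [[1, 0],
     [0, 1j]]                                  # phase gate

def step(gate, state):
    """One discrete time step: apply a unitary to the state vector."""
    return [sum(gate[r][c] * state[c] for c in range(2)) for r in range(2)]

state = [1 + 0j, 0j]          # start in |0>
for gate in (H, S, H):        # the circuit: a sequence of unitaries
    state = step(gate, state)

probs = [abs(amp) ** 2 for amp in state]  # Born-rule outcome probabilities
```

For this particular circuit the final state is (1+i)/2 |0> + (1-i)/2 |1>, so both measurement outcomes come out with probability 1/2.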

Comment #171 August 6th, 2009 at 3:14 pm

John Sidles,

Erdös was known for doing the kind of math that one could actually do while high on speed. 🙂

Comment #172 August 6th, 2009 at 5:56 pm

Scott asserts:

Regardless of formalism, the whole point of quantum computing is to compute things that are classically intractable.

Yes … that’s “Plan A” for quantum computing … but doesn’t every enterprise need a “Plan B” as a backup?

Hmmm … let’s see … a certain Scott Aaronson said in Zurich “The linearity of QM helps tame the exponentiality of QM”.

So maybe “Plan B” for quantum computing is to study QM models that abandon *both* the linearity and the exponentiality? This would constitute what this same Aaronson has argued would be a very important QIT advance: “a plausible complexity-theoretic story for how quantum computing could fail”.

As Reinhard Selten famously asserted: “Game theory is for proving theorems, not for playing games.” …

… isn’t it arguable that quantum computing should pursue at least some “Plan B” paths, to avert the risk of following the same path?

IMHO, the implications of QIT/QIS are so broad-ranging, and the problems of our planet so urgent, that it might prove to be an irretrievable tragedy for humanity if QIT/QIS evolved into (predominantly) a theorem-proving exercise.

Comment #173 August 6th, 2009 at 11:28 pm

Joe, what I mean is that it could be that there is no decision problem for the twin primes conjecture (TPC). If the TPC is undecidable, it means there is no proof or disproof of it (under some reasonable arithmetic axioms). It’s not like Goldbach’s conjecture, which (if false) has a counterexample. Goldbach’s conjecture is true iff a Turing machine that searches for a counterexample never halts. But there might be no such machine that halts iff TPC is true. You can have it search upwards for twin prime pairs P,P+2, but if it reaches some very large one and then goes for a long time without finding another, it might be that it’s found the last one (i.e. TPC is false) or it might only be that it’s reached an unusually large gap between such pairs and it might eventually find more.
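The contrast asdf draws can be sketched in code (Python standing in for a Turing machine; the `limit` parameter is an artificial cutoff added here so the snippet can terminate in practice):

```python
def is_prime(n):
    """Trial division; fine for a sketch."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def goldbach_search(limit=None):
    """Search even n >= 4 for one that is NOT a sum of two primes.
    With limit=None, this halts exactly if Goldbach's conjecture is false.
    The twin-prime analogue has no such property: finding ever more twin
    pairs never certifies that infinitely many remain."""
    n = 4
    while limit is None or n <= limit:
        if not any(is_prime(p) and is_prime(n - p)
                   for p in range(2, n // 2 + 1)):
            return n      # a counterexample: conjecture refuted
        n += 2
    return None           # no counterexample below the cutoff
```

So Goldbach reduces to a halting question, while for the Twin Primes conjecture no analogous searcher is known to exist, which is asdf’s point.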

Comment #174 August 7th, 2009 at 2:21 am

What moral objection can one raise if a scientist willingly ingests Axiomatilin to speed up his thinking? This is nothing like the case of athletes.

Comment #175 August 7th, 2009 at 8:03 am

Joe Asks:

What moral objection can one raise if a scientist willingly ingests Axiomatilin to speed up his thinking? This is nothing like the case of athletes.

Joe raises a richly-structured question about a richly-structured subject … the answers are discomfiting … and that is why it’s a disservice to dismiss Joe’s question with glib-and-comforting answers.

Aren’t there plenty of parallels between professional science and professional athletics?

Example A: Two AAA baseball players are competing for one major-league slot. One starts juicing … doesn’t the other have to start juicing too? … the inexorable result is a culture of juicing.

Example B: Two investment funds are competing for capital. One starts trading electronically … doesn’t the other have to start trading electronically too? … the inexorable result is a culture of electronic trading.

Example C: Two graduate students are competing for one postdoc slot. One starts juicing to finish his/her thesis faster … doesn’t the other have to start juicing too? … the inexorable result is a culture of scientific juicing.

That “A” has already happened is undeniable … that “B” has already happened is undeniable … that “C” has already happened … well … ???

Questions like this are (IMHO) an excellent spur to creative thought … because they encourage us to conceive radical approaches to the enterprise of science, mathematics, and engineering.

Suppose we heed Wittgenstein’s advice, that often it is easier and better to dissolve a problem (by changing the premises) than to waste time trying to solve what may be a poorly-posed problem … how would we go about dissolving these thorny moral problems, Wittgenstein-style?

Just to point out one obvious approach—one that von Neumann, Wiener, Feynman, and Pauling all advocated—these problems are substantially mitigated if we scale up the number of jobs in science/math/engineering by one or two orders of magnitude.

Needless to say, this approach too has significant challenges … but they are different challenges … challenges that seem (to me) to be more tractable.

Comment #176 August 7th, 2009 at 11:51 am

Scott, I’d like to know if you’ve ever thought about writing a book on complexity, especially an introductory one. I know there are already some good books out there, but you really have a gift for explaining things in a clear and engaging way, and I bet you could turn quite a few people on to complexity theory.

Comment #177 August 7th, 2009 at 2:03 pm

By the way, there is a terrific (IMHO) PNAS article by Jacob Goeree and Charles Holt that presents strong arguments against Reinhard Selten’s maxim that “game theory is for proving theorems, not for playing games” … and these same arguments speak to the question of whether QIS/QIT/complexity theory is/should/might become predominantly a theorem-proving discipline.

The article is titled “Stochastic game theory: For playing games, not just for doing theory”, and the gist of the argument is that “By relaxing the classical assumptions of perfect rationality and perfect foresight, we obtain much-improved explanations of initial decisions, dynamic patterns of learning and adjustment, and equilibrium steady-state distributions.”

A cognate expression of the Goeree-Holt idea in QIS/QIT is easy to construct: “By relaxing the classic QIS/QIT assumptions of perfect linearity and exponential dimensionality, we obtain much-improved explanations of algorithmically compressed state descriptions, dynamic aspects of state learning and concentration, and equilibrium thermodynamic distributions.”

Obviously, game theory remains a theorem-proving discipline (whose theorems are wonderful!), but thanks to a willingness to embrace broader sets of axioms—axioms that seem at first acquaintance to make theorem-proving undesirably harder—game theorists also have greatly enhanced the relevance of their discipline to real-world game-playing.

… and (IMHO) there are useful lessons here for QIS/QIT/complexity theory.

Comment #178 August 8th, 2009 at 4:20 pm

B is sensible, A and C are nonsense and unsupported by data.

Comment #179 August 8th, 2009 at 8:06 pm

How do you feel about this:

http://www.technologyreview.com/computing/23137/

?

Comment #180 August 9th, 2009 at 3:59 pm

Back from holiday to find a week’s worth of comments, 179 of them.

anon@124: “Why are the weights of physical objects additive? I mean all we know is that if we lump two objects together the whole is heavier than the part but what is it that allows us to say that for objects assigned weight a and b, the weight of their composite: w(a,b)=a+b and not some other function such that w(a,b)>a, b? We can’t look at measuring devices since they are constructed assuming that this is the case.” Some answers were given above, but try Hasok Chang’s “Inventing Temperature”, Oxford, 2004, which won the Lakatos Award for best Philosophy of Physics book. This tackles the same question very effectively indeed for temperature. In answer to some responses, I suppose that for any apparatus, there will be a point at which 2*m will break the scale rather than report the correct sum. If we define mass to be perfectly additive, but we have no scale capable of measuring all possible combinations of masses, that seems to be a problematic empirical foundation, but it’s OK FAPP, for engineers.

Comment #181 August 9th, 2009 at 4:15 pm

To use my one question, a week late, regarding comments on analogue computers, I point out my paper “Equivalence of the Klein-Gordon random field and the complex Klein-Gordon quantum field”, to appear in EPL in a few weeks (in the form found at http://pantheon.yale.edu/~pwm22/KGeq.pdf; I will update the arXiv version after publication). This is a very different relationship between classical and quantum than that of Wick rotation, pointing to the possibility that we may be able to understand QFT as classical stochastic signal processing in the presence of universal Lorentz invariant noise.

If Planck’s constant is no more than a measure of Lorentz invariant noise, quantum complexity is reducible to correlations of a random field. A random field is presumably more capable than a deterministic analogue computer, however, so this is not so revolutionary, right?

So Scott, any opinion on this?

Comment #182 August 9th, 2009 at 4:40 pm

And finally, Scott, I feel like taking your comment #77 to task, “quantum computer would be a highly dynamic quantum system, one that hasn’t by any means settled into a ground state, so this sort of trick can’t be expected to work in general.”

Although it’s true that a quantum computer would not be in a fine-grained equilibrium state, in order to measure a quantum state the experimenter has to provide themselves with experimental datasets that can reasonably be called ensembles. We have either to provide a large number of widely separated experimental apparatuses, from each of which we take one data point, or else we have to ensure that each data point that we take from a single experimental apparatus can reasonably be called independent of previously taken data points. This is to say that the apparatus is in a coarse-grained equilibrium, even though most measurement apparatuses are thermodynamically nontrivial macroscopic objects that are definitely not at equilibrium, so there is certainly no fine-grained equilibrium, just as you say.

Equilibrium in this case is mostly a matter of invariance of statistics under time translations. As a special case, if we use the same experimental apparatus in a week’s time, we had better obtain the same statistics of the 2nd experimental dataset as we obtained the week before. I also note that an experimentalist does not turn on the power to their apparatus and immediately start taking data, before the power transients have faded, and I suppose one typically ensures that there are no great thermal gradients in the apparatus; there must be some pragmatic moment when an experimenter knows that it’s safe to take data that will be reproducible.

I guess this would be a question if you were to answer it.

Comment #183 August 9th, 2009 at 9:23 pm

Where can someone without a ton of mathematics education learn about complexity theory? It all seems so complicated and untouchable.

Comment #184 August 10th, 2009 at 2:52 am

Matt, I guess cutting-edge complexity theory involves a lot of math, but at the basic undergraduate level there’s not all that much math involved. I started with the old books by Aho and Ullman before learning much math at all. I’m sure there are comparable but newer books now. I’ve been wanting to look at Sipser’s and at Brassard’s. For the math related to the subject, Knuth/Graham/Patashnik’s “Concrete Mathematics” is pretty accessible, starting from a reasonable high school math nerd level (i.e. including basic calculus but no fancy abstract algebra or anything like that).

Comment #185 August 10th, 2009 at 2:53 am

Different Joe #165, Scott wrote a blog post a month or two ago about Mulmuley’s approach to P-vs-NP. You can probably find it by searching backwards.

Comment #186 August 10th, 2009 at 2:32 pm

Where can someone without a ton of mathematics education learn about complexity theory? It all seems so complicated and untouchable.

Your request is a bit contradictory, since complexity theory is itself some of the math education that you feel you’re missing! But as asdf says, at the beginning it’s not all that bad.

There is an introduction at the Petting Zoo. This introduction is certainly more gentle than the main Complexity Zoo, but maybe it is still a bit formal, I’m not sure.

Comment #187 August 10th, 2009 at 4:13 pm

Scott, I have a (sincere and respectful) question about the ideas in your article Multilinear Formulas and Skepticism of Quantum Computing … ideas that I would like to mention (favorably) in a lecture that is scheduled for this coming Thursday, at the Kavli/Cornell conference Molecular Imaging 2009: Routines to Three-Dimensional Imaging of Individual Molecules.

The Kavli attendees have already listened to terrific talks on microscopy by Cornell’s David Muller, the NIH’s Sriram Subramanian, IBM’s Dan Rugar and John Mamin, and ETH Zurich’s Beat Meier. There is a pretty considerable aura of excitement at this particular Kavli conference, because all of the speakers (so far) are pursuing the same long-term goal: a comprehensive survey of the biome with atomic resolution.

As Beat Meier described it, ongoing advances in many forms of microscopy are leading us to envision “Google Cell” — a zoom-in database showing all the components of living organisms.

Now, for technical reasons that David Muller reviewed, a great proportion of the atomic-resolution data in “Google Cell” will be obtained via quantum spin channels … the reason being that the only generic, structurally localized states in biological molecules that can be excited non-destructively are the spin states.

To construct these simulations, the spin microscopy community requires what your article calls “Sure/Shor separators” … that is to say … we need to understand “Exactly what property separates the quantum states we are sure we can create, from those that suffice for Shor’s factoring algorithm.”

Yesterday, I distributed tutorial session notes that outline the spin microscopy community’s working understanding of Sure/Shor separators. These tutorial notes emphasize the practical necessity of respecting the three key physical aspects of quantum mechanics (interference, entanglement, and “Schroedinger cats”), and the feasibility of achieving this by respecting the key mathematical elements of quantum simulation: the well-known “compatible triple” of symplectic, Riemannian, and complex structure (the defining characteristics of a state-space that is a Kähler manifold).

In essence, symplectic structure bestows simulations with “classical goodness”, Riemannian structure bestows simulations with “geometric goodness”, and complex structure bestows simulations with “quantum goodness” (all of these ideas are borrowed more-or-less directly from the literature of classical symplectic simulation).

As for quantum linearity and large-dimensionality, the tutorial notes blithely assure the Kavli attendees that (in the words of Ashtekar and Schilling) these structures are mere “technical conveniences” that need not be respected—in fact, must not be respected—in simulating (with polynomial resources) any present or contemplated quantum spin imaging experiment.

To implement this program, spin simulations require a mathematically explicit “Sure/Shor separator”. And because simulation realism demands that the complex, Riemannian, and symplectic properties of orthodox QM be respected, there are not too many mathematical structures available as candidates.

The separator that spin simulations adopt—in practice—is the concentrating drift term that appears when Lindblad processes are specified in Ito-Stratonovich form as drift-and-diffusion processes, and then pulled back onto (what your article calls) “multilinear tree state” manifolds … which are the low-dimension Kähler state-spaces on which our simulations are computed.

In this picture, Sure/Shor separators appear naturally, as dynamical concentrating processes, from the orthodox Lindblad description of (Markovian) noise and (continuous) quantum measurement.

So my question is, what would your reaction be to a talk that suggested the following? (1) Concentrating drifts, arising from Lindblad measurement-and-control processes, are natural Sure/Shor separators. (2) Sure/Shor separators are generically present in (hot, noisy, measured) spin imaging environments … so that these quantum spin systems can be simulated with polynomial resources.

A question that my Kavli talk will not attempt to answer is whether the state-space of nature is linear and large-dimension, versus Kählerian and low-dimension … AFAICT there is little experimental evidence either way … here too we would be very interested in your thinking … and in the thinking of the QIT/QIS community in general.

Comment #188 August 11th, 2009 at 1:19 am

Actually large cardinal questions ARE relevant to Turing Machine questions. If there is an inaccessible cardinal, then TMs looking for inconsistencies in ZF will never halt.

For AC and CH it is a theorem of Godel that they can have no relevance to arithmetical statements — neither assuming AC, nor assuming CH, nor denying AC, nor denying CH, will allow you to prove any arithmetical statement you couldn’t have already proven (though if you BOTH assume CH and deny CH, there will be some new arithmetical statements you can prove….).
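The parenthetical at the end is just ex falso quodlibet: from jointly inconsistent assumptions (such as CH together with its negation), every statement, arithmetical ones included, becomes provable. A one-line illustration in Lean:

```lean
-- From any proposition P together with ¬P, an arbitrary Q follows.
example (P Q : Prop) (h : P) (h' : ¬P) : Q := absurd h h'
```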

Comment #189 August 11th, 2009 at 4:31 pm

Can anyone provide a non-technical explanation of how Grover’s algorithm is more efficient than a brute-force search? Every explanation I’ve found has been a bit too technical for me.

Comment #190 August 11th, 2009 at 9:56 pm

Scott, would you be kind enough to provide a “Shor, I’ll do it”-like demonstration of how to reduce an instance of a travelling salesman problem into an instance of a graph coloring problem? Please, it’s my birthday! 🙂

Comment #191 August 13th, 2009 at 8:22 pm

Re: high-functioning alcoholic or addicted mathematicians or scientists, the category theorist Peter Freyd is/was apparently an alcoholic, who went into rehab at some point in the early ’80s and I assume has been sober since. (See this.) I don’t know if he had a problem when he did his strongest work in the early ’60s, though.

Scott, I know I already used my question, but “this’ll be fun!” :). Can you extend Terry Tao’s wonderful expository relativization metaphor to a wonderful expository metaphor for algebrization? I’d been thinking about relativization essentially in terms of similar metaphors before Terry’s post, but never really grokked algebrization, and I suspect I’m not the only one; something similar would be really useful.

Comment #192 August 14th, 2009 at 3:20 am

“Where can someone without a ton of mathematics education learn about complexity theory? It all seems so complicated and untouchable.”

How about the book by David Harel as a start? You don’t have to read the whole book. Some basics will get you started.

Then, you can move to Arora’s book or Papadimitriou’s book. I recommend the latter, because it is better written and much more polished.

Comment #193 August 14th, 2009 at 1:12 pm

Being currently about to finish a PhD in Comp Sci (approximation algorithms), I am very much interested in complexity theory, although I didn’t have a chance to meet anyone who could lecture on it. Indeed, whatever I have done was only marginally related to what the profs around me knew. It seems to me that in the TCS community there are either very smart, idealistic people who have deep ideas, or people who have no taste for elegance and beauty, trying to publish stuff to get promoted. I don’t generalize by any means, but what I also have seen was, unfortunately: extreme collaboration and (implicit or explicit) arrogance. I am, to a great extent, disappointed in computer science for several reasons. In particular, do you personally think that it is possible to teach oneself and do something without interacting with the bright minds (which I have been doing)? This was not what I thought when I read G.H. Hardy in high school.

PS: I am technically not a nerd and I don’t have plans to solve P vs NP.

Comment #194 August 14th, 2009 at 8:51 pm

Alis says:

But what I also have seen [in TCS] was unfortunately: extreme collaboration and (implicit or explicit) arrogance.

Gee … those don’t seem too bad … aren’t the opposite traits (isolationism and timidity) similarly unfortunate, if carried to extremes? The point being, it is not necessary (or even desirable) for everyone to think and feel and act alike.

Dirac’s definition of a Golden Era is “when ordinary people can make extraordinary contributions.” By this criterion the key questions to ask (depending on one’s degree of optimism) are: (1) “Have we already entered the Golden Era of TCS/QIT/QIS?” (2) “How near is the coming Golden Era of TCS/QIT/QIS?” (3) “Is there any hope at all that a Golden Era of TCS/QIT/QIS is approaching?”

Comment #195 August 15th, 2009 at 2:20 am

The ideal markets beloved by economists are very efficient at absorbing information: “Past performance is no guarantee of future results.” The moment somebody figures out a winning trading strategy, it changes behavior and wipes out all the gains. Nevertheless, the individual traders are supposed to have only very limited knowledge.

It seems that this theory could be used to create a pseudorandom generator, by creating a bunch of polynomial-time speculators and letting them trade with each other.

Do you know if any work has been done in that direction?

Maybe you can prove that the Efficient Market Hypothesis is equivalent to P=NP, or at least get some interesting bounds.

Comment #196 August 17th, 2009 at 4:02 am

… and perhaps the “extreme collaboration” is making everyone think alike??

Timidity is hardly as unfortunate as arrogance (and I doubt whether they are actually exact opposites, if that was the effect you were going for).

Isolationism is bad. Indeed, most grad students at smaller schools fall prey to it. But again, isolationism is not quite the opposite of extreme collaboration.

Comment #197 August 18th, 2009 at 10:28 pm

In your paper about the independence of P vs. NP, you say that the Riemann Hypothesis is in Pi-1, from Lagarias’ result and by “computing increasingly good upper bounds … dovetailing over all n”. I don’t see how to avoid the problem that arises when both sides of the inequality are extremely close, or how to deal with real-valued functions of exp and log. (Also, do you mean to dovetail over m, where m equals the number of bits of accuracy?)

This concern is also noted in Lipton’s blog:

http://rjlipton.wordpress.com/2009/05/27/arithmetic-hierarchy-and-pnp/

where he considers a

Comment #198 August 18th, 2009 at 10:32 pm

…continued

where he considers a

Comment #199 August 18th, 2009 at 10:40 pm

[Sorry for the continuation and mess; obviously I’m having trouble with math formulas! – I’ll stick to English]

In Lipton’s blog, see the first Anonymous comment, which Lipton acknowledges but does not fully address.

So, can these objections be overcome so that Lagarias’s result can still be used?

However, there is a theorem that can be used:

RH iff, for all n:

( SUM over k <= delta(n) of 1/k - n^2/2 )^2 <= 36 n^3

where delta(x) = PRODUCT over n < x of PRODUCT over j <= n of eta(j), and eta(j) = p if j is a power p^k of a prime p, and 1 otherwise.

Thanks to Brandon Fodden for the reference: Hilbert’s tenth problem. Diophantine equations: positive aspects of a negative solution. Proc. Sympos. Pure Math., vol. 28, Amer. Math. Soc., Providence, R.I. (1976), 323-378. (The Pi-1 statement is also on his website.)
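As a concreteness check, the Pi-1 statement can be evaluated for small n. A sketch (Python, with eta and delta as defined above; delta grows so explosively that going much beyond n = 6 is hopeless, which is why the statement is a logical reformulation rather than a practical test):

```python
def eta(j):
    """eta(j) = p if j is a power p^k of a prime p, else 1."""
    if j == 1:
        return 1
    p = 2
    while j % p != 0:
        p += 1                    # p = smallest prime factor of j
    m = j
    while m % p == 0:
        m //= p
    return p if m == 1 else 1     # was j a pure power of p?

def delta(x):
    """delta(x) = product over n < x of (product over j <= n of eta(j))."""
    d = 1
    for n in range(1, x):
        for j in range(1, n + 1):
            d *= eta(j)
    return d

def rh_inequality(n):
    """Does (sum_{k <= delta(n)} 1/k - n^2/2)^2 <= 36 n^3 hold?"""
    s = sum(1.0 / k for k in range(1, delta(n) + 1))
    return (s - n * n / 2.0) ** 2 <= 36.0 * n ** 3
```

For n = 1 through 6 the inequality holds, as RH predicts (delta(6) is already 8640).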

Comment #200 August 19th, 2009 at 11:19 am

Lew: You can compute arbitrarily good upper and lower bounds on a computable real number, not just approximations to it. So given two computable real numbers a < b, you can keep computing better upper bounds on a and better lower bounds on b until you’ve proved that a < b.

Comment #201 August 19th, 2009 at 11:21 am

Ali (#192):

In particular, do you personally think that it is possible to teach oneself and do something without interacting with the bright minds (which I have been doing)?

Sure, but interacting with “the bright minds” helps, so you should seek them out whenever feasible.

Comment #202 August 19th, 2009 at 11:25 am

Oleg (#194): There’s a huge body of recent work on the computational complexity of finding market equilibria—see especially the breakthrough result of Daskalakis, Goldberg, and Papadimitriou (and followup work by others) on the PPAD-completeness of computing a Nash equilibrium.

Comment #203 August 19th, 2009 at 11:28 am

harrison (#190):

Can you extend Terry Tao’s wonderful expository relativization metaphor to a wonderful expository metaphor for algebrization?

I’m a huge fan of Tao’s blog, but to be honest, this particular metaphor seems to take something simple and make it complicated. If it works for you, great—but I’m probably not the right person to extend it to algebrization.

Comment #204 August 19th, 2009 at 11:33 am

Peter Morgan (#181):

So Scott, any opinion on this?

Sorry, I don’t like questions that require reading the questioner’s paper even to understand what’s being asked. 🙂

(Specifically, I don’t understand what a “random field” is in your context—is it possible to give a completely discrete explanation, involving only bits and qubits?)

Comment #205 August 19th, 2009 at 11:36 am

Sim (#189):

Scott, would you be kind enough to provide a “Shor, I’ll do it”-like demonstration of how to reduce an instance of a travelling salesman problem into an instance of a graph coloring problem? Please, it’s my birthday! 🙂

Happy birthday!! But on your next birthday, could you ask for something less wince-inducing? 🙂

Alright, alright, here’s an outline:

First reduce Traveling Salesman to SAT. (That this is doable follows immediately from the Cook-Levin Theorem.)

Next reduce SAT to 3SAT. (That’s a simple reduction, as pointed out in Cook’s original paper.)

Finally reduce 3SAT to coloring. (That’s a standard gadget construction—it’s even covered in my GITCS notes, linked to on the sidebar of this blog.)

For more read Garey & Johnson, or google for lecture notes about NP-completeness until you find ones that you like.
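Of the three steps, SAT to 3SAT is compact enough to sketch in code. The snippet below (Python, DIMACS-style integer literals; the chaining construction is the standard textbook one, not taken from any source in the thread) turns a clause with k > 3 literals into k - 2 clauses linked by fresh auxiliary variables:

```python
def clause_to_3sat(clause, next_aux):
    """Split one CNF clause (nonzero ints; -v means NOT v) into clauses
    of at most 3 literals, equisatisfiable with the original.
    Fresh auxiliary variables are numbered from next_aux upward.
    Returns (list_of_clauses, next unused auxiliary variable)."""
    if len(clause) <= 3:
        return [list(clause)], next_aux
    out = [[clause[0], clause[1], next_aux]]     # first link of the chain
    prev, next_aux = next_aux, next_aux + 1
    for lit in clause[2:-2]:                     # middle links
        out.append([-prev, lit, next_aux])
        prev, next_aux = next_aux, next_aux + 1
    out.append([-prev, clause[-2], clause[-1]])  # close the chain
    return out, next_aux

# (x1 v x2 v x3 v x4 v x5) with auxiliaries starting at variable 6:
clauses, _ = clause_to_3sat([1, 2, 3, 4, 5], 6)
```

This yields [[1, 2, 6], [-6, 3, 7], [-7, 4, 5]]: satisfying any original literal lets the auxiliaries be set consistently, while an all-false assignment forces a contradiction along the chain.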

Comment #206 August 19th, 2009 at 11:43 am

Anon (#179): Looks cool! Wish I knew enough to say more.

Comment #207 August 19th, 2009 at 11:45 am

Different Joe (#165):

What do you think of K. Mulmuley’s (of UChicago) approach to P vs. NP via GCT?

I’m planning a whole blog post about it—stay tuned!

Comment #208 August 19th, 2009 at 11:51 am

Joe C. (#33):

What about quantum complexity, QIS… What are good introductory books?

Sorry for the long delay! I recommend David Mermin’s “Quantum Computer Science” for complete beginners, and then the Nielsen & Chuang book.

The Feynman Lectures are … well, the Feynman Lectures, among the greatest physics books ever written. Just don’t expect to learn about quantum computing / information from them.

Comment #209 August 19th, 2009 at 11:59 am

Vadim P. (#176):

Scott, I’d like to know if you’ve ever thought about writing a book on complexity, especially an introductory one.

I’ve done more than think about it! Within a year or so, Cambridge University Press will be publishing a book based on my Quantum Computing Since Democritus lectures.