Ask Me Anything! Tenure Edition

Update (5/7): Enough!  Thanks, everyone, for asking so many imaginative questions, and please accept my apologies if yours remains unaddressed.  (It’s nothing personal: they simply came fast and furious, way faster than I could handle in an online fashion—so I gave up on chronological order and simply wrote answers in whatever order they popped into my head.)  At this point, I’m no longer accepting any new questions.  I’ll try to answer all the remaining questions by tomorrow night.


By popular request, for the next 36 hours—so, from now until ~11PM on Tuesday—I’ll have a long-overdue edition of “Ask Me Anything.”  (For the previous editions, see here, here, here, and here.)  Today’s edition is partly to celebrate my new, tenured “freedom to do whatever the hell I want” (as well as the publication after 7 years of Quantum Computing Since Democritus), but is mostly just to have an excuse to get out of changing diapers (“I’d love to, honey, but the world is demanding answers!”).  Here are the ground rules:

  1. One question per person, total.
  2. Please check to see whether your question was already asked in one of the previous editions—if it was, then I’ll probably just refer you there.
  3. No questions with complicated backstories, or that require me to watch a video, read a paper, etc. and comment on it.
  4. No questions about D-Wave.  (As it happens, Matthias Troyer will be giving a talk at MIT this Wednesday about his group’s experiments on the D-Wave machine, and I’m planning a blog post about it—so just hold your horses for a few more days!)
  5. If your question is offensive, patronizing, nosy, or annoying, I reserve the right to give a flippant non-answer or even delete the question.
  6. Keep in mind that, in past editions, the best questions have almost always been the most goofball ones (“What’s up with those painting elephants?”).

That’s it: ask away!


Update (5/12): I’ve finally answered all ~90 questions, a mere 4 days after the official end of the “Ask Me Anything” session!  Thanks so much to everyone for all the great questions.  For your reading convenience, here’s a guide to my answers (personal favorites are in bold):

 

292 Responses to “Ask Me Anything! Tenure Edition”

  1. wolfgang Says:

    What is the probability that we live in The Matrix ?

  2. Scott Says:

    wolfgang #1:

      What is the probability that we live in The Matrix ?

    Well, I’d estimate the probability that we live in the “Movie Matrix” (defined as a simulated world that keeps humans entertained while evil AIs extract electric power from them, which can be escaped by swallowing an appropriate pill, and destroyed via slow-motion kung-fu battles) as extremely close to 0. But I’d also estimate the probability that we live in the “Unitary Matrix” (defined as a universe that obeys the computable laws of quantum mechanics, which could in principle be efficiently rendered by a quantum computer, though whether it is or isn’t so rendered is probably an operationally meaningless question) as fairly close to 1.

  3. Barry McManus Says:

    What is the saddest thing?

  4. Arrogant Undergrad Says:

    What do you think are a few of the most accessible open problems in complexity theory to someone with a limited background (presumably, they’ll be unimportant enough that real complexity theorists such as yourself haven’t bothered to solve them)?

    By limited, I mean I have digested most of Sipser and Arora-Barak.

    Thanks Scott!

  5. itssummer Says:

    Razborov produced a superpolynomial monotone circuit lower bound for an NP-complete problem. Is an exponential lower bound provable for the same model with our current knowledge?

  6. itssummer Says:

    In 2011, in your comment here https://scottaaronson-production.mystagingwebsite.com/?p=741#comment-26835 you gave a possible explanation for why Grover’s algorithm does not settle the question of whether quantum computing is superior. Since you were out of time then, I could not ask the question further. Here I go after almost 2 years. Could you comment more on the scaling part?

  7. me Says:

    Could you explain Fisher information in a way that makes it seem inevitable?

    Shannon information is beautiful. Kullback-Leibler divergence makes sense because of this explanation: http://www.snl.salk.edu/~shlens/kl.pdf .

    But I haven’t yet seen the steps laid out on how Fisher information was derived (and so I don’t have confidence that it measures what it is supposed to).

  8. Alex Says:

    Is it true that as a postdoc introducing talks, you used to interrupt and end presentations that were not interesting?

  9. bitter Says:

    Hey Scott, how come you had the title of assoc prof before you had tenure? Is it common that the two promotions are treated separately, or were your circumstances exceptional?

  10. starc Says:

    Can you give a one paragraph exposition of Glauber dynamics?

  11. Curious Wavefunction Says:

    What’s your favorite interpretation of quantum mechanics?

  12. Lukasz Grabowski Says:

    Apropos comment #1, how do you estimate the probability of us being a simulation run by someone else, i.e. we don’t have any ‘physical’ bodies?

    Since this is a clarification question, I hope it’s OK if I keep my right to ask a proper question 🙂

  13. Scott Says:

    starc #10:

      Can you give a one paragraph exposition of Glauber dynamics?

    No, absolutely not! Thanks for such an easy question. 🙂

  14. Mathematician5 Says:

    Is the universe (or the entire history of the universe) really just a vector in a Hilbert space, and if not, where does quantum theory break down?

  15. Shehab Says:

    Can a quantum system generate a true random sequence? If a chaotic system can also generate a true random sequence, can we build a partition between the set of quantum true-random-sequence generators and the set of chaotic true-random-sequence generators based on their scale? If such a partition exists, can we apply the idea in the reverse direction to test quantumness? If quantum measurements can never be unbiased in the real world, can we say that a quantum system cannot generate a true random sequence?

  16. Scott Says:

    Alex #8:

      Is it true that as a postdoc introducing talks, you used to interrupt and end presentations that were not interesting?

    I have no recollection of such a thing! As a postdoc at Waterloo, I did run the weekly IQC lunch for a while, but we didn’t even really have formal talks there. We just had students, visitors, etc. speak for 5-10 minutes about open problems or interesting things they had found. And as “moderator,” sure, my role was both to ask questions and to figure out when a discussion had died down and it was time to move on to the next person’s show-and-tell. So, I assume that’s where the “legend” you refer to originated, but I seem to have turned into some sort of ogre in the retelling! 🙂

  17. James Miller Says:

    My 8-year-old son likes computer programming. We have a system where he does X minutes of “learning” to earn X minutes of video game time, where I get to pick the learning. Do you have suggestions for types of learning that will help develop his interest and skill in programming?

  18. davood Says:

    As a comment to your previous post you said “my tenure case heavily involved three things—algebrization, BosonSampling, and quantum money”. How d’ya know? Are these 3 items what you emphasized in your statement, or have you seen letters of others, or what?

  19. hull Says:

    Suppose you are engaged to lead a project to try to build, in a reasonable amount of time (20 years, 30 years?), a “real” q. computer (you define what “real” means, I suppose one that can be used to solve “efficiently” non trivial problems of practical interest). Suppose you are given an unlimited amount of resources to complete the task (unlimited money, top level scientists of your choice in all the fields you think are relevant for the job etc., anything available on the earth ..) What would you practically do?

  20. ppnl Says:

    Could you do a quantum version of Conway’s game of life?

    For example start out with all cells in a 50% mixed state. As they evolve and interact you should generate arbitrarily complex linked states. But if you never look at any of the cells they will all be in the same state in a sense.

    Now look at one of the cells. What happens to all the others?

  21. RV Says:

    What are some of the primary differences between the cultures of the academic institutions you have been a part of so far?

  22. Mathematician5 Says:

    hull#19: If you have “an unlimited amount of resources” then just build a classical computer to simulate a quantum computer.

  23. Alejandro Says:

    I did an exchange program at UC San Diego 3 years ago, taking Math and CS courses, and found it quite easy compared to my undergraduate program in Chile (btw, I attended your plenary talk in Arequipa, Peru, last year). Knowing that UCSD is ranked amongst the top American universities, many of my classmates felt a bit offended by this opinion of mine (which is quite understandable). My own Algebra T.A. in San Diego –who was American and was starting a PhD in Mathematics– used to tell us that he was realizing how little he knew, and that he had to study extra hard to catch up with his foreign classmates.

    Although I do recognize the superiority of many U.S. institutions at a graduate/research level, I still think the undergraduate world is not living up to the expectations of such big reputations (M.I.T. might be an exception). Your thoughts on this?

  24. Serge Says:

    Why is polynomial behavior considered the only practical definition of feasibility, given that the algorithms which are actually feasible require very low exponents and low constants to be executed on a computer?

    Subsidiary question:
    What intuition tells you that the NP-complete problems couldn’t be solved by some polynomial algorithm which can’t be run because it doesn’t meet the above constraints?

  25. Tom Says:

    Who is your favorite complexity theorist?

  26. Matt Leifer Says:

    A while ago, I asked several academic publishers whether they would publish a book that had been made available electronically for free under a permissive license. Most of them said that they would provided they could have exclusive rights to the print version, so I was wondering about your experience of this.

    Did the fact that your lecture notes are available for free on the internet cause any problems with getting your book published, and what is the status of your online lecture notes now that the book is published, i.e. does CUP own the copyright or do you, and is CUP going to sue me if I print the whole thing out and give it to my students?

    OK, that’s several questions, but I bunched them all together so that I only had to use one question mark.

  27. Evan Says:

    What are your top 5 (or so) books that haven’t yet been written that you would most like to read? (Within a five-year horizon, so let’s say “a popular account of the proof that P!=NP” is a silly answer.)

  28. Jim Graber Says:

    Will we ever use a proof program the way we do Mathematica or Maple? Anytime soon? Why or why not?

  29. Scott Says:

    Barry McManus #3:

      What is the saddest thing?

    First thoughts:

    • The Holocaust.
    • The near-complete destruction of the civilizations of the Americas on contact with the West.
    • The massive, irreversible, easily-preventable loss of the earth’s biodiversity in our own time.
    • The fact that there’s no God willing or able to prevent any of the countless things like the above.

    But on reflection, talking about the deaths of millions of innocent people, or the problem of evil, is way too abstract and impersonal to evoke anything close to the maximum amount of sadness. I could easily give you a “sadder thing” (the “saddest thing”? I don’t know) by relating—in graphic, novelistic detail—the true story of a single child somewhere being raped at gunpoint and then forced by soldiers to dig her own grave.

    But I don’t want to do that, because it would be too sad.

  30. Bram Stolk Says:

    Could the outcome of the double slit experiment be explained by ‘lazy evaluation’ by the computer simulating our universe? The quantum eraser experiment shows that a slit is chosen only if we measure AND we keep the measurement result. ‘Lazy evaluation’ of our universe could be skipping the slit-choice.

  31. hull Says:

    Mathematician5 #22
    Sorry if the question is not properly posed, I’m absolutely not an expert… does the question make more sense if I add something like
    “a ‘real’ q. computer (not a simulated one)…” ?
    What I would like to understand is what Prof. Aaronson thinks about the practical difficulties of such a task and how he would tackle them.

  32. Sid Says:

    How long would you want to live if you could live as long as you wanted in good health?

  33. Scott Says:

    ppnl #20:

      Could you do a quantum version of Conway’s game of life?

    Yes, of course, why not? There’s an entire field devoted to quantum cellular automata. On the other hand, one immediate difficulty is that Conway’s Game of Life is not a reversible CA. So it’s far from obvious what the “right quantum analogue” of GoL would be. My guess is that there are many different possible “quantum Games of Life” that could be suggested. To decide between them, we should probably just study them and see which one produces the most interesting behavior—i.e., the same thing Conway did to invent his otherwise-arbitrary rule for the original Game of Life!
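
    To see the irreversibility concretely, here’s a throwaway sketch (Python, purely illustrative): two distinct configurations that step to the same successor, so there’s no way to run the rule backwards, and hence no direct way to promote it to a unitary.

      def life_step(grid):
          """One step of Conway's Game of Life on a finite grid with dead boundary."""
          rows, cols = len(grid), len(grid[0])
          def live_neighbors(r, c):
              return sum(grid[r + dr][c + dc]
                         for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                         if (dr, dc) != (0, 0)
                         and 0 <= r + dr < rows and 0 <= c + dc < cols)
          return [[1 if (grid[r][c] and live_neighbors(r, c) in (2, 3))
                   or (not grid[r][c] and live_neighbors(r, c) == 3) else 0
                   for c in range(cols)] for r in range(rows)]

      empty = [[0] * 4 for _ in range(4)]
      lone_cell = [[0] * 4 for _ in range(4)]
      lone_cell[1][1] = 1                       # a single live cell dies of underpopulation

      assert life_step(empty) == life_step(lone_cell)   # two inputs, one output: not reversible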

      Now look at one of the cells. What happens to all the others?

    Well, if it’s entangled (or even just classically correlated) with those other cells, then after we condition on the outcome of our measurement, the states of those other cells will change, of course. That’s just standard QM and probability theory.

  34. Xiaotong Ni Says:

    Maybe a stupid question, but I’m always wondering why people try to prove P!=NP even though people still cannot prove P!=PSPACE? Thanks 🙂

  35. itssummer Says:

      The massive, irreversible, easily-preventable loss of the earth’s biodiversity in our own time.

    I am unsure if it is easily preventable. You should take a sabbatical in India or China and see the standard of living there. One can only guess what will happen if even 25% of their populations reach the US standard of living. That would be like two more US’s. Maybe P=NP will save us all. If P!=NP we are all probably doomed.

  36. Bill Kaminsky Says:

    [Preliminary Note: Hmm… seems like my first attempt got eaten by WordPress… apologies for any redundancy… hopefully this second attempt not only goes through, but also is actually more understandable.]

    To a rough order of magnitude, do you have any sense of what would be the size of the 3-SAT instance whose (un)satisfiability would signify the (non)existence of a proof of some interesting-to-humans theorem that’d fit within, say, 10 journal pages if the proof were rewritten in the usual, mathematics-for-humans form?

    I ask since SAT heuristics like survey propagation can sometimes handle millions of variables in linear time. Might SAT heuristics be getting to the point that one might hope for sorta-brute-force computer-generated proofs of some wide swathe of interesting-to-humans theorems?

    P.S. I’ve done a nontrivial amount of Googling about automated theorem proving using state-of-the-art SAT solvers, but alas it seems to me that at least among the papers Google brings up (e.g.,”Encoding First Order Proofs in S[atisfiability] M[odulo] T[heories]” http://people.clarkson.edu/projects/carl/papers/smt_encoding.pdf) those in automated theorem proving aren’t focusing on the new nifty SAT heuristics like survey propagation, but good ol’ DPLL… especially since they seem focused actually on how you’d solve *UNSAT*. Please correct me if I’m wrong, but UNSAT is co-NP and proving P = NP wouldn’t settle NP vs. co-NP, correct? )

  37. Ken Schutte Says:

    I’ve read somewhere that you criticize (but then, ultimately accept) Dirac (bra-ket) notation. As an outsider (computer scientist), I agree. Every other branch of math does just fine with standard notation for vector spaces, inner products, conjugate transpose, etc. If you were inventing notation for QM from scratch, would you choose different notation? What?

  38. Jr Says:

    How much General Relativity do you know?

  39. Scott Says:

    Xiaotong #34:

      Maybe a stupid question, but I’m always wondering why people try to prove P!=NP even though people still cannot prove P!=PSPACE?

    P≠NP is the “flagship” question of our field, and it’s a perfectly good choice of flagship—but the fact that it plays the flagship role doesn’t mean that complexity theorists currently spend much time trying to prove it. It’s just too hopelessly far beyond our current abilities—and so, for that matter, is the P vs. PSPACE question!

    So even the few people who are currently doing serious, ambitious work on circuit lower bounds—such as Ryan Williams, Ketan Mulmuley, and Ran Raz—concentrate for the time being on vastly easier questions even than P vs. PSPACE. One example was the NEXP vs. ACC question, which Ryan actually solved in his 2010 breakthrough. Some examples of current “frontier” questions are EXP vs. ACC, NEXP vs. TC^0, and derandomizing polynomial identity testing.

  40. Scott Says:

    Jr #38:

      How much General Relativity do you know?

    Embarrassingly little. I know SR, and I’ve read at least two dozen expositions of GR (popular, technical, and everything in between, including Einstein’s originals), and I usually felt while reading the expositions that I more-or-less understood. But put me on a desert island with some blank paper, and could I retrace Einstein’s path? Doubtful.

    For me, the three main stumbling blocks have always been the following:

    1. I hate tensor notation, with the raised and lowered indices, the μ’s and ν’s, the implicit summations, and the “artificial compression” of what are actually extremely complicated differential equations. (I once decided to expand out Einstein’s equation, “G=8πT,” directly in terms of the metric tensor; the first level of that expansion is sketched right after this list. It took more than a page, but I did it, and I felt better after I did!) Now, I have enough experience (see upcoming answer about Dirac ket notation) to know that my revulsion is purely a function of my lack of familiarity, and that if I studied the subject more, I’d probably grow to love the convenience and power of tensor notation. But it still puts me off today! 🙂

    2. There are many topics in classical physics that are conceptually or historically prior to GR, and that I already don’t understand well enough. Those include gauge theory, Maxwell’s equations, the Lagrangian formulation of classical mechanics, celestial mechanics (i.e., how to solve Newton’s equations), and perturbation methods.

    3. Most important of all, I’ve never been clear on what my goal should be in learning GR. What questions am I trying to answer about it? Something to do with computational complexity, maybe? And if I don’t have a definite goal, then why isn’t my current, handwavy understanding good enough? Now, you might say: isn’t satisfying my intellectual curiosity a worthy goal on its own? And I’d say, sure it is, in principle! But in actual fact, being the slovenly animal that I am, I’ve never been able to force myself to learn anything whatsoever—in math, CS, physics, you name it—without a very visible carrot dangling in front of me to help organize my thoughts. (Now that I have tenure, though, maybe I’ll search for such a carrot, just because of the excuse it would give me to finally learn GR!)
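
    (Since I mentioned it in point 1: the first level of that expansion, in one common sign convention and in units where Newton’s constant and c equal 1, is just the standard textbook chain below; substituting each line into the one above it is what produces the page-long mess.)

      G_{\mu\nu} \;=\; R_{\mu\nu} - \tfrac{1}{2} R\, g_{\mu\nu} \;=\; 8\pi T_{\mu\nu}
      R_{\mu\nu} \;=\; \partial_\lambda \Gamma^\lambda_{\mu\nu} - \partial_\nu \Gamma^\lambda_{\mu\lambda} + \Gamma^\lambda_{\lambda\sigma} \Gamma^\sigma_{\mu\nu} - \Gamma^\lambda_{\nu\sigma} \Gamma^\sigma_{\mu\lambda}
      \Gamma^\lambda_{\mu\nu} \;=\; \tfrac{1}{2} g^{\lambda\sigma} \left( \partial_\mu g_{\nu\sigma} + \partial_\nu g_{\mu\sigma} - \partial_\sigma g_{\mu\nu} \right)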

  41. Scott Says:

    Ken Schutte #37:

      If you were inventing notation for QM from scratch, would you choose different notation? What?

    No, I can’t suggest any clear improvement over ket notation (though I also haven’t thought about it much). In fact, I confess that I sometimes slip into ket notation even when doing linear algebra that has nothing to do with QM! 🙂

    There are at least three advantages of ket notation:

    1. Provided we avoid the slip I mentioned above, ket notation makes it obvious at a glance when we’re talking about a quantum state—with all the specific facts and intuitions about quantum states that one can then “load into working memory.” (For example: this vector has to be normalized; its global phase is irrelevant; we can only learn about the vector’s components by applying a quantum measurement, not by observing them directly.)

    2. Ket notation lets us avoid subscripts that we’d constantly be forced to use with conventional vector notation. E.g., instead of v_{0,x,y}, whose meaning would then need to be defined every time, we can just write |0,x,y⟩.

    3. The entire issue of primal vs. dual vectors is handled automatically. E.g., it’s obvious at a glance that ⟨ψ|φ⟩ is a number whereas |ψ⟩⟨φ| is a rank-1 matrix.
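
    To make point 3 concrete, here’s a tiny numerical sketch (Python/NumPy, purely illustrative) of the bra-ket pairing as an inner product and the ket-bra pairing as a rank-1 outer product:

      import numpy as np

      psi = np.array([1, 1j], dtype=complex) / np.sqrt(2)   # |psi>
      phi = np.array([1, 0], dtype=complex)                  # |phi>

      inner = np.vdot(psi, phi)           # <psi|phi>: conjugate psi, dot with phi
      outer = np.outer(psi, phi.conj())   # |psi><phi|: a 2x2 matrix

      print(inner)                            # a single complex number (here ~0.707)
      print(np.linalg.matrix_rank(outer))     # 1, i.e. a rank-1 matrix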

  42. Mike Lawler Says:

    In 10 years, how many digits do you think the largest known prime number will have? How about in 100 years?

  43. Jimmy Says:

    Who’s yer daddy?!

  44. Joshua Zelinsky Says:

    A while ago you mentioned the class of problems “Are there infinitely many n such that BB(n) is [even],[odd],[prime],etc.” where BB(n) is the Busy Beaver function. In that context, is it even obvious that for any k, BB(n+1)-BB(n)>k for all sufficiently large n? I’d certainly guess so, but I don’t see any obvious way to rule out even almost all of BB(n)’s growth occurring at n which are far away from each other.

  45. Artem Kaznatcheev Says:

    Computational modeling and simulations are becoming very common in evolutionary biology and ecology, as is the vague concept of “complexity” (in the SFI sense). However, I am not seeing much headway of computational complexity into evolutionary biology and ecology. Valiant’s work on evolvability (as an extension of PAC-learning) is interesting, but does not resonate with biologists, and Chaitin’s metabiology is a false start. How can computational complexity (and theory A more generally; i.e. algorithmic lens) best contribute to evolution of biological and social systems? Can this contribution ever be as substantial as the one to physics?

  46. Isaac Duhor Says:

    If the universe/multiverse is infinite, then there have to be infinitely many worlds, and so we would have infinitely many Earth-like planets, and even a planet very similar to this one, but without leap years, or kangaroos, or Santa Claus. Do you agree with this line of thinking, and if not, what are the arguments against it?

  47. SomeGuy Says:

    What is the probability that the matrix we live in is totally-unimodular?

  48. Vadim Says:

    If you weren’t in academia, is there anything else you could see yourself doing?

  49. Anthony Says:

    What are the most interesting open questions relative to BosonSampling in your opinion?

  50. Serge Says:

    Scott,

    You may ask me anything too – which I’ll answer if I want! 🙂

    Serge #24

  51. Scott Says:

    Vadim #48:

      If you weren’t in academia, is there anything else you could see yourself doing?

    As far as I can remember, the evolution of my career goals was more-or-less as follows:

    garbage man (age ~2) → eye doctor (age ~3) → spaceship designer (age ~5) → marine biologist (age ~8) → videogame designer (age ~10) → writer, novelist, children’s-rights advocate, and social reformer (age ~12) → software/Internet entrepreneur (age ~13) → AI researcher (age ~15) → theoretical computer scientist (age ~16)

    Nowadays, I can still imagine having become some sort of (failed?) professional writer or software entrepreneur, had my teenage years gone a little differently.* But I’m very happy with where I ended up. Among other things, in this job I get to interact quite often with my friends in AI and software—and, of course, I get to write! I don’t have that much contact with marine mammals though.

    (* Or, of course, an AI researcher, but that would probably still be academia!)

  52. Felipe Says:

    Scott,

    Can you sign my copy of QCSD? (I can drop by your office pretty much any time in the next 3 years, whenever’s most convenient for you..)

  53. Scott Says:

    Anthony #49:

      What are the most interesting open questions relative to BosonSampling in your opinion?

    Well, we gave a long list of them in our paper!

    The most obvious problems are to prove our #P-hardness conjecture and our anti-concentration conjecture. I’m optimistic that one or even both of these might be settled in the near future.
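
    (For concreteness: the quantities underlying BosonSampling are permanents of n×n matrices built from the interferometer’s unitary, which is where #P-hardness enters. Here’s a minimal sketch, just for illustration, of exact permanent computation via Ryser’s formula; note the ~2^n running time, exactly what you’d expect for a #P-hard quantity.)

      from itertools import combinations

      def permanent(A):
          """Exact permanent via Ryser's formula: O(2^n * n^2) time."""
          n = len(A)
          total = 0
          for r in range(1, n + 1):
              for S in combinations(range(n), r):
                  prod = 1
                  for i in range(n):
                      prod *= sum(A[i][j] for j in S)
                  total += (-1) ** r * prod
          return (-1) ** n * total

      print(permanent([[1, 2], [3, 4]]))   # 1*4 + 2*3 = 10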

    After that, perhaps the next problem is whether there’s a “traditional” decision problem that’s solvable by a BosonSampling device but that’s intractable classically. I’ve revised my opinion a bit, and now conjecture that the answer might be yes (but I’m really not sure).

    After that, the next biggest problems all involve bridging the gap between our result and what the experimentalists can currently do. Some examples:
    – Can you still give formal evidence for the hardness of classical simulation if the inputs are Gaussian states, rather than single-photon Fock states?
    – What if the number of modes is only linear in the number of photons, rather than quadratic?
    – What if the real distribution has constant variation distance from the ideal distribution, rather than 1/poly distance?
    – What if a constant fraction of the photons get lost along the way?

    Finally, there are questions that I really love, but that might be harder to motivate to non-complexity-theorists. These include: is there an oracle where, contrary to my and Alex’s conjectures, FBPP=FBQP but the polynomial hierarchy does not collapse? (Note that Fortnow and Rogers gave an oracle where P=BQP but the polynomial hierarchy doesn’t collapse. But even the intermediate case of PromiseBPP=PromiseBQP remains open.)

  54. Scott Says:

    Felipe #52:

      Can you sign my copy of QCSD? (I can drop by your office pretty much any time in the next 3 years, whenever’s most convenient for you..)

    Of course!! Just let me know by email when you want to stop by, so I can make sure to be around.

  55. Scott Says:

    SomeGuy #47:

      What is the probability that the matrix we live in is totally-unimodular?

    LOL! Well, for the unitary matrix describing our world to be totally-unimodular, it would need to be a permutation matrix times a +1/-1 diagonal matrix. So really you seem to be asking for the probability that our world is deterministic and classical! And I’d say that’s extremely close to 0. 🙂

  56. Bram Cohen Says:

    Well gee, coming up with a question feels like a huge responsibility, because it’s been so long since I spent time trying to think about deep complexity issues, or at least all the ones I’ve been thinking about lately require way too much backstory to be able to ask simply, and I don’t want to squander this opportunity to ask something deep. So at the risk of impinging on your no-D-Wave requirement, I have a question which is of very much interest to me due to my interest in walksat, which is: can you think of a classical problem where it seems likely that an adiabatic quantum computer could produce a better-than-quadratic speedup over classical stochastic search, and if so can you give an intuition of why that should be the case?

  57. Scott Says:

    Isaac Duhor #46:

      If the universe/multiverse is infinite, then there have to be infinitely many worlds, and so we would have infinitely many Earth-like planets, and even a planet very similar to this one, but without leap years, or kangaroos, or Santa Claus. Do you agree with this line of thinking, and if not, what are the arguments against it?

    I hate to be the one to break this news to you, but not even our world contains the third item you listed… 😉

    Seriously: from the hypothesis that the universe is infinite, it doesn’t logically follow that all possible Earth-like planets have to exist somewhere. For example, maybe there’s a yet-undiscovered law of physics implying that every Earth-like planet must eventually contain at least one kangaroo! 🙂 But, as the preceding sentence already illustrates, it’s extremely hard to imagine how the laws of physics could be, so that you could get an infinite universe without every possible Earth-like planet arising somewhere in it (indeed, arising infinitely many times) with probability 1. This is similar to how, while no one has rigorously proved that the binary expansion of π must contain the complete works of Shakespeare and every other finite string, on statistical grounds it’s very hard to imagine how that couldn’t be the case.

    (Note that, when it comes to the physical universe, if you accept that quantum mechanics applies always and everywhere, then you could alternatively use that to argue that every possible Earth-like planet arises with some nonzero probability amplitude, and therefore infinitely often with probability 1 in an infinite universe.)

    For the reasons above, physicists and cosmologists who worry about such things do typically assume that an infinite universe would contain infinitely many observers nearly-identical to ourselves—thereby leading to all the infamous philosophical puzzles associated with the anthropic principle, the cosmological measure problem, the Boltzmann brain problem, etc. (Google those things if you don’t know what they are.)

  58. Anthony Says:

    thanks!

  59. Adam Says:

    Does the fine structure constant limit or enhance the power of quantum computers?

  60. Chan Li Says:

    Do you believe that 9/11 was an inside job? Official explanations of the collapse of WTC #7, the crash of Flight 77 into the Pentagon, and the crash of Flight 93 at Shanksville are simply unbelievable.

  61. Scott Says:

    Chan Li #60:

      Do you believe that 9/11 was an inside job?

    Not only do I emphatically reject that view, I don’t believe any conspiracy theory — as in none, zero, not one. I believe there have been conspiracies, of course—the conspiracy of the actual 9/11 hijackers was one straightforward example—but I regard conspiracy theories as one of the worst and ugliest known bugs in human reasoning. Everyone should learn the telltale signs of conspiracy theories as part of their education, then run away from them like the plague.

    For more, see my satirical essay “Occam’s Boxcutter”, which I wrote back in 2002 (when the “9/11 was an inside job” virus was already proliferating).

  62. Scott Says:

    Adam #59:

      Does the fine structure constant limit or enhance the power of quantum computers?

    Almost certainly it has no effect on their power! In principle, if the decimal expansion of the fine structure constant were to encode, say, the solutions to the halting problem, then by measuring the constant to greater and greater precision, we could boost our computational power from BQP up to some fixed language in the class BQP/poly.

    On the modern view, however, the fine structure constant has some completely different value at Planckian energies anyway. And it probably has the value it does at low energies because of some complicated sum of contributions arising from a more fundamental theory, not because its decimal expansion (or expansion in any base) encodes a message from God.

  63. Scott Says:

    Arrogant Undergrad #4:

      What do you think are a few of the most accessible open problems in complexity theory to someone with a limited background […] ?

    That’s an incredibly hard question to answer, but check out my “Projects Aplenty” post for a few of my ideas. If you want more, my suggestion is to can the “arrogance” and go talk to a professor at your own university! 🙂 (Or just scan the latest STOC/FOCS/CCC proceedings and ECCC preprints, looking for problems that strike your fancy.)

  64. Scott Says:

    itssummer #5:

      Razborov produced a superpolynomial monotone circuit lower bound for an NP-complete problem. Is an exponential lower bound provable for the same model with our current knowledge?

    OK, I was just googling your question (you’re welcome 🙂 ), and it looks like the best currently-known lower bound on monotone circuit size is basically that of Alon and Boppana: 2^{Ω(√k)} for deciding whether a graph of size n has a k-clique, provided k ≤ n^{2/3}. (Note that if k is constant, then Amano proved an essentially-optimal monotone lower bound of ~n^k.) I suppose the main question is whether the 2^{Ω(√k)} can be improved to 2^{Ω(k)} for non-constant k by modifications of existing techniques, whether bold new ideas are needed, or maybe even whether the truth is 2^{o(k)}. But given that I just now had to look up what the situation was, I can hardly be expected to have a strong intuition about this question! My advice is to go post your question over on CS Theory StackExchange.

  65. Scott Says:

    itssummer #6:

      Could you comment more on the scaling part?

    Oh alright. What I was trying to explain to you there was simply that the quadratic advantage of Grover’s algorithm, compared to the best classical algorithm, is only a proven fact in situations where you know for sure that brute-force search is the best classical algorithm! However, if you were trying to prove (let’s say) that a quantum computer can solve some problem in O(n^2) time that a classical computer can’t solve in O(n^2) time, then this would not be your actual situation. Yes, the quantum computer could run Grover’s algorithm to search a list of size n^4 in O(n^2) time. But how would you prove that a classical algorithm can’t do the same? Since the size-n^4 list is much larger than the actual input (which, by definition, has size n), the list must have been generated by some algorithmic process, rather than being completely arbitrary. But that means that the classical algorithm might work in a much cleverer way than simply searching the list element-by-element! Indeed, ruling that possibility out would require a breakthrough on the scale of P≠NP.
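
    Spelled out, the arithmetic behind “search a list of size n^4 in O(n^2) time” is just:

      \text{Grover queries to search } N \text{ items} \;=\; O\big(\sqrt{N}\big), \qquad N = n^4 \;\Rightarrow\; O\big(\sqrt{n^4}\big) = O\big(n^2\big).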

    I hope that makes it clearer. If not, then once again I’ll direct you to CS Theory StackExchange.

  66. Scott Says:

    me #7:

      Could you explain Fisher information in a way that makes it seem inevitable?

    Given that I just had to look up and remind myself what Fisher information was, I’m going to guess that the answer to this is no. 🙂

  67. Scott Says:

    bitter #9:

      how come you had the title of assoc prof before you had tenure? Is it common that the two promotions are treated separately, or were your circumstances exceptional?

    No, at MIT it’s completely standard. First you become what’s called “Associate Without Tenure” (AWOT), and only ~3 years after that are you considered for tenure.

  68. Scott Says:

    Lukasz #12:

      how do you estimate the probability of us being a simulation run by someone else, i.e. we don’t have any ‘physical’ bodies?

    If the simulation was perfect, then by hypothesis, the “fact” of our being in the simulation would have no observable consequences whatsoever. In which case, it’s not even clear what it means to discuss the probability of such a thing being true or false.

    As I was saying before, in the Matrix movies they obviously had to assume that the simulation wasn’t perfect (since otherwise, it could’ve been any “ordinary” movie—say a romantic comedy—and the audience would’ve been none the wiser!). And I’d estimate the probability of our living in an imperfect simulation like the one depicted in those movies as exceedingly close to 0.

  69. Patrick Says:

    Just talking with a buddy about this quantum internet paper from los alamos. http://arxiv.org/pdf/1305.0305v1.pdf

    We got curious about something, probably pretty basic from your perspective, and I remembered that you were doing the AMA.

    So…how do you set the state of a qubit? Are people running this network setting the spin of one entangled photon to 1 and then the other entangled photon gets set to 0? If yes, how do you actually set the spin?
    Or are they just measuring the qubit, but not setting it, and using it as a one-time pad to encrypt data and then sending it conventionally? (Relying on the fact that the receivers will get the binary complement of their one-time pad.)

    More generally, I’m a little foggy on how you would configure or initialize the state of any quantum computing device to be able to ask it a question.

  70. wolfgang Says:

    Scott #68:

    >> And I’d estimate the probability of our living in an imperfect simulation like the one depicted in those movies as exceedingly close to 0

    I know I have only one question, but I *have* to ask this follow-on question:

    You seem to say there are basically three possibilities:
    R: the world is real as it is
    P: we live in a perfect simulation
    I: we live in a flawed simulation

    You say we cannot distinguish R and P so it follows that p(R) = p(P)

    you also think that p(I) ~ 0

    So since p(R) + p(P) + p(I) = 1 it follows that p(P) > p(I)

    Why would you think a perfect simulation is more likely than an imperfect one?

  71. Scott Says:

    Mathematician5 #14:

      Is the universe (or the entire history of the universe) really just a vector in a Hilbert space, and if not, where does quantum theory break down?

    That’s an enormous question, and one that I’ve addressed from different angles many times in this blog’s history (try searching!).

    The short answer is that the universe being “just a vector in Hilbert space” is certainly the view that the logic of quantum mechanics militates toward. Obviously the word “just” has to be interpreted with extreme care in the last sentence: it’s true only in the extreme-reductionist sense that, say, World War II was “just” the movement of a large number of quarks, gluons, and other subatomic particles. With that caveat in mind, though, the superposition principle is like a “universal acid”: introduce it anywhere, and it immediately wants to apply everywhere to maintain logical consistency. On the modern, “decoherence” view, even the measurement process is understood as just another example of unitary evolution of a system coupled to its environment. And in nearly a century, no one has yet come up with any elegant way to modify quantum mechanics in a way that avoids these conclusions.

    On the other hand, taking the above seriously leads to the famous “measurement problem,” of explaining how it is that the Born Rule (and more broadly, our subjective experience) arises from unitary evolution—and to the never-ending debate about whether we’re forced to accept the reality of Many Worlds, and what exactly one even means by that. So I think it’s best to keep an open mind about the possibility that quantum mechanics might someday be superseded—even as we insist, vehemently, that any “claimant to the throne” reproduce all the existing successes of QM as a prerequisite to being taken seriously.

  72. Scott Says:

    wolfgang #70:

      Why would you think a perfect simulation is more likely than an imperfect one?

    You misunderstood me: in your notation, I was claiming that R=P, that there’s nothing meaningful we can even point to that would distinguish those possibilities. Thus, I think “p(P)” is large simply because I think p(R) is large! And I think p(I)~0 simply because p(I) represents the probability of Keanu Reeves materializing in my living room with a slow-motion kung-fu kick, or some similar event that in ordinary language would be called “supernatural.”

    Incidentally, your dilemma reminds me of Homer Simpson’s, after he’d finally reached the guru on a remote mountaintop in India to whom he could ask 3 questions:

    “Are you really the guru?”
    “Yes.”
    “Really?”
    “Yes.”
    “You?”
    “Yes.”

    🙂

  73. Scott Says:

    Patrick #69:

      So…how do you set the state of a qubit?

    Do you just mean in general? (Remember the rules: no questions that require me to read and comment on a particular paper.)

    Do you agree that it’s possible to prepare (say) a photon in a state of horizontal or vertical polarization? Or to put an electron in either its ground state or its first excited state? If so, then you know how to set the state of a qubit!

    At a more philosophical level, it’s an interesting point that even state preparation ultimately relies on measurement: to set a qubit to |0⟩, we first measure it to see whether it’s in the state |1⟩, and only if it is do we then apply a NOT gate to change its state to |0⟩. Or to say it more generally, state preparation is inherently an irreversible, entropy-increasing process.

    But that’s fine! After all, entropy-increasing processes are constantly taking place all around us, and initializing a classical computer’s memory to the all-0 state is an example of such a process as well.
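
    In simulation, that “measure, then conditionally flip” reset looks something like this (Python/NumPy, purely illustrative; real hardware does the same thing with laser or microwave pulses rather than matrix multiplications):

      import numpy as np

      rng = np.random.default_rng()
      X = np.array([[0, 1], [1, 0]], dtype=complex)              # the NOT gate

      def reset_to_zero(state):
          """Measure one qubit in the computational basis, then flip it if the outcome was 1."""
          p1 = abs(state[1]) ** 2                                 # Born rule: Pr[outcome = 1]
          outcome_is_one = rng.random() < p1                      # the irreversible, entropy-increasing step
          post = np.array([0, 1] if outcome_is_one else [1, 0], dtype=complex)
          return X @ post if outcome_is_one else post             # conditionally apply NOT

      state = np.array([np.sqrt(0.3), np.sqrt(0.7)], dtype=complex)   # some arbitrary qubit
      print(reset_to_zero(state))                                 # always [1.+0.j 0.+0.j]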

  74. wolfgang Says:

    Scott,

    we all know that Homer is real, but how can I be sure that you are really real?

    Oooops, already my 3rd question … sorry.

  75. Scott Says:

    davood #18:

      As a comment to your previous post you said “my tenure case heavily involved three things—algebrization, BosonSampling, and quantum money”. How d’ya know? Are these 3 items what you emphasized in your statement, or have you seen letters of others, or what?

    No, I certainly didn’t see any letters of others! On the other hand, many times over the last year, I got requests from senior colleagues to explain various aspects of my work, in order that they could then explain those aspects to other people at the department, School of Engineering, and then Institute level. These requests were often surprisingly detailed (“in your quantum money scheme, how do you evaluate the low-degree polynomial on a superposition of subspace elements without destroying the superposition?”). So from them, I could piece together a rough picture of what must have been going on above my head.

  76. Jordan Ash Says:

    If quantum computing is equal parts computer science and physics, then why do most QI people assume we *require* physically different hardware to achieve scalable UQC?

    $0.02: Should scalability be fundamentally about the intelligent manipulation of information, it shouldn’t matter whether we’re using trapped ions or an abacus to perform efficient UQC.

  77. Carl Says:

    Do you believe God would be able to solve the halting problem, decide undecidable questions, etc.? (This is assuming you believe in a god but I only got one question.)

  78. Scott Says:

    hull #19:

      Suppose you are engaged to lead a project to try to build, in a reasonable amount of time (20 years, 30 years?), a “real” q. computer … Suppose you are given an unlimited amount of resources to complete the task … What would you practically do?

    Step 1: Build a new facility (“Los Alamos II”?) specifically devoted to the task.

    Step 2: Hire David Wineland, Ray Laflamme, John Martinis, Rainer Blatt, Ike Chuang, Dave Cory, and all the world’s other leading QC experimentalists by offering them exorbitant salaries.

    Step 3: Put them in a conference room and let them hash out several different routes to be pursued in parallel.

    Step 4: Give them all the money they need to pursue whichever routes they decide on.

    “It’s not exactly rocket science — just quantum information science!” 🙂

  79. Nagesh Adluru Says:

    Hi Scott,

    Thank you for this opportunity!!

    Due to the journey of my career, I came to realize that there is an increasing push towards finding biological and molecular correlates to the patterns in social sciences and psychology using tools such as advanced imaging and genomics!

    With current social experimental designs, for the most part it is hard to distinguish if the individual differences in psychology and behavior arise from biology or if the biological effects observed are due to behavior which are mostly caused by much higher-order constructs such as culture, parenting or other such “brain washing” mechanisms etc.

    What kind of breakthroughs do you think/feel are required in social sciences or psychology (or any other discipline) for minimizing the probability of self-destruction of human species a.k.a the “world-peace problem”?

    Thank you!
    Sincerely,
    Nagesh

  80. ThereIsNoGoo Says:

    Do you believe in Goo?*

    *”Goo” is any instance of the continuum, e.g. a copy (topologically) of the real line, anywhere in the actual real physical world.

  81. Peter Boothe Says:

    We know an algorithm (Levin universal search) that achieves the optimal runtime for all problems in NP intersect co-NP but we do not know the runtime of that algorithm. What’s up with that?

  82. Scott Says:

    Carl #77:

      Do you believe God would be able to solve the halting problem, decide undecidable questions, etc.?

    Well, if you simply defined omniscience to be one of God’s attributes (as, I guess, many philosophers and theologians would?), then obviously God could solve the halting problem, along with every other well-posed mathematical problem. Certainly there’s nothing logically contradictory about an entity that’s not itself a Turing machine being able to solve the halting problem for Turing machines. In fact, computability theorists reason about such hypothetical entities all the time! (They call them “oracles.”)

    On the other hand, we could generate a variant of the ancient dilemma of God making a stone so heavy that not even He can lift it, by asking whether God can solve His own halting problem—and thereby run forever if He halts and halt if He runs forever.

    On the third hand, I find it amusing to contemplate a God who wouldn’t be omniscient—just extremely, unimaginably scient—whose mind would encompass (say) the NP-complete problems, yet would be just as impotent as ours when faced with #P-complete problems and higher.
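
    For concreteness, here’s the diagonal construction behind that second dilemma, written as code (Python, with gods_halting_oracle a placeholder for the hypothetical oracle; there’s nothing you could actually put in its body):

      def gods_halting_oracle(program_source, program_input):
          """Hypothetically: return True iff the program halts on the given input."""
          raise NotImplementedError("no Turing machine computes this")

      def adversary(source):
          """Halts if and only if the oracle predicts it doesn't."""
          if gods_halting_oracle(source, source):
              while True:       # oracle said "halts", so spite it by looping forever
                  pass
          # oracle said "runs forever", so spite it by halting immediately

      # Feeding adversary its own source code makes the oracle wrong either way:
      # that's the "stone too heavy to lift" for an oracle asked about itself.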

  83. me Says:

    Assume that large-scale quantum computers can be built. Could there be a fundamental reason for the difficulty of building an n-qubit quantum computer to “increase exponentially in n”?

  84. Scott Says:

    Jimmy #43:

      Who’s yer daddy?!

    Uh, my father’s name is Steve Aaronson. He worked as a science writer and then as a PR executive at several companies. He also plays bass guitar and is an avid bicyclist.

  85. Scott Says:

    Sid #32:

      How long would you want to live if you could live as long as you wanted in good health?

    In this thought experiment, do I have to fix a number ahead of time? I.e., will I be like some legendary figure who can’t die before his appointed date, even if he later decides that he wants to? If not, then I’ll simply say “forever,” with the option to kill myself after (say) a trillion years if things have gotten too boring by then.

  86. Douglas Knight Says:

    The world can’t just be a point in a Hilbert space because all points in all Hilbert spaces are the same (up to dimension). How can we extract from it the laws of physics? Well, it’s not just a Hilbert space, but there’s also a one-parameter family of unitary transformations. Those are equivalent to their spectrum. I suppose that it is possible that the spectrum encodes the laws of physics, but it doesn’t sound appealing. I think von Neumann’s answer was that there is, in addition to the Hilbert space, a von Neumann algebra acting on it.

  87. Scott Says:

    me #83:

      Could there be a fundamental reason for the difficulty of building an n-qubit quantum computer to “increase exponentially in n”?

    Sure, that’s a logical possibility. But on our current understanding of physics, it would be extremely mysterious—in my opinion, every bit as mysterious as a scalable QC just being flat-out impossible. One would demand to know: what is it about the laws of physics that causes the difficulty to increase exponentially with n? For example, is the issue that QC can only work in a “postselected” setting, where the number of postselection steps increases linearly with the size of the computation, so that the probability of all the postselections succeeding decreases exponentially? If so, then what new physical principle on top of QM prevents non-postselected QC from working?
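
    To spell out that last falloff: if each of k postselection steps independently succeeded with some fixed probability p < 1, then the chance that all of them succeed would be

      \Pr[\text{all } k \text{ postselections succeed}] \;=\; p^k \;=\; 2^{-\Theta(k)}.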

  88. Scott Says:

    Jordan Ash #76:

      If quantum computing is equal parts computer science and physics, then why do most QI people assume we *require* physically different hardware to achieve scalable UQC?

    One answer is already implicit in your question. If you don’t have physically different hardware, then you’re not talking about something that’s “equal parts computer science and physics”: you’re just talking about computer science, as conventionally understood!

    Of course, in principle it could turn out that BPP=BQP, in which case quantum computers would be much less useful than we thought—being needed only for polynomial speedups or for applications (e.g., quantum money) that inherently involve the processing of quantum information. But an algorithm to simulate arbitrary quantum computations in classical polynomial time would be so unlike anything we know, that the burden is on anyone who considers that a serious possibility to give evidence for it. (The best evidence, of course, would be to code up a simulator and use it to break RSA!)

  89. Sabrina Ortiz Says:

    What do you think we’ll achieve first: a breakthrough in genetic engineering so we can create a flying horse (like Pegasus), or a breakthrough in robotics and nanotechnology which allows us to build a flying mechanical horse?

  90. Scott Says:

    Bram Stolk #30:

      Could the outcome of the double slit experiment be explained by ‘lazy evaluation’ by the computer simulating our universe?

    There is something about quantum mechanics that’s strikingly reminiscent of “lazy evaluation” in CS. That is, the universe “cooks up a probabilistic answer to order” for whatever question you asked, but subject to that, it leaves anything that you didn’t ask about in whatever superposition of answers it was in before! (Or at least, that’s one way to summarize the rule for a partial measurement.)
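
    (For readers who know the quantum side of the analogy better than the CS side: “lazy evaluation” just means that values get computed only at the moment someone asks for them. A minimal Python illustration:)

      def lazy_squares():
          """An infinite stream of squares; nothing is computed until a value is requested."""
          n = 0
          while True:
              yield n * n        # computed on demand, one value per request
              n += 1

      stream = lazy_squares()                            # no work done yet
      print(next(stream), next(stream), next(stream))    # 0 1 4 -- only three values ever computed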

    However, I have two strong cautions:

    First, if you consider (say) a quantum computer with millions of entangled particles, then “lazy evaluation” seems like a misnomer! 🙂 This is still a form of evaluation that (in some sense) works astronomically harder than all the classical computers on earth, even if it doesn’t work any harder than it “needs” to work.

    Second, I wouldn’t say that lazy evaluation “explains” quantum measurement—just that the two things are analogous to each other. Personally, I don’t see enough explanatory oomph here to go further than that.

  91. Scott Says:

    Sabrina #89:

      What do you think we’ll achieve first: a breakthrough in genetic engineering so we can create a flying horse (like Pegasus), or a breakthrough in robotics and nanotechnology which allows us to build a flying mechanical horse?

    The flying mechanical horse. Depending on what exactly the rules are (i.e., how horse-like does it have to be?), we can probably already build that today.

    Addendum: A colleague of mine has pointed out that we can easily get either type of horse today, as long as it only needs to “fly” for a very short time. 🙂

  92. Scott Says:

    Everyone: I’m going to sleep now! Thanks for all the great questions, and I apologize if yours wasn’t answered yet—I’ll get to it, honest!

  93. Bill Kaminsky Says:

    Having done some further Googling and skimming, I’d like to revise my question from

    To a rough order of magnitude, do you have any sense of what would be the size of the 3-SAT instance whose (un)satisfiability would signify the (non)existence of a proof of some interesting-to-humans theorem that’d fit within, say, 10 journal pages if the proof were rewritten in the usual, mathematics-for-humans form?
    (see Comment #36 above).

    to the following more qualitative (and more sweeping) one:

    *******
    Wait a second! What’s all this?! Is it somehow the case we could prove P = NP yet still be *unable* to search efficiently for short proofs of theorems that indeed possess short proofs?!
    *******

    I ask since my Googling and skimming found this paper from the 1994 STOC:

    Jean Goubault “The Complexity of Resource-Bounded First-Order Classical Logic”, available freely at http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.41.3225

    At the risk of violating your caveat #3 about not having to read anything from the literature, lemme quote a brief part of Goubault’s abstract and a brief part from his conclusion.

    ABSTRACT: In classical first-order logic without interpreted symbols, we prove that… the search for a proof of bounded difficulty (i.e., for a simple proof) is $\Sigma^p_2$-complete [i.e., the 2nd level of the polynomial hierarchy]. We also show that the same problem when the initial formula is a set of Horn clauses is only NP-complete.

    CONCLUSION: … Although the problem [of bounded proof search for classical first order logic] has a complicated structure (it is not likely to be in NP, contradicting the common idea that proof search is just another combinatorial problem [Bill’s emphasis]), it lies in the polynomial hierarchy, and usually not higher than level 2, even with interpreted symbols. Hence, mechanized classical first-order theorem proving is not utopic.

    So do human mathematicians still have jobs in a world where P = NP but the polynomial hierarchy doesn’t fully collapse?

    Thanks again, Scott! Yayyyy, tenure!!

  94. Alex Says:

    Fuck Scott, with comment 29 you spoiled my day. But I am glad that you share the opinion that this is the saddest thing ever. 🙁

  95. Tobi Says:

    What are some of the things that you did in college and afterwards that helped you become the researcher you are today and acquire tenure?

  96. itssummer Says:

    Followup https://scottaaronson-production.mystagingwebsite.com/?p=1385#comment-73364

    There already seems a question

    http://cstheory.stackexchange.com/questions/4248/improved-lower-bound-on-monotone-circuit-complexity-of-perfect-matching

    From what I understand from the answer and the question, it looks like the permanent, or a #P-complete problem, has only an n^{Ω(log n)} lower bound, since perfect matching has such a lower bound. So exp lower bound is known in this case.

    However, the NP-complete problem has a subexponential (exp(n^a) with a < 1) lower bound, from what you are quoting.

    Is my interpretation correct? Why is there a difference between the #P and NP-complete lower bounds? I would have thought #P was harder and hence should have an exp(n^a) lower bound.

  97. itssummer Says:

    should be “So exp lower bound is UNKNOWN in this case (for permanent).”

  98. Carl Says:

      On the other hand, we could generate a variant of the ancient dilemma of God making a stone so heavy that not even He can lift it, by asking whether God can solve His own halting problem—and thereby run forever if He halts and halt if He runs forever.

    Yeah, that’s what I meant, but I was typing on an iPhone. I’d really like to see a serious theologian tackle this problem. Is God His own oracle or not? If He’s not, He’s hardly omniscient, just, like, magnascient. On the other hand, if He is, then He (and by extension reality itself) is self-contradictory, which is unsettling.

  99. Ashley Says:

    Well, how much do you sleep on average daily? 🙂
    (Of course I asked this just as you commented that you are going to sleep, but I would like to understand whether intellectually productive people are usually those who get a lot of sleep too. If you find this nosy, my apologies.)

  100. aykut Says:

    Physics is looking for the axioms of nature by observing what nature produces (physical events/experiments). From the incompleteness theorem we know (in a vague manner) that for any axiomatic system we have either an incomplete or an inconsistent set of axioms. I have the following questions; since they all indicate the same phenomenon (I hope), they can all be considered as one. If not, please answer whichever you choose.

    1. Are physicists assuming that nature is consistent?
    2. If yes, then it seems to me that at best we can have physical laws that explain only some fraction of nature?
    3. If no, then have we observed any inconsistent behavior in nature?

    Thanks.

  101. PS Says:

    When was the last time you wrote a computer program? What did it do, and what language was it in?

  102. Rahul Says:

    What are the recent (or underrated) advances in Comp. Sci. (QC or not) that (will) translate into the largest applied / technological impact?

    To illustrate what I ask with examples from the past: Dijkstra’s shortest path algorithm or Fast Fourier Transforms or Gale–Shapley or Mersenne twister.

    Not sure whether these are the best examples (or even correctly classified as Comp. Sci.) but I’m looking for those algorithmic / theoretical advances which might end up having immediate applications or might solve some applied hurdle. Even ones that are already in use but relatively unknown / underrated are welcome.

    Thanks!

  103. cgy4ever Says:

    What will you ask if you see Terry Tao post “Ask Me Anything!” on his blog?
    (And you can only ask one question, like us.)

  104. Scott Says:

    PS #101:

      When was the last time you wrote a computer program? What did it do, and what language was it in?

    A couple years ago, I wrote a quiz program to help me memorize Hebrew words, in QBASIC. If you’re willing to go back a decade, you’ll find a much larger number of programs, mostly written in C — and if you go back two decades, then a much larger number still, but again written in QBASIC. 🙂

    (Note: Here I’m not counting little 20-line programs that I write to do Monte Carlo simulations and confirm or disconfirm this or that conjecture. In those cases, I’m using the programming language basically as a glorified calculator.)

  105. Scott Says:

    Ashley #99:

      Well, how much do you sleep on average daily?

    The normal amount, ~8 hours (though with considerable variation). Now that the baby is around, it certainly isn’t in contiguous blocks, and is often on an opportunistic basis.

  106. Scott Says:

    Rahul #102:

      I’m looking for those algorithmic / theoretical advances which might end up having immediate applications or might solve some applied hurdle.

    Two recent examples that spring to mind immediately are fully homomorphic encryption and the sparse Fourier transform (google them). I’d be extremely interested if other commenters want to suggest more examples!

  107. Gugg Says:

    “[I]f you want a universe with certain very generic properties, you seem forced to one of three choices: (1) determinism, (2) classical probabilities, or (3) quantum mechanics.” (You in QCSD.)

    If one understands the 0-norm to be the intersection of the 1 and 2-norms, shouldn’t the trilemma really be the following (for any universe with very generic properties)?

    1. Determinism, classical probabilities and quantum mechanics are all true.
    2. Classical probabilities is true; the other two are false.
    3. Quantum mechanics is true; the other two are false.

  108. Scott Says:

    cgy4ever #103:

      What will you ask if you see Terry Tao post “Ask Me Anything!” on his blog?

    Well, I had the opportunity to meet Terry at UCLA, where I gave him some problems I’d been saving up for a while (especially the “Aaronson-Ambainis Conjecture” about bounded low-degree polynomials), and he generously spent a half hour thinking about them and gave me some potentially-useful leads. (I also asked him about whether he thought the value of BB(10) was independent of ZF set theory, and about how having a baby was affecting his career.) Since then, he’s also kindly answered several of my MathOverflow questions. So if Terry were to hold such an event, I’d probably figure that I’d already asked the poor dude enough for the time being. 🙂

  109. wolfgang Says:

    Carl #98

    >> then He (and by extension reality itself) is self-contradictory, which is unsettling

    I think you are too narrow minded. If He is *truly* omniscient, then He can be consistent even if He is self-contradictory.

    Just as He can create the world even if He does not exist …

  110. Michael Bacon Says:

    Scott@78,

    I’ve thought about this some and I believe that it would be useful to throw in a few physics and CS “theorists” as well. And perhaps a couple of mathematicians for good measure. Also, I’d add some kind of generalist — perhaps someone with a penchant for jazz or Ikebana — just to sit in the conversations and make occasional comments. What do you think?

  111. Scott Says:

    Mike Lawler #42:

      In 10 years, how many digits do you think the largest known prime number will have? How about in 100 years?

    You might enjoy Section 3.3 of Why Philosophers Should Care About Computational Complexity on the large conceptual difficulties in even defining what we mean by the largest “known” prime!

    But suppose I make the assumption—which seems reasonable for the foreseeable future—that the “largest known prime” is extremely likely to be a Mersenne, 2^k-1 (because of the availability of the Lucas-Lehmer test), and that to “know” it simply means to have written down k, and to have verified by computer that 2^k-1 is prime.
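
    For reference, the Lucas-Lehmer test mentioned above is simple enough to sketch in a few lines (here in Python; a textbook version, not GIMPS's optimized code): for an odd prime k, 2^k-1 is prime if and only if s_{k-2} ≡ 0 (mod 2^k-1), where s_0 = 4 and s_{i+1} = s_i^2 - 2.

        def lucas_lehmer(k):
            """Return True iff 2**k - 1 is prime, for odd prime exponent k."""
            m = 2**k - 1
            s = 4
            for _ in range(k - 2):
                s = (s * s - 2) % m
            return s == 0

        # Mersenne prime exponents among the first few odd primes:
        print([k for k in (3, 5, 7, 11, 13, 17, 19, 23, 31) if lucas_lehmer(k)])
        # -> [3, 5, 7, 13, 17, 19, 31]   (2**11 - 1 and 2**23 - 1 are composite)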

    In that case, I would generate my projection for 10 years from now by simply taking all the results of GIMPS since 1996, plotting them on a log plot, and extrapolating the result to 2023! (I’d be amused if someone actually wants to do that, or has already done it.)
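
    Here is a minimal sketch of that extrapolation in Python, assuming a hypothetical file gimps.csv containing one “year,digits” row per record GIMPS discovery (the file name and its contents are placeholders, not real data): fit log10(digits) linearly against year and read off the value at 2023.

        import csv
        import math

        # Hypothetical input: gimps.csv, one "year,digits" row per record prime.
        years, log_digits = [], []
        with open("gimps.csv") as f:
            for year, digits in csv.reader(f):
                years.append(float(year))
                log_digits.append(math.log10(float(digits)))

        # Ordinary least-squares fit of log10(digits) against year.
        n = len(years)
        ybar = sum(years) / n
        lbar = sum(log_digits) / n
        slope = sum((y - ybar) * (l - lbar) for y, l in zip(years, log_digits))
        slope /= sum((y - ybar) ** 2 for y in years)
        intercept = lbar - slope * ybar

        print("projected digits in 2023: ~10^%.1f" % (slope * 2023 + intercept))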

    I’d be hesitant to predict anything for 100 years from now, since all my predictions about Mersenne primes would be hostage to my predictions about other issues, like whether humans will have uploaded themselves to a post-Singularity borg, or (more likely) whether civilization will have collapsed.

  112. Scott Says:

    Michael Bacon #110:

      I’ve thought about this some and I believe that it would be useful to throw in a few physics and CS “theorists” as well. And perhaps a couple of mathematicians for good measure. Also, I’d add some kind of generalist — perhaps someone with a penchant for jazz or Ikebana — just to sit in the conversations and make occasional comments. What do you think?

    Well, theorists would obviously be needed for this project: at the very least, you’d want a large team thinking up better fault-tolerance schemes. But I’d put the experimentalists in charge of assembling that team.

    Regarding mathematicians, musicians, and generalists: personally I couldn’t agree more! (What’s “Ikebana,” though?) On the other hand, the question asked how I’d go about building a QC given unlimited resources, not how I’d go about assembling the most interesting people to have a rollicking good time. 😉

  113. Michael Bacon Says:

    Scott@112:

    Ikebana is the Japanese art of flower arrangement, also known as kadō: http://en.wikipedia.org/wiki/Ikebana

    Regarding your later point, I think it’s critically important to assemble the most interesting people to have a rollicking good time. In the off-chance that Gil Kalai turns out to be right, at least the time spent wouldn’t have been wasted. 🙂

  114. CognitiveScientist Says:

    (congrats on your tenure!)
    Can you point to the main obstacles and/or draft a research program for how to build an intelligent machine/software? Here, for our purposes, an intelligent system is defined as one that can do everything that humans can do.

  115. Rahul Says:

    Scott #106:

    Thanks! I never knew about homomorphic encryption: I just read about it and it sounds fantastic.

    Yes, I'd be interested in hearing about more such advances from other commenters too.

  116. Rahul Says:

    Scott #104:

    A couple years ago, I wrote a quiz program to help me memorize Hebrew words, in QBASIC. If you’re willing to go back a decade, you’ll find a much larger number of programs, mostly written in C — and if you go back two decades, then a much larger number still, but again written in QBASIC

    This is a classic example of how, contrary to popular perception, computer science has very little to do with computers themselves. 🙂

  117. Nex Says:

    Do you believe in free will?

  118. Scott Says:

    Nex #117: “It’s complicated.” 🙂 As it happens, I’ve been working on and off for the past two years on a huge essay setting out my thoughts about free will and predictability—and the essay will be online in just a week or two! So, can you will yourself to wait for that?

  119. Richard Says:

    How has your progress in learning Hebrew gone? For example, can you explain what is/was the hardest aspect and how you dealt with it?

  120. Clayton Says:

    If you guess randomly, what are the approximate odds of getting the answer to this question correct?
    A) 25%
    B) 50%
    C) 33%
    D) 25%

  121. B. Bogart Says:

    Given that simulations are intentionally created in the context of human knowledge and reality is independent of human knowledge, what is the relation between a perfect simulation and reality?

  122. Karthik Says:

    How does the below approach sound for factoring large integers?

    *Get the nearest even number to the given integer.
    *Factor the even number in many different combinations
    *Now get a collection of primes nearer to the above factors
    *Within the collection, multiply each number with every other number and check if we get the given number.

    Thanks.

  123. NotAnonymous Says:

    Is this AMA still going on?

    In reference to age ~10 of Scott #51:

    What videogame(s) would you design based on your research interests (broadly construed) and are all ideas on this blog public domain/free to use?

  124. Mark Says:

    What is the nature of time?

  125. Scott Says:

    Mark #124:

      What is the nature of time?

    I just posted a video about that very topic last week! And, the nature of time being what it is, the fact that last week was before this week means that you can now watch it. 😉

    (For more, check out Section 10 in Why Philosophers Should Care About Computational Complexity.)

  126. Scott Says:

    NotAnonymous #123:

      What videogame(s) would you design based on your research interests (broadly construed) and are all ideas on this blog public domain/free to use?

    That’s 2 questions. Pick one or the other, dude. 🙂

  127. Juan Says:

    What do you think of David Deutsch's Constructor Theory? Here is his paper on the subject on arXiv:

    http://arxiv.org/abs/1210.7439

    Or is it something that is not on your radar at the moment?

  128. Bob Says:

    What books/films would you like to see Lily grow up with?

  129. Scott Says:

    Richard #119:

      How has your progress in learning Hebrew gone? For example, can you explain what is/was the hardest aspect and how you dealt with it?

    I can speak at basically the level of a 3-year-old. I can order at a restaurant in Israel and be understood just fine, but the waiter will embarrass me by answering in English. Or (say) if Dana is arguing with her parents in rapid-fire Hebrew, I can pick up enough words and phrases to figure out what’s at issue.

    If you were to plot my progress, you’d see a very gradual increase from the ages of 3 to 13, then a slow decline back to almost nothing (i.e., the alphabet, counting to 10, yes and no) after I left Hebrew day school, then a sharp increase when I started dating Dana and decided to practice with her, other Israelis I knew, Rosetta Stone, Pimsleur, my own QBASIC program, and anything else I could think of, and since then, my knowledge either remaining static or else slowly deteriorating again.

    This might surprise people, but I’ve found that the hardest aspect of learning a foreign language is that you have to memorize, like, thousands and thousands of words. Coincidentally, I ran into exactly the same difficulty when I lived in Hong Kong at age 13, and tried to learn Mandarin (which was what the school offered—it didn’t offer Cantonese). My brain is really not well-adapted to learning languages, at least outside the brief childhood window of opportunity.

    I’m still trying to find ways to deal with that problem (hence my QBASIC program). You’d think having an Israeli wife would help, but normally she just wants me to quickly understand the content of whatever she’s saying and therefore switches to English. Her parents provide better practice, since their English is only a little better than my Hebrew, forcing us to use a combination of languages. And there are also some other Hebrew learners at MIT who I practice with.

    After the sheer memorization of words, my next biggest difficulty is the “combinatorial conjugation” (male/female, singular/plural, past/present/future), which follows rules that are “easy” if you’re a native Hebrew speaker but pretty random and arbitrary if you’re not.

  130. Me Says:

    @Karthik
    Sorry i’m curious, how would you factor a Mersenne-Number with your approach?

  131. NotAnonymous Says:

    Scott #126:

    That’s 2 questions. Pick one or the other, dude. 🙂

    I’ll pick question number 1 then.

    At least the answer wasn’t simply “no”. 🙂 Actually this makes no sense since it can’t be read as a logical AND.

  132. Michael Sweney Says:

    “The mouse is eating cat food, but the cat food bowl is broken”

    What conditions must be met for an answer to this to be acceptable to a grumpy zen master?

  133. Jimmy Says:

    Are you a Bayesian? If not, could you describe your non-Bayesian belief system in short?

  134. William Banks Says:

    Maybe, it’s the equality sign? Maybe, the very expression of mathematical equality is in its essence illusory?

  135. John Sidles Says:

    Scott, here is a meta-question that partly reflects general curiosity with regard to the future, and partly is a tribute to your recent MIT promotion!

    MIT background information
    • MIT was founded in 1861 (one hundred fifty-two years ago).
    • Google Earth shows MIT’s Killian Court (just south of the Infinite Corridor) standing eight feet above mean sea level.

    The Question Asked  How many of the following ten yes-or-no assertions can be predicted with accuracy 55% or better, by a STEM researcher of average knowledge and imagination, after (at most) two minutes of consideration for each question.

    The questions concern either MIT specifically, or the STEM community broadly, in the year 2085 … that is, half as far in the future as MIT's founding is in the past.

    —————

    Assertion 1:  In the year 2085, MIT’s Killian Court will stand below mean high-tide sea level, and winter snow will be uncommon in Massachusetts.

    Assertion 2:  At least one MIT undergraduate engineering degree program will include the word “quantum” on its diploma.

    Assertion 3:  At any given time, the majority of MIT students will *not* attend classes on-campus and will *not* view lectures in-person.

    Assertion 4:  At least some MIT courses will be taught (and graded) entirely by artificial intelligences.

    Assertion 5:  At least one quantum computer will exist on the MIT campus that is used mainly for practical computation.

    Assertion 6:  Post-graduation, the majority of MIT students will *not* take jobs associated to residency within the United States.

    Assertion 7:  Post-graduation, the majority of MIT students will *not* take jobs associated to residency upon the planet Earth.

    Assertion 8:  The Earth’s population will be less than five billion.

    Assertion 9:  Median global life-expectancy will be more than one hundred years.

    Assertion 10:  Histories of the 21st century will commonly mention at least three MIT faculty (by name) as exerting strong influence in regard to the outcomes of Assertions 1-9.

    —————

    Reminder  Individual responses are not solicited (except voluntarily) … rather, the question asked is whether (in your opinion) the STEM community can generate at least some predictions regarding the future of MIT (and the STEM enterprise) that are appreciably better than random guesses.

  136. Bill Kaminsky Says:

    Ooops, I withdraw my revised question (Comment #93) above as it betrays me to be an idiot (d’oh!) who never understood what the polynomial hierarchy is and thus never understood that it’d all collapse down to P if P = NP. 🙁

    If it suits you, I suppose my original question (Comment #36) is still valid:

    To a rough order of magnitude, do you have any sense of what would be the size of the 3-SAT instance whose (un)satisfiability would signify the (non)existence of a proof of some interesting-to-humans theorem that'd fit within, say, 10 journal pages if the proof were rewritten in the usual, mathematics-for-humans form?

    But if that question doesn’t float your boat (e.g., it’s too tied up in minutiae about optimal encodings of some set of axioms and inference rules into a SAT instance), how ’bout this general ethical question:

    *****
    What’s your view on the proper obligation of one to society, especially if one has scientific skills? Is the threshold-o’-goodness met by

    1) “merely” some basic standard of reciprocity (e.g., the Golden Rule in one of its many tellings… how ’bout Hillel’s since it touches on scholarship: “That which is hateful to you, do not do unto your fellow man. That is the whole Torah. The rest is commentary. Now go forth and study.”)

    or are there more positive obligations like

    2) Tithing income and/or time to charity

    and/or additionally

    3) Speaking out politically, especially on issues on which you are expert

    or even more

    4) Spending at least some of one’s vocational time on research topics likely to have positive societal impact within, say, a 10 year time horizon

    or, my personal favorite,

    5) Building a suit of powered assault armor and subduing terrorists and supervillains while abjuring the many temptations to womanize that are just oh-so-synonymous with being an MIT-educated superscientist 😉
    *****

  137. Scott Says:

    Nagesh #79:

      What kind of breakthroughs do you think/feel are required in social sciences or psychology (or any other discipline) for minimizing the probability of self-destruction of human species a.k.a the “world-peace problem”?

    I’m not sure whether there are any plausible breakthroughs in social sciences or psychology that would contribute substantially to solving the “world-peace problem” (there might be, but I can’t think of them). Personally, I see the problem as fundamentally not scientific, but political. Namely, the sane, level-headed, democracy-respecting, scientific-tempered, solution-oriented people have to win over the screaming maniacs who believe anything is justified because they have God (or the Historical Dialectic, or whatever) on their side. And what makes the problem so intractable is precisely that, as their descriptions already indicate, the “screaming maniac” side will almost always be able to muster more boldness and zeal than the sane side.

    Still, I agree that there are certain scientific advances—for example, the development of a superhuman AI—that could completely alter the outlook for this ancient conflict between the maniacs and the “saniacs” (though exactly how is hard to foresee).

  138. Travis Says:

    What is the secret to achieving happiness?

  139. Totient Says:

    How do you decide what is ethical or not? I’m not sure how many moral dilemmas a CS professor runs into on a regular basis, but you seem like a very reasonable sort of person who would agree that some actions are okay, and others are most definitely not.

    (I know there’s a lot of philosophy devoted to the question, but I want to know what *you*, Scott Aaronson, uses as moral rules/guidelines/heuristics…)

  140. Scott Says:

    Bram #56:

      can you think of a classical problem where it seems likely that an adiabatic quantum computer could produce a better than quadratic speedup over classical stochastic search, and if so can you give an intuition of why that should be the case?

    It’s a very good question.

    If you’re allowed to tailor your instance specifically to make (say) classical simulated annealing fail—even though someone who knew how the instance was designed could easily solve it classically—then this paper by Farhi, Goldstone, and Gutmann answers your question. Specifically, they construct a set of fitness landscapes such that the adiabatic algorithm reaches the global minimum in polynomial time, whereas simulated annealing gets stuck in a local minimum and needs exponential time to get out of it.

    Now, a harder question—one that I’ve been raising for years (see Section 4.2 of this paper)—is whether you can design a set of fitness landscapes {f} such that the adiabatic algorithm reaches the global minimum in polynomial time, whereas any classical randomized algorithm provably requires exponentially many queries to f.

    In trying to answer this harder question, an extremely natural idea is to use the Childs et al. conjoined-trees construction—which is known to provide an exponential speedup by a quantum walk compared to any classical randomized query algorithm—and then somehow “embed” the conjoined trees into a fitness function f, in such a way that the adiabatic algorithm will simulate the quantum walk algorithm that we already know works for the conjoined-trees problem. And apparently there was recent progress toward doing this! But

    (a) While I saw the preprint on the arXiv about a year ago, I can't seem to find it now—does anyone want to help track it down? 🙂

    (b) The preprint in question didn’t quite achieve a separation using the adiabatic algorithm itself, but only using the “NON-adiabatic algorithm”—i.e., an algorithm that’s like the adiabatic algorithm, except that it transitions from the ground state up to the first excited state and then back down to the ground state.

    Of course, even if problem (b) could be solved, it would remain an outstanding challenge to find a more natural, less contrived example where the adiabatic algorithm got a super-quadratic advantage. I regret that I don’t have a good intuition about this challenge. However, see this important paper by Freedman and Hastings for some very recent results relevant to the challenge—indeed, the only reason I’m not writing more about their paper is that I haven’t yet read and understood it!

    Ultimately, of course, the question of whether the adiabatic algorithm can ever get a superquadratic advantage (or even, for that matter, a quadratic advantage) “in practice” might need to be settled experimentally.

  141. Anonymous Programmer Says:

    How do you think consciousness and free will are related to quantum mechanics?

  142. Scott Says:

    Gugg #107:

      If one understands the 0-norm to be the intersection of the 1 and 2-norms, shouldn’t the trilemma really be the following (for any universe with very generic properties)?

      1. Determinism, classical probabilities and quantum mechanics are all true.
      2. Classical probabilities is true; the other two are false.
      3. Quantum mechanics is true; the other two are false.

    Thanks, that’s a very interesting and amusing way to put it! But I suppose it ultimately boils down to a matter of definition. Many people would interpret “Classical probabilities is true” to mean, not merely that all your states are probability vectors, but also that some states are nontrivial probability vectors. Likewise, they would interpret “Quantum mechanics is true” to mean that some states are nontrivial quantum superpositions.

    One technical comment/correction: in case 2, the probabilistic case, you also shouldn’t rule out quantum mechanics being true—since maybe your states are “really” quantum mixed states whose density matrices just happen to be diagonal!

  143. Joe Shipman Says:

    Why do you call your blog “Shtetl-Optimized”?

  144. Scott Says:

    Joe Shipman #143:

      Why do you call your blog “Shtetl-Optimized”?

    I’m sure I’ve answered that question several times before on this blog—search for it!—but I also like the answer Brian Hayes gave just recently.

  145. Dan Ashwander Says:

    Am I insane?

  146. Scott Says:

    John Sidles #135: Predicting “with accuracy 55% or better” is an extremely low bar—indeed, a forecaster would only have to adjust her “true” probabilities (whatever they were) by at most 5% to clear that hurdle. And that, in turn, would decrease her “expected rightness” by well under 1 assertion out of the 10. Indeed, I read in Nate Silver's book that weather forecasters routinely do this: that is, they adjust probabilities that the computer model says are 50% to either 40% or 60%, simply because laypeople think 50% sounds too wishy-washy, and they're willing to take a slight hit in accuracy to keep up appearances! So, adopting the same principle, I'm going to say that all 10 of your assertions could readily be predicted with “accuracy 55% or greater.”

  147. ramsey Says:

    How do you work?

    More specifically, how do you go about coming up with ideas, how do you structure your work time, and what do you do when you’re stuck?

    Also, how have your work habits changed over time?

  148. Scott Says:

    Bob #128:

      What books/films would you like to see Lily grow up with?

    Good question! I’ll expose her to my own childhood and adolescent favorites: the classic Disney movies, Narnia and Bridge to Terabithia and Wrinkle in Time and Rats of NIMH, Tom Sawyer and Huck Finn, Asimov’s short stories, Martin Gardner’s games columns, Surely You’re Joking Mr. Feynman, Real Genius …

    Ultimately, though, she will form her own tastes, and they will differ from mine.

  149. PeterM Says:

    My question is partially motivated by the Foundation series. There is the science of psychohistory, about which Asimov hardly says anything specific; but still, even in its vagueness, the existence of this discipline within the stories is fun. I am curious if instead of making a novel around a fictitious science you could write an exposition of a fictitious proof of P does not equal NP. Preferably it would involve some sensationalist statement. For example, many people take it for granted that from a proof like that we would gain tremendous insight. However, at least when there is interaction, learning the truth of something is possible without extra insight. So, for the sake of a fun fiction, one could start: “In her monumental proof the first step was the realization that a so-called minimal proof of P not equal NP is absolutely insight-free, in the sense that any (…)”.

  150. Joe Shipman Says:

    The reason I asked was because I spent a while searching for this answer without success 🙁

  151. Peter Says:

    Given the potential security applications of quantum computing, what do you think will be the delay between (A) a “useful” large-scale quantum computer being built and used, and (B) such a computer– perhaps a different one– being publicly announced (and later accepted by the community)? And which will happen first?

  152. Scott Says:

    Shehab #15: Since I’m not entertaining complicated multi-part questions in this session, let me address your first question only.

      Can a quantum system generate true random sequence?

    The answer is yes, in two quite strong senses.

    The first sense is definitional: suppose you measure a bunch of 45-degree polarized photons in the horizontal/vertical basis, and you get outcomes that are NOT uniformly random bits. In that case, the photons wouldn’t have been described by quantum mechanics at all, but by some different theory that modifies QM. I.e., the Born rule is simply part of the definition of QM, and the rule requires true randomness.
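
    As a sanity check of that first, definitional sense, here is a short Born-rule computation (a sketch in Python with NumPy) for a photon polarized at 45 degrees and measured in the horizontal/vertical basis; the two outcome probabilities come out to 1/2 each, i.e., an unbiased random bit per photon.

        import numpy as np

        theta = np.pi / 4                                 # 45-degree polarization
        psi = np.array([np.cos(theta), np.sin(theta)])    # state written in the {H, V} basis

        p_H = abs(np.vdot(np.array([1, 0]), psi)) ** 2    # Born rule: |<H|psi>|^2
        p_V = abs(np.vdot(np.array([0, 1]), psi)) ** 2    # Born rule: |<V|psi>|^2
        print(p_H, p_V)                                   # ~0.5 and ~0.5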

    The second, much stronger and more interesting sense, is that if you can do a physics experiment that violates the Bell inequality, then either there must have been faster-than-light coordination between the two detectors, or else the outcomes must have had some genuine entropy, which moreover was “generated on-the-fly” at Alice’s and Bob’s respective detectors. Of course, what makes this relevant is that quantum mechanics does let you do experiments that violate the Bell inequality, and moreover those experiments have already been done. However, the crucial point is that the conclusion, of “true randomness,” doesn’t presuppose the truth of QM—only that the Bell inequality was violated and that superluminal signalling is impossible.

    This was the upshot of Conway and Kochen’s famous 2006 “free-will theorem,” but it’s also been noted independently by others (like me, for example, in my 2002 review of Stephen Wolfram’s book 🙂 ). More recently, this observation is also what underlies protocols for generating “Einstein-certified random numbers,” such as those of Pironio et al. or more recently Vazirani and Vidick.

  153. Scott Says:

    PeterM #149:

      I am curious if instead of making a novel around a fictitious science you could write an exposition of a fictitious proof of P does not equal NP.

    Sure! In some sense, few things are easier than generating a fictitious proof. 🙂 (Indeed, I’ve already described a few fictitious P≠NP proofs, most recently for my April Fools prank.)

    Of course the catch—which also applies to Asimov’s “psychohistory,” or for that matter, to his robots that were intelligent because of “positronic brains”—is that these fictional explanations always contain a black box somewhere, into which the entire difficulty gets shoved. (Another classic example is the “flux capacitor” from Back to the Future.)

    But crucially, consumers of science fiction turn out to be perfectly happy with “flux capacitors”! Indeed, even if Asimov could explain how his psychohistorical or positronic black-boxes actually worked (which, as he understood perfectly well, he couldn't), all he would've achieved would be to bore, alienate, and confuse most of his audience.

  154. John Sidles Says:

    Scott affirms (arguably incorrectly)  “John Sidles #135: Predicting “with accuracy 55% or better” is an extremely low bar — indeed, a forecaster would only have to adjust her “true” probabilities (whatever they were) by at most 5% to clear that hurdle.”

    LOL … Scott, your post’s reasoning is plausible-yet-faulty … in the sense that history teaches us that predicting the future with “55% accuracy or better” can sometimes be an extremely high bar … when expert-level human judgment is involved.

    Because  History in general, and STEM history in particular, is replete with episodes in which highly qualified experts do not appreciate that flipping a coin is (surprisingly often!) a superior predictive strategy to consulting those self-same experts.

    Example  Fusion researcher Lev Artsimovich: Fusion [power] will be there when society needs it. (1972).

    Generalization  Supposing (for the sake of argument) that making accurate long-term STEM predictions is indeed exceedingly difficult, one hopeful inference is simply this: it may be more feasible — even for individual STEM researchers — to actively control the outcome of Assertions 1–10 (of #135) than it is to passively predict those same ten outcomes.

    Conclusion  Particularly for young STEM researchers, the creative control of — and therefore the assumption of responsibility for — crucial aspects of the future is (assuredly) a happier circumstance than mere prediction of the future and (plausibly) an enterprise more likely of success!

  155. Jay Says:

    Suppose you were a piece of software designed to conduct some experiment and perform Bayesian reasoning.

    For what kind of experiment do you think your computation should diverge because of possible copies of yourself?

  156. Scott Says:

    Travis #138:

      What is the secret to achieving happiness?

    Bertrand Russell famously wrote a self-help book called “The Conquest of Happiness” while in the midst of a suicidal depression. Like Bertie, I’ve also spent a decent fraction of my life depressed, which makes me hesitant to field your question! On the other hand, right now I’m quite happy, so maybe I can take a swing.

    I don’t think there’s a “secret” to happiness: just non-secret, widely-agreed-upon information that each individual has to figure out how to apply. Do work that’s important and meaningful to you, and that you’re good at. Be part of a community that appreciates you. Don’t try to be something you’re not; it won’t work. Travel the world. Be careful with drugs. Perhaps most important, with very few exceptions, actually do the things that you fantasize about doing. (Good examples: asking someone out on a date, writing a book, creating a “zoo” of complexity classes. Bad example: shooting up an elementary school.)

  157. Nagesh Adluru Says:

    Thanks for the answer, Scott. Yeah, it is hard to foresee how some mindset-shattering paradigm shifts in understanding ourselves are possible. But a good subset of psychologists and neuroscientists seem to be sowing the seeds for such a possibility in the “near” future!

  158. Vinicius Says:

    How could you plan to become a “theoretical computer scientist” at the age of 16? I guess most people don't even have a clue what this is at that age 🙂

  159. Scott Says:

    Tobi #95:

      What are some of the things that you did in college and afterwards that helped you become the researcher you are today and acquire tenure?

    Took lots of math/CS classes; spent lots of time in the library (yes, people still went to libraries back then) hunting books and papers both classic and obscure (in math, CS, physics, philosophy, biology, social science…); emailed the authors of those books and papers; found wonderful mentors to bounce ideas off of; went to every conference and workshop I could; and most of all, spent unbelievable amounts of time trying to solve open problems, mostly unsuccessfully.

    Between the ages of 14 and 24, I had zero dating life, which was the overriding reason why I was so depressed (see #155). And if I had to do it over, I’d almost certainly do things differently. But the one thing I did have was math—endless hours to think about oracles and advice states and low-degree polynomials—and the tenacious hope that, if only I could succeed there, everything else in life would eventually sort itself out. So yes, I probably had a hunger that more contented, well-adjusted people don’t have.

  160. Scott Says:

    Bill Kaminsky #36:

      To a rough order of magnitude, do you have any sense of what would be the size of the 3-SAT instance whose (un)satisfiability would signify the (non)existence of a proof of some interesting-to-humans theorem that'd fit within, say, 10 journal pages if the proof were rewritten in the usual, mathematics-for-humans form?

    This is a wonderful question—for years I’ve tried to get students interested in a project that would help determine the answer to it! (So far without success.)

    There are really two blowups to worry about here: first, the blowup in translating a 10-page human-understandable proof into a machine-checkable format (HOL, Mizar, etc.), and second, the further blowup in translating the HOL or Mizar proof-checking process into 3SAT. If the translations were done naively, I can easily imagine that they’d produce a 3SAT instance with many billions of clauses.

    So the key research question is how to do the translations in a less naive way. I actually see that as a very deep question, and not just a grubby matter of whittling down constant factors. (Which is not the view Lubos Motl’s caricature of me would hold, but is the view I hold!) One wants to know: can (say) the rules of first-order logic and the axioms of ZF set theory be encoded in some much more “succinct” way? Which, as it happens, is directly related to another of my favorite open problems: namely, the problem of finding as small a k as possible for which the value of BB(k) (i.e., the kth Busy Beaver number) can be proved to be independent of ZF set theory.

  161. Scott Says:

    Matt Leifer #26:

      Did the fact that your lecture notes are available for free on the internet cause any problems with getting your book published and what is the status of your online lecture notes now that the book is published, i.e. does CUP own the copyright or you and is CUP going to sue me if I print the whole thing out and give it to my students?

    My editor, Simon Capelin, raised no problem whatsoever with my leaving the lecture notes online, and the lecture notes remain under my own copyright, not CUP’s. Simon’s open-mindedness about this issue was one reason why I decided to publish with CUP in the first place.

  162. Scott Says:

    Evan #27:

      What are your top 5 (or so) books that haven’t yet been written that you would most like to read?

    I’m sure there are hundreds, but here are the first five that came to mind:

    General Relativity for Theoretical Computer Scientists
    Quantum Field Theory (Plus Strings and AdS/CFT) for Theoretical Computer Scientists
    The Workings of the World’s Economy for Theoretical Computer Scientists
    The Brain for Theoretical Computer Scientists
    Mulmuley’s GCT Program Made Easy

  163. Ray Says:

    How has being in communist Berkeley during your formative years affected you?

  164. Alex Says:

    Scott #162

    If you ever find any of these books please let us know!!

  165. Scott Says:

    ThereIsNoGoo #80:

      Do you believe in Goo?*

      *”Goo” is any instance of the continuum, e.g. a copy (topologically) of the real line, anywhere in the actual real physical world.

    I suppose one can guess from your name which answer you’re hoping for! 🙂

    But it depends on what you mean by being “in the actual real physical world.” I believe that amplitudes in quantum mechanics are genuinely continuous—i.e., they’re arbitrary complex numbers of norm at most 1. The reason is that any attempt to “discretize” amplitudes seems to lead to mathematical nonsense.

    Crucially, however, amplitudes also aren’t directly observable. They’re simply used to calculate probabilities—which I also believe can be arbitrary real numbers in [0,1], but which also aren’t directly observable.

    And I also believe the holographic principle from quantum gravity, which implies that any “physical” Hilbert space should be finite-dimensional, and that any measurement that we can physically perform should have only a finite set of possible outcomes.

    (Whether the holographic principle is realized by space and time literally getting “discretized” at the Planck scale, or only in a more subtle way, I think has to be regarded as a profound open question, which might not even have an unambiguous answer.)

    Anyway, if the above is correct, then in some sense the continuum would be implicit in our best descriptions of Nature, yet it would never manifest itself directly in experiments. So for example, we could never exploit the continuum to build “hypercomputers” able to solve the halting problem in finite time, etc., and there would be no physically-meaningful questions whose answers would depend on the truth or falsehood of the Continuum Hypothesis, the Axiom of Choice, or anything of that kind.

    This balance between discrete and continuous probably won’t satisfy either the Wolframian digital-physics partisans or the Lubosian continuum-snobs! But it strikes me as not only strongly consistent with current physics but as having the harmonious ring of truth.

  166. Sniffnoy Says:

    Which would be more surprising: If PH were to equal PSPACE, or if the polynomial hierarchy were to collapse without PH being equal to PSPACE?

  167. Scott Says:

    Incidentally, Bill Kaminsky #93:

      So do human mathematicians still have jobs in a world where P = NP but the polynomial hierarchy doesn’t fully collapse?

    Err, if P=NP, that implies that the entire polynomial hierarchy collapses to P!

    But even setting aside that issue, the paper you link to must be defining the “proof search” problem in a nonstandard way. If you measure the “complexity” of a first-order proof in the most obvious way—i.e., by the number of symbols it contains over some fixed alphabet—and ask whether there exists a proof of a given statement with n symbols or fewer, then you clearly get a problem in NP.

  168. Scott Says:

    Aykut #100:

      1. Are physicists assuming that nature is consistent?
      2. If yes, then it seems to me that at best we can have physical laws that explain only some fraction of nature?
      3. If no, then have we observed any inconsistent behavior in nature?

    What on earth could it possibly mean for nature to be “inconsistent”? Or better: how would I recognize “inconsistent behavior” in nature if I saw it?

    Certainly we can imagine insane behavior in nature. For example, maybe the gravitational force will triple in strength tomorrow, then on Thursday God will hold a press conference and explain that that was a big mistake and there will be no further changes to the laws of physics, but then on Friday all protons will turn into tiny unicorns.

    But even if that happens, there still won’t be any logical inconsistency! 2+2 will still equal 4, Fermat’s Last Theorem will still hold, and the laws of physics will still be perfectly consistent—just vastly crazier than we’d thought.

    In short, the way I use words, there’s no conceivable state of affairs that could correspond to “nature being inconsistent.”

    Addendum: It’s probably relevant here to clear up a depressingly-common misconception regarding Gödel’s Theorem. Gödel didn’t prove, or find the slightest indication of, any “inconsistency in math.” In fact, if you’re unwilling to regard the truth or falsehood of arithmetical statements as having a definite meaning, then ironically you can’t even state Gödel’s Theorem itself! Gödel showed that any computable formal system for reasoning about arithmetic has to be either inconsistent or incomplete. For common formal systems like PA and ZFC, the truth is almost certainly the latter. But even if, implausibly, such systems turned out to be inconsistent, that would merely be a sign that we should switch to better formal systems! The truths of arithmetic themselves, like 2+2=4, would remain serenely unaffected by this formal axiomatic turmoil. Your common sense, and Winston Smith’s in the novel 1984, are perfectly correct on that point. 🙂

  169. Bill Kaminsky Says:

    Scott #160

    There are really two blowups to worry about here: first, the blowup in translating a 10-page human-understandable proof into a machine-checkable format (HOL, Mizar, etc.), and second, the further blowup in translating the HOL or Mizar proof-checking process into 3SAT.

    I’m not sure if the first of these blowups really is a problem. In the course of Googling and skimming*, I’ve come across proponents of the major higher-order-logic-based “proof assistants” like HOL Light, Mizar, Isabelle, and Coq saying the constant-factor blowup associated with rewriting human-readable informal proofs into software-readable formal proofs is actually quite small. (Of course, they might just be trying to sell you their, ummm, freely distributed software. 🙂 )

    For example, there’s the overview-for-lay-mathematicians article “Formal Proof — Getting Started” by Freek Wiedjik, Notices of the American Mathematical Society, 55 1408 (Nov 2008):

    Although formalization has become a routine activity, it still is labor intensive. Using current [2008] technology, a formalization will be roughly four times the size of a corresponding informal proof [measured as the size of the gzipped LaTeX file typesetting it] (this ratio is called the de Bruijn factor), and it will take almost a full week to formalize a single page from an undergraduate mathematics textbook.

    * DISCLAIMER: As you noticed my oopsie-daisy in Comment #93, Scott, you probably shouldn’t trust the formal logic literature reviewing abilities of a man who’d invoke the difference between $\Sigma^p_2$ and NP without realizing the whole polynomial hierarchy collapses to P if P=NP, oy! 🙁

  170. Bill Kaminsky Says:

    Speaking of oopsie-daisies, I should’ve mentioned:

    Wiedijk’s article is available online at: http://www.ams.org/notices/200811/tx081101408p.pdf

    That “de Bruijn factor” of formal-proof-size to informal-proof-size being roughly 4 is solely an empirical rule of thumb, based on a small-but-informed survey done by Wiedijk himself, which he documents on this webpage: http://www.cs.ru.nl/~freek/factor/. Specific “de Bruijn” values in his survey vary from roughly 0.5 to 9.
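
    In case anyone wants to compute the (gzipped-size) de Bruijn factor for their own proofs, here is a minimal sketch in Python; the file names proof.miz and proof.tex are placeholders for a formal proof script and the corresponding informal LaTeX write-up.

        import gzip
        from pathlib import Path

        def gzipped_size(path):
            """Size in bytes of the gzip-compressed file contents."""
            return len(gzip.compress(Path(path).read_bytes()))

        # de Bruijn factor: compressed size of the formal proof divided by the
        # compressed size of the corresponding informal (LaTeX) proof.
        factor = gzipped_size("proof.miz") / gzipped_size("proof.tex")
        print("de Bruijn factor (gzipped sizes): %.2f" % factor)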

  171. Scott Says:

    Michael Sweney #132:

      “The mouse is eating cat food, but the cat food bowl is broken”

      What conditions must be met for an answer to this to be acceptable to a grumpy zen master?

    The cat food bowl must have been broken by a single hand that was clapping. A hand belonging to a good-natured zen novice. A novice who is also a hungry mouse. A mouse with hands, I suppose. [INSERT GONG SOUND]

  172. Bill Kaminsky Says:

    And on the note of my literature reviewing abilities probably being a bit lacking here, I must ask (violating your 1 question per commenter rule):

    *****
    Are there any papers you recommend to the students you've unsuccessfully tried to get interested in this whole issue of finding succinct sets of axioms and inference rules that yield small SAT encodings?
    *****

    The only really pertinent paper that my Googling skills have found is:

    Deepak Ramachandran and Eyal Amir, “Compact Propositional Encoding of First-Order Theories” http://reason.cs.uiuc.edu/eyal/papers/compact-prop-aaai05.pdf

    ABSTRACT: In this paper we present polynomial-time algorithms that translate First-Order Logic (FOL) theories to smaller propositional encodings than achievable before in polynomial time. For example, we can sometimes reduce the number of propositions to O(|P| + |C|), or O(|P|^k · log|P|), for |P| predicates of arity k and |C| constant symbols. The guarantee depends on availability of some graphical structure in the FOL representation. Our algorithms accept all FOL theories, and preserve soundness and completeness (sometimes requiring the Domain Closure Assumption)… Standard translation techniques result in very large propositional encodings ( O(|P||C|^k) for predicates of arity k) that are often infeasible to solve…

    Of course, the most common computer proof assistants like HOL Light, Mizar, Isabelle, and Coq aren’t first-order logic, but higher-order logic, and I imagine the “expressiveness” of higher-order logic is responsible for the small “de Bruijn factors” mentioned in prior comments. Alas, I’ve found nothing on SAT instances associated with higher-order logic proof checking.

  173. Scott Says:

    William Banks #134:

      Maybe, it’s the equality sign? Maybe, the very expression of mathematical equality is in its essence illusory?

    Maybe, it’s the Psilocybin mushrooms? Maybe, a few folks here should try laying off them for a bit? 😉

    On that note, I’m going to sleep now. By my count, I’ve answered 62 questions so far, with 25 remaining to be answered.

  174. Sami Says:

    Hey Scott, if you want some help with Hebrew (or Italian, or Arabic), check out this YouTube channel; it's for a Palestinian girl who group up speaking Arabic/Hebrew/Italian. I use it to learn Italian.

    Hebrew Lessons: http://www.youtube.com/course?list=EC16EF9FB640E17048

    Arabic Lessons: pretty much the rest of the videos in the channel.

    Shalom!

  175. Sami Says:

    *grew up 🙂 I think I need to take English lessons.

  176. Mark H. Says:

    ‘X’ is to quantum computer as quantum computer is to conventional computer. What is ‘x’?

  177. Scott Says:

    Mark H. #176:

      ‘X’ is to quantum computer as quantum computer is to conventional computer. What is ‘x’?

    Yeah, yeah, I’ve been thinking about that one for more than a decade. It’s possible that there’s simply no ‘x’ that fills the analogy particularly well.

    (For a comparison, try this, suggested by Greg Kuperberg: ‘x’ is to a round earth as a round earth is to a flat one. What is ‘x’?)

    However, for my own best attempt to identify such an 'x', see my paper Quantum Computing and Hidden Variables. There I propose a model of computation that's able to search an unordered list of size N in time ~N^{1/3}, compared to ~N^{1/2} needed quantumly and ~N needed classically. This model can also do Graph Isomorphism (and breaking cryptographic hash functions) in polynomial time, but probably not NP-complete problems. Have I said enough to whet your interest yet? 🙂

  178. Scott Says:

    Serge #24:

      Why is polynomial behavior considered as the only practical definition of feasibility, given that the feasible algorithms actually require very low exponents and low constants to be executed on a computer?

    The entire premise of your question is mistaken. Theoretical computer scientists have been thinking about particular exponents (and even leading constants), and how to make them as low as possible, for the past half-century!

    Polynomial-time is universally recognized to be nothing more than the “first-pass approximation” to feasibility, but one that has the enormous advantage of mathematical simplicity (mostly because it’s closed under arbitrary polynomial-time reductions—in contrast to, say, quadratic time, which is not closed under arbitrary quadratic-time reductions).

      Subsidiary question: What intuition tells you that the NP-complete problems couldn’t be solved by some polynomial algorithm which can’t be run because it doesn’t meet the above constraints?

    See this post for my attempt to explain my intuitions. Most (not all) of them apply regardless of what the constants and exponents are.

  179. Scott Says:

    Tom #25:

      Who is your favorite complexity theorist?

    The answer changes from week to week. This week, however, it’s Boaz Barak, because of this post.

  180. Scott Says:

    B. Bogart #121:

      Given that simulations are intentionally created in the context of human knowledge and reality is independent of human knowledge, what is the relation between a perfect simulation and reality?

    I’m not sure I agree with the premise that simulations need to be “intentionally created in the context of human knowledge”: the notion of “simulation” seems relevant whenever the functional properties of one system are replicated by another system with a completely different physical or mathematical substrate. And that could happen, e.g., entirely inside a computer, with no conscious human being around to recognize it as a “simulation.”

    Furthermore, even if I grant the premise, I’m not sure why the intention in creating a simulation should be relevant in assessing the simulation’s “relation to reality.” Whatever our answer to that question—and whatever we interpret the question even to mean (it’s not obvious)—shouldn’t we be able to discuss the question purely in terms of the properties of the simulation itself, forgetting about who created it and why?

  181. Scott Says:

    Karthik #122:

      How does the below approach sound for factoring large integers?

      *Get the nearest even number to the given integer.
      *Factor the even number in many different combinations
      *Now get a collection of primes nearer to the above factors
      *Within the collection, multiply each number with every other number and check if we get the given number.

    Err, it sounds like someone totaling their car before making it out of the driveway! 🙂

    – “The nearest even number” to a given n is n itself if n is even. If n is odd, do you want n-1 or n+1?

    – How on earth do you “factor the even number in many different combinations”? You seem to have shoehorned the entire difficulty of the factoring problem into that one step! (Also, of course, every integer has exactly one prime factorization up to ordering, not “many different” ones.)

    – Finally, the prime factors of n+1 (or n+2, etc.) need not be anywhere close to the prime factors of n. Factoring is extremely “non-robust” in that sense. Try some examples and you’ll see!
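
    To illustrate that last point with a concrete sketch (in Python): consecutive integers are coprime, so the factorization of n+1 shares no primes at all with the factorization of n.

        def factorize(n):
            """Trial-division factorization; fine for small illustrative examples."""
            factors, d = [], 2
            while d * d <= n:
                while n % d == 0:
                    factors.append(d)
                    n //= d
                d += 1
            if n > 1:
                factors.append(n)
            return factors

        for n in (1000003, 1000004, 1000005):
            print(n, factorize(n))
        # 1000003 is prime, while its neighbors factor into completely different
        # primes: consecutive integers never share a prime factor.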

  182. internationalstudent Says:

    Why does the US not limit temporary visas by country, but limit the number of permanent visas (green cards) to 6000 per country? This brings in 50,000 H1Bs from India and maybe 20,000 from China, and they end up indentured to a particular company (they cannot leave the company) at a particular position for a decade or more (if the employee leaves, his queue starts all over again; if he is laid off, he becomes an illegal), while someone from Taiwan or the UK can get a green card in 6 months. The people who formulate the laws are from the best schools in the US (like MIT or Harvard). Do you think the lawmakers, and hence the US population they represent, voluntarily support a mild form of slavery, or rather indentured labor, even almost 150 years after emancipation? What mathematical law can explain such absurdities? I myself left the country to see my sick mom after a decade, and now I have problems getting back. Why are sensible modifications to the laws not achievable by people trained by the smartest professors on earth?

  183. Scott Says:

    Jimmy #133:

      Are you a Bayesian? If not, could you describe your non-Bayesian belief system in short?

    I’d ascribe maybe a 40% posterior probability to Bayesianism being true. (Up from my prior of 20%, after reading The Signal and the Noise by Nate Silver.)

    With 60% probability, I think quantifying our uncertainty using probabilities is great whenever possible, but is unambiguously meaningful only when an event is sampled according to a probabilistic process that we know something about theoretically (e.g., in quantum mechanics), or in the limited domain of “bettable events” (i.e., events that belong to a large-enough ensemble of similar events that one could form a decent betting market around them). In other cases—including many of the ones people care about the most—I think we’re really in a state of Knightian uncertainty, where at most we can meaningfully give upper and lower bounds on the probability of something happening. And in those situations, we might prefer “paranoid,” worst-case reasoning (as in Valiant’s PAC model, or indeed almost all of theoretical computer science) over Bayesian, average-case reasoning. Indeed, this might both explain and justify the phenomenon of risk-aversion in economics, as well as well-known “paradoxes” of decision theory such as the Ellsberg paradox.

    Again, though, that’s only with 60% probability, and is susceptible to revision as new information comes in.

  184. Scott Says:

    CognitiveScientist #114:

      Can you point to the main obstacles and/or draft a research program for how to build an intelligent machine/software?

    No, I can’t. (And even if I could, I certainly couldn’t do it in the space of a blog comment!)

    The most “obvious” approach to building an intelligent machine would be to have nanobots swarm through a human brain recording all the relevant information about it, then simply to simulate the brain neuron-by-neuron and axon-by-axon in a digital computer. But of course, many people have discussed that idea for a long time, and we still seem incredibly far from being able to do it. I regard whether or not this can be done—and what even counts as “all the relevant information” about a brain—as some of the most profound scientific questions that there are.

    Maybe it’s worth mentioning that, if you believe humans are basically “machines made of meat,” then a procedure to build intelligent machines already exists and is well-known. My wife and I carried it out ourselves this past year and can recommend it. 🙂

  185. Scott Says:

    Jim Graber #28:

      Will we ever use a proof program the way we do Mathematica or Maple? Anytime soon? Why or why not?

    It might be worth mentioning that I’ve still never really learned to use Mathematica or Maple! (As mentioned in previous answers, I more often use MS-DOS QBASIC.) So I regard it as entirely plausible that, a few decades from now, other math/CS researchers will regularly use automated theorem-proving programs, but I still won’t be using them. 🙂

    Or maybe not. The fundamental difficulty is well-known: the general problem of finding a proof (even a bounded-length proof) is NP-hard, whereas typically people only use Mathematica and Maple for problems that are in P or BPP. Of course, there are many NP-hard problems that computers, though they can’t solve them in the worst case in polynomial time, still typically solve much faster than the fastest humans! So maybe that will eventually turn out to be the case for (say) the problem of finding formal proofs in ZF set theory.

    Even if it does, however, there’s still a further difficulty: math, as human mathematicians practice it, is really about understanding and explanation, about asking the right questions and finding answers that are humanly comprehensible, about finding the “right” definitions and axioms (the ones that lead to the “nicest” theorems), etc. etc. Finding a formal proof for a formally-stated proposition is only one (albeit important) aspect of math, much like cutting skill is only one aspect of being a surgeon. And the other aspects strike me as “AI-complete”—i.e., as hard to replicate as any other human ability that I can think of.

    So I guess my prediction is this: automated theorem-provers will occupy a significant and growing niche in mathematical research (they already do have a niche—having led, for example, to the proof of the Robbins Conjecture). But even in cases where they’re used, humans will still be doing almost all of the “conceptual heavy lifting.” Until, that is, we pass the AI Singularity, when computers become able to write love poems and do all the other things humans can do.

    (If anything, I’d guess that we’ll see computer-generated poetry worthy of being published in the New Yorker, before we see computers explaining math at a conceptual level! 🙂 )

  186. Scott Says:

    Vinicius #158:

      How could you plan to become a “theoretical computer scientist” at age of 16?

    First, I started programming at 11—older than many other CS folks, but immediately I got interested in the “big questions” of what sorts of programs you could write and what sorts you couldn’t write, whether one programming language was more powerful than another, etc. (though I wouldn’t have expressed myself clearly about any of those questions). That led me to self-study for the AP Computer Science AB test (which, sadly, was recently discontinued), to programming competitions (which I enjoyed but never really distinguished myself in), to Robert Sedgewick’s book “Algorithms in C++”, and to books about fractals, genetic algorithms, and all sorts of fun topics like that. None of this was “complexity theory” (which I wouldn’t have been able to appreciate yet), but I guess it laid the groundwork.

    Second, I was lucky enough to attend Canada/USA MathCamp at age 15, where Richard Karp gave a series of lectures about theoretical computer science. That was my first introduction to NP-completeness and to the P vs. NP problem.

    Third, the summer I attended MathCamp, I also dropped out of my high school (which I mostly hated—that’s another story), and enrolled in The Clarkson School, a wonderful program for high school students at Clarkson University in Potsdam, NY that (like MathCamp) I’d very warmly recommend to others. There, I was lucky enough to do a research project on hypertext design with Chris Lynch, during which Chris taught me an incredible amount about CS research (he also gave me the Garey & Johnson book, which I devoured).

    Fourth, when I was 16 and a freshman at Cornell (which unlike Harvard, Princeton, Yale, MIT, or Stanford, had kindly accepted me with this bizarre background 🙂 ), I foolishly decided to take Jon Kleinberg’s graduate course on algorithms, then Ronitt Rubinfeld’s graduate course on complexity theory. I didn’t understand anything too deeply (I only got A-‘s), but I was enthralled with whatever I did understand. I distinctly recall my main ambition in life being to achieve 0.1% of Jon Kleinberg’s greatness (and this was in 1996, before Jon really got famous!)

  187. Scott Says:

    James Miller #17:

      My 8-year-old son likes computer programming. We have a system where he does X minutes of “learning” to earn X minutes of video game time, where I get to pick the learning. Do you have suggestions for types of learning that will help develop his interest and skill in programming?

    Firstly, I’m not sure how much I like the idea of making learning a chore that your son has to do to “earn” his video game time! Unless, that is, he enjoys the earning of video game time as a “game” in itself. I feel like the ideal would be to find some topic—almost any topic, really—that he enjoys learning about so much that you have to stop him from spending too much time learning about it!

    Having said that, I can say what developed my interest in programming: wanting to make cool pictures of fractals. Wanting to write little text-based adventure games, or Pong-like games, for my friends and me to play. Wanting to write a program that could compose its own jazz (it turns out to be easy: just string together a bunch of “jazzy” sounds randomly). Wanting to make a calculator that could handle complex numbers (I didn’t know, at the time, about any of the calculators or programs that could handle them). Wanting to know how long a random walk on the 2D grid, which repeatedly picks a random neighboring cell among the cells that it didn’t already visit, will go for before it traps itself on all sides. (It turns out to be about 70 steps on average: for some reason, I cared so much about this question that I still remember the answer after 18 years.)
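
    (Here’s a quick, illustrative Python sketch of that last experiment, in case anyone wants to reproduce the ~70-step average; the trial count is arbitrary:)

      import random

      def self_trapping_walk_length():
          """Walk on the 2D grid, at each step moving to a uniformly random
          neighboring cell that hasn't been visited yet; return the number of
          steps taken before the walk traps itself (no unvisited neighbors)."""
          pos = (0, 0)
          visited = {pos}
          steps = 0
          while True:
              x, y = pos
              options = [(x + dx, y + dy)
                         for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                         if (x + dx, y + dy) not in visited]
              if not options:
                  return steps
              pos = random.choice(options)
              visited.add(pos)
              steps += 1

      trials = 10000
      print(sum(self_trapping_walk_length() for _ in range(trials)) / trials)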

    Admittedly, I wasn’t yet doing any of that when I was 8 (mostly when I was 11-13). And the things that would fascinate “kids these days,” with their iPads and Twitters and whatnot, are probably different from the things that fascinated me back in the days when MS-DOS QBASIC seemed like a radical new improvement over GW-BASIC (look ma, no line numbers!). But I hope it at least gives a few ideas that you could adapt for your son. I wish him the best.

  188. Scott Says:

    Dan Ashwander #145:

      Am I insane?

    Searching for insight into this question (I’ve never met you, after all), I googled you, and quickly found out that you’ve published an entire book (!) with the title Am I Insane. Here’s Amazon’s description of the book:

      Unique schizophrenic treatise about how JFK was referring to the author’s cosmic mind when he allegedly said “we have control of the mind,” & how Ashwander fights the likes of Albert Einstein, Mussolini & the Evil Eternals.

    Based on that description, I would say: either you seem to be engaging in some sort of sophisticated, ironic performance-art project, or else the prognosis doesn’t look too good regarding your question. 🙂

  189. Scott Says:

    Peter Boothe #81:

      We know an algorithm (Levin universal search) that achieves the optimal runtime for all problems in NP intersect co-NP but we do not know the runtime of that algorithm. What’s up with that?

    What’s up with it is that Turing machines can be enumerated and dovetailed over, yo! And each machine can be halted when and if it finds either an NP or a coNP witness, my man. But the simple fact that you can dovetail over all possible algorithms tells you nuthin’ about the runtime of the optimal algorithm. Nothing too mysterious about it, know what I’m sayin’? 🙂
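
    (Less flippantly: here’s a hedged Python sketch of the dovetailing idea. The programs list and its run(x, steps) interface are stand-ins for enumerating and step-by-step simulating Turing machines, and the two witness checks are assumed to run in polynomial time; none of this is meant as Levin’s exact construction:)

      from itertools import count

      def universal_search(x, programs, is_yes_witness, is_no_witness):
          """Dovetail: in phase t, run each of the first t programs for t steps,
          and halt as soon as some program's output passes one of the two
          efficient witness checks for x (for a language in NP intersect coNP,
          every x has a yes-witness or a no-witness, so one of them exists)."""
          for t in count(1):
              for i in range(min(t, len(programs))):
                  candidate = programs[i].run(x, steps=t)
                  if candidate is None:
                      continue
                  if is_yes_witness(x, candidate):
                      return ("yes", candidate)
                  if is_no_witness(x, candidate):
                      return ("no", candidate)

    If the i-th program finds a valid witness within T steps, this naive schedule finds it too, within roughly max(i, T)^3 steps in total, i.e., with only polynomial overhead; Levin’s actual scheduling is cleverer and gets the overhead down to a constant factor, though that constant grows exponentially with i. Either way, nothing in the construction tells you what that optimal T actually is.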

  190. GooInterested Says:

    Reading up your answer to “ThereIsNoGoo” (I am not him!), I think you misunderstood the point, or maybe my question is different…

    The point is not getting rid of continuity, it is:

    Do the Real Numbers, as defined by Dedekind and Weierstrass, exist in the real world?
    Isn’t reality constricted to the Computable Real Numbers?

    Or are all those undefinable reals (which are the uncountable part) just the consequence of a bad axiomatization?
    IMO Lebesgue theory is another way this bad axiomatization shows up: you need to invent a way around these phantoms that the bad axiomatization brought with it….

  191. Jay Says:

    >‘x’ is to a round earth as a round earth is to a flat one. What is ‘x’?

    Oblate spheroid. 🙂

    >we still seem incredibly far from being able to do it [to simulate the brain neuron-by-neuron ].

    Actually we’re as far from that as 10 years.

    >The Brain for Theoretical Computer Scientists

    I saw at least three neuroscientists asking questions. Maybe you could just try an “Answer anything!” post?

  192. Scott Says:

    Jay #191:

      Actually we’re as far from that as 10 years.

    So, your prediction is that by May 8, 2023, there will exist digital computers able to pass an unrestricted Turing Test, by simulating human brains neuron-by-neuron? Are you willing to bet with me about that? How much: $10,000?

  193. Scott Says:

    GooInterested #190: The point I was trying to make was that you need to explain exactly what it would mean for the real numbers to “exist in the real world,” or for reality to be “constricted to the Computable Real Numbers.”

    There’s not some registry where we get to look up which mathematical concepts are “real” and which ones are “fictitious”! (People imagined such a registry for a long time—that’s why we’re still stuck with the misnomer “imaginary numbers”—but eventually they got over the misconception.)

    So what can we do? Well, we can look at which mathematical structures naturally arise in our best descriptions of Nature. In that case, real and complex numbers of some sort obviously make the cut—although not, I don’t think, in such a way that it ever matters whether we mean “all” the real numbers, or the computable reals “only.” (This point cuts both ways: some people might say, then why not just restrict ourselves to the computable reals? while others would say: then why not use all the reals, without worrying about the superfluous restriction of computability?)

    On the other hand, we can also ask questions like how many bits can be stored in a bounded region of space and reliably retrieved from it, or how many distinguishable outcomes a physical measurement can have. And on those questions, the universe reveals a much more “discrete” character.

    I hope that clarifies my answer.

  194. Jay Says:

    Scott #192

    No, I’m just willing to bet on what I said, namely that we will have a large-scale simulation of the brain neuron-by-neuron by May 8, 2023. Please don’t bet before reading about the Human Brain Project, and I won’t go over $1000.

    I’m not yet willing to bet on how long it will take to get from this landmark to digital computers able to pass a (reasonably unrestricted) Turing test. Imho this will depend on how long it takes to go from the “nanomachine” recording of all neurons in a living zebrafish brain, published three months ago, to the same thing in a human brain. Let’s wait and see what happens this fall with the BAM/BRAIN project.

  195. Karthik Says:

    Thanks for the answer :-). Sorry if I wasn’t clear. I meant to choose either n-1 or n+1 to get the even number, assuming n is odd. By factoring-in-multiple-combinations what I meant was that, if the prime factors of the even number, for example, are 2,2,2,3, we should also consider (4,2,3) (4, 6) (8,3) (2,12) as factors. So that it helps in the next step where we try to collect prime numbers near to the above factors. (we can pick 11 in the above example).

    Once we choose the even number, one factor is 2 and if the other factor is not an even number and not an obvious composite number, we then again subtract 1 or add 1 to the other number and get another even number, and pretend that this even number is a factor, since the goal is not to factor the even number that we got but to get as many primes as we can which are likely to produce the given number.

    Yeah, if the prime factors of n+1 or n-1 are not likely to lie anywhere near the prime factors of n, then it won’t work. Will try with some large numbers.

  196. Jim Graber Says:

    Scott #185
    So first, congratulations on tenure; I should have said it last time. Second, thanks for a very nice answer.
    On reflection, I should perhaps have asked about the routine use of proof checkers by, say, theoretical physicists and mathematicians outside of proof theory or theoretical computer science.
    And if I had a second question, I would ask about the use of a probable-truth program by the same audience.
    Thanks again

  197. jonas Says:

    Re Scott #106: thank you for this reply; the sparse fast Fourier transform sounds very interesting.

  198. Scott Says:

    itssummer #96:

      From what I understand from the answer and the question, it looks like the permanent, or #P-complete problems, have only an n^{Ω(log n)} lower bound, since perfect matching has such a lower bound. So no exponential lower bound is known in this case.

      However NP-complete problems have a subexponential (exp(n^a) with a < 1) lower bound from what you are quoting.

      Is my interpretation correct? Why is there a difference between the #P-complete and NP-complete lower bounds? I would have thought #P was harder and hence should have an exp(n^a) lower bound.

    No, deciding whether a bipartite graph has a perfect matching (Razborov’s “logical permanent”) is not #P-complete; indeed it’s in P. It’s only counting the number of matchings that’s #P-complete.

  199. Scott Says:

    Jay #194: OK, thanks for clarifying. Personally, I’d only be interested in betting about a milestone that had to do with the actual intelligent behavior of the neural net—not with the sheer number of neurons being simulated or some other internal criterion like that.

  200. ThereIsNoGoo Says:

    Re:
    ThereIsNoGoo #80
    Scott #165
    GooInterested #190
    Scott #193

    Scott’s #165 answer I think hit the main points. He basically said that the only Goo we have to deal with is the Good Goo, though I would counter that No Goo is Good Goo.

    I can think of a couple of ways to banish Goo from our picture of physical reality. One is to acknowledge that Nature is playing a trick on us by being extremely but not exactly mathematical. So our mathematical models are just that, merely models, and if it’s convenient to use real numbers, then just use them, even if they’re not really real. They’d just be Goo-y models of a non Goo-y reality.

    Secondly, we might be able to banish the remaining vestiges of Goo that Scott #165 found hard to avoid. Maybe quantum probabilities could always be seen as cos^2 of physically real angles (replace Platonic Bloch spheres with physically real spheres in curvy granular space, etc., the rest is left as an exercise for the reader). Admittedly there still seem to be real numbers when you model this, but only because you want to mathematically model (approximate) quantities. In physical reality, the Goo is not there.

  201. Scott Says:

    Ray #163:

      How being in communist Berkeley during your formative years have affected you?

    Well, not everyone at Berkeley is a communist (there are also libertarians, anarchists, apolitical stoners, possibly even one or two Republicans… 🙂 ), but the strange environment there did affect me profoundly.

    By any ordinary American standard, I’m a “leftist”: someone who cares deeply about global warming and endangered species and gay rights and women’s rights and voting rights and gun control and church/state separation and higher education funding and having a progressive income tax.

    But when I came to Berkeley, I suddenly found myself on the “right”! I’ve written before on this blog about the loud arguments I heard there, right before the 2000 election, that Gore and Bush were equally contemptible corporate puppets, and the only thing to do was to vote for Ralph Nader. I hope the stupidity of that position is now obvious to everyone! 🙂 Even at the time, I found it frightening enough that I became an Internet evangelist for “NaderTrading” (the scheme whereby people in swing states would vote for Gore, and would have people in safe states “vote for Nader in their place”). Sadly for the world, we fell short by just a few hundred swaps in Florida.

    I’ve also written on this blog about how, at a “memorial service” held on the evening of Sept. 11, 2001, grief for the victims (who, of course, were just then being pulled out of the wreckage) almost immediately gave way to wildly-applauded speeches about how the US should look inward, examine its own moral failings, etc., rather than going to war against the poor people who had done this. A Communist organization was also on hand to distribute flyers about how it was basically our fault. I got disgusted and left.

    For me, though, possibly the biggest turning point came when anti-Israel demonstrators took over Sproul Plaza, many abandoning pretense and carrying straightforward anti-Semitic signs (like an American flag with the stars replaced by a combination of corporate logos and Stars of David). They deliberately did this on Yom HaShoah (Holocaust Remembrance Day), in order to make the “point” that Israel’s treatment of Palestinians was at least as bad as the Holocaust. They then invaded academic buildings in order to disrupt classes and exams (one class I was taking had to be relocated to another building). Again, I didn’t read some journalist’s account of what happened; I was there and saw. Around the same time, anti-Israel activists also smashed windows of Berkeley’s Hillel building and spray-painted “Fuck the Jews” there, and beat up some Orthodox Jewish students. The administration, maybe justifiably fearful, was extremely slow about condemning these actions. In response, Jewish groups invited a few pro-Israel speakers to campus; they each required massive police presence and were still shouted down (called “murderers,” etc. etc.) at pretty much every sentence. Then I was at a city hall meeting where the activists tried to get the city of Berkeley to boycott all companies doing business in Israel (including Intel, Microsoft, and … well, you see the practical problem there 🙂 ). The measure failed after “only” getting 3 out of 7 votes.

    No, this wasn’t Nazi Germany. No, I didn’t overreact and fear for my life. But I was tempted to say: “so this is your liberal Utopia? This is your safe haven for intellectualism and humane tolerance? If so, I’ll take the America of Bush and Cheney and Fox News, if that’s my only other choice!”

    So I feel extremely grateful that that isn’t my only other choice. There are still people in the US (and, of course, in every country) who basically believe in the liberal, democratic spirit of the Enlightenment, in what Karl Popper called “the open society.” Who reject every kind of fundamentalism: the all-American bible-thumping kind, but also the even more virulent Communist and Jihadist kinds. Who are more interested in advancing science and technology and using them to alleviate human misery, than in blaming all such misery on “the bourgeoisie” or “the elite intellectuals” or “the Jews.” Who want to correct our own numerous moral failings, but not if that means grovelling for forgiveness before a lunatic with a knife. Furthermore, these people are not some tiny minority: depending how you count, they probably constitute somewhat more than half of the country, and were responsible (among other things) for electing our current President—a man who I passionately support despite whatever failings he might have.

  202. John Sidles Says:

    Please allow me to commend to readers of the various factoring/Mersenne sub-threads (in particular, Rahul #102, Scott #111, Karthik #122, Me #130, Scott #181, Karthick #195) the Wikipedia pages on “The Cunningham Project” and “Aurifeuillian factorization”, and also the MersenneWiki page on “The Cunningham Project.”

    The Cunningham Project (which dates back to 1925, and is “the oldest continuously ongoing activity in computational number theory”) is concerned with factoring integers of the form d^n-1 … which happens to be the dimension of an n-qudit quantum state-space.

    Various empirical phenomena in practical quantum simulation computations, involving finite-rank tensor approximations, that we engineers previously viewed as “remarkable numerical coincidences,” or even “inexplicable miracles,” turn out to have a natural number-theoretic explanation in terms of the insights that the Cunningham Project provides.

    In the next week or two our Quantum Systems Engineering group hopes to post a question in this regard, either to TCS StackExchange, or to the MathOverflow StackExchange, or to the Physics StackExchange — it would fit well in any of the three. This is a nice example of how abstract number theory can turn out to have unexpected consequences for practical engineering and fundamental physics.

    In the meantime, the Cunningham Project is an abundant source of mathematical insights that are broadly relevant to numerous overlapping areas of computation, representation, and simulation, and is commended to all who are interested in history, or in computation, or in factoring … or in the history of computational factoring.

  203. GooInterested Says:

    ThereIsNoGoo #200

    OK, so my question was somewhat different.

    We both think there is more to discretize in our conception of reality, but unlike you I don’t think we can get rid of the Reals (computable or not)…

  204. GooInterested Says:

    “Israel’s treatment of Palestinians was at least as bad as the Holocaust”, you are right about condemning this analogy.

    But you should concede that Israel’s treatment of Palestinians is dangerously close to South Africa’s Apartheid regime…

  205. Michael Bacon Says:

    Scott@201,

    If there was a “Like” button below your comment, I would have definitely clicked it. Well put.

  206. Scott Says:

    GooInterested #204:

      But you should concede that Israel’s treatment of Palestinians is dangerously close to South Africa’s Apartheid regime…

    I concede that there’s a vague analogy, particularly if you consider the nutty West Bank settlers. However, the analogy would be stronger if the United Nations had proposed giving most of South Africa to its black population and only a tiny sliver to the white; if the white population had grudgingly agreed; if the black population had not only rejected the deal, but spent the next few decades launching three wars against the white population, each with the loudly-proclaimed goal of wiping the whites off the face of the earth, and each with enthusiastic support from all the neighboring African countries.

    For further insight, we can look at the opening words of Nelson Mandela’s 1955 Freedom Charter:

      “We, the people of South Africa, declare for all our country and the world to know: That South Africa belongs to all who live in it, black and white, and that no government can justly claim authority unless it is based on the will of the people.”

    Now let’s compare to the 1988 Hamas Charter:

      “The Day of Judgement will not come until Muslims fight the Jews, when the Jew will hide behind stones and trees. The stones and trees will say, ‘O Muslims, O Abdullah, there is a Jew behind me, come and kill him.’ “

    See a difference? 🙂

    And true to their words, according to Wikipedia, even Mandela’s sabotage operations were designed “to exert maximum pressure on the government with minimum casualties, bombing military installations, power plants, telephone lines and transport links at night, when civilians were not present.” The Jihadists, of course, pack their bombs with nails and shrapnel in an effort to kill as many civilians as possible and maximize their heavenly reward.

    So, carrying your analogy further, one might say that the real tragedy of the Palestinian people is that they haven’t yet had their Mandela.

  207. Scott Says:

    Peter #151:

      Given the potential security applications of quantum computing, what do you think will be the delay between (A) a “useful” large-scale quantum computer being built and used, and (B) such a computer– perhaps a different one– being publicly announced (and later accepted by the community)? And which will happen first?

    Well, D-Wave has already done (B), but as far as we can tell today they haven’t yet done (A)—so I think we can say with some confidence that (B) precedes (A)! 🙂 Asking by how much (B) precedes (A) is equivalent to simply asking how long it will be until we get useful QCs—which of course is a huge question, and one that I’ve touched on in various ways many times on this blog, but not one where I like to speculate. It would astonish me, though, if we were less than (say) 15 years from (A).

  208. Scott Says:

    Ramsey #147:

      How do you work?

      More specifically, how do you go about coming up with ideas, how do you structure your work time, and what do you do when you’re stuck?

      Also, how have your work habits changed over time?

    For the first two questions, I’ll point you to a post in my previous “Ask Me Anything” that addressed them.

    As for how my work habits have changed over time: between teaching, supervising students, various administrative duties, answering emails from journalists and all sorts of other people, infant care, and just generally being a more sluggish, less motivated person, I have vastly less time for research than I did when I was in grad school.

  209. Rahul Says:

    Scott #207:

    Tsk Tsk. You mentioned D-Wave. 🙂

    Ok, fine. The gag order only applied to questions not answers.

  210. Ray Says:

    Wow, I knew that Berkeley was bad but I had no idea that it was that bad.

  211. T H Ray Says:

    ” … the real tragedy of the Palestinian people is that they haven’t yet had their Mandela.”

    Well said, Scott. A rational person might also ask of those who condemn Israel for not wanting to sacrifice security for peace, for those whose professed policy is to destroy it (!) … how many Jews sit in the councils of Arab governments.

    I’m old enough to remember when the Zionist dream was a greater Canaanite culture of mutual respect and economic cooperation. Jews had lived for a thousand years among other peoples of the region as second class citizens, without equality. It’s a great shame that in the 21st century, the larger surrounding population apparently wants to go back to those ways.

    Tom

  212. Scott Says:

    Ray #210:

      Wow, I knew that Berkeley was bad but I had no idea that it was that bad.

    To be fair, there are many, many wonderful things about Berkeley. The weather, for example. And the fresh fruit smoothies. And the fact that you can eat so well for so cheaply. Or expensively (Chez Panisse), or anything in between. And walking down Telegraph Ave. over the weekend and seeing all the loony characters there. I get nostalgic just thinking about it, and am happy whenever I get to return for a visit.

    It’s relevant, of course, that I spent the vast majority of my time at Berkeley with the computer scientists in Soda Hall, one of the finest groups of people with whom I’ve ever been associated. If anyone else were considering going to Berkeley to study CS, math, physics, etc. I certainly would never suggest they shouldn’t go because of politics (at most I might try to convince them that MIT is cooler 😉 ). Probably I spent less than 2% of my time at Berkeley worrying about political issues like those I described in comment #201. But since those are what I was asked about, those are what I discussed.

  213. Scott Says:

    Everyone: Yeah, I know that it’s Thursday morning, and I have roughly a dozen questions still to answer! If your question wasn’t answered yet, please be flattered: it means yours was one of the ones that I saved to really think about. 😉 I’ll try to get through them today.

  214. Scott Says:

    internationalstudent #182:

      Why does the US not limit temporary visas by country but limit the number of permanent visas (green cards) to 6000 per country? This brings in 50,000 H1Bs from India and maybe 20,000 from China, and they have to be indentured to a particular company (they cannot leave the company) at a particular position for a decade or more (if the employee leaves, his queue starts all over again; if he is laid off, he becomes an illegal), while someone from Taiwan or the UK can get a green card in 6 months? The people who formulate the laws are from the best schools in the US (like MIT or Harvard). Do you think the lawmakers, and hence the US population they represent, voluntarily support a mild form of slavery, or rather indentured labor, even almost 150 years after emancipation? What mathematical law can explain such absurdities? I myself left the country to see my sick mom after a decade and now I have problems getting back. Why are sensible modifications to the laws not achievable by people trained by the smartest professors on earth?

    First off: I’m incredibly sorry to hear about your predicament. Personally, I regard US immigration policy—and particularly this weird, persistent issue of people with visas leaving the country and then not being allowed back in—as something of a national disgrace.

    If I had to sum up US immigration policy in one sentence, I’d say: we somehow succeed brilliantly both at keeping Paul Erdős and Andris Ambainis out, and at letting Mohamed Atta and Tamerlan Tsarnaev in!

    I suspect the heart of the problem is simply INS bureaucrats blindly applying rules made up by an even more out-of-touch Congress, rather than people who are able or allowed to apply creative intelligence to questions like: “who is this person, actually? is this a brilliant scientist who we should be grateful to have come to the US? is the possibility that he’s a criminal or terrorist something we need to seriously consider, or is that the stuff of comedy?”

    Of course I’m lucky that, being a US citizen, I never had to bear the brunt of these policies. Well, until recently: now that I’m married to a visa-and-then-green-card-holder, I’ve seen firsthand some of the weird, arbitrary indignities she’s had to go through.

    The one place where I disagree with you is in your intimation that MIT and Harvard professors (!) are somehow responsible for this situation. I mean, think about it: if we as a group had that much power, then wouldn’t we have also managed to get the US to enact sensible gun control policies, or set carbon emissions caps, or ratify all those perfectly-innocuous UN treaties that it’s so far refused, etc. etc.? Congress—and the paranoid, reactionary part of the country that Congress overrepresents—are not exactly refusing to do these things at the behest of Harvard professors. 🙂

    OK, I just looked it up, and it seems that at least in 2010, there were 15 members of Congress with bachelor’s degrees from Harvard (13 Democrats and 2 Republicans), and many others with business or law degrees from there, but I can’t find any current Congresspeople who went to MIT (does anyone else know of any?). Also, according to this graph, 34.8% of Congresspeople have degrees in government and law, and 20.9% in humanities, compared to only 11.5% in science and technology. So even if maybe, just maybe, you could put some of the blame for our current political problems at Harvard’s feet, I really don’t see how you could put any at MIT’s. 😀

  215. Raoul Ohio Says:

    Re 84:

    I think a base guitarist is a bass player that can only thump away at the base note in the chord.

  216. Scott Says:

    Raoul #215: Sorry, fixed!

  217. tulpoeid Says:

    Have you ever examined the possibility of equating an antisemite protester with a communist being a generalization aimed at leaving you with more brain resources available for immediate tasks at hand?

  218. Juan Says:

    I just reread the rules and realized my previous (and only) question may be in violation of #3. Doh! If Scott doesn’t mind I would like to ask another question as a replacement:

    – What’s your favourite Israeli dish? Is there anything in particular you would recommend and that perhaps the folks at home could make? :p

  219. Michael Sweney Says:

    Just as a postscript: I took Scott’s answer(s) to a grumpy zen master (who shall remain anon.) and pantomimed breaking a bowl with a karate chop hand, while squeaking like a mouse….

    She threw a tea bowl at me and then stomped out of the room.

    Did I pass????

  220. Raoul Ohio Says:

    Jonas #197:

    My 2 minutes of research take on Sparse FFT: At first glance it appears to be a joke. On second glance, it is highly specialized and potentially useful.

    1. Note that the complexity is LESS than Theta(n), for n data points! Thus SFFT cannot even see all the data. It supposedly can tell if it needs all the data without looking at all the data. WTF?

    2. On the other hand, someone points out that lots of signals are highly redundant, so you can see how SFFT might make sense for tightly defined data.

  221. Douglas Knight Says:

    Scott #214 on immigration: to nitpick, the example of Tsarnaev is probably due not to bureaucratic rules, but to individual meddling. The policy is that Chechens are not allowed; the family constitutes at least 3% of all Chechen immigrants, surely because uncle Ruslan Tsarni is a CIA asset. Some believe that Erdős was singled out as well.

    But surely most decisions are made bureaucratically.

  222. Scott Says:

    Juan #218:

      I just reread the rules and realized my previous (and only) question may be in violation of #3. Doh! If Scott doesn’t mind I would like to ask another question as a replacement:

      What’s your favourite Israeli dish? Is there anything in particular you would recommend and that perhaps the folks at home could make? :p

    I’ve actually been meaning to read Deutsch’s new paper, but I haven’t gotten around to it yet! Partly prompted by your question, I just printed it out tonight, but it might take me some time.

    Regarding your replacement question, unfortunately I can’t really make anything—American, Israeli, you name it! (Well, I can make fruit smoothies and sorbets, and once I boiled some eggs.) But here are the things I like to eat most when I go to Israel:

    – Chicken schnitzel in a pita with hummus, tahini, salad, and french fries
    – “Sabich” (pita with eggplant, hard-boiled egg, hummus, tahini, and hot sauce)
    – Frozen yogurt with whole fruits (dates, figs, bananas, melon…) and chocolate mashed into it
    – Fresh pomegranate juice

    Another dish that I’m told is very easy to make, and that Dana loves, is shakshuka: basically tomato sauce with mixed vegetables simmered in a pan, with some eggs then cracked over it. You then wipe it with bread.

  223. Joshua Zelinsky Says:

    Raoul Ohio,

    My (very limited) understanding of Sparse FFT is that you use it for data types where you know with a high certainty which aspects you can likely get away with ignoring. So you need to have a good idea of what you expect the data to look like in advance.

  224. Scott Says:

    Clayton #120:

      If you guess randomly, what are the approximate odds of getting the answer to this question correct?
      A) 25%
      B) 50%
      C) 33%
      D) 25%

    Yeah, that’s a fun one! I’ll assume “randomly” means “uniformly at random.” Then presumably, a valid answer is any probability p that satisfies the consistency condition

    p = (# answers that are p) / 4.

    So for example, if there were one 25% option, or two 50% options, or three 75% options, then any of those would constitute a consistent answer. As it is, however, the unique consistent answer is

    p = 0,

    even though of course that isn’t one of the options listed.

    The whole thing is reminiscent of computing with closed timelike curves; see this paper for details.
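
    (For the skeptical, here’s a short Python check of that consistency condition; the Fractions are just to avoid floating-point excuses:)

      from fractions import Fraction

      options = [Fraction(1, 4), Fraction(1, 2), Fraction(1, 3), Fraction(1, 4)]

      def consistent_probabilities(options):
          """Return every p satisfying p = (# options equal to p) / len(options).
          Any such p must equal k/len(options) for some k, so just try them all."""
          n = len(options)
          return [Fraction(k, n) for k in range(n + 1)
                  if Fraction(k, n) == Fraction(options.count(Fraction(k, n)), n)]

      print(consistent_probabilities(options))   # -> [Fraction(0, 1)]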

  225. Scott Says:

    Joshua Zelinsky #44:

      A while ago you mentioned the class of problems “Are there infinitely many n such that BB(n) is [even], [odd], [prime], etc.” where BB(n) is the Busy Beaver function. In that context, is it even obvious that for any k, BB(n+1) − BB(n) > k for all sufficiently large n? I’d certainly guess so, but I don’t see any obvious way to rule out almost all of BB(n)’s growth occurring at values of n that are far away from each other.

    What a beautiful question! I saved it till tonight only because, unlike most AMA questions, I found it neither easy enough to answer nor hard enough to flippantly non-answer. You aimed for, and hit, the dreaded “thought-requiring intermediate zone.” 🙂

    Observation 1: it’s easy to design Turing-universal models of computation where what you ask for does not hold. For example, let BB(n) be the largest finite number of steps taken by any program with n bits or fewer, and suppose our programming language (like most real languages) requires programs to contain an integer number of bytes. Then obviously

    BB(8n)=BB(8n+1)=…=BB(8n+7)

    for all n.

    So, just like with the questions of whether BB(n) is even, odd, prime, etc., to prove your assertion in the case of Turing machines—and I’d be flabbergasted if it failed for them!—we’ll need to use something specific about the Turing machine model. (I assume you also observed this, and that it’s why you asked the question in the first place.)

    Observation 2: For the Turing machine model, we at least have BB(n+1)≥BB(n)+1 for all n. Proof: With n+1 states, take an n-state Busy Beaver, but then instead of it halting, have it transition to a “purgatory” state where it does nothing for one more time step and then halts.

    Observation 3: Suppose we modify the 1-tape Turing machine model in the following way. There exists a special symbol #, such that the tape is initially filled with #’s, but the TM itself is never allowed to write #’s, only 0’s and 1’s. In that case your conjecture holds: indeed for all sufficiently large n, we have

    BB(n+1) ≥ BB(n) + 0.49·log₂(BB(n)).

    Proof: Similarly to Observation 2, we’ll define an (n+1)-state machine that first simulates the n-state Busy Beaver, then enters a special “purgatory state” instead of halting. But now, the purgatory state will repeatedly move either left or right (whichever leads to a longer running time) until it encounters a # symbol, and only then halt.

    Let k be the total number of squares visited by the n-state Busy Beaver. Then clearly, the number of additional steps we can “buy” this way is at least k/2. Moreover, we must have

    n·k·2^(k+1) ≥ BB(n),

    since otherwise we’d enter the same configuration twice and loop forever. Solving,

    k + log₂(k) ≥ log₂(BB(n)) − log₂(n) − 1.

    And for all sufficiently large n, the above will give us (say)

    k ≥ 0.98·log₂(BB(n)).
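
    (For anyone following the algebra, here is the “Solving” step spelled out, under the same assumptions as above:)

      % Taking base-2 logs of the configuration bound  n \cdot k \cdot 2^{k+1} \ge BB(n):
      \log_2 n + \log_2 k + k + 1 \;\ge\; \log_2 BB(n)
      \quad\Longrightarrow\quad
      k + \log_2 k \;\ge\; \log_2 BB(n) - \log_2 n - 1.
      % Since BB(n) eventually dwarfs every computable function of n, both \log_2 n + 1
      % and \log_2 k are (for all sufficiently large n) at most 1% of \log_2 BB(n) and
      % of k respectively, so  1.01\,k \ge 0.99\,\log_2 BB(n),  i.e.  k \ge 0.98\,\log_2 BB(n).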

    Now, it seems likely that we could modify the above construction to make it work even without the ‘#’ condition! To do so, we need a way to take any n-state TM M, and modify it to an (n+1)-state TM M’ that first simulates M, then uses a single additional state to move much of the way across the visited part of the tape and then halt.

    Observation 4: Suppose we could show that, for some universal constant c, it’s possible to output a self-delimited description of an arbitrary n-state TM using an (n+c)-state TM. (This seems probably true, though I’m not sure how to prove it! I only know how to output a description of an arbitrary n/log(n)-state TM. And even with ‘string-based’ programming languages like C, Python, etc., we’d still have to worry about the self-delimiting issue, which could add an unwanted log(n) factor to c.)

    Now, if we could show the above, then I claim that there’s certainly some constant c such that

    BB(n+c) > 2^BB(n)

    for all n. Or more generally, that for any computable function f, there exists a constant c_f such that

    BB(n+c_f) > f(BB(n))

    for all n.

    Proof: If the hypothesis holds, then there exists an (n+O(1))-state TM that explicitly writes to tape the (self-delimited) value of BB(n), by first writing a self-delimited description of an n-state Busy Beaver, then simulating that Busy Beaver using a fixed universal TM. Therefore, for any computable function f, there also exists an (n+O_f(1))-state TM that explicitly writes to tape the self-delimited value of f(BB(n)). So if

    BB(n+c_f) ≤ f(BB(n)),

    then it can also compute BB(n+c_f), and therefore solve the halting problem for (n+c_f)-state TMs. But if we choose c_f sufficiently large, then this, in turn, would enable such a TM to find, and output, a Kolmogorov-random string with more bits than the length of the TM’s own description. Since this is a contradiction, we conclude that

    BB(n+c_f) > f(BB(n)).

    I’m afraid I’m now entering my own “halt” state, at least for tonight! But I’d love to think about this further, and also encourage others to do so. Would you like to post your problem on MathOverflow or CS Theory StackExchange, or would you prefer that I do it?

  226. Joshua Zelinsky Says:

    So I had realized what you call Observation 1 and Observation 2 (Observation 1, I think, you actually mentioned when you were discussing the problem that got me thinking about this in general). Observations 3 and 4 at least go a long way toward answering the question, and implicitly raise two more (potentially more difficult) questions.

    1. Using the Turing machine model, is there a constant k such that, for any computable function f, we have f(BB(n)) < BB(n+k) for all sufficiently large n? This would essentially be a uniformization of your Observation 4.

    2. This one just occurred to me, and generalizes your prime question. Let S be a computable set of positive integers with natural density zero. Is it true that the intersection of the range of BB with S is always finite? This would, for example, generalize your statement about primes. I'm not at all sure what my intuition is here, but I would at least guess that it's true when S is primitive recursive.

    By all means, feel free to post it to either MathOverflow or CS Theory StackExchange (I really shouldn't be thinking about this at all, since I'm supposed to be finishing grading and other end-of-semester stuff right now).

  227. Scott Says:

    Artem #45:

      How can computational complexity (and theory A more generally; i.e. algorithmic lens) best contribute to evolution of biological and social systems? Can this contribution ever be as substantial as the one to physics?

    This is a huge question—one that I strongly encourage everyone in our community to think about—but way too broad to answer in the space of a blog comment! I.e., if I had an answer that satisfied me, then rather than sketching it here, I’d write a paper about it, and hope to launch a whole new branch of CS theory the way Valiant has on several occasions.

    The central difficulty is obvious: to whatever extent your model is “real” CS theory, biologists and social scientists will probably deride it as too idealized; conversely, to whatever extent biologists and social scientists like your model, it will probably have too many “arbitrary” aspects for CS theorists to prove anything clean about it.

    Let me put it this way: to connect computational complexity (as actually practiced by computational complexity theorists) to physics (as actually practiced by physicists), seems to me like digging a tunnel between England and France. Highly nontrivial, but by 1994 people finally managed to do it!

    To form an equally-compelling link between computational complexity and biological evolution feels to me like digging a tunnel between England and the US. A worthy aspiration, but orders of magnitude more ambitious!

    Of course, as you’re probably aware, there are aspects of biology (e.g., DNA sequence alignment, protein folding, phylogenetic tree reconstruction), and aspects of social science (e.g., game theory, auctions, “Six Degrees of Separation”), that already have pretty compelling links with CS theory. But I assumed above that you were after something bigger: namely, a “complexity-theoretic re-imagining” of the entire foundations of biological or social evolution—a new yet mathematically-precise way to frame the central questions, etc.—that’s as fruitful as the complexity-theoretic re-imagining of quantum mechanics that’s already well underway.

  228. internationalstudent Says:

    Thank you, Professor. However, I am not completely satisfied with your answers. Can I assume “Congress—and the paranoid, reactionary part of the country that Congress overrepresents……” is a hint to the question “Do you think the lawmakers, and hence the US population they represent, voluntarily support a mild form of slavery, or rather indentured labor, even almost 150 years after emancipation?” I am highly inclined to say it may even be constitutionally illegal to have insane temporary-visa rules that make people from a few countries suffer while people from the majority of elite countries reap the benefits of US immigration. I wonder if we would even be having this discussion if the European Union were considered one supra-country and the whole ‘supranation’ were subjected to the same per-country quota as China and India.

    One technical question: do you think the Shannon capacity of cycles will be known before the end of the decade, or is it as hard as the P=NP question? What techniques do you think will settle it either way?

  229. Scott Says:

    internationalstudent #228: Dude, your first question was already well after the deadline. To ask a second question after the deadline is just presumptuous. But, alright … I have no idea when the Shannon capacity of cycles problem will be solved, but it would astonish me if it were anywhere near as hard as P vs. NP.

  230. Scott Says:

    NotAnonymous #123:

      What videogame(s) would you design based on your research interests (broadly construed)?

    Probably I wouldn’t design such games, because they wouldn’t sell! 🙂

    I did once write a game based on the Euclidean Traveling Salesman Problem—where you simply had to find the shortest route you could, connecting a bunch of randomly-chosen points in the plane. And it was actually kind of fun, and maybe games of that sort could be used to do empirical studies of how humans solve NP-complete problems when confronted with them. (I believe Michael Kearns has already done some studies of that kind.)
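
    (A minimal Python sketch of such a game’s core, purely for illustration rather than a reconstruction of the game I actually wrote: generate random cities, score whatever tour the player proposes, and compare it against a greedy nearest-neighbor baseline:)

      import math, random

      def tour_length(points, order):
          """Length of the closed tour visiting the points in the given order."""
          return sum(math.dist(points[order[i]], points[order[(i + 1) % len(order)]])
                     for i in range(len(order)))

      def nearest_neighbor_tour(points):
          """Greedy baseline: from each city, hop to the closest unvisited one."""
          unvisited, tour = set(range(1, len(points))), [0]
          while unvisited:
              nxt = min(unvisited, key=lambda j: math.dist(points[tour[-1]], points[j]))
              tour.append(nxt)
              unvisited.remove(nxt)
          return tour

      points = [(random.random(), random.random()) for _ in range(10)]
      player_tour = random.sample(range(10), 10)   # stand-in for the player's guess
      print("your tour:  ", tour_length(points, player_tour))
      print("greedy tour:", tour_length(points, nearest_neighbor_tour(points)))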

    I’ve also tried out videogames that tried to teach the principles of special relativity and of quantum mechanics (not both in the same game…). However, those games mostly just induced a feeling of confusion! As someone who already knows the theories in question, I had a hard time figuring out what the point of the games was, or how the theories were reflected in them, or what people were supposed to learn about the theories by playing the games. So I regard it as an extremely interesting open problem whether it’s possible to create a videogame that successfully provides intuition for modern physics!

    (The easier problem, of creating a videogame that provides good intuition for Galilean physics, has arguably been solved by Angry Birds. 🙂 )

    Update (May 29): Check out “A Slower Speed of Light”, recently released by the MIT GameLab. When I tried an earlier beta of this game, it mostly just confused me. But trying the new, improved version today, I’ve actually taken a liking to this game. I would never have imagined that the special theory of relativity could be made vivid using motifs borrowed from psychedelic drug trips or creepy ghost stories, but try playing the game yourself and see if that isn’t precisely what they’ve done!

  231. Michael Gogins Says:

    Artem #45, Scott #227: I recently read a short book titled Proving Darwin, by Gregory Chaitin, about the theory of evolution from an abstract computer-science perspective. I have no idea how to evaluate the book, though I certainly found it very interesting.

  232. Clayton Says:

    Scott, thanks for the attempt! I’m not entirely convinced there is a good way of asking or of answering this. Your reasoning:

    …a valid answer is any probability p that satisfies the consistency condition

    p = (# answers that are p) / 4.

    …seems totally sound and indeed makes the question insoluble.

    One way I thought of possibly answering this is to solve for the expected value of an answer that could be valid given two appearances of 25% and one appearance of 50%. Then we’d have

    0.25 * 1/2 + 0.5 * 1/4 + X * 1/4 = X

    and solving for X we get 0.3333…

    This method has the downside of obviously not satisfying the criteria above, but has the upside of being soluble.

    Can you think of a way of phrasing the question that would make that solution acceptable, but wouldn’t give away the trick? It seems kind of pointless to say “solve for the final value in this list of four answers that would provide the correct expected value of answering a four point multiple choice question correctly…” I haven’t been able to think of a better way of posing it, though.

  233. internationalstudent Says:

    Ok Apologize!

  234. Scott Says:

    Clayton #232: Well, it’s clear that not every question of this form has to be soluble! Here’s a simple example that captures the difficulty:

      If you were to guess 0 or 1 with equal probability, what’s the probability that you would get the right answer to this question?
  235. Clayton Says:

    Scott #234: I absolutely agree with you: not every question of this form is soluble! What I’m interested in is a maximally clever form of the *question* that would admit the solution I gave above. The difficulty is in asking the question.

    It’s not clear that there are non-trivial question forms, but I do think there are consistent ways to construct non-trivial solutions.

  236. Scott Says:

    Jay #155:

      Suppose you were a software designed to conduct some experiment and perform bayesian reasoning.

      For what kind of experiment do you think your computation should diverge because of possible copies of yourself?

    After rereading it several times, I still don’t understand this question! For example, I don’t know what it means for a computation to “diverge” (does it just mean to run forever?).

    Having said that, yes, there are well-known examples where it’s totally unclear how software should be written in order to reach the “correct Bayesian answers” in situations where there might be multiple copies of itself. For example, suppose that we toss a fair die, and if it lands 6, we make 1000 copies of your program P, and if it lands anything else, we leave just 1 copy. Then we ask P to guess whether the die landed 6. If it guesses “no,” then it will be right with probability 5/6, and wrong with probability only 1/6. However, if it guesses “yes,” then most of the copies of P that could ever come into existence (or “do” come into existence, in a quantum multiverse) will be right! So, what should P be trying to maximize? What should its goal be? For more about this question, I’d encourage you to read Quantum Computing Since Democritus Chapter 18, or the book Anthropic Bias by Nick Bostrom.
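
    (Here’s a small Monte Carlo sketch of the tension, in Python; the trial and copy counts are just illustrative. It doesn’t resolve anything; it only makes vivid that “fraction of runs in which P guesses right” and “fraction of copies of P that guess right” are different quantities:)

      import random

      TRIALS, COPIES_ON_SIX = 100_000, 1000

      def tally(guess_six):
          """Return (fraction of runs where the guess is right,
                     fraction of all copies of P that are right)."""
          runs_right = copies_right = copies_total = 0
          for _ in range(TRIALS):
              rolled_six = (random.randint(1, 6) == 6)
              n_copies = COPIES_ON_SIX if rolled_six else 1
              correct = (guess_six == rolled_six)
              runs_right += correct
              copies_right += n_copies * correct
              copies_total += n_copies
          return runs_right / TRIALS, copies_right / copies_total

      print(tally(guess_six=False))  # right in ~5/6 of runs, but few of the copies
      print(tally(guess_six=True))   # right in ~1/6 of runs, but ~99% of the copies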

  237. jh Says:

    Scott #225:

    A trivial extension of your Observation 3: there exists c such that

    BB(n+c) > BB(n) + 0.49·log₂(BB(n))

    Just take the BB on n states and supplement it with keeping track of the positions of the leftmost and rightmost visited cells and the current head position. Then apply your argument.

  238. Jay Says:

    Scott #236

    Well, this was a follow-up on your comment #32 in “Quantum Information and the Brain”, which I understood as a statement that you had an answer to this question. If I misunderstood, too bad. I’m not sure what to answer for your last scheme, as the question was not defined; but for your original question in Chapter 18, I’m SSA, conditioned on God having told the truth about what he did.

    While I’m at it: three comments and one question on this chapter 18:

    1) I like very much the “student” version of God’s coin toss: God either created 1000 little red men, and will give them a cookie if they push a button, or created 999 little green men plus 1 little red man, and will punch them in the face if they push a button. You wake up as a little red man; what do you do? Of course you should push the damned button.

    2) There is a very simple point against the Doomsday argument that you didn’t mention. As with every Bayesian scheme, you have to trust some hidden hypothesis. Here the whole scheme is conditioned on the number of living humans growing exponentially. That can’t be justified on theoretical grounds, and actually the demographers predict that the current “exponential phase” will turn into a sigmoid over the next few decades.

    1 + 2 => SSA does not imply acceptance of the Doomsday argument. SSA means you accept Bayesian rules. The Doomsday argument means you accept both Bayesian rules and Bostrom’s belief that everything grows exponentially.

    q) May I interpret the appeal to the dice room as a very clear indication that the problems you may see with Bayesian reasoning are nothing specific to copies?

    Suggestion for blog post: “Buy my book and then ask me anything”. 😉

  239. Scott Says:

    jh #237:

      A trivial extension of your Observation 3: there exists c such that

      BB(n+c) > BB(n) + 0.49·log₂(BB(n))

      Just take the BB on n states and supplement it with keeping track of the positions of the leftmost and rightmost visited cells and the current head position. Then apply your argument.

    Thanks! I had the same idea, but I couldn’t immediately see how to make it work. The issue is this: if our alphabet consists only of 0 and 1 (and we have only one tape), then how exactly do we keep track of the leftmost and rightmost visited cells, without using some sort of self-delimiting code, which might add a number of states to our TM that grows with n (like n or at least log(n))?

    We could try to do something like this: encode each “0” cell as “00”, and each “1” cell as “10”. And use “11” to signal the start or end of the visited part of the tape. The problem is that the logic needed to handle the “even-numbered squares” actually seems to double the number of states! I’d be very interested if you or anyone else had a solution to this problem.

  240. Ungrateful_Person Says:

    I cannot resist, though I know I missed the deadline. So feel free to skip answering this question.

    How on earth do you find time to be this entertaining? 😉

    On an unrelated note, I wanted to learn how you managed your time as a student. I am kind of hopelessly addicted to blogs like yours, Luca’s, and Lipton’s. And add to that Facebook and now Quora (shudders down my spine).

    So, to cut it short, how did you manage your time as a grad student between assignments, research, and everything else?

    I promise, I will be grateful 🙂

  241. Pseudonymous Says:

    Scott #225:

    Here’s a minor improvement to your bound — Let X be the most frequently occurring state in the nth Busy Beaver. Define two new states X_1 and X_2 s.t.

    i) If the TM enters state X, it transitions to state X_1 and moves one step to the right.
    ii) If the TM is in state X_1, it transitions to state X_2 and moves one step to the left.
    iii) The transition function for X_2 is the same as that for X in the original TM.

    It is now clear that BB(n+2) > BB(n) + 2*BB(n)/n. I have no idea how tight the bound is though — This is just the first idea that popped into my head 🙂

  242. Scott Says:

    Sniffnoy #166:

      Which would be more surprising: If PH were to equal PSPACE, or if the polynomial hierarchy were to collapse without PH being equal to PSPACE?

    This is another question that hit the dreaded “thought-requiring intermediate zone”! (See comment #225.)

    Let me first rephrase the question in a way I find easier to think about: what’s my estimate for

    Pr[ PH=PSPACE | PH collapses ]?

    Of course, we could ask the following closely-related and better-known question: what’s my estimate for

    Pr[ P=PSPACE | P=NP ]?

    For both of these questions, the central difficulty is that the hypothesis is something I consider so unlikely that I have a hard time forming “reasonable judgments” about a hypothetical world where it held. (Compare: “if aardvarks suddenly started speaking fluent English, would they also want to learn math and science?”)

    On the other hand, we do have some past experience with a surprising thing happening to NP, and then the surprise “percolating all the way up to PSPACE.” E.g., coNP⊆IP was followed within less than a year by P^#P⊆IP and then by IP=PSPACE. Likewise, the existence of a computational zero-knowledge protocol for NP then percolated up to all of PSPACE being in CZK assuming the existence of one-way functions. In short, while PSPACE seems like a pretty secure “containment facility” for complexity-theoretic surprises, the same can’t be said for NP or PH.

    So, based on that experience, I’m inclined to say that if P=NP, then “all bets are off” and the probability of P=PSPACE is certainly far from negligible (maybe 1/2? I don’t know). Similarly, if PH collapses then the probability of PH=PSPACE is far from negligible.

    (Incidentally, we could also ask both of these questions with P^#P in place of PSPACE, though for me it wouldn’t change too much.)

  243. Parsed-AMA Says:

    Hello everybody!

    Thanks Scott for doing this.

    I’ve written a quick throwaway script to parse the page and pull out only the questions and replies. I’ve already noticed a bad bug (it only pulls out the paragraph tags of each reply, so it misses e.g. the lists in the reply to #3), but I’m on my way out for dinner so can’t fix it at the moment. If there is any demand I can do this later, but I’ve uploaded the script and the output to GitHub as a gist, so perhaps somebody else can do with it as they wish.

    AaronsonAMA-output.

    It might be best to click the view raw button (two angled brackets, looks like <>) to view the output.

  244. Scott Says:

    Curious Wavefunction #11:

      What’s your favorite interpretation of quantum mechanics?

    What I’ve told people for years is that I agree with every interpretation of quantum mechanics, to the extent it criticizes the other interpretations! 🙂

    What I meant by that is that I get all the usual arguments and counterarguments—and once you get them, in some sense there’s nothing further to understand in this subject. The rest is just more or less entertaining name-calling (David Deutsch: “If you refuse to accept the clear implication of the quantum formalism that many worlds are real, then you’re a coward—no less than the Churchmen who insisted that math is one thing, but clearly the Earth doesn’t really revolve around the Sun!” Lubos Motl: “If you deny the central insight of Bohr and Heisenberg—that science is about predicting the results of observations, and the rest is meaningless metaphysics—then you’re an imbecile, trying to force Nature into the mold of your own limited intellect!”).

    Anyway, that’s what I think about the spectrum between the “Everettian realists” and the “Copenhagen instrumentalists.” Other interpretations—like deBroglie-Bohm, or Cramer’s “transactional interpretation”—I think can be more-or-less rejected, on account of having too much that’s contrived and arbitrary about them, or (worse) drawing all their plausibility from simple examples involving one or two particles, which don’t show the full range of what quantum mechanics actually entails.

    Bohmian mechanics at least has value in that it tells an interesting story on top of QM—and the very fact that it’s possible to tell such a story, and have it reproduce all the predictions of QM, is interesting. But an underappreciated point is that Bohmian mechanics is just one of an infinity of similar stories one could tell, all of which reproduce the quantum-mechanical Born probabilities, and which are therefore empirically indistinguishable from each other. (On the other hand, if we believe that Nature’s true Hilbert space is finite-dimensional, then none of these stories can possibly be “deterministic” in the sense that Bohm’s is.) For more see my old paper Quantum Computing and Hidden Variables. For the “transactional interpretation,” or other accounts involving backwards-in-time causation, etc., I have a hard time seeing any redeeming features.

    Note that I’m not discussing (say) the GRW theory or Roger Penrose’s ideas, since I wouldn’t call those “interpretations” at all, but rather proposed modifications to QM that make novel physical predictions.

    I should add that, over the past year or so, I’ve become partial to the speculation that the solution to the quantum measurement problem might lie in cosmology—and in particular, in the profound fact (discovered in 1998) that we live in a deSitter space. On this view, just as the Everettians believe, there would be no special “measurement postulate” that comes in and modifies unitary quantum mechanics; the latter would apply universally. However, we would be entitled to say that a “measurement” had occurred, decoherence had become irreversible, a microfact had turned into a macrofact, etc., precisely when photons encoding the information about the event were “flying away from the scene of the crime” at the speed of light, toward our deSitter horizon, so that they could never again be collected together and recohered even in principle. Arguably, this view still wouldn’t resolve the “metaphysical” debate between the Everettians and Copenhagenists, which might be regarded as forever outside the scope of science. But it would say that we can at least give a principled criterion for when decoherence becomes “irreversible”—and better yet, a criterion that depends on the particular features of our universe (it wouldn’t work, for example, in an AdS space, which has a “reflecting boundary”).

  245. Scott Says:

    Pseudonymous #241: Thanks, nice observation! As for “tightness,” it would astonish me if we didn’t have

    BB(n+1) > BB(n)^BB(n)

    for all sufficiently large n, or even (say) for all n≥8. The question is how to prove such a thing.

    In the meantime, it would be nice to improve your observation to get

    BB(n+1) ≥ BB(n) + BB(n)/n.

  246. Anupam Says:

    As for your upcoming D-Wave story, here is a link from Nature – http://blogs.nature.com/news/2013/05/quantum-computer-passes-speed-test.html – which I would love to hear your opinion about.

  247. internationalstudent Says:

    @Anupam Comment #246
    https://scottaaronson-production.mystagingwebsite.com/?p=1136

    “The quantum algorithm on which D-Wave’s business model is based — namely, the quantum adiabatic algorithm — has the property that it “degrades gracefully” to classical simulated annealing when the decoherence rate goes up. This, fundamentally, is the thing that makes it difficult to know what role, if any, quantum coherence is playing in the performance of their device. If they were trying to use Shor’s algorithm to factor numbers, the situation would be much more clear-cut: a decoherent version of Shor’s algorithm just gives you random garbage. But a decoherent version of the adiabatic algorithm still gives you a pretty good (but now essentially “classical”) algorithm, and that’s what makes it hard to understand what’s going on here.”
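
    (For readers who haven’t seen the classical side of that comparison: below is a minimal simulated-annealing sketch on a toy Ising-style instance, purely as an illustration of the kind of classical algorithm the quote says the adiabatic algorithm “degrades” to. It is my own toy code and has nothing to do with the actual D-Wave hardware or its problem instances.)

    ```python
    import math, random

    def energy(spins, couplings):
        """Ising-style energy E = -sum_{(i,j)} J_ij * s_i * s_j for spins in {-1,+1}."""
        return -sum(J * spins[i] * spins[j] for (i, j), J in couplings.items())

    def simulated_annealing(n, couplings, steps=20000, t_start=2.0, t_end=0.01):
        spins = [random.choice([-1, 1]) for _ in range(n)]
        e = energy(spins, couplings)
        for k in range(steps):
            t = t_start * (t_end / t_start) ** (k / steps)   # geometric cooling schedule
            i = random.randrange(n)
            spins[i] *= -1                                   # propose a single spin flip
            e_new = energy(spins, couplings)
            if e_new <= e or random.random() < math.exp((e - e_new) / t):
                e = e_new                                    # accept the move
            else:
                spins[i] *= -1                               # reject: undo the flip
        return spins, e

    # Toy instance: a frustrated triangle of antiferromagnetic couplings.
    couplings = {(0, 1): -1.0, (1, 2): -1.0, (0, 2): -1.0}
    print(simulated_annealing(3, couplings))
    ```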

  248. Remi Says:

    Too late for the party …

    Maybe this would be worth a future blog post?

    http://arxiv.org/abs/1212.4969

  249. Bill Kaminsky Says:

    Remi @ #248

    Too late for the party …

    Maybe this would be worth a future blog post?

    http://arxiv.org/abs/1212.4969

    Always ready to be wowed, I click on over and what do I see? Why, it’s…

    Polynomial time factoring algorithm using Bayesian arithmetic

    Michel Feldmann
    (Submitted on 20 Dec 2012)

    In a previous paper, we have shown that any Boolean formula can be encoded as a linear programming problem in the framework of Bayesian probability theory. When applied to NP-complete algorithms, this leads to the fundamental conclusion that P = NP…

    Now I’m the type of guy who, like Fox Mulder, has an “I Want to Believe” poster on his wall (e.g., http://noscope.com/photostream/albums/various/I_Want_To_Believe_01.jpg ). That said, alarm bells are ringing like crazy in my lil’ ol’ wanting-to-believe head:

    1) Feldmann buries his lede, only getting around to the fact that he proved P = NP(!!) in his abstract, mentioning merely an efficient algorithm for factoring in his title.

    2) Nowhere in his paper does Feldmann mention the massive amount of work on L(inear) P(rogramming) formulations of NP-complete problems and explain why all previous formulations have failed to give polynomial-size programs whereas his has succeeded. For example, he grievously omits comparison of his work to:

    http://arxiv.org/abs/1111.0837

    Linear vs. Semidefinite Extended Formulations: Exponential Separation and Strong Lower Bounds

    Samuel Fiorini, Serge Massar, Sebastian Pokutta, Hans Raj Tiwary, Ronald de Wolf

    (Submitted on 3 Nov 2011 (v1), last revised 18 Jul 2012 (this version, v3))

    We solve a 20-year old problem posed by Yannakakis and prove that there exists no polynomial-size linear program (LP) whose associated polytope projects to the traveling salesman polytope, even if the LP is not required to be symmetric. Moreover, we prove that this holds also for the cut polytope and the stable set polytope…

    3) Biggest Flaw of All: All would be forgiven (and not just by want-to-believers like me), of course, if Feldmann said: “Oh, and by the way, here are the factors for all the unsolved RSA Challenge Numbers! Choke on it, all y’all doubters!” (http://www.rsa.com/rsalabs/node.asp?id=2093) But does he? Nooooo! Adding insult to injury, Feldmann calculates how big a linear program would need to be solved in his formulation to get said challenge numbers, and his sizes are indeed within the current state-of-the-art of top-shelf LP solvers. So run the darn algorithm, for Pete’s sake! (To be specific, Feldmann claims he can rephrase factorization as an LP with roughly 15.7 million variables and 17.8 million equations for the RSA-1024 Challenge Number, and roughly 63 million variables and 71 million equations for its biggest brother RSA-2048. So, again, run the darn algorithm for Pete’s sake!)

    **************

    In closing:

    1) I demand that crackpots become more thorough!

    2) A question for Scott, if you’re still reading this thread:

    If you someday derived an algorithm that proved P = NP and moreover successfully tested it out on a test bed of hard problems (e.g., RSA Challenge Numbers, Random k-SAT in regimes guaranteed to make all local search heuristics fail, etc.), how would you unveil it to the world? Would you publish it immediately? Would you go to some science liaison office of some 3-letter US Government agency and say there’s a grievous threat now to “national security” and go along with the authorities if they want to keep it a secret as long as possible? Would you strut around MIT in a Superman costume, but with “P = NP” on your chest instead of the big red “S”? What would you do?

  250. Scott Says:

    Bill Kaminsky #249:

      If you someday derived an algorithm that proved P = NP … how would you unveil it to the world? Would you publish it immediately? Would you go to some science liaison office of some 3-letter US Government agency and say there’s a grievous threat now to “national security” and go along with the authorities if they want to keep it a secret as long as possible? Would you strut around MIT in a Superman costume, but with “P = NP” on your chest instead of the big red “S”? What would you do?

    The last of the three.

  251. Scott Says:

    Anonymous Programmer #141:

      How do you think consciousness and free will is related to quantum mechanics?

    Alright, let me address consciousness in this comment, and then free will in the next comment.

    Famously, the “hard problem of consciousness” seems “hard” because, for logical reasons, it doesn’t seem amenable to any scientific answer—neither with classical physics, nor with quantum physics, nor with any future theory that could ever be discovered. That is, let X be your favorite “scientific explanation for consciousness” in terms of physical causes and effects. Then why isn’t X equally compatible with a “zombie world,” which fulfills all the same causal relations, but where no one “really” experiences anything? For that reason, it seems impossible to have a complete scientific explanation for consciousness, any more than we can have a complete scientific explanation for why the universe exists.

    But of course, there’s nothing there that necessarily rules out a complete scientific account of all the physical correlates of consciousness. So then we can ask: is there any reason to think such an account would involve quantum mechanics? Well, I regard quantum mechanics as probably the deepest thing we know about the workings of the physical world, so the speculation isn’t a priori stupid. The problem is that almost all attempts to make the speculation more concrete haven’t even cleared the bar of respecting the most basic things we already know.

    In particular, I don’t see the slightest evidence, from any standpoint (evolutionary, physical, cognitive…), that the brain is a quantum computer in any interesting sense. I.e., to whatever extent one regards the brain as a computer at all, everything we know points to the computation being classical. But furthermore, even if it somehow were a quantum computer—or even, as Roger Penrose wants, a quantum-gravitational computer!—I don’t see how that would help in “explaining the mystery of consciousness,” which is usually the whole point of these sorts of speculations! To put it differently, a conscious quantum computer strikes me as precisely as mysterious as a conscious classical computer: just better able to break the RSA cryptosystem. 🙂 Indeed, that’s one of my main disagreements with Penrose.

    (Stuff like the movie “What the Bleep do We Know?”—which relates quantum mechanics not only to consciousness but also to ESP, the ability to channel 35,000-year-old dead people, etc.—strikes me as unworthy of a response. Penrose, by contrast, might be wrong on a dozen or more points but certainly deserves a response!)

    For more about all these things, see my talk on Quantum Information and the Brain, or Chapters 4 and 11 of Quantum Computing Since Democritus.

  252. Scott Says:

    OK, now I’ll try to address free will.

    Like consciousness, free will has an aspect that seems outside the scope of scientific investigation almost by definition. Are “you” the author of your choices? Well, what exactly do we mean by “you”? No matter what sequence of physical causes, random events, or even supernatural interventions led up to your making a particular choice, a skeptic could always claim that the choice wasn’t really made by “you,” but only by an impersonating demon that looks and acts like you.

    On the other hand, there’s another aspect of free will that’s perfectly within the scope of science. Namely, to what extent can your choices actually be predicted by an external entity constrained by the laws of physics? Obviously they’re at least partly predictable: advertisers, seducers, and demagogues have always known as much, and modern fMRI scans confirm it! But is there any limit to the accuracy of prediction? If there isn’t, then can at least the probabilities be predicted to arbitrary accuracy, as they can in (say) the case of radioactive decay? Or are they subject to Knightian uncertainty?

    Of course, even supposing your choices were unpredictable in the strongest sense imaginable, a philosopher could always say that that’s just a practical problem, and still doesn’t mean that your choices are in any sense “free.” Well, fine—but let’s at least try to answer the “straightforward empirical question,” of whether your choices are predictable or they aren’t!

    Now, it’s on this latter, predictability question that quantum mechanics might finally become relevant. Or it might not—we don’t know yet. But it’s at least conceivable that the No-Cloning Theorem puts some fundamental limit on how well you can learn the state of a chaotic dynamical system, like (say) a human brain, and that it might forbid you from making a second copy accurately enough to “instantiate a new copy of the person, with the same will” (whatever that means!). Again, I stress that I have no idea whether this is true or not; many people feel confident that the “classical, macroscopic, non-invasively measurable” degrees of freedom should already contain all the relevant information. If it were true, though, then as many others have pointed out, it could have all sorts of “applications” to resolving classic philosophical paradoxes involving brain-copying, by simply explaining what goes wrong when you try to set up the paradox (see comment #236).

    For more about the above issues, see my upcoming long essay on them (also referenced in comment #118), or Chapter 19 of Quantum Computing Since Democritus.

  253. Artem Kaznatcheev Says:

    Scott #227.

    Thank you for the response! I was worried that my question slipped through the cracks.

    I really like your comment about the central difficulty, but I think it stretches a little bit deeper than that. It seems like a matter of being comfortable with abstraction. Biologists seem to be fine with throwing away a lot of details in their computational or mathematical models; the whole field of evolutionary game theory (EGT) is an example of this in action. Almost nobody in EGT worries even a little about the complicated mappings between genotype and phenotype, or development, or many other aspects essential to the messy biology, and yet they seem to get along well with mainstream biology (the occasional argument over inclusive fitness versus group selection notwithstanding). However, all these models are meant as heuristics, connected to each other only through intuition and words. None of the biology models I’ve seen are ever intended as an abstraction, as a general treatment that has to be true of any system that meets certain prerequisites; instead they are heuristics, or “this could happen, how would you explain it?” sorts of models. I wonder how this can be overcome.

    It would be great to see a complexity-theoretic re-imagining of the foundations of evolution. I wonder what it will look like, and I hope I’ll get to see it! (or better yet, contribute to it :D)

  254. anonymous Says:

    Scott #29,

    You can actually take your third item up another level of abstraction: since we have no way of knowing the amount of sadness that would be induced by the events that God does prevent, a statement expressing anguish over what He allows is possible for any cutoff other than the maximum possible happiness level. But if we were at a constant maximum happiness level at all times, would we even perceive it, given that there is nothing to compare it to?

    That could be argued to be the saddest thing of all.

  255. Scott Says:

    anonymous #254:

      That could be argued to be the saddest thing of all.

    I still think that the child being forced to dig her own grave is sadder.

  256. Scott Says:

    RV #21:

      What are some of the primary differences between the cultures of the academic institutions you have been a part of so far?

    There are perfectly-real differences between the places; the problem is that those differences were often swamped by the differences in my own life circumstances! E.g., if I told you that people seemed more deferential to me when I was a professor at MIT than when I was an undergrad at Cornell, how much would that tell you about MIT vs. Cornell? 🙂 Still, let me try my best:

    The Clarkson School (Potsdam, NY): I had a wonderful year there—learned a lot, did my first research, and got to know real academics for the first time in my life. However, even though the other “Schoolies” (Clarkson School kids) were, like me, also “weirdos” looking for something better than high school, I regret that I was still too nerdy to relate to most of the other kids—unless they made a special effort, which I’m happy to say a few of them did.

    Cornell: My first year there, I was a hyper-nerdy 16-year-old with braces, thrown into a dorm with ag school and hotel school folks mostly interested in wild keg parties. You can imagine how well that went. 🙂 My second two years at Cornell, I lived at Telluride House, a “self-governing intellectual community” surrounded by frats who would drive their cars over our shrubs—after which we would hold a meeting and decide to write a strongly-worded letter to the frat’s national governing board. At Telluride, I’d sometimes get annoyed by the pretentiousness of literary types sitting around the dinner table discussing Derrida—but there were also scientists in the house, and I’m happy to say that one of them became a lifelong friend. (I also managed to get Ryan Williams admitted to Telluride, but he only started there the year after I left.)

    As I wrote in a previous comment, I idolized my professors (especially Jon Kleinberg) as gods—though that had at least as much to do with me as with them. Classes were large at Cornell, which prevented me from having as much personal contact with professors as I wanted (or as I’d had at Clarkson). But I was very lucky to fall in with Bart Selman, who helped guide me to grad school and to whom I owe a large fraction of my current success.

    Berkeley: Now I was in one of the coolest places in the country, but still not cool enough myself to fully take advantage of it! I lived at International House my first two years there, which reproduced some of the party atmosphere of the Cornell dorms, and where once again I found myself yearning to have a “normal college social life” but not yet able to. Then my second two years I lived off-campus, so for better or worse I’d mostly just have to interact with the other computer scientists in Soda Hall. 🙂 (I should confess that a large part of my memory of both Cornell and Berkeley, is the memory of unrequited crushes. At least, I think they were unrequited…)

    Meanwhile, I tried to imbibe as much wisdom as I could from Umesh Vazirani and the other professors there, like Luca Trevisan and Christos Papadimitriou—and a large part of that wisdom was simply their more-or-less relaxed attitude to life! Not worrying overly about deadlines, administrative requirements or other such details, always looking for the big picture (or the high-order bit, as Umesh put it). It might seem ironic for a student to look to his professors for models of how to chill out and take things easier, but there you have it. 🙂 There was also a tremendous feeling of community among the theory students, which meant a lot to me. I made several friends at Berkeley who I still have.

    For the political aspects of Berkeley, see this answer.

    Institute for Advanced Study, Princeton: IAS is a strange place — where almost everyone else was married with small kids, and where the wildest the social scene gets is little embossed cards asking members to RSVP, by paper mail, for a director’s wine-and-cheese social two months in the future. Where after ~6PM, you’re basically in the middle of a dark forest, with the closest restaurants an hour’s walk away and the main sounds those of deer.

    Still, I enjoyed spending eight months at IAS, especially because of the opportunity to learn from two of the greatest of the greats: Avi Wigderson and Sasha Razborov. But after that, I felt ready to move on and try one more time to have a social life.

    University of Waterloo: Looking back, Waterloo was the first place where I really felt happy and accepted for who I was—like a full-fledged member of the human race, rather than a nerd constantly seeking permission to exist, but being judged and found wanting. Again, I don’t know how much that had to do with Waterloo and how much with me! But I can say that people in Waterloo were incredibly friendly, in that way Canadians are. 🙂 Furthermore, while Waterloo is a “small town in a middle of nowhere” (well, an hour and a half from Toronto on the 401), I found that didn’t hinder my social life at all: quite the contrary! There might be only one or two places to go out at night. But if I went to those places, I’d run into other CS/physics people from Perimeter Institute and the Institute for Quantum Computing—who’d recognize me and introduce me to other people they knew. Imagine that: me, Scott Aaronson, having a “nightlife”! I even learned how to salsa-dance (I’ve since forgotten).

    Meantime, Waterloo of course was then and is now one of the world’s premier places for quantum information research, due in large part to the generosity of Mike Lazaridis. Admittedly, there was something strange about a formerly sleepy town whose scientific footprint was expanding so much, so fast—but it also produced a heady feeling, maybe a little like being at Los Alamos in 1942. In short, I look back fondly on Waterloo, both for scientific reasons, and because it’s the place where, after finishing my PhD and one postdoc, I finally started learning how to be human.

    MIT: Maybe being a professor is sufficiently different from anything else that I shouldn’t even try to compare MIT to any of the other places. But living in a major metropolitan center, while actually being old enough to take advantage of much of what it has to offer, has generally been good for me.

    Some say the atmosphere of MIT’s theory group is a little more buttoned-up and “East Coast” than Berkeley’s. But not too much more: I’m always happy when I see the grad students organizing outings and just generally hanging out together, the same way Berkeley students did. Also, it’s hard to take anything too seriously when you work in a building that looks like it was hit with a wrecking crane. Or when, in addition to suit-wearing types, you have Gerry Sussman as a living embodiment of the hacker spirit of the 60s and 70s, or Erik and Marty Demaine showing off their latest origamis and glass sculptures.

    I also love the undergrad culture at MIT, from what I’ve gotten to experience of it. (This Friday night, for example, Dana, Lily, and I went to dinner together at one of my students’ dorms, and a wonderful time was had by all.)

  257. Scott Says:

    Totient #139:

      How do you decide what is ethical or not? … (I know there’s a lot of philosophy devoted to the question, but I want to know what *you*, Scott Aaronson, uses as moral rules/guidelines/heuristics…)

    As you say, there’s an entire academic field devoted to how to behave ethically—typically in situations involving trolleys, switches, and differing numbers of people who you could either kill or allow to be killed through your inaction! 🙂 But I confess that, in practice, I haven’t found most ethical choices to be anywhere near that complicated. Usually, I find, all it takes to figure out (what I consider to be) the ethical course is to apply the Golden Rule, which I hereby offer in my personal version:

      Never be an asshole (except possibly to people who have already been assholes to you). Treat people as you’d want them to treat you (unless they’ve already demonstrated zero inclination to do likewise).

    As the above formulation suggests, I think being “ordinarily decent” is hard enough, and that’s the standard to which I hold myself and others! I don’t place on anyone the further, near-impossible demand that they “turn the other cheek” to those who have wronged them.

    But I’d go further: compared to most people I know, my moral instincts are pretty strongly retributivist. Every time an evil act goes unpunished, I feel like the moral equivalent of toxic sludge has been dumped onto the world. Every decent person is then a little less safe—and worse yet, less confident about his own choice to be decent. And every asshole is then a little bit more shameless. For that reason, I hold the punishing of bad acts—i.e., the administering of justice—to be an inherent good. For me, deterrence is not something that needs to be proved on a case-by-case basis, and rehabilitation (if it occurs) is just a welcome side effect rather than the main point.

    What else? Like most nerds, I naturally gravitate toward an ethic that values the telling of uncomfortable truths over sparing people’s feelings or maintaining social cohesion. (Indeed, that could arguably be taken as the definition of a nerd!) However, these days I do often spare people’s feelings, simply because I realize that I live in a society where that’s usually expected. Before telling someone (what I see as) an uncomfortable truth, I might try to gauge whether the person can handle it.

    Finally, one type of ethical conflict that I do face pretty often is that between following the rules (in the form, let’s say, of various MIT policies), and breaking the rules to do what obviously seems to make more sense for the students or others affected by my choice. While it sounds overly dramatic, I confess that, every time I’m confronted with that kind of choice, I literally imagine myself as some mid-level Nazi bureaucrat sitting at his desk in Berlin, filling out paperwork about the latest transport of Jews to Treblinka, and stamping the documents as per official policy. And then I choose to follow the rules, only if I find that the choice can survive that reflection. 🙂

  258. Scott Says:

    Juan #127:

      What do you think of David Deutsch’s Constructor Theory?

    As was pointed out earlier, I’m under no “official obligation” to answer this question, since it violates rule 3. However, I did read Deutsch’s paper yesterday, and I think it probably merits a blog post of its own! So I’ll try to write such a post sometime within the next few weeks.

  259. Scott Says:

    And that’s it! Unless I missed something, I believe I’ve now answered all outstanding questions, a mere 4 days after the official end of the “Ask Me Anything”!

    Thanks so much to everyone for the great questions, and let’s do this again in another 2 years or so!

  260. Curval Says:

    I believe you missed Comment #23.
    I’d be interested to read your answer to that question.

  261. Scott Says:

    Curval #260: Yes, sorry, you’re right, I did completely miss the question of Alejandro #23—which related his observations about math/CS classes in Chile being much harder than the analogous classes in the US, and asked for my thoughts about that. Here are four thoughts, in no particular order:

    1. I have no experience with the education system in Chile, but nor do I have any great problem believing what you say. We’ve known for decades that precollege education in the US lags woefully behind most of the industrialized world. So it wouldn’t be a tremendous shock if the lower standards percolated up to the undergraduate level as well.

    2. In the US, millions of students go to college who very arguably shouldn’t. These students leave exactly as prepared for the rest of their lives (career and otherwise) as they were when they came, having gained nothing except four more years’ experience boozing, partying, and getting laid. Admittedly, these students tend to congregate in the “soft” majors, but they probably have some effect on dragging down the average even in math and science classes. I don’t know whether things are similar in Chile or elsewhere. But if we want to compare two countries’ college students, then it seems crucial to normalize for who’s going to college at all (and also, not to compare one caliber of college in Country A against a totally different caliber in Country B).

    3. Anecdotally, many people have described the average American college student as lazy, entitled, unmotivated, and incurious. Now, it’s not my purpose here to decide whether those stereotypes are true. Instead, I simply want to point out that, if they were true, then that might be great news for American students! For it would suggest that we can explain the American students’ lower performance compared to their foreign counterparts, without needing to invoke the hypothesis that American students are intrinsically less intelligent.

    4. While, as I said, I’m ready to believe that undergraduates in Chile (or almost any other country one wants to name) run laps around their counterparts in the US, that hypothesis does raise an obvious difficulty. Namely, anyone who accepts it then needs to explain why, within (say) the last few decades, Chilean universities haven’t so far outpaced American ones that the best students from India, China, etc. all want to flock to Chile. Do those students simply not know of the Chilean universities’ superiority—is the lag time too long? Do they know, but still choose to go to the US because other people don’t know? Or is the issue simply that, compared to the rest of the world, US universities tend to be “high-variance”—with a huge undertow of unmotivated, poorly-performing students, but with the best students and faculty still the envy of the world? I don’t know, and would be interested if anyone else had thoughts.

  262. internationalstudent Says:

    @Comment #261 May 12th, 2013 at 12:23 am

    I too had the same opinion as the Chilean when I came in. However, I have since changed my mind.
    1) The people who study and grow will study and grow no matter where they are, and in the US the opportunities are greater because the ‘culture’ is better. There is an ‘intellectual’ infrastructure here.
    2) Regarding Chinese and Indian students coming here, that is a whole different story. They want to escape third-worldedness, and they are told that if they study in the US they will be given a green card, treated as special, and be able to help their home country as heroes. Only after coming here do they find out about the immigration issues and get stuck as indentured half-slaves. There are some who want to learn, but most of these too get attracted by the quality of life in the US and become indentured as well.
    3) One example comes to mind. Stephen Smale failed his exams at the graduate level but went on to win the Fields Medal. At the undergraduate level, maybe the Chilean students perform well. That does not necessarily mean they have anything ‘new’ within them; they may grow to master the rules. As Grothendieck put it, there are people who master many things instantaneously, but in the end, if he looked at their lifetime work, he could hardly say whether there was anything new. Creativity needs an environment and a culture for the individual to grow. Where else is this environment better found than in the US? (Grothendieck was speaking about the individual; one can extrapolate it to a culture.)

  263. Rahul Says:

    The biggest flaw in the Chilean’s reasoning is that he’s probably comparing the very top cohort of the rest of the world with the average American. There’s a very strong selection bias here.

  264. internationalstudent Says:

    @Rahul #263
    The Chilean compares UCSD with Chile. UCSD is a good school. American students, as far as I understand, have to take a lot of breadth classes, and the majority are unable to dive into the field of their choice early on.

  265. A Says:

    I’d guess that the percentage of students at UCSD with real hands-on experience is greater than in Chile. And that while school is a lot easier, the material sinks in better (because students are taken more by the hand), so in the end people don’t know that much less. The very very top are also probably better than the very very top at a similar university in Chile, and have more room to be creative. And the entrepreneurial aspect is probably better. That probably explains why the overall end results in the American environment are better than the ones in the Chilean environment by many metrics, but the Chilean guy sees what he sees.

    It’s funny the things Scott puts in the nerd category. I grew up identifying as such, and would never have put something like “telling uncomfortable truths” there… when I think of that I think of Dr. House, who is not someone I’d consider a nerd.

    And re: being sad with “The near-complete destruction of the civilizations of the Americas on contact with the West.”, it would be more practical at this point to be concerned with the progressive erosion of cultures around the world caused by the contact with a dominant North American culture (maybe that will be a chapter for “The Workings of the World’s Economy for Theoretical Computer Scientists”).

  266. internationalstudent Says:

    Some of my friends, who had better access to information thanks to relatives in other nations, moved to those nations (like Canada) and are doing much better than an average MS student at a very top school in the US, who still struggles for fair treatment after a decade. So, for an average grad student from India or China, it is not so much the fact that we know the schools in the US are better that attracts us, as the prospect of a better quality of life. We do know that some schools like MIT, Stanford, CMU, and Berkeley are good. But as undergrads, we hardly know any of the things that professors like you do. And even the students who do go there do not all get great treatment, unless they land up as professors and get a GC.

    As you say, these ‘best’ students flock to the US, maybe to schools like MIT, and become great names like your own advisor; but for the rest, they would have been better off if they had moved to a different country to escape third-worldedness. Coming to the US has for the most part been a tragic comedy in most immigrants’ lives.

  267. Rahul Says:

    @internationalstudent

    for the rest, they would have been better off if they had moved to a different country to escape third-worldedness. Coming to the US has for the most part been a tragic comedy in most immigrants’ lives.

    Pray, what countries may those be? I agree that the immigration system sucks in the US compared to, say, Canada or Australia, but OTOH the potential rewards are higher too (larger job markets, higher wages, closer to the center of the action, etc.)

  268. internationalstudent Says:

    @Rahul:
    There are immigrants standing for mayor in those countries. Here, too, what jobs are there other than the grunt tech ones that locals do not want to do? Sure, if you want to get stuck in tech, lose your job 10–20 years down the line when you are old and your skills have deteriorated, and then, after having contributed enough to the local social security, be asked to leave, then yes, the US could be the place. And how many of those jobs are there, and how many people are asked to go back after investing their youth in and contributing to the local economy? At the end of the day you need a place that you can call home. After a decade or so of investment, paying lawyer after lawyer to stay and suffer (in case you lose your job or want to sponsor your loved ones), you may very well be better off starting a simple life somewhere else, understanding the local markets, and growing. You may not make it big (you do not make it big here either, except by working 12 hours a day or more), but at the end of the day you can enrich a country that you can call your home some day down the line.

  269. Scott Says:

    Actually, internationalstudent raises a good point. If I lived in a third-world country and was looking to get out, I would definitely research Canada, Australia, and the UK (…and the Netherlands, Sweden, France, and maybe some other places), each of which has fine universities, and which might treat me better than the US would.

    (An anecdote: When I spent a summer at the Perimeter Institute in Waterloo in 2004, people joked that they should donate to George W. Bush’s reelection campaign, since the US’s hassling of immigrants was such a boon to Canada’s recruiting of scientists.)

  270. Rahul Says:

    @Scott:

    One can definitely research them, and in individual cases these other nations may even be a better option. But IMHO, on net, despite all its failings, the US is still the most attractive option, and that isn’t just hype or ignorance.

    There are factors to this besides immigration law alone. As a third-worlder who’s traveled not a lot but a bit, I’d say the US people remain the most welcoming to an immigrant, and the country remains the easiest to assimilate into.

    Some nations like Canada might do better in immigrant-friendliness, but not many. Definitely not France or the Netherlands. The UK has traditionally attracted hordes of immigrants, but lately, with the faltering economy etc., it isn’t that great.

    Again, it’s always wise to research all options, but if anyone paints a gloomy picture of the US scenario, I think he’s lying. The US remains one of the best hosts for a third-world immigrant, and I think the numbers speak for themselves.

  271. wolfgang Says:

    Scott, I am aware that I am past the deadline, but (while reading your book) I came up with the outline of a strategy on how to attack P vs NP and I feel the need to share it 😎

    It is based on the observation that captchas are difficult for computers to solve, but it is also difficult for humans to come up with new captchas.

    So we define two classes of machines: H and C.
    H contains certain hardwired components (e.g. eyes), which you would call oracles, which make it easy to solve certain problems. The C machines contain other oracles (fast multiplication units etc.) which make other problems easy to solve.

    The problem for the H’s is to find algorithms which run fast for H but slow for C; the problem for the C’s is to solve the problems posed by the H’s, by using transformations which take advantage of their own oracles.

    Now I assume that if P = NP it would be both easy to find good captchas AND easy to solve them, which is a contradiction.

    Obviously, we don’t want to think here about the special case of real captchas currently used on the internet, but about the general problem, involving two or more classes of algorithms posing problems to each other, as described above.

    As you are aware, I know nothing about CS except what I read on your blog and in your book, so I don’t know if/how such an idea could be made rigorous, or if somebody is actually already working on something like it.

  272. wolfgang Says:

    Btw for BB(n) it should be easy to show that BB(2n+k) > BB(n)^2, and in general BB(m*n+k) > BB(n)^m, with a (near-)constant k.

    The idea is that a program of length 2n + k could contain 2 busy beavers of length n plus some “cheap” switching logic (which is where the k comes from).
    The switching logic lets the 1st BB run until it halts, then it allows the 2nd BB to advance by one tick, then the 1st BB runs again until it halts, etc. (nested loops of BBs).

    This idea can be generalized further of course …
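
    To make the counting explicit, here is a rough sketch (it assumes the switching logic can reset the inner machine’s tape between runs for free, which is exactly the part that needs care):

    ```python
    def nested_run(inner_steps, outer_ticks):
        """Step count of the nested construction: before each tick of the outer
        machine, the inner machine is re-run from scratch until it halts."""
        steps = 0
        for _tick in range(outer_ticks):
            steps += inner_steps   # full re-run of the inner busy beaver
            steps += 1             # one tick of the outer busy beaver
        return steps

    # With both machines being the n-state champion (S = BB(n) steps/ticks each),
    # the total is S*(S+1) > S^2, matching the claimed BB(2n+k) > BB(n)^2.
    S = 10
    print(nested_run(S, S))   # 110
    ```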

  273. Scott Says:

    wolfgang #271:

      Now I assume that if P = NP it would be both easy to find good captchas AND easy to solve them, which is a contradiction.

    Err, why couldn’t we avoid the contradiction by simply saying that if P=NP then no captchas exist to be found?

  274. wolfgang Says:

    Scott #273

    >> if P=NP then no captchas exist to be found?

    but this would mean that, no matter what the difference in capabilities (the “oracle content”) between H and C, they could both solve all problems with the same efficiency.

    This is very counter-intuitive and therefore I thought it might be a way to find a proof in this direction …

  275. Scott Says:

    wolfgang: The closest formal analogue I know of to no CAPTCHAs existing is no one-way functions (OWFs) existing. And yes, it would be “counterintuitive,” but ruling it out is only strictly harder than ruling out P=NP!

    Regarding your Busy Beaver idea: yes, you’re right. In fact it’s not hard to see that for every computable function f, there exists a constant c_f such that

    BB(2n+c_f) > f(BB(n)).

    I conjecture that this can be improved to

    BB(n+log(n)+c_f) > f(BB(n))

    or even

    BB(n+c_f) > f(BB(n)).

    But for the last improvement, as I wrote in my Observation 4, we’d need some clever idea for how to simulate the n-state Busy Beaver in a self-delimiting way.

  276. ñoño Says:

    @ Alejandro #23:

    I’m not Scott, but I am an American physics student who studied at a prestigious US university and did an exchange in Chile (at la Universidad de Chile), so maybe you will be interested in my experience. I found my Chilean class in Mathematical Physics quite challenging, but after accounting for language barriers and my relative lack of motivation to study instead of meeting people and traveling, I think it was probably comparable to my native university. In general, a major difference is that you guys typically take 6 years of undergraduate study, and as a result my impression is that Chileans usually come into uni with less knowledge than Americans but often graduate with more. That said, since you are graduating at the age of 23 or 24, an American at this point might be a year or two into graduate school already, so judged at equal ages the American might be more knowledgeable. I’ll let you decide which way of measuring is most appropriate, but either way I think it is fair to say that the top schools in Chile are at least competitive with the top ones in America in undergraduate education, and I hope to see this translate into stronger basic research programs in your country as well over the long term.

  277. Alejandro Says:

    Thanks for answering my question, Scott. However, I wasn’t expecting an “American vs. Chilean universities” point of view, because I know from experience that that discussion ends up being religious and biased, and it leads nowhere. However, I did expect to get your opinion on the state of undergraduate education in the U.S., and it seems to be aligned with the impression I got while studying there.

    I would say that here in Chile, only the top 4 or 5 universities are comparable to top U.S. universities in terms of how much knowledge and skills students gain until they graduate. After those, we have a bunch of institutions of the kind “if you pay enough, you get a degree” with very poor standards.

    In reply to ñoño’s comment #276: not everybody studies for 6 years; that depends on the university and the program you choose. I studied for a Bachelor’s degree in Mathematics, which takes 4 years, and similar degrees in science take 4 years as well. Engineering takes from 5 to 6 years depending on the university (that includes a “BSc in Engineering” and some kind of specialization, and also a thesis). Medicine takes 7 years, and 2 or 3 more to get a specialization.

    That said, I was comparing what I was studying, which takes 4 years, with an equivalent program at UCSD, also lasting 4 years, and I was shocked by the differences. My classmates in algebra were in their 3rd and 4th years of a Mathematics major, but were unable to prove very simple properties (e.g. uniqueness of the identity in a group), and were generally unfamiliar with the concepts of “proof” and “logical consequence”. In contrast, I was asked to prove properties from my very first midterm in my very first course, so in time, proving or finding counterexamples becomes second nature for Math students. I’m not saying it’s necessarily better here, but it sure is different.

    Finally, in reply to A #265: you based your entire comment on a guess, saying that stuff is “probably” better by “many metrics”, of which you mention none. This is why I didn’t want the discussion to become a contest, because it touches a nerve, and then people take it personally and stop reasoning.

  278. Alejandro Says:

    Oh, and one more thing: I *did* acknowledge that graduate education is generally superior in top U.S. universities, and that would be why flocks of students from many countries go there (most go for grad school).

    But also, there’s the language barrier and the money thing. Most people learning a second language will go for English, period. And as for the money, people go to the U.S. attracted by the possibility of what’s arguably a “better life” for them, and we just can’t compete with that (same for Canada, Europe and others). Our current political and economical situation is great for investors with big pockets, but not for scientists or students.

  279. Itai Bar-Natan Says:

    On the BB problem: I have an idea for proving BB(n + O(log n)) > f(BB(n)) for any computable function f. The idea is this: for any n, there are roughly n^(2n)*4^n/n! Turing machines with n states, taking into account permutation symmetry. Similarly, there are roughly (2n)^n/n! finite-state machines with n states. Now, for this to work I need a lower bound of the following form: There are at least (2n)^n/(n!*a^n) n-state FSMs which always terminate when given a string which is eventually all 0s. (A weaker bound will do here.) My proposed Turing machine uses n+O(log n) states to simulate such a FSM with n+O(log n) states. An additional finite number of states are used to do the following: First, a series of experiments are run on the finite state machine to find which FSM it is according to some indexing system. Second, the index is used to find a corresponding Turing machine with n states. Third, that Turing machine is simulated and its running time calculated. And finally, the function f is evaluated on the running time.

  280. internationalstudent Says:

    @Alejandro
    Comment #277 May 12th, 2013 at 4:14 pm
    Comment #278 May 12th, 2013 at 4:29 pm

    I would think there are some differences.
    First: Students are not admitted just based on GPA and SAT scores. A significant portion of students get admitted based on a broad set of factors in American schools. Even at top schools like UCSD this is the case, and these students, if they have the interest, can choose Math as a major. These students are also given the opportunity to pursue graduate school if they are good enough in even a few classes to impress their professors and hence get good recommendations. In Asian schools (I am unsure about Chilean schools) your exam scores are the only license to get you in. So we may be getting a bunch of nerds at the top schools who are sharp at studying and mastering rules.

    Second: The number of breadth classes a student needs to take is much higher in the US. Typically, the number of major-related courses is much higher in India/China compared to the US. US schools do not usually go for uniformity; they go for exceptions. So even though, on a day-to-day basis, you may meet rather average-seeming (from your viewpoint) students at the top schools, there are a few who are no doubt exceptional (from your viewpoint).

    Third: A major task for professors at Asian universities is to teach; a major task for professors at American universities is to research. Back home, we get professors who sometimes guide us almost line by line (spoonfeed us) until we can stand up. Here, students get broad guidelines right from day 1 as freshmen; the students themselves have to swim and struggle through and grow. TA sessions and professor sessions help only so much. Not many hit the right note early on, but they may still be qualified enough to impress in a few select courses and get to graduate school with good recommendations, good GREs, and decent GPAs. Those who hit the right note early on do achieve a lot (true even at the high school level; that is why you see even some high school students mastering algebraic geometry at the graduate level).

  281. Scott Says:

    Itai #279: REALLY cool idea! Yes, it seems plausible to me that one could store n log(n) bits in an n-state Turing machine M, and then retrieve those bits via a sequence of experiments that used M essentially as an oracle. (Especially since, in this game, we get to design M however we want: for example, it could be an FSM, or an FSM of a restricted type, but it could also be something more complicated.)

    In fact, if this worked, then conceivably it would even yield

    BB(n+O_f(1)) > f(BB(n))

    for any computable function f.

    Of course, to show such a thing we’d need to design a suitable coding scheme and show that it worked.

  282. Itai Bar-Natan Says:

    I’ve thought more about this. First, I made a mistake. What this method actually establishes is that BB(n+O(n/log n)) > f(BB(n)). I also wrote in my description (2n)^n where what was intended was n^(2n). Second, I got the method to work: Consider the following FSM. Its states are labeled 0 to n. The starting state is n, and the halting state is 0. The FSM decrements its state when given a 0 bit. When given a 1, the FSM changes state i->f(i) according to some arbitrary function f:{1…n}->{1…n}. There are n^n such FSMs, and it is possible in general to figure them out by introspection. By the way, I suggest calling such Turing machines introspectable Turing machines.
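
    To make the introspection procedure concrete, here is a quick sketch in code (the particular way of driving the FSM is just one choice, not essential to the argument):

    ```python
    def make_fsm(f, n):
        """FSM with states 0..n: state 0 halts, a 0-bit decrements the state,
        and a 1-bit applies the hidden map f: {1..n} -> {1..n}."""
        def run(bits):
            state = n
            for b in bits:
                if state == 0:
                    break
                state = state - 1 if b == 0 else f[state]
            return state
        return run

    def introspect(run, n):
        """Recover f by driving the FSM into each state i, feeding a 1,
        then counting how many 0s it takes to reach the halting state."""
        recovered = {}
        for i in range(1, n + 1):
            prefix = [0] * (n - i) + [1]   # n-i zeros reach state i, then the 1 jumps to f(i)
            k = 0
            while run(prefix + [0] * k) != 0:
                k += 1                     # halts exactly when k = f(i)
            recovered[i] = k
        return recovered

    # Example: hide an arbitrary map and recover it purely by observation.
    n = 5
    f = {1: 3, 2: 5, 3: 1, 4: 4, 5: 2}
    assert introspect(make_fsm(f, n), n) == f
    ```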

    I have been thinking about how to improve the constant factor here. The information in Turing machines is given by log(#TM(n)) = n log n + (1 + 4 log 2 + o(1)) n. (Here and in the rest of my comment the natural log is used.) The information in an introspectable Turing machine in my system is log(#ITM(m)) = m log m. This means that we need m = n + (1 + 4 log 2) n/log n to encode an n-state Turing machine with an m-state Turing machine.

    However, there is a simple way to improve this: have the FSM place an arbitrary bit while moving onwards, depending on its state and its currently observed bit. This improves the information content of an introspectable TM to log(#ITM(m)) = m log m + 2 log(2) m.

    In addition, there is another way to improve the constant factor: reduce the number of Turing machines which need to be encoded. For example, a Turing machine which has an unreachable state cannot be a Busy Beaver machine. Any given state in a random Turing machine has an e^(-2) chance of being unreachable from any other state. The chance that all n states are reachable from at least one other state is roughly (1-e^(-2))^n, and this reduces the information content of a Turing machine to (log n + 1 + 4 log 2 – log(1 – e^(-2))) n. It should be easy to reduce this even further by demanding that not only is every state reachable from some other state, but that every state is graph-theoretically reachable from the starting state. However, I don’t have the necessary combinatorics knowledge to do that. After that, it is possible to improve further by demanding that every state is reachable in the actual execution of the TM, that no set of states creates an unavoidable infinite loop, that no two states have identical behavior, etc.

    Overall, I have a bound BB(n + (c + o(1)) n/log n) > f(BB(n)), where c = 2 log 2 + 1 – log(1 – e^(-2)) ≈ 2.53. Interestingly enough, the two methods for improving the constant factor are in a sense providing upper and lower bounds for the number of possible behaviors of a Turing machine.

  283. Mathematician5 Says:

    Scott #71, it’s just as well that you have acknowledged that quantum theory might be wrong, because quantum theory has been experimentally falsified by my cat! 😉

  284. Scott Says:

    Mathematician5: That’s fantastic! Has your cat published this remarkable result, or at least posted a preprint to the arXiv (or purrXiv, or whatever)?

    I assume you don’t simply mean that your observing your cat in a definite position state has falsified quantum theory? After all, a third party who accepted standard QM would just say that you entangled yourself with your cat (and much of the rest of the universe)! 😉

  285. Mathematician5 Says:

    My cat merely flipped a coin and it came up something definite (heads or tails*). That’s good enough for me. Of course I realize that some people won’t be convinced by an experiment; they’ll only be convinced by a theory.

    * As it happens it came up heads. Prior to the coin flip my cat had decided to publish it if and only if it came up tails.

  286. Scott Says:

    Mathematician5: Ah, so you did mean what I feared you meant!

    The question you then have to answer is: why isn’t this just like saying, “I observed my cat standing still rather than spinning round the earth’s axis at enormous speed, therefore Copernicanism is falsified?”

    (For my own attempt to answer that question, see here.)

  287. Mathematician5 Says:

    I recommend you read my cat’s paper then. 😉

  288. wolfgang Says:

    Is this about the “Cheshire cat grin” paper?
    arxiv.org/abs/1305.2519

  289. Douglas Knight Says:

    Scott 257: “For that reason, I hold the punishing of bad acts—i.e., the administering of justice—to be an inherent good.”

    This seems to me like a strange use of “inherent.” If it has a reason, the goodness isn’t inherent. It sounds to me like this is a factual disagreement you have with most people you know, about the costs and benefits of punishment, or the costs and benefits of debating punishment, not about the inherent moral qualities.

    Maybe no one would qualify as counting punishment of bad acts as “inherently good” by my definition, but I think a lot of people hold that punishment mainly benefits the victim, not society. I think they have a better claim to this description, and putting them on the scale makes you look fairly close to the people you know.

  290. Lou Scheffer Says:

    Jay #191/Scott #192,

    Extracting neural circuits suitable for simulation from biology is my day job. Barring some extreme and unanticipated breakthrough, simulating a human-sized circuit is well out of the question in the next 10 years.

    The current state-of-the-art technique is freezing a piece of the brain, embedding it in plastic, slicing it very thin (a variety of techniques are possible), then imaging it in electron microscopes. A large project today, taking person-years to complete, is to reconstruct 7 columns from the circuits behind each lens of the fruit fly’s eye. This is roughly 1% of the medulla, which is roughly 40% of half of the brain of a fruit fly. About 20 fly brains would fit in a cubic mm, and you need about two million mm^3 for a human brain. Of course we are frantically improving our techniques, but even at a factor of 2/year the human brain is quite far out.
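
    (To make “quite far out” concrete, here is the back-of-envelope arithmetic using just the rough figures above; nothing here is a precise measurement.)

    ```python
    import math

    fly_brain_mm3   = 1 / 20          # ~20 fly brains per cubic mm
    medulla_frac    = 0.40 * 0.5      # medulla ~ 40% of half the fly brain
    project_frac    = 0.01            # the 7-column project ~ 1% of the medulla
    project_mm3     = project_frac * medulla_frac * fly_brain_mm3   # ~1e-4 mm^3

    human_brain_mm3 = 2e6             # ~two million mm^3

    ratio = human_brain_mm3 / project_mm3      # ~2e10
    doublings = math.log2(ratio)               # ~34, i.e. ~34 years at a factor of 2/year

    print(f"scale-up factor: {ratio:.1e}, doublings needed: {doublings:.0f}")
    ```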

    The zebrafish technique (http://www.youtube.com/watch?v=KE9mVEimQVU from the guy right down the hall) is super cool, but not yet suitable for extracting neural circuits. They get one picture each 1.5 seconds, so they cannot use cause and effect to deduce detailed connections. The dyes, although faster than the camera, are also too slow. It’s not even clear you *can* get the rates you’d need to get the circuits this way – the photon collection efficiency is already pretty good, the best known dyes are being used, and you can’t crank up the light too much more without cooking the fish.

    And even assuming you can get the netlist with one of these techniques, or techniques yet undiscovered, you need more info that we don’t know how to collect before you can simulate. This includes which neurotransmitters and receptors are used by each of the cells in the system. We are just experimenting with techniques to get this data, and can only do it in the fruit fly due to the existence of huge libraries with single cell types called out via genetic markers. These don’t exist for other animals, and breeding the needed libraries takes thousands of generations. For mammals, entirely new techniques will be required.

    There are other barriers as well, but these (entirely practical, not theoretical) are enough to (likely) ensure no neural simulations of humans in the next 10 years.

  291. Martin Schwarz Says:

    Scott #140:

    (a) While I saw the preprint on the arXiv about a year ago, I can’t seem to find it now—does anyone want to help tracking it down?

    I guess you have been referring to this paper:
    http://arxiv.org/abs/1202.6257
