QIP’2010: The Power and the Glory of Quantum Complexity Theory

Firstly, if you haven’t contributed to relief efforts in Haiti, you can do so (the charity linked to was recommended by a Haitian-American working at MIT CSAIL).  I wish I had something more useful to say about this tragedy, but I don’t.

For the past couple days, I’ve been at QIP’2010 in Zurich, Switzerland.  I’d had premonitions, even before arriving, that this was going to be an unusually awesome QIP.  Having been on the PC, I knew the strength of the technical program, and I’d learned that the turnout—320 participants—would be a record high.  My positive feelings only intensified when I saw the following in the hallway of my hotel:

My buzz reached a fever pitch when I entered the lecture hall and found giant, gourmet Swiss chocolate bars on every seat.

But I only knew for sure that this QIP would rock when my erstwhile adviser, Umesh Vazirani, gave his opening plenary talk on “New bridges between computer science and quantum computation.”  Umesh highlighted several developments, including:

  1. the relationship I noticed last year between the BQP versus the polynomial hierarchy problem and the Generalized Linial-Nisan Conjecture.
  2. the QIP=PSPACE breakthrough of Jain et al. (based on the recent multiplicative-weights update method from classical computer science).
  3. recent breakthroughs in lattice-based cryptography (most famously, Gentry’s fully homomorphic encryption system), which took inspiration from the quantum computing work of Oded Regev five years ago.
  4. the work of Broadbent, Fitzsimons, and Kashefi and independently Aharonov, Ben-Or, and Eban (for which I paid out pieces of the Aaronson $25.00 Prize), which lets a “classical” verifier (equipped with the ability to send single, unentangled qubits through a channel) verify an arbitrary quantum computation; and which Umesh illustrated by a story about a verifier investigating the claims of a shady, fly-by-night company called “Q-Wave.”

Umesh argued that the deepening connections between quantum computing and classical complexity theory—open problems in classical complexity being solved using quantum-inspired techniques, tools that weren’t available in the classical world until a year or two ago already being used for quantum purposes, etc.—represent one of the most exciting new developments in the field.

The combination of the chocolate bar (which I was already eating), and Umesh preaching so much truth from the pulpit, was heavenly.

Amazingly, subsequent talks managed to keep up the momentum.  Daniel Gottesman spoke about his work with Sandy Irani on the quantum complexity of translationally-invariant tiling and Hamiltonian problems.  By giving tools to say something useful about computational complexity even in cases where the only free parameter is the system size, Gottesman and Irani open up some exciting avenues for further work.  Iordanis Kerenidis spoke about the effort to prove an analogue of the Valiant-Vazirani witness isolation theorem for QMA (Quantum Merlin-Arthur).

Stefano Pironio talked about how you can exploit Bell inequality violations to generate a long random string starting from a short random seed, assuming only the locality of the laws of physics, and not assuming anything about the reliability of your randomness-generating devices.  This observation, which I find to be of great conceptual (and conceivably even practical) interest, is related to the so-called “Free Will Theorem” of Conway and Kochen, as well as to a result I proved eight years ago in my review of Stephen Wolfram’s book.  For Conway and Kochen, though, the motivation was to prove that “subatomic particles have free will” (a strange interpretation that I by no means endorse!), while for me, the motivation was to prove that Wolfram was wrong.  Neither I nor (as far as I know) Conway and Kochen thought about the obvious-in-retrospect application to generating random numbers.  (Incidentally, if anyone’s interested, my talk slides from yesterday morning are here.)

There’s also been a great deal of excitement at this year’s QIP about the efficient simulation of quantum systems occurring in nature, using recent techniques for model-order reduction (including MERA, matrix product states, quantum Metropolis sampling, area laws…).  I hope I haven’t just made John Sidles faint from excitement.

The full schedule is here; feel free to ask in the comments about any talks I didn’t mention.  If there’s enough interest, I might also write a followup post about the rest of the conference.

31 Responses to “QIP’2010: The Power and the Glory of Quantum Complexity Theory”

  1. John Sidles Says:

    LOL … “faint with excitement”? No … but we paleolithic engineers are plenty excited about … and very appreciative of … all the fantastic new ideas and tools that are showing up at QIP2010. 🙂

    Nature did a six-part series in 2008 called “Meetings that Changed the World” and it seems to me that QIP2010 bids pretty fair to eventually join that elite class of world-changing meetings … not solely by virtue of the (many) wonderful presentations, but also (and mainly) by the assembly of a critical mass of talent, exciting ideas, and new enterprises.

    And thanks are due especially, Scott, to you and to everyone who is taking the trouble to blog about it.

  2. Anthony Says:

    Hi Scott,
    the last slide of your talk is quite mysterious. Why are hardcore computer scientists suddenly interested in continuous variables?
    As far as I know, continuous variables are nice for quantum communication, but they appear to be less useful for computation.

  3. Charlie Stromeyer Says:

    Wow, Scott, the paper by Stefano Pironio et al. is truly exciting and it makes me think about this other new paper in Nature Photonics entitled “A matterless double slit” by B. King et al.:

    http://www.nature.com/nphoton/journal/vaop/ncurrent/abs/nphoton.2009.261.html

    As you know, the greatest experiment in the history of quantum theory is Young’s famous double slit experiment, and the above paper by B. King et al. talks about how in just a few years we will have the laser technology needed to create a quantum double slit composed solely of light, i.e., someone will then be able to study the nature of the quantum vacuum predicted by QED (and three Harvard physicists showed experimentally in 2006 that QED is the most accurate scientific theory we have).

  4. rrtucci Says:

    So, what of this would be of use to
    (1) a computer programmer?
    (2) an experimentalist?
    (3) a string theorist?
    (4) a chemist or biologist?
    (5) a statistician?
    (6) a physicist who knows almost nothing about complexity theory?
    (7) a mathematician who knows almost nothing about complexity theory?
    You don’t have to answer them all, and you can add your own categories, as long as the category is of a scientist who isn’t intimately familiar with complexity theory.

  5. Scott Says:

    Why are hardcore computer scientists suddenly interested in continuous variables?

    Anthony: All will be revealed in my second QIP talk (on Friday)! 🙂

  6. Job Says:

    Does “with free will” in this context mean “without external cause” for its behavior?

    Thinking about it, a machine whose next state depends exclusively on its internals might be said to have free will. It would at least have as much as I do.

  7. Scott Says:

    Job: In this context it means, “in a way not determined by the whole previous history of the universe” (which includes the internal state of the machine).

    The reason I don’t like the term “free will” here is precisely that Conway and Kochen have no problem applying it to a sample from a known probability distribution, just so long as you can be sure that the universe is generating that sample “on-the-fly” (and didn’t precompute it earlier). For me, the term “free will” suggests “neither deterministic nor randomized, but some other unspecified thing whose nature philosophers have debated for 2000 years and will probably continue to debate for the next 2000.” Which makes it particularly unlikely that you’d prove a theorem about it!

  8. Dr Hyver Says:

    If there is a 60% chance of a major earthquake striking in Haiti over the next thirty years, what are the odds of an earthquake striking Haiti over the next 2 years? How about the next week?

    Are the odds for two years simply 1/15th of 60%, that is 4%? And are the odds for the next week simply 1/52nd of 1/30th of 60%– or is this not the way to calculate the answer? Or is there no way to calculate the probability of a single event occurring in a fraction of the 30 year period?

    I recognize that this is not the best forum for a question like this, but I am a regular reader of your blog, and I hope that someone here can assist a simpleton with such a simple, trivial problem.

    I’ve been having a debate about this question with my girlfriend, neither of us can resolve it, and I can’t seem to find an answer to it on google (see: http://www.google.com/search?hl=en&q=%22probability+over+time%22&sourceid=navclient-ff&rlz=1B3GGGL_enUS353US354&ie=UTF-8).

    Would anyone on this blog please help us with this?

  9. Scott Says:

    Dr Hyver: Don’t worry; better you should ask a question I can answer than one I can’t!

    It seems reasonable to assume, for your problem, that an earthquake happens on a given day with probability p, independently of whether an earthquake happened on any previous day (even though I doubt anything like that is geologically true!).

    To avoid bringing e into the discussion, I’ll also assume that at most one earthquake happens per day (not counting the aftershocks).

    Your suggestion was that the probability of an earthquake happening in a d fraction of 30 years might just be a d fraction of 60%. But a simple thought experiment should convince you that that can’t literally be the right answer. For consider: under your assumptions, what’s the probability of an earthquake happening in the next 60 years? If linear scaling were the right thing to do, then you’d get a probability of 120%! 🙂

    The real answer is this: let p be the probability that an earthquake happens on a given day. Then the probability that no earthquake happens on a given day is 1-p. By your assumption, the probability that no earthquake happens in 30 years is 40%. 30 years = 10957.5 days. So by the independence assumption,

    (1-p)^10957.5 = 0.4.

    Solving, we find p ≈ 0.0000836.

    So, the probability of an earthquake happening in the next year is

    1 – (1-p)^365.25 ≈ 3.01%

    (compared to the 2% we’d get from your linear estimate).

    The probability of an earthquake happening in the next week is

    1 – (1-p)^7 ≈ 0.0585%.

    The probability of an earthquake happening in the next 60 years is

    1 – (1-p)^(365.25*60) = 84%.

    Of course, once you drop the independence assumption, all these estimates go out the window. But to know what the independence assumption should be replaced by, you’d actually need to know something about geology.
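
    (For anyone who’d rather let a computer do the algebra, here’s a minimal Python sketch of the same calculation. It uses nothing beyond the standard library, and it inherits the same dubious independence assumption.)

        import math

        # Daily-independence model: an earthquake strikes on any given day
        # with probability p, independently of every other day.
        P30 = 0.60                      # assumed: 60% chance of a quake within 30 years
        days_30y = 30 * 365.25          # 10957.5 days

        # (1-p)^10957.5 = 0.4  =>  p = 1 - 0.4^(1/10957.5)
        p = 1 - (1 - P30) ** (1 / days_30y)
        print(f"p per day ≈ {p:.7f}")   # ≈ 0.0000836

        for label, days in [("1 week", 7), ("1 year", 365.25),
                            ("2 years", 2 * 365.25), ("60 years", 60 * 365.25)]:
            print(f"{label:>8}: {100 * (1 - (1 - p) ** days):.4f}%")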

  10. Dr Hyver Says:

    That was so enormously helpful, Scott. Thank you so much!

  11. Steve Flammia Says:

    Scott, I can explain continuous variables to you. I’ll come find you at a coffee break.

  12. John Sidles Says:

    I too am very much interested in whatever final states result from the Flammia/Aaronson interaction, and I hope that Scott will post the slides from his second talk too.

    In scattering theory—as in pretty much all other aspects of quantum dynamics—“We are struck by the very large number of different physical viewpoints and widely different mathematical formulations that are all equivalent to one another” (Feynman).

    In linear optics, for example, scattering (both classical and quantum) is wholly described by matrices whose action on coherent states entering an n-port optical device is specified by a unitary matrix U ∈ SU(n).

    Since the scattering is linear, U encodes the answer to any question we might ask about quantum inputs and outputs … but the encoding can be surprisingly hard to compute. It has long been known, for example, that if the input is a single-photon number state on each port, then the probability of observing exactly one photon on each output port is |perm(U)|^2, where “perm” is the permanent function that is much-cherished in complexity theory … because it is believed (for very good reasons) to admit no efficient algorithm.

    Leaving deep questions of mathematics and physics aside, we now face the sobering engineering reality that any simulation code we write to describe even so simple a dynamical process as linear optical scattering is likely to run either very slowly or very inaccurately when given number-state inputs—even if that same code is fast and accurate for coherent-state inputs.

    So already we have in hand information that is valuable to engineers … we know one set of inputs (number states) that is sure to “break” our simulation codes, and we know another set of inputs (coherent states) for which our simulation codes should run quickly and accurately. Examples of this kind are essential to simulation validation and verification.

    Now we are in a position to ask simple questions … whose answers definitely are not known to me! One of the simplest is: for random n-dimensional scattering matrices U uniformly distributed with respect to Haar measure, what is the n-dependent distribution of perm(U)? What is the large-n asymptotic mean of |perm(U)|^2?
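
    (For small n one can at least poke at this numerically. Below is a minimal sketch of my own, assuming NumPy; it samples Haar-random unitaries via the QR trick and computes permanents by Ryser’s formula, which takes exponential time, so only modest n are feasible.)

        import numpy as np
        from itertools import combinations

        def haar_unitary(n, rng):
            # Haar-random n x n unitary: QR of a complex Ginibre matrix,
            # with a phase fix that makes the distribution exactly Haar.
            z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
            q, r = np.linalg.qr(z)
            d = np.diag(r)
            return q * (d / np.abs(d))

        def permanent(a):
            # Ryser's formula -- exponential time, fine for small n.
            n = a.shape[0]
            total = 0j
            for k in range(1, n + 1):
                for cols in combinations(range(n), k):
                    total += (-1) ** k * np.prod(a[:, cols].sum(axis=1))
            return (-1) ** n * total

        rng = np.random.default_rng(0)
        for n in range(2, 7):
            vals = [abs(permanent(haar_unitary(n, rng))) ** 2 for _ in range(2000)]
            print(n, np.mean(vals))     # empirical mean of |perm(U)|^2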

    Questions like this exist at what meteorologists call a convergence zone of mathematics, physics, and engineering. Meteorologists *love* convergence zones, because the weather there is always interesting. I hope Scott’s talk will give us more information about, and new insights into, this fascinating quantum convergence zone.

  13. John Sidles Says:

    Oh yeah … I will mention a couple of real-world points relating to Scott’s final slide (which I take to be about preparing number-state photon inputs for purposes of computing permanents).

    First, the lowest-loss telecommunication couplers cannot synthesize arbitrary scattering matrices U, because their internal workings are time-reversal invariant (mathematically, the matrix logarithm of U is wholly imaginary). To fabricate couplers having a general Hermitian log(U) requires magneto-optical elements, for reasons set forth by Rayleigh in a 1901 Nature article titled “On the magnetic rotation of light and the second law of thermodynamics.” Yes, this is a consideration that we run into in our real-world quantum spin microscopy experiments!

    Second, it definitely is 1000X easier to specify n-port number-state inputs mathematically than it is to supply them physically! Not only is this task technically challenging, it is subtle to think about (see, e.g., A. J. Bennett et al., “Interference of dissimilar photon sources,” in a recent Nature Physics).

    One trick that Feynman and Wheeler originated can sometimes be helpful: regard the quantum dynamics of the sink (the detectors on the output ports) as primary, rather than the source (the lasers/quantum dots/whatever on the input ports). From this point of view, we can think of U as describing a very subtle quantum measurement that the output port measurement devices are performing on the input port optical sources. Sounds goofy, but this kind of backaction is the dynamical mechanism by which quantum measurement works, and (for me) it makes experiments like Bennett’s (somewhat) easier to understand.

  14. NOT John Sidles Says:

    Well, gosh! How can a quantum systems engineer not become faint with excitement? First, Scott Aaronson used the phrase “model order reduction” in a blog post! Sounds goofy, but in our real-world quantum spin microscopy experiments, we find blog posting to be of primary importance.

    I’d like to draw your attention to a footnote written by Terry Tao in one of his blog posts. Has everybody read it? Good! Now you’re in a position to appreciate why it is that we quantum systems engineers find blog posting to be of such primary importance.

    Further, you can appreciate why it is that the “Aaronson/Arkhipov kibitz Kaehler manifold Feynman experiment”, if Scott will let me coin a phrase, is so important. You see, in our real-world quantum spin microscopy experiments, we frequently encounter systems we can’t simulate even with Kaehler manifolds. Gosh! If only we could simulate all systems with Kaehler manifolds, we could simulate such trivial problems as the mass of nucleons in Yang-Mills, without such paleolithic techniques as quantum Monte Carlo. But, it isn’t enough for our simulation codes to get the answer wrong! We already know a lot of cases where dynamical concentration onto the Feynman-Wheeler-Kaehler-Aaronson manifold fails. We have to know that our simulation codes will fail in advance for deep computational complexity reasons. Sounds crazy, but if a problem is computationally intractable, it might be useful for our codes, while real-world problems (that might be computationally tractable) are useless because if we spent the time tinkering with our code to make it work there, it might interfere with our blog posting.

  15. John Sidles Says:

    Hmmm … may I say that a very intelligent person wrote the preceding post? The technical details are accurate, and some of the sociological turbulence that can arise at the “convergence zone” of math/science/engineering is well-illustrated too.

    The same kind of passions were aroused at all six of Nature’s “Meetings that changed the world” … so this is a good sign IMHO.

  16. rrtucci Says:

    Dr Hyver, look up “Poisson process”. That would be a reasonable model for this.
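
    In that model the number of quakes in a window of length t is Poisson with some rate λ, so Pr[no quake in time t] = exp(−λt). Calibrating λ from the 30-year figure reproduces Scott’s numbers almost exactly; here is a quick sketch of my own, in Python:

        import math

        lam = -math.log(0.4) / 30       # quakes per year, so Pr[none in 30y] = 40%
        for label, t in [("1 week", 7 / 365.25), ("2 years", 2.0), ("60 years", 60.0)]:
            print(label, f"{100 * (1 - math.exp(-lam * t)):.4f}%")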

  17. Joe Fitzsimons Says:

    Umesh illustrated by a story about a verifier investigating the claims of a shady, fly-by-night company called “Q-Wave.”

    Coincidentally I had a meeting last week with Geordie about exactly that topic.

  18. Ungrateful_Person Says:

    I am a bit elated at reading this post. Simultaneously though, I am a bit frustrated too.

    It is because I would have loved to see the whole thing (Umesh’s talk). Why don’t they make videos of such interesting talks and upload them to YouTube?

  19. Ashley Says:

    Ungrateful_Person: The QIP organisers are indeed videoing the talks, which hopefully will be available online at some point.

  20. John Sidles Says:

    Scott says: Feel free to ask in the comments about any talks I didn’t mention

    I’d be very interested in the contents of Poster 6, by Marti/Reiher/Bauer/Troyer/Verstraete, titled “Flexible non-linear tensor network expansion for the electronic wave function of molecules” and Poster 28, by Banuls/Hastings/Verstraete/Cirac, titled “Dynamical simulation of infinite chains using Matrix Product States” (possibly relating to http://arxiv.org/abs/0904.1926 ?)

    Of particular interest is any discussion by these authors (or any other QIP attendees) of adapting the dimensionality and/or geometry of tensor network state-spaces during the course of a simulation—the idea being to regard state-space geometry as a computational resource, to be adapted to the demands of the dynamics at hand.

    Is anyone reporting on “on-the-fly” quantum state-space adaptation? If so, what methods/tests are people trying? And in particular, how do QIP intuitions/results guide the adaptation?
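
    (To make the question concrete: the simplest adaptation mechanism I know is the standard truncated-SVD step, in which the bond dimension is chosen on the fly by thresholding singular values rather than being fixed in advance. Here is a minimal sketch of my own, assuming NumPy; real TEBD/DMRG codes do essentially this at every bond.)

        import numpy as np

        def adaptive_split(theta, tol=1e-8, chi_max=64):
            # Split a two-site block theta (shape: left_dim x right_dim) by SVD,
            # keeping only singular values above tol, so the bond dimension
            # adapts on the fly to the entanglement actually present.
            u, s, vh = np.linalg.svd(theta, full_matrices=False)
            keep = min(chi_max, int(np.sum(s / np.linalg.norm(s) > tol)))
            s = s[:keep] / np.linalg.norm(s[:keep])     # renormalize the state
            return u[:, :keep], s, vh[:keep, :]

        # A product (unentangled) block compresses to bond dimension 1:
        theta = np.outer([1.0, 0.0], [0.7, 0.3])
        u, s, vh = adaptive_split(theta)
        print(len(s))   # -> 1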

  21. Paerson Says:

    As for your motivation of proving Wolfram wrong, I remember one of you guys from MIT disproved 44 of his claims on one single page. That was quite a highlight. Hehe …

  22. ithink Says:

    Took me a while to find it. I think you refer to Evangelos Georgiadis’s (sp?) note. Slashdot at http://science.slashdot.org/story/07/12/23/1817233/44-Conjectures-of-Stephen-Wolfram-Disproved?art_pos=1

    I kinda liked that a lot.

  23. aram Says:

    Earthquakes cluster in ways that don’t follow Poisson statistics (e.g., there are aftershocks, and even now an elevated risk of a larger earthquake in Haiti). (Although I guess there are debates about the model, with Poisson possibly being a good approximation after you take some clustering into account.)

  24. John Sidles Says:

    Alice (the experimentalist) and Bob (the engineer) are discussing QIP2010 …

    Alice  So, Bob, was QIP2010 a “Meeting that changed the world?”

    Bob  Well I thought so, Alice … but I would be interested to hear your opinion.

    Alice  For me, the exciting thing about QIP2010 was the (numerous) suggestions for practical experiments whose outcome cannot be predicted—even in principle—with classical computational resources.

    Bob  Yes, that kind of experiment has fundamental importance for scientists and engineers alike. So, which experiments are you going to try first?

    Alice  Scott Aaronson’s talk gave the simplest example, and the clearest theoretical discussion too. So I stopped at the airport gift shop and bought a 30-photon source and an optical coupler having 30^2 ports and a random unitary transfer function U … and I’ve been measuring photon-count outputs A all week!

    Bob  Wow! How’s that experiment going, Alice?

    Alice  Well … estimating the low-order moments of the distribution A is easy and fun! But estimating the high-order moments is very slow; we’ve yet to measure the same output A more than once … there are just too many permutations of output channels.

    Say … Bob … you’ve got one of those fancy tensor network simulation codes … can’t you simply predict the probability |⟨A|U|Ψ0⟩|^2? The QIP folks tell us this probability is related to that sexy quantity, the permanent of an A-dependent submatrix of U!

    Bob  We engineers are excited about that too, Alice, but our simulations run into a glitch that is rather like your experimental glitch. You see, whenever we feed number state inputs into our tensor network simulation of your experiment, the simulation keeps asking for more memory, and running slower-and-slower.

    So it looks like both of us require classical resources that are exponential in the number of photons, in order to reliably estimate the permanent |⟨A|U|Ψ0⟩|^2 … at least, if we do our experiments and code our simulations the naive way.

    Alice  Heck … can’t you adjust your simulation code to not use so much memory?

    Bob  We’re thinking about it, but in all the schemes we’ve thought of so far, this amounts to simulating a noisy version of your optical scattering experiment—so noisy that you wouldn’t be able to measure a permanent.

    Alice  Hmmmm … well … perhaps the QIP2011 theorists will suggest to us some trickier methods for estimating/simulating the permanent … and more broadly, suggest still more classes of real-world quantum experiments that can’t be simulated, even in principle.

    For sure, we can be confident that the QIP theorists will keep coming up with new ideas!

    Bob  And whatever those new QIP ideas are, Alice, it’s a pretty safe bet that we engineers will be able to adapt many of them to make our simulation codes run faster, and our devices function nearer to quantum limits.

    Alice  Gosh, so it’s clear that QIP20XX ideas have *already* changed the world!

    Bob  Absolutely!
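
    (A concrete footnote to the dialogue above: for a collision-free outcome A, the probability Alice and Bob keep invoking is |perm(U_A)|^2, where U_A is the submatrix of U picking out the occupied input and output modes. A toy sketch of my own, assuming NumPy; the permanent takes exponential time, so only a handful of photons are feasible.)

        import numpy as np
        from itertools import combinations

        def permanent(a):
            # Ryser's formula (exponential time; fine for a few photons).
            n = a.shape[0]
            total = 0j
            for k in range(1, n + 1):
                for cols in combinations(range(n), k):
                    total += (-1) ** k * np.prod(a[:, cols].sum(axis=1))
            return (-1) ** n * total

        def collision_free_prob(U, in_modes, out_modes):
            # Pr[one photon on each out_mode | one photon fed into each in_mode]
            return abs(permanent(U[np.ix_(out_modes, in_modes)])) ** 2

        # Toy check: 2 photons into modes (0, 1) of a Haar-random 4-mode coupler.
        rng = np.random.default_rng(1)
        z = (rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))) / np.sqrt(2)
        U, r = np.linalg.qr(z)
        U = U * (np.diag(r) / np.abs(np.diag(r)))       # phase-fixed, Haar-random

        total = sum(collision_free_prob(U, [0, 1], list(pq))
                    for pq in combinations(range(4), 2))
        print(total)    # < 1: the rest of the weight is on bunched outcomes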

  25. Clare Says:

    “If there’s enough interest, I might also write a followup post about the rest of the conference.”

    Registering interest…

  26. John Sidles Says:

    Alice (the experimentalist) and Bob (the engineer) discuss QIP2010 some more …

    Alice  Bob, I’ve been thinking some more about QIP2010 and whether it’s “a meeting that changed the world”.

    Heck, Nature‘s list of six world-changing meetings describes pretty clearly what it takes to do the job!

    Bob  Hmmmm … since you’re an experimentalist, Alice … and I’m an engineer … and the QIP attendees are mainly theorists and mathematicians … pretty clearly it’s no small challenge to change all our worlds.

    Alice  That’s what I was thinking too, Bob, but it turns out that the Nature meetings didn’t actually change the world, but rather gathered together people who foresaw that the world was going to change, and decided to play an active role themselves in making that change happen.

    Bob  Ho ho, Alice! Something tells me that you foresee that our world is going to change … and that you don’t intend to sit passively by and watch it happen.

    Alice  That’s absolutely right, Bob! For us experimentalists, a wonderfully exciting aspect of QIP2010 is the ideas presented for exploring physics beyond the standard model in benchtop-scale experiments.

    Bob  But gosh … isn’t new physics nowadays to be found mainly at very small length-scales (high energy physics) or at very large length scales (cosmology)?

    Alice  Yes, but QIP2010 points to another area of physics that is little-explored … classical datasets that are reproducibly measurable (with polynomial time and space resources) but not explicitly constructible (with polynomial computational resources).

    Bob  You mean … like permanents of optical scattering matrices?

    Alice  Yes, that is the simplest example I know, but there are others, like factoring large integers.

    Needless to say, we experimentalists are completely agnostic about the various possible outcomes, which include: (1) we measure the permanent, and the polynomial-time Church-Turing Thesis is thereby refuted (modulo plausible CT assumptions), (2) it turns out that we can’t measure permanents with present-day technology, but solely for technical reasons that (plausibly) are non-fundamental, (3) it turns out we can’t measure permanents, because the polynomial-time Church-Turing Thesis is true, for deep and general mathematical reasons that we don’t (yet) understand.

    For us experimentalists, the most exciting alternative is the one that finds new physics: (4) it turns out we can’t measure permanents, because the physical state-space of QM experiments is *not* a Hilbert space, but rather a lower-dimensional manifold, such that (again) the polynomial-time Church-Turing Thesis is true, but for fundamental physical reasons that we can observe in the laboratory.

    Bob  (smiles) We simulationist engineers prefer alternative (4) so strongly, that in our professional lives we behave as though it’s true!

    And in this regard we engineers are much like mathematicians; just as mathematicians behave as though every theorem has a proof, engineers behave as though every classical dataset can be simulated.

    Alice  (laughs) … and we experimentalists behave as though new physical laws are just around the corner in every experiment.

    It’s pretty amazing, Bob, that throughout the whole 20th century, all three of our illusions have served the world so well!

    Bob  Yes, surely *that* is a deep mystery that is shared by math, science, and engineering.

    So, Alice, how will the 21st century be changed by these QIP-inspired insights … assuming that all three of our professions get to keep our cherished illusions?

    Alice  We’ll have to have at least one more discussion about *that* … but it seems pretty safe to foresee that big changes are coming.

    Bob  Now that’s a mighty sobering topic to contemplate … but it’s an exciting one too.

  27. Ross Snider Says:

    I think what the Haiti discussion is looking for is “extreme value theory”?

  28. John Sidles Says:

    Alice  Good morning, Bob! Are we going to talk some more about world-changing aspects of QIP2010?

    Bob  Sure Alice! But first let me ask, how’s your permanent-measuring experiment working?

    Alice  Pretty well from a technical point of view, Bob … yesterday we measured 10^6 outputs—of which no two were alike—and so we now have a set of 10^6 different scattering matrices whose permanents are supposed to be larger-than-average.

    Bob  That’s interesting, Alice, because we’ve been running a forms-and-flow simulation on a tensor-network state-space, and we too have a set of 10^6 scattering matrices … and just like you, we’re not sure whether these matrices have larger-than-average permanents or not.

    Alice  Golly, that raises an interesting question, Bob: is there any way to distinguish between our two data sets? What if we just emailed them to a complexity theory group and asked them to apply any test—using polynomial resources, of course—that can distinguish them?

    Bob  We simulationists have been watching for such a test, Alice, and it’s a pretty safe bet that ingenious new theoretical ideas are in the works; we’ll just have to wait while these new ideas develop.

    But in the meantime, this theoretical work definitely is suggesting new insights into our quantum simulation codes.

    Alice  Like what, Bob?

    Bob  (sighs) Well, we simulationists have always taken for granted that simulations can do something that experimentalists never can: simulate quantum post-selection experiments. But if we model the output measurement process too as a stochastic flow, the high-order photon statistics of simulated measurements are not those predicted by projection methods … and this deprives us of our only tool for efficiently simulating post-selection.

    Alice  (laughs) But Bob, isn’t that good news? If forms-and-flow quantum simulations are weaker than QIP theorists generally supposed, that means the compatible triple of QIP illusions is safe!

    Bob  (cheers up) Alice, you’re right. It appears that present knowledge allows: (1) QIP theorists to continue to believe that BQP is strictly stronger than BPP, (2) simulationists to continue to believe that every experimental dataset can be simulated with polynomial resources (even from quantum experiments), and (3) experimentalists to continue to believe that new quantum physics is just around the corner in every experiment!

    And what’s really striking is that all three communities can show solid technical grounds for their growing confidence.

    Alice  That’s a wonderful attitude, Bob!

    But before we have a final discussion of whether QIP2010 changed the world—and if so, how?—why don’t you and I wait a couple of days, and see what turns up on Dick Lipton’s blog, Gödel’s lost letter and P=NP.

    Dick’s topic this week is a very interesting one:  An Educational Extinction Event? Could Universities become extinct in the next twenty-five years?

    Bob  That’s bound to be an interesting discussion, Alice. It’s pretty clear that the whole world is changing—not just the universities in it—and so the key question for you and me, and for theorists too, is whether QIP has a key role to play in that change.

    Alice  (smiles again) I think we both agree that the answer is almost surely “yes” … and that QIP’s role is destined to be hugely positive.

    Bob  Especially if our illusions all prosper!

  29. Raoul Ohio Says:

    JS/Alice, thanks for the lead on “An Educational Extinction Event?”, perhaps a singularity like y’ = -y^2. Rather interesting stuff for those of us in the .edu biz.

  30. asdf Says:

    More pop quantum:

    http://www.newscientist.com/article/mg20527464.000-natures-hot-green-quantum-computers-revealed.html

    Interesting?

  31. Stephen Harris Says:

    I was reading a paper that compared the views of Searle and Peirce about the capability of AI. Probably both would agree that AIs can achieve deductive thinking. Peirce apparently thought that inductive and abductive (hypothesis-making) reasoning would involve the element of chance. The idea of parallel computing came up. Several philosophers, and also Penrose, have asserted that parallel computing and sequential computing have the same power. That is true only if the parallel configuration is computing a computable real number; otherwise, an analog real number cannot be simulated by a sequential machine. I mentioned this because of the remark about how continuous variables might interest classical computer scientists. The paper did not explain how a real number provided the element of chance that Peirce thought was needed. Here is a quote by Gödel (Gibbs Lecture, 1951) that runs in this vein:

    “On the other hand, on the basis of what has been proved so far, it remains possible that there may exist (and even be empirically discoverable) a theorem-proving machine which in fact is equivalent to mathematical intuition, but cannot be proved to be so, nor even be proved to yield only correct theorems of finitary number theory.”

    I read the Platonic Heaven post on this blog before getting to here.