Black-hole complexity theory: from joke to science

One of the most fun arXiv preprints I’ve read in a while:

Black holes as mirrors: quantum information in random subsystems
by Patrick Hayden and John Preskill

Abstract: We study information retrieval from evaporating black holes, assuming that the internal dynamics of a black hole is unitary and rapidly mixing, and assuming that the retriever has unlimited control over the emitted Hawking radiation. If the evaporation of the black hole has already proceeded past the “half-way” point, where half of the initial entropy has been radiated away, then additional quantum information deposited in the black hole is revealed in the Hawking radiation very rapidly. Information deposited prior to the half-way point remains concealed until the half-way point, and then emerges quickly. These conclusions hold because typical local quantum circuits are efficient encoders for quantum error-correcting codes that nearly achieve the capacity of the quantum erasure channel. Our estimate of a black hole’s information retention time, based on speculative dynamical assumptions, is just barely compatible with the black hole complementarity hypothesis.
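The “half-way point” in the abstract is what’s now called the Page time: for a random (highly mixing) pure state, a subsystem’s entanglement entropy is nearly maximal until the subsystem holds half the qubits, and only after that cut does information start coming out in the radiation. Here is a toy numerical illustration of that curve (my own sketch using Haar-random states, not the paper’s construction):

```python
import numpy as np

rng = np.random.default_rng(0)

def radiation_entropy(n_qubits, k, samples=5):
    """Mean entanglement entropy (in bits) of a k-qubit subsystem
    of a Haar-random pure state on n_qubits qubits."""
    dim, d_a = 2**n_qubits, 2**k
    ents = []
    for _ in range(samples):
        # a normalized complex Gaussian vector is Haar-distributed
        v = rng.normal(size=dim) + 1j * rng.normal(size=dim)
        v /= np.linalg.norm(v)
        # Schmidt coefficients across the k | (n_qubits - k) cut
        s = np.linalg.svd(v.reshape(d_a, dim // d_a), compute_uv=False)
        p = s**2
        ents.append(float(-np.sum(p * np.log2(p + 1e-16))))
    return sum(ents) / len(ents)

# Page curve: entropy of the "radiation" subsystem vs. its size
curve = [radiation_entropy(8, k) for k in range(1, 8)]
```

For 8 qubits the curve tracks min(k, 8−k) closely, peaking at the half-way cut, which is the sense in which information deposited early stays concealed until half the entropy has been radiated away.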

Many of this paper’s arguments depend on speculative assumptions about quantum gravity, and might very well be wrong. What’s nice is simply that they’re not “not even wrong”! This is an 18-page paper about the Planck-scale dynamics of black hole horizons where I never once found myself wondering what the authors were trying to say, or how their ideas would in principle be tested.

When I used to spout off about the complexity of retrieving information from a black hole as a function of the Schwarzschild radius rS, people assumed I was just fishing for laughs. And they were basically right. But even then, I felt sure that actual physicists would eventually say something real about this question, unapologetically using the language of quantum computing and information. Like Alice with her k qubits, I didn’t have to wait as long as I’d thought.

28 Responses to “Black-hole complexity theory: from joke to science”

  1. Not even right Says:

    Many of this paper’s arguments depend on speculative assumptions about quantum gravity, and might very well be wrong. What’s nice is simply that they’re not “not even wrong”! This is an 18-page paper about the Planck-scale dynamics of black hole horizons where I never once found myself wondering what the authors were trying to say, or how their ideas would in principle be tested.

    If what Scott said is true (sorry, my physics is not good enough to understand their paper), then it is in the same spirit as papers that add nonlinear terms to the Schrödinger equation, i.e., versions of quantum mechanics that take nonlinear effects into account. I don’t know how quantum mechanics could accommodate nonlinear effects. Among their results is that such hybrid nonlinear quantum mechanics would let us build a quantum computer that solves NP problems in polynomial time.

    Their assumptions may be wrong or even absurd. So, is that so bad? They may stimulate other thoughts that lead to other breakthroughs. I really can’t say it must be so bad!

  2. Patrick Says:

    But even then, I felt sure that actual physicists would eventually say something real about this question, unapologetically using the language of quantum computing and information.

    Hayden is a computer scientist, no?

  3. Scott Says:

    Hayden is a computer scientist, no?

    Yes, honorarily. But he studied physics in undergrad and grad school.

  4. Dave Bacon Says:

    But he studied physics in undergrad. And math! Which, I’m told, is kind of like CS theory :)

  5. John Sidles Says:

    Dave Bacon says: But he studied physics in undergrad. And math! Which, I’m told, is kind of like CS theory. :)

    LOL! Very funny, Dave! It put me in mind of a quote (pretty much everything puts me in mind of a quote).

    Here is John von Neumann on the need to periodically reinvigorate mathematics via the reinjection of what von Neumann called “empirical ideas”, which IMHO includes the invigorating Hayden/Preskill idea that a black hole might usefully be regarded as a coding device:

    “As any mathematical discipline travels far from its empirical source, or still more, if it is a second or third generation only indirectly inspired by ideas coming from `reality’, it is beset by very grave dangers. It becomes more and more purely aestheticizing, more and more l’art pour l’art.

    This need not be bad, if the field is surrounded by correlated subjects, which still have closer empirical connections, or if the discipline is under the influence of men [sic] with an exceptionally well-developed taste.

    But there is a grave danger that the subject will develop along the lines of least resistance, that the stream, so far from its source, will separate into a multitude of insignificant branches, and that the discipline will become a disorganized mass of details and complexities.

    In other words, at a great distance from its empirical source, or after much `abstract’ inbreeding, a mathematical subject is in danger of degeneration. At the inception the style is usually classical; when it becomes baroque, then the danger signal is up.

    … Whenever this stage is reached, the only remedy seems to me to be the rejuvenating return to the source: the reinjection of more or less directly empirical ideas. I am convinced that this is a necessary condition to preserve the freshness and vitality of the subject, and that this will remain equally true in the future.”

    IMHO, the Hayden/Preskill article positively contributes (in an exceptionally enjoyable way) to the conjoined “freshness and vitality” of physics, QIT, and mathematics.

  6. Patrick Hayden Says:

    Hey Scott –

    Thanks for the positive comments. This is better than an invitation to appear on Oprah!

    Now, as for your kickback, I assume you’re still accepting payment in backdated D-Wave stock options?

  7. Scott Says:

    Patrick: Nah, the only kickback I need is for you to keep up the good work (plus cite me whenever possible). The Oprah reference reminds me that I should start a monthly Scott’s Paper Club [note: unaffiliated with the toilet paper brand]. “This paper touched my heart … and it will touch yours as well.”

    John: Thanks for the John quote — I don’t think I’d ever seen it in its entirety, and it’s terrific!

  8. John Sidles Says:

    Scott says: Thanks for the John quote — I don’t think I’d ever seen it in its entirety, and it’s terrific!

    Thanks Scott … it is indeed a wonderful essay … except (puts on Jack Nicholson voice) … “Entirety? You can’t handle the entirety!”

    Here is another passage from the same essay, which was written by von Neumann in 1947. It seems pretty likely that in the following statistics, von Neumann is referring to himself. Because, who else? :)

    A good theoretical physicist may today still have a working knowledge of more than half of his subject. I doubt that any mathematician now living has a relationship to more than a quarter.

    What are the fractions today? I dunno, but I am pretty sure that exponential notation is needed! :)

  9. John Preskill Says:

    I’m wondering … Maybe it would be better to be on Oprah. I’ll have to think about it.

  10. Johan Richter Says:

    I think number theory has done all right without any infusion of empirical ideas for, say, the last 2000 years.

  11. csrster Says:

    Hasn’t number theory had a shot in the arm from the development of digital computing – e.g. from its relation to algorithmics and cryptography?

  12. John Sidles Says:

    Johan Richter Says: I think number theory has done alright without any infusion of empirical ideas, for say the last 2000 years.

    The historical record seems to indicate that number theory is the most empirical of the mathematical disciplines; hence its enduring vitality. See (e.g.) Polya’s Heuristic Reasoning in the Theory of Numbers.

    Purely on the evidence, isn’t it true that number theory is far more of an experimental discipline than string theory? :)

  13. flyingoracle Says:

    Does somebody give me prise if I proof, what quantum computing is inposible?

  14. flyingoracle Says:

    At least for grover algorithm based on gate model.

  15. Scott Says:

    yes I prise you $2 if you proof what quantum computing is inposible.

  16. Kurt Says:

    Does somebody give me prise if I proof, what quantum computing is inposible?

    That’s pretty funny, but aren’t those LOL expressions supposed to be superimposed on a picture of (Schrödinger’s) cat?

  17. Johan Richter Says:

    csrster, even if that were the case it still would be 2000 years without any input from the empirical sciences. And I don’t think it is. As far as I can tell, cryptography uses number-theory results but hasn’t given any ideas in return. And algorithmic number theory is a fairly peripheral branch of number theory. (I’m not saying it should be, just that it is.)

  18. flyingoracle Says:

    By reading this article
    http://www.ddj.com/architect/184404559;jsessionid=1CCHAIO1NPDTSQSNDLRCKH0CJUNN2JVN?_requestid=180738

    I find that Lov Grover claims that amplitudes have magnitudes equal to the square root of probabilities, and that this is because amplitudes interfere constructively and destructively. Grover says that if we want to mark one of the searched states, we must change the sign of its amplitude; in other words, we have to change the phase of the amplitude (a phase shift). So to mark a state we need to change the phase. How can we change the phase? By rotating a qubit… and we do this rotation (this phase change) with a classical computer.
    Can a classical computer rotate a phase with 100% precision? NO! This is impossible, because phase is analog. So we can rotate the phase (change the sign of the qubit) only with some precision under 100% (for example, 99.9% precision). So there is now an error in the quantum computation whenever we change the phase. These errors interfere constructively and destructively! So if your quantum computer has 1000 qubits and the error rate is 0.1 (10%), then the square root of 0.1 is about 0.316 (31.6%). Errors grow quadratically with the number of qubits! This is why you will never build a useful quantum computer running Grover’s algorithm. If you try to “cure” the errors with error correction (as described by Peter Shor), then you must increase the number of qubits 5-10 times, and the errors then grow 25-100 times. This is why nothing can help a quantum computer running Grover’s algorithm.
    So do I receive my $2?
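For what it’s worth, the quadratic-growth claim above is easy to test numerically. A minimal sketch (my own toy simulation of Grover’s algorithm, not anyone’s published code): run the search with the oracle’s 180-degree phase flip off by a small angle, and compare the success probability to the ideal case.

```python
import numpy as np

def grover_success(n_items, phase_err_deg=0.0):
    """Probability that Grover's algorithm finds the marked item
    when the oracle's 180-degree phase flip is off by phase_err_deg."""
    # start in the uniform superposition; item 0 is the marked one
    psi = np.full(n_items, 1 / np.sqrt(n_items), dtype=complex)
    flip = np.exp(1j * np.deg2rad(180.0 - phase_err_deg))
    for _ in range(int(round((np.pi / 4) * np.sqrt(n_items)))):
        psi[0] *= flip               # imperfect oracle phase flip
        psi = 2 * psi.mean() - psi   # diffusion: reflect about the mean
    return abs(psi[0]) ** 2

ideal = grover_success(1024)         # exact 180-degree flip
noisy = grover_success(1024, 1e-4)   # the 179.9999-degree flip from the comment
```

With the 179.9999-degree flip, the success probability changes by far less than one part in a thousand: the per-call phase error accumulates at worst linearly in the number of oracle calls, which is exactly the regime fault-tolerance results handle.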

  19. Devin Smith Says:

    Nope. Threshold theorem, look it up. If the error rate is less than some threshold, error correction is a win.

    Try harder next time.
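To make “less than some threshold” concrete, here is a quick numerical check (a sketch under a toy coherent-error model, not the threshold theorem itself) that a fixed small per-gate error makes a circuit drift from the ideal one at most linearly, not quadratically, in the number of gates:

```python
import numpy as np

def ry(theta):
    """Single-qubit rotation about the Y axis."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def drift(n_gates, delta=1e-4):
    """Operator-norm distance between n_gates ideal rotations and
    n_gates rotations each off by the small angle delta."""
    ideal = np.linalg.matrix_power(ry(0.3), n_gates)
    noisy = np.linalg.matrix_power(ry(0.3 + delta), n_gates)
    return np.linalg.norm(ideal - noisy, 2)

# ten times the gates gives ten times the error: linear, not quadratic
d10, d100 = drift(10), drift(100)
```

This is the subadditivity bound behind the threshold theorem: the distance between the ideal and noisy circuits is at most the sum of the per-gate errors.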

  20. Jonh Says:

    How is the error correction calculated: on the assumption that the error rate grows quadratically, or that it grows linearly?
    Because in my opinion the error rate (noise) grows quadratically with the number of qubits. And in your view, does the error rate grow linearly or quadratically with the number (N) of searched states?

  21. John Sidles Says:

    Jonh, a textbook that IMHO you ought to read is Michael Nielsen and Isaac Chuang’s Quantum Computation and Quantum Information.

    If you read it carefully, this textbook will answer most of the questions that you are asking.

    But don’t worry! This fine textbook will inspire in you a large number of even better questions! :)

  22. flyingoracle Says:

    Now I will correct my previous proof a little. Suppose your quantum computer with 1000 qubits is far away (10^10000000000000000000000 light-years) from the center of the Universe. In that case there can be no decoherence, because no photon has flown that far since the beginning of the Big Bang. So (suppose) you don’t need error correction for your quantum computer running Grover’s algorithm (because there is no noise and no decoherence).
    So you start running your quantum computer, and now you want to mark some state with a negative amplitude (sign): you want to do a phase shift of 180 degrees. So what do you have to do? You perform this 180-degree phase shift with a CLASSICAL computer. But because phase is analog in nature, your classical computer makes a small error when shifting the phase. Say you change the phase with a precision of 179.9999 degrees. Because you can’t shift the phase by exactly 180 degrees, errors rapidly occur in the quantum computation. After the first Grover iteration the rate of errors increases quadratically, and so on. And, what is worse, you can’t even repair these errors. This is a different type of error from those that come from noise and decoherence. These errors you can’t “cure”, and they grow quadratically… And I repeat one more time: these errors occur because you can’t do a phase shift with 100% precision.

  23. Scott Says:

    Please study the threshold theorem before continuing to make a fool of yourself.

  24. Patrick Hayden Says:

    Scott: A kickback would have been easier, but I’ll see what I can do on the research front. No guarantees.

    John S.: Great von Neumann quote. I think the jury will be out for a long time on whether our ideas are “empirical” or merely “aesthetic” but I appreciate your vote for the former!

  25. Michael Bacon Says:

    Scott,

    I think the new blog arrangements are just fine. Registering was very easy, the ads are not too intrusive and the layout seems clear enough. As always, the content will be what matters :)

  26. flyingoracle Says:

    Threshold theorem:
    “In particular, quantum computers would resemble analog rather than digital
    devices if the number of bits of precision scaled as an inverse polynomial in L, since then physical
    parameters such as rotation angles or pulse lengths determining the accuracy of each quantum gate
    would have to be specified to precision that increased as an exponential in L.” ( http://arxiv.org/pdf/quant-ph/0703230 )
    I don’t understand. What increases? Do the errors increase exponentially, or does the good answer?
    If the errors increase exponentially, then that is what I’m saying. And if the errors increase exponentially (or quadratically), then how can you correct them? It is impossible! Then quantum computing is impossible!

  27. flyingoracle Says:

    “In any physical implementation of a quantum computation, the elementary quantum gates will be
    executed with some limited accuracy; e.g., the rotation angle in a beam splitter or the length of
    a pulse can be specified only to some finite precision. In order for the quantum circuit model to be realistic, it is important to show that any desired accuracy for the computation outcome can be
    achieved if the accuracy of each gate in the quantum circuit scales at most as an inverse polynomial
    in the size, L, of the computation. Equivalently, the number of bits of precision specifying the matrix
    elements of each quantum gate will need to scale “slowly” as at most an inverse polynomial in the
    logarithm of L. In particular, quantum computers would resemble analog rather than digital
    devices if the number of bits of precision scaled as an inverse polynomial in L, since then physical
    parameters such as rotation angles or pulse lengths determining the accuracy of each quantum gate
    would have to be specified to precision that increased as an exponential in L.”

    Loosely speaking, do the errors increase exponentially, increase linearly, or decrease exponentially in L?
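Loosely: under the standard error model, the total error is at most the sum of the per-gate errors, so it grows at most linearly in L; hitting a fixed target accuracy then requires per-gate error of only 1/poly(L), i.e., O(log L) bits of precision per gate. A back-of-the-envelope sketch (my own arithmetic, assuming errors simply add):

```python
import math

def bits_per_gate(n_gates, target_error=0.01):
    """Bits of precision each gate angle needs so that n_gates gates,
    with errors that at worst add up, stay under target_error in total."""
    per_gate_error = target_error / n_gates  # errors add linearly
    # specifying an angle to error 2^-b takes b bits
    return math.ceil(math.log2(1.0 / per_gate_error))

# precision per gate grows only logarithmically with circuit size
small, large = bits_per_gate(10**3), bits_per_gate(10**6)
```

Going from a thousand-gate to a million-gate circuit costs only about ten extra bits of precision per gate, which is why the paper says the machine stays “digital” rather than “analog”.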

  28. flyingoracle Says:

    OK, it seems such errors decrease exponentially with the number of qubits…
    Just a thought: does there exist anywhere in Nature a set of qubits, entangled and in superposition, with 2^1000 states? I think NO. Then my thought is that if you want to create a superposition of 2^1000 states, you need an exponential amount of energy. In an optical implementation this energy is taken from the beam splitters; each beam splitter doubles the amount of energy, taking it from the beam splitter’s atoms.