Paul Bernays Lectures

Last week, I had the honor of giving the annual Paul Bernays Lectures at ETH Zürich. My opening line: “as I look at the list of previous Bernays Lecturers—many of them Nobel physics laureates, Fields Medalists, etc.—I think to myself, how badly did you have to screw up this year in order to end up with me?”

Paul Bernays was the primary assistant to David Hilbert, before Bernays (being Jewish by birth) was forced out of Göttingen by the Nazis in 1933. He spent most of the rest of his career at ETH. He’s perhaps best known for the von Neumann-Bernays-Gödel set theory, and for writing (in a volume by “Hilbert and Bernays,” but actually just Bernays) arguably the first full proof of Gödel’s Second Incompleteness Theorem.

Anyway, the idea of the Paul Bernays Lectures is to rotate between Bernays’s different interests in his long, distinguished career—interests that included math, philosophy, logic, and the foundations of physics. I mentioned that, if there’s any benefit to carting me out to Switzerland for these lectures, it’s that quantum computing theory combines all of these interests. And this happens to be the moment in history right before we start finding out, directly from experiments, whether quantum computers can indeed solve certain special problems much faster than any classical computer.

The general theme for my three lectures was “Quantum Computing and the Fundamental Limits of Computation.” The attendance was a few hundred. My idea was to take the audience from Church and Turing in the 1930s, all the way to the quantum computational supremacy experiments that Google and others are doing now—as part of a single narrative.

If you’re interested, streaming video of the lectures is available as of today (though I haven’t watched it—let me know if the quality is OK!), as well as of course my slides. Here you go:

Lecture 1: The Church-Turing Thesis and Physics (watch streaming / PowerPoint slides) (with an intro in German by Giovanni Sommaruga, who knew Bernays, and a second intro in English by Renato Renner, who appeared on this blog here)

Abstract: Is nature computable?  What should we even mean in formulating such a question?  For generations, the identification of “computable” with “computable by a Turing machine” has been seen as either an arbitrary mathematical definition, or a philosophical or psychological claim. The rise of quantum computing and information, however, has brought a fruitful new way to look at the Church-Turing Thesis: namely, as a falsifiable empirical claim about the physical universe.  This talk seeks to examine the computability of the laws of physics from a modern standpoint—one that fully incorporates the insights of quantum mechanics, quantum field theory, quantum gravity, and cosmology.  We’ll critically assess ‘hypercomputing’ proposals involving (for example) relativistic time dilation, black holes, closed timelike curves, and exotic cosmologies, and will make a 21st-century case for the physical Church-Turing Thesis.

Lecture 2: The Limits of Efficient Computation (watch streaming / PowerPoint slides)

Abstract: Computer scientists care about what’s computable not only in principle, but within the resource constraints of the physical universe.  Closely related: which types of problems are solvable using a number of steps that scales reasonably (say, polynomially) with the problem size?  This lecture will examine whether the notorious NP-complete problems, like the Traveling Salesman Problem, are efficiently solvable using the resources of the physical world.  We’ll start with the P=?NP problem of classical computer science—its meaning, history, and current status.  We’ll then discuss quantum computers: how they work, how they can sometimes yield exponential speedups over classical computers, and why many believe that not even they will do so for the NP-complete problems.  Finally, we’ll critically assess proposals that would use exotic physics to go even beyond quantum computers, in terms of what they would render computable in polynomial time.

Lecture 3: The Quest for Quantum Computational Supremacy (watch streaming / PowerPoint slides)

Abstract: Can useful quantum computers be built in our world?  This talk will discuss the current status of the large efforts currently underway at Google, IBM, and many other places to build noisy quantum devices, with 50-100 qubits, that can clearly outperform classical computers at least on some specialized tasks — a milestone that’s been given the unfortunate name of “quantum supremacy.”  We’ll survey recent theoretical work (on BosonSampling, random circuit sampling, and more) that aims to tell us: which problems should we give these devices, that we’re as confident as possible are hard for classical computers? And how should we check whether the devices indeed solved them?  We’ll end by discussing a new protocol, for generating certified random bits, that can be implemented almost as soon as quantum supremacy itself is achieved, and which might therefore become the first application of quantum computing to be realized.

Finally, thanks so much to Giovanni Sommaruga and everyone else at ETH for arranging a fantastic visit.

95 Responses to “Paul Bernays Lectures”

  1. William Hird Says:

    “how badly did they screw up to wind up with me ? ”
    Hi Scott, you are too modest, are you kidding, who better than you to give this talk? Yes, you can name names 🙂

  2. Scott Says:

    William #1: I mean, no one better than me to give these specific lectures, but surely many better people to give other possible Bernays Lectures. 🙂

  3. Andrei Says:

    Scott #2:

    Well there’s your answer! Maybe they just thought the Bernays series was ripe for your brand of quantum computation.

  4. Ian Says:

    These were great, thanks so much for sharing!

  5. Paul Topping Says:

    I’ve watched the first lecture and will watch the other two soon. I was wondering if you think there are kinds of computation more powerful than that of a Turing Machine. You mention hypercomputation but if you gave an opinion on whether it exists, I missed it. The Wikipedia page on “Hypercomputation” has a great quote attributed to Martin Davis:

    Hypercomputation is little more than: “if non-computable inputs are permitted then non-computable outputs are attainable.”

    Is this your view as well?

  6. Scott Says:

    Paul #5: In some sense, the entire point of the lecture was to give an answer to the question, which I did in the conclusion when I said that the (physical) Church-Turing Thesis still seems to be on extremely solid ground. Or in other words: that physical hypercomputation seems to be impossible. As I covered in the talk, the arguments to the contrary strike me as extremely flimsy—they either
    (1) cheat with definitions in uninteresting ways (what I called “theft over honest toil” in the talk), or else
    (2) completely disregard what we learned from black hole thermodynamics about fundamental limits on computation and information storage imposed by the Planck scale.
    And yes, Martin Davis’s quip strikes me as a sufficient response to many of the arguments in class (1) (like Wegner and Goldin’s).

  7. p Says:

    Hi Scott, permit me a naive question: Could one use a small quantum computer to verify the computation of a larger one, in the same way that a classical computer can verify the computation of a small quantum computer (taking much more time)? Thanks for the talks!

  8. Scott Says:

    p #7: That depends on the problem, but we don’t think there’s any general reason for a small quantum computer to be useful in verifying a larger one. Unless the small QC was actually exchanging qubits with the large QC in order to verify it—in which case, it could run the protocol of Aharonov-BenOr-Eban, but in which case, the other protocol of Broadbent-Fitzsimons-Kashefi showed that it doesn’t even need to be a QC anyway; a classical computer that can send polarized photons into a fiber-optic cable suffices.

  9. Ted Says:

    Fantastic lectures Scott, thanks so much for posting.

    Couldn’t Google largely sidestep all the theoretical issues around the classical hardness of sampling, by simply holding a public competition? At a certain pre-announced time, an independent third party publicly announces the random circuit instance C, then *anyone* has a short window to publicly submit a sample of some fixed number of bit strings. Then there’s a lengthy judging period when a classical supercomputer calculates the cross-entropy of each submitted sample, and Google pays $1 million to whoever submitted the sample with the highest cross-entropy. If all goes well, their own submission is the only sample whose cross-entropy is significantly above k/2^n, and no money changes hands.

    (There may be more submissions than can be feasibly checked, but there are ways around that – e.g. an auction to pay a retroactive entry fee which caps the number of eligible entries to the “best” ones. Your complexity-theory results suggest that the probability of beating Google would be low enough that only a negligible entry fee would be required. You’d want to give plenty of preparation time before starting the competition, and set the prize high enough to motivate the best classical simulation people to compete – something that D-Wave has conspicuously never done. But the real motivation would probably be the publicity of publicly beating Google.)

    My proposal doesn’t answer the theoretically interesting question “On complexity-theoretic grounds, could any classical computer feasibly perform this calculation?”. Instead, it answers the theoretically uninteresting question “At this particular moment in time, can any actual existing classical computer perform this calculation?”. But it has several advantages: (a) it doesn’t rely on (although it is motivated by) any unproven conjectures from complexity theory, so it is arguably a more convincing demonstration of the “practical” version of quantum supremacy. It’s also much easier for laypeople to understand, and so arguably better publicity for Google if they win. (b) Everything is totally public and externally verifiable. In particular, it removes any suspicion that Google cheated and chose a non-random circuit instance with some hidden structure that they’ve learned to simulate classically. (c) It removes the need to choose a semi-arbitrary value of b > 1 ahead of time. Of course, they can calculate the value of b after the fact for each submission, and if Google’s is the only submission with b significantly greater than 1, then that would simultaneously provide the “theoretical” evidence as well.
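
    To make the scoring step concrete, here is a minimal sketch (an illustration only, not anyone’s actual judging code) of the linear cross-entropy statistic a judge might compute for each submission. It assumes the judge has already obtained the ideal output probabilities of the challenge circuit by brute-force classical simulation; the function and variable names are hypothetical.

    ```python
    from typing import Dict, List

    def linear_xeb_score(samples: List[str], ideal_probs: Dict[str, float], n_qubits: int) -> float:
        """Average of 2^n * p_C(x) over the submitted bit strings x.

        Uniform random guessing scores about 1.0 in expectation (the k/2^n
        baseline above, rescaled); an ideal noiseless sampler for the circuit C
        scores about 2.0; a noisy but genuinely quantum device should land
        noticeably above 1.0.
        """
        scale = 2 ** n_qubits
        return scale * sum(ideal_probs.get(x, 0.0) for x in samples) / len(samples)

    # Toy example with a made-up 3-qubit "circuit" (its ideal distribution is
    # simply hard-coded here); a real judge would get ideal_probs from a
    # supercomputer simulation of the published challenge circuit.
    ideal_probs = {format(i, "03b"): p for i, p in
                   enumerate([0.30, 0.05, 0.02, 0.18, 0.01, 0.25, 0.04, 0.15])}
    print(linear_xeb_score(["000", "101", "011"], ideal_probs, 3))  # well above 1
    print(linear_xeb_score(["001", "010", "100"], ideal_probs, 3))  # well below 1
    ```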

  10. cb Says:

    Can the certified random bits in the 3rd lecture not only be certified as random but also certified as coming from a particular trusted source, were the configuration of the circuit used in the sampling process to be kept secret? In practice how constrained or how large is the space of possible configurations to the circuit?

  11. Paul Topping Says:

    Scott (#6): I guess I didn’t equate the validity of Church-Turing with the existence/non-existence of other kinds of computation. Would the existence of some other kind of computation really disprove it? To be clear, I’m not trying to claim such a computation exists.

  12. Scott Says:

    Ted #9: Google will basically do as you suggested. That is, they’ll publish the sampling challenges online, so it will be open to any other research group to demonstrate a faster classical simulation (and indeed, groups at IBM and Alibaba have already put out papers in this direction, pushing the frontier of how many qubits they can simulate classically). This is an extremely important complement to complexity-theoretic hardness arguments, and is similar to what people do in applied cryptography.

  13. Scott Says:

    Paul #11: The entire position I took in the talk is that today, the Church-Turing Thesis is most fruitfully understood as a falsifiable empirical claim—namely, that any physical system can be simulated by a Turing machine to any desired precision. Under that conception of the thesis, yes, a ‘hypercomputer’ would disprove it, by definition.

  14. Scott Says:

    cb #10: If you wanted to verify the source of the random bits, the obvious way to do that would just be to append a conventional digital signature to them.

    If we had a system with (say) 50 qubits and 500 gates, the number of possible challenges would exceed 10^500. In particular, enough that no lookup table caching deterministic responses to the challenges could fit in the observable universe. (Although in practice, one would probably generate the challenges pseudorandomly, in which case the number of possible challenges would “only” be exponential in the seed length of the PRG.)

  16. Jacob Says:

    Great series of lectures! Very exciting time for quantum computing.

    I have a small question about ‘hypercomputing’. You said that you don’t agree with Deutsch’s claim that different manifestations of physics could lead to different (more powerful) notions of computing. How would a Zeno computer not qualify here? Do you think it is even impossible in theory to construct a theory of physics that could include a Zeno computer?

    Also separately, in Deutsch’s book I believe he conjectures some sort of Zeno computer that could possibly exist in a big rip/big crunch scenario, where the metric of the universe goes to zero or infinity in finite time. He uses this as a way to suggest the possibility that humans could ‘live forever’ by uploading their consciousness onto these infinitely fast Zeno computers.

    So I guess my question is in 2 parts. Is a big rip/big crunch Zeno computer ruled out by current physics? And could any system of physics exist at all that could give rise to a Zeno computer (maybe physics ‘existing’ is not well defined enough to answer this question)?

  17. Scott Says:

    Jacob #16: Again, yes, I’d say that a Zeno computer is ruled out by current physics, as long as you consider the holographic entropy bound “current” rather than “future” (it’s never been experimentally tested, but looks like an unavoidable conclusion of GR combined with QM).

    If our world had had only QM and no gravity, or only gravity and no QM, then for all I know a Zeno computer might have been possible—but it would depend on the details of that hypothetical world.

    In his book, Deutsch was reporting the speculations of Frank Tipler, which Tipler also relates to his Christian religious beliefs. In 1998, a year after Deutsch’s book was published, Tipler’s cosmology was clearly ruled out by the discovery of the dark energy, which (if it’s as constant as it appears to be) implies that there won’t be a Big Crunch. I believe that Deutsch has accepted this and moved on, though Tipler has not.

  18. Ari Says:

    So is the universe a simulation? Are we ems?

    Sabine Hossenfelder did not believe in the simulationability of the universe.

    Dear Professor, I request a live video talk with you and Sabine discussing this simulationability of the universe. Although only one of you is a physicist.

  19. gentzen Says:

    Great series of lectures! Nice arguments, debates and opinions.

    I first consumed the slides on an ipad, but there were annoyingly many ‘?’. So I downloaded them on my laptop, and found out that most of the slides contain nice animations. So probably your decision to not provide pdfs of the slides was a good idea, because otherwise I would not have discovered the animations.

  20. Concretion Says:

    The CTT abstracts from the finiteness of space, and that has proven really useful. It does not abstract from the finiteness of time (in the sense of allowing an infinite number of compute steps). Is there a neat explanation of why the CTT treats space and time differently? I conjecture that this is because space can be reused, while time can’t, but I’m not sure.

  21. Scott Says:

    Ari #18: If you ask, could the universe be simulated by a Turing machine to any desired precision—then as I said, I think the evidence strongly points to “yes” and that that’s a profound fact about our world.

    If you ask, could we get direct evidence of ‘simulators,’ like a bug or an error message—well, you can place your own bet, but I’d personally place this in the same class with any other religious hypothesis, like Armageddon or the Second Coming or the Moshiach. I don’t live my life in expectation of such things. 🙂

    If, finally, you ask: are we in a simulation even if no such evidence can ever be obtained—well, that’s a question whose answer, by definition, can have no effect on any of our explanatory accounts of the world. So the more we can train ourselves to reformulate the question more productively, the better off we’ll be.

    I’ve had many exchanges with Sabine over the years. There’s probably some daylight between the answers I gave above and her answers, but I don’t remember what it is.

  22. Scott Says:

    Concretion #20: I don’t see that the Church-Turing Thesis actually treats time and space differently in the manner you say (even though they are, of course, different). In principle, a Turing machine could just move its tape head to the right forever, traversing an infinite amount of both time and space. On the other hand, in order for a function f to be “computable”—and this is really what the thesis is about—there has to be a Turing machine for f that halts on every input, having used only a finite amount of both time and space.

  23. cb Says:

    Scott #14: I guess the circuit configuration must be revealed anyways, to anyone wishing to certify the randomness– is that correct? As in, to apply the inequality they have to know the source distribution which is really just a specification of the circuit. In that case I really can’t think of anything interesting about trying to keep the circuit configuration secret for a period of time.

    Aside from secrecy, is there anything at all consequential about the choice of circuit configuration?

    Since the samples are certifiably random, I guess the only thing that remains is the question of how the non-uniform source distribution ultimately interacts with a chosen randomness extraction process. I’m not familiar enough myself, but Wikipedia says:

    >”there is no single extractor that has been proven to produce truly random output from any type of weakly random source”

    So I suppose someone could fear that a circuit configuration could be cleverly chosen such that it exposes a backdoor when coupled with a popular randomness extraction procedure? (Of course when I say backdoor I just mean rendering the randomness extraction useless in the worst-case, so that the output bits have the same bias or entropy as the original sampled bits– not that they are somehow completely de-randomized.)

  24. gentzen Says:

    Martin Davis’s actual quip from The Myth of Hypercomputation reads:

    When the claims are viewed critically, it is seen that they amount to little more than the obvious comment that if non-computable inputs are permitted, then non-computable outputs are attainable.

    I had hoped that he was hinting at Chaitin’s ingenious construction by this, but apparently the proposals for hypercomputation he had to refute were less illuminating. Still, Martin Davis’s article itself is nicely written, and makes arguments similar to Scott’s.

    Anyway, that quote reminded me of what I tried to write in January 2017, but then decided to not even try: One part of the readers would just understand it without much explanation, and the other part would have a hard time even if I managed to explain it clearly. So here is the version without any explanation or elaboration (since I never wrote one):

    Say you buy a brand new TM with oracle for HALT from the company D-Wine. How can you be sure that they didn’t cheat you? They assure you that it passed both their PA and ZFC test suites. After you receive the machine, you run your Woodin test suite on it, and it fails. Can you conclude that the machine is rubbish?

    Assume D-Wine now has a new internet offering, which they call “book of arithmetic”. It actually seems quite cheap, only 1 cent per gigabyte. And apparently the information is highly compressed, using state of the art algorithmic information theory. Their sales representative said that it would enable you to decide arbitrary arithmetic sentences, if you had sufficient computational resources to decode the compression. For that he recommended their famous TM with oracle for HALT.

  25. Scott Says:

    cb #23: That’s correct, the verifier knows the circuit.

    What’s important about the circuit is mostly just that it gets drawn from a distribution for which spoofing is classically hard.

    While I unfortunately didn’t have time to go into it in this talk, my scheme (like other certified randomness schemes) requires the classical challenger to start with a small seed. There are several reasons, one of which is of course to generate the challenges. But a second reason is that, with a seed, you can use a seeded randomness extractor—for example, the Guruswami-Umans-Vadhan (GUV) extractor—to get out almost all the min-entropy that’s present in your randomness source, regardless of how it’s distributed, and output nearly uniformly random bits.
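
    For readers curious what a seeded randomness extractor looks like in code, here is a minimal sketch. It is not the GUV extractor mentioned above (which is far more seed-efficient); it uses the much simpler Toeplitz-hashing construction, which by the leftover hash lemma is also a valid seeded extractor, at the cost of a seed about as long as the input. All names and parameters below are illustrative.

    ```python
    import secrets

    def toeplitz_extract(source_bits, seed_bits, m):
        """Hash the weak source with a random Toeplitz matrix over GF(2).

        Needs len(source_bits) + m - 1 uniform seed bits and outputs m bits
        that are close to uniform whenever the source has enough min-entropy.
        """
        n = len(source_bits)
        assert len(seed_bits) == n + m - 1
        out = []
        for i in range(m):
            # Entry (i, j) of a Toeplitz matrix depends only on i - j.
            row = (seed_bits[i - j + n - 1] for j in range(n))
            out.append(sum(r & s for r, s in zip(row, source_bits)) % 2)
        return out

    # Toy usage: 256 raw, possibly biased bits from the sampler, a uniform seed,
    # 64 extracted bits.
    raw = [secrets.randbits(1) | secrets.randbits(1) for _ in range(256)]   # biased toward 1
    seed = [secrets.randbits(1) for _ in range(256 + 64 - 1)]
    print(toeplitz_extract(raw, seed, 64))
    ```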

  26. Seed generator Says:

    Scott #25: In that case, what prevents the seed generator from being compromised and used to generate a trivial circuit (one that even classical computers can simulate)?

  27. fred Says:

    Scott #21

    If we accept the following two statements:

    1) realities similar to ours can be simulated with a Turing machine.

    2) realities similar to ours naturally assemble Turing machines (in our case, life appeared, intelligent life appeared, computers appeared, VR appeared, …).

    It seems then unavoidable that most realities similar to ours would be simulations.

    Also, if we ever build general AIs that seem to be conscious, the above argument would be strengthened because this would suggest that a simulation and what’s being simulated are indistinguishable from a subjective point of view.

  28. Eric Says:

    Dear Scott,

    I attended the first lecture but did not quite make it through: the constant deluge of “ah” pauses made for a very annoying listening experience. While I believe that your writing is at the top of what is offered in terms of clarity and grace, I would like to leave this comment as (implicit) constructive criticism for you to improve your lecturing style.

  29. Scott Says:

    Seed generator #26: Given your handle, we can’t just use you? 🙂

    Seriously, as long as we trusted the classical code that was generating the challenges, a seed would in principle only need to be generated once—from the weather, quasars, the Dow Jones average, radioactive decays, anything—in order to provide a stream of challenges that were unpredictable to the QC for decades, and hence true random bits from the QC for decades.

    It’s true, though, that how to delegate trust among all the parties involved is one of the biggest questions with a scheme like mine. Classical cryptographers will probably have more to say about it than I will.
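
    Pulling the pieces of this thread together, here is a minimal sketch of the verifier’s challenge-and-response loop, written as ordinary Python. It is only an illustration of the flow (seed, challenge, timed reply, cross-entropy check, extraction), not the protocol as actually specified; every helper passed in is a hypothetical stand-in supplied by the caller.

    ```python
    import time

    def certified_random_bits(seed, n_rounds, make_challenge, ask_qc, score, extract,
                              time_limit=0.1, threshold=1.1):
        """Classical verifier loop: challenge the untrusted QC, insist the reply
        arrives too quickly to have been computed classically, check it against
        a cross-entropy threshold, then run a seeded extractor over the raw
        samples to get nearly uniform output bits."""
        raw = []
        for i in range(n_rounds):
            circuit = make_challenge(seed, i)        # e.g. a PRG-generated random circuit
            start = time.monotonic()
            samples = ask_qc(circuit)                # untrusted quantum device
            if time.monotonic() - start > time_limit:
                raise RuntimeError("reply too slow: it could have been spoofed classically")
            if score(circuit, samples) < threshold:  # expensive classical check
                raise RuntimeError("reply failed the cross-entropy check")
            raw.extend(samples)
        return extract(raw, seed)                    # e.g. a GUV-style seeded extractor

    # Toy usage with dummy stand-ins, just to show the call shape:
    bits = certified_random_bits(
        seed=12345, n_rounds=3,
        make_challenge=lambda seed, i: ("circuit", seed, i),
        ask_qc=lambda circuit: ["0110", "1011"],
        score=lambda circuit, samples: 1.5,
        extract=lambda raw, seed: "".join(raw))
    print(bits)
    ```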

  30. Scott Says:

    Eric #28: Experience shows that the only way for me even to approximate the diction you want is literally to write down my entire talk in advance—like a humanities professor, or a journalist or politician reading from their TelePrompTer, or Stephen Hawking. Alas, with two rowdy kids and no nanny to take care of them, it’s a miracle whenever I manage to leave the house to give a talk at all. Which means that you get to experience my talks, if you do, with my natural diction, as if I were telling it to you over lunch. Sorry that that’s not good enough.

  31. Randall K McRee Says:

    Fred #27

    There seems to be quite a confusion here between a simulation and a *prediction*. As Scott pointed out in the Bernays lecture the simulation of a future event supplies you with knowledge of a probability distribution. It does not actually tell you whether the bullet is in the chamber and therefore whether you will be dead, or not. So simulation is not tantamount to *knowing* the future. That isotope still decays randomly in the real world, etc.

  32. Randall K McRee Says:

    Eric #28, Scott #30.
    This is what Toastmasters is for. I went for some time a few years ago. Mostly I was trying to cure the anxiety I felt when talking in front of more than two people. It helped with that, and the ah, huh, stutter as well. It only took a few visits for me to improve immeasurably.
    https://www.toastmasters.org/Find-a-Club/00004256-austin-toastmasters-club

  33. Scott Says:

    Randall #32: But I don’t have speaking anxiety. Probably 5000 other anxieties, but not that one! 🙂 I did once try a speech therapist, who didn’t noticeably help, though maybe I could try again. Mostly, though, I already know what I’d need, for those listeners who actually care about this, and it’s simply more time to prepare/practice my talks.

  34. cb Says:

    Scott #25, #29: I was missing the obvious point that every new batch of bits someone requests must be a new challenge-response, where the response comes in a timely fashion after the challenge, otherwise non-certifiably random bits could have been slowly generated on a supercomputer. The circuit specification is the challenge.

    I guess there is another situation though where you just want a good hardware RNG in-house, so– 100% trusted and no challenge-response protocol needed. In that case, it seems to me that you might still prefer one of these classically hard sampling problems over some QC algorithm (or hardware RNG for that matter) that doesn’t have these complexity theory arguments about its classical hardness. For this use-case, you don’t need any seed or anything, and can just use a single circuit configuration for all of time, is that correct? And theoretically that circuit could even be publicly known and it wouldn’t matter (although conservatively you would probably not want it to be public or for it to be leaked), at least insofar as you trust your randomness extractor to eliminate the bias of the source distribution.

  35. Dimitri Says:

    Hi Scott,

    I am a longtime reader of your blog; this is my first comment. I live in Switzerland, and had I not missed that you were in Zürich, I would definitely have joined your audience. In any case, I would like to express a big THANK YOU for the amazing job you are doing in general in explaining your work in a manner that is understandable to an audience with only a superficial background in theoretical CS, like me (I am a machine learning engineer). I watched the video streams you posted yesterday, and I am amazed at how you are able to explain quantum computing in a few slides and to give an overview of cutting-edge theoretical research in a few minutes, casually and with a sense of humor. Just from these video streams and a general CS background, one who cares enough to listen can get a pretty good level of understanding of what QC is about, how and why the field was born, and an idea of the engineering challenges that all these companies are trying to overcome. I also admit that for the first time in my life (despite being taught about it at university) I realized this about the P ?= NP question: it is not that it is the only interesting question that one can ask about what is “efficiently” computable, but it is one of the natural questions one should try to answer first before attempting to see further.

    Following a remark made earlier, I would like to comment that maybe almost everyone (for sure everyone I know, including myself) needs hours and hours of preparation, usually spread over many days, to eliminate all these pauses that occur naturally when we give a presentation on a complex subject. I, for one, couldn’t care less: the message is delivered even when the speaker takes a few pauses to organize his/her thoughts. As a listener, in any case I need to really concentrate to follow such a talk (dealing with topics that I am not 100% comfortable with); a few “uhm” moments are irrelevant. Thanks again.

  36. Scott Says:

    Dimitri #35: Thanks so much! Made my day.

  37. Scott Says:

    cb #34: It would be great, for purposes of my protocol, to be able to flesh out the assumptions under which the use case you describe makes sense! E.g., a customer that owns a QC, but doesn’t fully trust it, nor does it fully trust any hardware RNG that it’s able to buy or build for itself.

    Both in this case, and in the sort of use case I had in mind (a public randomness beacon), the question of whether you can just keep reusing the same challenge circuit over and over is difficult, and indeed is one of the main open problems I leave. It seems plausible that you could squeeze more and more certified entropy out of the same circuit, at least until an adversary would’ve had enough time to backdoor that circuit, at which point a new circuit would be needed. But any security analysis justifying that conjecture would need to go beyond mine.

  38. Eric Says:

    Dear Scott,

    I meant no offense. I greatly appreciate you taking the time for these talks and if my previous comment caused you any discomfort, I sincerely apologize. Regarding your explanation, I can see that with time being the main issue, there is indeed little one can do.

  39. J.C. Says:

    Scott Lecture #1: A starker objection to your hypothesis is that a Turing machine can’t even generate a single truly random bit, which nature can do via quantum mechanics. Adding a random sampling step to Turing machines is just a version of no true Scotsmanism. No one doubts the Church-Turing thesis for Church’s and Turing’s original motivation of modeling axiomatic formal mathematical systems, but for modeling physical systems it was Born (rule) dead.

  40. Dogwin Says:

    Dogwin’s law: no discussion of the simulation hypothesis ever brings up the Landauer limit, no matter how long it continues, and so no discussion of the simulation hypothesis will ever be of any value.

    Oh, wait…

  41. Scott Says:

    J.C. #39: I explicitly discussed that in the lecture itself—in the section about tiresome and boring objections to the Church-Turing Thesis, the ones that quibble over definitions and try to get away with theft over honest toil. And I gave two responses:

    1. It’s a theorem that even a probabilistic TM can decide only languages that are computable in the ordinary sense.
    2. As long as there are only finitely many possible outputs of a (halting) probabilistic TM, a deterministic TM can perfectly calculate their probabilities, leaving only the actual spin of the roulette wheel.

    Ultimately, then, I think this objection to the Church-Turing Thesis is in the same class as Wegner and Goldin’s “but a Turing machine isn’t connected to the Internet,” or my parody “but a Turing machine can’t toast bread.” I.e., it faults Turing machines not for any intrinsic lack of computational expressive power, but merely for lack of some external peripheral that you’d need to plug in to the USB port to get the TM to do what you wanted in the physical world. The trouble is that there are infinitely many possible external peripherals you might want, and it’s obviously outside the scope of the theory of computation to try to enumerate all of them.

    PS. If you just wanted a single random bit, and it was allowed to be the same from one run to the next, an obvious solution would be to hardwire it into the TM. 🙂
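
    A toy illustration of point 2 in the list above (a sketch of my own, not anything from the talk): a deterministic program can compute the exact output distribution of a halting probabilistic procedure simply by enumerating every possible sequence of coin flips.

    ```python
    from collections import defaultdict
    from fractions import Fraction
    from itertools import product

    def noisy_walk(flips):
        """A made-up halting probabilistic procedure: three coin-driven +/-1 steps."""
        return sum(1 if f else -1 for f in flips)

    def exact_distribution(proc, n_flips):
        """Deterministically recover Pr[output] for `proc` by summing over all
        2^n_flips coin-flip sequences, each weighted 1/2^n_flips."""
        dist = defaultdict(Fraction)
        for flips in product([False, True], repeat=n_flips):
            dist[proc(flips)] += Fraction(1, 2 ** n_flips)
        return dict(dist)

    print(exact_distribution(noisy_walk, 3))
    # {-3: Fraction(1, 8), -1: Fraction(3, 8), 1: Fraction(3, 8), 3: Fraction(1, 8)}
    ```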

  42. Ari Says:

    Scott,

    Thanks for the reply. Well, I’m happy to hear it’s possible. I think Sabine said it’s not.

    http://backreaction.blogspot.com/2017/03/no-we-probably-dont-live-in-computer.html
    https://scottaaronson-production.mystagingwebsite.com/?p=3208

    “I’m not claiming it’s not possible to simulate our observations on a (quantum) computer. My point is that it’s difficult to show it can be done, and nobody presently knows how to do it. Whether you can even do it from qubits is an active area of research, as you say, and that’s leaving aside the question of whether speaking of a programmer is still science or religion. ” Comment #9

    I’d really love you two discuss these things on a video.

  43. gentzen Says:

    Scott, I have a “straightforward” question with respect to the Cauchy-Schwarz inequality as an example for first order reasoning about real numbers: The provably double exponential part of the decision procedure is the quantifier elimination part, but the Cauchy-Schwarz inequality is already quantifier free. I can see why deciding quantifier free formulas is NP-hard, and googling turned up a PSPACE algorithm for deciding them, but I couldn’t find a proof (neither by googling, nor by trying it myself) that deciding them would be PSPACE-complete. Do you know a better “lower bound” for the computational complexity of that problem, or why it is hard to come up with a better lower bound than NP-hard?

    Another question with respect to the first lecture and “Enough with theft over honest toil!”: Is this merely a fight over words? If the “programming language semanticists” had used new names like “Recursion Thesis” or “Abstract Recursion Procedures” to describe the questions they try to answer, would you have been willing to grant that they are trying to resolve a legitimate puzzle? I guess you won’t have time to read it, but Theses for Computation and Recursion on Concrete and Abstract Structures by Solomon Feferman (2013) tried to do that and other similar things to reduce the “disconnect” which has occurred in that discussion.

  44. Scott Says:

    gentzen #43: Yeah, with just one quantifier, you get a slightly mysterious complexity class called ExistsR (the existential theory of the reals), which sits between NP and PSPACE (indeed it’s in the counting hierarchy, I think).

    Yes, I’d be more than happy to grant the programming language semanticists their falsified thesis so long as they used a different name for it, to make it clear that they’re talking about internal program organization rather than about what machines can exist in the world.

  45. Rainer Says:

    I have difficulty understanding your response to the chaos argument.
    The physical Church-Turing thesis postulates that “any bounded physical system can be simulated by a Turing machine up to a desired precision”.

    But in your answer to that argument you define what is and what is not a physical realisation of a Turing machine. I cannot see why this is a response to the chaos argument.

    If there exists a Turing machine, it must be able – according to the physical Church-Turing thesis – to simulate the chaotic system up to any desired precision. It doesn’t matter what physical realisation is used for this Turing machine.

  46. Scott Says:

    Rainer #45: It’s a response because when you say that a system “can be simulated,” it’s presumed that the appropriate data about the system’s initial state is provided to the simulator. If that data isn’t (or can’t be) provided, then the reasons why the simulation will fail are obvious, and have nothing to do with the inherent computational power of the system you’re trying to simulate.

  47. Scott Says:

    Dogwin #40:

    Aaronson’s Law: From now on, any commenter who comes here just to say that a whole conversation about topic X has been valueless because it never brought up the apparently remote topic Y—and who does so in a “drive-by” manner, taking Y’s relevance to X completely for granted and never bothering to explain or justify it—will be banned from this blog.

  48. Rainer Says:

    In the quantum mechanical argument you say that one can calculate the probability distribution. I think this means that you can in principle solve the Schrödinger equation of the system and interpret its end state as a probability distribution over all possible measurement results.
    Maybe you can do this. But what you cannot do is provide an algorithm which tells you _when_ to do the measurement.
    I think that this is a necessary ingredient of any simulation, because a quantum mechanical system’s development is a complex interplay between evolutions of the wave function according to the Schrödinger equation and measurements.

  49. Scott Says:

    Rainer #48: Your objection would be valid if we thought of “measurement” as some sort of freely-willed, atomic action that stood outside the normal laws of physics. But while that may have been how at least some of the founders of QM thought about measurement, it’s not how physicists overwhelmingly see it today. With the benefit of decoherence theory, today measurement is regarded as just another ordinary physical interaction—i.e., one governed by the Schrödinger equation—involving the measured system, the measurement apparatus, and the external environment. The remaining disputes largely center around how to interpret the resulting picture of the world—for example, whether or not to invoke the concept of “branching into multiple worlds,” as the math seems to suggest to many that one should.

  50. adamt Says:

    As you know, it is my very big hope/bet that the Church-Turing Thesis is false and that the laws of physics are not computable, i.e., no Turing machine can simulate what the universe does with her laws. I’m also hopeful this will be found in my lifetime. Oh the things we will learn! Can’t wait to read/watch these 🙂

  51. fred Says:

    Isn’t each of us already living in a simulation?
    The human brain is a computer which collects data points from the senses and uses them to build a totally internal representation of some supposed reality. This internal model is used to come up with a probability distribution for the values of future input data points. Measured deviations from the projections are then used to refine the internal representation.
    If this is not the definition of a simulation, I don’t know what is 😛

  52. William Hird Says:

    Scott #49
    Every time I hear the “Many Worlds” phrase, I think of the Richard Feynman elevator joke. Feynman was on an elevator with another physicist who believed in the “Many Worlds” theory, but of course Feynman never did. So the story goes that the Many Worlds guy says “good morning” to Feynman and Feynman replies “Good morning, which universe are we living in today?” 🙂

  53. Scott Says:

    William #52: If true, then surely one of Feynman’s least funny wisecracks. I’ve heard 10,000 variants of it from lesser minds than his. 🙂

  54. Rainer Says:

    Scott #49:
    Now we can see the elephant in the room. The physical Church-Turing thesis implies that free will does not exist and consciousness is not necessary at all (Zombies could do the same job).
    I do not think that this is a common belief, because more and more philosophers of mind are becoming panpsychists.
    I think (hope?) that I have free will and therefore believe that the physical CT thesis is not true.

  55. cb Says:

    Scott #37:

    >at least *until* an adversary would’ve had enough time to backdoor that circuit

    Ah yes, I didn’t think of that either- I guess for the in-house RNG scenario (where the circuit is kept secret) there are two tasks for the attacker: 1) infer or learn the circuit and 2) invest the time to “backdoor” the circuit so they can easily sample it classically. (1) is quite cool to think about because it brings to mind the question of security against leakers (your own personnel who have access to the hardware and could choose to subvert you by leaking the circuit to an attacker).

    Clearly a bad solution would be to use a (deterministic) PRNG to generate the sequence of circuits as then the leaker could simply leak the internal state of the PRNG, perhaps with information about the schedule by which the circuit is rotated. I suppose by a sort of induction, if you trust your first circuit, you can just use some of the certified randomness from that first circuit to construct your next chosen circuit, sort of trusting subsequent randomness by induction on the first circuit.

    Would love to develop this use-case, if I could find an interested party– to me it seems quite applicable either to government or industry, especially as you mentioned Google is already putting effort into these sampling problems of yours and some major developments may be right around the corner.

  56. Scott Says:

    Rainer #54: Free will and consciousness are infamously hard to fit into any scientific understanding of the world. This problem is not special to the Church-Turing Thesis. You’re welcome to worry about it (I worry about it too, on some days), but do you also see it as a reason to doubt quantum field theory? Or what about evolution?

  57. Rainer Says:

    “Rainer #54: Free will and consciousness are infamously hard to fit into any scientific understanding of the world. This problem is not special to the Church-Turing Thesis.”

    That is true without doubt. But the physical theory should not explicitly rule it out.
    If one says that _any_ physical process can be simulated by a Turing machine, one rules out consciousness and free will. It’s the same as saying all physical processes work like clockwork.
    Evolution does not rule out consciousness and free will. And the nice thing about quantum mechanics is (according to my understanding) that it has opened the door to include consciousness and free will.

  58. Ian Says:

    I keep waiting for an update along the lines of “And here we go!” with a link to Google’s publicly released random quantum circuit and a description of the contest to generate samples using classical computing…

  59. fred Says:

    1) No need to bring in fancy theories to see what’s wrong with free will:
    Any event depends on previous events; if it doesn’t, it’s what QM refers to as pure randomness.
    If one accepts this simple premise of basic physics, then there’s no room for free will.
    Even if one’s definition for free will is some magical way for a soul to apply top-down causation that would “overrule” the atoms, there’s still no way to escape causation at the level of the souls.
    What’s left is pure randomness, and I doubt that would be anyone’s idea of free will.

    2) consciousness – direct observation of one’s own mind makes it clear that thoughts arise in consciousness, as everything else consciousness is perceiving (sounds, sights, emotions, sense of volition, sense of self, …). It’s obviously expected, a consequence of 1), for how could one manufacture his/her own thoughts before they arise, and in a manner that’s independent of everything else going on?

    Not only can free will not exist, but there isn’t even such a thing as “the illusion of free will”; there is only a lot of conceptual confusion created by one’s ego (a result of evolution).

  60. Scott Says:

    Ian #58: Be patient! 🙂

  61. Raoul Ohio Says:

    Scott,

    In case you are not watching much of Ken Burns’ “Country Music”: FYI, you are a ringer for current country superstar Vince Gill.

  62. Scott Says:

    Raoul #61: I just had to look up who he was. I guess I see a vague resemblance … maybe? But probably the same is true of at least 1-2 percent of all the men you’d pass on the street. 🙂

  63. Yovel Says:

    Hey Scott, congrats on the lectures!
    Off topic, if you don’t mind: I have been reading philosophers of science recently (Kuhn, Laudan, Psillos, etc.), and so I wondered: as a computer scientist with strong connections to physics and philosophy, with which side of the realist/non-realist argument do you identify?

  64. Scott Says:

    Yovel #63: I confess that I’m not even sure what that argument is—could you give me an actual statement to agree or disagree with?

    FWIW, I do believe that there’s a real world and that science can learn things about it.

  65. Jon K. Says:

    Already listened to and posted the first talk on Twitter. Good stuff… a lot of material recycled from other talks, including a beautiful Quantum Computer pic, but it’s probably still new to much of the audience. Love the Zeno and Relativity computer discussions.

    I also think it is very helpful how you bring up and try to address all the usual issues people raise when talking about the Church-Turing thesis and whether different architectures/inputs/networks push beyond this definition. One critique though… I think your point about a Turing Machine not being able to toast bread (an example I’ve heard you use before) can be a little confusing, even if the point you are trying to make is correct… or maybe I need to just rephrase the point you are making in my head to make sense of it…

    Another thought that popped into my head around this same time was a nice property of Kolmogorov Complexity. It doesn’t matter if a deterministic process, a random process, a human-generated process, a network of computers, or a mixture of any of these processes generated the string… You can still ask (after fixing the programming language), “what is the length of the shortest program that produces this bit string (and I don’t care how it was generated in the first place)?” Seems relevant to the conversation in some way (beyond the fact that KC is not computable), but too long to elucidate here…
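
    For concreteness, the quantity being gestured at here is the standard (plain) Kolmogorov complexity; fixing a universal machine U plays the role of “fixing the programming language”:

    ```latex
    K_U(x) \;=\; \min\{\, |p| \;:\; U(p) = x \,\}
    ```

    By the invariance theorem, switching to a different universal machine changes K_U(x) by at most an additive constant independent of x, which is why the choice of language only matters up to O(1).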

  66. anonymous Says:

    I think consciousness with free will rules and zombies drool. If the universe is conscious with will, then the universe as a spacetime personality controls space and time. Spacetime seems to be unified like a consciousness that makes free will decisions.

    Suppose you are a science nerd that doesn’t believe in free will working for Evil Corporation and the Princess of the Universe comes down to Earth to save it and take charge. Princess Universe performs many seemingly supernatural feats and gets an increasing following.

    Your bosses at Evil Corporation feel threatened and implore you to come up with a supercomputer with the most advanced AI algorithms to model Princess Universe and her followers so they can be countered because obviously Princess Universe is a hostile alien with very advanced technology preparing the way for a hostile alien take over of Earth. You agree with your bosses because you are an atheist who doesn’t believe in free will.

    Thanks to your brilliance and your company working hand in hand with Evil Government, Princess Universe’s followers are on the run, being incarcerated and killed in increasing numbers, it looks like the attempted alien invasion will be thwarted for now — you pat yourself on the back.

    Princess Universe and her followers make a last stand at Megiddo, Israel, and the Evil Union of Governments sends swarms of jets and tanks to finally eliminate the alien insurrection. You watch on TV as the tanks and jets get near. When the jets get close to Megiddo, they enter a space around Megiddo where time stops. The jets freeze in the air and they don’t fall down — tanks freeze too. Time is stopped around that area. This is a seeming supernatural miracle, or maybe it is just another trick of advanced alien technology. How would you interpret the event when asked for your opinion?

  67. TKK Says:

    Great first lecture; haven’t seen the rest, though I’m sure they’re just as good, if not better.

    TBF, I don’t think that the “argument from sensitivity to initial conditions” is that you can’t make useful computers from such chaotic systems. It’s more about: how can you simulate a chaotic system to a useful precision if you don’t have exactly the right initial condition?

    I agree with you that it does not refute the CT thesis: just because you don’t know the exact initial condition, doesn’t mean you cannot ever simulate it, especially if you try enumerating over all the initial conditions (though this may be a very large set). Time is not the important constraint here: it’s more about whether you can ever simulate it even in principle.

  68. Scott Says:

    TKK #67: Thanks! Regarding the initial conditions bit, see my comment #46.

  69. Rainer Says:

    Hi Scott,
    what do you think about weakening Church-Turing in the following way:
    “Any physical theory can be simulated by a Turing machine up to any desired precision.”

    This would be equivalent to the statement:
    “Any physical theory is based on constructive mathematics.”

  70. Yovel Says:

    Scott #64:
    I did hope you knew the argument, since I still have not read much about it. The argument, as far as I understand it, is this:
    While there is a real world, science does not necessarily describe it, or the entities science describes do not refer to the real world. The main motivation to believe science is getting progressively better at describing the world is that its predictions are getting more accurate. However, this notion is not quite true. Many of science’s greatest advances are considered such *because* they are very different from the previous main theories: for example, quantum mechanics throwing away the notion of the aether, Copernican astronomy the crystalline spheres, modern chemistry the phlogiston, modern biology the vital force, etc.
    As to the claim that the empirical predictions are nevertheless becoming more accurate, you get to the problem of defining a non-referring theory as approximately true. Can the genetic theory be approximately true if genes do not exist? Can the atomic theory without atoms? In this way, can the aether theory be approximately true without the aether? If the criteria are fertility or prediction, the theories I stated above seem to meet them while not being referring.
    I’m sorry for the long answer. I tried to summarize “A Confutation of Convergent Realism” by Laudan (Philosophy of Science, doi.org/10.1086/288975), which is way more detailed and is more formalized.
    Thanks in advance!

  71. gentzen Says:

    Scott #44: Thanks for your helpful answers. But I still think about that slightly mysterious complexity class called ExistsR: Is it possible that your vague recollection that ExistsR would be in the counting hierarchy is related to the article On the complexity of numerical analysis by E Allender, P Burgisser, J Kjeldgaard-Pedersen, P Bro Miltersen?

  72. Scott Says:

    Rainer #69: Why is that a weakening? Because it talks about “theories” rather than actual physical systems? I strongly prefer the CT Thesis to talk directly about actual physical systems; otherwise it’s confusingly meta. I also don’t know what constructive mathematics has to do with it, unless you just mean “math that can be simulated by a Turing machine” (in which case, why not use the more precise wording?).

  73. Scott Says:

    Yovel #70: I think it’s blindingly obvious that the predictions of science have (on the whole) gotten progressively better. Weather prediction got noticeably better within my lifetime, and you can tell just by going outside and looking. Before quantum mechanics, people had no idea how to build things like transistors and lasers; after quantum mechanics, they did. One doesn’t need a complicated theoretical framework to see any of this. So if some philosopher is using language in a way that makes them not see it, isn’t it more likely to be a problem with their language use than a problem with science?

  74. Scott Says:

    gentzen #71: Yup, that’s the one. Does it prove what I thought or only a weaker result?

  75. Yovel Says:

    Scott #73: I think that this is exactly the kind of argument that Laudan tries to reject. Technological advances using phenomena described in the theory are not evidence of the theory itself; e.g., a transistor using tunneling is not evidence for the existence of the electron, just as a radio telescope using interference, a prediction of the wave theory of light, does not prove the existence of the aether.
    However, I started this discussion hoping you are familiar with the relevant arguments, and I should probably hold this discussion until at least one of us has good understanding of them. Thanks!
    PS- as encouragement for you, the *other* Scott published a post about Kuhn and his similar claims in January, and this could be your opportunity to check and raise 😉

  76. Scott Says:

    Yovel #75: Then it sounds like Laudan and I have a totally unresolvable disagreement—and to be perfectly honest, I’m not more inspired to read him from your description than I’d be to read a young-earth creationist!

    As for Kuhn, since you brought up the other Scott A, I’d say the best description of Kuhn’s system is that it’s a gigantic motte-and-bailey. The motte is the unexceptional claim that each field of science has “paradigms”—like Newtonian mechanics or quantum mechanics—in which “normal science” gets done, and that occasionally there’s a “paradigm shift.” The bailey—one that Kuhn occasionally indulged in, and that his followers embraced with gusto—is that the paradigms are mutually incomprehensible, and that the paradigm shifts are just random drift, or something, rather than taking us closer to the actual truth about nature.

    As I see it, the latter idea is absurd on its face. As is particularly obvious in the case of physics, each paradigm explains the successes of the previous paradigms, while also explaining new things that the old paradigms were unable to explain. In other words, there’s progress.

  77. gentzen Says:

    Scott #74: Thanks for the confirmation.

    Yup, that’s the one. Does it prove what I thought or only a weaker result?

    There is a FOCS version of the paper, and a later version published in SIAM Journal on Computing. The last sentence of that later version reads:

    Can some important problems in NP_R (such as the existential theory of the reals) be shown to lie in the counting hierarchy?

    So there seems to be the expectation that ExistsR will sooner or later be shown to lie in the counting hierarchy, but apparently it didn’t happen yet.

  78. Rainer Says:

    Scott #72
    I thought that “constructive mathematics” is a well-defined term. But anyway, let it be defined as you did.

    A physical process is anything that happens in nature. The term “process” is used here in a very broad sense.

    A physical theory is a mathematical axiomatic system which describes a part of the physical processes.

    The physical Church-Turing thesis states that all physical theories are actually based on constructive mathematics.

    My definition is weaker in the sense that it does not claim that each physical process has at least one theory. This means that there could exist physical processes which cannot be described by physical theories. I.e. processes that transcend mathematical methods.
    I will not claim that those processes exist, but I would also not exclude this possibility.
    In fact, I think this is the only way out of the free will dilemma.

  79. Scott Says:

    gentzen #77: Ok, thanks for looking this up!

  80. sokrates Says:

    Scott,

    I have enthusiastically watched the Paul Bernays Lectures and generally enjoyed them and learned a lot. As much as I enjoy your straight-shooting style, I found your remarks regarding Optimization-oriented Quantum Computation dismissive. Even Shor argues this is the primary next application for Quantum Computers in the short run, for example via QAOA (can source this as needed).

    You mentioned these ideas are based on “theory” that doesn’t exist. A great student of history such as yourself probably doesn’t need a reminder that great surprises in experiment often precede theorists’ expectations. This could be an exception in CS but it is definitely the norm in physics.

  81. Andrew Knight Says:

    Scott… really enjoyed your first lecture. However — and I’m speaking as a physicist — why did you spend so much time discussing the ideas of “relativity computers” and “Zeno computers” to get infinite computing and solve the halting problem? Are these ideas that anyone takes seriously? I understand that falling into a supermassive black hole might allow me to “see” infinite time, but even then there is no point during my fall that I witness the last digit on an infinite tape of a Turing machine. Both ideas that you discussed are fascinating mathematically, but then again lots of fascinating things happen mathematically when you allow limits to infinity. For example, 100% of the Devil’s staircase, after infinitely many iterations, has zero slope, while its average slope is one. Of course, the limit to infinity is the problem, as there is no nth iteration at which 100% of it has zero slope. Your response to both (particularly that reaching the Planck scale is nature’s way of saying “No!”) was great, but again I am just wondering whether you discussed them in your lecture because they are fun examples or because computer scientists take them seriously…?

  82. Scott Says:

    Andrew #81: There is a small community of philosophers and others who actively argue for the possibility of “hypercomputation” (i.e., things like the Zeno computer), who have whole journals and conferences about the subject, and who even once had a totally supportive article published in Scientific American.

    However, my comments in the talk were addressed less to them than to the vastly larger number who, while not actually believing in hypercomputers, also don’t know of (or believe that there is) anything current physics can say to rule them out. Or, just as likely, who think that hypercomputers would be “impractical because of noise,” but not ruled out for any deep theoretical reason.

    I think it’s a profound fact about our universe that, when you throw Planck’s constant h, the speed of light c, and Newton’s gravitational constant G into the same brew, you end up with what look for all the world like fundamental limits on how many computational steps can be done per second, and on how many (qu)bits can be stored per square meter, without using so much energy that one’s computer will collapse to a black hole. These limits, which are the ones that enforce the Church-Turing Thesis, might have been false in a world without gravity, or in a world with gravity but without quantum mechanics; but they seem to be true in our world. Don’t you think that deserves to be much better known?
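
    To put rough numbers on it (a back-of-the-envelope sketch using the standard Planck-scale and holographic-entropy expressions; orders of magnitude only, and none of this is a theorem):

    $$ t_P = \sqrt{\hbar G / c^5} \approx 5.4 \times 10^{-44}\ \mathrm{s} \quad\Longrightarrow\quad \text{roughly } 10^{43} \text{ elementary steps per second at most,} $$
    $$ \ell_P = \sqrt{\hbar G / c^3} \approx 1.6 \times 10^{-35}\ \mathrm{m} \quad\Longrightarrow\quad \frac{1\ \mathrm{m}^2}{4\,\ell_P^{\,2}\,\ln 2} \approx 1.4 \times 10^{69} \text{ bits per square meter of enclosing area.} $$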

  83. Scott Says:

    sokrates #80: It’s possible that the sheer volume of hype I now have to endure, about optimization and machine-learning speedups from near-term quantum computers, made me even more negative about the subject than I otherwise would’ve been. If so, then I apologize for that bias.

    But let’s look at the facts: even Farhi, the (co)inventor of the QAOA (and my friend and colleague for more than a decade), agrees with me that there are currently no theoretical grounds for expecting the QAOA to deliver a speedup over classical algorithms for solving optimization problems. There was a brief hope that QAOA would at least get a better approximation ratio for some specially constructed NP-hard problems, but that hope was dashed when a post on this blog spurred the discovery of better classical algorithms for those problems.

    Meanwhile, in quantum machine learning, Ewin Tang’s breakthrough and the subsequent developments have removed most of the case for exponential speedups of practical importance that there had been. What remains are mostly
    (1) heuristics (the algorithms I described as “based on theory that doesn’t exist”),
    (2) Grover-type speedups (quantified just below), and
    (3) exponential speedups of unknown applicability (like HHL).
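
    To quantify the Grover-type speedups in (2): Grover’s algorithm searches an unstructured space of N possibilities using roughly (π/4)√N coherent queries, versus about N/2 expected classical queries. For example,

    $$ N = 2^{80}: \qquad \tfrac{\pi}{4}\sqrt{2^{80}} \approx 2^{40} \text{ quantum queries} \quad \text{vs.} \quad \sim 2^{79} \text{ classical ones.} $$

    That’s a dramatic factor on paper, though each of those ~2^40 queries still has to be run coherently (and eventually fault-tolerantly), which is part of why the practical payoff of quadratic speedups on near-term devices remains unclear.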

    Having said that, I’m well aware of the fact that the discovery of useful heuristics has often preceded an understanding of why they work. That’s why one of the most exciting applications of NISQ-era quantum computers will be to do experiments to try to learn more about the performance of various quantum heuristics!

    But I don’t think that the “quantum computing optimists” (among whom I count myself) get to pull a motte-and-bailey—talking up the huge benefits that we plan or expect to see for machine learning and optimization on NISQ-era devices, and then retreating, only when challenged, to the weaker claim that no one has proven that those benefits can’t exist.

  84. Anonymous Says:

    Is there a possibility of an Ewin Tang-type result for Shor’s algorithm?

  85. Scott Says:

    Anonymous #84: That’s a hilarious way of putting it, but yes, it’s entirely possible (for all we know today) that someone could someday prove Factoring is in P. Some experts even consider it likely, although I’m on the fence.

  86. Craig Says:

    Scott,

    I enjoyed listening to Lecture 3 last night. You are the “great communicator” of the quantum computing community.

    Question: You talked about how companies like IBM and Google are trying to prove quantum supremacy by showing that a quantum computer can perform, in microseconds, a task that the best classical supercomputers would take weeks to perform, and that doing this requires a quantum computer at the borderline between classical and quantum, with around 50 to 60 qubits.

    This is great, but what if one wants to test whether a quantum computer with 70 to 150 qubits works? Are there ways that one can test these computers? If not, then how will quantum computing progress?

  87. adamt Says:

    Hi Scott, just finished watching your first lecture and it was awesome. Thanks so much for putting this all together in one place. It matches some of the discussions we’ve had, and I completely agree with your complaints about the so-called "challenges" to the CT thesis so far, other than Penrose’s. I think Penrose’s effort to challenge it – while it may just be wrong – was an actual attempt to wrestle with it, and not to take the easy way out.

    If I were a betting man, I would bet that CT is actually false: that we are going to discover, at some point, an empirical refutation of CT by encountering a physical system that can only be explained by laws of physics that incorporate uncomputable behavior in some fashion. I don’t think I’ve ever asked you this explicitly, but what would your bet be, and how sure of it are you? I’m guessing you would place a large wager on CT being correct, and that it won’t be found to be violated in your lifetime or ever 🙂

    Another thing I would note is that, while the "box on the beach" could very well disprove CT, it might be very, very difficult for humans to *realize* that uncomputable behavior lies in physical systems we encounter all the time. Why? Because, as you stated in your slide, although all the things we already know to be uncomputable – the halting problem, Kolmogorov complexity, Diophantine equations – are in a sense equivalent, they don’t always appear so, or at least not *obviously* so. In much the same way, I would add, the lambda calculus does not at first blush appear equivalent to Turing’s formulation. Or the SKI calculus, for that matter.

    Finally, to lend support to your statement early in the talk, in response to category theorists and type theorists complaining that Turing didn’t anticipate higher-order functions: actual transcompilers for all of these things *already* exist and have been written. Interpreters for the lambda calculus exist and can be run on your computer right now. The engineering for these compilers is done, and it is entirely possible, right now, to take almost any language that employs higher-order functions or incredibly complex type systems, transcompile it to the lambda calculus, and run it on a regular old desktop computer, thus *demonstrating* the equivalence. You probably already know this, but just wanted to echo it.
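
    Just to make that concrete, here’s a toy normal-order evaluator for the untyped lambda calculus in Python (a bare-bones sketch: capture-avoiding substitution is elided by assuming distinct bound names, and real transcompilers are of course vastly more involved):

    ```python
    # Toy normal-order evaluator for the untyped lambda calculus.
    # A term is ('var', name) | ('lam', name, body) | ('app', fun, arg).

    def subst(term, name, value):
        """Substitute value for free occurrences of name (capture avoidance elided)."""
        kind = term[0]
        if kind == 'var':
            return value if term[1] == name else term
        if kind == 'lam':
            if term[1] == name:          # name is shadowed below this binder
                return term
            return ('lam', term[1], subst(term[2], name, value))
        return ('app', subst(term[1], name, value), subst(term[2], name, value))

    def reduce_once(term):
        """Perform one leftmost-outermost (normal-order) beta reduction, if possible."""
        kind = term[0]
        if kind == 'app':
            fun, arg = term[1], term[2]
            if fun[0] == 'lam':                      # beta-redex: (\x. body) arg
                return subst(fun[2], fun[1], arg), True
            fun2, changed = reduce_once(fun)
            if changed:
                return ('app', fun2, arg), True
            arg2, changed = reduce_once(arg)
            return ('app', fun, arg2), changed
        if kind == 'lam':
            body, changed = reduce_once(term[2])
            return ('lam', term[1], body), changed
        return term, False

    def normalize(term, max_steps=10000):
        """Reduce to normal form, if one is reached within max_steps."""
        for _ in range(max_steps):
            term, changed = reduce_once(term)
            if not changed:
                return term
        raise RuntimeError("no normal form reached within the step limit")

    # Church numerals: n = \f. \x. f^n x.  Build 2 by applying successor twice to 0.
    ZERO = ('lam', 'f', ('lam', 'x', ('var', 'x')))
    SUCC = ('lam', 'n',
            ('lam', 'f',
             ('lam', 'x',
              ('app', ('var', 'f'),
               ('app', ('app', ('var', 'n'), ('var', 'f')), ('var', 'x'))))))

    print(normalize(('app', SUCC, ('app', SUCC, ZERO))))
    # -> ('lam', 'f', ('lam', 'x', ('app', ('var', 'f'), ('app', ('var', 'f'), ('var', 'x')))))
    ```

    Running it reduces SUCC (SUCC ZERO) to the Church numeral for 2, which is the Turing-equivalence story in miniature.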

  88. adamt Says:

    Jon K. #65,

    I’m not entirely clear what you are trying to say about Kolmogorov complexity. However, I am glad you brought it up, because for me it is my favorite example of uncomputable behavior. For whatever reason, this is the one that really sparks my *aha* moment of *grokking* what it means to be uncomputable. I think it is really interesting that different people understand these concepts best through different means. Scott will forever have Turing machines and the halting problem as his favorite avenue for thinking about computability and uncomputability. My favorites are the lambda calculus and Kolmogorov complexity. I don’t know why, but they spark my imagination better than Turing machines and the halting problem. And yet I know they are completely equivalent. Like Scott, I’m also struck by this fact of their equivalence and think it very profound and meaningful.
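
    In case it helps anyone else grok it: the standard proof that Kolmogorov complexity is uncomputable is basically the Berry paradox, and you can practically write it down as a program. The function kolmogorov below is hypothetical (that’s the whole point of the contradiction); everything else is ordinary Python:

    ```python
    # Berry-paradox argument (sketch) that Kolmogorov complexity K is uncomputable.
    # `kolmogorov` is HYPOTHETICAL: we assume it computable only to derive a contradiction.
    from itertools import count, product

    def kolmogorov(s: str) -> int:
        """Hypothetically: the length of the shortest program that outputs s."""
        raise NotImplementedError("no such computable function can exist")

    def first_complex_string(threshold: int) -> str:
        """First binary string (shortest first, then lexicographic) with K > threshold."""
        for n in count(1):
            for bits in product("01", repeat=n):
                s = "".join(bits)
                if kolmogorov(s) > threshold:
                    return s

    # If kolmogorov were computable, this file would be a fixed-size program (plus about
    # log2(threshold) bits to specify the threshold) that outputs a string whose shortest
    # description exceeds `threshold` bits. Choosing threshold much larger than this
    # program's own length gives a contradiction, so no computable kolmogorov can exist.
    # first_complex_string(10**6)   # would exhibit the contradiction, if it could run
    ```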

  89. Scott Says:

    Craig #86: There are several ways to test a quantum computer that’s too large to directly simulate classically—depending on what kind of QC it is, what kind of access you have to it, and what kind of guarantee you want.

    (1) Most obviously, if the QC is large enough to run Shor’s algorithm, then you just ask it to factor a huge random number and check the result! Unfortunately, that probably won’t be practical until we get full scalability and fault-tolerance.

    (2) If the QC can simulate interesting quantum-mechanical systems, then you can test it by comparing the output of the simulation directly against experimental data.

    (3) In the very near term—if you have (say) 100 or 200 qubits, but not 1000 qubits, and you want to continue to do the sampling-based quantum supremacy experiments you were doing with 50 qubits—you can vary your quantum circuit, calibrating its error rate on circuits that are easier to simulate classically, and then extrapolating to the ones that you can’t directly simulate. Here you’re making an assumption of "no conspiracy theories" (i.e., that the error rate won’t magically depend on whether this is one of the quantum circuits that happened to have an efficient classical simulation). A toy numerical sketch of this kind of benchmarking appears at the end of this comment.

    (4) Today, there are sophisticated protocols known for forcing a quantum computer to prove its answer to a classical skeptic, as long as the skeptic can submit challenges to the QC interactively. Some of these protocols—those of Aharonov-BenOr-Eban and Broadbent-Fitzsimons-Kashefi—require exchanging single, unentangled qubits with the QC, for instance using a fiber-optic cable. Another—that of Reichardt-Unger-Vazirani—requires two QCs that are entangled but unable to communicate. Finally, there’s the breakthrough protocol of Urmila Mahadev from a year and a half ago, which overcomes both of those issues, but requires a QC that’s powerful enough to implement lattice-based encryption (and depends on cryptographic assumptions).
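
    To give a flavor of what the benchmarking in (3) looks like, here’s a toy sketch of linear cross-entropy benchmarking together with the kind of extrapolation I mean. To be clear, this is a simplification for illustration, not the exact procedure that Google or anyone else uses, and the numbers at the end are made up:

    ```python
    import numpy as np

    def linear_xeb_fidelity(samples, ideal_probs, n_qubits):
        """
        Simplified linear cross-entropy benchmarking (XEB) fidelity estimate.
        samples:     measured bitstrings, each encoded as an integer in [0, 2**n_qubits).
        ideal_probs: length-2**n_qubits array of the ideal circuit's output probabilities,
                     obtained by classical simulation (feasible while n_qubits is small).
        Returns roughly 1 for an ideal device, roughly 0 for fully depolarized output.
        """
        dim = 2 ** n_qubits
        return dim * np.mean([ideal_probs[x] for x in samples]) - 1

    def extrapolated_fidelity(error_per_gate, n_gates):
        """
        Crude 'no-conspiracy' extrapolation: calibrate an effective per-gate error rate on
        circuits you CAN simulate, then predict the fidelity of circuits you can't simulate
        as a simple product over their gates.
        """
        return (1 - error_per_gate) ** n_gates

    # Made-up illustrative numbers: 0.3% effective error per gate, 430 gates.
    print(extrapolated_fidelity(0.003, 430))   # about 0.27
    ```

    The "no conspiracy theories" assumption is then exactly the claim that the calibrated per-gate error rate carries over unchanged to the circuits that are too hard to simulate.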

  90. Adamt Says:

    Financial Times is reporting that Google says it has achieved QC: https://www.ft.com/content/b9bb4e54-dbc1-11e9-8f9b-77216ebe1f17

  91. Scott P. Says:

    1) No need to bring in fancy theories to see what’s wrong with free will:
    Any event depends on previous events; if it doesn’t, it’s what QM refers to as pure randomness.

    On the other hand, it’s possible for a given system to be both deterministic and random, at different levels (when a radionuclide decays is apparently random, yet the time for half of 1 × 10^50 atoms to decay is consistent between any two samples to a very high precision). So I reject the notion that determinism (or indeed the presence of random events) necessarily prohibits free will.

  92. Craig Says:

    Scott 89,

    A skeptic like myself would not be convinced by 2 and 3. I don’t know anything about 4. But if they are successful with 50 to 60 qubits, that is much farther than I expected them to get. We shall see.

  93. Craig Says:

    Scott,

    I agree with you that this is the most exciting time to be alive for someone who is interested in science, since the question of whether quantum supremacy is possible is one of the most important scientific questions of all time. You may be a quantum supremacist and I may be a member of Antiqua, but I think we can agree on that.

  94. Shtetl-Optimized » Blog Archive » Scott’s Supreme Quantum Supremacy FAQ! Says:

    […] resist dropping some hints here and there (did you catch any?)—for example, in my recent Bernays Lectures in Zürich, a lecture series whose entire structure built up to the brink of this […]

  95. Quantum Computing Community Has Been Buzzing About Quantum Supremacy – News for ThoughtPeople Says:

    […] Aaronson had three lectures on computability and quantum computers. […]